Recordings of the human brain suggest that concepts are represented by sparse sets of neurons that fire together when the concept is activated: such sets are called neuronal assemblies. Neuroscientists have identified local learning rules that adjust synaptic weights, but to our knowledge there is no mathematical proof that such local rules enable learning, nor that they create neuronal assemblies. To this end, we propose a spiking neural network named CHANI (Correlation-based Hawkes Aggregation of Neurons with bio-Inspiration), in which neuronal activity is modeled by Hawkes processes. Synaptic weights are updated by an expert aggregation algorithm, which provides a simple and local learning rule. We prove that our network can learn, on average and asymptotically. Moreover, we show that it automatically produces neuronal assemblies, in the sense that the network can encode several classes and that the same neuron in the intermediate layers may be activated by more than one class.
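
The abstract mentions that synaptic weights are updated by an expert aggregation algorithm acting as a local learning rule. The following is a minimal, purely illustrative sketch (not the CHANI algorithm itself): two presynaptic neurons are treated as "experts" for one postsynaptic neuron, and an exponentially-weighted-average update, a standard expert-aggregation rule, reinforces the presynaptic neuron whose spikes correlate with the postsynaptic spikes. All parameters (rates, learning rate `eta`) are assumed for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sketch: two presynaptic "expert" neurons feeding one postsynaptic neuron.
# Presynaptic neuron 0 is correlated with the postsynaptic target; neuron 1 is not.
# Weights follow an exponentially-weighted-average (EWA) expert-aggregation rule,
# a stand-in for the paper's actual update (all parameters are hypothetical).
T, dt = 200.0, 0.1                   # time horizon and bin width (assumed)
steps = int(T / dt)
eta = 0.5                            # EWA learning rate (assumed)
w = np.array([0.5, 0.5])             # synaptic weights, kept on the simplex
cum_gain = np.zeros(2)               # cumulative "gain" credited to each expert

for _ in range(steps):
    target = rng.random() < 0.05     # postsynaptic spike in this bin?
    pre = np.array([
        rng.random() < (0.8 if target else 0.02),  # correlated presynaptic neuron
        rng.random() < 0.05,                       # independent presynaptic neuron
    ], dtype=float)
    # Local rule: credit a presynaptic neuron when it fires alongside the target.
    if target:
        cum_gain += pre
        w = np.exp(eta * cum_gain)
        w /= w.sum()                 # renormalize: weights stay a distribution

print(np.round(w, 3))                # weight of the correlated neuron dominates
```

The update is local in the sense the abstract requires: each weight changes using only the spiking activity of its own pre- and postsynaptic neurons, and the correlated input ends up dominating the weight distribution.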