Dissecting associative memory mechanisms

Vivien Cabannes, Meta

Learning arguably involves the discovery and memorization of abstract rules. The aim of this talk is to present associative memory mechanisms that arise in the presence of discrete data. We focus on a model that accumulates outer products of token embeddings into high-dimensional matrices. On the statistical side, we derive precise scaling laws that recover and generalize the classical scalings of Hopfield networks. On the optimization side, we reduce the dynamics of cross-entropy minimization to a system of interacting particles. In the overparameterized regime, we show how the logarithmic growth of the "margin" enables maximum storage of token associations independently of their frequencies in the training data, although the dynamics may encounter benign loss spikes due to memory competition and a poor data curriculum. In the underparameterized regime, we illustrate the risk of catastrophic forgetting due to limited capacity. These findings show how our simple convex model replicates many characteristic behaviors of modern neural networks. This is not too surprising, since many researchers envision transformers as large memory machines. To strengthen this intuition, we explain how the serialization of three memory modules can build an induction head, a central concept in the mechanistic interpretability literature that serves as a gate enabling the formation of logical circuits in transformers.
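To make the core object concrete, here is a minimal sketch of an outer-product associative memory of the kind described above. It is illustrative only: the random Gaussian embeddings E and U, the target map f, the dimensions, and the argmax retrieval rule are assumptions for the demo, not the exact setup studied in the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes: vocabulary N, embedding dimension d.
N, d = 100, 256

# Random input embeddings e_x (rows of E) and output embeddings u_y (rows of U).
E = rng.standard_normal((N, d)) / np.sqrt(d)
U = rng.standard_normal((N, d)) / np.sqrt(d)

# Assumed target map f: each input token is associated with one output token.
f = rng.permutation(N)

# Associative memory: accumulate the outer products u_{f(x)} e_x^T into one matrix W.
W = np.zeros((d, d))
for x in range(N):
    W += np.outer(U[f[x]], E[x])

def retrieve(x: int) -> int:
    """Score every candidate output y by u_y^T W e_x and return the best one."""
    scores = U @ (W @ E[x])
    return int(np.argmax(scores))

# With d large relative to N, random embeddings are nearly orthogonal and
# most associations are recovered exactly.
accuracy = np.mean([retrieve(x) == f[x] for x in range(N)])
print(f"fraction of associations recovered: {accuracy:.2f}")
```

The capacity questions in the talk (scaling laws, margin growth, forgetting) can be thought of as asking how this recovery rate behaves as N grows relative to d and as associations are weighted by their training frequencies.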

Area: CS61 - Deep learning theory (Alberto Bietti)

Keywords: associative memory mechanisms