Memory Retrieval and Consolidation in Large Language Models through Function Tokens
Abstract
Function tokens in large language models activate the most predictive features from context during inference and drive memory consolidation during pre-training, as the model learns to predict the content tokens that follow them.
The remarkable success of large language models (LLMs) stems from their ability to consolidate vast amounts of knowledge into memory during pre-training and to retrieve it from memory during inference, enabling advanced capabilities such as knowledge memorization, instruction following, and reasoning. However, the mechanisms of memory retrieval and consolidation in LLMs remain poorly understood. In this paper, we propose the function token hypothesis to explain how LLMs work: during inference, function tokens activate the most predictive features from the context and govern next-token prediction (memory retrieval); during pre-training, predicting the next tokens (usually content tokens) that follow function tokens increases the number of features the LLM learns and updates its parameters (memory consolidation). Function tokens here roughly correspond to function words in linguistics, including punctuation marks, articles, prepositions, and conjunctions, in contrast to content tokens. We provide extensive experimental evidence supporting this hypothesis. Using bipartite graph analysis, we show that a small number of function tokens activate the majority of features. Case studies further reveal how function tokens activate the most predictive features from the context to direct next-token prediction. We also find that during pre-training, the training loss is dominated by predicting the content tokens that follow function tokens, which forces function tokens to select the most predictive features from the context.
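As a rough illustration of the bipartite-graph analysis described above (not the authors' code), the sketch below assumes per-token sparse feature activations (e.g., from a sparse autoencoder) and counts how many distinct features are activated by function tokens versus content tokens. The token list, activation threshold, and synthetic activation matrix are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy vocabulary split: function tokens (punctuation, articles, prepositions,
# conjunctions) versus content tokens -- purely illustrative.
FUNCTION_TOKENS = {",", ".", "the", "a", "of", "in", "on", "and"}
tokens = ["the", "cat", "sat", "on", "the", "mat", ",", "and", "dogs", "barked", "."]

n_features = 64   # number of sparse features (e.g. from an SAE), assumed
threshold = 0.8   # activation threshold for drawing a token -> feature edge, assumed

# Synthetic activations: one row per token occurrence, one column per feature.
activations = rng.random((len(tokens), n_features))

# Bipartite graph: map each token type to the set of features it activates.
edges: dict[str, set[int]] = {}
for tok, row in zip(tokens, activations):
    edges.setdefault(tok, set()).update(np.flatnonzero(row > threshold).tolist())

# Compare feature coverage of the two token classes.
func_feats = set().union(*(f for t, f in edges.items() if t in FUNCTION_TOKENS))
content_feats = set().union(*(f for t, f in edges.items() if t not in FUNCTION_TOKENS))

print(f"distinct features activated by function tokens: {len(func_feats)}/{n_features}")
print(f"distinct features activated by content tokens:  {len(content_feats)}/{n_features}")
```

On real model activations, the paper's claim corresponds to the function-token side of this graph covering the majority of features despite function tokens making up a small fraction of the vocabulary.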
Community
The following papers, similar to this one, were recommended by the Semantic Scholar API (via Librarian Bot):
- Evolution of Concepts in Language Model Pre-Training (2025)
- Crosscoding Through Time: Tracking Emergence & Consolidation Of Linguistic Representations Throughout LLM Pretraining (2025)
- Pretraining with hierarchical memories: separating long-tail and common knowledge (2025)
- Expanding Computation Spaces of LLMs at Inference Time (2025)
- A circuit for predicting hierarchical structure in-context in Large Language Models (2025)
- Disentangling Recall and Reasoning in Transformer Models through Layer-wise Attention and Activation Analysis (2025)
- Language Modeling with Learned Meta-Tokens (2025)