Pretraining with hierarchical memories: separating long-tail and common knowledge
Abstract
A memory-augmented architecture with hierarchical parametric memory banks improves language model performance while reducing the parameter count and compute required at inference time.
The impressive performance gains of modern language models currently rely on scaling parameters: larger models store more world knowledge and reason better. Yet compressing all world knowledge into parameters is unnecessary, as only a fraction is used per prompt, and impractical for edge devices with limited inference-time memory and compute. We address this shortcoming with a memory-augmented architecture and a pretraining strategy aligned with existing hardware paradigms. We introduce small language models that access large hierarchical parametric memory banks encoding world knowledge. During pretraining and inference, we fetch a small, context-dependent memory block and add it to the model. Our pretraining learns to store long-tail world knowledge in the memory parameters, while the small language model acts as an anchor capturing common knowledge and general reasoning abilities. Through trillion-token-scale experiments, we show significant gains: a 160M-parameter model augmented with an 18M-parameter memory fetched from a 4.6B-parameter memory bank obtains performance comparable to a regular model with more than 2x the parameters. Through extensive experiments, we study the optimal type and size of parametric memories in transformers, scaling them to over 21B parameters. We find that our proposed hierarchical feed-forward memories work robustly across transformer architectures, whether added during pretraining or post-hoc.
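To make the mechanism concrete, below is a minimal sketch of a feed-forward layer augmented with a context-dependent memory block fetched from a larger parametric bank. This is an illustration of the general idea under simplifying assumptions, not the authors' implementation: the flat nearest-centroid routing stands in for the paper's hierarchical lookup, and all names (`MemoryBank`, `AugmentedFFN`, `d_mem`, `n_blocks`) are invented for the example.

```python
# Minimal sketch (not the paper's code) of a small model whose feed-forward layer
# is extended by a context-dependent memory block fetched from a parametric bank.
# The routing scheme below is a flat simplification of the hierarchical fetch.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MemoryBank(nn.Module):
    """A bank of feed-forward memory blocks; one block is fetched per context."""

    def __init__(self, n_blocks: int, d_model: int, d_mem: int):
        super().__init__()
        # Each block contributes extra FFN rows: keys (d_model -> d_mem) and values (d_mem -> d_model).
        self.keys = nn.Parameter(torch.randn(n_blocks, d_mem, d_model) * 0.02)
        self.values = nn.Parameter(torch.randn(n_blocks, d_mem, d_model) * 0.02)
        # Simple routing table used to pick a block from a context embedding (assumed, not from the paper).
        self.block_centroids = nn.Parameter(torch.randn(n_blocks, d_model) * 0.02)

    def fetch(self, context: torch.Tensor):
        """Select the block whose centroid is most similar to the context embedding."""
        scores = context @ self.block_centroids.T          # (batch, n_blocks)
        idx = scores.argmax(dim=-1)                        # (batch,)
        return self.keys[idx], self.values[idx]            # (batch, d_mem, d_model) each


class AugmentedFFN(nn.Module):
    """Base feed-forward layer whose hidden width is extended by fetched memory rows."""

    def __init__(self, d_model: int, d_ff: int):
        super().__init__()
        self.w_in = nn.Linear(d_model, d_ff)
        self.w_out = nn.Linear(d_ff, d_model)

    def forward(self, x, mem_keys, mem_values):
        # Base path: the small anchor model's own FFN (common knowledge / reasoning).
        base = self.w_out(F.gelu(self.w_in(x)))
        # Memory path: fetched key/value rows act as additional FFN neurons (long-tail knowledge).
        h = F.gelu(torch.einsum("btd,bmd->btm", x, mem_keys))
        mem = torch.einsum("btm,bmd->btd", h, mem_values)
        return base + mem


if __name__ == "__main__":
    d_model, d_ff, d_mem, n_blocks = 64, 256, 32, 128
    bank = MemoryBank(n_blocks, d_model, d_mem)
    ffn = AugmentedFFN(d_model, d_ff)
    x = torch.randn(2, 10, d_model)                 # (batch, seq, d_model)
    ctx = x.mean(dim=1)                             # crude context embedding for routing
    keys, values = bank.fetch(ctx)
    y = ffn(x, keys, values)
    print(y.shape)                                  # torch.Size([2, 10, 64])
```

In this reading, the fetched key/value rows behave like extra feed-forward neurons, so only the small anchor model plus one memory block needs to reside in device memory at inference time, while the full bank can live in cheaper storage.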
Community
Pretraining with Hierarchical Memories: separating knowledge and reasoning for On-Device LLM deployment
This is an automated message from the Librarian Bot. The following papers, recommended by the Semantic Scholar API, are similar to this paper:
- Memory Decoder: A Pretrained, Plug-and-Play Memory for Large Language Models (2025)
- TokMem: Tokenized Procedural Memory for Large Language Models (2025)
- Learning Facts at Scale with Active Reading (2025)
- Optimal Sparsity of Mixture-of-Experts Language Models for Reasoning Tasks (2025)
- UltraMemV2: Memory Networks Scaling to 120B Parameters with Superior Long-Context Learning (2025)
- Retrieval Capabilities of Large Language Models Scale with Pretraining FLOPs (2025)
- StateX: Enhancing RNN Recall via Post-training State Expansion (2025)