Papers
- XQuant: Breaking the Memory Wall for LLM Inference with KV Cache Rematerialization (Paper 2508.10395)
- Speed Always Wins: A Survey on Efficient Architectures for Large Language Models (Paper 2508.09834)
- Causal Attention with Lookahead Keys (Paper 2509.07301)
Tanmay Gangwani (tgangs)
AI & ML interests: None yet
Recent Activity
- Updated a collection: LLM general (6 days ago)
- Updated a collection: Explainability (8 days ago)
- Updated a collection: Agents (17 days ago)
Organizations: None yet