SLA: Beyond Sparsity in Diffusion Transformers via Fine-Tunable Sparse-Linear Attention (arXiv:2509.24006)
Efficient Hyperparameter Tuning via Trajectory Invariance Principle (arXiv:2509.25049)
SageAttention3: Microscaling FP4 Attention for Inference and An Exploration of 8-Bit Training (arXiv:2505.11594)