Quantum Variational Activation Functions Empower Kolmogorov-Arnold Networks
Abstract
Quantum variational activation functions (QVAFs) and quantum-inspired Kolmogorov-Arnold networks (QKANs) improve parameter efficiency and expressivity in quantum machine learning, offering scalability and strong performance across function regression, image classification, and generative language modeling.
Variational quantum circuits (VQCs) are central to quantum machine learning, while recent progress in Kolmogorov-Arnold networks (KANs) highlights the power of learnable activation functions. We unify these directions by introducing quantum variational activation functions (QVAFs), realized through single-qubit data re-uploading circuits called DatA Re-Uploading ActivatioNs (DARUANs). We show that a DARUAN with trainable weights in its data pre-processing possesses a frequency spectrum that grows exponentially with the number of data repetitions, enabling an exponential reduction in parameter size compared with Fourier-based activations without loss of expressivity. Embedding DARUANs into KANs yields quantum-inspired KANs (QKANs), which retain the interpretability of KANs while improving their parameter efficiency, expressivity, and generalization. We further introduce two novel techniques to enhance scalability, feasibility, and computational efficiency: layer extension, and hybrid QKANs (HQKANs) as drop-in replacements for the multi-layer perceptrons (MLPs) in the feed-forward networks of large-scale models. We provide theoretical analysis and extensive experiments on function regression, image classification, and autoregressive generative language modeling, demonstrating the efficiency and scalability of QKANs. DARUANs and QKANs offer a promising direction for advancing quantum machine learning on both noisy intermediate-scale quantum (NISQ) hardware and classical quantum simulators.
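To make the construction concrete, here is a minimal NumPy sketch of a single-qubit data re-uploading activation in the spirit of DARUAN. The gate layout (alternating RY mixing and RZ encoding of the pre-processed input), the Pauli-Z readout, and all parameter names are illustrative assumptions, not the paper's exact circuit.

```python
# Minimal sketch of a single-qubit data re-uploading activation (assumed layout).
import numpy as np

def ry(theta):
    c, s = np.cos(theta / 2), np.sin(theta / 2)
    return np.array([[c, -s], [s, c]], dtype=complex)

def rz(phi):
    return np.array([[np.exp(-1j * phi / 2), 0.0],
                     [0.0, np.exp(1j * phi / 2)]])

def daruan(x, w, b, theta):
    """Re-uploading activation f(x) = <psi(x)| Z |psi(x)>.

    Each repetition l mixes the state with a trainable RY(theta[l]) and then
    re-encodes the pre-processed input w[l] * x + b[l] with an RZ gate.
    """
    psi = np.array([1.0, 0.0], dtype=complex)      # start in |0>
    for wl, bl, tl in zip(w, b, theta):
        psi = rz(wl * x + bl) @ (ry(tl) @ psi)     # mix, then re-encode x
    Z = np.diag([1.0, -1.0])
    return float(np.real(psi.conj() @ (Z @ psi)))

rng = np.random.default_rng(0)
L = 3                                              # number of data repetitions
w = 2.0 ** np.arange(L)                            # trainable in practice
b, theta = rng.normal(size=L), rng.normal(size=L)
print([round(daruan(x, w, b, theta), 4) for x in np.linspace(-np.pi, np.pi, 5)])
```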
Community
Quantum-inspired Kolmogorov–Arnold Networks (QKANs): Scalable, Efficient, and Expressive Neural Architectures
We present Quantum-inspired Kolmogorov–Arnold Networks (QKANs), a new neural architecture that unifies quantum variational circuits with learnable activation functions to deliver faster, more parameter-efficient, and highly expressive models. At the core of QKAN are Quantum Variational Activation Functions (QVAFs) implemented via single-qubit DatA Re-Uploading ActivatioNs (DARUANs). DARUANs exhibit an exponentially growing frequency spectrum with repeated data uploads, enabling dramatic reductions in parameter count compared to classical Fourier-based activations while preserving expressive power.
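The frequency-spectrum claim can be probed numerically. The following rough check (an assumed setup, not the paper's experiment) samples such a single-qubit activation on a periodic grid and counts its active Fourier modes as the number of repetitions L grows; with a geometric ladder of pre-processing weights, the count generically grows exponentially in L rather than linearly.

```python
# Rough numerical probe of the frequency spectrum of a re-uploading activation.
import numpy as np

def activation(x, w, theta):
    psi = np.array([1.0, 0.0], dtype=complex)
    for wl, tl in zip(w, theta):
        ry = np.array([[np.cos(tl / 2), -np.sin(tl / 2)],
                       [np.sin(tl / 2),  np.cos(tl / 2)]])
        rz = np.diag([np.exp(-1j * wl * x / 2), np.exp(1j * wl * x / 2)])
        psi = rz @ (ry @ psi)
    return np.real(psi.conj() @ (np.diag([1.0, -1.0]) @ psi))

rng = np.random.default_rng(1)
N = 1024
xs = np.linspace(0, 2 * np.pi, N, endpoint=False)
for L in (1, 2, 3, 4, 5):
    w = 3.0 ** np.arange(L)                # geometric ladder of frequencies
    theta = rng.normal(size=L)
    spec = np.abs(np.fft.rfft([activation(x, w, theta) for x in xs])) / N
    print(L, int(np.sum(spec > 1e-6)))     # number of active Fourier modes
```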
By embedding DARUANs into Kolmogorov–Arnold Networks (KANs), QKANs retain the interpretability of classical KANs while achieving superior generalization, efficiency, and scalability. To further enhance large-scale usability, we introduce Hybrid QKANs (HQKANs) as drop-in replacements for MLPs in deep architectures, including GPT-style models, reducing computational cost and memory usage without sacrificing performance.
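A hedged PyTorch sketch of how a QKAN-style layer might look: each input-output edge (i, j) carries its own learnable single-qubit re-uploading activation phi_ij, and the layer sums edge outputs as in the KAN construction, y_j = sum_i phi_ij(x_i). The class name, shapes, and initialization are assumptions for illustration; the paper's reference implementation may differ.

```python
# Illustrative QKAN-style layer with one simulated single-qubit activation per edge.
import torch
import torch.nn as nn

class QKANLayer(nn.Module):
    def __init__(self, d_in: int, d_out: int, reps: int = 3):
        super().__init__()
        # Trainable pre-processing (frequencies, phases) and mixing angles,
        # one independent set per edge (i, j) and per repetition.
        self.w = nn.Parameter(torch.randn(d_out, d_in, reps))
        self.b = nn.Parameter(torch.zeros(d_out, d_in, reps))
        self.theta = nn.Parameter(torch.randn(d_out, d_in, reps))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, d_in). Track the two amplitudes (a, c) of each edge's qubit.
        B, _ = x.shape
        a = torch.ones(B, *self.w.shape[:2], dtype=torch.cfloat, device=x.device)
        c = torch.zeros_like(a)                                  # state |0>
        for l in range(self.w.shape[-1]):
            t = self.theta[:, :, l] / 2
            a, c = (torch.cos(t) * a - torch.sin(t) * c,
                    torch.sin(t) * a + torch.cos(t) * c)         # RY mixing
            phi = (self.w[:, :, l] * x[:, None, :] + self.b[:, :, l]) / 2
            a = a * torch.exp(-1j * phi)                         # RZ encoding
            c = c * torch.exp(1j * phi)
        edge = a.abs() ** 2 - c.abs() ** 2                       # <Z> per edge
        return edge.sum(dim=-1)                                  # y_j = sum_i phi_ij(x_i)
```

Stacking two such layers in place of an MLP block sketches the HQKAN drop-in idea, e.g. `nn.Sequential(QKANLayer(512, 128), QKANLayer(128, 512))`; the widths here are placeholders, not the paper's configuration.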
We provide rigorous theoretical analysis and demonstrate the effectiveness of QKANs across regression, image classification, and autoregressive generative modeling tasks. Our fully open-source code and preprint make it easy for the community to experiment with quantum-inspired neural architectures.
The following papers, similar to this one, were recommended by the Semantic Scholar API (via the automated Librarian Bot):
- Quantum Graph Attention Network: A Novel Quantum Multi-Head Attention Mechanism for Graph Learning (2025)
- Bridging Classical and Quantum Computing for Next-Generation Language Models (2025)
- TensoMeta-VQC: A Tensor-Train-Guided Meta-Learning Framework for Robust and Scalable Variational Quantum Computing (2025)
- Embedding-Aware Quantum-Classical SVMs for Scalable Quantum Machine Learning (2025)
- Vectorized Attention with Learnable Encoding for Quantum Transformer (2025)
- Quantum Visual Fields with Neural Amplitude Encoding (2025)
- Quantum Relational Knowledge Distillation (2025)