# Kairos-50M: Adaptive Time Series Foundation Model
This model is presented in the paper [Kairos: Towards Adaptive and Generalizable Time Series Foundation Models](https://arxiv.org/abs/2509.25826).
## Model Description
Kairos-50M is a 50-million parameter time series foundation model designed for zero-shot forecasting across diverse domains. It features adaptive tokenization and instance-specific positional encodings to handle heterogeneous time series data with varying information density.
## Key Features

- **Mixture-of-Size Dynamic Patching (MoS-DP)**: adaptively selects the tokenization granularity based on local information density (see the sketch after this list)
- **Instance-adaptive Rotary Position Embedding (IARoPE)**: tailors positional encodings to the unique temporal characteristics of each series
- **Zero-shot Forecasting**: strong generalization across domains without fine-tuning
- **Efficient**: superior performance with fewer parameters
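To make the MoS-DP idea concrete, here is a minimal, self-contained toy sketch of dynamic patch-size selection: a router scores each region of the series by a crude information-density proxy and picks a patch size from a small candidate set. All names, shapes, and the routing heuristic here are hypothetical illustrations of the concept, not Kairos's actual implementation; see the paper for the real mechanism.

```python
import torch
import torch.nn as nn

class ToyDynamicPatcher(nn.Module):
    """Toy illustration of mixture-of-size patching (NOT the Kairos code)."""

    def __init__(self, patch_sizes=(8, 16, 32), region_len=32):
        super().__init__()
        self.patch_sizes = patch_sizes
        self.region_len = region_len
        # hypothetical router: maps two region statistics to a size choice
        self.router = nn.Linear(2, len(patch_sizes))

    def forward(self, x):
        # x: 1-D series of shape (context_length,)
        choices = []
        for start in range(0, x.numel() - self.region_len + 1, self.region_len):
            region = x[start : start + self.region_len]
            # crude "information density" proxy: mean and std of the region
            stats = torch.stack([region.mean(), region.std()])
            idx = int(self.router(stats).argmax())
            choices.append(self.patch_sizes[idx])
        return choices  # one chosen patch size per region

patcher = ToyDynamicPatcher()
print(patcher(torch.randn(2048))[:8])  # e.g. [16, 8, 32, 8, ...]
```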
## Model Specifications
- Parameters: ~50 million
- Training Data: PreSTS corpus (300+ billion time points)
- Architecture: Transformer-based with adaptive components
## Model Family

- [Kairos-50M](https://huggingface.co/mldi-lab/Kairos_50m) (this model)
- [Kairos-23M](https://huggingface.co/mldi-lab/Kairos_23m)
- [Kairos-10M](https://huggingface.co/mldi-lab/Kairos_10m)
## Usage

```python
import torch
from tsfm.model.kairos import AutoModel

# load the pretrained model from the Hugging Face Hub
model = AutoModel.from_pretrained(
    "mldi-lab/Kairos_50m", trust_remote_code=True
)

# forecasting configuration
batch_size, context_length, prediction_length = 1, 2048, 96
seqs = torch.randn(batch_size, context_length)

# generate the forecast
forecast = model(
    past_target=seqs.float(),
    prediction_length=prediction_length,
    generation=True,
    preserve_positivity=True,
    average_with_flipped_input=True,
)

# extract the prediction results
forecast = forecast["prediction_outputs"]
print(forecast.shape)
```
For detailed usage examples, please refer to the main repository.
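As a minimal follow-on sketch, batched inference with a simple MAE check might look like the code below. It assumes `model` is already loaded as above, that the model accepts a `(batch, context_length)` tensor (as the example suggests), and that `prediction_outputs` is a point forecast of shape `(batch, prediction_length)`; verify the actual output shape with `forecast.shape` on your installed version before relying on this.

```python
import torch

# hypothetical evaluation sketch; reuses `model` from the example above
batch_size, context_length, prediction_length = 8, 2048, 96
series = torch.randn(batch_size, context_length + prediction_length)
context, target = series[:, :context_length], series[:, context_length:]

out = model(
    past_target=context.float(),
    prediction_length=prediction_length,
    generation=True,
)
# assumed shape: (batch_size, prediction_length); check before comparing
pred = out["prediction_outputs"]

# mean absolute error over the held-out horizon
mae = (pred - target).abs().mean()
print(f"MAE over {batch_size} series: {mae.item():.4f}")
```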
## Citation

If you use this model, please cite:

```bibtex
@article{feng2025kairos,
  title={Kairos: Towards Adaptive and Generalizable Time Series Foundation Models},
  author={Feng, Kun and Lan, Shaocheng and Fang, Yuchen and He, Wenchao and Ma, Lintao and Lu, Xingyu and Ren, Kan},
  journal={arXiv preprint arXiv:2509.25826},
  year={2025}
}
```
## License
Apache License 2.0