# Toto-2.0-2.5B-FT
This is a benchmarking checkpoint, not a general-purpose model. Toto-2.0-2.5B-FT is the Toto 2.0 2.5B base model finetuned on a mix that includes the GIFT-Eval training split, forming our submission that ranks #2 on the GIFT-Eval leaderboard. It is released for reproducibility only.
For real workloads, please use the base Toto 2.0 collection. The base checkpoints are pretrained without any public data, generalize to every benchmark we have evaluated, and are what we recommend deploying.
## What this is
A single Toto 2.0 2.5B base checkpoint finetuned on a mix that includes the GIFT-Eval training split, used to probe how far the base model can be pushed on a single in-distribution benchmark.
## Finetuning recipe
Starting from a fully-decayed Toto-2.0-2.5B base checkpoint, we finetuned for 10,000 steps on a mix designed to expose the model to in-distribution structure without overfitting to GIFT-Eval alone:
| Source | Share |
|---|---|
| GIFT-Eval Pretrain | 45% |
| Datadog 5-minute+ observability metrics | 25% |
| GIFT-Eval train split | 15% |
| Synthetic (TempoPFN) | 10% |
| Datadog 10s observability metrics | 2.5% |
| Datadog 60s observability metrics | 2.5% |
The public portion (45% GIFT-Eval Pretrain) is drawn from the Toto 1.0 mix of GIFT-Eval Pretrain and the Chronos pretraining corpus, and is non-leaking with respect to the GIFT-Eval test split.
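The mix shares above can be read as per-example sampling probabilities. The following is an illustrative sketch only: the source names and weights come from the table, but the sampler itself (function names, use of `random.choices`, per-example rather than per-batch sampling) is an assumption, not Datadog's actual data loader.

```python
import random

# Finetuning-mix shares from the table above (they sum to 1.0).
# The dictionary keys are hypothetical identifiers for the six sources.
MIX = {
    "gift_eval_pretrain": 0.45,
    "datadog_5min_plus_metrics": 0.25,
    "gift_eval_train": 0.15,
    "synthetic_tempopfn": 0.10,
    "datadog_10s_metrics": 0.025,
    "datadog_60s_metrics": 0.025,
}

def sample_source(rng: random.Random) -> str:
    """Draw one data source per training example according to the mix shares."""
    names = list(MIX)
    return rng.choices(names, weights=[MIX[n] for n in names], k=1)[0]

# Sanity check: over many draws, the empirical counts track the shares.
rng = random.Random(0)
counts = {name: 0 for name in MIX}
for _ in range(10_000):
    counts[sample_source(rng)] += 1
```

Over 10,000 draws, GIFT-Eval Pretrain dominates the counts, matching its 45% share.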
The NorMuon and AdamW learning rates were both reduced by roughly an order of magnitude relative to pretraining (to 0.05 and 0.001, respectively). All other architecture and inference settings match the base 2.5B model.
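As a small arithmetic check on that paragraph: the finetuning learning rates (0.05 and 0.001) are stated above, while the implied pretraining values below are back-calculated under the "roughly an order of magnitude" statement and are therefore an assumption, not reported numbers.

```python
# Finetuning learning rates stated in the text.
FINETUNE_LR = {"normuon": 0.05, "adamw": 0.001}

# "Roughly an order of magnitude" drop from pretraining (assumed factor of 10).
LR_DROP_FACTOR = 10

# Back-calculated (assumed) pretraining learning rates: ~0.5 and ~0.01.
implied_pretrain_lr = {opt: lr * LR_DROP_FACTOR for opt, lr in FINETUNE_LR.items()}
```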
## Additional Resources
- Technical Report (coming soon)
- Blog Post
- Base model: Toto-2.0-2.5B, the unfinetuned checkpoint we recommend deploying
- Toto 2.0 Collection: all five base sizes (4m to 2.5B)
- Toto 2.0 Family-and-Friends: the companion FFORMA-ensemble submission, also benchmark-only
- GIFT-Eval benchmark: the leaderboard hosting this submission
- GitHub Repository
## Citation
(citation coming soon)
## Evaluation results

| Metric | Benchmark | Value |
|---|---|---|
| CRPS | GIFT-Eval Time Series Forecasting Leaderboard | 0.463 |
| MASE | GIFT-Eval Time Series Forecasting Leaderboard | 0.679 |