Toto-2.0-2.5B-FT

This is a benchmarking checkpoint, not a general-purpose model. Toto-2.0-2.5B-FT is the Toto 2.0 2.5B base model finetuned on the GIFT-Eval training split for our #2-on-GIFT-Eval-leaderboard submission. It is released for reproducibility only.

For real workloads, please use the base Toto 2.0 collection. The base checkpoints are pretrained without any public data, generalize to every benchmark we have evaluated, and are what we recommend deploying.
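
As a rough illustration of pointing at the base family rather than this checkpoint, here is a minimal loading sketch; the repo ID, the `AutoModel` entry point, and the `trust_remote_code` flag are assumptions based on common Hub conventions, so defer to the base Toto 2.0 model cards for the authoritative usage.

```python
# Minimal sketch only: load a base Toto 2.0 checkpoint for inference.
# "Datadog/Toto-2.0-2.5B" is a hypothetical repo ID, and AutoModel with
# trust_remote_code is an assumed loading path; check the base Toto 2.0
# model cards for the exact, supported API.
from transformers import AutoModel

model = AutoModel.from_pretrained(
    "Datadog/Toto-2.0-2.5B",
    trust_remote_code=True,
)
model.eval()
```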

✨ What this is

A single Toto 2.0 2.5B base checkpoint finetuned on a mix that includes the GIFT-Eval training split, used to probe how far the base model can be pushed on a single in-distribution benchmark.

(Figure: GIFT-Eval bar metrics, with Toto-2.0-2.5B-FT highlighted)
On the full GIFT-Eval leaderboard (foundation models + finetuned + ensemble + agentic), Toto-2.0-2.5B-FT places #2 on both CRPS rank and MASE rank, and #3 on raw CRPS / MASE, behind only the Toto 2.0 Family-and-Friends ensemble.

πŸ” Finetuning recipe

Starting from a fully-decayed Toto-2.0-2.5B base checkpoint, we finetuned for 10,000 steps on a mix designed to expose the model to in-distribution structure without overfitting to GIFT-Eval alone:

| Source | Share |
|---|---|
| GIFT-Eval Pretrain | 45% |
| Datadog 5-minute+ observability metrics | 25% |
| GIFT-Eval train split | 15% |
| Synthetic (TempoPFN) | 10% |
| Datadog 10s observability metrics | 2.5% |
| Datadog 60s observability metrics | 2.5% |

The public portion (45% GIFT-Eval Pretrain) is drawn from the Toto 1.0 mix of GIFT-Eval Pretrain and the Chronos pretraining corpus, and is non-leaking with respect to the GIFT-Eval test split.
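
To make the mixture concrete, here is an illustrative sampler that draws each training example's source according to the shares in the table above. It is a sketch of the sampling idea only, not the actual training pipeline, and the source names are informal labels.

```python
import random

# Finetuning mixture from the table above (weights sum to 1.0).
MIX = {
    "GIFT-Eval Pretrain": 0.45,
    "Datadog 5-minute+ observability metrics": 0.25,
    "GIFT-Eval train split": 0.15,
    "Synthetic (TempoPFN)": 0.10,
    "Datadog 10s observability metrics": 0.025,
    "Datadog 60s observability metrics": 0.025,
}

rng = random.Random(0)
names, weights = zip(*MIX.items())

def next_source() -> str:
    """Pick which source the next training example is drawn from."""
    return rng.choices(names, weights=weights, k=1)[0]

print([next_source() for _ in range(5)])
```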

NorMuon and AdamW learning rates were both dropped by roughly an order of magnitude from pretraining (to 0.05 and 0.001 respectively). All other architecture and inference settings match the base 2.5B model.
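
Sketched below is one plausible way to express that optimizer split, with 2-D (matrix) weights routed to the NorMuon-style optimizer at 0.05 and the remaining parameters to AdamW at 1e-3. The grouping rule is an assumption on our part and `NorMuon` is not a torch built-in, so treat this as an illustration rather than the actual training code.

```python
import torch

def finetune_param_groups(model: torch.nn.Module):
    """Illustrative split: matrix weights to a NorMuon-style optimizer,
    everything else to AdamW, using the finetuning learning rates above.
    The 2-D-vs-rest rule is an assumption, not the documented recipe."""
    matrix = [p for p in model.parameters() if p.ndim >= 2]
    rest = [p for p in model.parameters() if p.ndim < 2]
    return (
        {"optimizer": "NorMuon", "params": matrix, "lr": 0.05},  # hypothetical class name
        {"optimizer": "AdamW", "params": rest, "lr": 1e-3},      # maps to torch.optim.AdamW
    )
```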

🔗 Additional Resources

📖 Citation

(citation coming soon)