arXiv:2310.10688
A decoder-only foundation model for time-series forecasting
Published on Oct 14, 2023
Authors: Abhimanyu Das, Weihao Kong, Rajat Sen, Yichen Zhou
Abstract

AI-generated summary
A decoder-only foundation model pretrained on a large time-series corpus achieves zero-shot forecasting accuracy close to that of state-of-the-art supervised models across datasets with different history lengths, prediction horizons, and temporal granularities.

Motivated by recent advances in large language models for Natural Language Processing (NLP), we design a time-series foundation model for forecasting whose out-of-the-box zero-shot performance on a variety of public datasets comes close to the accuracy of state-of-the-art supervised forecasting models for each individual dataset. Our model is based on pretraining a patched-decoder style attention model on a large time-series corpus, and can work well across different forecasting history lengths, prediction lengths, and temporal granularities.
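To make the "patched-decoder" idea concrete, below is a minimal PyTorch sketch. It is an illustration under assumed hyperparameters, not the paper's actual implementation: the class name PatchedDecoderForecaster, the patch lengths, and all layer sizes are made up for the example. The history is split into fixed-length patches, each patch is embedded as one token, a causally masked transformer stack attends over past patches only, and the final token is decoded into a longer output patch of future values.

# Minimal sketch of a patched decoder-only forecaster.
# All names, patch lengths, and layer sizes are illustrative assumptions.
import torch
import torch.nn as nn

class PatchedDecoderForecaster(nn.Module):
    def __init__(self, input_patch_len=32, output_patch_len=128,
                 d_model=256, n_heads=4, n_layers=4):
        super().__init__()
        self.input_patch_len = input_patch_len
        # Each fixed-length patch of raw values is embedded as one token.
        self.embed = nn.Linear(input_patch_len, d_model)
        layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads,
            dim_feedforward=4 * d_model, batch_first=True)
        # An encoder stack plus a causal mask behaves as a decoder-only model.
        self.backbone = nn.TransformerEncoder(layer, num_layers=n_layers)
        # Each token is decoded into a longer patch of future values.
        self.head = nn.Linear(d_model, output_patch_len)

    def forward(self, series):
        # series: (batch, history_len); history_len divisible by input_patch_len.
        b, t = series.shape
        patches = series.view(b, t // self.input_patch_len, self.input_patch_len)
        tokens = self.embed(patches)                  # (b, n_patches, d_model)
        n = tokens.size(1)
        # Causal mask: each patch token may only attend to earlier patches.
        mask = torch.triu(torch.full((n, n), float("-inf")), diagonal=1)
        hidden = self.backbone(tokens, mask=mask)
        # The last token's output is the forecast for the next output patch.
        return self.head(hidden[:, -1])               # (b, output_patch_len)

model = PatchedDecoderForecaster()
history = torch.randn(2, 512)     # two example series with 512 past points each
forecast = model(history)         # shape (2, 128): the next 128 predicted values
print(forecast.shape)

Because each token stands for a whole patch, the model can ingest histories of varying lengths, and predicting a longer output patch per step is one way such a decoder can emit multi-step forecasts; the paper's actual architecture and training setup add details this sketch omits.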
Models citing this paper: 10
Datasets citing this paper: 0