✨ TL;DR
CAARL uses large language models to forecast co-evolving time series by converting temporal dependencies into narrative form, enabling interpretable chain-of-thought reasoning for predictions. The approach decomposes series into autoregressive segments and builds dependency graphs that LLMs can process as text, achieving competitive accuracy with enhanced transparency.
Forecasting co-evolving time series presents significant challenges due to intricate dependencies between multiple series and nonstationary dynamics. Traditional forecasting methods often lack interpretability, making it difficult to understand why predictions are made and how contextual factors drive changes across related series. This opacity limits trust and practical deployment in domains where understanding the reasoning behind a forecast matters as much as its accuracy.
CAARL (Context-Aware ARLLM) addresses this by transforming time series forecasting into a narrative reasoning task suitable for large language models. The method first decomposes time series into autoregressive segments, then constructs a temporal dependency graph capturing relationships between co-evolving series. This graph is serialized into a narrative format that LLMs can process, creating a chain-of-thought-like reasoning path. The intermediate steps in this reasoning explicitly capture contextual dynamics, allowing the model to generate forecasts while providing transparent explanations of how different series influence each other over time.
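The pipeline described above can be sketched in miniature. This is an illustrative reconstruction, not the paper's actual implementation: all function names, the fixed-window segmentation, the AR(1) fit, and the correlation-thresholded dependency graph are simplifying assumptions standing in for CAARL's components, but they show how numeric series can become a narrative an LLM could reason over.

```python
# Hedged sketch of CAARL-style preprocessing (illustrative, not the
# paper's API): decompose co-evolving series into autoregressive
# segments, link series in a dependency graph, and serialize the
# result as a narrative prompt for an LLM.

def ar1_coef(seg):
    """Least-squares AR(1) coefficient for one segment."""
    x, y = seg[:-1], seg[1:]
    denom = sum(v * v for v in x)
    return sum(a * b for a, b in zip(x, y)) / denom if denom else 0.0

def pearson(a, b):
    """Pearson correlation between two equal-length series."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb) if sa and sb else 0.0

def build_narrative(series, seg_len=4, thresh=0.8):
    """series: dict name -> list of floats. Returns a text narrative."""
    lines = []
    # Step 1: describe each series segment by its local AR dynamics.
    for name, values in series.items():
        for i in range(0, len(values), seg_len):
            seg = values[i:i + seg_len]
            lines.append(
                f"{name}, segment {i // seg_len}: "
                f"AR(1) coefficient {ar1_coef(seg):.2f}."
            )
    # Step 2: add dependency-graph edges for strongly correlated pairs.
    names = list(series)
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            r = pearson(series[a], series[b])
            if abs(r) >= thresh:
                lines.append(f"{a} and {b} co-evolve (corr {r:.2f}).")
    return "\n".join(lines)

prompt = build_narrative({
    "sales": [1, 2, 3, 4, 5, 6, 7, 8],
    "ads":   [2, 4, 6, 8, 10, 12, 14, 16],
})
print(prompt)
```

The resulting text (per-segment dynamics plus cross-series edges) is what would be handed to the LLM as the intermediate reasoning context; the real method builds a richer temporal dependency graph than this pairwise-correlation stand-in.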