SynTSBench is a comprehensive benchmark framework for evaluating time series deep learning models on synthetic data with controlled characteristics. This repository contains tools for:
- Synthetic Data Generation: Generate time series with specific patterns like trends, seasonality, noise, multivariate relationships, etc.
- Model Evaluation: Evaluate multiple state-of-the-art time series models on the synthetic data
- Benchmarking: Compare model performance across different data characteristics and forecasting tasks
SynTSBench helps researchers and practitioners understand which deep learning architectures are best suited for specific time series patterns and characteristics.
SynTSBench is a synthetic data-based evaluation framework for time series forecasting models. As illustrated in the figure above, the framework employs a systematic approach to assess model capabilities:
- Data Generation Layer: The framework generates controlled time series data, from basic univariate components (such as trends, seasonality, and noise) to complex multivariate patterns with defined inter-variable relationships.
- Evaluation Dimensions: The framework systematically assesses model capabilities across multiple dimensions:
  - Temporal Pattern Learning: Evaluates the model's ability to capture fundamental temporal patterns like trends and seasonality
  - Robustness: Tests the model's resistance to disturbances such as noise and outliers
  - Dependency Modeling: Assesses the model's capability to understand and leverage complex dependencies among multiple variables
  - Complex Pattern Recognition: Tests the model's performance on non-linear patterns, long-term dependencies, and other complex scenarios
- Controllability Advantage: Through synthetic data generation, we can precisely control various data characteristics, enabling deep insights into the strengths and limitations of different model architectures in specific scenarios.
- Synthetic Data Generation: Generate diverse time series datasets with controlled properties like:
  - Trends (linear, non-linear)
  - Seasonal patterns (with varying periods)
  - Noise levels
  - Anomalies
  - Multivariate relationships
  - Short/long range dependencies
  - Complex patterns
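As a minimal illustration of such controlled generation (a standalone sketch, not the repo's actual generators, which live in the dataset/ notebooks), the components above can be composed additively with every characteristic exposed as a parameter:

```python
import numpy as np

def generate_series(n=1000, trend_slope=0.01, period=24, noise_std=0.1,
                    anomaly_rate=0.01, seed=0):
    """Compose a synthetic series from a linear trend, a seasonal cycle,
    Gaussian noise, and sparse point anomalies."""
    rng = np.random.default_rng(seed)
    t = np.arange(n)
    trend = trend_slope * t                    # controlled linear trend
    seasonal = np.sin(2 * np.pi * t / period)  # fixed-period seasonality
    noise = rng.normal(0.0, noise_std, n)      # controllable noise level
    series = trend + seasonal + noise
    # Inject sparse point anomalies as large additive spikes.
    idx = rng.random(n) < anomaly_rate
    series[idx] += rng.choice([-5.0, 5.0], size=idx.sum())
    return series

series = generate_series()  # one 1000-step series with all components
```

Because each property is an explicit parameter, sweeping a single argument (e.g. `noise_std`) while holding the rest fixed yields the controlled datasets the benchmark relies on.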
- Multiple Task Support:
  - Long-term forecasting
  - Short-term forecasting
  - Imputation
  - Anomaly detection
  - Classification
- Extensive Model Library: Includes 30+ state-of-the-art time series models:
  - Transformer-based (Transformer, Informer, Autoformer, etc.)
  - CNN-based (TimesNet, etc.)
  - RNN-based (SegRNN, etc.)
  - MLP-based (DLinear, TSMixer, etc.)
  - Advanced architectures (TimeMixer, TimeKAN, TimeLLM, Mamba, etc.)
- dataset/: Jupyter notebooks for generating synthetic time series
- data_provider/: Data loading and processing utilities
- exp/: Experiment modules for different tasks
- layers/: Neural network layer implementations
- models/: Time series model implementations
- scripts/: Utility scripts for running experiments
- tutorial/: Tutorial notebooks
- utils/: Utility functions for data processing and evaluation
# Clone the repository
git clone https://github.com/username/SynTSBench.git
cd SynTSBench
# Install requirements
pip install -r requirements.txt

Use the notebooks in the dataset/ directory to generate synthetic time series datasets with specific properties.
python run.py \
--task_name long_term_forecast \
--is_training 1 \
--model_id experiment1 \
--model TimesNet \
--data generated \
--root_path ./data/ \
--data_path generated_trend.csv \
--features M \
--seq_len 96 \
--label_len 48 \
--pred_len 96 \
--e_layers 2 \
  --d_model 512

SynTSBench includes 30+ time series models such as:
- Transformer-based: Transformer, Informer, Autoformer, FEDformer, Pyraformer, ETSformer, iTransformer, PatchTST, Crossformer, LightTS, Reformer, CATS
- MLP-based: DLinear, TSMixer, TimeMixer, PaiFilter, TexFilter, N-BEATS
- CNN-based: SCINet, TimesNet
- RNN-based: SegRNN, TPGN
- Advanced architectures: TimeLLM, TimeKAN, Mamba, MambaSimple, Koopa
Add custom data generation scripts in the dataset/ directory.
- Create a new model file in the models/ directory
- Implement the model following the interface of other models
- Add the model to MODEL_CONFIGS in generate_model_script.py
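As a rough illustration of the expected interface (an assumption based on Time-Series-Library conventions; check the existing files in models/ for the authoritative signature), a new model exposes a forward pass that maps an encoder input window of length `seq_len` to a prediction window of length `pred_len`. NumPy stands in for PyTorch here only to keep the sketch self-contained:

```python
import numpy as np

class Model:
    """Hypothetical stub mirroring the forward signature used by the
    existing models; real models in models/ subclass torch.nn.Module."""

    def __init__(self, configs):
        self.seq_len = configs["seq_len"]
        self.pred_len = configs["pred_len"]

    def forward(self, x_enc, x_mark_enc, x_dec, x_mark_dec):
        # Naive last-value baseline: repeat the final observed step
        # pred_len times, preserving batch and channel dimensions.
        last = x_enc[:, -1:, :]                        # [batch, 1, channels]
        return np.repeat(last, self.pred_len, axis=1)  # [batch, pred_len, channels]

configs = {"seq_len": 96, "pred_len": 24}
model = Model(configs)
out = model.forward(np.random.randn(8, 96, 7), None, None, None)  # -> (8, 24, 7)
```

Any architecture that honors this input/output contract can then be registered in MODEL_CONFIGS and driven by the same run.py arguments as the built-in models.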
After running experiments, use the collection scripts to gather results:
python collect_results.py

If you use this code, please cite:
to be updated
@article{syntsbench2023,
title={SynTSBench: A Synthetic Time Series Benchmark for Evaluating Deep Learning Models},
author={...},
journal={...},
year={2023}
}
This project is licensed under the MIT License - see the LICENSE file for details.
Our implementation adapts Time-Series-Library as its code base; we have extensively modified it for our purposes. We thank the authors for sharing their implementations and related resources.
