Time series forecasting with PyTorch - dataloaders, normalization, metrics and models


PyTorch Forecasting

PyTorch Forecasting is a PyTorch-based package for forecasting with state-of-the-art deep learning architectures. It provides a high-level API and uses PyTorch Lightning to scale training to GPUs or CPUs, with automatic logging.

Documentation · Tutorials · Release Notes

Our article on Towards Data Science introduces the package and provides background information.

PyTorch Forecasting aims to ease state-of-the-art time series forecasting with neural networks, both for real-world cases and for research. The goal is to provide a high-level API with maximum flexibility, along with reasonable defaults for beginners. Specifically, the package provides:

  • A timeseries dataset class which abstracts handling of variable transformations, missing values, randomized subsampling, multiple history lengths, and more.
  • A base model class which provides basic training of timeseries models, along with Tensorboard logging and generic visualizations such as actuals-vs-predictions plots and dependency plots.
  • Multiple neural network architectures for timeseries forecasting that have been enhanced for real-world deployment and come with built-in interpretation capabilities.
  • Timeseries metrics for multi-horizon forecasting.
  • Hyperparameter tuning with optuna (see the sketch right after this list).
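
For the last point, the following is a minimal sketch of tuning a Temporal Fusion Transformer with the package's optuna helper. It assumes train_dataloader and val_dataloader built as in the usage example further below; the parameter ranges are illustrative, not recommendations.

import pickle

from pytorch_forecasting.models.temporal_fusion_transformer.tuning import optimize_hyperparameters

# run an optuna study over the most impactful hyperparameters
study = optimize_hyperparameters(
    train_dataloader,
    val_dataloader,
    model_path="optuna_test",  # directory in which trial checkpoints are stored
    n_trials=100,
    max_epochs=50,
    gradient_clip_val_range=(0.01, 1.0),
    hidden_size_range=(8, 128),
    learning_rate_range=(0.001, 0.1),
    dropout_range=(0.1, 0.3),
    reduce_on_plateau_patience=4,
)

# persist the study and inspect the best hyperparameters found
with open("study.pkl", "wb") as fout:
    pickle.dump(study, fout)
print(study.best_trial.params)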

The package is built on pytorch-lightning, which allows training on CPUs as well as on single and multiple GPUs out of the box.

Installation

If you are working on Windows, you need to first install PyTorch with

pip install torch -f https://download.pytorch.org/whl/torch_stable.html

Otherwise, you can proceed with

pip install pytorch-forecasting

Alternatively, you can install the package via conda:

conda install pytorch-forecasting pytorch -c pytorch>=1.7 -c conda-forge

PyTorch Forecasting is now installed from the conda-forge channel, while PyTorch is installed from the pytorch channel.

To use the MQF2 loss (multivariate quantile loss), also install the optional dependencies via pip install pytorch-forecasting[mqf2]
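
As a hedged illustration (not from the original text), the loss can then be passed to a model like any other metric; the prediction_length argument below mirrors the forecast horizon of the dataset.

from pytorch_forecasting.metrics import MQF2DistributionLoss

# illustrative horizon of 6 steps; pass the loss to a model's from_dataset(...)
loss = MQF2DistributionLoss(prediction_length=6)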

Documentation

Visit https://pytorch-forecasting.readthedocs.io to read the documentation with detailed tutorials.

Available models

The documentation provides a comparison of available models.
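
As a hedged illustration: all models share the same from_dataset constructor pattern, so switching architectures is mostly a matter of switching classes. N-BEATS, for instance, expects a univariate dataset without additional covariates; univariate_training below is a hypothetical TimeSeriesDataSet of that shape.

from pytorch_forecasting import NBeats

# hypothetical: `univariate_training` contains only the target series,
# since N-BEATS does not support covariates
net = NBeats.from_dataset(univariate_training, learning_rate=3e-2, weight_decay=1e-2)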

For information on implementing new models or other custom components, see the how-to-implement-new-models tutorial. It covers basic as well as advanced architectures.
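
A compressed sketch along the lines of that tutorial: a custom model subclasses BaseModel, and the forward method maps the encoded history to a multi-step prediction. Sizes and layer choices here are illustrative.

from torch import nn

from pytorch_forecasting.models import BaseModel


class FullyConnectedModel(BaseModel):
    def __init__(self, input_size: int, output_size: int, hidden_size: int, n_hidden_layers: int, **kwargs):
        # save hyperparameters so the model can be restored from a checkpoint
        self.save_hyperparameters()
        super().__init__(**kwargs)
        module_list = [nn.Linear(input_size, hidden_size), nn.ReLU()]
        for _ in range(n_hidden_layers):
            module_list.extend([nn.Linear(hidden_size, hidden_size), nn.ReLU()])
        module_list.append(nn.Linear(hidden_size, output_size))
        self.network = nn.Sequential(*module_list)

    def forward(self, x: dict) -> dict:
        # x["encoder_cont"] has shape (batch_size, encoder_length, n_features);
        # with a single continuous target it can be flattened into the network input
        network_input = x["encoder_cont"].squeeze(-1)
        prediction = self.network(network_input)
        # rescale the prediction into the target space and wrap it for the base class
        prediction = self.transform_output(prediction, target_scale=x["target_scale"])
        return self.to_network_output(prediction=prediction)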

Usage example

Networks can be trained with the PyTorch Lightning Trainer on pandas DataFrames, which are first converted to a TimeSeriesDataSet.

# imports for training
import lightning.pytorch as pl
from lightning.pytorch.loggers import TensorBoardLogger
from lightning.pytorch.callbacks import EarlyStopping, LearningRateMonitor
# import dataset, network to train and metric to optimize
from pytorch_forecasting import TimeSeriesDataSet, TemporalFusionTransformer, QuantileLoss
from lightning.pytorch.tuner import Tuner

# load data: this is pandas dataframe with at least a column for
# * the target (what you want to predict)
# * the timeseries ID (which should be a unique string to identify each timeseries)
# * the time of the observation (which should be a monotonically increasing integer)
data = ...

# define the dataset, i.e. add metadata to pandas dataframe for the model to understand it
max_encoder_length = 36
max_prediction_length = 6
training_cutoff = "YYYY-MM-DD"  # day for cutoff

training = TimeSeriesDataSet(
    data[lambda x: x.date <= training_cutoff],
    time_idx= ...,  # column name of time of observation
    target= ...,  # column name of target to predict
    group_ids=[ ... ],  # column name(s) for timeseries IDs
    max_encoder_length=max_encoder_length,  # how much history to use
    max_prediction_length=max_prediction_length,  # how far to predict into future
    # covariates static for a timeseries ID
    static_categoricals=[ ... ],
    static_reals=[ ... ],
    # covariates known and unknown in the future to inform prediction
    time_varying_known_categoricals=[ ... ],
    time_varying_known_reals=[ ... ],
    time_varying_unknown_categoricals=[ ... ],
    time_varying_unknown_reals=[ ... ],
)

# create validation dataset using the same normalization techniques as for the training dataset
validation = TimeSeriesDataSet.from_dataset(training, data, min_prediction_idx=training.index.time.max() + 1, stop_randomization=True)

# convert datasets to dataloaders for training
batch_size = 128
train_dataloader = training.to_dataloader(train=True, batch_size=batch_size, num_workers=2)
val_dataloader = validation.to_dataloader(train=False, batch_size=batch_size, num_workers=2)

# create PyTorch Lightning Trainer with early stopping
early_stop_callback = EarlyStopping(monitor="val_loss", min_delta=1e-4, patience=1, verbose=False, mode="min")
lr_logger = LearningRateMonitor()
trainer = pl.Trainer(
    max_epochs=100,
    accelerator="auto",  # run on CPU, if on multiple GPUs, use strategy="ddp"
    gradient_clip_val=0.1,
    limit_train_batches=30,  # 30 batches per epoch
    callbacks=[lr_logger, early_stop_callback],
    logger=TensorBoardLogger("lightning_logs")
)

# define network to train - the architecture is mostly inferred from the dataset, so that only a few hyperparameters have to be set by the user
tft = TemporalFusionTransformer.from_dataset(
    # dataset
    training,
    # architecture hyperparameters
    hidden_size=32,
    attention_head_size=1,
    dropout=0.1,
    hidden_continuous_size=16,
    # loss metric to optimize
    loss=QuantileLoss(),
    # logging frequency
    log_interval=2,
    # optimizer parameters
    learning_rate=0.03,
    reduce_on_plateau_patience=4
)
print(f"Number of parameters in network: {tft.size()/1e3:.1f}k")

# find the optimal learning rate
res = Tuner(trainer).lr_find(
    tft, train_dataloaders=train_dataloader, val_dataloaders=val_dataloader, early_stop_threshold=1000.0, max_lr=0.3,
)
# and plot the result - always visually confirm that the suggested learning rate makes sense
print(f"suggested learning rate: {res.suggestion()}")
fig = res.plot(show=True, suggest=True)
fig.show()

# fit the model on the data - redefine the model with the correct learning rate if necessary
trainer.fit(
    tft, train_dataloaders=train_dataloader, val_dataloaders=val_dataloader,
)
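
Once training has finished, a hedged sketch of loading the best checkpoint and forecasting (relying on PyTorch Lightning's default ModelCheckpoint callback):

# load the best model according to the validation loss and predict
best_model_path = trainer.checkpoint_callback.best_model_path
best_tft = TemporalFusionTransformer.load_from_checkpoint(best_model_path)
predictions = best_tft.predict(val_dataloader)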
