Model Evaluation

Figure: the Model Evaluation widget.

Evaluate different time series models by comparing the errors they make, in terms of: root mean squared error (RMSE), median absolute error (MAE), mean absolute percent error (MAPE), prediction of change in direction (POCID), coefficient of determination (R²), Akaike information criterion (AIC), and Bayesian information criterion (BIC).
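For concreteness, the forecast-based measures could be computed from aligned arrays of actual and predicted values roughly as follows (a minimal NumPy sketch of the standard definitions, not the widget's own implementation):

    import numpy as np

    def rmse(actual, predicted):
        # Root mean squared error
        return np.sqrt(np.mean((actual - predicted) ** 2))

    def mae(actual, predicted):
        # Median absolute error (MAE here follows the widget's naming)
        return np.median(np.abs(actual - predicted))

    def mape(actual, predicted):
        # Mean absolute percent error; undefined where actual == 0
        return 100 * np.mean(np.abs((actual - predicted) / actual))

    def pocid(actual, predicted):
        # Prediction of change in direction: share of steps where the
        # forecast moves the same way as the series, in percent
        return 100 * np.mean(np.sign(np.diff(actual)) == np.sign(np.diff(predicted)))

    def r2(actual, predicted):
        # Coefficient of determination
        ss_res = np.sum((actual - predicted) ** 2)
        ss_tot = np.sum((actual - np.mean(actual)) ** 2)
        return 1 - ss_res / ss_tot

AIC and BIC are information criteria computed from the fitted model's likelihood and number of parameters rather than from forecast errors, so they are omitted from this sketch.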

Signals

Inputs

  • Time series

    Time series as output by the As Timeseries widget.

  • Time series model (multiple)

    One or more time series models to evaluate (e.g. VAR or ARIMA).

Description

Figure: the Model Evaluation widget, with the numbered controls described below.
  1. Number of folds for time series cross-validation.
  2. Number of forecast steps to produce in each fold.
  3. Results for various error measures and information criteria on cross-validated and in-sample data.

Note

This slide (source) shows how cross-validation on time series is performed. In this example, the number of folds (1) is 10 and the number of forecast steps in each fold (2) is 1.
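For intuition, this rolling-origin scheme can be sketched as follows (a hypothetical outline, not the widget's actual code; fit_forecast stands in for whatever trains a model on the training slice and returns a forecast):

    import numpy as np

    def rolling_origin_cv(series, fit_forecast, n_folds=10, n_steps=1):
        # Each fold trains on a growing prefix of the series and
        # forecasts the next n_steps observations, so no fold ever
        # trains on data from its own forecast horizon.
        assert n_folds * n_steps < len(series)
        n = len(series)
        residuals = []
        for fold in range(n_folds):
            split = n - (n_folds - fold) * n_steps
            train = series[:split]
            test = series[split:split + n_steps]
            forecast = fit_forecast(train, n_steps)
            residuals.append(test - forecast)
        return np.concatenate(residuals)

    # Example with a naive last-value forecaster:
    naive = lambda train, horizon: np.repeat(train[-1], horizon)
    series = np.cumsum(np.random.default_rng(0).normal(size=100))
    cv_residuals = rolling_origin_cv(series, naive, n_folds=10, n_steps=1)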

Note

In-sample errors are errors calculated on the training data itself. A stable model is one whose in-sample and out-of-sample errors do not differ significantly.
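As a toy illustration of that comparison (synthetic data and a deliberately naive model, not the widget's computation):

    import numpy as np

    rng = np.random.default_rng(1)
    series = np.cumsum(rng.normal(size=200))       # toy random-walk series
    train, test = series[:180], series[180:]

    fitted = np.full(train.shape, train.mean())    # trivial "mean" model
    forecast = np.full(test.shape, train.mean())

    in_sample = np.sqrt(np.mean((train - fitted) ** 2))
    out_of_sample = np.sqrt(np.mean((test - forecast) ** 2))
    print(f"in-sample RMSE:     {in_sample:.2f}")
    print(f"out-of-sample RMSE: {out_of_sample:.2f}")
    # A much larger out-of-sample RMSE than in-sample RMSE signals an
    # unstable model: it fits the training data but does not generalize.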