Evaluating Forecasts at Multiple Horizons: An Extension of the Diebold–Mariano Approach

Published version
Peer-reviewed

Abstract

Forecast accuracy tests are fundamental tools for comparing competing predictive models. The widely used Diebold–Mariano (DM) test assesses whether differences in forecast errors are statistically significant. However, its standard form is limited to pairwise comparisons at a single forecast horizon. A number of solutions to this exist in the literature. Relative to these, this paper develops a test based on a Mahalanobis approach rather than on asymptotic normality. Our method incorporates cross-covariances between errors and generalizes the DM framework to account for overlapping forecast windows and autocorrelation. The test provides a transparent alternative for forecasters and practitioners needing unified inference across temporal spans. We use simulations to assess the conditions under which our test performs well, showing promising results in settings where big data are likely to be available.
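The abstract describes a joint test across horizons built on a Mahalanobis distance between the mean loss differentials of two forecasters. The sketch below is a hypothetical illustration of that general idea, not the paper's actual statistic: it stacks the per-horizon loss differentials, forms their Mahalanobis distance under the joint null of equal accuracy, and compares it to a chi-squared distribution. For simplicity it uses a plain sample covariance; the paper's method additionally handles overlapping forecast windows and autocorrelation, which this sketch does not.

```python
import numpy as np
from scipy import stats

def multi_horizon_dm(e1, e2):
    """Hypothetical multi-horizon accuracy test (illustrative sketch only).

    e1, e2 : (n, H) arrays of forecast errors from two competing models,
             where column h holds the h-step-ahead errors.
    Returns a Mahalanobis-type statistic and a chi-squared(H) p-value
    for the joint null of equal predictive accuracy at all H horizons.
    """
    d = e1**2 - e2**2                 # loss differentials under squared-error loss
    n, H = d.shape
    dbar = d.mean(axis=0)             # mean loss differential per horizon
    S = np.cov(d, rowvar=False)       # sample covariance across horizons
    # Mahalanobis distance of dbar from zero, scaled by the sample size;
    # a plain covariance estimate ignores serial correlation in d.
    stat = n * dbar @ np.linalg.solve(S, dbar)
    pval = stats.chi2.sf(stat, df=H)
    return stat, pval
```

Usage: pass two error matrices of equal shape; under the null, the statistic is approximately chi-squared with H degrees of freedom, so small p-values indicate a significant accuracy difference at one or more horizons.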

Description

Publication status: Published

Journal Title

Journal of Forecasting

Journal ISSN

0277-6693 (print)
1099-131X (online)

Publisher

Wiley

Rights and licensing

Except where otherwise noted, this item's license is described as http://creativecommons.org/licenses/by-nc/4.0/