Evaluating Forecasts at Multiple Horizons: An Extension of the Diebold–Mariano Approach
Published version
Peer-reviewed
Abstract
Forecast accuracy tests are fundamental tools for comparing competing predictive models. The widely used Diebold–Mariano (DM) test assesses whether differences in forecast errors are statistically significant. However, its standard form is limited to pairwise comparisons at a single forecast horizon. A number of solutions to this exist in the literature. Relative to these, this paper develops a test based on a Mahalanobis approach rather than on asymptotic normality. Our method incorporates cross-covariances between errors and generalizes the DM framework to account for overlapping forecast windows and autocorrelation. The test provides a transparent alternative for forecasters and practitioners needing unified inference across temporal spans. We use simulations to assess the conditions under which our test performs well, showing promising results in situations where big data are likely to be available.
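As a rough illustration of the kind of statistic the abstract describes, the sketch below (not the authors' exact procedure) stacks loss differentials across H horizons into a vector and forms a Mahalanobis-type quadratic form. The plain sample covariance used here is an assumption for simplicity; the paper's estimator, which accounts for overlapping windows and autocorrelation, may differ.

```python
import numpy as np
from scipy import stats


def multi_horizon_dm(e1, e2, loss=np.square):
    """Illustrative joint comparison of two forecasts across horizons.

    e1, e2 : (T, H) arrays of forecast errors from two competing
             models, one column per horizon.
    Returns a Mahalanobis-type statistic and a chi-square p-value
    with H degrees of freedom (a simplifying assumption; the
    paper's reference distribution may differ).
    """
    d = loss(e1) - loss(e2)        # (T, H) loss differentials
    T, H = d.shape
    dbar = d.mean(axis=0)          # mean differential per horizon
    # Sample covariance of the differential vector across time.
    # A HAC-adjusted estimator would be needed in practice to
    # handle the autocorrelation the abstract mentions.
    S = np.cov(d, rowvar=False)    # (H, H)
    stat = T * dbar @ np.linalg.solve(S, dbar)
    pval = stats.chi2.sf(stat, df=H)
    return stat, pval
```

Under the null of equal predictive accuracy at every horizon, the mean differential vector is zero, so large values of the quadratic form signal a difference at one or more horizons jointly.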
Publication status: Published
Journal ISSN: 1099-131X

