I recently returned from the International Symposium on Forecasting "Frontiers in Forecasting" conference in Riverside. I presented some of my work on inflation uncertainty in a session devoted to uncertainty and the real economy. A highlight was the talk by Barbara Rossi, a featured presenter from Universitat Pompeu Fabra, on "Forecasting in Unstable Environments: What Works and What Doesn't." (This post will be a bit more technical than my usual.)
Rossi spoke about instabilities in reduced form models and gave an overview of the evidence on what works and what doesn't in guarding against these instabilities. The basic issue is that the predictive ability of different models and variables changes over time. For example, the term spread was a pretty good predictor of GDP growth until the 1990s, and the credit spread was not. But in the 90s the situation reversed, and the credit spread became a better predictor of GDP growth while the term spread got worse.
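This kind of instability is easy to see in a toy example. The sketch below (entirely synthetic data and made-up variable names, not the actual spread series) builds a target that loads on one predictor in the first half of the sample and on a different predictor in the second half, then compares rolling-window forecast errors from each predictor in each regime:

```python
import numpy as np

rng = np.random.default_rng(0)
T = 400
x_a = rng.standard_normal(T)   # stand-in for the "term spread"
x_b = rng.standard_normal(T)   # stand-in for the "credit spread"
noise = 0.3 * rng.standard_normal(T)

# The target depends on x_a in the first half of the sample
# and on x_b in the second half -- a stylized regime change.
y = np.where(np.arange(T) < T // 2, x_a, x_b) + noise

def window_rmse(x, y, start, end, w=60):
    """RMSE of one-step forecasts from OLS refit on a rolling window of length w."""
    errs = []
    for t in range(start, end):
        beta = np.polyfit(x[t - w:t], y[t - w:t], 1)  # slope and intercept
        errs.append(y[t] - np.polyval(beta, x[t]))
    return float(np.sqrt(np.mean(np.square(errs))))

early = (80, 200)   # evaluation points inside the first regime
late = (260, 400)   # evaluation points inside the second regime
print("early regime:", window_rmse(x_a, y, *early), window_rmse(x_b, y, *early))
print("late regime: ", window_rmse(x_a, y, *late), window_rmse(x_b, y, *late))
```

In the first regime x_a forecasts well and x_b does not; in the second regime the ranking flips, mirroring the term-spread/credit-spread reversal.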
Rossi noted that break tests and time-varying parameter models, two common ways to protect against instabilities in forecasting relationships, involve tradeoffs. For example, it is common to test for a break in an empirical relationship, then estimate a model in which the coefficients before and after the break differ. Including a break point reduces the bias of your estimates, but also reduces their precision: the more break points you add, the shorter the subsamples you use to estimate the coefficients. This is similar to what happens if you start adding tons of control variables to a regression when your number of observations is small.
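The bias/precision tradeoff can be sketched in a few lines. Here I simulate a regression whose slope changes at a break date (which I assume is known, to keep the example minimal); the pooled estimate that ignores the break is biased toward an average of the two slopes, while the split-sample estimates are unbiased but each rest on fewer observations:

```python
import numpy as np

rng = np.random.default_rng(1)
T = 200
x = rng.standard_normal(T)

# True slope changes from 0.5 to 2.0 at an (assumed known) break date.
break_t = 120
beta_true = np.where(np.arange(T) < break_t, 0.5, 2.0)
y = beta_true * x + 0.2 * rng.standard_normal(T)

def ols_slope(x, y):
    """OLS slope from a regression of y on a constant and x."""
    X = np.column_stack([np.ones_like(x), x])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef[1]

pooled = ols_slope(x, y)                    # ignores the break: biased blend of 0.5 and 2.0
pre = ols_slope(x[:break_t], y[:break_t])   # near 0.5, but estimated on only 120 obs
post = ols_slope(x[break_t:], y[break_t:])  # near 2.0, but estimated on only 80 obs
print(pooled, pre, post)
```

Each additional break date would slice these subsamples thinner still, which is exactly the precision cost Rossi described.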
Rossi also discussed rolling window estimation. Choosing the optimal window size is a challenge, with a similar bias/precision trade-off. The standard practice of reporting results from only a single window size is problematic, because the window size may have been selected based on "data snooping" to obtain the most desirable results. In work with Atsushi Inoue, Rossi develops out-of-sample forecast tests that are robust to window size. Many of the basic tools and tests from macroeconomic forecasting-- Granger causality tests, forecast comparison tests, and forecast optimality tests-- can be made more robust to instabilities. For details, see Raffaella Giacomini and Rossi's chapter in the Handbook of Research Methods and Applications in Empirical Macroeconomics and references therein.
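To see how much the window size can matter, here is a minimal synthetic sketch (my own illustration, not the Inoue-Rossi test itself): the slope of the forecasting relationship flips sign at a break, and the out-of-sample RMSE after the break varies substantially with the chosen window:

```python
import numpy as np

rng = np.random.default_rng(2)
T = 500
x = rng.standard_normal(T)
beta_true = np.where(np.arange(T) < 300, 1.0, -1.0)  # slope flips sign at t = 300
y = beta_true * x + 0.5 * rng.standard_normal(T)

def rolling_rmse(w):
    """Out-of-sample RMSE of one-step forecasts using the last w observations."""
    errs = []
    for t in range(320, T):  # evaluate forecasts made after the break
        b = np.polyfit(x[t - w:t], y[t - w:t], 1)
        errs.append(y[t] - np.polyval(b, x[t]))
    return float(np.sqrt(np.mean(np.square(errs))))

for w in (20, 60, 150, 250):
    print("window", w, "RMSE", round(rolling_rmse(w), 3))
```

Short windows adapt quickly to the new regime but estimate noisily; long windows keep averaging in stale pre-break data, so the long-window RMSE is the worst here. Since a researcher could report whichever window looks best, window-robust tests of the kind Rossi and Inoue propose are a natural safeguard.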
A bit of practical advice from Rossi was to maintain large-dimensional datasets as a guard against instability. In unstable environments, variables that are not useful now may be useful later, and it is increasingly computationally feasible to store and work with big datasets.