Wednesday, June 24, 2015

Forecasting in Unstable Environments

I recently returned from the International Symposium on Forecasting ("Frontiers in Forecasting") in Riverside. I presented some of my work on inflation uncertainty in a session devoted to uncertainty and the real economy. A highlight was the talk by Barbara Rossi, a featured presenter from Universitat Pompeu Fabra, on "Forecasting in Unstable Environments: What Works and What Doesn't." (This post will be a bit more technical than my usual.)

Rossi spoke about instabilities in reduced form models and gave an overview of the evidence on what works and what doesn't in guarding against these instabilities. The basic issue is that the predictive ability of different models and variables changes over time. For example, the term spread was a pretty good predictor of GDP growth until the 1990s, and the credit spread was not. But in the 90s the situation reversed, and the credit spread became a better predictor of GDP growth while the term spread got worse.

Rossi noted that break tests and time varying parameter models, two common ways to protect against instabilities in forecasting relationships, do involve tradeoffs. For example, it is common to test for a break in an empirical relationship, then estimate a model in which the coefficients before and after the break differ. Including a break point reduces the bias of your estimates, but also reduces the precision. The more break points you add, the shorter are the time samples you use to estimate the coefficients. This is similar to what happens if you start adding tons of control variables to a regression when your number of observations is small.
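The tradeoff is easy to see in a toy simulation. The sketch below (pure Python, entirely hypothetical data, with the break date treated as known) compares a pooled regression that ignores a break with split-sample estimates on either side of it: the pooled slope is biased toward the average of the two regimes, while the split estimates are unbiased but each rely on half the observations.

```python
import random

def ols_slope(x, y):
    """Slope coefficient from a bivariate OLS regression of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    var = sum((xi - mx) ** 2 for xi in x)
    return cov / var

random.seed(0)
n, brk = 120, 60
x = [random.gauss(0, 1) for _ in range(n)]
# The true slope shifts from 1.0 to 2.0 at the (known) break date.
y = [(1.0 if t < brk else 2.0) * x[t] + random.gauss(0, 0.5) for t in range(n)]

pooled = ols_slope(x, y)            # ignores the break: biased toward ~1.5
pre = ols_slope(x[:brk], y[:brk])   # unbiased for 1.0, but only 60 obs
post = ols_slope(x[brk:], y[brk:])  # unbiased for 2.0, but only 60 obs
```

Adding more breaks repeats this logic: each extra break date shrinks the subsamples and inflates the standard errors on the subsample coefficients.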

Rossi also discussed rolling window estimation. Choosing the optimal window size is a challenge, with a similar bias/precision trade-off. The standard practice of reporting results from only a single window size is problematic, because the window size may have been selected based on "data snooping" to obtain the most desirable results. In work with Atsushi Inoue, Rossi develops out-of-sample forecast tests that are robust to window size. Many of the basic tools and tests from macroeconomic forecasting--Granger causality tests, forecast comparison tests, and forecast optimality tests--can be made more robust to instabilities. For details, see Raffaella Giacomini and Rossi's chapter in the Handbook of Research Methods and Applications in Empirical Macroeconomics and references therein.
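To see why the window size matters so much, here is a minimal simulated sketch (not the Inoue-Rossi robust test itself, just an illustration on hypothetical data): when the true coefficient drifts over time, a short estimation window tracks the drift at the cost of noisier estimates, while a long window averages over stale parameter values.

```python
import random

def ols_slope(x, y):
    """Slope coefficient from a bivariate OLS regression of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    return cov / sum((xi - mx) ** 2 for xi in x)

random.seed(1)
n = 200
x = [random.gauss(0, 1) for _ in range(n)]
beta = [0.02 * t for t in range(n)]  # true slope drifts steadily upward
y = [beta[t] * x[t] + random.gauss(0, 0.3) for t in range(n)]

def rolling_mse(window):
    """One-step-ahead forecast MSE using a rolling estimation window."""
    errs = []
    for t in range(60, n):
        b = ols_slope(x[t - window:t], y[t - window:t])
        errs.append((y[t] - b * x[t]) ** 2)
    return sum(errs) / len(errs)

mse_short, mse_long = rolling_mse(20), rolling_mse(60)
```

Here the drift is strong enough that the short window forecasts better, but with a stable coefficient the ranking would flip--which is exactly why a single reported window size can be the product of snooping.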

A bit of practical advice from Rossi was to maintain large-dimensional datasets as a guard against instability. In unstable environments, variables that are not useful now may be useful later, and it is increasingly computationally feasible to store and work with big datasets.

Wednesday, June 17, 2015

Another Four Percent

When Jeb Bush announced his presidential candidacy on Monday, he made a bold claim. "There's not a reason in the world we can't grow at 4 percent a year," he said, "and that will be my goal as president."

You can pretty much guarantee that whenever a politician claims "there's not a reason in the world," plenty of people will be happy to provide one, and this case is no exception. Those reasons aside, for now, where did this 4 percent target come from? Jordan Weissmann explains that "the figure apparently originated during a conference call several years ago, during which Bush and several other advisers were brainstorming potential economic programs for the George W. Bush Institute...Jeb casually tossed out the idea of 4 percent growth, which everybody loved, even though it was kind of arbitrary." Jeb Bush himself calls 4 percent "a nice round number. It's double the growth that we are growing at." (To which Jon Perr snippily adds, "It's also an even number and the square of two.")

Let's face it, we have a thing for nice, round, kind of arbitrary numbers. The 2 percent inflation target, for example, was not chosen as the precise solution to some optimization problem, but more as a "rough guess [that] acquired force as a focal point." Psychology research shows that people put in extra effort to reach round number goals, like a batting average of .300 rather than .299. A 4 percent growth target reduces something multidimensional and hard to define--economic success--to a single, salient number. An explicit numerical target provides an easy guide for accountability. This can be very useful, but it can also backfire.

As an analogy, imagine that citizens of some country have a vague, noble goal for their education system, like "improving student learning." They want to encourage school administrators and teachers to pursue this goal and hold them accountable. But with so many dimensions of student learning, it is difficult to gauge effort or success. They could introduce a mandatory, standardized math test for all students, and rate a teacher as "highly successful" if his or her students' scores improve by at least 10% over the course of the year. A nice round number. This would provide a simple, salient way to judge success, and it would certainly change what goes on in the classroom, with obvious upsides and downsides. Many teachers would put in more effort to ensure that students learned math--at least, the math covered on the test--but might neglect literature, art, or gym. Administrators might have an incentive to engage in deceptive accounting practices, finding reasons why a particular student's score should not be counted, or why a group of students should switch classrooms. Even outright cheating, though likely rare, is possible, especially if jobs hinge on the difference between a 9.9% improvement and 10%. What is the harm in changing one or two answers?

Ceteris paribus, more math skills would bring a variety of benefits, just like more growth would, as the George W. Bush Institute's 4% Growth Project likes to point out. But making 4 percent growth the standard for success could also change policymakers' incentives and behaviors in some perverse ways. Potential policies' ability to boost growth will be overemphasized, and other merits or flaws (e.g. for the environment or the income distribution) underemphasized. The purported goal is sustained 4 percent growth over long time periods, which implies making the kind of long-run-minded reforms that boost both actual and potential GDP--not just running the economy above capacity for as long as possible until the music stops. But realistically, a president would worry more about achieving 4 percent while in office and less about afterwards, encouraging short-termism at best, or more unsavory practices at worst.

Even with all of these caveats, if the idea of a 4 percent solution still sounds appealing, it is worth opening up the discussion to what other 4 percent solutions might be better. Laurence Ball, Brad DeLong, and Paul Krugman have made the case for a 4 percent inflation target. I see their points but am not fully convinced. But what about 4 percent unemployment? Or 4 percent nominal wage growth? Are they more or less attainable than 4 percent GDP growth, and how would the benefits compare? If we do decide to buy into a 4 percent target, it is worth at least pausing to think about which 4 percent.

Tuesday, June 16, 2015

Wage Increases Do Not Signal Impending Inflation

When the FOMC meets over the next two days, they will surely be looking for signs of impending inflation. Even though actual inflation is below target, any hint that pressure is building will be seized upon by more hawkish committee members as impetus for an earlier rate rise. The relatively strong May jobs report and uptick in nominal wage inflation are likely to draw attention in this respect.

Hopefully the FOMC members are aware of new research by two of the Fed's own economists, Ekaterina Peneva and Jeremy Rudd, on the passthrough (or lack thereof) of labor costs to price inflation. The research, which fails to find an important role for labor costs in driving inflation movements, casts doubt on wage-based explanations of inflation dynamics in recent years. They conclude that "price inflation now responds less persistently to changes in real activity or costs; at the same time, the joint dynamics of inflation and compensation no longer manifest the type of wage–price spiral that was evident in earlier decades."

Peneva and Rudd use a time-varying parameter/stochastic volatility VAR framework, which lets them see how core inflation responds to a shock to the growth rate of labor costs at different times. A figure in their paper shows how the response has varied over the past few decades: in 1975 and 1985, a rise in labor cost growth was followed by a rise in core inflation, but in recent decades, both before and after the Great Recession, there is no such response.
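Their TVP/SV VAR is well beyond a blog-post sketch, but the flavor of the finding can be mimicked with a much simpler split-sample regression on simulated data (everything below is hypothetical--not their data, and a crude stand-in for their method): if the true passthrough from wage growth to inflation fades to zero partway through the sample, estimates from the early and late subsamples diverge accordingly.

```python
import random

def ols_slope(x, y):
    """Slope coefficient from a bivariate OLS regression of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    return cov / sum((xi - mx) ** 2 for xi in x)

random.seed(2)
n = 240  # e.g. 60 years of quarterly data
wage_growth = [random.gauss(0, 1) for _ in range(n)]
# Hypothetical passthrough: 0.5 in the first half of the sample, 0 after.
passthrough = [0.5 if t < n // 2 else 0.0 for t in range(n)]
inflation = [passthrough[t] * wage_growth[t] + random.gauss(0, 0.4)
             for t in range(n)]

early = ols_slope(wage_growth[:80], inflation[:80])   # close to 0.5
late = ols_slope(wage_growth[-80:], inflation[-80:])  # close to 0.0
```

The advantage of the TVP/SV approach over a crude split like this is that it lets the passthrough coefficient (and the shock volatilities) evolve smoothly, without imposing a break date.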

Peneva and Rudd do not take a strong stance on why wage-price dynamics appear to have changed. But their findings do complement research from Yash Mehra in 2000, who suggests that "One problem with this popular 'cost-push' view of the inflation process is that it does not recognize the influences of Federal Reserve policy and the resulting inflation environment on determining the causal influence of wage growth on inflation. If the Fed follows a non-accommodative monetary policy and keeps inflation low, then firms may not be able to pass along excessive wage gains in the form of higher product prices." Mehra finds that "Wage growth no longer helps predict inflation if we consider subperiods that begin in the early 1980s...The period since the early 1980s is the period during which the Fed has concentrated on keeping inflation low. What is new here is the finding that even in the pre-1980 period there is another subperiod, 1953Q1 to 1965Q4, during which wage growth does not help predict inflation. This is also the subperiod during which inflation remained mostly low, mainly due to monetary policy pursued by the Fed."