Showing posts with label unemployment.

Tuesday, October 24, 2017

Is Taylor a Hawk or Not?

Two Bloomberg articles published just a week apart call John Taylor, a contender for Fed Chair, first hawkish, then dovish. The first, by Garfield Clinton Reynolds, notes:
...The dollar rose and the 10-year U.S. Treasury note fell on Monday after Bloomberg News reported Taylor, a professor at Stanford University, impressed President Donald Trump in a recent White House interview. 
Driving those trades was speculation that the 70 year-old Taylor would push rates up to higher levels than a Fed helmed by its current chair, Janet Yellen. That’s because he is the architect of the Taylor Rule, a tool widely used among policy makers as a guide for setting rates since he developed it in the early 1990s.
But the second, by Rich Miller, claims that "Taylor’s Walk on Supply Side May Leave Him More Dove Than Yellen." Miller explains,
"While Taylor believes the [Trump] administration can substantially lift non-inflationary economic growth through deregulation and tax changes, Yellen is more cautious. That suggests that the Republican Taylor would be less prone than the Democrat Yellen to raise interest rates in response to a policy-driven economic pick-up."
What actually makes someone a hawk? Simply favoring rules-based policy is not enough. A central banker could use a variation of the Taylor rule that implies very little response to inflation, or that allows very high average inflation. Beliefs about the efficacy of supply-side policies also do not determine hawk or dove status. Let's look at the Taylor rule from Taylor's 1993 paper:
r = p + .5y + .5(p – 2) + 2,
where r is the federal funds rate, y is the percent deviation of real GDP from target, and p is inflation over the previous 4 quarters. Taylor notes (p. 202) that lagged inflation is used as a proxy for expected inflation, and y=100(Y-Y*)/Y* where Y is real GDP and Y* is trend GDP (a proxy for potential GDP).

The 0.5 coefficients on the y and (p-2) terms reflect Taylor's estimate of how the Fed approximately behaved, but in general a Taylor rule could have different coefficients, reflecting the central bank's preferences. The bank could also have an inflation target p* not equal to 2, and replace (p-2) with (p-p*). Simply being committed to following a Taylor rule does not tell you what steady-state inflation rate, or how much inflation volatility, a central banker would allow. For example, a central bank could follow a rule with p*=5 and a relatively large coefficient on y and small coefficient on (p-5), allowing both high and volatile inflation.
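To make the point concrete, here is a minimal Python sketch (my own illustration, not code from Taylor or from this post) of a generic Taylor-type rule; the function name and the "dovish" parameter values are just assumptions for illustration.

# Generic Taylor-type rule: r = p + a_y*y + a_p*(p - p_star) + r_star
def taylor_rule(p, y, a_p=0.5, a_y=0.5, p_star=2.0, r_star=2.0):
    """Prescribed nominal policy rate (percent).
    p: inflation over the previous four quarters; y: percent deviation of real
    GDP from trend; a_p, a_y: preference weights; p_star: inflation target;
    r_star: assumed equilibrium real rate (2 in Taylor 1993)."""
    return p + a_y * y + a_p * (p - p_star) + r_star

# Taylor's original 1993 parameterization: on-target inflation, zero output gap
print(taylor_rule(p=2.0, y=0.0))                                   # 4.0
# A rule-follower with a 5% target and a small weight on the inflation gap
print(taylor_rule(p=2.0, y=0.0, a_p=0.1, a_y=1.0, p_star=5.0))     # 3.7

Both central bankers are strictly "rules-based," yet they would tolerate very different inflation outcomes.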

What do "supply side" beliefs imply? Well, Miller thinks that Taylor believes the Trump tax and deregulatory policy changes will raise potential GDP, or Y*. For a given value of Y, a higher estimate of Y* implies a lower estimate of y, which implies lower r. So yes, in the very short run, we could see lower r from a central banker who "believes" in supply side economics than from one who doesn't, all else equal.

But what if Y* does not really rise as much as a supply-sider central banker thinks it will? Then the lower r will result in higher p (and Y), to which the central bank will react by raising r. So long as the central bank follows the Taylor principle (that is, the sum of the coefficients on p and (p-p*) in the rule is greater than 1), equilibrium long-run inflation is p*.

The parameters of the Taylor rule reflect the central bank's preferences. The right-hand-side variables, like Y*, have to be measured or forecasted. How well they are measured reflects the central bank's competence at measurement and forecasting, which depends on a number of factors, ranging from the strength of its staff economists to the priors of the Fed Chair to the volatility and unpredictability of other economic conditions and policies.

Neither Taylor nor Yellen seems likely to change the inflation target to something other than 2 (and even if they wanted to, they could not unilaterally make that decision). They do likely differ in their preferences for stabilizing inflation versus stabilizing output, and in that respect I'd guess Taylor is more hawkish.

Yellen's efforts to look at alternative measures of labor market conditions in the past are also about Y*. Some versions of the Taylor rule use unemployment measures instead of output measures (the idea being that the two generally comove). Willingness to consider multiple measures of employment and/or output is really just an attempt to get a better measure of how far the real economy is from "potential." It doesn't make a person inherently more or less hawkish.

As an aside, this whole discussion presumes that monetary policy itself (or more generally, aggregate demand shifts) do not change Y*. Hysteresis theories reject that premise. 

Friday, August 18, 2017

The Low Misery Dilemma

The other day, Tim Duy tweeted:

It took me a moment--and I'd guess I'm not alone--to even recognize how remarkable this is. The New York Times ran an article with the headline "Fed Officials Confront New Reality: Low Inflation and Low Unemployment." Confront, not embrace, not celebrate.

The misery index is the sum of unemployment and inflation. Arthur Okun proposed it in the 1960s as a crude gauge of the economy, based on the fact that high inflation and high unemployment are both miserable (so high values of the index are bad). The misery index was pretty low in the 60s, in the 6% to 8% range, similar to where it has been since around 2014. Now it is around 6%. Great, right?
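For readers who want the arithmetic spelled out, here is a trivial sketch (the inputs are rough illustrative numbers, not official data):

# Misery index = unemployment rate + inflation rate (both in percent)
def misery(unemployment, inflation):
    return unemployment + inflation

print(misery(4.2, 2.0))    # about 6, roughly where the index is now
print(misery(7.2, 13.5))   # about 21, in the neighborhood of the 1980 peak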

The NYT article notes that we are in an opposite situation to the stagflation of the 1970s and early 80s, when both high inflation and high unemployment were concerns. The misery index reached a high of 21% in 1980. (The unemployment data is only available since 1948).

Very high inflation and high unemployment are each individually troubling for the social welfare costs they impose (which are more obvious for unemployment). But observed together, they also troubled economists for seeming to run contrary to the Phillips curve-based models of the time. The tradeoff between inflation and unemployment wasn't what economists and policymakers had believed, and their misunderstanding probably contributed to the misery.

Though economic theory has evolved, the basic Phillips curve tradeoff idea is still an important part of central bankers' models. By models, I mean both the formal quantitative models used by their staffs and the way they think about how the world works. General idea: if the economy is above full employment, that should put upward pressure on wages, which should put upward pressure on prices.

So low unemployment combined with low inflation seems like a nice problem to have, but if this is indeed a new reality--that is, something that will last--then there is something amiss in that chain of logic. Maybe we are not at full employment, because the natural rate of unemployment is a lot lower than we thought, or we are looking at the wrong labor market indicators. Maybe full employment does not put upward pressure on wages, for some reason, or maybe we are looking at the wrong wage measures. For example, San Francisco Fed researchers argue that wage growth measures should be adjusted in light of retiring Baby Boomers. Or maybe the link between wage and price inflation has weakened.

Until policymakers feel confident that they understand why we are experiencing both low inflation and low unemployment, they can't simply embrace the low misery. It is natural that they will worry that they are missing something, and that the consequences of whatever that is could be disastrous. The question is what to do in the meanwhile.

There are two camps for Fed policy. One camp favors a wait-and-see approach: hold rates steady until we actually observe inflation rising above 2%. Maybe even let it stay above 2% for a while, to make up for the lengthy period of below-2% inflation. The other camp favors raising rates preemptively, just in case we are missing some sign that inflation is about to spiral out of control. This latter possibility strikes me as unlikely, but I'm admittedly oversimplifying the concerns, and also haven't personally experienced high inflation.


Monday, August 7, 2017

Labor Market Conditions Index Discontinued

A few years ago, I blogged about the Fed's new Labor Market Conditions Index (LMCI). The index attempts to summarize the state of the labor market using a statistical technique that captures the primary common variation from 19 labor market indicators. I was skeptical about the usefulness of the LMCI for a few reasons. And as it turns out, the LMCI was discontinued as of August 3.

The discontinuation is newsworthy because the LMCI was cited in policy discussions at the Fed, even by Janet Yellen. The index became high-profile enough that I was even interviewed about it on NPR's Marketplace.

One issue that I noted with the index in my blog was the following:
A minor quibble with the index is its inclusion of wages in the list of indicators. This introduces endogeneity that makes it unsuitable for use in Phillips Curve-type estimations of the relationship between labor market conditions and wages or inflation. In other words, we can't attempt to estimate how wages depend on labor market tightness if our measure of labor market tightness already depends on wages by construction.
This corresponds to one reason that is provided for the discontinuation of the index: "including average hourly earnings as an indicator did not provide a meaningful link between labor market conditions and wage growth."

The other reasons provided for discontinuation are that "model estimates turned out to be more sensitive to the detrending procedure than we had expected" and "the measurement of some indicators in recent years has changed in ways that significantly degraded their signal content."

I also noted in my blog post and on NPR that the index is almost perfectly correlated with the unemployment rate, meaning it provides very little additional information about labor market conditions. (Or interpreted differently, meaning that the unemployment rate provides a lot of information about labor market conditions.) The development of the LMCI was part of a worthy effort to develop alternative informative measures of labor market conditions that can help policymakers gauge where we are relative to full employment and predict what is likely to happen to prices and wages. So since resources and attention are limited, I think it is wise to direct them toward developing and evaluating other measures.

Wednesday, May 31, 2017

Low Inflation at "Essentially Full Employment"

Yesterday, Brad Delong took issue with Charles Evans' recent claim that "Today, we have essentially returned to full employment in the U.S." Evans, President of the Federal Reserve Bank of Chicago and a member of the FOMC, was speaking before the Bank of Japan Institute for Monetary and Economic Studies in Tokyo on "lessons learned and challenges ahead" in monetary policy. Delong points out that the age 25-54 employment-to-population ratio in the United States of 78.5% is low by historical standards and given social and demographic trends.

Evans' claim that the U.S. has returned to full employment is followed by his comment that "Unfortunately, low inflation has been more stubborn, being slower to return to our objective. From 2009 to the present, core PCE inflation, which strips out the volatile food and energy components, has underrun 2% and often by substantial amounts." Delong asks,
And why the puzzlement at the failure of core inflation to rise to 2%? That is a puzzle only if you assume that you know with certainty that the unemployment rate is the right variable to put on the right hand side of the Phillips Curve. If you say that the right variable is equal to some combination with weight λ on prime-age employment-to-population and weight 1-λ on the unemployment rate, then there is no puzzle—there is simply information about what the current value of λ is.
It is not totally obvious why prime-age employment-to-population should drive inflation distinctly from unemployment--that is, why Delong's λ should not be zero, as in the standard Phillips Curve. Note that the employment-to-population ratio grows with the labor force participation rate (LFPR) and declines with the unemployment rate. Typically, labor force participation is mostly acyclical: its longer run trends dwarf any movements at the business cycle frequency (see graph below). So in a normal recession, the decline in the employment-to-population ratio is mostly attributable to the rise in the unemployment rate, not the fall in LFPR (so it shouldn't really matter if you simply impose λ=0).
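Here is a rough Python sketch of what Delong's suggestion amounts to empirically. This is my own stylization, not his or the post's actual specification; the variable names, benchmark values, and placeholder series are assumptions.

import numpy as np

# Slack as a lambda-weighted combination of the prime-age employment-to-population
# gap and the unemployment gap; inflation regressed on that combination:
#   slack_t = lam*(epop_star - epop_t) + (1 - lam)*(u_t - u_star)
#   pi_t    = alpha + beta*slack_t + error
# epop, u, pi are placeholders for actual series (e.g., from FRED);
# epop_star and u_star are assumed full-employment benchmarks.

def slack(epop, u, lam, epop_star=80.0, u_star=4.7):
    return lam * (epop_star - epop) + (1.0 - lam) * (u - u_star)

def fit_phillips(pi, epop, u, lam):
    """OLS of inflation on the lambda-weighted slack measure; returns (alpha, beta)."""
    x = slack(epop, u, lam)
    X = np.column_stack([np.ones_like(x), x])
    coef, *_ = np.linalg.lstsq(X, pi, rcond=None)
    return coef

# With lam = 0 this collapses to the standard unemployment-gap Phillips curve;
# comparing the fit across values of lam is one way to read Delong's point that
# the data contain information about the current value of lambda.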

Source: FRED, labor force participation rate, ages 25-54, https://fred.stlouisfed.org/series/LNS11300060
As Christopher Erceg and Andrew Levin explain, a recession of moderate size and severity does not prompt many departures from the labor market, but long recessions can produce quite pronounced declines in labor force participation. In their model, this gradual response of labor force participation to the unemployment rate arises from high adjustment costs of moving in and out of the formal labor market. But the Great Recession was protracted enough to lead people to leave the labor force despite the adjustment costs. According to their analysis:
cyclical factors can fully account for the post-2007 decline of 1.5 percentage points in the LFPR for prime-age adults (i.e., 25–54 years old). We define the labor force participation gap as the deviation of the LFPR from its potential path implied by demographic and structural considerations, and we find that as of mid-2013 this gap stood at around 2%. Indeed, our analysis suggests that the labor force gap and the unemployment gap each accounts for roughly half of the current employment gap, that is, the shortfall of the employment-to-population rate from its precrisis trend.
Erceg and Levin discuss their results in the context of the Phillips Curve, noting that "a large negative participation gap induces labor force participants to reduce their wage demands, although our calibration implies that the participation gap has less influence than the unemployment rate quantitatively." This means that both unemployment and labor force participation enter the right hand side of the Phillips Curve (and Delong's λ is nonzero), so if a deep recession leaves the LFPR (and, accordingly, the employment-to-population ratio) low even as unemployment returns to its natural rate, inflation will still remain low.

Erceg and Levin also discuss implications for monetary policy design, considering the consequences of responding to the cyclical component of the LFPR in addition to the unemployment rate.
We use our model to analyze the implications of alternative monetary policy strategies against the backdrop of a deep recession that leaves the LFPR well below its longer run potential level. Specifically, we compare a noninertial Taylor rule, which responds to inflation and the unemployment gap to an augmented rule that also responds to the participation gap. In the simulations, the zero lower bound precludes the central bank from lowering policy rates enough to offset the aggregate demand shock for some time, producing a deep recession; once the shock dies away sufficiently, policy responds according to the Taylor rule. A key result of our analysis is that monetary policy can induce a more rapid closure of the participation gap through allowing the unemployment rate to fall below its longrun natural rate. Quite intuitively, keeping unemployment persistently low draws cyclical nonparticipants back into labor force more quickly. Given that the cyclical nonparticipants exert some downward pressure on inflation, some undershooting of the long-run natural rate actually turns out to be consistent with keeping inflation stable in our model.
While the authors don't explicitly use the phrase "full employment," their paper does provide a rationale for the low core inflation we're experiencing despite low unemployment. Erceg and Levin's paper was published in the Journal of Money, Credit, and Banking in 2014; ungated working paper versions from 2013 are available here.

Sunday, January 8, 2017

Post-Election Political Divergence in Economic Expectations

"Note that among Democrats, year-ahead income expectations fell and year-ahead inflation expectations rose, and among Republicans, income expectations rose and inflation expectations fell. Perhaps the most drastic shifts were in unemployment expectations:rising unemployment was anticipated by 46% of Democrats in December, up from just 17% in June, but for Republicans, rising unemployment was anticipated by just 3% in December, down from 41% in June. The initial response of both Republicans and Democrats to Trump’s election is as clear as it is unsustainable: one side anticipates an economic downturn, and the other expects very robust economic growth."
This is from Richard Curtin, Director of the Michigan Survey of Consumers. He is comparing the economic sentiments and expectations of Democrats, Independents, and Republicans who took the survey in June and December 2016. A subset of survey respondents take the survey twice, with a six-month gap, so these are the respondents who took the survey before and after the election. The results are summarized in the table below, and they really are striking, especially with regard to unemployment. Inflation expectations also rose for Democrats and fell for Republicans (and the way I interpret the survey data is that most consumers see inflation as a bad thing, so lower inflation expectations mean greater optimism).

Notice, too, that self-declared Independents are more optimistic after the election than before. More of them are expecting lower unemployment and fewer are expecting higher unemployment. Inflation expectations also fell from 3% to 2.3%, and income expectations rose. Of course, this is likely based on a very small sample size.
Source: Richard Curtin, Michigan Survey of Consumers

Monday, May 4, 2015

Firm Balance Sheets and Unemployment in the Great Recession

The balance sheets of households and financial firms have received a lot of emphasis in research on the Great Recession. The balance sheets of non-financial firms, in contrast, have received less attention. At first glance, this is perfectly reasonable; households and financial firms had high and rising leverage in the years leading up to the Great Recession, while non-financial firms' leverage remained constant (Figure 1, below).

New research by Xavier Giroud and Holger M. Mueller argues that the flat trendline for non-financial firms' leverage obscures substantial variation across firms, which proves important to understanding employment in the recession. Some firms saw large increases in leverage prior to the recession and others large declines. Using an establishment-level dataset with more than a quarter million observations, Giroud and Mueller find that "firms that tightened their debt capacity in the run-up ('high-leverage firms') exhibit a significantly larger decline in employment in response to household demand shocks than firms that freed up debt capacity ('low-leverage firms')."
The authors emphasize that "we do not mean to argue that household balance sheets or those of financial intermediaries are unimportant. On the contrary, our results are consistent with the view that falling house prices lead to a drop in consumer demand by households (Mian, Rao, and Sufi (2013)), with important consequences for employment (Mian and Sufi (2014)). But households do not lay off workers. Firms do. Thus, the extent to which demand shocks by households translate into employment losses depends on how firms respond to these shocks."

Firms' responses to household demand shocks depend largely on their balance sheets. Low-leverage firms were able to increase their borrowing during the recession to avoid reducing employment, while high-leverage firms were financially constrained and could not raise external funds to avoid reducing employment and cutting back investment:
"In fact, all of the job losses associated with falling house prices are concentrated among establishments of high-leverage firms. By contrast, there is no significant association between changes in house prices and changes in employment during the Great Recession among establishments of low-leverage firms."

Thursday, December 11, 2014

Mixed Signals and Monetary Policy Discretion

Two recent Economic Letters from the Federal Reserve Bank of San Francisco highlight the difficulty of making monetary policy decisions when alternative measures of labor market slack and the output gap give mixed signals. In Monetary Policy when the Spyglass is Smudged, Early Elias, Helen Irvin, and Òscar Jordà show that conventional policy rules based on the output gap and on the deviation of the unemployment rate from its natural rate generate wide-ranging policy rate prescriptions. Similarly, in Mixed Signals: Labor Markets and Monetary Policy, Canyon Bosler, Mary Daly, and Fernanda Nechio calculate the policy rate prescribed by a Taylor rule under alternative measures of labor market slack. The figure below illustrates the large divergence in alternative prescribed policy rates since the Great Recession.

Source: Bosler, Daly, and Nechio (2014), Figure 2
Uncertainty about the state of the labor market makes monetary policy more challenging and requires more discretion and judgment on the part of policymakers. What do discretion and judgment look like in practice? I think they should involve reasoning qualitatively to determine whether some decisions lead to possible outcomes that are definitively worse than others. For example, here's how I would reason through the decision about whether to raise the policy rate under high uncertainty about the labor market:

Suppose it is May and the Fed is deciding whether to increase the target rate by 25 basis points. Assume inflation is still at or slightly below 2%, and the Fed would like to tighten monetary policy if and only if the "true" state of the labor market x is sufficiently high, say above some threshold X. The Fed does not observe x but has some very noisy signals about it.  They think there is about a fifty-fifty chance that x is above X, so it is not at all obvious whether tightening is appropriate. There are four possible scenarios:

  1. The Fed does not increase the target rate, and it turns out that x>X.
  2. The Fed does not increase the target rate, and it turns out that x<X.
  3. The Fed does increase the target rate, and it turns out that x>X.
  4. The Fed does increase the target rate, and it turns out that x<X.

Cases (2) and (3) are great. In case (2), the Fed did not tighten when tightening was not appropriate, and in case (3), the Fed tightened when tightening was appropriate. Cases (1) and (4) are "mistakes." In case (1), the Fed should have tightened but did not, and in case (4), the Fed should not have tightened but did. Which is worse?

If we think just about immediate or short-run impacts, case (1) might mean inflation goes higher than the Fed wants and x goes even higher above X; case (4) might mean unemployment goes higher than the Fed wants and x falls even further below X. Maybe you have an opinion on which of those short-run outcomes is worse, or maybe not. But the bigger difference between the outcomes comes when you think about the Fed's options at its subsequent meeting. In case (1), the Fed could choose how much they want to raise rates to restrain inflation. In case (4), the Fed could keep rates constant or reverse the previous meeting's rate increase.

In case (4), neither option is good. Keeping the target rate at the higher level is too restrictive. Labor market conditions were bad to begin with, and keeping policy tight will make them worse. But reversing the rate increase is a non-starter. The markets expect that after the first rate increase, rates will continue on an upward trend, as in previous tightening episodes. Reversing the rate increase would cause financial market turmoil, damage credibility, and require policymakers to admit that they were wrong. Case (1) is much more attractive. I think any concern that inflation could take off and get out of control is unwarranted. In the space between two FOMC meetings, even if inflation were to rise above target, inflation expectations are not likely to rise too far. The Fed could easily restrain expectations at the next meeting by raising rates as aggressively as needed.

So going back to the four possible scenarios, (2) and (3) are good, and (4) is much worse than (1). If the Fed raises rates, scenarios (3) and (4) are about equally likely. If the Fed holds rates constant, (1) and (2) are about equally likely. Thus, holding rates constant under high uncertainty about the state of the labor market is a better option than potentially raising rates too soon.
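The argument can be summarized as a small expected-loss comparison. The probabilities and loss numbers below are made up purely for illustration; only the asymmetry between case (1) and case (4) matters.

# Compare expected losses of "hold" vs. "raise" when P(x > X) is about one half.
p_high = 0.5    # probability that tightening is actually appropriate

# Losses by (action, state): cases (2) and (3) are fine; case (1) is a mild,
# easily corrected mistake; case (4) is costly because both follow-up options
# (holding the higher rate or reversing it) are bad.
loss = {
    ("hold",  "x>X"): 1.0,   # case (1)
    ("hold",  "x<X"): 0.0,   # case (2)
    ("raise", "x>X"): 0.0,   # case (3)
    ("raise", "x<X"): 5.0,   # case (4)
}

def expected_loss(action):
    return p_high * loss[(action, "x>X")] + (1 - p_high) * loss[(action, "x<X")]

print(expected_loss("hold"))    # 0.5
print(expected_loss("raise"))   # 2.5 -- holding wins under these assumed losses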

Wednesday, July 23, 2014

Yellen's Storyline Strategy

Storyline, launched this week at the Washington Post, is "dedicated to the power of stories to help us understand complicated, critical things." Storyline will be a sister site to Wonkblog, and will mix storytelling and data journalism. The editor, Jim Tankersley, introduces the new site:
"We’re focused on public policy, but not on Washington process. We care about policy as experienced by people across America. About the problems in people’s lives that demand a shift from government policymakers and about the way policies from Washington are shifting how people live. 
We’ll tell those stories at Web speed and frequency. 
We’ll ground them in data — insights from empirical research and our own deep-dive analysis — to add big-picture context to tightly focused human drama."
I couldn't help but be reminded of Janet Yellen's first public speech as Fed chair on March 31. She took the approach Tankersley is aiming for. She began with the data:
"Since the unemployment rate peaked at 10 percent in October 2009, the economy has added more than 7-1/2 million jobs and the unemployment rate has fallen more than 3 percentage points to 6.7 percent. That progress has been gradual but remarkably steady--February was the 41st consecutive month of payroll growth, one of the longest stretches ever....But while there has been steady progress, there is also no doubt that the economy and the job market are not back to normal health. That will not be news to many of you, or to the 348,000 people in and around Chicago who were counted as looking for work in January...The recovery still feels like a recession to many Americans, and it also looks that way in some economic statistics. At 6.7 percent, the national unemployment rate is still higher than it ever got during the 2001 recession... Research shows employers are less willing to hire the long-term unemployed and often prefer other job candidates with less or even no relevant experience."
Then she added three stories:
"That is what Dorine Poole learned, after she lost her job processing medical insurance claims, just as the recession was getting started. Like many others, she could not find any job, despite clerical skills and experience acquired over 15 years of steady employment. When employers started hiring again, two years of unemployment became a disqualification. Even those needing her skills and experience preferred less qualified workers without a long spell of unemployment. That career, that part of Dorine's life, had ended. 
For Dorine and others, we know that workers displaced by layoffs and plant closures who manage to find work suffer long-lasting and often permanent wage reductions. Jermaine Brownlee was an apprentice plumber and skilled construction worker when the recession hit, and he saw his wages drop sharply as he scrambled for odd jobs and temporary work. He is doing better now, but still working for a lower wage than he earned before the recession. 
Vicki Lira lost her full-time job of 20 years when the printing plant she worked in shut down in 2006. Then she lost a job processing mortgage applications when the housing market crashed. Vicki faced some very difficult years. At times she was homeless. Today she enjoys her part-time job serving food samples to customers at a grocery store but wishes she could get more hours."
The inclusion of these anecdotes was unusual enough for a monetary policy speech that Yellen felt obligated to explain herself, in what could be a perfect advertisement for Storyline.
"I have described the experiences of Dorine, Jermaine, and Vicki because they tell us important things that the unemployment rate alone cannot. First, they are a reminder that there are real people behind the statistics, struggling to get by and eager for the opportunity to build better lives. Second, their experiences show some of the uniquely challenging and lasting effects of the Great Recession. Recognizing and trying to understand these effects helps provide a clearer picture of the progress we have made in the recovery, as well as a view of just how far we still have to go."
Recognition of the power of story is not new to the Federal Reserve. I noted in a January 2013 post that the word "story" appears 82 times in 2007 FOMC transcripts. One member, Frederic Mishkin, said that "We need to tell a story, a good narrative, about [the forecasts]. To be understood, the forecasts need a story behind them. I strongly believe that we need to write up a good story and that a good narrative can help us obtain public support for our policy actions—which is, again, a critical factor."

But Yellen's emphasis on personal stories as a communication device does seem new. I think it is no coincidence that her husband, George Akerlof, is the author (with Rachel Kranton) of Identity Economics, which "introduces identity—a person’s sense of self—into economic analysis." Yellen's stories of Dorine, Jermaine, and Vicki are stories of identity, conveying the idea that a legitimate cost of a poor labor market is an identity cost. 

Thursday, July 17, 2014

Thoughts on the Fed's New Labor Market Conditions Index

I'm usually one to get excited about new data series and economic indicators. I am really excited, for example, about the Fed's new Survey of Consumer Expectations, and have already incorporated it into my own research. However, reading about the Fed's new Labor Market Conditions Index (LMCI), which made its debut in the July 15 Monetary Policy Report, I was slightly underwhelmed, and I'll try to explain why.

David Wessel introduces the index as follows:
Once upon a time, when the Federal Reserve talked about the labor market, it was almost always talking about the unemployment rate or the change in the number of jobs. But the world has grown more complicated, and Fed Chairwoman Janet Yellen has pointed to a host of other labor-market measures. 
But these different indicators often point in different directions, which can make it hard to tell if the labor market is getting better or getting worse. So four Fed staff economists have come to the rescue with a new “labor markets conditions index” that uses a statistical model to summarize monthly changes in 19 labor-market indicators into a single handy gauge.
The Fed economists employ a widely-used statistical model called a dynamic factor model. As they describe:
A factor model is a statistical tool intended to extract a small number of unobserved factors that summarize the comovement among a larger set of correlated time series. In our model, these factors are assumed to summarize overall labor market conditions. What we call the LMCI is the primary source of common variation among 19 labor market indicators. One essential feature of our factor model is that its inference about labor market conditions places greater weight on indicators whose movements are highly correlated with each other. And, when indicators provide disparate signals, the model's assessment of overall labor market conditions reflects primarily those indicators that are in broad agreement.
The 19 labor market indicators that are summarized by the LMCI include measures of unemployment, underemployment, employment, weekly work hours, wages, vacancies, hiring, layoffs, quits, and sentiment in consumer and business surveys. The data is monthly and seasonally adjusted, and the index begins in 1976.
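The LMCI itself comes from a dynamic factor model estimated by Fed staff; a crude static stand-in for the idea is to extract the first principal component of the standardized indicators. The sketch below is my own, not the Fed's code, and `indicators` is a placeholder DataFrame of monthly labor market series; it also makes it easy to check the correlation with the unemployment rate discussed further down.

import numpy as np
import pandas as pd

def first_common_factor(indicators: pd.DataFrame) -> pd.Series:
    """First principal component of standardized labor market indicators."""
    z = (indicators - indicators.mean()) / indicators.std()
    z = z.fillna(0.0)
    u, s, vt = np.linalg.svd(z.values, full_matrices=False)
    factor = z.values @ vt[0]          # projection on the first component
    return pd.Series(factor, index=indicators.index, name="common_factor")

# factor = first_common_factor(indicators)
# factor.corr(indicators["unemployment_rate"])   # compare with the -0.96 reported below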

A minor quibble with the index is its inclusion of wages in the list of indicators. This introduces endogeneity that makes it unsuitable for use in Phillips Curve-type estimations of the relationship between labor market conditions and wages or inflation. In other words, we can't attempt to estimate how wages depend on labor market tightness if our measure of labor market tightness already depends on wages by construction.

The main reason I'm not too excited about the LMCI is that its correlation coefficient with the unemployment rate is -0.96. They are almost perfectly negatively correlated--and when you consider measurement error you can't even reject that they are perfectly negatively correlated-- so the LMCI doesn't tell you anything that the unemployment rate wouldn't already tell you. Given the choice, I'd rather just use the unemployment rate since it is simpler, intuitive, and already widely-used.

In the Monetary Policy Report, it is hard to see the value added by the LMCI. The report shows a graph of the three-month moving average of the change in LMCI since 2002 (below). Values above zero are interpreted as an improving labor market and below zero a deteriorating labor market. Below the graph, I placed a graph of the change in the unemployment rate since 2002. They are qualitatively the same. When unemployment is rising, the index indicates that labor market conditions are deteriorating, and when unemployment is falling, the index indicates that labor market conditions are improving.


The index takes 19 indicators that tell us different things about the labor market and distills the information down to one indicator based on common movements in the indicators. What they have in common happens to be summarized by the unemployment rate. That is perfectly fine. If we need a single summary statistic of the labor market, we can use the unemployment rate or the LMCI.

The thing is that we don't really need or even want a single summary statistic of the labor market to be used for policymaking. The Fed does not practice rule-based monetary policy that requires it to make policy decisions based on a small number of measures. A benefit of discretionary policy is that policymakers can look at what many different indicators are telling them. As Wessel wrote, "the world has grown more complicated, and Fed Chairwoman Janet Yellen has pointed to a host of other labor-market measures." Yellen noted, for example, that the median duration of unemployment and proportion of workers employed part time because they are unable to find full-time work remain above their long-run average. This tells us something different than what the unemployment rate tells us, but that's OK; the FOMC has the discretion to take multiple considerations into account.

The construction of the LMCI is a nice statistical exercise, and the fact that it is so highly correlated with the unemployment rate is an interesting result that would be worth investigating further; maybe this will be discussed in the forthcoming FEDS working paper that will describe the LMCI in more detail. I just want to stress the Fed economists' wise point that "A single model is...no substitute for judicious consideration of the various indicators," and recommend that policymakers and journalists not neglect the valuable information contained in various labor market indicators now that we have a "single handy gauge."

Tuesday, July 8, 2014

The Unemployment Cost of Below-Target Inflation

Recently, inflation in the United States has been consistently below its 2% target. The situation in Sweden is similar, but has lasted much longer. The Swedish Riksbank announced a 2% CPI inflation target in 1993, to apply beginning in 1995. By 1997, the target was credible in the sense that inflation expectations were consistently in line with the target. From 1997 to 2011, however, CPI inflation only averaged 1.4%. In a forthcoming paper in the AEJ: Macroeconomics, Lars Svensson uses the Swedish case to estimate the possible unemployment cost of inflation below a credible target.

Svensson notes that inflation expectations that are statistically and economically higher than inflation for many years do not pass standard tests of rationality. He builds upon the "near-rational" expectations framework of Akerlof, Dickens, and Perry (2000). In Akerlof et al.'s model, when inflation is fairly close to zero, a fraction of people simply neglect inflation and behave as if inflation were zero. This is not too unreasonable--it saves them the computational trouble of thinking about inflation and isn't too costly if inflation is really low. Thus, at low rates of inflation, prices and wages are consistently lower relative to nominal aggregate demand than they would be at zero inflation, permitting sustained higher output and employment. At higher levels of inflation, fewer people neglect it. This gives the Phillips Curve a "hump shape"; unemployment is non-monotonic in inflation but is minimized at some low level of inflation.

In the case of Sweden, the near-rational model is modified because people are not behaving as if inflation were zero, but rather as if it were 2%, when in fact it is lower than 2%. Instead of permitting higher output and employment, the reverse happens. The figure below shows Svensson's interpretation of the Swedish economy's location on its hypothetical modified long-run Phillips curve. The encircled section is where Sweden has been for over a decade, with inflation mostly below 2% and unemployment high. If inflation were to increase above 2%, unemployment would actually decline, because people would still behave as if it were 2%. But there is a limit. If inflation gets too high (beyond about 4% in the figure), people no longer behave as if inflation were 2%, and the inflation-unemployment tradeoff changes sign.
Source: Svensson 2014

As Svensson explains,
"Suppose that nominal wages are set in negotiations a year in advance to achieve a particular target real wage next year at the price level expected for that year. If the inflation expectations equal the inflation target, the price level expected for next year is the current price level increased by the inflation target. This together with the target real wage then determines the level of nominal wages set for next year. If actual inflation over the coming year then falls short of the inflation target, the price level next year will be lower than anticipated, and the real wage will be higher than the target real wage. This will lead to lower employment and higher unemployment."
Svensson presents narrative evidence that central wage negotiations in Sweden are indeed influenced by the 2% inflation target rather than by actual inflation. The wage-settlement policy platform of the Industrial Trade Unions states that "[The Riksbank’s inflation target] is an important starting point for the labor-market parties when they negotiate about new wages... In negotiations about new wage settlements, the parties should act as if the Riksbank will attain its inflation target."

The figure below shows the empirical inflation-unemployment relationship in Sweden from 1976 to 2012. The long-run Phillips curve was approximately vertical in the 1970s and 80s. The observations on the far right are from the economic crisis of the early 1990s. The points in red are the inflation-targeting regime. The downward-sloping black line is the estimated long-run Phillips curve for this regime with average inflation below the credible target. You can see two black dots on the line, at 2% inflation and at 1.4% inflation (the average over the period). The distance between the dots along the unemployment axis is the "excess unemployment" that has resulted from maintaining inflation below target. The unemployment rate would be about 0.8 percentage points lower if inflation averaged 2% (and presumably lower still if inflation averaged slightly above 2%).

Source: Svensson 2014
Can this analysis be applied to the United States? Even though the U.S. has only had an official 2% target since January 2012, Fuhrer (2011) notes that inflation expectations have been stabilized around 2% since 2000, since the Fed was presumed to have an implicit 2% target. The figure below plots unemployment and core CPI inflation in the U.S. from 1970 to 2012, with 2000 and later in red. As in Sweden, the long-run Phillips curve is downward-sloping in the (implicit) inflation-targeting period. Since 2000, however, average U.S. inflation has been about 2%, so overall there was no unemployment cost of sustained below-target inflation. The downward slope, though, means that if we get into a situation like Sweden's where we consistently undershoot 2%, which could result if the target is treated more like a ceiling than a symmetric target, there would be excess unemployment costs.

Source: Svensson 2014
Svensson concludes with policy implications:
"I believe the main policy conclusion to be that if one wants to avoid the average unemployment cost, it is important to keep average inflation over a longer period in line with the target, a kind of average inflation targeting (Nessén and Vestin 2005). This could also be seen as an additional argument in favor of price-level targeting...On the other hand, in Australia, Canada, and the U.K., and more recently in the euro area and the U.S., the central banks have managed to keep average inflation on or close to the target (the implicit target when it is not explicit) without an explicit price-level targeting framework.  
Should the central bank try to exploit the downward-sloping long-run Phillips curve and secretly, by being more expansionary, try to keep average inflation somewhat above the target, so as to induce lower average unemployment than for average inflation on target?...This would be inconsistent with an open and transparent monetary policy."

Monday, June 2, 2014

Long-Term Unemployment and the Dual Mandate

The Federal Reserve's mandate from Congress, as described in the Federal Reserve Act, is to promote "maximum employment, stable prices, and moderate long-term interest rates." In January 2012, the FOMC clarified that a PCE inflation rate of 2% was most consistent with the price stability part of the so-called "dual mandate." The FOMC's Statement on Longer-Run Goals and Monetary Policy Strategy says:
In setting monetary policy, the Committee seeks to mitigate deviations of inflation from its longer-run goal and deviations of employment from the Committee's assessments of its maximum level. These objectives are generally complementary. However, under circumstances in which the Committee judges that the objectives are not complementary, it follows a balanced approach in promoting them, taking into account the magnitude of the deviations and the potentially different time horizons over which employment and inflation are projected to return to levels judged consistent with its mandate.
Lately, PCE inflation has been well below 2%. There is no similarly-specific definition of maximum employment, but I think it is safe to say that we are not there yet. So we seem to be in a situation in which the objectives are complementary--employment-boosting policies that also put some upward pressure on prices would get us closer to maximum employment and stable prices at the same time. This is supposed to be the "easy case" for monetary policymakers, earning the name "divine coincidence." The tougher case would come if inflation were to get up to or above 2% and employment were still too low. Then there would be a tradeoff between the employment and price stability objectives, making the Committee's balancing act more difficult.

San Francisco Fed President John Williams presents the case that the rise in the share of long-term unemployment should affect the approach that Committee members take when faced with such a balancing act. Williams and SF Fed economist Glenn Rudebusch have a new working paper called "A Wedge in the Dual Mandate: Monetary Policy and Long-Term Unemployment." In this paper, they document key empirical facts about the share of long-term unemployment and explain how it alters the relationship between employment and inflation.

Source: Rudebusch and Williams 2014, p. 4.

First, the key empirical facts about long-term unemployment share in the U.S.:
  1. It has trended upward over the past few decades.
  2. It is countercyclical.
  3. Its countercyclicality has increased in recent decades.
Fact 1 has been attributed to the aging population, women's rising labor force attachment, and increasing share of job losses that are permanent separations rather than temporary layoffs (Groshen and Potter 2003, Aaronson et al. 2010, Valletta 2011).

The long-term unemployed appear to place less downward pressure on wages and prices than the short-term unemployed. This may be because the long-term unemployed are less tied to the labor market and search less intensely for a job as they grow discouraged (e.g., Krueger and Mueller 2011). Stock (2011) and Gordon (2013) find that distinguishing between long- and short-term unemployment can help account for the puzzling lack of disinflation following the Great Recession. Rudebusch and Williams find a similar result by running Phillips Curve regressions that include short-term and long-term unemployment gaps as regressors. Only the coefficient on the short-term unemployment gap is negative and statistically significant, implying that short-run unemployment exerts downward pressure on prices while long-run unemployment does not.
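For readers who want to see the shape of such a regression, here is a sketch (not Rudebusch and Williams' exact specification; `df` and its column names are placeholders to be built from actual data):

import pandas as pd
import statsmodels.api as sm

def phillips_with_duration_split(df: pd.DataFrame):
    """Regress inflation on lagged inflation and separate short- and long-term
    unemployment gaps; df needs columns 'inflation', 'inflation_lag',
    'short_gap', 'long_gap'."""
    y = df["inflation"]
    X = sm.add_constant(df[["inflation_lag", "short_gap", "long_gap"]])
    return sm.OLS(y, X, missing="drop").fit()

# res = phillips_with_duration_split(df)
# print(res.summary())   # the finding described above: only 'short_gap' enters
#                        # with a significant negative coefficient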

The finding that long-term and short-term unemployment have different implications for price dynamics is very relevant for the Fed's pursuit of its dual mandate.  When the long-term unemployment share is high, this introduces a "wedge" between the employment and price stability portions of the mandate. Employment and inflation won't be as prone to moving in the same direction, because employment can get very low without much downward pressure on wages (and prices). To formalize what this means for monetary policy decisions, Rudebusch and Williams build a stylized model economy (skip ahead if not interested in the details) in which the central bank's objective is to minimize a quadratic loss function:
\begin{equation}
L=\tilde \pi^2 + \lambda \tilde u^2,
\end{equation}
where pi tilde and u tilde are deviations of inflation from its target and unemployment from the natural rate, and lambda is the weight that policymakers place on unemployment stabilization versus inflation stabilization. Let the unemployment deviation depend on a demand shock v and the deviation of the short-term interest rate from the natural rate (IS equation):
\begin{equation}
\tilde u = \eta \tilde i + v
\end{equation}
 Let deviations in inflation depend on an inflation shock e and the deviation of the short-run unemployment rate s from its natural rate:
\begin{equation}
\tilde \pi = -\kappa \tilde s + e
\end{equation}
The short-run unemployment rate is a fraction of the overall unemployment rate u,
\begin{equation}
s=\theta u,
\end{equation}
where
\begin{equation}
\theta=\bar \theta -\delta \tilde u + z,
\end{equation}
where z is a shock to the short-run unemployment share. If we substitute these equations into the loss function and take first order conditions, we arrive at an expression for the optimal deviation of inflation from its target:
\begin{equation}
\tilde \pi*=\frac{\lambda}{\lambda+\kappa^2 \theta^2}e-\frac{\lambda \kappa \bar u}{\lambda+\kappa^2 \theta^2}z
\end{equation}

The first thing to notice here is that the demand shock v does not show up in the optimal policy decision. This is the divine coincidence-- demand shocks impose no tradeoff between the employment and inflation objectives.

The second thing to notice is how the inflation shock e and the shock z to the short-run unemployment share enter the optimal policy. The first term is standard, and shows how the optimal policy partially offsets a positive (negative) inflation shock by raising (lowering) unemployment. The other term is new to this paper. A positive shock to theta acts like a negative inflation shock in the sense of prescribing a higher short-run unemployment rate and a lower inflation rate. Another policy implication arises because theta affects the slope of the Phillips curve with respect to aggregate unemployment. This creates an asymmetry in the optimal policy response to inflation shocks that depends on theta.
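As a sanity check on the first term, here is a small symbolic derivation (my own simplification: hold theta fixed at a constant and ignore the z channel, so only the standard inflation-shock term survives):

import sympy as sp

lam, kappa, theta, e = sp.symbols("lambda kappa theta e", positive=True)
u = sp.symbols("u")                          # unemployment deviation u~

pi = -kappa * theta * u + e                  # Phillips curve with s~ = theta*u~
loss = pi**2 + lam * u**2                    # quadratic loss
u_opt = sp.solve(sp.diff(loss, u), u)[0]     # first-order condition in u~
pi_opt = sp.simplify(pi.subs(u, u_opt))

print(pi_opt)   # lambda*e/(kappa**2*theta**2 + lambda), matching the first term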

During the Great Recession, the short-run unemployment share theta reached historic lows. Rudebusch and Williams simulate their model starting in the first quarter of 2014, using actual data from 2013 as initial conditions. They compare the optimal policy response implied by their model to that implied by a standard model (i.e. one that does not distinguish between short-run and long-run unemployment). The figure below displays the results. The quantitative results are not meant to be taken too seriously, since the models involve drastic oversimplifications, but the qualitative differences are illustrative. The model that distinguishes between short-run and long-run unemployment (shown as black lines) prescribes a higher inflation rate--temporarily above 2%--than the standard model (shown as dashed red lines).

Source: Rudebusch and Williams 2014, p. 22
The authors conclude (emphasis added):
During the recent recession and recovery, the number of discouraged jobless excluded from the unemployment rate and the number of part-time employees wanting full-time work have reached historic highs. If the true measure of labor underutilization included these individuals, even though they have little or no effect on wage and price setting, then the wedge in the Fed's dual mandate would be even wider. Based on the analysis in this paper, the implications are clear: Optimal policy should trade off a transitory period of excessive inflation (beyond what is calculated using this paper's model) in order to bring the broader measure of underemployment to normal levels more quickly.
This is a very interesting and intuitive result, and is highly relevant to the policymaking environment in both the United States and elsewhere, particularly Europe. However, I'm not sure that the FOMC as a whole will take this paper's implications seriously. The committee does not share a single "loss function" like the quadratic function presented in the paper, which treats overshooting and undershooting of the inflation target symmetrically. Mark Thoma's comment on the paper is, "I'll believe the Fed will allow *intentional overshooting* of its inflation target when I see it."

Tuesday, March 25, 2014

Guest Post: The Second Machine Age Book Review Part I

This guest post was contributed by Richard Serlin, who teaches personal finance at the University of Arizona and is president and co-founder of National Personal Finance Education. Serlin blogs at richardhserlin.blogspot.com. Serlin will be reviewing The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies by Erik Brynjolfsson and Andrew McAfee in a series of guest posts.

I will right away get to the crux of the book, for me, and perhaps most people, and then do a beginning-to-end review over multiple posts that will refer back to this crux again and again.

The crux is, Will the explosion in computer/robot/machine ability result in mass unemployment this time, even though previous technological revolutions haven't? If yes, why? If no, why?

With my primary career in personal finance, this is an issue I've worked hard to understand, because there is a very real possibility of massive unemployment, and/or non-livable market wages. And I really do mean non-livable, as in not even enough money to buy enough food to stay alive. There are potentially a huge number of jobs today that computers/robots will be able to do over the next generation at a comparative cost of pennies per hour, or less.

So, I will attempt to answer this big question, using much of what Brynjolfsson and McAfee say in The Second Machine Age, as well as in their first book on these issues, Race Against the Machine, which I also carefully read cover-to-cover. The authors don't directly say what follows; this is my interpretation, or my interpretation of what they are saying combined with some of my own thinking.

We start with the classical L-shaped (fixed-proportions, or Leontief) production function, which I think is especially instructive. Anyone who's taken beginning, or perhaps intermediate, microeconomics has seen this:



The graph has two inputs of production, labeled Z1 and Z2. I'm going to consider Z1 to be units of labor, specifically unskilled (or low skilled) workers, and Z2 to be a package of other – complementary – inputs. This package includes machines, buildings, and other physical capital, but also skilled labor.

So, for example, one unit of Z1 might be 1,000 unskilled workers. One unit of Z2 might be 800,000 square feet of facility, 400 machines, 500 robots,…, and 300 skilled workers: engineers, advanced technicians, MBA's, CPA's, etc.

Now, let's assume that there are two choices of production processes in the world to make use of raw materials. There's the one above – the ultra-high output one. And, there's using only unskilled labor (or unskilled labor with relatively low-tech tools). This can make the end products too, but at 1/1,000th the output per hour worked. As a result, if workers were forced to resort to this kind of work, they would have a subsistence wage (or might not even be able to subsist). The technology is just too primitive, too ancient.

Of course, even in prehistoric times, with the most primitive technology, people were usually able to produce enough to eat, usually at least to live into their 20's. But, they had more than just the value of their labor. They had access to free raw materials, basically by just taking them from whatever land they came across, or could fight to get. Today's unskilled would have little in the way of free, or owned, raw materials, or wealth of any kind. Already just 85 people have more wealth than the poorest 3.5 billion. To get raw materials they'd have to sell some of their labor endowment, and if that was worth too little, they would not be able to get enough raw materials to subsist.

The key feature of L-shaped isoquants is that without adding more Z2, you're not going to get any more production no matter how many units of Z1 you add. So, here, you're just not going to employ any more unskilled workers (Z1) unless you can get more building space, machines, computers, robots, and – crucial to the argument I'm going to make – skilled workers: engineers, advanced technicians, college-degreed business people (and not just a paper degree, but one with the skills, knowledge, and analytical abilities to go with it), etc.
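A minimal sketch of fixed-proportions production makes the point; the per-unit requirements below are illustrative numbers, not estimates from the book.

def output(z1, z2, z1_per_unit=1000, z2_per_unit=1):
    """L-shaped isoquants: the scarcer input binds."""
    return min(z1 / z1_per_unit, z2 / z2_per_unit)

print(output(z1=1000, z2=1))   # 1 unit of output
print(output(z1=5000, z2=1))   # still 1: extra unskilled labor alone adds nothing
print(output(z1=5000, z2=5))   # 5: output rises only with more of the Z2 package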

The unskilled workers are just useless productivity-wise without complementary units of Z2. Otherwise, all they can do is the primitive production method, which produces so little that they starve. People will pay very little in the needed raw materials to the unskilled for the primitive production method, because for just a relatively tiny expenditure on high-tech production they can produce the same as with all of the billions of unskilled laborers in the world working with primitive tools. Not only that, there are so many products today that the wealthy and middle class want that are simply impossible for the unskilled alone to produce at all, in any quantity, without the skilled and the high-tech production method.

The unskilled only become valuable if the units of Z2 get so large that complementary units of Z1, unskilled laborers, start to get relatively scarce.

Now, what's happened historically? Metal stamping machines started replacing blacksmiths, but then we just started producing more and more metal stamping machines, and other machines, until all of the initially unemployed were countered with an equal number of jobs running, maintaining, and working with the metal stamping and other machines. In other words, we just kept building more and more Z2 until we pretty much soaked up all of the unemployed Z1.

The initial building of the Z2 made some lose their jobs, but every unit of Z2 that you built required some units of Z1 to complement it. When there were tons of unemployed Z1, that Z1 was cheap, and it made sense to just keep building tons of Z2 to take advantage of that cheap Z1, until the price (wage) of Z1 (unskilled, or low-skilled, workers) got extremely high by historical standards. And this was also because the combination of Z1 and Z2 produced so vastly much more than before, when only the primitive production function was available.

In other words, maybe blacksmiths and such forever lost those jobs, but we kept producing so many metal stamping machines, and other machines, and assembly lines, and blast furnaces,…, that eventually we replaced all of those jobs with jobs that were necessary for the new high-tech production method, assisting, maintaining, and complementing the new machines. And we produced so many of these new machines that the unskilled became relatively scarce enough, compared to the new productive capacity, to drive their wages far higher than ever in history.

So, you could just say: that's the solution today! The computers and robots won't be able to do entirely without humans for a very long time, if ever. Just keep building more and more computers, and more and more robots, and you'll need more and more people to attend to, work with, and complement those computers and robots, until every unemployed human is employed again, all of them maintaining, assisting, and otherwise working with the robots and computers that took so many jobs originally!

The biggest current problem with this is that there's a bottleneck. And it's a very serious one – skilled workers. It's relatively easy to keep building more and more robots, computers, facilities, and high-tech machines (at least if you have the skilled workers). But producing enough trained engineers, business managers, skilled technicians, and so on to complement, and keep employed, all of the billions of unskilled workers globally is a task we're not nearly up to. That's the bottleneck, or at least the biggest and hardest one.

Without far more skilled workers – many highly skilled – there will not be nearly enough need for the masses of unskilled workers. Without more skilled workers, there just won't be anything for the unskilled to do in the high-tech production method we've discussed. All you'll be able to do with them is the primitive production method that employs only unskilled workers. And that production method has such relatively low output that it will command too little in raw materials to provide non-poverty, or perhaps even subsistence, wages for most of those workers.
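
One way to see why the bottleneck binds is to split the L2 package into its capital and skilled-labor pieces. This is my own stylized decomposition, with invented coefficients and quantities: no matter how much unskilled labor or physical capital you pile on, output, and hence the demand for unskilled workers, is capped by the skilled-labor term.

```python
def output_with_bottleneck(unskilled, capital, skilled,
                           a_u=1.0, a_k=2.0, a_s=10.0):
    """Stylized Leontief production with the L2 package split into physical
    capital and skilled labor. All coefficients and quantities are invented."""
    return min(unskilled / a_u, capital / a_k, skilled / a_s)

# With skilled labor scarce, adding unskilled workers or machines changes nothing:
print(output_with_bottleneck(unskilled=1_000_000, capital=1_000_000, skilled=100))   # 10.0
print(output_with_bottleneck(unskilled=2_000_000, capital=5_000_000, skilled=100))   # still 10.0

# Relax the skilled-labor bottleneck and the same unskilled workforce becomes usable:
print(output_with_bottleneck(unskilled=1_000_000, capital=5_000_000, skilled=20_000_000))
# 1000000.0: unskilled labor is now the binding factor
```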

So it looks to me like the solution depends most on attacking this bottleneck: skilled labor, with the right skills to complete an L2 package. Do this, and you keep employing more and more of the remaining unskilled workers, until the pool of unutilized unskilled labor shrinks enough to push their wages to a middle-class, or at least non-destitute, level.


Of course, this is easier said than done. It would require a massive investment in education and training – starting prenatally; see the work of Nobel Prize-winning economist James Heckman – but that effort, in and of itself, would create an enormous number of jobs. And it would have a hugely high social return and a positive social NPV. If your goal is to maximize total societal utils, or if that is at least an important goal for you, then this is enormously efficient.
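
For readers who want the positive-social-NPV claim in concrete form, here's a minimal net-present-value sketch. The cost, benefit, and discount-rate numbers are entirely hypothetical; they only show the shape of the calculation, not an estimate of the actual return to early education.

```python
def npv(cashflows, discount_rate):
    """Net present value of a list of annual cash flows, period 0 first."""
    return sum(cf / (1 + discount_rate) ** t for t, cf in enumerate(cashflows))

# Hypothetical program: spend 20 (in arbitrary units) per year for 5 years on
# early education, then receive a social benefit of 8 per year for 40 years
# (higher earnings, lower crime and health costs, etc.).
cashflows = [-20] * 5 + [8] * 40
print(round(npv(cashflows, discount_rate=0.03), 1))  # about 65: positive under these assumptions
```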

The authors of The Second Machine Age recognize the crucial importance of high public investment in education to prevent massive technological unemployment. And they note that they are far from the first prominent economists to do so. On pages 208-10 they write:

The United States was the clear leader in primary education in the first half of the twentieth century, having realized that inequality was a "race between education and technology", to use a phrase coined by Jan Tinbergen (winner of the first Nobel Prize in Economic Sciences) and used by the economists Claudia Goldin and Lawrence Katz as the title of their influential 2010 book…Over the past half century that strong U.S. advantage in primary education has vanished…It's been said that America's greatest idea was mass education. It's still a great idea that applies at all levels, not just K-12 and university education, but also preschool, vocational, and lifelong learning.

I'll add, too, that it's not just a bottleneck of insufficient skilled workers to utilize all of the unskilled in the high-output production method. On top of this (making education even more important), we're also looking at a potentially profound shrinking in the proportion of unskilled workers needed in high-output production.

In other words, those L-shaped isoquants may shift 50%, or much more, to the left within a decade or two. Brynjolfsson and McAfee present striking evidence that long-vexing stumbling blocks in human-like robotics and computing are finally being overcome, with dramatic recent progress after decades of frustratingly slow advance. And this is what you typically see with exponential growth of the kind we have with Moore's Law: at first the progress, the slope, is not so steep, but then it takes off skyward, as each new doubling doubles an already enormous number.
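
That last point is just the arithmetic of exponential growth. A quick sketch, assuming a stylized series of doublings (my simplification of a Moore's-Law-style pattern):

```python
# Stylized doublings: capability starts at 1 and doubles each period.
capability = 1
for period in range(1, 21):
    gain = capability          # each doubling adds everything accumulated so far
    capability *= 2
    print(period, capability, gain)
# Early doublings add 1, 2, 4, ...; by period 20 a single doubling adds 524,288.
```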

I'll review this in detail in a later post, but for now I'll quote the authors on pages 31-2:

After revisiting Rethink and seeing Baxter in action we understood why Texas Instruments Vice President Remi El-Ouazzane said in early 2012, "We have a firm belief that the robotics market is on the cusp of exploding." There's a lot of evidence to support his view. The volume and variety of robots in use at companies is expanding rapidly, and innovators and entrepreneurs have made deep inroads against Moravec's paradox.

It may not be true for much longer that computers can beat Garry Kasparov at chess yet can't flip a burger or oil machines spread across the factory floor. There's recently been breakthrough progress: long-intractable walls have fallen in machine pattern recognition, sensation, and dexterity, and it's showing up in a lot more than just the Google Car.

Now, most of my economic analysis so far comes from The Second Machine Age, or I think is implied by it. In particular, I've bolded the terms complementary and bottleneck. Brynjolfsson and McAfee's use of these terms was important in leading me to my analysis. And I think it's possible that given how they use these terms they were also thinking in terms of L-shaped production functions, or something similar.

For example, on pages 181-82, they write:

The better machines can substitute for human workers, the more likely it is they'll drive down the wages of humans with similar skills…But in principle, machines can have very different strengths and weaknesses than humans. When engineers work to amplify these differences, building on the areas where machines are strong and humans are weak, then the machines are more likely to complement humans, rather than substitute for them. Effective production is more likely to require both human and machine inputs, and the value of the human inputs will grow, not shrink, as the power of machines increases. A second lesson of economics and business strategy is that it's great to be a complement to something that's increasingly plentiful.

And on page 213:

We have little doubt that improving education will boost the bounty by providing more of the complementary skills our economy needs to make effective use of new technologies.

For the term bottleneck, on page 200, they write:

The college premium exists in part because so many types of raw data are getting dramatically cheaper, and as data get cheaper, the bottleneck increasingly is the ability to interpret and use the data.

Another important term the authors use is inelasticity of demand, but that will have to wait until my part II post! There, I will begin a detailed chapter-by-chapter review.

Friday, June 7, 2013

Depressing Slow Recovery Graphs

Earlier this year, Fed Vice Chair Janet Yellen described the economic recovery as "painfully slow," and said that an "important tailwind in most economic recoveries is one that tends to be taken for granted--the faith most of us have, based on history and personal experience, that recessions are temporary and that the economy will soon get back to normal." This tailwind, she implied, was particularly weak. Here I've made two graphs that give an indication of the painfully slow recovery.

The Michigan Survey of Consumers asks respondents, "Compared with 5 years ago, do you think the chances that you (and your husband/wife) will have a comfortable retirement have gone up, gone down, or remained about the same?"

Before 2008, on average 45% of people would say that their chances of a comfortable retirement had stayed the same. About 28% would say their chances got worse, and 26% would say their chances got better. Figure 1, below, shows the percent of respondents who chose better or worse each month. By October 2008, only 11% of respondents thought their chances of a comfortable retirement were better than 5 years ago; 45% thought they were worse. 

As of October 2012, the numbers are barely improved: 15% of people think their chances of a comfortable retirement are better than they were in 2007, and 41% think they are worse.

Figure 2 shows the percent of respondents in the highest and lowest income terciles who think their chances of a comfortable retirement are worse than 5 years ago. For the top income tercile, hit harder by falling asset prices, this number peaked at 62% in February 2009, and averaged 41% over 2012. For the bottom income tercile, hit harder by the deteriorating labor market, this number peaked later, at 56% in May 2011,  and averaged 45% over 2012.


Figure 1: Constructed with data from Michigan Survey of Consumers

Figure 2: Constructed with data from Michigan Survey of Consumers
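
For anyone who wants to build figures like these, here is a minimal sketch of the kind of calculation involved. It assumes a hypothetical file michigan_retirement.csv with one row per respondent per month and columns date, response, and income_tercile; the file name, column names, and tercile labels are my assumptions, since the Surveys of Consumers data come in their own format.

```python
import pandas as pd
import matplotlib.pyplot as plt

# Hypothetical input: one row per respondent per month, with columns
# 'date', 'response' (one of 'better', 'same', 'worse'), and
# 'income_tercile' (assumed to be labeled 'bottom', 'middle', 'top').
df = pd.read_csv("michigan_retirement.csv", parse_dates=["date"])

# Figure 1: share of respondents answering 'better' or 'worse' each month.
shares = (df.groupby("date")["response"]
            .value_counts(normalize=True)
            .unstack(fill_value=0))
shares[["better", "worse"]].plot(
    title="Chances of a comfortable retirement vs. 5 years ago")

# Figure 2: share answering 'worse', by top and bottom income tercile.
worse = (df.assign(is_worse=df["response"].eq("worse"))
           .groupby(["date", "income_tercile"])["is_worse"]
           .mean()
           .unstack())
worse[["bottom", "top"]].plot(title="Percent saying 'worse', by income tercile")
plt.show()
```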