
Wednesday, December 30, 2015

Did Main Street Expect the Rate Hike?

Over a year ago, I looked at data from the Michigan Survey of Consumers to see whether most households were expecting interest rates to rise. I saw that, as of May 2014, about 63% of consumers expected interest rates to rise within the year (i.e. by May 2015). This was considerably higher than the approximately 40% of consumers who expected rates to rise within the year in 2012.

Of course, the Federal Reserve did not end up raising rates until December 2015. Did a greater fraction of consumers anticipate a rise in rates leading up to the hike? Based on the updated Michigan Survey data, it appears not. As Figure 1 below shows, the share of consumers expecting higher rates actually dropped slightly, to just above half, in late 2014 and early 2015. By the most recent available survey date, November 2015, 61% expected rates to rise within the year.

Figure 1: Data from Michigan Survey of Consumers. Analysis by Binder.
Figure 2 zooms in on just the last three years. You can see that there does not appear to be any real resolution of uncertainty leading up to the rate hike. In every month since late 2013, between half and two thirds of consumers have expected rates to rise within the year.

Figure 2: Data from Michigan Survey of Consumers. Analysis by Binder.
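(For anyone who wants to reproduce this kind of tabulation from the Michigan Survey microdata, the calculation is just a monthly share of respondents. Below is a minimal Python sketch; the file name, column names, and response codes are placeholders I made up, not the actual Survey of Consumers codebook.)

```python
import pandas as pd

# Hypothetical microdata: one row per respondent per monthly survey.
# The file name, column names, and response codes are placeholders,
# not the actual Surveys of Consumers codebook.
df = pd.read_csv("michigan_consumers.csv", parse_dates=["survey_date"])

# Share of respondents each month who expect interest rates to rise within the year.
df["expects_rise"] = df["rate_expectation"].eq("up")
monthly_share = df.groupby(df["survey_date"].dt.to_period("M"))["expects_rise"].mean()

print(monthly_share.tail(12))  # most recent twelve months
```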


Wednesday, November 18, 2015

Fed's New Community Advisory Council to Meet on Friday

The Federal Reserve Board’s newly-established Community Advisory Council (CAC) will meet for the first time on Friday, November 20. The solicitation for statements of interest for membership on the CAC, released earlier this year, describes the council as follows:
“The Board created the Community Advisory Council (CAC) as an advisory committee to the Board on issues affecting consumers and communities. The CAC will comprise a diverse group of experts and representatives of consumer and community development organizations and interests, including from such fields as affordable housing, community and economic development, small business, and asset and wealth building. CAC members will meet semiannually with the members of the Board in Washington, DC to provide a range of perspectives on the economic circumstances and financial services needs of consumers and communities, with a particular focus on the concerns of low- and moderate-income consumers and communities. The CAC will complement two of the Board's other advisory councils--the Community Depository Institutions Advisory Council (CDIAC) and the Federal Advisory Council (FAC)--whose members represent depository institutions. The CAC will serve as a mechanism to gather feedback and perspectives on a wide range of policy matters and emerging issues of interest to the Board of Governors and aligns with the Federal Reserve's mission and current responsibilities. These responsibilities include, but are not limited to, banking supervision and regulatory compliance (including the enforcement of consumer protection laws), systemic risk oversight and monetary policy decision-making, and, in conjunction with the Office of the Comptroller of the Currency (OCC) and Federal Deposit Insurance Corporation (FDIC), responsibility for implementation of the Community Reinvestment Act (CRA).”
The fifteen council members will serve staggered three-year terms and meet semi-annually. Members include the president of the Greater Kansas City AFL-CIO, the executive director of the Association for Neighborhood and Housing Development, and law professor Catherine Lee Wilson, who teaches courses including bankruptcy and economic justice at the University of Nebraska-Lincoln.

The Board website notes that a summary will be posted following the meeting. I do wonder why only a summary, and not a transcript or video, will be released. While the Board is not obligated to act on the CAC's advice, in the interest of transparency, I would like full documentation of the concerns and suggestions brought forth by the CAC. That way we can at least observe what the Board decides to address or not.

Friday, October 30, 2015

Did the Natural Rate Fall***?

Paul Krugman describes the natural rate of interest as "a standard economic concept dating back a century; it’s the rate of interest at which the economy is neither depressed and deflating nor overheated and inflating. And it’s therefore the rate monetary policy is supposed to achieve."

The reason he brings it up-- aside from obvious interest in what the Fed should do about interest rates-- is that a recent paper by Thomas Laubach of the Federal Reserve and San Francisco Fed President John Williams has just provided updated estimates of the natural rate for the U.S. Laubach and Williams estimate that the natural rate has fallen to around 0% in the past few years.

The authors' estimates come from a methodology they developed in 2001 (published 2003). The earlier paper noted the imprecision of estimates of the natural rate. The solid line in the figure below presents their estimates of the natural real interest rate, while the dashed line is the real federal funds rate. The green shaded region is the 70% confidence interval around the estimates of the natural rate. (Technical aside: Since the estimation procedure uses the Kalman filter, they compute these confidence intervals using Monte Carlo methods from Hamilton (1986) that account for both filter and parameter uncertainty.) The more commonly reported 90% or 95% confidence interval would of course be even wider, and would certainly include both 0% and 6% in 2000.
Source: Laubach and Williams 2001
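For readers curious about the mechanics, here is a minimal sketch of the Monte Carlo idea behind confidence bands of this kind, applied to a deliberately simple local-level model rather than the Laubach-Williams model: re-run the Kalman filter under random draws of the variance parameters, add draws of the state from each filtered distribution, and take percentiles across simulations. This is only an illustration of the concept, not their code or their data.

```python
import numpy as np

rng = np.random.default_rng(0)

def kalman_filter_local_level(y, sigma_eps2, sigma_eta2):
    """Kalman filter for y_t = mu_t + eps_t, mu_t = mu_{t-1} + eta_t.
    Returns filtered means and variances of the unobserved level mu_t."""
    n = len(y)
    mu, P = y[0], sigma_eps2          # crude initialization for illustration
    means, variances = np.empty(n), np.empty(n)
    for t in range(n):
        P_pred = P + sigma_eta2                  # prediction step
        K = P_pred / (P_pred + sigma_eps2)       # Kalman gain
        mu = mu + K * (y[t] - mu)                # update step
        P = (1 - K) * P_pred
        means[t], variances[t] = mu, P
    return means, variances

# Simulated data standing in for the real interest rate series.
T = 200
true_level = np.cumsum(rng.normal(scale=0.05, size=T))
y = true_level + rng.normal(scale=0.5, size=T)

# Monte Carlo over both parameter uncertainty (random draws of the variances)
# and filter uncertainty (draws of the state given the filtered moments).
draws = []
for _ in range(2000):
    sigma_eps2 = 0.25 * rng.lognormal(sigma=0.2)    # assumed parameter draws
    sigma_eta2 = 0.0025 * rng.lognormal(sigma=0.2)
    m, v = kalman_filter_local_level(y, sigma_eps2, sigma_eta2)
    draws.append(rng.normal(m, np.sqrt(v)))         # filter uncertainty

draws = np.array(draws)
lower, upper = np.percentile(draws, [15, 85], axis=0)   # 70% confidence band
print(lower[-1], upper[-1])
```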
The newer paper does not appear to provide confidence intervals or standard errors for the estimates of the natural rate. As the figure below shows, the decline in the point estimate is pretty steep, and this decline is robust to alternative assumptions made in the computation, but robustness and precision are not equivalent.
Source: Laubach and Williams 2015

Note the difference in y-axes on the two preceding figures. If you were to draw those green confidence bands from the older paper on the updated figure from the newer paper, they would basically cover the whole figure. In a "statistical significance" sense (three stars***!), we might not be able to say that the natural rate has fallen. (I can't be sure without knowing the standard errors of the updated estimates, but that's my guess given the width of the 70% confidence intervals on the earlier estimates, and my hunch that the confidence intervals for the newer estimates are even wider, because lots of confidence intervals got wider around 2008.)

I point this out not to say that these findings are insignificant. Quite the opposite, in fact. The economic significance of a decline in the natural rate is so large, in terms of policy implications and what it says about the underlying growth potential of the economy, that this result merits a lot of attention even if it lacks p<0.05 statistical significance. If anything, the profession more often errs in the other direction, overemphasizing statistical significance at the expense of economic significance.


Tuesday, October 13, 2015

Desire to Serve, Ability to Perform, and Courage to Act

Ben Bernanke’s new book, “The Courage to Act: A Memoir of a Crisis and its Aftermath,” was released on October 5. When the title of the book was revealed in April, it apparently hit a few nerves. MarketWatch reported that “Not everyone has been enamored with either Bernanke or his book-titling skills,” listing representative negative reactions to the title from Twitter.

On October 7, Stephen Colbert began an interview of Bernanke by asking about his choice of title for the book, to which Bernanke responded, “I totally blame my wife, it was entirely her idea.”

I hope to comment more substantively on the book after I get a chance to read it, but for now, I just wanted to point out a fun fact about the title. The phrase “courage to act” is the third of three parts of the U.S. Air Force Fire Protection motto: “the desire to serve, the ability to perform, and the courage to act.”

Bernanke has made an explicit analogy between monetary policymakers in the crisis and fire fighters before. In a speech at Princeton in April 2014, he said, “In the middle of a big fire, you don’t start worrying about the fire laws. You try to get the fire out.” On his blog, Bernanke described a bill proposed by Senators Elizabeth Warren and David Vitter as “roughly equivalent to shutting down the fire department to encourage fire safety.” The appeal of the fire fighter analogy to technocratic policymakers with academic backgrounds must be huge. How many nerds’ dreams can be summed up by the notion of saving people from fire…with your brain!

Do we want our policymakers “playing fire fighter”? Ideally, we would be better off if they were more like Smokey the Bear, preventing rather than responding to emergencies. Anat Admati, among others, makes this point in her piece “Where’s the Courage to Act on Banks?” in which she argues that “banks need much more capital, specifically in the form of equity. In this area, the reforms engendered by the crisis have fallen far short.”

Air Force Fire Protection selected its motto by popular vote in 1980. The motto was nominated by Sergeant William J. Sawyers. A discussion of the new motto in the 1980 Fire Protection Newsletter reveals additional dimensions of the analogy, as well as its limits:
The motto signifies that the first prerequisite of a fire fighter is "the desire to serve." The fire fighter must understand that he is "serving" the public and there is no compensation which is adequate to reward the fire fighter for what they may ultimately give - their life. The second part of the motto is absolutely necessary if the fire fighter is to do the job and do it safely. "The ability to perform" signifies not only a physical and mental ability but also that knowledge is possessed which enables the fire fighter to accomplish the task. The final segment of the motto indicates that fire fighters must have an underlying "courage to act" even when they know what's at stake. To enter a smoke filled building not knowing what's in it or where the fire is, or whether the building is about to collapse requires "courage." To fight an aircraft fire involving munitions, pressure cylinders, volatile fuels, fuel tanks, and just about anything else imaginable requires "courage."
The tripartite Air Force Fire Protection motto emphasizes intrinsic motivation for public service and personal competence as prerequisites to courage. Indeed, in the Roman Catholic tradition, courage, or fortitude, is a cardinal virtue. But as St. Thomas Aquinas explains, fortitude ranks third among the cardinal virtues, behind prudence and justice. He writes that “prudence, since it is a perfection of reason, has the good essentially: while justice effects this good, since it belongs to justice to establish the order of reason in all human affairs: whereas the other virtues safeguard this good, inasmuch as they moderate the passions, lest they lead man away from reason's good. As to the order of the latter, fortitude holds the first place, because fear of dangers of death has the greatest power to make man recede from the good of reason.”

Courage alone, without prudence and justice, is akin to running into a burning building, literally or metaphorically. It may either be commendable or the height of recklessness. As we evaluate Bernanke’s legacy at the Fed, and the role of the Fed more generally, any appraisal of courage should be preceded by consideration of the prudence and justice of Fed actions.

Other mottos that were nominated for the Air Force Fire Protection motto are also interesting to consider in light of the Fed-as-fire-fighter analogy. Which others could Bernanke have considered as book titles? The proposed mottos include:
  • Let us know to let you know we care. 
  • Wherever flames may rage, we are there. 
  • Duty bound. 
  • To serve and preserve. 
  • To intercede in time of need. 
  • When no one else can do. 
  • Duty bound when the chips are down. 
  • For those special times. 
  • Forever vigilant
  • Honor through compassion and bravery.
  • To care to be there. 
  • Prepared for the challenge. 
  • Readiness is our profession.
  • To protect - to serve
  • Without fear and without reproach.
  • Fire prevention - our job is everyone's business
  • Support your fire fighters, we can't do the job alone.

Monday, September 21, 2015

Whose Expectations Augment the Phillips Curve?

My first economics journal publication is now available online in Economics Letters. This link provides free access until November 9, 2015. It is a brief piece (hence the letter format) titled "Whose Expectations Augment the Phillips Curve?" The short answer:
"The inflation expectations of high-income, college-educated, male, and working-age people play a larger role in inflation dynamics than do the expectations of other groups of consumers or of professional forecasters."
Update: The permanent link to the paper is here.

Sunday, September 6, 2015

Which Measure of Inflation Should a Central Bank Target?

"Various monetary proposals can be viewed as inflation targeting with a nonstandard price index: The gold standard uses only the price of gold, and a fixed exchange rate uses only the price of a foreign currency."
That's from a 2003 paper by Greg Mankiw and Ricardo Reis called "What Measure of Inflation Should a Central Bank Target?" At the time, the Federal Reserve had not explicitly announced its inflation target, though an emphasis on core inflation, which excludes volatile food and energy prices (the blue line in the figure below), arose under Alan Greenspan's chairmanship. Other central banks, including the Bank of England and the European Central Bank, instead focus on headline inflation. In 2012, the Fed formalized PCE inflation as its price stability target, but closely monitors core inflation as a key indicator of underlying inflation trends. Some at the Fed, including St. Louis Fed President James Bullard, have argued that the Fed should focus more on headline inflation (the red line) and less on core inflation (the blue line).

Source: FRED
Mankiw and Reis frame the issue more generally than just a choice between core and headline inflation. A price index assigns weights to prices in each sector of the economy. A core price index would put zero weight on food and energy prices, for example, but you could also construct a price index that put a weight of 0.75 on hamburgers and 0.25 on milkshakes, if that struck your fancy. Mankiw and Reis ask how a central bank can optimally choose these weights for the price index it will target in order to maximize some objective.
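Mechanically, the inflation rate of any such index is just a weighted average of sector-level inflation rates. A toy Python sketch, with made-up sectors, weights, and numbers:

```python
# Sector-level year-over-year inflation rates (illustrative numbers, not data).
sector_inflation = {"food": 0.031, "energy": -0.084, "core_goods_services": 0.018}

def weighted_inflation(weights, inflation=sector_inflation):
    """Inflation rate of a price index with the given sector weights."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9, "weights should sum to one"
    return sum(weights[s] * inflation[s] for s in weights)

headline_weights = {"food": 0.14, "energy": 0.08, "core_goods_services": 0.78}
core_weights     = {"food": 0.00, "energy": 0.00, "core_goods_services": 1.00}

print(weighted_inflation(headline_weights))  # headline-style index
print(weighted_inflation(core_weights))      # core-style index
```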

In particular, they suppose the central bank's objective is to minimize the volatility of the output gap. They explain, "We are interested in finding the price index that, if kept on an assigned target, would lead to the greatest stability in economic activity. This concept might be called the stability price index." They model how the weight on each sector in this stability price index depends on certain sectoral properties: the cyclical sensitivity of the sector, the proclivity of the sector to experience idiosyncratic shocks, and the speed with which the prices in the sector can adjust.

The findings are mostly intuitive. If a particular sector's prices are very procyclical, do not experience large idiosyncratic shocks, or are very sticky, that sector should receive relatively large weight in the stability price index. Each of these characteristics makes the sector more useful from a signal-extraction perspective as an indicator of economic activity.

Next, Mankiw and Reis do a "back-of-the-envelope" exercise to calculate the weights for a stability price index for the United States using data from 1957 to 2001. They consider four sectors: food, energy, other goods and services, and nominal wages. They stick to four sectors for simplicity, but it would also be possible to include other prices, like gold and other asset prices. To the extent that these are relatively flexible and volatile, they would probably receive little weight. The inclusion of nominal wages is interesting, because it is a price of labor, not of a consumption good, so it gets a weight of 0 in the consumer price index. But nominal wages are procyclical, not prone to idiosyncratic shocks, and sticky, so the result is that the stability price index weight on nominal wages is near one, while the other sectors get weights near zero. This finding is in line with other results, even derived from very different models, about the optimality of including nominal wages in the monetary policy target.

More recently, Josh Bivens and others have proposed nominal wage targets for monetary policy, but they frame this as an alternative to unemployment as an indicator of labor market slack for the full employment component of the Fed's mandate. In Mankiw and Reis' paper, even a strict inflation targeting central bank with no full employment goal may want to use nominal wages as a big part of its preferred measure of inflation. (Since productivity

If we leave nominal wages out of the picture, the results provide some justification for a focus on core, rather than headline, inflation. Namely, food and energy prices are very volatile and not very sticky. Note, however, that the paper assumes that the central bank has perfect credibility, and can thus achieve whatever inflation target it commits to. In Bullard's argument against a focus on core inflation, he implicitly challenges this assumption:
"One immediate benefit of dropping the emphasis on core inflation would be to reconnect the Fed with households and businesses who know price changes when they see them. With trips to the gas station and the grocery store being some of the most frequent shopping experiences for many Americans, it is hardly helpful for Fed credibility to appear to exclude all those prices from consideration in the formation of monetary policy."
Bullard's concern is that since food and energy prices are so visible and salient for consumers, they might play an oversized role in perceptions and expectations of inflation. If the Fed holds core inflation steady while gas prices and headline inflation rise, maybe inflation expectations will rise a lot, becoming unanchored, and causing feedback into core inflation. There is mixed evidence on whether this is a real concern in recent years. I'm doing some work on this topic myself, and hope to share results soon.

As an aside to my students in Senior Research Seminar, I highly recommend the Mankiw and Reis paper as an example of how to write well, especially if you plan to do a theoretical thesis.

Note: The description of the Federal Reserve's inflation target in the first paragraph of this post was edited for accuracy on September 25.

Sunday, August 30, 2015

False Discoveries and the ROC Curves of Social Science

Diagnostic tests for diseases can suffer from two types of errors. A type I error is a false positive, and a type II error is a false negative. The sensitivity or true positive rate is the probability that a test result will be positive when the disease is actually present. The specificity or true negative rate is the probability that a test result will be negative when the disease is not actually present. Different choices of diagnostic criteria correspond to different combinations of sensitivity and specificity. A more sensitive diagnostic test could reduce false negatives, but might increase the false positive rate. Receiver operating characteristic (ROC) curves are a way to visually present this tradeoff by plotting true positive rates or sensitivity on the y-axis and false positive rates (100%-specificity) on the x-axis.

Source: https://www.medcalc.org/manual/roc-curves.php

As the figure shows, ROC curves are upward sloping-- diagnosing more true positives typically means also increasing the rate of false positives. The curve goes through (0,0) and (100,100), because it is possible to either diagnose nobody as having the disease and get a 0% true positive rate and 0% false positive rate, or to diagnose everyone as having the disease and get a 100% true positive rate and 100% false positive rate. The further an ROC curve is above the 45 degree line, the better the diagnostic test is, because for any level of false positives, you get a higher level of true positives.
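For concreteness, here is how an ROC curve is traced out in practice from a continuous diagnostic score: sweep the diagnostic threshold and record the true and false positive rates at each cutoff. The sketch below uses scikit-learn on simulated data; the prevalence and score distributions are invented.

```python
import numpy as np
import matplotlib.pyplot as plt
from sklearn.metrics import roc_curve, auc

rng = np.random.default_rng(1)

# Simulated data: 1 = disease present, 0 = absent; higher scores suggest disease.
n = 1000
has_disease = rng.binomial(1, 0.3, size=n)
score = rng.normal(loc=1.5 * has_disease, scale=1.0)   # imperfectly informative test

# Each threshold on the score gives one (false positive rate, true positive rate) point.
fpr, tpr, thresholds = roc_curve(has_disease, score)
print("Area under the curve:", auc(fpr, tpr))

plt.plot(fpr, tpr, label="diagnostic test")
plt.plot([0, 1], [0, 1], linestyle="--", label="45 degree line (uninformative)")
plt.xlabel("False positive rate (1 - specificity)")
plt.ylabel("True positive rate (sensitivity)")
plt.legend()
plt.show()
```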

Rafa Irizarry at the Simply Statistics blog makes a really interesting analogy between diagnosing disease and making scientific discoveries. Scientific findings can be true or false, and if we imagine that increasing the rate of important true discoveries also increases the rate of false positive discoveries, we can plot ROC curves for scientific disciplines. Irizarry imagines the ROC curves for biomedical science and physics (see the figure below). Different fields of research vary in the position and shape of the ROC curve--what you can think of as the production possibilities frontier for knowledge in that discipline-- and in the position on the curve.

In Irizarry's opinion, physicists make fewer important discoveries per decade and also fewer false positives per decade than biomedical scientists. Given the slopes of the curves he has drawn, biomedical scientists could make fewer false positives, but at a cost of far fewer important discoveries.

Source: Rafa Irizarry
A particular scientific field could move along its ROC curve by changing the field's standards regarding peer review and replication, changing norms regarding significance testing, etc. More critical review standards for publication would be represented by a shift down and to the left along the ROC curve, reducing the number of false findings that would be published, but also potentially reducing the number of true discoveries being published. A field could shift its ROC curve outward (good) or inward (bad) by changing the "discovery production technology" of the field.

The importance of discoveries is subjective, and we don't really know the number of "false positives" in any field of science; some are never detected. But lately, evidence of fraudulent or otherwise irreplicable findings in political science and psychology points to potentially high false positive rates in the social sciences. A few days ago, Science published an article on "Estimating the Reproducibility of Psychological Science." From the abstract:
We conducted replications of 100 experimental and correlational studies published in three psychology journals using high-powered designs and original materials when available. Replication effects were half the magnitude of original effects, representing a substantial decline. Ninety-seven percent of original studies had statistically significant results. Thirty-six percent of replications had statistically significant results; 47% of original effect sizes were in the 95% confidence interval of the replication effect size; 39% of effects were subjectively rated to have replicated the original result; and if no bias in original results is assumed, combining original and replication results left 68% with statistically significant effects.
As studies of this type hint that the social sciences may be far to the right along an ROC curve, it is interesting to try to visualize the shape of the curve. The physics ROC curve that Irizarry drew is very steep near the origin, so an attempt to reduce false positives further would, in his view, sharply reduce the number of important discoveries. Contrast that to his curve for biomedical science. He indicates that biomedical scientists are on a relatively flat portion of the curve, so reducing the false positive rate would not reduce the number of important discoveries by very much.

What does the shape of the economics ROC curve look like in comparison to those of other sciences, and where along the curve are we? What about macroeconomics in particular? Hypothetically, if we have one study that discovers that the fiscal multiplier is smaller than one, and another study that discovers that the fiscal multiplier is greater than one, then one study is an "important discovery" and one is a false positive. If these were our only two macroeconomic studies, we would be exactly on the 45 degree line with perfect sensitivity but zero specificity.


Thursday, August 6, 2015

Macroeconomics Research at Liberal Arts Colleges

I spent the last two days at the 11th annual Workshop on Macroeconomics Research at Liberal Arts Colleges at Union College. The workshop reflects the growing emphasis that liberal arts colleges place on faculty research. There were four two-hour sessions of research presentations--international, banking, information and expectations, and theory--in addition to breakout sessions on pedagogy. I presented my research in the information and expectations session.

I definitely recommend this workshop to other liberal arts macro professors. The end of summer timing was great. I got to think about how to prioritize my research goals before the semester starts and to hear advice on teaching and course planning from a lot of really passionate teachers. It was very encouraging to witness how many liberal arts college professors at all stages of their careers have maintained very active research agendas while also continually improving in their roles as teachers and advisors.

After dinner on the first day of the workshop, there was a panel discussion about publishing with undergraduates. I also attended a pedagogy session on advising undergraduate research. Many of the liberal arts colleges represented at the workshop have some form of a senior thesis requirement. A big part of the discussion was how to balance the emphasis on "product vs. process" for undergraduate research. In other words, how active a role should a faculty member take in trying to ensure a high-quality final product for a senior thesis project versus ensuring that various learning goals are met? What should those learning goals be? Some possibilities include helping students decide whether they want to go to grad school, fostering independence, writing skills, econometric techniques, and the ability to form an economic argument. And relatedly, how should grades or honors designations reflect the final product and the learning goals that are emphasized?

We also discussed the relative merits of helping students publish their research, either in an undergraduate journal or a professional journal. There was considerable uncertainty about how very low-ranked publications with undergraduate coauthors affect an assistant professor's tenure case, and a general desire for more explicit guidelines about whether such work is considered a valuable contribution.

These discussions of research by or with undergraduates left me really curious to hear about others' experiences doing or supervising undergraduate research. I'd be very happy to feature some examples of research with or by undergraduates as guest posts. Send me an email if you're interested.

At least two other conference participants have blogs, and they are definitely worth checking out. Joseph Joyce of Wellesley blogs about international finance at "Capital Ebbs and Flows." Bill Craighead of Wesleyan blogs at "Twenty-Cent Paradigms." Both have recent thoughtful commentary on Greece.

Friday, July 31, 2015

Surveys in Crisis

In "Household Surveys in Crisis," Bruce D. Meyer, Wallace K.C. Mok, and James X. Sullivan describe household surveys as "one of the main innovations in social science research of the last century." Large, nationally representative household surveys are the source of official rates of unemployment, poverty, and health insurance coverage, and are used to allocate government funds. But the quality of survey data is declining on at least three counts.

The first and most commonly studied problem is the rise in unit nonresponse, meaning fewer people are willing to take a survey when asked. Two other growing problems are item nonresponse-- when someone agrees to take the survey but refuses to answer particular questions-- and inaccurate responses. Of course, the three problems can be related. For example, attempts to reduce unit nonresponse by persuading reluctant households to take a survey could raise item nonresponse and inaccurate responses if these reluctant participants rush through a survey they didn't really want to take in the first place.

Unit nonresponse, item nonresponse, and inaccurate responses would not be too troublesome if they were random enough that survey statistics were unbiased, but that is unlikely to be the case. Nonresponse and misreporting may be systematically correlated with relevant characteristics such as income or receipt of government funds. Meyer, Mok, and Sullivan look at survey data about government transfer programs for which corresponding administrative data is also available, so they can compare survey results to presumably more accurate administrative data. In this case, the survey data understates incomes at the bottom of the distribution, understates the rate of program receipt and the poverty reducing effects of government programs, and overstates measures of poverty and of inequality. For other surveys that cannot be linked to administrative data, it is difficult to say which direction biases will go.

Why has survey quality declined? The authors discuss many of the traditional explanations:
"Among the traditional reasons proposed include increasing urbanization, a decline in public spirit, increasing time pressure, rising crime (this pattern reversed long ago), increasing concerns about privacy and confidentiality, and declining cooperation due to 'over-surveyed' households (Groves and Couper 1998; Presser and McCullogh 2011; Brick and Williams 2013). The continuing increase in survey nonresponse as urbanization has slowed and crime has fallen make these less likely explanations for present trends. Tests of the remaining hypotheses are weak, based largely on national time-series analyses with a handful of observations. Several of the hypotheses require measuring societal conditions that can be difficult to capture: the degree of public spirit, concern about confidentiality, and time pressure...We are unaware of strong evidence to support or refute a steady decline in public spirit or a rise in confidentiality concerns as a cause for declines in survey quality."
They find it most likely that the sharp rise in the number of government surveys administered in the US since 1984 has resulted in declining cooperation by "over-surveyed" households. "We suspect that talking with an interviewer, which once was a rare chance to tell someone about your life, now is crowded out by an annoying press of telemarketers and commercial surveyors."

Personally, I have not received any requests to participate in government surveys and rarely receive commercial survey requests. Is this just because I moved around so much as a student? Am I about to be flooded with requests? I think I would actually find it fun to take some surveys after working with the data so much. Please leave a comment about your experience with taking (or declining to take) surveys.

The authors also note that since there is a trend toward greater leisure time, it is unlikely that increased time pressure is resulting in declining survey quality. However, while people have more leisure time, they may also have more things to do with their leisure time (I'm looking at you, Internet) that they prefer to taking surveys. Intuitively I would guess that as people have grown more accustomed to doing everything online, they are less comfortable talking to an interviewer in person or on the phone. Since I almost never have occasion to go to the post office, I can imagine forgetting to mail in a paper survey. Switching surveys to online format could result in a new set of biases, but may eventually be the way to go.

I would also guess that the Internet has changed people's relationship with information, even information about themselves. When you can look up anything easily, that can change what you decide to remember and what facts you feel comfortable reporting off the top of your head to an interviewer.

Wednesday, July 8, 2015

Trading on Leaked Macroeconomic Data

The official release times of U.S. macroeconomic data are big deals in financial markets. A new paper finds evidence of substantial informed trading before the official release time of certain macroeconomic variables, suggesting that information is often leaked. Alexander Kurov, Alessio Sancetta, Georg H. Strasser, and Marketa Halova Wolfe examine high-frequency stock index and Treasury futures markets data around releases of U.S. macroeconomic announcements:
These announcements are quintessential updates to public information on the economy and fundamental inputs to asset pricing. More than a half of the cumulative annual equity risk premium is earned on announcement days (Savor & Wilson, 2013) and the information is almost instantaneously reflected in prices once released (Hu, Pan, & Wang, 2013). To ensure fairness, no market participant should have access to this information until the official release time. Yet, in this paper we find strong evidence of informed trading before several key macroeconomic news announcements....Prices start to move about 30 minutes before the official release time and the price move during this pre-announcement window accounts on average for about a half of the total price adjustment.
They consider the 30 macroeconomic announcements that other authors have shown tend to move markets, and find evidence of:

  • Significant pre-announcement price drift for: CB consumer confidence index, existing home sales, GDP preliminary, industrial production, ISM manufacturing index, ISM non-manufacturing index, and pending home sales.
  • Some pre-announcement drift for: advance retail sales, consumer price index, GDP advance, housing starts, and initial jobless claims.
  • No pre-announcement drift for: ADP employment, durable goods orders, new home sales, non-farm employment, producer price index, and UM consumer sentiment.
The figure below shows mean cumulative average returns in the E-mini S&P 500 Futures market from 60 minutes before the release time to 60 minutes after the release time for the series with significant evidence of pre-announcement drift.

Source: Kurov et al. 2015, Figure A1, panel c. Cumulative average returns in the E-mini S&P 500 Futures market.
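The calculation behind a figure like this is an event-study average: align high-frequency returns on each announcement's release timestamp, average across announcements minute by minute, and cumulate over the event window. A rough Python sketch with hypothetical file and column names (not the authors' code or data):

```python
import numpy as np
import pandas as pd

# Hypothetical inputs: one-minute log returns on the E-mini S&P 500 future,
# indexed by timestamp, and a list of announcement release timestamps.
returns = pd.read_csv("es_minute_returns.csv", index_col=0, parse_dates=True)["ret"]
release_times = pd.to_datetime(pd.read_csv("releases.csv")["release_time"])

window = range(-60, 61)  # minutes relative to the release
paths = []
for t0 in release_times:
    # return in each minute of the event window, NaN if no trade/observation
    rel = [returns.get(t0 + pd.Timedelta(minutes=m), np.nan) for m in window]
    paths.append(rel)

paths = pd.DataFrame(paths, columns=list(window))
car = paths.mean(axis=0).cumsum()   # cumulative average return across events

print(car.loc[0])    # average cumulative return up to the release minute
print(car.loc[60])   # ... sixty minutes after the release
```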
Why do prices start to move before release time? It could be that some traders are superior forecasters, making better use of publicly-available information, and waiting until a few minutes before the announcement to make their trades. Alternatively, information might be leaked before the official release. Kurov et al. note that, while the first possibility cannot be ruled out entirely, the leaked information explanation appears highly likely. The authors conducted a phone and email survey of the organizations responsible for the macroeconomic data in their study to find out about data release procedures:
The release procedures fall into one of three categories. The first category involves posting the announcement on the organization’s website at the official release time, so that all market participants can access the information at the same time. The second category involves pre-releasing the information to selected journalists in “lock-up rooms” adding a risk of leakage if the lock-up is imperfectly guarded. The third category, previously not documented in academic literature, involves an unusual pre-release procedure used in three announcements: Instead of being pre-released in lock-up rooms, these announcements are electronically transmitted to journalists who are asked not to share the information with others. These three announcements are among the seven announcements with strong drift.
I wish I had a better sense of who was obtaining the leaked information and how much they were making from it.

Wednesday, June 24, 2015

Forecasting in Unstable Environments

I recently returned from the International Symposium on Forecasting "Frontiers in Forecasting" conference in Riverside. I presented some of my work on inflation uncertainty in a session devoted to uncertainty and the real economy. A highlight was the talk by Barbara Rossi, a featured presenter from Universitat Pompeu Fabra, on "Forecasting in Unstable Environments: What Works and What Doesn't." (This post will be a bit more technical than my usual.)

Rossi spoke about instabilities in reduced form models and gave an overview of the evidence on what works and what doesn't in guarding against these instabilities. The basic issue is that the predictive ability of different models and variables changes over time. For example, the term spread was a pretty good predictor of GDP growth until the 1990s, and the credit spread was not. But in the 90s the situation reversed, and the credit spread became a better predictor of GDP growth while the term spread got worse.

Rossi noted that break tests and time varying parameter models, two common ways to protect against instabilities in forecasting relationships, do involve tradeoffs. For example, it is common to test for a break in an empirical relationship, then estimate a model in which the coefficients before and after the break differ. Including a break point reduces the bias of your estimates, but also reduces the precision. The more break points you add, the shorter are the time samples you use to estimate the coefficients. This is similar to what happens if you start adding tons of control variables to a regression when your number of observations is small.
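A tiny simulated example of that bias/precision tradeoff: estimating a predictive coefficient separately before and after a known break removes the bias from pooling two regimes, but each subsample estimate comes with a larger standard error. This is purely illustrative, not Rossi's setup.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)

T, break_point = 200, 100
x = rng.normal(size=T)
beta = np.where(np.arange(T) < break_point, 0.8, 0.2)   # coefficient shifts at the break
y = beta * x + rng.normal(size=T)

def ols_slope(y, x):
    """Return the OLS slope and its standard error from a regression with a constant."""
    res = sm.OLS(y, sm.add_constant(x)).fit()
    return res.params[1], res.bse[1]

print("full sample :", ols_slope(y, x))                               # biased toward an average
print("pre-break   :", ols_slope(y[:break_point], x[:break_point]))   # unbiased, but noisier
print("post-break  :", ols_slope(y[break_point:], x[break_point:]))
```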

Rossi also discussed rolling window estimation. Choosing the optimal window size is a challenge, with a similar bias/precision trade-off. The standard practice of reporting results from only a single window size is problematic, because the window size may have been selected based on "data snooping" to obtain the most desirable results. In work with Atsushi Inoue, Rossi develops out-of-sample forecast tests that are robust to window size. Many of the basic tools and tests from macroeconomic forecasting-- Granger causality tests, forecast comparison tests, and forecast optimality tests-- can be made more robust to instabilities. For details, see Raffaella Giacomini and Rossi's chapter in the Handbook of Research Methods and Applications on Empirical Macroeconomics and references therein.
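One way to see the sensitivity to window size is simply to compute out-of-sample forecast errors over a whole grid of window lengths rather than a single, possibly data-snooped choice. The sketch below does this for illustrative rolling AR(1) forecasts on simulated data; it is not the Inoue-Rossi robust test itself.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated target series with a mildly unstable AR(1) coefficient.
T = 400
y = np.zeros(T)
for t in range(1, T):
    rho = 0.7 if t < 200 else 0.3
    y[t] = rho * y[t - 1] + rng.normal()

def rolling_ar1_rmse(y, window):
    """One-step-ahead AR(1) forecasts estimated on a rolling window; returns RMSE."""
    errors = []
    for t in range(window, len(y) - 1):
        ys, xs = y[t - window + 1 : t + 1], y[t - window : t]
        rho_hat = np.dot(xs, ys) / np.dot(xs, xs)   # OLS slope, no intercept
        errors.append(y[t + 1] - rho_hat * y[t])
    return np.sqrt(np.mean(np.square(errors)))

for window in (40, 80, 120, 160, 200):
    print(window, rolling_ar1_rmse(y, window))
```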

A bit of practical advice from Rossi was to maintain large-dimensional datasets as a guard against instability. In unstable environments, variables that are not useful now may be useful later, and it is increasingly computationally feasible to store and work with big datasets.

Wednesday, June 17, 2015

Another Four Percent

When Jeb Bush announced his presidential candidacy on Monday, he made a bold claim. "There's not a reason in the world we can’t grow at 4 percent a year,” he said, “and that will be my goal as president.”

You can pretty much guarantee that whenever a politician claims "there's not a reason in the world," plenty of people will be happy to provide one, and this case is no exception. Those reasons aside, for now, where did this 4 percent target come from? Jordan Weissmann explains that "the figure apparently originated during a conference call several years ago, during which Bush and several other advisers were brainstorming potential economic programs for the George W. Bush Institute...Jeb casually tossed out the idea of 4 percent growth, which everybody loved, even though it was kind of arbitrary." Jeb Bush himself calls 4 percent "a nice round number. It's double the growth that we are growing at." (To which Jon Perr snippily adds, "It's also an even number and the square of two.")

Let's face it, we have a thing for nice, round, kind of arbitrary numbers. The 2 percent inflation target, for example, was not chosen as the precise solution to some optimization problem, but more as a "rough guess [that] acquired force as a focal point." Psychology research shows that people put in extra effort to reach round number goals, like a batting average of .300 rather than .299. A 4 percent growth target reduces something multidimensional and hard to define--economic success--to a single, salient number. An explicit numerical target provides an easy guide for accountability. This can be very useful, but it can also backfire.

As an analogy, imagine that citizens of some country have a vague, noble goal for their education system, like "improving student learning." They want to encourage school administrators and teachers to pursue this goal and hold them accountable. But with so many dimensions of student learning, it is difficult to gauge effort or success. They could introduce a mandatory, standardized math test for all students, and rate a teacher as "highly successful" if his or her students' scores improve by at least 10% over the course of the year. A nice round number. This would provide a simple, salient way to judge success, and it would certainly change what goes on in the classroom, with obvious upsides and downsides. Many teachers would put in more effort to ensure that students learned math--at least, the math covered on the test--but might neglect literature, art, or gym. Administrators might have an incentive to engage in some deceptive accounting practices, finding reasons why a particular student's score should not be counted, or why a group of students should switch classrooms. Even outright cheating, though likely rare, is possible, especially if jobs hinge on the difference between 9.9% improvement and 10%. What is changing one or two answers?

Ceteris paribus, more math skills would bring a variety of benefits, just like more growth would, as the George W. Bush Institute's 4% Growth Project likes to point out. But making 4 percent growth the standard for success could also change policymakers' incentives and behaviors in some perverse ways. Potential policies' ability to boost growth will be overemphasized, and other merits or flaws (e.g. for the environment or the income distribution) underemphasized. The purported goal is sustained 4 percent growth over long time periods, which implies making the kind of long-run-minded reforms that boost both actual and potential GDP--not just running the economy above capacity for as long as possible until the music stops. But realistically, a president would worry more about achieving 4 percent while in office and less about afterwards, encouraging short-termism at best, or more unsavory practices at worst.

Even with all of these caveats, if the idea of a 4 percent solution still sounds appealing, it is worth opening up the discussion to what other 4 percent solutions might be better. Laurence Ball, Brad DeLong, and Paul Krugman have made the case for a 4 percent inflation target. I see their points but am not fully convinced. But what about 4 percent unemployment? Or 4 percent nominal wage growth? Are they more or less attainable than 4 percent GDP growth, and how would the benefits compare? If we do decide to buy into a 4 percent target, it is worth at least pausing to think about which 4 percent.

Tuesday, June 16, 2015

Wage Increases Do Not Signal Impending Inflation

When the FOMC meets over the next two days, they will surely be looking for signs of impending inflation. Even though actual inflation is below target, any hint that pressure is building will be seized upon by more hawkish committee members as impetus for an earlier rate rise. The relatively strong May jobs report and uptick in nominal wage inflation are likely to draw attention in this respect.

Hopefully the FOMC members are aware of new research by two of the Fed's own economists, Ekaterina Peneva and Jeremy Rudd, on the passthrough (or lack thereof) of labor costs to price inflation. The research, which fails to find an important role for labor costs in driving inflation movements, casts doubts on wage-based explanations of inflation dynamics in recent years. They conclude that "price inflation now responds less persistently to changes in real activity or costs; at the same time, the joint dynamics of inflation and compensation no longer manifest the type of wage–price spiral that was evident in earlier decades."

Peneva and Rudd use a time-varying parameter/stochastic volatility VAR framework which lets them see how core inflation responds to a shock to the growth rate of labor costs at different times. The figure below shows how the response has varied over the past few decades. In 1975 and 1985, a rise in labor cost growth was followed by a rise in core inflation, but in recent decades, both before and after the Great Recession, there is no such response:

Peneva and Rudd do not take a strong stance on why wage-price dynamics appear to have changed. But their findings do complement research from Yash Mehra in 2000, who suggests that "One problem with this popular 'cost-push' view of the inflation process is that it does not recognize the influences of Federal Reserve policy and the resulting inflation environment on determining the causal influence of wage growth on inflation. If the Fed follows a non-accommodative monetary policy and keeps inflation low, then firms may not be able to pass along excessive wage gains in the form of higher product prices." Mehra finds that "Wage growth no longer helps predict inflation if we consider subperiods that begin in the early 1980s...The period since the early 1980s is the period during which the Fed has concentrated on keeping inflation low. What is new here is the finding that even in the pre- 1980 period there is another subperiod, 1953Q1 to 1965Q4, during which wage growth does not help predict inflation. This is also the subperiod during which inflation remained mostly low, mainly due to monetary policy pursued by the Fed."
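A crude way to revisit Mehra's subsample finding is to test whether lagged wage growth helps predict inflation over different periods, for example with Granger causality tests. Here is a sketch assuming a hypothetical CSV of quarterly series; the file and column names are made up, and this is of course far simpler than Peneva and Rudd's time-varying parameter VAR.

```python
import pandas as pd
from statsmodels.tsa.stattools import grangercausalitytests

# Hypothetical quarterly data with columns 'core_inflation' and 'wage_growth';
# the file and column names are placeholders, not an actual dataset.
df = pd.read_csv("wages_prices_quarterly.csv", index_col=0, parse_dates=True)

subsamples = {"1965-1984": ("1965-01-01", "1984-12-31"),
              "1985-2014": ("1985-01-01", "2014-12-31")}

for label, (start, end) in subsamples.items():
    sub = df.loc[start:end, ["core_inflation", "wage_growth"]].dropna()
    # Null hypothesis: lags of wage_growth do not help predict core_inflation.
    res = grangercausalitytests(sub.values, maxlag=4)
    pvalues = {lag: round(res[lag][0]["ssr_ftest"][1], 3) for lag in res}
    print(label, "p-values by lag:", pvalues)
```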

Monday, May 25, 2015

The Limited Political Implications of Behavioral Economics

A recent post on Marginal Revolution contends that progressives use findings from behavioral economics to support the economic policies they favor, while ignoring the implications that support conservative policies. The short post, originally a comment by blogger and computational biologist Luis Pedro Coelho, is perhaps intentionally controversial, arguing that loss aversion is a case against redistributive policies and social mobility:
"Taking from the higher-incomes to give it to the lower incomes may be negative utility as the higher incomes are valuing their loss at an exaggerated rate (it’s a loss), while the lower income recipients under value it... 
...if your utility function is heavily rank-based (a standard left-wing view) and you accept loss-aversion from the behavioral literature, then social mobility is suspect from an utility point-of-view."
Tyler Cowen made a similar point a few years ago, arguing that "For a given level of income, if some are moving up others are moving down... More upward — and thus downward — relative mobility probably means less aggregate happiness, due to habit formation and frame of reference effects."

I don't think loss aversion, habit formation, and the like make a strong case against (or for) redistribution or social mobility, but I do think Coelho has a point that economists need to watch out for our own confirmation bias when we go pointing out other behavioral biases to support our favorite policies. Simply appealing to behavioral economics, in general, or to loss aversion or any number of documented decision-making biases, rarely makes a strong case for or against broad policy aims or strategies. The reason is best summarized by Wolfgang Pesendorfer in "Behavioral Economics Comes of Age":
Behavioral economics argues that economists ignore important variables that affect behavior. The new variables are typically shown to affect decisions in experimental settings. For economists, the difficulty is that these new variables may be unobservable or even difficult to define in economic settings with economic data. From the perspective of an economist, the unobservable variable amounts to a free parameter in the utility function. Having too many such parameters already, the economist finds it difficult to utilize the experimental finding.
All economic models require making drastic simplifications of reality. Whether they can say anything useful depends on how well they can capture those aspects of reality that are relevant to the question at hand and leave out those that aren't. Behavioral economics has done a good job of pointing out some aspects of reality that standard models leave out, but not always of telling us exactly when these are more relevant than dozens of other aspects of reality we also leave out without second thought. For example, "default bias" seems to be a hugely important factor in retirement savings, so it should definitely be a consideration in the design of very narrow policies regarding 401(k) plan participation, but that does not mean we need to also include it in every macroeconomic model.

Monday, May 11, 2015

Release of "Rewriting the Rules"

I have been working with the Roosevelt Institute and Joseph Stiglitz on a report called "Rewriting the Rules of the American Economy: An Agenda for Growth and Shared Prosperity":
In this new report, the Roosevelt Institute exposes the link between the rapidly rising fortunes of America’s wealthiest citizens and increasing economic insecurity for everyone else. The conclusion is clear: piecemeal policy change will not do. To improve economic performance and create shared prosperity, we must rewrite the rules of our economy.
The report will be released tomorrow morning in DC, with remarks by Senator Elizabeth Warren and Mayor Bill de Blasio. You can watch the livestream beginning at 9 a.m. Eastern tomorrow (May 12). There will be an excellent panel of speakers including Rana Foroohar, Heather Boushey, Stan Greenberg, Simon Johnson, Bob Solow, and Lynn Stout. You can also follow along on Twitter with the hashtag #RewriteTheRules.

Monday, May 4, 2015

Firm Balance Sheets and Unemployment in the Great Recession

The balance sheets of households and financial firms have received a lot of emphasis in research on the Great Recession. The balance sheets of non-financial firms, in contrast, have received less attention. At first glance, this is perfectly reasonable; households and financial firms had high and rising leverage in the years leading up to the Great Recession, while non-financial firms' leverage remained constant (Figure 1, below).

New research by Xavier Giroud and Holger M. Mueller argues that the flat trendline for non-financial firms' leverage obscures substantial variation across firms, which proves important to understanding employment in the recession. Some firms saw large increases in leverage prior to the recession and others large declines. Using an establishment-level dataset with more than a quarter million observations, Giroud and Mueller find that "firms that tightened their debt capacity in the run-up ('high-leverage firms') exhibit a significantly larger decline in employment in response to household demand shocks than firms that freed up debt capacity ('low-leverage firms')."
The authors emphasize that "we do not mean to argue that household balance sheets or those of financial intermediaries are unimportant. On the contrary, our results are consistent with the view that falling house prices lead to a drop in consumer demand by households (Mian, Rao, and Sufi (2013)), with important consequences for employment (Mian and Sufi (2014)). But households do not lay off workers. Firms do. Thus, the extent to which demand shocks by households translate into employment losses depends on how firms respond to these shocks."

Firms' responses to household demand shocks depend largely on their balance sheets. Low-leverage firms were able to increase their borrowing during the recession to avoid reducing employment, while high-leverage firms were financially constrained and could not raise external funds to avoid reducing employment and cutting back investment:
"In fact, all of the job losses associated with falling house prices are concentrated among establishments of high-leverage firms. By contrast, there is no significant association between changes in house prices and changes in employment during the Great Recession among establishments of low-leverage firms."

Thursday, April 16, 2015

On Bernanke and Citadel

Two weeks ago, I told the Washington Examiner that we don't need to worry about Ben Bernanke's blogging turning him into a "shadow chair." I must confess that I was taken aback this morning to learn that Bernanke will also become a senior adviser to Citadel, a large hedge fund. Let me explain how this announcement modifies some, but not all, of what I wrote in my last post about Bernanke's post-chairmanship role.

I wrote, "We want our top thinkers going into public service at the Fed and other government agencies. These top thinkers place a high value on having a public voice, and the blogosphere is increasingly the forum for that." I still agree with this at gut level. I think Bernanke is an intellectual with the public interest at heart and that he really intends the blog as a public service  Now I also know more about the personal financial interests he has at stake, which I will keep in mind when reading his blogging. (Which we really all should do with whatever we are reading.) I think most people are capable of acting against their best financial interests to maintain ideals and standards, but even the most upright are subject to subconscious suasion.

I also wrote that I hoped Bernanke's blog would increase Fed accountability and transparency. Maybe, but only very indirectly. I don't think Bernanke is personally violating any bounds either by blogging or by joining Citadel, but that his joining Citadel is symptomatic of larger boundary violations in the governance structure of the Fed system and its ties to Wall Street. Bernanke told the New York Times that he was "sensitive to the public's anxieties about the 'revolving door' between Wall Street and Washington and chose to go to Citadel, in part, because it 'is not regulated by the Federal Reserve and I won’t be doing lobbying of any sort.' He added that he had been recruited by banks but declined their offers. 'I wanted to avoid the appearance of a conflict of interest,' he said. 'I ruled out any firm that was regulated by the Federal Reserve.'"

I take him at his word while at the same time expecting and hoping that the public's anxieties about the revolving door will not be calmed by Bernanke's choice of which particular Wall Street firm to join. The public doesn't draw a clear line, nor should they, between Wall Street institutions regulated by the Fed and not regulated by the Fed, or between "lobbying of any sort" and "very public figure saying things to policymakers." Maybe he ruled out conflict of interest to some degree, but certainly not appearance of conflict of interest. So if this looks a little unseemly, I hope that is enough to catalyze change in Fed governance. Even if Bernanke's link to Wall Street is not inherently problematic, the overall role of Wall Street insiders in Fed governance is too large.

Saturday, April 4, 2015

Do Not Fear the Shadow Chair

I was recently interviewed for an article in the Washington Examiner, "Bernanke is Back and Blogging." The author, Joseph Lawler, asked what I thought about a former Federal Reserve chair becoming an active blogger, and in particular whether I thought there was a risk of Bernanke becoming a "shadow chairman." Lawler also interviewed Peter Conti-Brown, who said that this was "absolutely" a risk.

I don't share the concern. My response to Lawler was too long for him to include in its entirety, so I'll post it here.
I don't think we need to worry about Ben Bernanke becoming a "shadow chairman." The blog is not as unprecedented as it might seem. Alan Greenspan and Paul Volcker both remain active public figures who not only comment on the economy, but also advocate particular policies. Greenspan has published several books since he was chairman, and Volcker has a think tank, the Volcker Alliance. Neither of them has become a shadow chair. We want our top thinkers going into public service at the Fed and other government agencies. These top thinkers place a high value on having a public voice, and the blogosphere is increasingly the forum for that. If serving precludes them from later participating in the public forum, we will have trouble attracting the best people to these roles in the future. 
I think it is good to have a former Fed chair participating in a forum like a blog, which is freely available to the public and fosters debate. It is also a good thing if this blog brings more attention to the Fed and how it pursues its mandate. Since Fed officials are not elected, the Fed needs to be accountable to the public in other ways, and accountability requires that people be aware of the Fed and really think about and challenge its actions. In my dissertation I show that this is not currently the case-- people don't understand the Fed well enough to be able to hold it accountable. I argue that the Fed needs a strong new media strategy as part of its communication strategy. If former Fed officials make their opinions public, the public will likely put more pressure on current officials to respond and explain their own views and any differences of opinion. This increases accountability. The Fed also claims to place high value on transparency, which is a change from the central banking philosophy of several decades ago, so it should be glad that people formerly at the Fed are trying to explain their thinking in a clear way that helps people understand.
As a blogger myself, I think it will be very fun to have Bernanke in the blogosphere and to follow him on Twitter. He will bring an interesting perspective on which topics are most worth thinking more about; the topics that interest him enough to prompt a post will certainly be of great interest to the rest of us bloggers. It will be fun to think through and react to what he writes.
Wishing you a very happy Easter!

Saturday, March 28, 2015

Politicians or Technocrats: Who Splits the Cake?

In most countries, non-elected central bankers conduct monetary policy, while fiscal policy is chosen by elected representatives. It is not obvious that this arrangement is appropriate. In 1997, Alan Blinder suggested that Americans leave "too many policy decisions in the realm of politics and too few in the realm of technocracy," and that tax policy might be better left to technocrats. The bigger issue these days is whether an independent, non-elected Federal Reserve can truly be "accountable" to the public, and whether Congress should have more control over monetary policy.

The standard theoretical argument for delegating monetary policy to a non-elected bureaucrat is the time inconsistency problem. As Blinder explains, "the pain of fighting inflation (higher unemployment for a while) comes well in advance of the benefits (permanently lower inflation). So shortsighted politicians with their eyes on elections would be tempted to inflate too much." But time inconsistency problems arise in fiscal policy too. Blinder adds, "Myopia is a serious practical problem for democratic governments because politics tends to produce short time horizons -- often extending only until the next election, if not just the next public opinion poll. Politicians asked to weigh short-run costs against long-run benefits may systematically shortchange the future."
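To make the inflation-bias logic concrete, here is a minimal textbook-style sketch in the spirit of Kydland-Prescott and Barro-Gordon (my illustration, not Blinder's; the quadratic loss and notation are standard assumptions). Suppose the policymaker minimizes
\[
L = \tfrac{1}{2}\pi^2 + \tfrac{\lambda}{2}(y - y^*)^2, \qquad y = \bar{y} + (\pi - \pi^e), \qquad y^* = \bar{y} + k, \; k > 0,
\]
where $\pi$ is inflation, $\pi^e$ is expected inflation, $y$ is output, and the desired output level $y^*$ exceeds the natural level $\bar{y}$. Under discretion the policymaker takes $\pi^e$ as given, so the first-order condition $\pi + \lambda(\pi - \pi^e - k) = 0$ gives $\pi = \tfrac{\lambda}{1+\lambda}(\pi^e + k)$. Imposing rational expectations ($\pi^e = \pi$) yields
\[
\pi = \lambda k > 0, \qquad y = \bar{y}.
\]
Discretion delivers higher inflation than a credible commitment to $\pi = 0$ but no extra output-- the classic case for handing the decision to someone insulated from the short-run temptation.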

So why do we assign some types of policymaking to bureaucrats and some to elected officials? And could we do better? In a two-paper series on "Bureaucrats or Politicians?," Alberto Alesina and Guido Tabellini (2007) study the question of task allocation between bureaucrats and politicians. In their model, neither bureaucrats nor politicians are purely "benevolent"; each has a different objective function, depending on how they are held accountable:
Politicians are held accountable, by voters, at election time. Top-level bureaucrats are accountable to their professional peers or to the public at large, for how they have fulfilled the goals of their organization. These different accountability mechanisms induce different incentives. Politicians are motivated by the goal of pleasing voters, and hence winning elections. Top bureaucrats are motivated by "career concerns," that is, they want to fulfill the goals of their organization because this improves their external professional prospects in the public or private sector.
The model implies that, for the purpose of maximizing social welfare, some tasks are better suited for bureaucrats and others for politicians. When the public can only imperfectly monitor effort and talent, elected politicians are preferable for tasks where effort matters more than ability. Bureaucrats are preferable for highly technical tasks, like monetary policy, regulatory policy, and public debt management. This is in line with Blinder's intuition; he argued that extremely technical judgments ought to be left to technocrats and value judgments to legislators, while recognizing that both monetary and fiscal policy involve substantial amounts of both technical and value judgments.

Alesina and Tabellini's model also helps formalize and clarify Blinder's intuition on what he calls "general vs. particular" effects. Blinder writes:
Some public policy decisions have -- or are perceived to have -- mostly general impacts, affecting most citizens in similar ways. Monetary policy, for example...is usually thought of as affecting the whole economy rather than particular groups or industries. Other public policies are more naturally thought of as particularist, conferring benefits and imposing costs on identifiable groups...When the issues are particularist, the visible hand of interest-group politics is likely to be most pernicious -- which would seem to support delegating authority to unelected experts. But these are precisely the issues that require the heaviest doses of value judgments to decide who should win and lose. Such judgments are inherently and appropriately political. It's a genuine dilemma.
Alesina and Tabellini consider a bureaucrat and an elected official each assigned a task of "splitting a cake." Depending on the nature of the cake splitting task, a bureaucrat is usually preferable; specifically, "with risk neutrality and fair bureaucrats, the latter are always strictly preferred ex ante. Risk aversion makes the bureaucrat more or less desirable ex ante depending on how easy it is to impose fair treatment of all voters in his task description." Nonetheless, politicians prefer to cut the cake themselves, because it helps them get re-elected with less effort through an incumbency advantage:
The incumbent’s redistributive policies reveal his preferences, and voters correctly expect these policies to be continued if he is reelected. As they cannot observe what the opponent would do, voters face more uncertainty if voting for the opponent...This asymmetry creates an incumbency advantage: the voters are more willing to reappoint the incumbent even if he is incompetent... The incumbency advantage also reduces equilibrium effort.
An interesting associated implication is that "it is in the interest of politicians to pretend that they are ideologically biased in favor of specific groups or policies, even if in reality they are purely opportunistic. The ideology of politicians is like their brand name: it keeps voters attached to parties and reduces uncertainty about how politicians would act once in office."
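To see why risk aversion matters in the cake-splitting comparison, a deliberately stylized example helps (this is my illustration of the risk channel only, not Alesina and Tabellini's full model, which also involves effort and competence). Suppose a cake of size 1 must be divided between two equal-sized groups. A bureaucrat bound by a fair-treatment requirement delivers the split $(\tfrac{1}{2}, \tfrac{1}{2})$ for sure. A politician favors one group-- which one is unknown ex ante-- and delivers a share $s > \tfrac{1}{2}$ to it and $1 - s$ to the other. Ex ante, every voter expects $\tfrac{1}{2}$ of the cake under either arrangement, but under the politician the outcome is a lottery. With concave utility $u$, Jensen's inequality gives
\[
u\!\left(\tfrac{1}{2}\right) > \tfrac{1}{2}\,u(s) + \tfrac{1}{2}\,u(1 - s),
\]
so risk-averse voters strictly prefer the fair bureaucrat. The ranking weakens if fair treatment cannot actually be written into the bureaucrat's task description-- exactly the caveat in the passage quoted above.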

According to this theoretical model, we might be better off leaving both monetary and fiscal policy to independent bureaucratic agencies. But fiscal policy is inherently redistributive, and politicians prefer not to delegate redistributive tasks. "This might explain why delegation to independent bureaucrats is very seldom observed in fiscal policy, even if many fiscal policy decisions are technically very demanding."

Both Blinder and Alesina and Tabellini--writing in 1997 and 2007, respectively--made the distinction that tax policy, unlike monetary policy, is redistributive or "particularist." Since then, that distinction seems much less obvious. Back in 2012, Mark Spitznagel opined in the Wall Street Journal that "The Fed is transferring immense wealth from the middle class to the most affluent, from the least privileged to the most privileged." Boston Fed President Eric Rosengren countered that "The net effect [of recent Fed policy] is substantially weighted towards people that are borrowers not lenders, towards people that are unemployed versus people that are employed." Other Fed officials and academic economists are also paying increasing attention to the redistributive implications of monetary policy.

Monetary policymakers can no longer ignore the distributional effects of monetary policy-- and neither can voters and politicians. Alesina and Tabellini's model predicts that the more elected politicians recognize the "cake-splitting" aspect of monetary policy, the more they will want to take the task back for themselves. Expect stronger cries for "accountability." According to the model, however, the redistributive nature of monetary policy probably strengthens the argument for leaving it to independent technocrats. The caveat is that "the result may be reversed if the bureaucrat is unfair and implements a totally arbitrary redistribution." The Fed's role in redistributing resources strengthens its case for independence if and only if it takes equity concerns seriously.

Wednesday, March 4, 2015

Federal Reserve Communication with Congress

In 2003, Ben Bernanke described a central bank's communication strategy as "regular procedures for communicating with the political authorities, the financial markets, and the general public." The fact that there are three target audiences of monetary policy communication, with three distinct sets of needs and concerns, is an important point. Alan Blinder and coauthors note that most of the research on monetary policy communication has focused on communication with financial markets. In my working paper "Fed Speak on Main Street," I focus on communication with the general public. But with the recent attention on Congressional calls to audit or reform the Fed, communication with the third audience, political authorities, also merits attention.

Bernanke added that "a central bank's communications strategy, closely linked to the idea of transparency, has many aspects and many motivations." One such motivation is accountability. Federal Reserve communication with political authorities is contentious because of the tension that can arise between accountability and freedom from political pressure. As Laurence Meyer explained in 2000:
Even a limited degree of independence, taken literally, could be viewed as inconsistent with democratic ideals and, in addition, might leave the central bank without appropriate incentives to carry out its responsibilities. Therefore, independence has to be balanced with accountability--accountability of the central bank to the public and, specifically, to their elected representatives. 
It is important to appreciate, however, that steps to encourage accountability also offer opportunities for political pressure. The history of the Federal Reserve's relationship to the rest of government is one marked by efforts by the rest of government both to foster central bank independence and to exert political pressure on monetary policy.
It is worthwhile to take a step back and ask what is meant by accountability. Colloquially and in academic literature, the term accountability has become "an ever-expanding concept." Accountability does not mean that the Fed needs to please every member of Congress, or even some of them, all the time. If it did, there would be no point in having an independent central bank! So what does accountability mean?  A useful synonym is answerability. The Fed's accountability to Congress means the Fed must answer to Congress-- this requires, of course, that Congress ask something of the Fed. David Wessel explains that this can be a problem:
Congress is having a hard time fulfilling its responsibilities to hold the Fed accountable. Too few members of Congress know enough to ask good questions at hearings where the Fed chair testifies. Too many view hearings as a way to get themselves on TV or to score political points against the other party.
Accountability, in the sense of answerability, is a two-way street requiring effort on the parts of both the Fed and Congress. Recent efforts by Congress to impose "accountability" would relieve Congress of the more onerous part of its task. The Federal Reserve Accountability and Transparency Act introduced in 2014 would require that the Fed adopt a rules-based policy. The legislation states that "Upon determining that plans…cannot or should not be achieved, the Federal Open Market Committee shall submit an explanation for that determination and an updated version of the Directive Policy Rule.”
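For concreteness, the sort of "Directive Policy Rule" such legislation contemplates is usually benchmarked against a Taylor (1993)-type formula; a standard version of that formula (my illustration of the genre, not the bill's exact text) is
\[
i_t = r^* + \pi_t + 0.5\,(\pi_t - \pi^*) + 0.5\,\tilde{y}_t,
\]
where $i_t$ is the federal funds rate target, $r^*$ the equilibrium real rate, $\pi_t$ inflation, $\pi^*$ the inflation goal, and $\tilde{y}_t$ the output gap. Under the proposed legislation, departures from whatever rule the Fed adopts would have to be explained to Congress; under the status quo, the Fed explains its policy stance without committing to any single formula.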

In 1976, Senator Hubert Humphrey made a similar proposal: the president would submit recommendations for monetary policy, and the Federal Reserve Board of Governors would have to explain any proposed deviation within fifteen days. This proposal did not pass, but other legislation in the late 1970s did change the Federal Reserve's objectives and standards for accountability. Prompted by high inflation, the Federal Reserve Reform Act of 1977 made price stability an explicit policy goal. Representative Augustus Hawkins and Senator Humphrey introduced the Full Employment and Balanced Growth Act of 1978, also known as the Humphrey-Hawkins Act, which added a full employment goal and obligated the Fed Chair to make semiannual reports to Congress. It was signed into law by President Jimmy Carter on October 27, 1978.

The Humphrey-Hawkins Act, though initially resisted by FOMC members, did improve the Fed's accountability or answerability to Congress. The requirement of twice-yearly reports to Congress literally required the Fed Chair to answer Congress' questions (though likely, for a time, in "Fed Speak.") The outlining of the Fed's policy goals defined the scope of what Congress should ask about. In terms of the Fed's communication strategy with Congress, its format, broadly, is question-and-answer. Its content is the Federal Reserve's mandates. Its tone--clear or obfuscatory, helpful or hostile--has varied over time and across Fed officials and members of Congress. 

Since 1978, changes to the communication strategy, such as the announcement of a 2% long-run goal for PCE inflation in 2012, have attempted to facilitate the Fed's answerability to Congress. The proposal to require that the Fed follow a rules-based policy goes beyond the requirements of accountability. The Fed must be accountable for the outcomes of its policy, but that does not mean restricting the flexibility of its actions. Unusual or extreme economic conditions require discretion on the part of monetary policymakers, which they must be prepared to explain as clearly as possible.

Janet Yellen remarked in 2013 that "By the eve of the recent financial crisis, it was established that the FOMC could not simply rely on its record of systematic behavior as a substitute for communication--especially under unusual circumstances, for which history had little to teach" [emphasis added]. Imposing systematic behavior in the form of rules-based policy is an even poorer substitute. As monetary policy begins to normalize, Congress' role is to question the Fed, not to bully it.