As a macroeconomist, I mostly research concepts that are more traditionally associated with economics, like inflation and interest rates. But one of the great things about economics training, in my opinion, is that it is general enough to let you follow much of what is going on in other fields. It is always interesting for me to read papers or attend seminars in applied microeconomics to see the wide (and expanding) scope of the discipline.
Gary Becker won the Nobel Prize in 1992 "for having extended the domain of microeconomic analysis to a wide range of human behaviour and interaction, including nonmarket behaviour" and "to aspects of human behavior which had previously been dealt with by other social science disciplines such as sociology, demography and criminology." The Freakonomics books and podcast have gone a long way in popularizing this approach. But it is not without its critics, both within and outside the profession.
For all that the economic way of thinking and the quantitative tools of econometrics can add in addressing a boundless variety of questions, there is also much that our analysis and tools leave out. In areas like health or criminology, the assumptions and calculations that seem perfectly reasonable to an economist may seem anywhere from misguided to offensive to a medical doctor or criminologist. Roland Fryer's working paper on racial differences in police use of force, for example, was prominently covered with both praise and criticism.
Another NBER working paper, released this week by Jonathan de Quidt and Johannes Haushofer, is also pushing the boundaries of economics, arguing that "depression has not received significant attention in the economics literature." By depression, they are referring to major depressive disorder (MDD), not a particularly severe recession. While neither of the authors holds a medical degree, Haushofer holds doctorates in both economics and neurobiology. In "Depression for Economists," they build a model in which individuals choose to exert either high or low effort; depression is induced by a negative "shock" to an individual's belief about her return to high effort.
In the model, the individual's income depends on her effort, amount of sleep, and food consumption. Her utility depends on her sleep, food consumption, and non-food consumption. She maximizes utility given her belief about her return to effort, which she updates in a Bayesian manner. If her belief about her return to effort declines (synonymous in the model with becoming depressed), she exerts less labor effort. Her total (food and non-food) consumption and utility unambiguously decrease, leading to "depressed mood." In the extreme, she may reduce her labor effort to zero, at which point she would stop learning more about her return to effort and get stuck in a "poverty trap."
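To make this mechanism concrete, here is a minimal Python sketch of the belief-updating dynamic. It is not the authors' specification: the functional forms, parameter names (TRUE_RETURN, EFFORT_COST, SIGNAL_VAR), and values are my own illustrative assumptions.

```python
import random

# Illustrative parameters -- my assumptions, not the paper's calibration
TRUE_RETURN = 2.0   # actual return to high effort
EFFORT_COST = 1.0   # utility cost of exerting high effort
SIGNAL_VAR = 1.0    # noise in the income signals generated by working

def simulate(prior_mean, prior_var, periods=20, seed=0):
    """Agent exerts high effort iff her expected return exceeds the cost.
    High effort yields a noisy signal of the true return, incorporated via
    the normal-normal conjugate update; low effort yields no signal, so
    beliefs freeze -- the 'poverty trap'."""
    rng = random.Random(seed)
    mean, var = prior_mean, prior_var
    for _ in range(periods):
        if mean > EFFORT_COST:
            signal = rng.gauss(TRUE_RETURN, SIGNAL_VAR ** 0.5)
            precision = 1 / var + 1 / SIGNAL_VAR
            mean = (mean / var + signal / SIGNAL_VAR) / precision
            var = 1 / precision
    return round(mean, 2)

print(simulate(prior_mean=2.5, prior_var=1.0))  # learns the true return (~2.0)
print(simulate(prior_mean=0.5, prior_var=1.0))  # never works, stays at 0.5
```

The second agent's pessimistic prior stands in for the negative belief "shock": because she never exerts effort, she never observes her true return, which is the poverty trap in miniature.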
The depressed individual's sleeping and food consumption may either increase or decrease, as consumption motives become more important relative to production motives. In other words, she sleeps and eats closer to the amounts that she would choose if she cared only about the utility directly from sleeping and eating, and not about how her sleeping and eating choices affect her ability to produce.
While this result does match the empirical findings in the medical literature that depression may either reduce or increase sleep duration and lead to either over- or under-eating, it seems implausible to me that depressed individuals sleep ten or more hours a day because they just love sleeping, or lose their appetite because they don't enjoy food beyond its ability to help them be productive. I'm not an expert, but from what I understand there are physiological and chemical reasons for the change in sleep patterns and appetite that could be independent of a person's beliefs about their returns to labor effort.
However, the authors argue that an "advantage of our model is that it resonates with prominent psychological and psychiatric theories of depression, and the therapeutic approaches to which they gave rise." They refer in particular to "Charles Ferster, who argued that depression resulted from an overexposure to negative reinforcement and underexposure to positive reinforcement in the environment (Ferster 1973)...Ferster's account of the etiology of depression is in line with how we model depression here, namely as a consequence of exposure to negative shocks." They also refer to the work of psychiatrist Aaron Beck (1967), whose suggestion that depression arises from "distorted thinking" motivates the use of Cognitive Behavioral Therapy (CBT), a standard treatment for depression.
The authors note that "Our main goal in writing this paper was to give economists a starting point for thinking and writing about depression using the language of economics. We have therefore kept the model as simple as possible." They also steer clear of suggesting any policy implications (other than implicitly providing support for CBT). It will be fascinating to see whether and how the medical community responds, and also to hear from economists who have themselves experienced depression.
Monday, December 5, 2016
The Future is Uncertain, but So Is the Past
In a recently-released research note, Federal Reserve Board economists Alan Detmeister, David Lebow, and Ekaterina Peneva summarize new survey results on consumers' inflation perceptions. The well-known Michigan Survey of Consumers asks consumers about their expectations of future inflation (over the next year and over the next 5 to 10 years), but does not ask them what they believe inflation has been in recent years.
In many macroeconomic models, inflation perceptions should be nearly perfect. After all, inflation statistics are publicly available, and anyone should be able to access them. The Federal Reserve commissioned the University of Michigan's Survey Research Center to survey consumers about their perceptions of inflation over the past year and over the past 5- to 10-years, using analogous wording to the questions about inflation expectations. As you might guess, consumers lack perfect knowledge of inflation in the recent past. If you're like most people (which, by dint of reading an economics blog, you are probably not), you probably haven't looked up inflation statistics or read the financial news recently.
But more surprisingly, consumers seem just as uncertain, or even more so, about past inflation as about future inflation. Take a look at these histograms of inflation perceptions and expectations from the February 2016 survey data:
Source: December 5 FEDS Note
Compare Panel A to Panel C. Panel A shows consumers' perceptions of inflation over the past 5- to 10-years, and Panel C shows their expectations for the next 5- to 10-years. Both panels show a great deal of dispersion, or variation across consumers. But also notice the response heaping at multiples of 5%. In both panels, over 10% of respondents choose 5%, and you also see more 10% responses than either 9% or 11% responses. In a working paper, I show that this response heaping is indicative of high uncertainty. Consumers choose a 5%, 10%, 15%, etc. response to indicate high imprecision in their estimates of future inflation. So it is quite surprising that even more consumers choose the 10, 15, 20, and 25% responses for perceptions of past inflation than for expectations of future inflation.
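As a rough illustration of how such heaping might be quantified, here is a small Python sketch. The response lists are invented for illustration; the actual analysis would use the Michigan survey microdata.

```python
def heaping_share(responses, base=5):
    """Share of nonzero responses at multiples of `base` -- a crude proxy for
    the round-number heaping that signals respondent uncertainty."""
    rounded = [r for r in responses if r != 0 and r % base == 0]
    return len(rounded) / len(responses)

# Invented example responses (percent) -- NOT the actual survey microdata
perceptions = [2, 5, 5, 10, 3, 15, 5, 10, 25, 4, 5, 20]
expectations = [2, 3, 5, 2, 10, 3, 5, 4, 2, 5, 3, 1]

print(f"heaping, perceptions:  {heaping_share(perceptions):.2f}")  # 0.75
print(f"heaping, expectations: {heaping_share(expectations):.2f}")  # 0.33
```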
The response heaping at multiples of 5% is also quite substantial for short-term inflation perceptions (Panel B). Without access to the underlying data, I can't tell for sure whether it is more or less prevalent than for expectations of future short-term inflation, but it is certainly noticeable.
What does this tell us? People are just as unsure about inflation in the relatively recent past as they are about inflation in the near to medium-run future. And this says something important for monetary policymakers. A goal of the Federal Reserve is to anchor medium- to long-run inflation expectations at the 2% target. With strongly-anchored expectations, we should see most expectations near 2% with low uncertainty. If people are uncertain about longer-run inflation, it could either be that they are unaware of the Fed's inflation target, or aware but unconvinced that the Fed will actually achieve its target. It is difficult to say which is the case. The former would imply that the public needs to be better informed about economic concepts and the Fed, while the latter would imply that the Fed needs to improve its credibility among an already-informed public. Since perceptions are about as uncertain as expectations, this lends support to the idea that people are simply uninformed about inflation-- or that memory of economic statistics is relatively poor.
Thursday, November 10, 2016
Political Pressures on the Fed and the Trump Presidency
On Monday evening, Charles Weise gave a seminar at Haverford on "Political Pressures on Monetary Policy during the U.S. Great Inflation," a paper he published in 2012. In the paper, he details how Congress and the Presidents (especially Nixon) pressured the Fed, both directly and indirectly, to pursue loose monetary policy that contributed to the Great Inflation in the 1970s.
The paper highlights the fact that although the Fed is nominally independent, Congress and the President can influence the Fed's actions by threatening to restrict the Fed's independence. This is not necessarily a bad thing. One way to try to make the Fed accountable to the public is to make the Fed accountable to publicly-elected officials. This can be achieved by several (imperfect) means-- hearings and testimonies and other transparency requirements, the appointment process, and (threatened) legislation. Problems arise when the interests of the elected officials are not in line with the interests of the electorate. In the 1970s, for example, Nixon's interest in maintaining low unemployment at the cost of high and rising inflation was for the sake of political gain and neglected adverse long-run consequences. Problems can also arise when the interests of elected officials are in line with those of the public, but elected officials' understanding of monetary policy is severely flawed.
It was coincidental that this talk was the evening before Election Day. The candidates' views on the Fed got less attention than many other issues and aspects of the campaign, but they did come up from time to time. Donald Trump, for example, claimed that "We are in a very big, ugly bubble...The Fed is more political than Hillary Clinton.”
Now, a big question is what Trump's election will mean for the future of the Fed. Beyond the relatively minor issue of whether this unexpected election result will cause the Fed to postpone its next rate hike, the larger issues have to do with legislation and future appointments.
In the Great Recession and ever since, we have seen many calls and proposals for more accountability for the Fed from both sides of the political spectrum. Most of these have at least some merit, even if they are misguided to varying degrees. They stem from a recognition that the Fed is powerful, and that its actions affect the distribution of resources and the health of the global economy. But the types of legislation that Trump seems likely to support would drastically restrict the Fed's independence and discretion-- he has even mentioned a desire to return to the gold standard.
Moreover, no legislation designed to promote accountability can be effective unless monetary policymakers are well-qualified technocrats who can implement policy skillfully. Janet Yellen's term as Fed Chair ends in 2018, and Trump has suggested that he will not reappoint her. This would represent a departure from the pattern established by Obama's reappointment of Ben Bernanke, who was originally appointed by George W. Bush. Obama's reappointment of Bernanke signalled that the Fed Chair was a technocratic position, not a partisan one. Yellen, like Bernanke, is well-credentialed for her post. Vice Chair Stanley Fischer's term also ends in 2018, and there are two other open seats on the Board of Governors. Monetary policy is complex enough that even a well-intentioned policymaker without substantial knowledge and skill could spell trouble. A policymaker who is neither well-intentioned nor highly skilled would almost guarantee disaster.
Finally, monetary policy will interact with other economic policies. Lower long-run growth means lower natural interest rates. This means that we are already uncomfortably close to the zero lower bound, and almost certain to hit it again with the next recession. Severely restrictive trade and immigration policy will even further reduce the economy's capacity for growth, compounding this problem.
Saturday, October 15, 2016
Independence at the CFPB and the Fed
One of my major motivations in starting this blog a few years ago was to have a space to grapple with the topic of central bank independence and accountability. One of the most important things I have learned since then is that independence and accountability are highly multi-dimensional concepts; different institutions can be granted different types of independence, and can fail to be accountable in countless ways. As a corollary, nominal or de jure independence does not guarantee de facto independence. Likewise, an institution may be accountable in name only.
A recent ruling by the U.S. Court of Appeals for the District of Columbia about the independence of the Consumer Financial Protection Bureau (CFPB) highlights the complexity of these issues. The CFPB was created under the Dodd-Frank Act of 2010. On Tuesday, a three-judge panel declared that this agency's particular form of independence is unconstitutional. Most notably, the Director of the CFPB-- currently Richard Cordray-- is removable only by the President, and only for cause.
The petitioner in the case against the CFPB, the mortgage lender PHH Corporation, which was subject to a large fine from the CFPB, argued that the CFPB's structure violates Article II of the Constitution. The Appeals Court's decision provides some historical context:
"To carry out the executive power and be accountable for the exercise of that power, the President must be able to control subordinate officers in executive agencies. In its landmark decision in Myers v. United States, 272 U.S. 52 (1926), authored by Chief Justice and former President Taft, the Supreme Court therefore recognized the President’s Article II authority to supervise, direct, and remove at will subordinate officers in the Executive Branch.The decision goes on to add that "No head of either an executive agency or an independent agency operates unilaterally without any check on his or her authority. Therefore, no independent agency exercising substantial executive authority has ever been headed by a single person. Until now."
In 1935, however, the Supreme Court carved out an exception to Myers and Article II by permitting Congress to create independent agencies that exercise executive power. See Humphrey’s Executor v. United States, 295 U.S. 602 (1935). An agency is considered “independent” when the agency heads are removable by the President only for cause, not at will, and therefore are not supervised or directed by the President. Examples of independent agencies include well-known bodies such as the Federal Communications Commission, the Securities and Exchange Commission, the Federal Trade Commission, the National Labor Relations Board, and the Federal Energy Regulatory Commission... To help mitigate the risk to individual liberty, the independent agencies, although not checked by the President, have historically been headed by multiple commissioners, directors, or board members who act as checks on one another. Each independent agency has traditionally been established, in the Supreme Court’s words, as a “body of experts appointed by law and informed by experience."
Although the Federal Reserve, unlike the CFPB, has a seven-member Board of Governors, several aspects of their governance are similar: the CFPB Director, like the seven members of the Federal Reserve Board of Governors, is nominated by the President and approved by the Senate. The CFPB Director's term length is 5 years, compared to 14 years for the Governors-- but importantly, both have terms longer than the 4-year Presidential term. The Chair and Vice Chair of the Fed are nominated from the Governors by the President and approved by the Senate for a 4-year term. Both the CFPB Director and the Fed Chair are required to give semi-annual reports to Congress. See these resources for a more detailed comparison of the structure and governance of independent federal agencies.
I find it striking that the phrase individual liberty appears 32 times in the 110-page decision. The very first paragraph states, "This is a case about executive power and individual liberty. The U.S. Government’s executive power to enforce federal law against private citizens – for example, to bring criminal prosecutions and civil enforcement actions – is essential to societal order and progress, but simultaneously a grave threat to individual liberty."
Even though both the CFPB and the Fed have substantial financial regulatory authority, the discourse on Federal Reserve independence does not focus so heavily on liberty (I've barely come across the word at all in my readings on the subject); instead, it focuses on independence as a potential threat to accountability. As I have previously written, "the term accountability has become 'an ever-expanding concept,'" and one that is often not usefully defined. The same might be said for the term liberty. Still, the two terms have different connotations. Accountability requires that the institution carry out its responsibilities satisfactorily, while liberty is more about what the institution doesn't do.
Accountability is a key concept in the literature on delegation of tasks to technocrats or politicians. In "Bureaucrats or Politicians?," Alberto Alesina and Guido Tabellini (2007) build a model in which politicians are held accountable by their desire for re-election, while top-level bureaucrats are held accountable by "career concerns." The social desirability of delegating a task to an unelected bureaucrat depends on how the task affects the distribution of resources or advantages-- and thus, on the strength of interest-group political pressure. As Alan Blinder writes:
"Some public policy decisions have -- or are perceived to have -- mostly general impacts, affecting most citizens in similar ways. Monetary policy, for example...is usually thought of as affecting the whole economy rather than particular groups or industries. Other public policies are more naturally thought of as particularist, conferring benefits and imposing costs on identifiable groups...When the issues are particularist, the visible hand of interest-group politics is likely to be most pernicious -- which would seem to support delegating authority to unelected experts. But these are precisely the issues that require the heaviest doses of value judgments to decide who should win and lose. Such judgments are inherently and appropriately political. It's a genuine dilemma."The Federal Reserve's Congressional mandate is to promote price stability and maximum employment. Federal Reserve independence is intended to promote these objectives by alleviating political pressure to pursue overly-accomodative monetary policy. Of course, as we have seen in recent years, the interest-group politics of central banking are more nuanced than a simple desire by incumbents for inflation. Interest rate policy and inflation affect different segments of the population in different ways. The CFPB is supposed to enforce federal consumer financial laws and protect consumers in financial markets. The average benefits of the CFPB to individual consumers is probably fairly small, while the costs of regulation and enforcement to a smaller number of financial companies is large. This asymmetry means that political pressure on a financial regulator like the CFPB (or on the Fed, in its regulatory role) is likely to come from the side of the financial institutions. In Blinder's logic, this confers a large value on the delegation of authority to technocrats, while at the same time raising the importance of accountability for political legitimacy.
Tyler Cowen writes, "I say the regulatory state already has too much arbitrary power, and this [Appeals Court ruling] is a (small) move in the right direction." It is not the reduction of the regulatory state's power that will necessarily enhance either accountability or liberty, but the reduction of the arbitrariness of the regulatory power. This can come about through transparency (which the Fed typically cites as key to the maintenance of accountability), making policies and enforcement more predictable and less retroactive and reducing uncertainty. I don't know that the types of governance changes implied by the Appeals Court ruling (if it holds) will substantially affect the CFPB's transparency or make it any less capable of pursuing its goals, as I tend to agree with Senator Elizabeth Warren's interpretation that the ruling will only require “a small technical tweak.”
Tuesday, September 27, 2016
Why are Long-Run Inflation Expectations Falling?
Randal Verbrugge and I have just published a Federal Reserve Bank of Cleveland Economic Commentary called "Digging into the Downward Trend in Consumer Inflation Expectations." The piece focuses on long-run inflation expectations--expectations for the next 5 to 10 years-- from the Michigan Survey of Consumers. These expectations have been trending downward since the summer of 2014, around the same time as oil and gas prices started to decline. It might seem natural to conclude that falling gas prices are responsible for the decline in long-run inflation expectations. But we suggest that this may not be the whole story.
First of all, gas prices have exhibited two upward surges since 2014, neither of which was associated with a rise in long-run inflation expectations. Second, the correlation between gas prices and inflation expectations (a relationship I explore in much more detail in this working paper) appears too weak to explain the size of the decline. So what else could be going on?
If you look at the histogram in Figure 2, below, you can see the distribution of inflation forecasts that consumers give in three different time periods: an early period, the first half of 2014, and the past year. The shaded gray bars correspond to the early period, the red bars to 2014, and the blue bars to the most recent period. Notice that there is some degree of "response heaping" at multiples of 5%. In another paper, I use this response heaping to help quantify consumers' uncertainty about long-run inflation. The idea is that people who are more uncertain about inflation, or have a less precise estimate of what it should be, tend to report a round number-- this is a well-documented tendency in how people communicate imprecision.
The response heaping has declined over time, corresponding to a fall in my consumer inflation uncertainty index for the longer horizon. As we detail in the Commentary, this fall in uncertainty helps explain the decline in the measured median inflation forecast. This is a consequence of the fact that common round forecasts, 5% and 10%, are higher than common non-round forecasts.
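Here is a toy numerical example of this composition effect, with made-up response values: if round responses tend to be high, a shrinking round-number share pulls the median down even if no individual revises her forecast.

```python
import statistics

# Invented forecasts: non-round responses cluster low; round ones (5, 10) are high
non_round = [1, 2, 2, 3, 3]
round_resp = [5, 5, 10]

early = non_round + round_resp * 2   # heavy heaping early in the sample
late = non_round * 2 + round_resp    # heaping declines later

print(statistics.median(early))  # 5 -- pulled up by the round responses
print(statistics.median(late))   # 3 -- falls as the round share shrinks
```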
There is also a notable change in the distribution of non-round forecasts over time. The biggest change is that 1% forecasts for long-run inflation are much more common than previously (see how the blue bar is higher than the red and gray bars for 1% inflation). I think this is an important sign that some consumers (probably those that are more informed about the economy and inflation) are noticing that inflation has been quite low for an extended period, and are starting to incorporate low inflation into their long-run expectations. More consumers expect 1% inflation than 2%.
Friday, September 23, 2016
The Economics of Crime
On September 28, the Economics Department at Haverford College will hold its annual alumni forum. The topic this year is "The Economics of Crime and Incarceration." Our panelists will be Eric Sterling (Haverford class of '73), Executive Director of the Criminal Justice Policy Foundation, and Mark Kleiman (class of '72), Director of the Crime and Justice Program at New York University’s Marron Institute of Urban Management. In anticipation of the event, especially for any Haverford students who might be reading my blog, I wanted to do a quick survey of the literature on the economics of crime and some of the major topics and themes in this literature.
Why are crime and incarceration economics topics? In other words, given that there is an entire field--criminology--devoted to the study of crime, why are economists studying it as well? Gary Becker suggested in 1968 that "a useful theory of criminal behavior can dispense with special theories of anomie, psychological inadequacies, or inheritance of special traits and simply extend the economist's usual analysis of choice" (p. 170). In other words, he believed that criminal behavior could be modeled as a rational response to incentives; that the private and social costs of crime, and the costs of apprehension and conviction, could be quantified; and that a socially "optimal" (likely non-zero) level of crime could be computed.
How does the criminal justice system affect the incentives for crime, and, in turn, criminal behavior? Causal effects are quite challenging to study empirically. For example, consider the question of whether a larger police force deters crime. Suppose the data shows a positive correlation between crime rates and size of police force. While it is possible that larger police forces cause more crime, it is also possible that causality runs in the reverse direction: cities with higher crime rates hire more police. Steven Levitt, whose "Freakonomics" fame came in part from his clever approaches to these types of questions, has looked for "instruments," or ways to identify exogenous variations in criminal justice policies.
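For readers unfamiliar with the approach, here is a bare-bones two-stage least squares sketch on simulated data. The data-generating process and numbers are invented for illustration; the instrument plays the role of something like Levitt's electoral-cycle variation in police hiring.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Simulated data: an unobserved crime shock drives crime up AND leads the
# city to hire more police, so police levels are endogenous. The instrument
# shifts police for reasons unrelated to crime (e.g., electoral cycles).
instrument = rng.normal(size=n)
crime_shock = rng.normal(size=n)
police = 0.5 * instrument + 0.4 * crime_shock + rng.normal(size=n)
crime = -1.0 * police + 2.0 * crime_shock + rng.normal(size=n)  # true effect: -1.0

# OLS is biased toward zero because police respond to crime shocks
ols = np.cov(police, crime)[0, 1] / np.var(police, ddof=1)

# With a single instrument, 2SLS reduces to cov(z, y) / cov(z, x)
iv = np.cov(instrument, crime)[0, 1] / np.cov(instrument, police)[0, 1]

print(f"OLS estimate: {ols:.2f}")  # around -0.4, badly biased
print(f"IV estimate:  {iv:.2f}")   # close to the true -1.0
```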
It is also difficult to identify causal effects of incarceration on criminal recidivism and other outcomes. Prison sentences are not "randomly assigned." So if we see that people who spend longer in prison are more likely to commit a second crime, we can't say whether the extra time in prison had a causal influence on the recidivism. A recent working paper by Manudeep Bhuller, Gordon B. Dahl, Katrine V. Løken, and Magne Mogstad exploits the random assignment of criminal cases in Norway to judges who differ in their stringency of sentencing. They find that imprisonment discourages further criminal behavior. This decline in recidivism is driven by people who were unemployed before incarceration, and who participated in programs in prison aimed at increasing employability. The authors conclude that "Contrary to the widely embraced 'nothing works' doctrine, these findings demonstrate that time spent in prison with a focus on rehabilitation can indeed be preventive." But since not all prison systems have a focus on rehabilitation, they add that "It is important to recognize that our results do not imply that prison is necessarily preventative in all settings. While this paper establishes an important proof of concept, evidence from other settings or populations would be useful to assess the generalizability of our findings."
Some dimensions of crime can be difficult to measure. Many crimes go unreported or undetected. Black market activity, by its very nature, is hidden. Economists have also tried to come up with ways to measure illegal production or trade. See, for example, this study of elephant poaching and ivory smuggling. Online black markets, and other types of crime and fraud committed online, are also the subject of a growing economics literature.
Network economics is also applicable to the study of crime, since it can help with understanding the formation and workings of criminal networks.
Studies of the economics of crime are nearly always controversial. In part, this is because criminal justice itself is so controversial, so whenever an economic study draws implications about criminal justice, it is sure to find some resistance. In addition, many people find Becker's description of crime as a purely rational response to incentives to be lacking. Recall, for example, the controversy surrounding Roland Fryer's recent working paper on racial differences in police use of force. I think part of what people were uncomfortable with was the incorporation of racial discrimination into the utility function, and part was the distinction he made between "statistical discrimination" and racial bias.
I anticipate an interesting discussion on Wednesday and will try to update the blog with my impressions following the forum.
Sunday, August 28, 2016
The Fed on Facebook
The Federal Reserve Board of Governors has now joined you, your grandma, and 1.7 billion of your closest friends on Facebook. A press release on August 18 says that the Fed's Facebook page aims at "increasing the accessibility and availability of Federal Reserve Board news and educational content." This news is especially interesting to me, since a chapter of my dissertation-- now my working paper "Fed Speak on Main Street"-- includes some commentary on the Federal Reserve's use of social media.
When I wrote the paper, the Board of Governors did not have a Facebook page, though the Regional Federal Reserve Banks did. I noted that the most popular of these, the San Francisco Fed's page, had around 5000 "likes" (compared to 4.5 million for the White House). I wrote in my conclusion that "The Fed has begun to use interactive new media such as Facebook, Twitter, and YouTube, but its ad hoc approach to these platforms has resulted in a relatively small reach. Federal Reserve efforts to communicate via these media should continue to be evaluated and refined."
About a year later, the San Francisco Fed is up to around 6000 "likes," while the brand new Board of Governors page already has over 14,000. Only a handful of people post comments on the Regional Fed pages, and they are relatively benign. "Great story! I loved it!" and the SF Fed's response, "So glad you liked it, Ellen!" are the only comments below one recent story. Even critical comments are fairly measured: "adding more money into RE market only inflates housing prices, & creates more deserted neighborhoods," following a story on affordable housing in the Bay Area.
On the Board of Governors' page, however, hundreds of almost exclusively negative and outraged comments follow every piece of content. Several news stories describe the page as overrun by "trolls." "Tell me more about the private meeting on Jekyll island and the plans for public prosperity that some of the worlds richest and most powerful bankers made in secret, please," writes a commenter following a post about who owns the Fed.
It is not too surprising that the Board's page has drawn so much more attention than those of the reserve banks. One of the biggest recurring debates, dating to before the founding of the Fed, concerns the appropriate degree of centralization of power. The Fed's unusual structure reflects a string of compromises that leaves many unsatisfied. The Board in Washington, to many of the Fed's critics, represents unappealing centralization. To be sure, many of the commenters are likely unaware of the Fed's structure, and maybe of the existence of the regional Federal Reserve Banks. They know only to blame "the Fed," which to them is synonymous with the Board of Governors.
In my paper, I look at data from polls that have asked people a variety of questions about the Fed and the Fed Chair. Polls that ask people about who they credit or blame for economic performance appear in the table below. Most people don't think to blame the Fed for economic problems. If asked explicitly whether the Fed should be blamed, many say yes, but many others are unsure. Commenters on the Facebook page are not a representative sample of the population, of course. They are the ones who do blame the Fed.
Arguably, the negative attention on the Fed Board's page is better than no attention at all. As long as they don't start censoring negative comments-- and maybe even consider responding to some common concerns in press conferences or speeches?-- I think this could actually help their reputation for transparency and accountability. It will also be interesting to see whether the rate of interaction with the page dwindles after the novelty wears off.
Tuesday, August 16, 2016
More Support for a Higher Inflation Target
Ever since the FOMC announcement in 2012 that 2% PCE inflation is consistent with the Fed's price stability mandate, economists have questioned whether the 2% target is optimal. In 2013, for example, Laurence Ball made the case for a 4% target. Two new NBER working papers out this week each approach the topic of the optimal inflation target from different angles. Both, I think, can be interpreted as supportive of a somewhat higher target-- or at least of the idea that moderately higher inflation has greater benefits and smaller costs than conventionally believed.
The first, by Marc Dordal-i-Carreras, Olivier Coibion, Yuriy Gorodnichenko, and Johannes Wieland, is called "Infrequent but Long-Lived Zero-Bound Episodes and the Optimal Rate of Inflation." One benefit of a higher inflation target is to reduce the occurrence of zero lower bound (ZLB) episodes, so understanding the welfare costs of these episodes is important in calculating an optimal inflation target. The authors explain that in standard models with a ZLB, normally-distributed shocks result in short-lived ZLB episodes. This is in contrast with the reality of infrequent but long-lived ZLB episodes. They build models that can generate long-lived ZLB episodes and show that welfare costs of ZLB episodes increase steeply with duration; 8 successive quarters at the ZLB is costlier than two separate 4-quarter episodes.
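To see why the persistence of shocks matters for episode length, here is a small simulation sketch. The AR(1) process and parameters are my own illustrative choices, not the paper's model: more persistent shocks generate much longer spells below the bound.

```python
import numpy as np

def zlb_spells(persistence, periods=200_000, seed=0):
    """Simulate a stand-in natural rate r_t = persistence * r_{t-1} + shock
    with i.i.d. normal shocks, and record the lengths of consecutive spells
    with r_t < 0 (a crude proxy for ZLB episodes)."""
    rng = np.random.default_rng(seed)
    r, spells, current = 1.0, [], 0
    for shock in rng.normal(0.0, 1.0, periods):
        r = persistence * r + shock
        if r < 0:
            current += 1
        elif current:
            spells.append(current)
            current = 0
    return spells

for rho in (0.5, 0.95):
    s = zlb_spells(rho)
    print(f"rho={rho}: mean spell = {np.mean(s):.1f} periods, longest = {max(s)}")
```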
If ZLB episodes are costlier, it makes sense to have a higher inflation target to reduce their frequency. The authors note, however, that the estimates of the optimal target implied by their models are very sensitive to modeling assumptions and calibration:
"We find that depending on our calibration of the average duration and the unconditional frequency of ZLB episodes, the optimal inflation rate can range from 1.5% to 4%. This uncertainty stems ultimately from the paucity of historical experience with ZLB episodes, which makes pinning down these parameters with any degree of confidence very difficult. A key conclusion of the paper is therefore that much humility is called for when making recommendations about the optimal rate of inflation since this fundamental data constraint is unlikely to be relaxed anytime soon."The second paper, by Emi Nakamura, Jón Steinsson, Patrick Sun, and Daniel Villar, is called "The Elusive Costs of Inflation: Price Dispersion during the U.S. Great Inflation." This paper notes that in standard New Keynesian models with Calvo pricing, one of the main welfare costs of inflation comes from inefficient price dispersion. When inflation is high, prices get further from optimal between price resets. This distorts the allocative role of prices, as relative prices no longer accurately reflect relative costs of production. In a standard New Keynesian model, the implied cost of this reduction in production efficiency is about 10% if you move from 0% inflation to 12% inflation. This is huge-- an order of magnitude greater than the welfare costs of business cycle fluctuations in output. This is why standard models recommend a very low inflation target.
Empirical evidence on inefficient price dispersion is sparse, since inflation has fluctuated relatively little over the past few decades, the period for which BLS microdata on consumer prices are available. Nakamura et al. undertook the arduous task of extending the BLS microdataset back to 1977, encompassing higher-inflation episodes. Calculating price dispersion within a category of goods can be problematic, because price dispersion may arise from differences in quality or features of the goods. The authors instead look at the absolute size of price changes, explaining, "Intuitively, if inflation leads prices to drift further away from their optimal level, we should see prices adjusting by larger amounts when they adjust. The absolute size of price adjustments should reveal how far away from optimal the adjusting prices had become before they were adjusted. The absolute size of price adjustment should therefore be highly informative about inefficient price dispersion."
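This intuition can be illustrated with a stylized Calvo-pricing simulation; the reset probability, shock size, and setup below are my own assumptions, not the authors' estimator. In the model, higher trend inflation makes the optimal price drift faster between resets, so adjustments, when they happen, are larger. This is the model prediction that the paper takes to the data.

```python
import numpy as np

def mean_abs_price_change(annual_inflation, reset_prob=0.1, months=120_000, seed=0):
    """Calvo-style sketch: a firm's optimal log price drifts up with trend
    inflation (plus idiosyncratic noise); the firm resets to it with fixed
    probability each month. Returns the mean absolute log price change
    conditional on adjusting."""
    rng = np.random.default_rng(seed)
    monthly_pi = np.log(1 + annual_inflation) / 12
    optimal, price, changes = 0.0, 0.0, []
    for _ in range(months):
        optimal += monthly_pi + rng.normal(0.0, 0.02)
        if rng.random() < reset_prob:
            changes.append(abs(optimal - price))
            price = optimal
    return np.mean(changes)

for pi in (0.02, 0.12):
    print(f"{pi:.0%} inflation: mean |price change| = {mean_abs_price_change(pi):.3f}")
```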
They find that the mean absolute size of price changes is fairly constant from 1977 to the present, and conclude that "There is, thus, no evidence that prices deviated more from their optimal level during the Great Inflation period when inflation was running at higher than 10% per year than during the more recent period when inflation has been close to 2% per year. We conclude from this that the main costs of inflation in the New Keynesian model are completely elusive in the data. This implies that the strong conclusions about optimality of low inflation rates reached by researchers using models of this kind need to be reassessed."
Wednesday, July 27, 2016
Guest Post by Alex Rodrigue: The Fed and Lehman
The following is a guest contribution by Alex Rodrigue, a math and economics major at Haverford College and my fantastic summer research assistant. This post, like many others I have written, discusses an NBER working paper, this one by Laurence Ball. Some controversy arose out of the media coverage of Roland Fryer's recent NBER working paper on racial differences in police use of force, which I also covered on my blog, because the working paper had not yet undergone peer review. I feel comfortable discussing working papers since I am not a professional journalist and am capable of discussing methodological and other limitations of research. The working paper Alex will discuss was, like the Fryer paper, covered in the New York Times. I don't think there is a clear-cut criterion for whether a newspaper should report on a working paper or not--certainly the criteria should be more stringent for the NYT than for a blog--but in the case of the Ball paper, there is no question that the coverage was merited.
In his recently released NBER working paper, The Fed and Lehman Brothers: Introduction and Summary, Professor Laurence Ball of Johns Hopkins University summarizes his longer work concerning the actions taken by the Federal Reserve when Lehman Brothers experienced financial difficulties in 2008. The primary questions Professor Ball seeks to answer are why the Federal Reserve let Lehman Brothers fail, and whether explanations for this decision given by Federal Reserve officials, specifically those provided by Chairman Ben Bernanke, hold up to scrutiny. I was fortunate enough to speak with Professor Ball about this research, along with a number of other Haverford students and economics professors, including the author of this blog, Professor Carola Binder.
Professor Ball’s commitment to unearthing the truth about the Lehman Brothers bankruptcy and the Fed’s response is evidenced by the thoroughness of his research, including his analysis of the convoluted balance sheets of Lehman Brothers and his investigation of all statements and testimonies of Fed officials and Lehman Brothers executives. Professor Ball even filed a Freedom of Information Act lawsuit against the Board of Governors of the Federal Reserve in an attempt to acquire all available documents related to his work. Although the suit was unsuccessful, his commitment to exhaustive research allowed for a comprehensive, compelling argument against the Federal Reserve’s justification of its actions in the wake of Lehman Brothers’ financial distress.
Among other investigations into the circumstances of Lehman Brothers’ failure, Ball analyzes the legitimacy of claims that Lehman Brothers lacked sufficient collateral for a legal loan from the Federal Reserve. By studying the balance sheets of Lehman Brothers from the period prior to its bankruptcy, Ball finds “Lehman’s available collateral exceeds its maximum liquidity needs by $115 billion, or about 25%”, meaning that the Fed could have offered the firm a legal, secured loan. This finding directly contradicts Chairman Ben Bernanke’s explanations for the Fed’s decision, calling into question the legitimacy of the Fed’s treatment of the firm.
If the given explanation for the Fed’s refusal to help Lehman Brothers is invalid, then what explanation is correct? Ball suggests Treasury Secretary Henry Paulson’s involvement in negotiations with the institution at the Federal Reserve Bank of New York, and his hesitance to be known as “Mr. Bailout,” as a possible reason for the Fed’s behavior. Paulson’s involvement in the case seems unusual to Professor Ball, especially because his position as Treasury Secretary gave him “no legal authority over the Fed’s lending decisions.” He also cites the failure of Paulson and Fed officials to anticipate the destructive effects of Lehman’s failure as another explanation for the Fed’s actions.
When asked about the future of Lehman Brothers had the Fed offered the loans necessary for its survival, Ball claims that the firm may have survived a bit longer, or at least long enough to have wound down in a less destructive manner. He believes the Fed’s treatment of Lehman had less to do with the specific financial circumstances of the firm, and more with the timing of its collapse. In fact, Professor Ball finds that “in lending to Bear Stearns and AIG, the Fed took on more risk than it would have if it rescued Lehman.” Around the time Lehman Brothers reached out for assistance, Paulson “had been stung by criticism of the Bear Stearns rescue and the government takeovers of Fannie Mae and Freddie Mac.” If Lehman had failed before Fannie Mae and Freddie Mac or AIG, then maybe the firm would have received the loans it needed to survive.
The failure of Lehman Brothers was not without consequence. In discussion, Professor Ball cited a recent NYT article about his work, specifically mentioning his agreement with its assertion that the Fed’s decision to let Lehman Brothers fail worsened the Great Recession, contributed to public disillusionment with the government’s involvement in the financial sector, and potentially led to the rise of “Trumpism” today.
Thursday, July 21, 2016
Inflation Uncertainty Update and Rise in Below-Target Inflation Expectations
In my working paper "Measuring Uncertainty Based on Rounding: New Method and Application to Inflation Expectations," I develop a new measure of consumers' uncertainty about future inflation. The measure is based on a well-documented tendency of people to use round numbers to convey uncertainty or imprecision across a wide variety of contexts. As I detail in the paper, a strikingly large share of respondents on the Michigan Survey of Consumers report inflation expectations that are a multiple of 5%. I exploit variation over time in the distribution of survey responses (in particular, the amount of "response heaping" around multiples of 5) to create inflation uncertainty indices for the one-year and five-to-ten-year horizons.
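For intuition, here is a minimal sketch of the kind of heaping statistic that underlies the index. The toy responses and the simple multiples-of-5 share are simplifying assumptions on my part; the estimator in the paper is more involved than this raw share:

```python
import numpy as np

# Hypothetical one-year-ahead inflation expectations (percent) from one
# survey wave. These values are invented for illustration.
responses = np.array([2, 5, 0, 10, 3, 5, 1, 15, 2, 5, 0, 25, 4, 10, 2])

# Crude heaping proxy: among nonzero responses, the share that are
# multiples of 5. More heaping is read as more uncertainty.
nonzero = responses[responses != 0]
heaping_share = np.mean(nonzero % 5 == 0)
print(f"Share of nonzero responses at multiples of 5: {heaping_share:.2f}")
```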
As new Michigan Survey data becomes available, I have been updating the indices and posting them here. I previously blogged about the update through November 2015. Now that a few more months of data are publicly available, I have updated the indices through June 2016. Figure 1, below, shows the updated indices. Figure 2 zooms in on more recent years and smooths with a moving average filter. You can see that short-horizon uncertainty has been falling since its historical high point in the Great Recession, and long-horizon uncertainty has been at an historical low.
Figure 1: Consumer inflation uncertainty index developed in Binder (2015) using data from the University of Michigan Survey of Consumers. To download updated data, visit https://sites.google.com/site/inflationuncertainty/.
Figure 2: Consumer inflation uncertainty index (centered 3-month moving average) developed in Binder (2015) using data from the University of Michigan Survey of Consumers. To download updated data, visit https://sites.google.com/site/inflationuncertainty/.
The change in response patterns from 2015 to 2016 is quite interesting. Figure 3 shows histograms of the short-horizon inflation expectation responses given in 2015 and in the first half of 2016. The brown bars show the share of respondents in 2015 who gave each response, and the black lines show the share in 2016. For both years, heaping at multiples of 5 is apparent when you observe the spikes at 5 (but not 4 or 6) and at 10 (but not 9 or 11). However, it is less sharp than in other years when the uncertainty index was higher. But also notice that in 2016, the share of 0% and 1% responses rose and the share of 2, 3, 4, 5, and 10% responses fell relative to 2015.
Some respondents take the survey twice with a 6-month gap, so we can see how people switch their responses. Of the respondents who chose a 2% forecast in the second half of 2015 (those who were possibly aware of the 2% target), 18% switched to a 0% forecast and 24% switched to a 1% forecast when they took the survey again in 2016. The rise in 1% responses seems most noteworthy to me-- are people finally starting to notice slightly-below-target inflation and incorporate it into their expectations? I think it's too early to say, but worth tracking.
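As a rough illustration of that panel calculation, here is a sketch with invented data; the actual Michigan Survey rotating panel is far larger and the published shares use its survey weights:

```python
import pandas as pd

# Each row is a respondent observed in both waves (values in percent).
panel = pd.DataFrame({
    "resp_2015H2": [2, 2, 2, 2, 5, 0, 3],
    "resp_2016H1": [0, 1, 1, 2, 5, 0, 1],
})

# Among those forecasting 2% in late 2015, where did they go in 2016?
second_wave = panel.loc[panel["resp_2015H2"] == 2, "resp_2016H1"]
switch_shares = second_wave.value_counts(normalize=True).sort_index()
print(switch_shares)  # shares switching to 0%, 1%, or staying at 2%
```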
Figure 3: Created by Binder with data from University of Michigan Survey of Consumers
Monday, July 11, 2016
Racial Differences in Police Use of Force
In an NBER working paper released today, Roland Fryer, Jr. uses the NYPD Stop, Question and Frisk database and the Public Police Contact Survey to conduct "An Empirical Analysis of Racial Differences in Police Use of Force." The paper also uses data that Fryer and his students coded from police reports in Houston, Austin, Dallas, Los Angeles, and several parts of Florida. The paper is worth reading in its entirety, and is also the subject of a New York Times article, which summarizes the main findings more thoroughly than I will do here.
Fryer estimates odds ratios to measure racial disparities in various types of outcomes. An odds ratio of 1 would mean that whites and blacks faced the same odds, while an odds ratio of greater than 1 for blacks would mean that blacks were more likely than whites to receive that outcome. These odds ratios can be estimated with or without controlling for other variables. One outcome of interest is whether the police used any amount of force at the time of interaction. Panel A of the figure below shows the odds ratio by hour of the day. The point estimate is always above 1, and the 95% confidence interval is almost always above 1, meaning blacks are more likely to have force used against them than whites (and so are Hispanics). This disparity increases during daytime hours, with point estimates nearing 1.4 around 10 a.m.
Panel B shows that the average use of force against both blacks and whites peaks at around 4 a.m. and is lowest around 8 a.m. The racial gap is present at all hours, but largest in the morning and early afternoon.
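For readers unfamiliar with odds ratios, here is a minimal sketch of the raw calculation with made-up counts. These numbers are not from Fryer's data, and his estimates come from logistic regressions, with and without controls, rather than this simple 2x2 computation:

```python
import numpy as np

# Fabricated counts of stops in which any force was or was not used.
force_black, no_force_black = 300, 4700
force_white, no_force_white = 200, 4800

# Odds of experiencing force for each group, and their ratio.
odds_black = force_black / no_force_black
odds_white = force_white / no_force_white
odds_ratio = odds_black / odds_white   # > 1: force more likely for blacks

# Conventional 95% confidence interval via the log odds ratio.
se_log_or = np.sqrt(1/force_black + 1/no_force_black +
                    1/force_white + 1/no_force_white)
ci_low, ci_high = np.exp(np.log(odds_ratio) +
                         np.array([-1.96, 1.96]) * se_log_or)
print(f"OR = {odds_ratio:.2f}, 95% CI [{ci_low:.2f}, {ci_high:.2f}]")
```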
Fryer builds a model to help interpret whether the disparities evident in the data represent "statistical" or "taste-based" discrimination. Statistical discrimination would result if police used race as a signal for likelihood of compliance or likelihood of having a weapon, whereas taste-based discrimination would be ingrained in officers' preferences. The data are inconsistent with solely statistical discrimination: "the marginal returns to compliant behavior are the same for blacks and whites, but the average return to compliance is lower for blacks – suggestive of a taste-based, rather than statistical, discrimination."
Fryer notes that his paper enters "treacherous terrain" including, but not limited to, data reliability. The oversimplifications and cold calculations that necessarily accompany economic models never tell the whole story, but can nonetheless promote useful debate. For example, since Fryer finds racial disparities in police use of violence but not shootings, he notes that "To date, very few police departments across the country either collect data on lower level uses of force or explicitly punish officers for misuse of these tactics...Many arguments about police reform fall victim to the 'my life versus theirs, us versus them' mantra. Holding officers accountable for the misuse of hands or pushing individuals to the ground is not likely a life or death situation and, as such, may be more amenable to policy change."
Wednesday, July 6, 2016
Estimation of Historical Inflation Expectations
The final version of my paper "Estimation of Historical Inflation Expectations" is now available online in the journal Explorations in Economic History. (Ungated version here.)
My paper grew out of a chapter in my dissertation. I became interested in inflation expectations in the Great Depression after serving as a discussant for a paper by Andy Jalil and Gisela Rua on "Inflation Expectations and Recovery from the Depression in 1933: Evidence from the Narrative Record." I also remember being struck by Christina Romer and David Romer's (2013, p. 68) remark that a whole “cottage industry” of research in the 1990s was devoted to the question of whether the deflation of 1930-32 was anticipated.
I found it interesting to think about why different papers came to different estimates of inflation expectations in the Great Depression by examining the methodological issues around estimating expectations when direct survey or market measures are not available. I later broadened the paper to consider the range of estimates of inflation expectations in the classical gold standard era and the hyperinflations of the 1920s.
A lot of my research focuses on contemporary inflation expectations, mostly using survey-based measures. Some of the issues that arise in characterizing historical expectations are still relevant even when survey or market-based measures of inflation expectations are readily available--issues of noise, heterogeneity, uncertainty, time-varying risk premia, etc. I hope this piece will also be useful to people interested in current inflation expectations in parts of the world where survey data is unreliable or nonexistent, or where markets in inflation-linked assets are underdeveloped.
What I enjoyed most about writing this paper was trying to determine and formalize the assumptions that various authors used to form their estimates, even when these assumptions weren't laid out explicitly. I also enjoyed conducting my first meta-analysis (thanks to the recommendation of the referee and editor). I found T. D. Stanley's JEL article on meta-analysis to be a useful guide.
Abstract: Expected inflation is a central variable in economic theory. Economic historians have estimated historical inflation expectations for a variety of purposes, including studies of the Fisher effect, the debt deflation hypothesis, central bank credibility, and expectations formation. I survey the statistical, narrative, and market-based approaches that have been used to estimate inflation expectations in historical eras, including the classical gold standard era, the hyperinflations of the 1920s, and the Great Depression, highlighting key methodological considerations and identifying areas that warrant further research. A meta-analysis of inflation expectations at the onset of the Great Depression reveals that the deflation of the early 1930s was mostly unanticipated, supporting the debt deflation hypothesis, and shows how these results are sensitive to estimation methodology.
This paper is part of a new "Surveys and Speculations" feature in Explorations in Economic History. Recent volumes of the journal open with a Surveys and Speculations article, where "The idea is to combine the style of JEL [Journal of Economic Literature] articles with the more speculative ideas that one might put in a book – producing surveys that can help to guide future research. The emphasis can either be on the survey or the speculation part." Other examples include "What we can learn from the early history of sovereign debt" by David Stasavage, "Urbanization without growth in historical perspective" by Remi Jedwab and Dietrich Vollrath, and "Surnames: A new source for the history of social mobility" by Gregory Clark, Neil Cummins, Yu Hao, and Dan Diaz Vidal. The referee and editorial reports were extremely helpful, so I really recommend this if you're looking for an outlet for a JEL-style paper with economic history relevance.
Friday, June 17, 2016
The St. Louis Fed's Regime-Based Approach
St. Louis Federal Reserve President James Bullard today presented “The St. Louis Fed’s New Characterization of the Outlook for the U.S. Economy.” This is a change in how the St. Louis Fed thinks about medium- and longer-term macroeconomic outcomes and makes recommendations for the policy path.
“The hallmark of the new narrative is to think of medium- and longer-term macroeconomic outcomes in terms of regimes. The concept of a single, long-run steady state to which the economy is converging is abandoned, and is replaced by a set of possible regimes that the economy may visit. Regimes are generally viewed as persistent, and optimal monetary policy is viewed as regime dependent. Switches between regimes are viewed as not forecastable.”
Bullard describes three “fundamentals” that characterize which regime the economy is in: productivity growth (high or low), the real return on short-term government debt (high or low), and the state of the business cycle (recession or not). We are currently in a low-productivity growth, low rate, no-recession regime. The St. Louis Fed’s forecasts for the next 2.5 years are made with the assumption that we will stay in such a regime over the forecast horizon. They forecast real output growth of 2%, 4.7% unemployment, and 2% trimmed-mean PCE inflation.
As an example of why using this regime-based forecasting approach matters, imagine that the economy is in a low productivity growth regime in which the long-run growth rate is 2%, and that under a high productivity growth regime, the long-run growth rate would be 4%. Suppose you are trying to forecast the growth rate a year from now. One approach would be to come up with an estimate of the probability P that the economy will have switched to the high productivity regime, then estimate a growth rate of G1=(1-P)*2%+P*4%. An alternative is to assume that the economy will stay in its current regime, in which case your estimate is G2=2%<G1, and the chance that the economy switches regime is an “upside risk” to your forecast. This second approach is more like what the St. Louis Fed is doing. Think of it as taking the mode instead of the expected value (mean) of the probability distribution over future growth. They are making their forecasts based on the most likely outcome, not the weighted average of the different outcomes.
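A quick numerical sketch of the difference between the two approaches, with the switching probability P set to a made-up value:

```python
# Illustrative parameters; P = 0.1 is invented for this example.
P = 0.1                                # prob. of switching to high-growth regime
low_growth, high_growth = 0.02, 0.04   # long-run growth rates in each regime

g_mean = (1 - P) * low_growth + P * high_growth  # probability-weighted average
g_mode = low_growth                              # most likely outcome: stay put

print(f"Mean-based forecast G1: {g_mean:.1%}")   # 2.2%
print(f"Mode-based forecast G2: {g_mode:.1%}")   # 2.0%, switch is an upside risk
```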
Bullard claims that P is quite small, i.e. regimes are persistent, and that regime changes are unforecastable. He therefore argues that "the best that we can do today is to forecast that the current regime will persist and set policy appropriately for this regime." The policy takeaway is that “In light of this new approach and the associated forecast, the appropriate regime-dependent policy rate path is 63 basis points over the forecast horizon.”
The approach of forecasting that the current regime will persist and setting policy appropriately for the current regime is an interesting contrast to the "robust control" literature. As Richard Dennis summarizes, "rather than focusing on the 'most likely' outcome or on the average outcome, robust control argues that policymakers should focus on and defend against the worst-case outcome." He adds:
"In an interesting application of robust control methods, Sargent (1999) studies a simple macro-policy model and shows that robustness, in the “robust control” sense, does not necessarily lead to policy attenuation. Instead, the robust policy rule may respond more aggressively to shocks. The intuition for this result is that, by pursuing a more aggressive policy, the central bank can prevent the economy from encountering situations where model misspecification might be especially damaging."In Bullard's Figure 1 (below), we see the baseline forecast corresponding to continuation in the no recession, low real rate of return, low productivity growth regime. We also see some of the upside risks to the policy rate path, corresponding to switches to high real rate of return and/or high productivity growth regimes. We also see the arrow pointing to recession, but the four possible outcomes associated with that switch are omitted from the diagram. Bullard writes that "We are currently in a no recession state, but it is possible that we could switch to a recession state. If such a switch occurred, all variables would be affected but most notably, the unemployment rate would rise substantially. Again, the possibility of such a switch does not enter directly into the forecast because we have no reason to forecast a recession given the data available today. The possibility of recession is instead a risk to the forecast."
In a robust-control inspired approach, the possibility of switching into a recession state (and hitting the zero lower bound again) would get weighted pretty heavily in determination of the policy path, because even if it is unlikely, it would be really bad. What would that look like? This gets a little tricky because the "fundamentals" characterizing these regimes are not all fundamental in the sense of being exogenous to monetary policy. In particular, whether and when the economy enters another recession depends on the path of the policy rate. So too, indirectly, does the real rate of return on short-term government debt, both through liquidity premia and through expected inflation if we get back to the ZLB.
Wednesday, May 25, 2016
Behavioral Economics Then and Now
“Although it has never been clear whether the consumer needs to be protected from his own folly or from the rapaciousness of those who feed on him, consumer protection is a topic of intense current interest in the courts, in the legislatures, and in the law schools." So write James J. White and Frank W. Munger Jr. in a 1971 article from the Michigan Law Review.
Today, it is not uncommon for behavioral economists to weigh in on financial regulatory policy and consumer protection. White and Munger, not economists but lawyers, played the role of behavioral economists before the phrase was even coined. They managed to anticipate many of the hypotheses and themes that would later dominate behavioral economics-- but with more informal and colorful language. A number of new legislative and judicial acts in the late 1960s provided the impetus for their study:
"Congress has passed the Truth-in-Lending Act; the National Conference of Commissioners on Uniform State Laws has proposed the Uniform Consumer Credit Code; and many states have enacted retail installment sales acts to update and supplement their long-standing usury laws. These legislative and judicial acts have always relied, at best, on anecdotal knowledge of consumer behavior. In this Article we offer the results of an empirical study of a small slice of consumer behavior in the use of installment credit.
In their recent efforts, the legislatures, by imposing new interest rate disclosure requirements on installment lenders, have sought to protect the consumer against pressures to borrow money at a higher rate of interest than he can afford or need pay. The hope, if not the expectation, of the drafters of such disclosure legislation is that the consumer who is made aware of interest rates will seek the lowest-priced lender or will decide not to borrow. This migration of the consumers to the lowest-priced lender will, so the argument goes, require the higher-priced lender to reduce his rate in order to retain his business. These hopes and expectations are founded on the proposition that the consumer is largely ignorant of the interest rate that he pays; this ignorance presumably keeps him from going to a lender with cheaper rates. Knowledge of interest rates, it is believed, will rectify this defect…”
Here comes their "squatting baboon" metaphor:
“Presumably, consumers in a perfect market will behave like water in a pond, which gravitates to the lowest point-i.e., consumer borrowers should all turn to the lender that gives the cheapest loan. We began this project with a strong suspicion-based on the observations of others-that the consumer credit market is far from perfect and that water governed by the force of gravity is a poor metaphor with which to describe the behavior of consumer debtors. The consumer debtor's choice of creditor clearly involves consideration of many factors besides interest rate. Therefore, a metaphor that better describes our suspicions about the borrower's behavior in a market in which rate differences appear involves a group of monkeys in a cage with a new baboon of unknown temperament. The baboon squats in one corner of the cage near some choice, ripe bananas. In the far corner of the cage is a supply of wilted greens and spoiled bananas, the monkeys' usual fare. Some of the monkeys continue eating their usual fare because they are unaware of the new bananas and the visitor. Other monkeys observe the new bananas but do not approach them. Still others, more daring or intelligent than the rest, seek ways of snatching an occasional banana from the baboon's stock. The baboon strikes at all the brown monkeys but he permits black monkeys to eat without interference. Yet many of the black monkeys make no attempt to eat. One suspects that a social scientist who interviewed the members of the monkey tribe about their experience would find that many of those who saw and appreciated the choice bananas would be unable to articulate the reasons for their failure to eat any of them. The social scientist might also discover that a few who looked at the baboon in obvious fright would nevertheless deny that they were afraid. In addition, he might find that some were so busy picking fleas or nursing that they did not observe the choice bananas at all. We suspected that consumer borrowers had similarly diverse reasons for their behavior.
We presumed that some paid high interest rates only because of ignorance of lower rates and that others correctly concluded that they could not qualify for a cheaper loan than they received. Others, we suspected, were merely too lazy or too fearful of bankers to seek lower rates.”
A majority of the sample did not know the rate at which they had borrowed. Most had allowed the auto dealer to arrange the loan rather than shopping for the lowest rate. Even if they knew that lower rates were available elsewhere, they declined to shop around. The authors find differences in financial sophistication, education, and job characteristics between consumers who shopped around for lower rates and those who did not. They conclude:
“The results of our study suggest that, at least with regard to auto loans, the disclosure provisions of the Truth-in-Lending Act will be largely ineffective in changing consumer behavior patterns. Certainly the Act will not improve the status of those who already know that lower rates are available elsewhere. And we discovered no evidence that knowledge of the interest rate--which, even under the Act, will usually come after a tentative agreement to purchase a specified car has been reached--will stimulate a substantial percentage of consumers to shop for a lower rate elsewhere.”
The authors come down as pessimistic about the Truth-in-Lending Act, but make no new policy recommendations of their own. If they were writing today, instead of just predicting that a policy would be ineffective, they might suggest ways to design the policy to "nudge" consumers to make different decisions. The Truth-in-Lending Act has been amended numerous times over the years, and was placed under the authority of the Consumer Financial Protection Bureau (CFPB) by the Dodd-Frank Act. Behavioral economics has played a central role in the work of the CFPB. But the active application of behavioral law and economics to regulatory policy is not universally accepted. For example, Todd Zywicki writes:
"We argue that even if the findings of behavioral economics are sound and robust, and the recommendations of behavioral law and economics are coherent (two heroic assumptions, especially the latter), there still remain vexing problems of actually implementing behavioral economics in a real-world political context. In particular, the realities of the political and regulatory process suggests that the trip from the laboratory to the real-world crafting of regulations that will improve consumers’ lives is a long and rocky one."Zwicki expands this argument in a paper coauthored with Adam Christopher Smith. While I'm not convinced that their rather negative portrayal of the CFPB is warranted, I do think the paper presents some provocative cautions about how behavioral economics is applied to policy--especially the warning against "selective modeling of behavioral bias," which I have heard even top behavioral economists caution against.
Sunday, April 24, 2016
Presidential Candidates and Fed Accountability
In an interview with Fortune, Donald Trump gave his views on Federal Reserve Chair Janet Yellen, who will come up for reappointment in 2018. "I don’t want to comment on reappointment, but I would be more inclined to put other people in," he remarked, despite his opinion that Yellen "has done a serviceable job."
A change in the political party in power does not always result in a new Fed chair. Yellen's predecessor, Ben Bernanke, was first appointed by President George W. Bush and later reappointed by President Obama. Obama remarked, upon reappointing Bernanke in 2009, that "Ben approached a financial system on the verge of collapse with calm and wisdom; with bold action and out-of-the-box thinking that has helped put the brakes on our economic freefall."
Time reported in 2009 that "The Fed chairman is often described as the second most powerful U.S. official; the main check on him is the first most powerful official's power not to reappoint him. That power won't be used this year, and it's easy to see why. But someday, a President may have to use it..." I have written before that Fed accountability is a two-way street requiring diligence on the part of both the Fed and Congress. But the President also plays a role in checking the Fed's power. Just how far should a (prospective) President go?
Recently, Narayana Kocherlakota, who was President of the Federal Reserve Bank of Minneapolis from 2009 through 2015, has been urging Presidential candidates to address their views on the Fed. He proposes five questions we should ask the candidates, including whether they would seek a chair that would want to change the Fed's 2% inflation target, whether they would want the next chair to change the Fed's approach to its full employment mandate, whether they would want the chair to agree with using a Taylor-type rule for monetary policy, and whether they would want the chair to take an interventionist approach in a future crisis.
Kocherlakota tweeted, "Good to see Mr. Trump talking about mon. pol. - more Pres. cands need to talk about this issue." This was not Trump's first discussion of the Fed. Trump previously claimed that "Janet Yellen for political reasons is keeping interest rates so low that the next guy or person who takes over as president could have a real problem."
In Trump's Fortune interview, he continued to express some qualms with low interest rates, namely: "the problem with low interest rates is that it’s unfair that people who’ve saved every penny, paid off mortgages, and everything they were supposed to do and they were going to retire with their beautiful nest egg and now they’re getting one-eighth of 1%." However, he also pointed to an upside of low rates, noting that he would like to take advantage of low interest rates to refinance the debt and increase infrastructure and military spending.
Interestingly, neither of Trump's takes on the Fed's interest rate policy is directly related to the Fed's Congressional mandate. He does not evaluate the Fed's success in achieving either price stability or full employment. Rather, he is concerned with the distributional and fiscal implications of low interest rates--areas in which the Fed chair is traditionally reluctant to tread.
The other candidate who has said most about the Fed is Bernie Sanders, who wrote an op-ed about the Fed in the New York Times in December. Sanders' remarks focus mainly on Fed governance and financial regulation, though he also comments on the Fed's interest rate policy:
The recent decision by the Fed to raise interest rates is the latest example of the rigged economic system. Big bankers and their supporters in Congress have been telling us for years that runaway inflation is just around the corner. They have been dead wrong each time. Raising interest rates now is a disaster for small business owners who need loans to hire more workers and Americans who need more jobs and higher wages. As a rule, the Fed should not raise interest rates until unemployment is lower than 4 percent. Raising rates must be done only as a last resort — not to fight phantom inflation.
On Friday, I took my students in my Federal Reserve class at Haverford on a field trip to DC, where we got to meet with Ben Bernanke at the Brookings Institution. I asked Bernanke whether he thought that the presidential candidates should talk about monetary policy and the (re)appointment of the Fed Chair. He agreed with Kocherlakota that candidates should talk about what they would like to see in a Fed Chair, but said that he does not think it's a good idea to politicize individual interest rate decisions, emphasizing that the Fed does not have goal independence, but does have instrument independence. In other words, Congress has given the Fed a monetary policy mandate—full employment and price stability—but does not specify what the Fed needs to do to try to achieve those goals.
Anyone who wants to is welcome to evaluate the Fed on how successfully they are achieving that mandate. Anyone who wants to is also welcome to evaluate the merits of the mandate itself. Different people will come to different evaluations depending on their own beliefs and preferences. But neither of these two evaluations requires an audit of monetary policy by the Government Accountability Office, as both Sanders and Trump have advocated.
Anyone who is dissatisfied with the mandate itself can go through the usual channels of political change in a democracy and pressure Congress to change the mandate. Congress, by design, is susceptible to such pressure: they need votes. Presidential candidates are in a good position to draw public attention to the Fed's mandate and urge change if they believe it is necessary. Sanders, for example, could propose redefining the Fed's full employment mandate to mean unemployment below 4 percent. I'm not quite sure what kind of mandate Trump would support. It is also fair game for any member of the public to evaluate the Fed on how successfully they are achieving their mandate. But Congress does not (or at least, should not) tell the Fed how to set interest rates to achieve its mandate, and Presidential candidates shouldn't either.
Sunday, March 27, 2016
Congressional Attention to Monetary Policy over Time
The Federal Reserve describes itself as "an independent government agency but also one that is ultimately accountable to the public and the Congress...Congress also structured the Federal Reserve to ensure that its monetary policy decisions focus on achieving these long-run goals and do not become subject to political pressures that could lead to undesirable outcomes."
The independence of the Fed is by no means fixed or guaranteed. Rather, the Fed continually attempts to defend its independence. As Dincer and Eichengreen (2014) note, the movement of central banks toward greater transparency can be understood in part as an effort to protect independence by demonstrating accountability outside of the electoral process. They explain that "calls to audit the Federal Reserve have intensified as the central bank has come to rely more extensively on unconventional policies and expanded the range of its interventions in securities markets. The FOMC’s decision to make more information publicly available can thus be understood as an effort to reconcile the increased complexity of its operations with the desire to maintain and defend its independence."
The Fed derives its authority from Congress, and Congress can alter the Fed's responsibilities (and decrease its independence) by statute. Since the financial crisis, congressional calls for more oversight of the Fed or for less discretion by monetary policymakers abound. In National Affairs, Steve Stein writes:
"The independence of the Federal Reserve may well be more threatened in the coming years than at any time in the 100-year history of America's central bank. That independence could prove impossible to protect as long as the Fed continues to exchange its role as a defender of monetary stability for a new role as the ultimate overseer of the financial system. That new role is an inherently political one, and the Fed cannot expect to be permitted to perform it without interference from the democratically elected institutions of our political system."
It is difficult to measure the level of "threat" to Federal Reserve independence, but some indicators of Congressional attention to monetary policy are available. The Comparative Agendas Project tracks data on policy agendas, including hearings and bills, across several countries. Congress may use monetary policy-related hearings or bills as a form of signal to the Fed--an indirect form of political pressure or warning.
The figure below shows the number of bills in the U.S. Congress related to interest rates or monetary policy over time. Unsurprisingly, the 1970s and early 80s saw the largest number of such bills. The 1973-74 Congress considered 101 bills about interest rates and 55 about monetary policy. But the 2009-10 and 2011-12 Congresses considered just 15 and 22 bills about monetary policy, respectively, which is low by historical standards.
Created at http://www.comparativeagendas.net/
The next graph, below, shows the number of Congressional hearings on interest rates and monetary policy. These also peaked around the late 1970s. Since then, however, while hearings on interest rates have dwindled, hearings on monetary policy remain frequent--typically 10-20 per year. There is a mild upward trend from 2005 to 2012. Still, by neither metric--bills nor hearings--is the Fed facing an unprecedented era of Congressional meddling.
Created at http://www.comparativeagendas.net/