Friday, August 31, 2018

Inflation Expectations and the Price at the Pump

My paper "Inflation Expectations and the Price at the Pump" is now published in the Journal of Macroeconomics. Here is an open-access link to the official version (through October 20, 2018). And here is my new website which has links to the ungated versions of this and my other papers.

The key takeaway of this paper is that, though gas prices and average household inflation expectations are correlated, consumers do not seem to overweight gas prices when forming inflation expectations. Since the impact of gas prices on expectations fades quickly with forecast horizon, energy price shocks seem unlikely to unanchor longer-run inflation expectations.

Wednesday, August 8, 2018

"Macroeconomic Research, Present and Past"

I am at the Liberal Arts Macro Workshop at Wake Forest University. The plenary talk last night was
by the authors of "Macroeconomic Research, Present and Past." Philip Glandon, Kenneth N. Kuttner, Sandeep Mazumder, and Caleb Stroup read over 1000 papers from five top macroeconomics journals to catalog the epistemology, methodology, theoretical framework, and several other key characteristics of the papers (see table below).

They read all regular articles (not e.g. book reviews) from the 2016-2017 issues of the Journal of Monetary Economics (JME), the Journal of Economic Dynamics and Control (JEDC), the Journal of Money, Credit and Banking (JMCB), the Review of Economic Dynamics (RED), and the American Economic Journal: Macroeconomics (AEJ). For the JME and the JMCB, they also read all articles from 1980, 1990, 2000, 2006, 2008, and 2010.

Source: Glandon, Kuttner, Mazumder, and Stroup, Table 1

The figure below shows how the share of papers in each epistemological approach has varied over time. "Model fitting" papers were almost non-existent in the 1980s, and now account for around a third of macro papers in the sample.
Source: Glandon, Kuttner, Mazumder, and Stroup

The authors also document a rise in the use of applied micro (as opposed to time series) empirical methods, and a very large rise in the use of proprietary data. Two additional tables are below, but there is much more of interest in the paper, so I recommend checking it out!
Source: Glandon, Kuttner, Mazumder, and Stroup

Source: Glandon, Kuttner, Mazumder, and Stroup


Wednesday, July 18, 2018

From Senior Thesis to Publication

One of the most challenging and rewarding aspects of my job at Haverford College is advising senior theses. At Haverford, every student in every major writes a senior thesis (or equivalent capstone project). I co-teach the senior thesis course in the fall with two or three colleagues, and advise thesis writers in the spring. In the fall, students work on choosing a topic and research question, conducting a literature review, and developing a research proposal. In the spring, they work one-on-one with an advisor to see the proposal through.

Every completed thesis is an achievement, and we celebrate all of our seniors with a champagne reception on the due date. But today, for the first time, one of my advisee's theses has successfully made its way through the peer review process and is published!

Samantha Wetzel, Haverford College class of 2018, excelled both as an economics student and on the basketball team. We published "The FOMC versus the staff, revisited: When do policymakers add value?" in Economics Letters. (This link should provide temporary free access to the published version; here is the SSRN working paper version.)

The thesis, and this letter based on it, follow up on Romer and Romer's (2008) similarly-titled paper. Here is the abstract:
The Board of Governors staff and the Federal Open Market Committee both publish macroeconomic forecasts. Romer and Romer (2008) show that policymakers' attempts to add information to the staff forecasts are counterproductive. In more recent years, however, policymakers have improved upon staff forecasts. We show that policymakers' value-added is greater when economic conditions are unfavorable or uncertain.
For other undergraduate researchers hoping to write a successful, potentially publishable thesis, I have a few bits of advice based on this advising experience. First, start early if possible. Samantha came up with her thesis topic during junior year, when she read Romer and Romer (2008) in my course on the Federal Reserve.

Second, ask a well-defined question. This may be the hardest part. Your thesis (like Sam's) will probably be much longer than the 2000 word limit of Economics Letters, with a longer literature review and more robustness checks etc., but you should be able to easily explain what you asked, what you found, and how you found it in a few pages. Your thesis is far more likely to be successful if you have a primary hypothesis that you can precisely state. (Even if you think you can state it, write it down to be sure!)

Third, come up with a plan so that you can make progress each week-- use your advisor to help you decide on a timeline for key goals and to hold you accountable. For student athletes like Samantha, this means thinking in advance about when key games and travel will be, and planning accordingly.

Saturday, May 12, 2018

Snapshot of the Publication and Review Process as an Assistant Professor

I have just completed my third year as an Assistant Professor. For all three years, I have kept a spreadsheet of all of my journal submissions and the results (desk reject, referee reject, revise and resubmit, or accept, with dates for nearly everything). I had almost no idea what the publication process would be like when I finished grad school, and would have loved to see such a spreadsheet. So I thought I'd share some summary statistics in case this can help some new researchers or give students an idea of what the publication process is like.

I have no idea whether my experience is representative. Keep in mind, I am at a liberal arts college, albeit one that values research. Stefano DellaVigna and David Card have the actual statistics on publication in the top 5 journals. I have not published in those journals. To protect editorial privacy, I am not going to name any journals specifically or report on submission and decision dates.

My spreadsheet includes 15 papers that have been submitted at least once. Of these:
- 7 are now published or accepted for publication.
- 3 have "revise and resubmit" status.
- The rest are either under review or in the file drawer.

I made a total of 39 distinct submissions. This means counting the first submission of a particular paper to a particular journal, NOT counting revision rounds.
- 12 resulted in desk rejection (i.e. rejected by the editor without going to referees).
- Counting revision rounds, I made 49 submissions.

For the 7 papers that are published or accepted:
- 5 were accepted at the first journal to which I submitted. Of these, 3 required revisions.
- One was accepted (after revisions) at the third journal to which I submitted.
- One was accepted (after revisions) at the EIGHTH journal to which I submitted (following four desk rejections and three referee rejections). I am proud of that paper and do not agree with some rules of thumb I've heard about giving up on a paper after X attempts. I think it depends a lot on the paper.

There is substantial selection bias in the above stats on number of submissions per publication, since those are the stats for my papers that were quickest to publish. For the 8 papers that are not published, obviously none were accepted at the first journal to which I submitted! If/when these get published, my average number of submissions per publication will increase substantially. Three of them have already been submitted to at least 5 places.
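This selection effect is easy to see in a small simulation (all numbers here are hypothetical, not my actual acceptance rates): suppose each submission of a paper is accepted with some fixed probability and each round takes about six months, then compare the average number of submissions among all papers to the average among papers already accepted within a three-year window.

```python
import numpy as np

rng = np.random.default_rng(1)

p = 0.3                  # hypothetical per-submission acceptance probability
months_per_round = 6     # hypothetical time per submission round
horizon = 36             # months of observation (three years)

# Submissions needed until acceptance, for many hypothetical papers
n_needed = rng.geometric(p, size=100_000)
finish_time = n_needed * months_per_round
published = finish_time <= horizon       # accepted within the window so far

avg_all = n_needed.mean()                # true average submissions per paper
avg_published = n_needed[published].mean()  # average among the "quick" papers
```

Because only papers that needed few submissions have had time to finish, `avg_published` understates `avg_all`: conditioning on early publication selects the lucky draws, which is exactly why my per-publication submission counts will rise as the slower papers eventually land.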

An interesting note is that I wrote exactly the same number of referee reports as I received. I felt like I was writing a ton of referee reports, but I guess it was pretty fair. I do think I wrote more words of referee reports than I received!

I will update again in a few years. I anticipate some changes as I gain experience with research, learn where to submit papers, and work on different types of projects and coauthored papers.

Monday, April 16, 2018

Mortgage-Backed Securities Ratings and Losses Maybe Not So Bad

An NBER working paper released today conducts a "post mortem" on the role of non-agency mortgage-backed securities (MBS) in the 2008 financial crisis. The authors, Juan Ospina and Harald Uhlig, suggest that some of the standard narratives about the financial crisis were "created in the heat of the moment" and merit re-examination a decade later.
"One such standard narrative has it that the financial meltdown of 2008 was caused by an overextension of mortgages to weak borrowers, repackaged and then sold to willing lenders drawn in by faulty risk ratings for these mortgage backed securities. To many, mortgage backed securities and rating agencies became the key villains of that financial crisis. In particular, rating agencies were blamed for assigning the coveted AAA rating to many securities, which did not deserve it, particularly in the subprime segment of the market, and that these ratings then lead to substantial losses for institutional investors, who needed to invest in safe assets and who mistakenly put their trust in these misguided ratings... 
First, were these mortgage backed securities bad investments? Second, were the ratings wrong? We answer these questions, using a new and detailed data set on the universe of non-agency residential mortgage backed securities (RMBS), obtained by devoting considerable work to carefully assembling data from Bloomberg and other sources. This data set allows us to examine the actual repayment stream and losses on principal on these securities up to 2014, and thus with a considerable distance since the crisis events...We find that the conventional narrative needs substantial rewriting: the ratings and the losses were not nearly as bad as this narrative would lead one to believe.
An ungated version of the paper is available here.

Thursday, March 8, 2018

D is for Devastating: A Statistical Error and the Vitamin D Saga

Statistical errors in research are quite common, and not always detected. As economists are well aware, when an error with important policy implications is revealed, it may prompt a media frenzy. I was surprised to learn recently of a major statistical error with potentially huge public health implications, yet with seemingly sparse media coverage when it was revealed.

The error concerns the Recommended Dietary Allowance (RDA) of Vitamin D. A 2014 paper found a statistical error in a study used by the Institute of Medicine (IOM) to determine the RDA, resulting in a recommendation that was about an order of magnitude too low.

I am neither a public health expert nor medically trained, but (following Miles Kimball's lead) have developed an interest in public health, and especially nutrition, research, largely due to its parallels with macroeconomic research. What little press coverage I did find about this Vitamin D study omitted technical discussion of the statistical error--"We'll spare you the gritty mathematical details," said one article. But I wanted these details, and you might too, so I dove into what turned out to be a fascinating story. You may want to share it with your econometrics students: Correct interpretation of confidence intervals can truly be a matter of life and death.

First, some background. The human body can make Vitamin D (unlike other vitamins) when exposed to sunlight. It can also be obtained from nutritional sources and supplements. Upon activation by the liver and kidneys, it acts as a hormone that plays a role in calcium metabolism. Sufficient Vitamin D is critical for bone health and a plethora of other health outcomes (more on that later). Research on the health effects of Vitamin D typically looks at health outcomes associated with different serum 25-hydroxyvitamin D (25(OH)D) levels (a measure of concentration in the blood).

The IOM issues dietary recommendations, including RDAs, for the US and Canada. The RDA is supposed to designate the nutrient intake sufficient to meet the needs of 97.5% of healthy individuals. For Vitamin D, issuing this guideline requires first deciding what 25(OH)D level is desirable, then deciding how much supplemental Vitamin D should be taken so that most people have the desired 25(OH)D level. Based on associations between 25(OH)D levels and various health outcomes, the IOM aimed to recommend an RDA that would result in 25(OH)D levels of 50 nmol/L or more.

The IOM then had to determine how much supplemental Vitamin D to recommend based on this goal. They looked at 10 studies of the dose-response relationship of vitamin D intake and 25(OH)D. Some of these studies examined 25(OH)D levels for multiple different doses, so in total there were 32 estimates (the green diamonds in Figure 1). They fitted a dose-response curve to these points, with a 95% confidence interval. The IOM came up with an RDA for individuals 1 to 70 years of age of 600 IU per day. You can see the vertical line at 600 in Figure 1. It intersects the fitted dose-response curve at 63 nmol/L and the lower bound of the 95% confidence interval at 56 nmol/L. Remember, this was the amount that was supposed to achieve 25(OH)D levels of at least 50 nmol/L in at least 97.5% of healthy individuals.


Figure 1. Source: Veugelers and Ekwaru (2014)

In October 2014, Paul J. Veugelers and John Paul Ekwaru explained in a paper in Nutrients that the IOM's interpretation of these confidence intervals was incorrect. They thought that 2.5% of individuals would have serum levels below the lower 95% confidence interval, but in this meta-analysis, the unit of observation was not the individual, but the study average. In the authors' words:
The correct interpretation of the lower prediction limit is that 97.5% of study averages are predicted to have values exceeding this limit. This is essentially different from the IOM’s conclusion that 97.5% of individuals will have values exceeding the lower prediction limit.
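The distinction can be seen in a small simulation (all numbers hypothetical, assuming normally distributed individual serum responses within each study): a 95% band around a regression fitted to study averages is far narrower than the spread of individuals, so its lower limit says almost nothing about the 2.5th percentile of individuals.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 10 studies, each reporting the average serum 25(OH)D
# achieved at a given daily vitamin D dose. Individual responses within a
# study vary widely (sd = 15 nmol/L here, purely illustrative).
doses = np.linspace(200, 2000, 10)   # IU/day
true_means = 30 + 0.02 * doses       # hypothetical dose-response relationship
within_sd = 15.0                     # individual variation, nmol/L

study_means = np.empty(10)
study_sds = np.empty(10)
for i, mu in enumerate(true_means):
    serum = rng.normal(mu, within_sd, size=200)  # 200 subjects per study
    study_means[i] = serum.mean()
    study_sds[i] = serum.std(ddof=1)

# IOM-style reading: regress STUDY AVERAGES on dose and take the lower 95%
# band around the fitted line. Study averages scatter very little, so this
# band hugs the mean dose-response curve.
slope, intercept = np.polyfit(doses, study_means, 1)
fitted = intercept + slope * doses
resid_sd = np.sqrt(((study_means - fitted) ** 2).sum() / (len(doses) - 2))
lower_band_on_mean = fitted - 1.96 * resid_sd

# Veugelers-Ekwaru-style reading: recover the 2.5th percentile of
# INDIVIDUALS at each dose from each study's reported mean and sd.
individual_p2_5 = study_means - 1.96 * study_sds

# The individual 2.5th percentile sits far below the band on the mean:
gap = (lower_band_on_mean - individual_p2_5).mean()
```

The gap is roughly 1.96 times the within-study standard deviation: reading the lower confidence band as if it bounded individuals, as the IOM did, drastically overstates what the bottom 2.5% of individuals achieve at a given dose.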
Veugelers and Ekwaru returned to the 10 studies, eight of which reported both the average and standard deviation of serum levels for particular doses of Vitamin D. From these statistics, the authors could calculate the 2.5th percentile at each dose. Then they regressed these 2.5th percentile values (the 23 yellow dots in Figure 2) on vitamin D intake, coming up with the red dashed line in Figure 2. The green dashed lines are the confidence intervals from Figure 1, for the sake of comparison.

In the figure, you can see that at 600 IU per day, 97.5% of individuals will have serum levels above around 27 nmol/L, not 50 nmol/L. To get 97.5% of individuals with serum levels above 50 nmol/L, you would actually need a higher dose than any of the studies examined. Out-of-sample extrapolation led them to estimate that 8895 IU of vitamin D per day would actually be required. Veugelers and Ekwaru also pointed to two studies in which 10% or 15% of Canadian subjects had serum 25(OH)D levels of less than 50 nmol/L despite vitamin D supplementation at the RDA level. They wrote, "If the RDA had been adequate, these percentages should not have exceeded 2.5%. Herewith these studies show that the current public health target is not being met."



Figure 2. Source: Veugelers and Ekwaru (2014)

Veugelers and Ekwaru did caution that, since 8895 IU of vitamin D per day "is far beyond the range of studied doses, caution is warranted when interpreting this estimate. Regardless, the very high estimate illustrates that the dose is well in excess of the current RDA of 600 IU per day and the tolerable upper intake of 4000 IU per day."

In March 2015, Robert Heaney, Cedric Garland, Carole Baggerly, Christine French, and Edward Gorham published a letter in the same journal that alleviated some of the concern about extrapolating beyond the available data. They presented entirely different data on individuals with daily vitamin D intakes from zero to over 10,000 IU. They came up with an estimate that was slightly lower than Veugelers and Ekwaru's, but confirmed the finding that the IOM recommendation was around an order of magnitude too low, and wrote:
Thus, we confirm the findings of these investigators with regard to the published RDA for vitamin D and we call for the IOM and all public health authorities concerned with transmitting accurate nutritional information to the public to designate, as the RDA, a value of approximately 7000 IU per day from all sources.
Like Veugelers and Ekwaru, Heaney et al. remarked upon the safety of such a high recommendation, though their take was more optimistic:
The total, all-source intake of 7000 IU/day is below the no observed adverse effect level (NOAEL) of both the IOM and the Endocrine Society, below the tolerable upper intake level (UL) of the Endocrine Society, and well within the safe range delineated by Hathcock et al., who had generated that range using the IOM’s method of hazard identification.
The hormonal role of vitamin D explains why the Endocrine Society also issues guidance about it. Remember, vitamin D is fat-soluble, so excess amounts are stored and can accumulate in body tissues-- hence the concern about safety at higher doses. Overall health benefits may increase with dose up to a point, and then start to decline. Initial guidelines on Vitamin D RDA were based on prevention of rickets. But as scientists have learned more about other health benefits, the cost-benefit calculus of vitamin D recommendations has shifted. This shift, however, was slow to be reflected in health policy.

"Worldwide reports have highlighted a variety of vitamin D insufficiency and deficiency diseases. Despite many publications and scientific meetings reporting advances in vitamin D science, a disturbing realization is growing that the newer scientific and clinical knowledge is not being translated into better human health," wrote Andrew Norman in a 2008 issue of the American Journal of Clinical Nutrition. A 2007 article in the same journal, by Reinhold Vieth and many coauthors, describes the situation as a "frustrating and regrettable situation for nutrition researchers."

Vieth et al. summarize the many health benefits attributable to adequate vitamin D, and evidence that the tolerable upper limit is around ten times higher than officially-recommended intakes. But they point to an over-cautious and under-nuanced take by the public media that has kept public supplemental intake too low:
Evaluation of most relations of health and disease that involve vitamin D leads to the conclusion that a desirable 25(OH)D concentration is ≥75 nmol/L (30 ng/mL). If a concentration of 75 nmol/L is the goal to be achieved by consumption of vitamin D, then why is it so rare for members of the population to accomplish this? One reason is that almost every time the public media report that vitamin D nutrition status is too low, or that higher vitamin D intakes may improve measures of health, the advice that accompanies the report is outdated and thus misleading. Media reports to the public are typically accompanied by a paragraph that approximates the following: “Current recommendations from the Institute of Medicine call for 200 IU/d from birth through age 50 y, 400 IU for those aged 51–70 y, and 600 IU for those aged >70 y. Some experts say that optimal amounts are closer to 1000 IU daily. Until more is known, it is wise not to overdo it.” The only conclusion that the public can draw from this is to do nothing different from what they have done in the past.
The evidence in favor of higher 25(OH)D concentration and a higher RDA continued to grow in subsequent years. "Despite research on the association between low vitamin D status and many diseases, no consensus has emerged on the optimal serum 25(OH)D concentration. The concern is whether it is safe to maintain serum 25(OH)D concentrations in the range high enough to prevent some types of cancers and coronary heart disease," wrote Garland et al. 2014 in the American Journal of Public Health. In a meta-analysis of serum 25(OH)D and age-adjusted all-cause mortality, they showed that overall age-adjusted hazard ratios for mortality decline steeply with 25(OH)D for serum levels below 30 nmol/L, then gradually level off (Figure 3). The hazard ratio is not statistically different from 1 at 36 nmol/L.


Figure 3. Source: Garland et al. 2014 

In Finland, public health policy was changed in response to widespread low serum 25(OH)D concentration. Vitamin D fortification of certain dairy products and spreads began in 2002, and fortification levels were increased in 2010. This was successful in raising vitamin D intake, and health benefits are already measurable. In July 2017, Dimitrios Papadimitriou noted in the Journal of Preventive Medicine and Public Health that Type 1 diabetes, which had been on the rise, leveled off and then declined in Finland after the vitamin D fortification policy was implemented. Papadimitriou discusses vitamin D's role as a "powerful nuclear receptor-activating hormone of critical importance, especially to the immune system," and calls for public health authorities worldwide to modify RDAs in line with now quite substantial scientific research.

Papadimitriou titled his article "The Big Vitamin D Mistake," referring in a narrow sense to the IOM's statistical error. But he really discusses a mistake in a broader sense-- mounting evidence has been too slow to be incorporated into policymaking and practice.

Saturday, January 6, 2018

To Eradicate or to Manage?

I am lucky that the American Economic Association annual meetings are in my city this year, so I made it easily to the sessions despite the snow and bitter cold. On Saturday, January 6, I attended an excellent 8 a.m. session on central bank communication. I may write more about the session later—I already tweeted some of it—but for now I wanted to share an interesting aside made by Alan Blinder. He said something like, “In life, some problems are solved and some are managed.”

The context for his remark was his prediction that cacophony will remain a problem for communication by central bank committees. He asserted that this is a problem that will never go away; we can never hope to solve it, only to manage it, and he offered an example of how such cacophony can be managed.
This made me wonder which other problems (economic or otherwise) fall into the “solvable” versus “manageable” categories. This taxonomy seems more natural in public health. In fact, a CDC report discusses a “hierarchy of possible public health interventions in dealing with infectious diseases,” which runs along the gradient from manage to solve: control, elimination of disease, elimination of infections, eradication, and extinction.

The report notes that in 1993, the International Task Force for Disease Eradication evaluated 80 infectious diseases and determined that 6 were potentially eradicable. The potential for eradication depends on biological, societal, and political criteria. Biological criteria for eradicability can change with technological innovation.

The decision of where to place a disease along this hierarchy is a weighty one, as “Health resources are limited and resources cross sectors. Therefore, decisions have to be made as to whether the use of resources for an elimination or eradication programme is preferable to their use in nonhealth projects, in alternative health interventions, in continued control of the condition, or even in the eradication of other eradicable conditions.” A failed eradication attempt can come at tremendous costs to credibility and resources. The decision to attempt an eradication effort should depend on careful and broad cost-benefit analysis and consideration of the numerous stakeholders:

“Consensus on the priority and justification for [eradication] must be developed by technical experts, the decision-makers, and the scientific community. Political commitment must be gained at the highest levels, following informed discussion at regional and local levels….Eradication requires an effective alliance with all potential collaborators and partners…The eradication programme must address the issues of equity and be supportive of broader goals that have a positive impact on the health infrastructure…should also take into consideration the ideal sequencing of potentially concurrent campaigns.”

The approaches that policymakers and researchers take to public health problems depend on whether they have categorized the problem as one to be eradicated or to be managed. It seems like this should be true of other economic and social problems too, and I wonder if people are thinking about things like homelessness, discrimination, poverty, asset bubbles, etc. in this way. Or even if conflicting views on whether these types of problems are feasible and worthy of eradication are at some fundamental level responsible for conflicts over the appropriate course of action. Food for thought.