Thursday, September 14, 2017

Consumer Forecast Revisions: Is Information Really so Sticky?

My paper "Consumer Forecast Revisions: Is Information Really so Sticky?" was just accepted for publication in Economics Letters. This is a short paper that I believe makes an important point. 

Sticky information models are one way of modeling imperfect information. In these models, only a fraction (λ) of agents update their information sets each period. If λ is low, information is quite sticky, and that can have important implications for macroeconomic dynamics. There have been several empirical approaches to estimating λ. With micro-level survey data, a non-parametric and time-varying estimate of λ can be obtained by calculating the fraction of respondents who revise their forecasts (say, for inflation) at each survey date. Estimates from the Michigan Survey of Consumers (MSC) imply that consumers update their information about inflation approximately once every 8 months.
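To make the estimator concrete, here is a minimal simulation sketch of how the fraction of respondents revising their forecasts recovers λ. The true λ = 1/8 per month and the forecast distribution are illustrative assumptions for the sketch, not values from the paper:

```python
import random

random.seed(0)

LAM = 1.0 / 8      # assumed true monthly updating probability (hypothetical)
N_AGENTS = 50_000
N_MONTHS = 24

def new_forecast():
    # Illustrative inflation forecast draw, in percent
    return random.gauss(3.0, 1.0)

forecasts = [new_forecast() for _ in range(N_AGENTS)]

# Each month a fraction LAM of agents updates its information set. With
# continuous forecasts, an update almost surely changes the reported value,
# so the share of respondents who revise is a direct estimate of lambda.
revision_shares = []
for _ in range(N_MONTHS):
    revised = 0
    for i in range(N_AGENTS):
        if random.random() < LAM:
            forecasts[i] = new_forecast()
            revised += 1
    revision_shares.append(revised / N_AGENTS)

lam_hat = sum(revision_shares) / len(revision_shares)
print(f"estimated lambda:               {lam_hat:.3f}")
print(f"implied months between updates: {1 / lam_hat:.1f}")
```

With λ = 1/8, the estimated share revising each month is close to 0.125, i.e., an update roughly once every 8 months, which is the kind of number the MSC-based literature reports.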

Here are the two issues I point out with these estimates. I show that both lead to substantial underestimation of the frequency with which consumers update their expectations. The first stems from data frequency: the rotating panel of Michigan Survey of Consumers (MSC) respondents takes the survey twice, six months apart. A consumer may report the same forecast in months t and t+6 but different forecasts in between. The second issue is that responses are reported to the nearest integer. A consumer may update her information, but if the update results in a sufficiently small revision, it will appear that she has not updated at all.
To quantify how much these issues matter, I use data from the New York Fed Survey of Consumer Expectations, which is available monthly and not rounded to the nearest integer. Computing updating frequency with these data yields a very high estimate: at least 5 revisions in 8 months, as opposed to the 1 revision per 8 months found in the previous literature.

Then I transform the data so that it resembles the MSC data. First I round the responses to the nearest integer, which decreases the estimated updating frequency a little. Then I sample the data at the six-month frequency instead of monthly, which decreases the estimates a lot, and I recover estimates similar to the previous literature: updates about every 8 months.
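The mechanics of this transformation can be sketched with a small simulation. All parameters here (panel size, forecast level, revision size) are illustrative assumptions, not estimates from the paper; the point is only to show the direction of the biases:

```python
import random

random.seed(1)

N = 20_000    # hypothetical panel size
MONTHS = 7    # observe months t, t+1, ..., t+6

# Suppose every consumer actually updates every month, but monthly revisions
# are small. The sigma values are illustrative choices.
paths = []
for _ in range(N):
    f = random.gauss(3.0, 1.0)          # initial inflation forecast, in percent
    path = [f]
    for _ in range(MONTHS - 1):
        f += random.gauss(0.0, 0.4)     # small monthly revision
        path.append(f)
    paths.append(path)

def share_revising(pairs):
    # Share of observation pairs where the reported forecast changed
    pairs = list(pairs)
    return sum(a != b for a, b in pairs) / len(pairs)

# (a) monthly, unrounded: every consumer visibly revises every month
monthly = share_revising((p[t], p[t + 1]) for p in paths for t in range(MONTHS - 1))

# (b) monthly, rounded to the nearest integer: small revisions disappear
monthly_rounded = share_revising(
    (round(p[t]), round(p[t + 1])) for p in paths for t in range(MONTHS - 1)
)

# (c) six months apart and rounded, as in the MSC: at most one revision per
#     consumer is observable, and offsetting revisions cancel out
six_month_rounded = share_revising((round(p[0]), round(p[6])) for p in paths)

# Monthly updating rate implied by the six-month measurement,
# under the (false, in this simulation) assumption of independent
# updating each month
implied_monthly = 1 - (1 - six_month_rounded) ** (1 / 6)

print(f"monthly, unrounded:        {monthly:.2f}")
print(f"monthly, rounded:          {monthly_rounded:.2f}")
print(f"implied by 6-month survey: {implied_monthly:.2f}")
```

In this sketch the true revision rate is one per month, rounding cuts the measured rate moderately, and the six-month rounded comparison cuts it much further, mirroring the ordering of the two biases described above.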

So low-frequency data and, to a lesser extent, rounded responses result in large underestimates of revision frequency (or, equivalently, overestimates of information stickiness). And if information is not really so sticky, then sticky information models may not be as good at explaining aggregate dynamics. Other classes of imperfect information models, or sticky information models combined with other classes of models, might be better.

Read the ungated version here. I will post a link to the official version when it is published.

1 comment:

  1. Haven't read your paper yet.

    Your data has internal precision, which can be discovered. If your finite list is sorted so as to be optimally Huffman coded, then you have the optimum information encoding. (Not to say one encoder is better or worse.) But the optimum encoder sorts the data into groups such that the innovations in each are within the integer 'quant' of the next, where quant is the path length through the encoder. Whew! Then you have an N quant, N being the rank of the encoding graph and the number of 'bits' needed in a mapping to binary. That gets you the precision of the data stream. It assumes a delivery channel, hence only works with pricing under a single monetary standard (one numerator type).


Comments appreciated!