Thursday, December 11, 2014

Mixed Signals and Monetary Policy Discretion

Two recent Economic Letters from the Federal Reserve Bank of San Francisco highlight the difficulty of making monetary policy decisions when alternative measures of labor market slack and the output gap give mixed signals. In Monetary Policy when the Spyglass is Smudged, Early Elias, Helen Irvin, and Òscar Jordà show that conventional policy rules based on the output gap and on the deviation of the unemployment rate from its natural rate generate wide-ranging policy rate prescriptions. Similarly, in Mixed Signals: Labor Markets and Monetary Policy, Canyon Bosler, Mary Daly, and Fernanda Nechio calculate the policy rate prescribed by a Taylor rule under alternative measures of labor market slack. The figure below illustrates the large divergence in alternative prescribed policy rates since the Great Recession.

Source: Bosler, Daly, and Nechio (2014), Figure 2
Uncertainty about the state of the labor market makes monetary policy more challenging and requires more discretion and judgment on the part of policymakers. What does discretion and judgment look like in practice? I think it should involve reasoning qualitatively to determine if some decisions lead to possible outcomes that are definitively worse than others. For example, here's how I would reason through the decision about whether to raise the policy rate under high uncertainty about the labor market:

Suppose it is May and the Fed is deciding whether to increase the target rate by 25 basis points. Assume inflation is still at or slightly below 2%, and the Fed would like to tighten monetary policy if and only if the "true" state of the labor market x is sufficiently high, say above some threshold X. The Fed does not observe x but has some very noisy signals about it. They think there is about a fifty-fifty chance that x is above X, so it is not at all obvious whether tightening is appropriate. There are four possible scenarios:

  1. The Fed does not increase the target rate, and it turns out that x>X.
  2. The Fed does not increase the target rate, and it turns out that x<X.
  3. The Fed does increase the target rate, and it turns out that x>X.
  4. The Fed does increase the target rate, and it turns out that x<X.

Cases (2) and (3) are great. In case (2), the Fed did not tighten when tightening was not appropriate, and in case (3), the Fed tightened when tightening was appropriate. Cases (1) and (4) are "mistakes." In case (1), the Fed should have tightened but did not, and in case (4), the Fed should not have tightened but did. Which is worse?

If we think just about immediate or short-run impacts, case (1) might mean inflation goes higher than the Fed wants and x goes even higher above X; case (4) might mean unemployment goes higher than the Fed wants and x falls even further below X. Maybe you have an opinion on which of those short-run outcomes is worse, or maybe not. But the bigger difference between the outcomes comes when you think about the Fed's options at its subsequent meeting. In case (1), the Fed could choose how much they want to raise rates to restrain inflation. In case (4), the Fed could keep rates constant or reverse the previous meeting's rate increase.

In case (4), neither option is good. Keeping the target at 25 basis points is too restrictive. Labor market conditions were bad to begin with and keeping policy tight will make them worse. But reversing the rate increase is a non-starter. The markets expect that after the first rate increase, rates will continue on an upward trend, as in previous tightening episodes. Reversing the rate increase would cause financial market turmoil, damage credibility, and require policymakers to admit that they were wrong. Case (1) is much more attractive. I think any concern that inflation could take off and get out of control is unwarranted. In the space between two FOMC meetings, even if inflation were to rise above target, inflation expectations are not likely to rise too far. The Fed could easily restrain expectations at the next meeting by raising rates as aggressively as needed.

So going back to the four possible scenarios, (2) and (3) are good, and (4) is much worse than (1). If the Fed raises rates, scenarios (3) and (4) are about equally likely. If the Fed holds rates constant, (1) and (2) are about equally likely. Thus, holding rates constant under high uncertainty about the state of the labor market is a better option than potentially raising rates too soon.
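This asymmetric-loss logic can be made concrete with a few lines of arithmetic. Below is a minimal sketch in Python; the fifty-fifty probabilities come from the setup above, but the loss numbers are purely illustrative placeholders, not estimates of anything:

p_x_above_X = 0.5  # fifty-fifty chance that x > X, as assumed above

# Losses by scenario; cases (2) and (3) are good outcomes (zero loss).
# The values are illustrative placeholders: the only assumption that
# matters is that case (4) is costlier than case (1).
loss_case_1 = 1.0  # held rates, x > X: fixable by tightening aggressively next meeting
loss_case_4 = 4.0  # raised rates, x < X: reversal damages credibility

expected_loss_hold = p_x_above_X * loss_case_1          # only case (1) can go wrong
expected_loss_raise = (1 - p_x_above_X) * loss_case_4   # only case (4) can go wrong

print(expected_loss_hold, expected_loss_raise)  # 0.5 vs. 2.0

At even odds, holding rates dominates whenever the case (4) loss exceeds the case (1) loss, which is exactly the qualitative argument above.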

Sunday, December 7, 2014

Most Households Expect Interest Rates to Increase by May

Two new posts on the New York Federal Reserve's Liberty Street Economics Blog describe methods of inferring interest rate expectations from interest rate futures and forwards and from surveys conducted by the Trading Desk of the New York Fed. In a post at the Atlanta Fed's macroblog, "Does Forward Guidance Reach Main Street?," economists Mike Bryan, Brent Meyer, and Nicholas Parker ask, "But what do we know about Main Street’s perspective on the fed funds rate? Do they even have an opinion on the subject?"

To get at this question, they use a special question on the Business Inflation Expectations (BIE) Survey. A panel of businesses in the Sixth District was asked to assign probabilities that the federal funds rate at the end of 2015 would fall into various ranges. The figure below compares the business survey responses to the FOMC's June projection. The similarity between businesspeople's expectations and FOMC members' expectations for the fed funds rate is taken as an indication that forward guidance on the funds rate has reached Main Street.


What about the rest of Main Street--the non-business-owners? We don't know much about forward guidance and the average household. I looked at the Michigan Survey of Consumers for some indication of households' interest rate expectations. One year ago, in December 2013, 61% of respondents on the Michigan Survey said they expected interest rates to rise in the next twelve months. Only a third of consumers expected rates to stay approximately the same. According to the most recently available edition of the survey, from May 2014, 63% of consumers expect rates to rise by May 2015.

The figure below shows the percent of consumers expecting interest rates to increase in the next twelve months in each survey since 2008. I use vertical lines to indicate several key dates. In December 2008, the federal funds rate target was reduced to 0 to 0.25%, marking the start of the zero lower bound period. Nearly half of consumers in 2009 and 2010 expected rates to rise over the next year. In August 2011, Fed officials began using calendar-based forward guidance when they announced that they would keep rates near zero until at least mid-2013. Date-based forward guidance continued until December 2012. Over this period, less than 40% of consumers expected rate increases.

In December 2012, the Fed adopted the Evans Rule, announcing that the fed funds rate would remain near zero until the unemployment rate fell to 6.5%. In December 2013, the Fed announced a modest reduction in the pace of its asset purchases, emphasizing that this "tapering" did not indicate imminent rate increases. The share of consumers expecting rate increases made a large jump from 55% in June 2013 to 68% in July 2013, and has remained in the high-50s to mid-60s since then.


But since 1978, the percent of consumers expecting an increase in interest rates has tracked reasonably closely with the realized change in the federal funds rate over the next twelve months (fed funds rate in month t+12 minus fed funds rate in month t). In the figure below, the correlation coefficient is 0.26. As a back-of-the-envelope calculation, if we regress the twelve-month-ahead change in the federal funds rate on the percent of consumers expecting a rate increase, the regression coefficients indicate that when 63% of consumers expect a rate increase, that predicts a 25 basis point rise in rates over the next year.
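If you want to replicate that back-of-the-envelope regression, here is a minimal sketch in Python. It assumes you have already assembled a monthly DataFrame with columns expect_increase (the Michigan Survey share) and fedfunds (the effective federal funds rate); those column names, and the data assembly, are my own assumptions:

import pandas as pd
import statsmodels.api as sm

def rate_change_regression(df: pd.DataFrame):
    # Dependent variable: realized change in the funds rate over the next 12 months,
    # i.e., fed funds rate in month t+12 minus fed funds rate in month t.
    y = df["fedfunds"].shift(-12) - df["fedfunds"]
    X = sm.add_constant(df["expect_increase"])
    mask = y.notna()  # the last 12 months have no realized future rate yet
    return sm.OLS(y[mask], X[mask]).fit()

# res = rate_change_regression(df)
# Predicted 12-month change when 63% of consumers expect an increase:
# res.params["const"] + 63 * res.params["expect_increase"]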


These survey data do not tell us for sure that forward guidance has reached Main Street. The survey does not specifically refer to the federal funds rate, just to interest rates in general. And households could simply have noticed that rates have been low for a long time and expect them to increase, even without hearing the Fed's forward guidance. In an average month, 51% of consumers expect rates to rise over the next year, with a standard deviation of 15 percentage points. So the values we're seeing lately are about a standard deviation above the historical average, but they have been higher historically. In the third and fourth quarters of 1994, after the Fed had already begun tightening, 75-80% of consumers expected further rate increases. At the start of 1994, however, only half of consumers anticipated the rate increases that would come.

In May 2004, the FOMC noted that accommodation could “be removed at a pace that is likely to be measured.” That month, 85% of consumers (a historical maximum) correctly expected rates to increase.

Monday, December 1, 2014

A Cyber Monday Message of Thanks

Cyber Monday may just be a marketing tool, but I'll take it as an opportunity to send a cyber-message of thanksgiving out to all of you who are connected to me through my blog.

I started this blog in September 2012 but only started posting regularly in January 2013, after attending a panel discussion at the 2013 AEA meetings called "Models or Muddles: How the Press Covers Economics and the Economy." The panel members, Tyler Cowen, Adam Davidson, Kelly Evans, Chrystia Freeland, and David Wessel, discussed the importance and challenge of writing intellectually upright and emotionally compelling economic journalism. I took their discussion as an invitation to try my hand at economics blogging, and I immensely appreciate every one of you who has read, shared, criticized, complimented, and challenged my writing along the way. Most of you are anonymous, but several of you I feel that I know personally. All of you have helped me improve and inspired me to continue.

I blog mostly for myself. It is a remarkable opportunity to think through new ideas, evaluate recent research, and learn about important policy issues. When I started the blog, I was quite new to economics. I was a math major as an undergraduate, so my first year of graduate school at Berkeley was a boot camp-style introduction to economic theory and analysis. Now, in my final year of graduate school, I am still relatively new to the field, but blogging has helped me develop a broader and more nuanced and contextualized understanding of economics than I could have achieved from school alone.

I blog mostly for myself, but not only for myself. I entered an economics Ph.D. program because I fundamentally wanted to help people and make a difference in the world, as naive as that may sound. Now, looking over at my brand new baby daughter, that is still what I want, even more so than five years ago, though I think I have a more subtle understanding of "making a difference" than I used to. That is also why I want to thank you, readers. I do not give you investment advice, teach you how to get rich quick, or provide juicy ad hominem attacks to entertain you. If you're reading my blog, it's probably because you, too, have an intellectual interest in economics stemming, I like to believe, from your desire for a better world. Thanks.

Tuesday, November 25, 2014

Regime Change From Roosevelt to Rousseff

I've written another post for the Berkeley Center for Latin American Studies blog:
President Franklin Delano Roosevelt was elected in November 1932, in the midst of the Great Depression. High unemployment, severely depressed spending, and double-digit deflation plagued the economy. Shortly after his inauguration in March 1933, a dramatic turnaround occurred. Positive inflation was restored, and 1933 to 1937 was the fastest four-year period of output growth in peacetime in United States history. 
How did such a transformation occur? Economists Peter Temin and Barrie Wigmore attribute the recovery to a “regime change.” In the economics literature, regime change refers to the idea that a set of new policies can have major effects by rapidly and sharply changing expectations. A regime change can occur when a policymaker credibly commits to a new set of policies and goals.... 
In short, Roosevelt stated and proved that he was willing to do whatever it would take to end deflation and restore economic growth. As Roosevelt proclaimed on October 22, 1933: “If we cannot do this one way, we will do it another. Do it, we will.” 
Almost all politicians promise change, but few manage such drastic transformation. In Brazil’s closely contested presidential election this October, incumbent President Dilma Rousseff won reelection with a three-point margin over centrist candidate Aecio Neves. Rousseff told supporters, “I know that I am being sent back to the presidency to make the big changes that Brazilian society demands. I want to be a much better president than I have been until now.” 
Rousseff’s rhetoric of “big changes” refers in large part to the Brazilian economy, which is plagued with stagnant growth, high inflation, and a strained federal budget. Brazil’s currency, the real, hit a nine-year low following Rousseff’s victory, and the stock market also tumbled. This market tumult reflects investors’ doubts about the Rousseff administration’s intention and ability to enact effective reforms. Investors viewed Neves as the pro-business, anti-interventionist candidate and are unconvinced that Rousseff will act decisively to restore fiscal discipline and rein in inflation. In other words, Rousseff’s talk of change is not fully credible in the way that Roosevelt’s was... 
Though Rousseff is taking some actions to improve business conditions, restore fiscal discipline, and reduce inflation, the problem is that they are being enacted quietly and reluctantly rather than being trumpeted as part of a broader vision of reform. The key to regime change is that the effects of policy changes depend crucially on how the changes are presented and perceived. Economic policies work not only through direct channels but also through signaling and expectations. For example, a small rise in fuel prices and a reduction in state bank subsidized lending may have small direct effects, but if they are viewed as signals that the president is wholeheartedly embracing market-friendly reforms, the effects will be much greater. So far, despite Rousseff’s campaign slogan — “new government, new ideas” — she hasn’t credibly committed to a new regime.
Read the complete article and my pre-election Brazil article at the CLAS blog.

Monday, November 10, 2014

Reading Keynes at the Zero Lower Bound

A new working paper by economic historian Richard Sutch revisits Keynes at the zero lower bound. From the abstract:
The developed economies of Japan, the United States, and the Eurozone are currently experiencing very low short-term rates, so low that they are considered to be at the “zero lower bound” of possibility. This effectively paralyzes conventional monetary policy. As a consequence, monetary authorities have turned to unconventional and controversial policies such as “Quantitative Easing,” “Maturity Extension,” and “Low for Long Forward Guidance.” John Maynard Keynes in The General Theory offered a rich analysis of the problems that appear at the zero lower bound and advocated the very same unconventional policies that are now being pursued. Keynes’s comments on these issues are rarely mentioned in the current discussions because the subsequent simplifications and the bowdlerization of his model obliterated this detail. It was only later that his characterization of a lower bound to interest rates would be dubbed a “Liquidity Trap.” This essay employs Keynes’s analysis to retell the economic history of the Great Depression in the United States. Keynes’s rationale for unconventional policies and his expectations of their effect remain surprisingly relevant today. I suggest that in both the Depression and the Great Recession the primary impact on interest rates was produced by lowering expectations about the future path of rates rather than by changing the risk premiums that attach to yields of different maturities. The long sustained period when short term rates were at the lower bound convinced investors that rates were likely to remain near zero for several more years. In both cases the treatment proved to be very slow to produce a significant response, requiring a sustained zero-rate policy for four years or longer.
Sutch notes that "the General Theory is a notoriously unreadable book, one that required others to interpret and popularize its message." Since Keynes did not use the phrase "liquidity trap"--it was coined by Dennis Robertson in 1940--interpreting Keynes' policy prescriptions in a liquidity trap is contentious. Sutch reinterprets the theory of the liquidity trap from the General Theory, then examines the impact of Federal Reserve and Treasury policies during the Great Depression in light of his interpretation.

Sutch outlines Keynes' three theoretical reasons why an effective floor to long-term interest rates might be encountered at the depth of a depression:
(1) Since the term structure of interest rates will rise with maturity when short-term rates are low, a point might be reached where continued open-market purchases of short-term government debt would reduce the short-term rate to zero before producing a sufficient decline in the risk-free long-term rate [Keynes 1936: 201-204 and 233]. 
(2) It is, at least theoretically, possible that the demand for money (called “liquidity preference” by Keynes) could become “virtually absolute” at a sufficiently low long-term interest rate and, if so, then increases in the money supply would be absorbed completely by hoarding [Keynes 1936: 172 and 207-208]. 
(3) The default premiums included as a portion of the interest charged on business loans and on the return to corporate securities could become so great that it would prove impossible to bring down the long-term rate of interest relevant for business decisions even though the risk-free long-term rate was being reduced by monetary policy [Keynes 1936: 144-145].
The first two reasons, Sutch notes, are often conflated because of a tendency in the post-Keynesian literature to drop short-term assets from the model. The third reason, called "lender's risk," is typically neglected in textbooks and empirical studies.

In Keynes' view, the Great Depression was triggered by a collapse in investment in 1929, prior to the Wall Street crash in the fall, as “experience was beginning to show that borrowers could not really hope to earn on new investment the rates which they had been paying” and “even if some new investment could earn these high rates, in the course of time all the best propositions had got taken up, and the cream was off the business.” Keynes maintained that a reduction in the long-term borrowing rate to low levels would be required to stimulate investment after the collapse of the demand curve for investment. He suggested that the long-term borrowing rate has three components: (1) the pure expectations component, (2) the risk premium, and (3) the default premium.
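In notation (my shorthand, not Keynes' or Sutch's), the decomposition is:

\[
r^{\text{borrow}}_{\text{long}} = \underbrace{\mathbb{E}_t\big[\bar{r}^{\text{short}}\big]}_{\text{(1) pure expectations}} + \underbrace{\phi_t}_{\text{(2) risk premium}} + \underbrace{\delta_t}_{\text{(3) default premium}}
\]

Reason (3) above is then just the observation that policy can push the first two terms down while the default premium blows up, leaving the rate relevant for business decisions stuck.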

Sutch goes on to interpret the zero lower bound episodes of 1932, April 1934-December 1936, and April 1938-December 1939 according to Keynes' theory. He concludes that the primary impact of unconventional monetary policy on interest rates came through lowering expectations about the future path of rates rather than through changing the risk premiums on yields of different maturities--but this impact was very slow. Sutch also concludes that a similar interpretation of recent unconventional monetary policy is appropriate. He notes four main similarities between the Great Depression and the Great Recession:
...the collapse of demand for new fixed investment, the role of the zero lower bound in hampering conventional monetary policy, the multi-year period of near-zero short term rates, and the protracted period of subnormal prosperity during the respective recoveries. A major difference between then and now is that in the current situation the monetary authorities are actively pursuing large-scale purchases of long-term government securities and mortgage-backed assets. This is the primary monetary policy that Keynes advocated for a depressed economy at the zero lower bound. This policy was not attempted during the Great Depression and it is unclear whether the backdoor QE engineered by the Treasury was an adequate substitute. 
While the current monetary activism is to be welcomed, Quantitative Easing then and now appears to be slow acting. In both regimes recovery came only after multiple painful years during which uncertainty damped optimism. Improvement came only after multiple years during which many lives were seriously marred by unemployment and many businesses experienced or were threatened with bankruptcy...Keynes opened his series of Chicago lectures in 1931 expressing the fear that, just possibly, "… when this crisis is looked back upon by the economic historian of the future it will be seen to mark one of the major turning-points. For it is a possibility that the duration of the slump may be much more prolonged than most people are expecting and that much will be changed, both in our ideas and in our methods, before we emerge. Not, of course, the duration of the acute phase of the slump, but that of the long, dragging conditions of semislump, or at least subnormal prosperity which may be expected to succeed the acute phase." [Keynes 1931: 344]
If you are interested in reading narrative evidence from Keynes' writing, the entire working paper is worth your time.

Sunday, November 2, 2014

Guest Post: Estimating Monetary Policy Rules Around The Zero Lower Bound

I hope you enjoy this guest post contributed by Jon Hartley.

As the Federal Reserve moves closer to normalizing monetary policy and moving toward a federal funds rate “lift-off” date, I’ve created MonetaryPolicyRules.org, a new website that provides up-to-date interactive graphs of popular monetary policy rules.

Since the federal funds rate hit the zero lower bound, Taylor rules have received a lot of criticism, in large part because many Taylor rules prescribed negative nominal interest rates during and after the global financial crisis. Chicago Fed President (and prominent monetary policy scholar) Charles Evans said of the Taylor rule that “The rule completely breaks down during the Great Recession and its aftermath.”
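To see how a standard rule goes negative, here is a minimal sketch of the Taylor (1993) rule in Python; the 2009-style inputs are illustrative round numbers, not official estimates:

def taylor_1993(inflation, output_gap):
    # Taylor (1993): i = pi + 0.5*gap + 0.5*(pi - 2) + 2,
    # with a 2% inflation target and a 2% equilibrium real rate.
    return inflation + 0.5 * output_gap + 0.5 * (inflation - 2.0) + 2.0

print(taylor_1993(inflation=1.0, output_gap=-7.0))  # -1.0: a negative prescription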

The discretionary versus rules-based monetary policy debate endures, most recently with the Federal Reserve Accountability and Transparency Act being introduced in Congress, followed by a series of dueling Wall Street Journal op-eds by John Taylor and Alan Blinder. But the discussion has largely left out the question of whether, rather than serving as a prescription for monetary policy (in a normative economics sense), Taylor rules can accurately describe monetary policy regimes (in a positive economics sense) even if the central bank does not explicitly follow a stated rule.

Tim Duy has accurately pointed out in a recent post that, using the GDP and inflation forecasts provided by the FOMC for 2014 through 2017 (and beyond), no traditional monetary policy rule captures the median of the current fed funds rate forecasts (commonly known as the “dot plots,” released quarterly by the Federal Reserve as part of its Delphic forward guidance), which are considerably lower than what the Taylor (1993), Taylor (1999), Mankiw (2001), or Rudebusch (2009) rules would prescribe.

What’s also worth noting is that in the early-to-mid 2000s the federal funds rate was considerably lower than what any of the above classic monetary policy rules would prescribe. This is in large part because all of these rules were estimated using data from the “Great Moderation” of the 1990s, which was presided over by a very different Federal Reserve than we have today (note that those rules fit the federal funds effective rate data very accurately during the 1990s).
Source: Tim Duy
The real question is how we can estimate a monetary policy rule that describes the Bernanke-Yellen Fed while also addressing the problem of the zero lower bound for nominal interest rates.

One interesting idea that has gained some popularity recently is measuring a “shadow federal funds rate” (originally hypothesized by Fischer Black in a 1995 paper, published just before his death, which uses fed funds futures rates and an affine term structure model to back out a negative spot rate). The approach nicely estimates the potential effects of quantitative easing on long-term rates while the federal funds rate is at the zero lower bound (and for that reason I’ve included the Wu-Xia (2014) shadow fed funds rate on the site). With a shadow fed funds rate in hand, one can estimate a monetary policy rule with a standard OLS regression. One issue with this methodology is the lack of consensus about what to use as input data for a shadow rate; different choices can give very different results (Khan and Hakkio (2014) observed that the Wu-Xia (2014) shadow fed funds rate looks remarkably different from the rate calculated by Krippner (2014)).
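As a concrete illustration of that OLS step, here is a minimal sketch; the DataFrame columns (shadow_rate, core_cpi, unemp_gap) are placeholder names I am assuming, with the shadow rate spliced into the effective funds rate away from the bound:

import pandas as pd
import statsmodels.formula.api as smf

def estimate_shadow_rule(df: pd.DataFrame):
    # Taylor-type rule: shadow rate regressed on inflation and the unemployment gap.
    # Using the shadow rate as the dependent variable sidesteps the censoring
    # problem, since the shadow rate is allowed to go negative.
    return smf.ols("shadow_rate ~ core_cpi + unemp_gap", data=df).fit()

# res = estimate_shadow_rule(df); print(res.params)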

Wu-Xia (2014) and Krippner (2014) Shadow Federal Funds Rates (in %)
Source: Khan and Hakkio (2014), Federal Reserve Board of Governors, Krippner (2014), Wu-Xia (2014)

One other solution to the problem of estimating a monetary policy rule at the zero lower bound is an econometric one. Fortunately, we have Tobit regressions in our econometric toolbox (originally developed by James Tobin (1958)), which allow us to estimate Taylor rules while censoring the data at the zero lower bound.
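For readers who want to try this themselves, below is a minimal sketch of the censored maximum-likelihood step in Python. Statsmodels has no built-in Tobit, so the log-likelihood is coded directly; the data assembly and variable names are my own assumptions:

import numpy as np
from scipy import optimize, stats

def tobit_negll(params, X, y):
    # Left-censored Tobit: y_i = max(0, X_i @ beta + eps_i), eps_i ~ N(0, sigma^2).
    beta, log_sigma = params[:-1], params[-1]
    sigma = np.exp(log_sigma)  # parameterize log(sigma) to keep sigma positive
    xb = X @ beta
    ll = np.where(
        y <= 0.0,
        stats.norm.logcdf(-xb / sigma),                   # P(latent rate <= 0)
        stats.norm.logpdf((y - xb) / sigma) - log_sigma,  # density of observed rates
    )
    return -ll.sum()

def fit_tobit(X, y):
    # X should already include a constant column.
    start = np.zeros(X.shape[1] + 1)
    res = optimize.minimize(tobit_negll, start, args=(X, y), method="BFGS")
    return res.x[:-1], np.exp(res.x[-1])  # (beta, sigma)

# Example assembly (placeholder names):
# X = np.column_stack([np.ones(len(core_cpi)), core_cpi, unemp_gap])
# beta, sigma = fit_tobit(X, fed_funds_target)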

In my Taylor rule, estimated with federal funds rate data from the Bernanke-Yellen period and censored at the zero lower bound using a Tobit regression, I use both y/y core CPI inflation and unemployment. In another version, I use the Fed’s new Labor Market Conditions Index (LMCI) as the labor market indicator, though both yield relatively similar results*. Importantly, these estimates indicate that the Bernanke-Yellen Fed puts a much higher weight on the output/unemployment gap than the Mankiw (2001) rule estimated with data from the Greenspan period.

Tobit Taylor Rule Using Unemployment Rate and Core CPI as Inputs:
Federal Funds Target Rate = max{0, -0.43 + 1.2*(Core CPI y/y %) - 2.6*(Unemployment Rate - 5.6)}
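Evaluating the rule is then a one-liner; the inputs below are my illustrative examples, not SEP forecasts:

def tobit_rule(core_cpi_yoy, unemployment):
    # The estimated rule above, with the zero lower bound imposed by max{0, .}.
    return max(0.0, -0.43 + 1.2 * core_cpi_yoy - 2.6 * (unemployment - 5.6))

print(tobit_rule(core_cpi_yoy=1.8, unemployment=5.9))  # ~0.95 with illustrative inputs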

Using unemployment and inflation forecasts from the latest FOMC meeting’s Survey of Economic Projections, I feed these data into the Tobit rule and the Mankiw (2001) rule. I then match the federal funds rate targets implied by the rules against the median federal funds forecasts provided by both the Federal Reserve “dot plots” and the Survey of Primary Dealers (note: the expected fed funds rate path from the Survey of Primary Dealers has fallen significantly below the Fed dot plot path, as noted by Christensen (2014)). Unfortunately, we do not have precise data on which dots belong to which Fed officials (otherwise, we could try to construct a Taylor rule for each Fed president and board member). Compared to the Mankiw (2001) rule, the estimated Tobit rule much better matches the median forecasts provided by the dot plots and the Survey of Primary Dealers.

Federal Reserve Forward Guidance/Survey of Primary Dealers Fed Funds Rate Forecasts versus Tobit Rule and Mankiw (2001) Rule Using Fed Unemployment and CPI Forecasts


Janet Yellen has spoken fondly of the Taylor (1999) rule, stating in a 2012 speech that “[John] Taylor himself continues to prefer his original rule, which I will refer to as the Taylor (1993) rule. In my view, however, the later variant--which I will refer to as the Taylor (1999) rule--is more consistent with following a balanced approach to promoting our dual mandate.”

It is no surprise that the Tobit rule estimated with more recent data comes much closer to accurately describing the Fed’s forward guidance than the Taylor (1993) rule. However, what is really interesting is that the Tobit Rule is also much closer to describing the Fed’s current forward guidance than the Taylor (1999) rule, which remains far off.

Footnote:
*An important issue, which the introduction of the Fed’s new Labor Market Conditions Index (LMCI) has recently addressed, is how we measure improvement (or lack thereof) in the labor market. While the U.S. unemployment rate for September was 5.9% (the lowest level since July 2008), the figure fails to capture a number of fractures in the economy that are not reflected in the unemployment rate. One item included in the LMCI (but not reflected in the unemployment rate) is the high U-6 unemployment rate (which counts individuals who are underemployed, working part-time for economic reasons when they would rather have full-time jobs), currently at 11.8%. Another is wage growth that remains subdued and not commensurate with the drops in the unemployment rate that history would suggest. The labor force participation rate is at a historical low of 62%, in large part due to the number of retirements (a secular demographic trend) and to some extent due to discouraged workers (a cyclical trend), according to a recent Philly Fed study.

A previous post on this blog astutely points out that the correlation of 12-month changes in the LMCI with 12-month changes in the unemployment rate is -0.96, suggesting that “the LMCI doesn’t tell you anything that the unemployment rate wouldn’t already tell you”. The economists who developed the LMCI report the correlations of 12-month changes on the Fed’s website, which capture the tendency of large 12-month movements to go together; I would argue that this is an accurate picture of long-term labor market trends, but that the correlation of monthly changes is a more accurate representation of the extent to which the measures move together in small short-term labor market movements. At the monthly frequency, the LMCI has a -0.82 correlation with the unemployment rate, suggesting that the LMCI is not completely redundant for short-term labor market movements: it incorporates some parts of the mixed economic narrative told by dampened wage growth, low labor force participation, and a high number of underemployed part-timers.
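For anyone who wants to check both horizons, here is a minimal pandas sketch; the monthly columns lmci and unrate are placeholder names I am assuming:

import pandas as pd

def horizon_correlations(df: pd.DataFrame) -> dict:
    return {
        # Long-horizon comovement: correlation of 12-month changes.
        "12m_changes": df["lmci"].diff(12).corr(df["unrate"].diff(12)),
        # Short-horizon comovement: correlation of month-over-month changes.
        "1m_changes": df["lmci"].diff(1).corr(df["unrate"].diff(1)),
    }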

Tuesday, October 21, 2014

Soda Calories and the Ethics of Nudge

The “surprisingly simple way to get people to stop buying soda,” according to a new and highly hyped study, is to tell them how long they will have to run in order to burn off the calories in the soda. Since so many of my friends are economists, runners, or both, articles about this study ended up plastered all over my Facebook and Twitter pages, followed by a lot of commentary and debate. Some of the issues that came up included paternalism, the identification of social goods and ills, the replicability of field experiments on a larger scale or longer time frame, and the ethics of nudge policies.

My friend and classmate David Berger has allowed me to share his take:
Alright, this keeps coming up: public health types saying stupidly pessimistic things about the number of hours you have to run to burn off x amount of calories. Friends, it's easy to deceive yourself about how many calories you actually burn doing cardio. And then there's a whole bunch of people--treadmill manufacturers, for example--who want to inflate the numbers. But this trend of public health know-it-alls using the most pessimistic calculations needs to stop. It's just wrong. These people convinced teenagers that it would take 50 minutes of running to burn off one soda. They must be targeting non-runners. 
Actually, who they are targeting is baffling. They base their calculations on the activity-energy equivalents for a 110 pound fifteen year old. Nowhere do they indicate pace, although when I use the calculator on Runner's World to give a 110 pound person a 15 min/mile pace (a pace walkers in comfortable clothing can manage), it gives me 277 calories for 50 minutes. 
Let's do a real calculation. If you are less worried about 110 pounders drinking soda, try a 200 pounder. Let's give the same walking pace of 15 min/mile, and keep it at 50 minutes. 504 calories. Alternately, that's 25 minutes to work off a soda. 
Or, suppose you expect someone to aspire to something, and someone reading this public service message to know the difference between walking and running, and to be able to determine whether they can maintain a running pace for 50 minutes. I won't even make them a good runner, just a 12:30 min/mile, which someone starting out can manage in most cases. The same 50 minutes goes up to 605 calories. Granted 50 minutes might be much for someone starting out, but then they only need 21 minutes to manage a 250 calorie soda. 
Now, calories/hour will be lower if you weigh less than 200 pounds. But then you will probably be able to run faster than 12:30. I understand this push society is making overall: there's too much false hope in the ability of exercise to compensate for constantly immoderate caloric choices, and there's too much acceptance of empty nutrition (like soda) as a source of calories. But if the next heavy-handed social tactic is outright lying, can we please just not?