Monday, March 20, 2017

Do policymakers need more advice from sociologists?

Sociologists sometimes feel neglected. And this can cause them to complain - about economists. Economics is a successful social science. While most social sciences do well in attracting undergraduate students, economics has performed well off campus. An undergrad economics major pays well, and an economics PhD provides entry into plentiful high-paying jobs on Wall Street, and in government, central banks, and academia. As well, policymakers care what economists think, and seek their advice (current occupants of the executive branch excepted).

But life isn't so easy for the downtrodden sociologist, which has led to writing like this piece in the Journal of Economic Perspectives. I discussed that article in this blog post. Seems the upshot of the authors' critique is that economics would be much better if it adopted ideas from other social sciences - sociology in particular.

Economics is hardly perfect. To move forward as a science, we need to be objective, heartlessly self-critical, and outward-looking - ready to absorb new and useful ideas from other fields. So what does sociology have to offer us? Neil Irwin, at the New York Times, suggests that there are some key ideas in sociology that economists, and policymakers, are not absorbing. Irwin says:
They say when all you have is a hammer, every problem looks like a nail. And the risk is that when every policy adviser is an economist, every problem looks like inadequate per-capita gross domestic product. Another academic discipline may not have the ear of presidents but may actually do a better job of explaining what has gone wrong in large swaths of the United States and other advanced nations in recent years.
So, (i) things have gone wrong in the US and elsewhere. (ii) We may be ignoring better explanations for these things than what economists are supplying.

That other academic discipline with the explanations is sociology, of course. Neil then interviews a sociologist, to get her perspective:
“Once economists have the ears of people in Washington, they convince them that the only questions worth asking are the questions that economists are equipped to answer,” said Michèle Lamont, a Harvard sociologist and president of the American Sociological Association. “That’s not to take anything away from what they do. It’s just that many of the answers they give are very partial.”
So apparently we have been somewhat conspiratorial, whispering in the ears of the Washington elite that economics is it - and all the time neglecting some important stuff. But what exactly are we missing?

The rest of Neil's article details what he views as important contributions of sociology, that help us understand current problems:

1. “Wages are very important because of course they help people live and provide for their families,” said Herbert Gans, an emeritus professor of sociology at Columbia. “But what social values can do is say that unemployment isn’t just losing wages, it’s losing dignity and self-respect and a feeling of usefulness and all the things that make human beings happy and able to function.”

2. Jennifer M. Silva of Bucknell University has in recent years studied young working-class adults and found a profound sense of economic insecurity in which the traditional markers of reaching adulthood — buying a house, marrying, landing a steady job — feel out of reach.

3. “Evicted,” a much-heralded book by the Harvard sociologist Matthew Desmond, shows how the ever-present risk of losing a home breeds an insecurity and despondency among poor Americans.

4. ...a large body of sociological research touches on the idea of stigmatization, including of the poor and of racial minorities. It makes clear that there are harder problems to solve around these issues than simply eliminating overt discrimination.

So, unemployed people feel really bad, young people worry about the future, poverty is horrible, and stigma exists. I would hope that most people would know these things, and that they shouldn't need sociologists to point out the importance of these observations. But, if the role of sociologists is to inform otherwise-oblivious people about this stuff, then good for them.

But we're looking for something more, I think. Surely sociologists have ideas about solutions to these problems that they have spent so much time studying? Well, no.
And trying to solve social problems is a more complex undertaking than working to improve economic outcomes. It’s relatively clear how a change in tax policy or an adjustment to interest rates can make the economy grow faster or slower. It’s less obvious what, if anything, government can do to change forces that are driven by the human psyche.
Apparently sociology is so much harder than economics that sociologists are bereft of solutions. And no one's asking them anyway, so why bother?
But there is a risk that there is something of a vicious cycle at work. “When no one asks us for advice, there’s no incentive to become a policy field,” Professor Gans said.
Still glad to be an economist, I think.

Sunday, March 19, 2017

What is full employment anyway, and how would we know if we are there?

What are people talking about when they say "full employment"? Maybe they don't know either? Whatever it is, "full employment" is thought to be important for policy, particularly monetary policy. Indeed, it typically enters the monetary policy discussion as "maximum employment," the second leg of the Fed's dual mandate - the first leg being "price stability."

Perhaps surprisingly, there are still people who think the US economy is not at "full employment." I hate to pick on Narayana, but he's a convenient example. He posted this on his Twitter account:
Are we close to full emp? In steady state, emp. growth will be about 1.2M per year. It's about *twice* that in the data. (1) Employment is growing much faster than long run and inflation is still low. Conclusion: we're well below long run steady state. end
Also in an interview on Bloomberg, Narayana gives us the policy conclusion. Basically, he thinks there is still "slack" in the economy. My understanding is that "slack" means we are below "full employment."

So what is Narayana saying? I'm assuming he is looking at payroll employment - the employment number that comes from the establishment survey. In his judgment, in a "steady state," which for him seems to mean the "full employment" state, payroll employment would be growing at 1.2M per year, or 100,000 per month. But over the last three months, the average increase in payroll employment has exceeded 200,000 per month. So, if we accept all of Narayana's assumptions, we would say the US economy is below full employment - it has some catching up to do. According to Narayana, employment can grow for some time in excess of 100,000 jobs per month, until we catch up to full employment, and monetary policy should help that process along by refraining from interest rate hikes in the meantime.

Again, even if we accept all of Narayana's assumptions, we could disagree about his policy recommendation. Maybe the increase in the fed funds rate target will do little to impede the trajectory to full employment. Maybe it takes monetary policy a period of time to work, and by the time interest rate hikes have their effect we are at full employment. Maybe the interest rate hikes will allow the Fed to make progress on other policy goals than employment. But let's explore this issue in depth - let's investigate what we know about "full employment" and how we would determine from current data if we are there or not.

Where does Narayana get his 1.2M number from? Best guess is that he is looking at demographics. The working age population in the United States (age 15-64) has been growing at about 0.5% per year. But labor force participation has grown over time since World War II, and later cohorts have higher labor force participation rates. For example, the labor force participation rate of baby-boomers in prime working age was higher than the participation rate of the previous generation in prime working age. So, this would cause employment growth to be higher than population growth. That is, Narayana's assumptions imply employment growth of about 0.8% per year, which seems as good a number as any. Thus, the long-run growth path for the economy should exhibit a growth rate of about 0.8% per year - though there is considerable uncertainty about that estimate.
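The arithmetic connecting that 0.8% growth rate to Narayana's 1.2M figure is easy to check. A minimal sketch (the ~145M payroll employment base is my assumption, roughly the early-2017 establishment-survey level; it is not stated in the post):

```python
# Back-of-the-envelope check of the steady-state employment number.
# The payroll employment base is an assumption (~145M, roughly the
# early-2017 establishment-survey level).
payroll_employment = 145_000_000
steady_state_growth = 0.008  # 0.8% per year, from the demographics above

jobs_per_year = payroll_employment * steady_state_growth
jobs_per_month = jobs_per_year / 12

print(f"{jobs_per_year / 1e6:.2f}M jobs per year")    # ~1.16M, close to 1.2M
print(f"{jobs_per_month / 1e3:.0f}K jobs per month")  # ~97K, close to 100K
```

So 0.8% growth on the current employment base gets you to roughly 1.2M jobs per year, or about 100,000 per month, which is where the comparison with recent 200,000-plus monthly gains comes from.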

But, we measure employment in more than one way. This chart shows year-over-year employment growth from the establishment survey, and from the household survey (CPS):
For the last couple of years, employment growth has been falling on trend, by both measures. But currently, establishment-survey employment is growing at 1.6% per year, and household-survey employment is growing at 1.0% per year. The latter number is a lot closer to 0.8%. The establishment survey is what it says - a survey of establishments. The household survey is a survey of people. The advantages of the establishment survey are that it covers a significant fraction of all establishments, and reporting errors are less likely - firms generally have a good idea how many people are on their payrolls. But the household survey has broader coverage of the population (it includes the self-employed, for example), and it's collected in a manner consistent with the unemployment and labor force participation data - that's all from the same survey. There's greater potential for measurement error in the household survey, as people can be confused by the questions they're asked. You can see that in the noise in the growth rate data in the chart.

Here's another interesting detail:
This chart looks at the ratio of household-survey employment to establishment-survey employment. Over long periods of time, these two measures don't grow at the same rate, due to changes over time in the fraction of workers who are in establishments vs. those who are not. For long-run employment growth rates, you should put more weight on the household survey number (as this is a survey of the whole working-age population), provided of course that some measurement bias isn't creeping into the household survey numbers over time. Note that, since the recession, establishment-survey employment has been growing at a significantly higher rate than household-survey employment.

So, I think that the conclusion is that we should temper our view of employment growth. Maybe it's much closer to a steady state rate than Narayana thinks.

But, on to some other measures of labor market performance. This chart shows the labor force participation rate (LFPR) and the employment-population ratio (EPOP).
Here, focus on the last year. LFPR is little changed, increasing from 62.9% to 63.0%, and the same is true for EPOP, which increased from 59.8% to 60.0%. That looks like a labor market that has settled down, or is close to it.

A standard measure of labor market tightness that labor economists like to look at is the ratio of job vacancies to unemployment, here measured as the ratio of the job openings rate to the unemployment rate:
So, by this measure the labor market is at its tightest since 2001. Job openings are plentiful relative to would-be workers.

People who want to argue that some slack remains in the labor market will sometimes emphasize unconventional measures of the unemployment rate:
In the chart, U3 is the conventional unemployment rate, and U6 includes marginally attached workers (those not in the labor force who may be receptive to working) and those employed part-time for economic reasons. The U3 measure, at 4.7%, is not so far from its previous trough of 4.4% in March 2007, while the gap between current U6, at 9.2%, and its previous trough, at 7.9% in December 2006, is larger. Two caveats here: (i) How seriously we want to take U6 as a measure of unemployment is an open question. There are problems even with conventional unemployment measures, in that we do not measure the intensity of search - one person's unemployment is different from another's - and survey participants' understanding of the questions they are asked is problematic. The first issue is no worse a problem for U6 than for U3, but the second issue is assuredly worse. For example, it's not clear what "employed part time for economic reasons" means to the survey respondent, or what it should mean to the average economist. Active search, as measured in U3, has a clearer meaning from an economic point of view than an expressed desire for something one does not have - non-satiation is ubiquitous in economic systems, and removing it is just not feasible. (ii) What's a normal level for U6? Maybe the U6 measure in December 2006 was undesirably low, due to what was going on in housing and mortgage markets.
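The U3-versus-U6 comparison in that paragraph is just a difference of troughs; spelled out in percentage points:

```python
# U3 vs. U6: distance from each measure's pre-recession trough,
# using the figures quoted in the post.
u3_now, u3_prev_trough = 4.7, 4.4   # trough in March 2007
u6_now, u6_prev_trough = 9.2, 7.9   # trough in December 2006

u3_gap = u3_now - u3_prev_trough
u6_gap = u6_now - u6_prev_trough

print(f"U3 gap: {u3_gap:.1f}pp")  # 0.3pp
print(f"U6 gap: {u6_gap:.1f}pp")  # 1.3pp
```

The U6 gap is about a percentage point wider, which is the whole basis for the "remaining slack" reading, hence the two caveats about what U6 actually measures.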

Another labor market measure that might be interpreted as indicating labor market slack is long term unemployment (unemployed 27 weeks or more) - here measured as a rate relative to the labor force:
This measure is still somewhat elevated relative to pre-recession times. However, if we look at short term unemployment (5 weeks or less), this is unusually low:
As well, the insured unemployment rate (those receiving unemployment insurance as a percentage of the labor force) is very low:
To collect UI requires having worked recently, so this reflects the fact that few people are being laid off - transitions from employment to unemployment are low.

An interpretation of what is going on here is that the short-term and long-term unemployed are very different kinds of workers. In particular, they have different skills. Some skills are in high demand, others are not, and those who have been unemployed a long time have skills that are in low demand. A high level of long-term unemployed is consistent with elevated readings for U6 - people may be marginally attached or wanting to move from part-time to full-time work for the same reasons that people have been unemployed for a long time. What's going on may indicate a need for a policy response, but if the problem is skill mismatch, that's not a problem that has a monetary policy solution.

So, if the case someone wants to make is that the Fed should postpone interest rate increases because we are below full employment - that there is still slack in the labor market - then I think that's a very difficult case to make. We could argue all day about what an output gap is, whether this is something we should worry about, and whether monetary policy can do much about an output gap, but by conventional measures we don't seem to have one in the US at the current time. In terms of raw economic performance (price stability aside), there's not much for the Fed to do at the current time. Productivity growth is unusually low, as is real GDP growth, but if that's a policy problem, it's in the fiscal department, not the monetary department.

But there is more to Narayana's views than the state of the labor market. He thinks it's important that inflation is still below the Fed's target of 2%. Actually, headline PCE inflation, which is the measure specified in the Fed's longer-run goals statement, is essentially at the target, at 1.9%. I think what Narayana means is that, given his Phillips-curve view of the world, if we are close to full employment, inflation should be higher. In fact, the long-run Fisher effect tells us that, after an extended period of low nominal interest rates, the inflation rate should be low. Thus, one might actually be puzzled as to why the inflation rate is so high. We know something about this, though. Worldwide, real rates of interest on government debt have been unusually low, which implies that, given the nominal interest rate, inflation will be unusually high. But, this makes Narayana's policy conclusion close to being correct. The Fed is very close to its targets - both legs of the dual mandate - so why do anything?

A neo-Fisherian view says that we should increase (decrease) the central bank's nominal interest rate target when inflation is too low (high) - the reverse of conventional wisdom. But maybe inflation is somewhat elevated by increases in the price of crude oil, which have since somewhat reversed themselves. So, maybe the Fed's nominal interest rate target should go up a bit more, to achieve its 2% inflation target consistently.

Though Narayana's reasoning doesn't lead him in a crazy policy direction, it would do him good to ditch the Phillips curve reasoning - I don't think that's ever been useful for policy. If one had (I think mistakenly) taken Friedman to heart (as appears to be the case with Narayana), we might think that unemployment above the "natural rate" should lead to falling inflation, and unemployment below the natural rate should lead to rising inflation. But, that's not what we see in the data. Here, I use the CBO's measure of the natural rate of unemployment (quarterly data, 1990-2016):
According to standard Friedman Phillips-curve logic, we should see a negative correlation in the chart, but the correlation is essentially zero.
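A minimal version of this correlation check looks like the following. The series here are made-up stand-ins, not the actual data; the real exercise would use quarterly inflation changes and the unemployment gap built from the CBO natural-rate series, 1990-2016:

```python
# Sketch of the accelerationist Phillips-curve check described above.
# Friedman's logic predicts corr(u - u*, change in inflation) < 0;
# the post reports a correlation of about zero in the actual data.
# These arrays are synthetic placeholders, independent by construction.
import numpy as np

rng = np.random.default_rng(0)
n = 108                                # quarters, 1990-2016
u_gap = rng.normal(0.0, 1.5, n)        # unemployment minus CBO natural rate
d_inflation = rng.normal(0.0, 0.8, n)  # quarterly change in inflation

corr = np.corrcoef(u_gap, d_inflation)[0, 1]
print(f"correlation: {corr:.2f}")
```

With the real series substituted in, a clearly negative `corr` would support the Friedman story; a correlation near zero, as the chart shows, does not.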

Friday, March 3, 2017

What's Up With Inflation?

In the past year, inflation rates have increased in a number of countries. Are these increases temporary or permanent? What do they imply for monetary policy?

One of the more stark turnarounds in inflation performance is in Sweden. To see what is going on, it helps to look at CPI levels:
Sweden had four years of inflation close to zero, but in the last year prices have been increasing, and the current 12-month inflation rate is 1.4%, which is getting much closer to the Riksbank's 2% inflation target. The path - actual, and projected by the Riksbank - for the Riksbank's policy interest rate looks like this:
So, the Riksbank's policy rate has been negative for about two years, and it envisions negative rates for the next two years.

For the Euro area, the story is similar. Here's the Euro area CPI:
The difference here, from Sweden, is that the inflation rate has been at zero for only three years, but you can see a similar increase in the inflation rate in the last year, with the current 12-month Euro area inflation rate at 1.6%. And the overnight interest rate in the Euro area looks like this:
As you can see, overnight nominal interest rates entered negative territory in 2015, and went significantly negative last year.

Another instance of increasing inflation is the UK:
In this case, it's more like two years of inflation close to zero, followed by an increase in 2016. The current 12-month inflation rate in the UK is 1.9%. The Bank of England's policy interest rate was targeted at 0.50% from March 2009 to August 2016, when it was reduced to 0.25%.

What's the most likely cause of the increase in the inflation rate in these three countries? I don't think we have to look far:
Crude oil prices fell from $100-110 in mid-2014 to about $30 in January 2016, and have since increased to $50-55. While changes in relative prices should not matter in the long run for inflation, a strong regularity is that, in the short run, shocks that cause large relative price movements are reflected in changes in inflation rates. As is well-known, that's particularly the case for crude oil prices, which are highly volatile and tend to move aggregate price indices in the same direction.

The timing certainly seems to suggest that oil price increases are responsible for the increase in inflation in Sweden, the Euro area, and the UK. But, of course, the monetary policies of the Riksbank, the ECB, and the Bank of England were specifically designed to increase inflation. These policies included low or negative nominal interest rate targets and large-scale asset purchases by the central bank. Most of the central bankers involved tend to subscribe to a simple Keynesian story: inflation expectations are fixed (i.e. "anchored"); lower interest rates reduce real rates of interest, which increase spending, output, and employment; inflation increases through a Phillips curve effect. Why can't we always see these effects? True believers might appeal to "long and variable lags" and a "flat Phillips curve." Those are dodges, I think. IS/LM/Phillips curve isn't a helpful framework for thinking about monetary policy if I have to worry about whether it's going to take six weeks or six years for monetary policy to work, or if I need to be concerned whether the Phillips curve is just resting, flat, sloping the wrong way, or deceased.

Further, there are countries in which extreme forms of negative interest rate monetary policy and a large central bank balance sheet don't appear to have moved inflation in the desired direction. One is Switzerland. Here's the Swiss CPI:
So, the recent history in Switzerland is one of unabated trend deflation. And overnight interest rates in Switzerland look like this:
Next, since April 2013 the Bank of Japan has resorted to every trick in the book (massive quantitative easing, low and negative nominal interest rates) to get inflation up to 2%. Here's the result:
I've told this story before, but it bears repeating. Since the BoJ's easing program began in April 2013, most of the increase in the CPI level has been due to an increase of three percentage points in the consumption tax in April 2014 in Japan. Inflation has averaged about zero for almost three years.

What's the conclusion? For all these countries, recent data is consistent with the view that persistently low nominal interest rates do not increase inflation - this just makes inflation low. If a central bank is persistently undershooting its inflation target, the solution - the neo-Fisherian solution - is to raise the nominal interest rate target. Undergraduate IS/LM/Phillips curve analysis may tell you that central banks increase inflation by reducing the nominal interest rate target, but that's inconsistent with the implications of essentially all modern mainstream macroeconomic models, and with recent experience.

But, even if we recognize the importance of Fisher effects, that will not make inflation control easy. (i) Shocks to the economy - for example large changes in the relative price of crude oil - can push inflation off track. (ii) The long-run real rate of interest is not a constant. As is now widely-recognized, the real rate of return on government debt, particularly in the United States, has trended downward for the last 35 years or so, and shows no signs that it will increase. By Fisherian reasoning, a persistently low real interest rate implies that the short-term nominal interest rate consistent with 2% inflation is much lower than it once was. But what's the best guess for the appropriate nominal interest rate currently, in the United States? Here's the inflation rate, and the 3-month T-bill rate in the United States (I'm using the T-bill rate to avoid questions as to what overnight rate we should be looking at):
So, the Fed's preferred measure of inflation (raw PCE inflation), at 1.9%, is very close to its target of 2%, after two interest rate hikes (in December 2015 and December 2016), which some claimed would reduce inflation and/or push the economy off a cliff. Looking at this same data in another way, subtract the 12-month inflation rate from the 3-month T-bill rate to get a measure of the real interest rate:
So, from mid-2012 to mid-2014, the real interest rate averaged about -1.3%, before oil prices fell. Now that the price of crude oil has again increased somewhat, the real interest rate is back in that ballpark again, with inflation close to the 2% target. So, what would a neo-Fisherian do? (i) There's no good reason to think that oil prices will keep going up, so the effects of the recent oil price increases should dissipate. So, with no change in monetary policy, we might expect a small reduction in the PCE inflation rate. (ii) There's no good reason to anticipate an increase in the long-run real rate of return on government debt, given our knowledge of what makes the real interest rate low (a shortage of safe assets, low average productivity growth). Therefore, a neo-Fisherian inflation-targeting policy maker might want another 1/4 point increase in the fed funds target, but not much more.
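The Fisherian arithmetic behind that policy conclusion is simple enough to write down. A sketch, using the post's own numbers (the -1.3% figure is the average real T-bill rate from mid-2012 to mid-2014, taken as an estimate of the long-run real rate):

```python
# Long-run Fisher relation: i = r* + pi.
# r* is the post's estimate of the long-run real rate on short-term
# government debt; pi is the Fed's 2% inflation target.
r_star = -1.3            # percent
inflation_target = 2.0   # percent

nominal_rate_for_target = r_star + inflation_target
print(f"nominal rate consistent with 2% inflation: {nominal_rate_for_target:.1f}%")
```

A short-term nominal rate of roughly 0.7% is consistent with hitting the 2% target, which is why the conclusion is "another 1/4 point increase in the fed funds target, but not much more."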

Tuesday, February 21, 2017

Tim Fuerst

I was sad to learn today that Tim Fuerst has passed away. Tim received his PhD from Chicago in 1990, was the William and Dorothy O'Neill Professor at the University of Notre Dame, and had a long relationship with the Cleveland Fed. Tim was one of the most enthusiastic human beings I have ever met. He did pathbreaking work on liquidity effects, and his joint work with Chuck Carlstrom at the Cleveland Fed was very influential. It's tragic to lose such a productive researcher and teacher at the peak of his career. Here's an article on Tim in the University of Chicago Magazine.

Monday, February 13, 2017

Balance Sheet Blues

We're starting to hear some public discussion about Fed balance sheet reduction. For example, Jim Bullard has spoken about it, and Ben Bernanke has written about it.

Balance sheet reduction is part of the FOMC's "Policy Normalization Principles and Plans." Quoting from scripture:
The Committee intends to reduce the Federal Reserve's securities holdings in a gradual and predictable manner primarily by ceasing to reinvest repayments of principal on securities held in the SOMA.
The normalization plan also states that balance sheet reduction will occur after interest rate increases happen, and that no outright sales of assets in the Fed's portfolio are anticipated. Thus, since we have now seen two increases in the target range for the fed funds rate - in December 2015 and December 2016 - it would be understandable if people were anticipating some consideration of the issue by the Fed in the near future.

What's at stake here? The Fed engaged in several rounds of large scale asset purchases, beginning in late 2008, and continuing through late 2014, which served to more than quadruple the size of the Fed's balance sheet:
But the Fed's assets, which now consist primarily of long-maturity Treasury securities and mortgage-backed securities, mature over time. As the assets mature, the size of the Fed's asset portfolio will fall naturally, and Fed liabilities will be retired. But that isn't happening, because the FOMC instituted a "reinvestment" policy in August 2010, and that policy has continued to the present day. Under reinvestment, assets are replaced as they mature, the result being that the size of the portfolio stays roughly constant, in nominal terms. The Fed's normalization plans state that reinvestment will stop eventually, but there are different ways to phase it out. For example, reinvestment could stop abruptly, or the Fed could somehow smooth the transition.
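The mechanics of reinvestment versus runoff can be illustrated with a toy simulation. The maturity profile here is made up (assume 10% of the portfolio matures each year); the point is only the contrast between the two policies:

```python
# Toy illustration of reinvestment vs. letting the portfolio run off.
# Both the 10% annual maturing share and the 5-year horizon are
# hypothetical; the starting size is roughly the post-QE balance sheet.
portfolio = 4.5e12       # dollars
maturing_share = 0.10    # hypothetical fraction maturing each year

with_reinvestment = portfolio
without_reinvestment = portfolio
for year in range(5):
    # without reinvestment, maturing assets roll off and the matching
    # Fed liabilities are retired
    without_reinvestment -= without_reinvestment * maturing_share
    # with reinvestment, maturing assets are replaced, so no change

print(f"after 5 years, runoff:       ${without_reinvestment / 1e12:.2f}T")
print(f"after 5 years, reinvestment: ${with_reinvestment / 1e12:.2f}T")
```

Under reinvestment the nominal size of the portfolio just sits there, which is why ending reinvestment, abruptly or gradually, is the whole substance of the balance sheet normalization question.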

But, if Ben Bernanke were still Fed Chair, he would be postponing balance sheet reduction. Why? As stated in his post, Bernanke thinks that:

1. There is uncertainty about the effects of ending reinvestment, so the Fed should wait until the fed funds rate is higher, giving it a larger margin to make corrections if something goes wrong with balance sheet reduction.
2. Because the demand for currency is growing over time, this reduces the amount of balance sheet reduction that would be required to return to the pre-financial crisis state of affairs in which reserves are close to zero.
3. There may be good reasons to continue operating a floor system, under which the interest rate on reserves determines overnight interest rates. But, according to Bernanke, it takes a heap of reserves to operate a floor system, giving us another reason to think that the ultimate balance sheet reduction required for normalization isn't so large.

Perhaps the most curious aspect of Bernanke's piece is the absence of any explanation of how quantitative easing (QE) is supposed to work. However, it's easy to find a stated rationale for QE in Bernanke's earlier public statements as Fed Chair, for example in his 2012 Jackson Hole speech. Basically, Bernanke argues that QE affects asset prices because of imperfect substitution among assets. Thus, for example, swaps of reserves for long-maturity Treasuries will increase the prices of long-maturity Treasuries, reduce long bond yields, and flatten the yield curve, according to Bernanke. He also claims there is solid empirical evidence supporting this theory. In Bernanke's mind, then, QE is just another form of "monetary accommodation," which substitutes for reductions in the policy interest rate when such interest rate reductions are not on the table. The reductions in long bond yields should, in Bernanke's view, increase real economic activity and increase inflation.

Actually, the evidence that QE works as intended is pretty sketchy. For the most part, the empirical work consists of event studies - isolate an announcement window for a policy change, then look for movements in asset prices in response. There's also some regression evidence, but essentially nothing (as far as I know) in terms of structural econometric work, i.e. work that is explicit about the theory in a way that allows us to quantify the effects. But surely, since QE has been in use for so long a time, it should have found a place in models that are widely used for policy analysis. Indeed, a paper by David Reifschneider at the Board of Governors uses the Board's large scale macroeconometric FRB/US model to conduct a policy exercise that, in part, evaluates the efficacy of QE. The conclusion is:
...model simulations of a severe recession suggest that policymakers would be able to use a combination of federal funds rate cuts, forward guidance, and asset purchases to replicate (and even improve upon) the economic performance that hypothetically would occur were it possible to ignore the zero lower bound on interest rates and cut short-term interest rates as much as would be prescribed by a fairly aggressive policy rule.
So, that's consistent with Bernanke's post. Bernanke argues that it was necessary to use QE in 2008 and after because the Fed was constrained by the zero lower bound on nominal interest rates. More accommodation was needed, according to Bernanke, and QE provided that accommodation. Reifschneider says that's exactly what the FRB/US model tells us. QE (along with forward guidance) effectively relaxes the zero lower bound constraint. That is, QE is just like a decrease in the target for the policy interest rate.

Is the FRB/US model the right laboratory for an assessment of the efficacy of QE? Reifschneider says:
For several reasons, FRB/US is well-suited for studying this issue. For one, it provides a good empirical description of the current dynamics of the economy, including the low sensitivity of inflation to movements in real activity. In addition, the model has a detailed treatment of the ways in which monetary policy affects spending and production through changes in financial conditions, including movements in various longer-term interest rates, equity prices, and the foreign exchange value of the dollar.
So, Reifschneider hasn't really told us why using this model is the right thing to do in this circumstance, but he's told us something about how the FRB/US model works. You can read about the FRB/US model here, and even figure out how to run it yourself if you have the inclination. The FRB/US model is a descendant of the FRB/MIT/Penn model, which existed circa 1970. In fact, if we could resurrect Lawrence Klein and show him the FRB/US model, I'm sure he would recognize it. In spite of the words in the FRB/US documentation that make it appear as if the model builders took to heart the lessons of post-1970s macroeconomics, FRB/US is basically an extended IS/LM/Phillips curve model - without the LM. Monetary policy is transmitted, as Reifschneider tells us in the above quote, through asset prices. So how would one use such a model to capture the effects of QE?
... the model’s asset pricing formulas provide a way for long-term interest rates and other financial factors to respond to shifts in term premiums induced by the Federal Reserve’s large-scale asset purchases.
So, in the simulation, QE is assumed to work through a "term premium" effect on asset prices - a flattening of the yield curve. But how large a change in the term premium results from a purchase of $x in assets by the Fed? That's in footnote 8:
The effects of asset purchases on term premiums used in this study are calibrated to be consistent with the estimates reported in Ihrig et al (2012) and Engen, Laubach and Reifschneider (2015) for the second and third phases of the Federal Reserve’s large-scale asset purchase programs, both of which involved buying assets of a longer average maturity (and thus a larger term premium effect) than the original phase. Specifically, the simulations reported here assume that announcing the purchase of an additional $500 billion in longer-term Treasury securities causes an immediate 20 basis point drop in the term premium embedded in the yield on the 10-year Treasury note; for yields on the 5-year Treasury note and the 30-year Treasury bond, the initial decline is assumed to be 17 basis points and 7 basis points, respectively. Thereafter, the downward pressure on term premiums is assumed to decline geometrically at 5 percent per quarter; this rate would be consistent with the Federal Reserve using reinvestments to maintain the size of its portfolio at its new, higher level for several years, and then allowing it to shrink passively by suspending reinvestment.
So, that's quite indirect. In the FRB/US model there are no central bank balance sheet variables. There are equations that capture the relationships among interest rates and asset prices, but there are no asset quantities. Thus, it's impossible to use the model to address directly the question: "What happens if the Fed purchases $600 billion in 10-year Treasury bonds?" To answer the question we have to do it indirectly. What Reifschneider has done is to work with what he's got, which is event studies and regression evidence. This gives him an estimate of the effect of Fed asset purchases on term premia, and he plugs that into the asset pricing relationships in the model. Maybe you're OK with that, but I don't trust it.
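To make the footnote-8 calibration concrete, here is a minimal sketch of the assumed term-premium path: a $500 billion purchase announcement lowers the 10-year term premium by 20 basis points on impact, with the effect decaying geometrically at 5 percent per quarter. The linear scaling to other purchase sizes is my assumption for illustration, not something the footnote spells out.

```python
# Term-premium path as calibrated in Reifschneider's footnote 8: a $500 billion
# purchase announcement lowers the 10-year term premium by 20 basis points on
# impact, and the effect decays geometrically at 5% per quarter.
# Scaling linearly in the purchase size is an assumption made here for illustration.

def term_premium_effect(purchase_billions, quarters, impact_bp_per_500b=20.0, decay=0.05):
    """Term-premium effect (basis points) in each quarter after the announcement."""
    impact = impact_bp_per_500b * (purchase_billions / 500.0)
    return [impact * (1.0 - decay) ** t for t in range(quarters)]

# A hypothetical $600 billion program:
path = term_premium_effect(600, quarters=20)
print(round(path[0], 1))  # impact effect: 24.0 basis points
print(round(path[4], 1))  # effect after one year: 19.5 basis points
```

Note that the decay rate, not the model, does all the work of determining how persistent the effect is.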

What makes me skeptical of Reifschneider's results? One approach to uncovering the effects of QE is to look for natural experiments. This is simple, which is ideal for a simpleton such as yours truly. For example, in April 2013, the Bank of Japan embarked on a QE project, which continues to this day. One objective of the project was to get inflation up to 2%. So, that policy has now had almost 4 years to work. What's happened? First, the magnitude of the BOJ's asset purchases is reflected in the monetary base:
The monetary base has about quadrupled over a period of less than four years. If QE works to increase inflation, surely we would be seeing a lot of it by now, right? Here's the CPI for Japan:
The price level went up alright, but part of that was due to an increase of three percentage points in the consumption tax in April 2014, which feeds directly into the CPI. Even including that, average inflation has been about 0.7% since April 2013, and about zero for the last two years. So, is QE effective in increasing inflation? Japanese experience says no.
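The arithmetic behind an average inflation figure like that is just a geometric mean of the price-level change. Here is a sketch; the index values below are illustrative stand-ins, not the official Japanese CPI series.

```python
# Annualized average inflation implied by two price-index readings.
# The index levels used below are illustrative, not official Japanese CPI data.

def annualized_inflation(p_start, p_end, years):
    """Average annual inflation rate implied by start and end price levels."""
    return (p_end / p_start) ** (1.0 / years) - 1.0

# Roughly four years of data: a cumulative rise of about 2.8% works out to
# roughly 0.7% per year, and part of even that reflects the April 2014
# consumption-tax increase rather than monetary policy.
avg = annualized_inflation(100.0, 102.8, 3.9)
print(round(100 * avg, 2))  # average annual inflation, in percent
```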

Another natural experiment is the US and Canada. Since the financial crisis, the difference in interest rate policy between the Bank of Canada and the Fed has been minimal. For example, the Bank of Canada's current target for the overnight policy rate is 0.50%, while the ON-RRP rate (the comparable secured overnight rate in the US) is also 0.50%. From one of my papers, here's a chart showing what was going on with monetary policy in Canada, post-financial crisis:
We will return to this chart later, but for now just focus on the blue dotted line, which is overnight reserves at the Bank of Canada. For most of this period, except Spring 2009 to Spring 2010, the Bank of Canada operated under a channel system, under which it targets overnight reserves at zero. There is some slippage, with a quantity of overnight reserves typically less than $500 million. That's a very small amount relative to the quantity of interest-bearing Fed liabilities (currently close to $3 trillion). So, the Bank of Canada has not been indulging in QE, but the Fed has been doing it in a big way. Surely, if we believe Ben Bernanke, that would have resulted in observable differences in the behavior of real economic activity and inflation in the two countries. Here's real GDP, and the consumer price index for the US and Canada:
So, since the beginning of 2008, average real GDP growth and average inflation have been about the same in Canada and the US. As an econometrician once told me, if I can't see it, it's probably not there. Sure, since Canada is small and highly integrated with the US economically, Fed policy will matter for Canadian economic performance. But, if QE were so important, the fact that the US did it and Canada did not should make some observable difference in relative performance.

What else do we know about QE? I thought about it a bit, and wrote a couple of papers - this one and this one. Basically, the idea is to think about QE for what it is - financial intermediation by the central bank. If QE is to work for the better, the reason has to be that the central bank can do a better job of turning long-maturity assets into short-maturity assets than either the private sector or the fiscal authority. So, for example, QE could work because the fiscal authority is not doing its job - there is too much long-maturity government debt outstanding. So, if the central bank swaps reserves for long-maturity government debt, that could bring about an improvement, by improving the stock of collateral that supports intermediated credit. Basically, short-maturity assets are better collateral. But maybe reserves are worse assets than short-maturity government debt. Reserves can be held only by a limited set of financial institutions, while Treasury bills are widely traded and very useful, for example in the market for repurchase agreements. So, it's not clear that there is an improvement if, for example, the Fed purchases 10-year Treasuries, thus converting highly useful 10-year Treasuries into not-so-useful reserves. Some of those concerns could be mitigated by an expansion in the Fed's reverse repurchase agreement (ON-RRP) program, but so far that has been operated on a small scale.
The chart shows outstanding ON-RRPs, which are a relatively small fraction of interest-bearing Fed liabilities (approaching $3 trillion).

So, what of Bernanke's first point, that the Fed should postpone the termination of reinvestment, because of uncertainty? My conclusion is that it is hard to make a case that QE does anything at all, and one could make a case that it gums up the financial plumbing. But here's an interesting detail. If we break down the Fed's holdings of Treasury securities by maturity, we get this:
The chart shows the percentage of Treasury securities held by the Fed that will mature within one year, in 1-5 years, in 5-10 years, and in more than 10 years. Before the financial crisis, these percentages did not vary much, with about 80% of the portfolio maturing in less than 5 years - average maturity was relatively short. In early 2013, average maturity reached its peak, with about 75% of the total portfolio maturing in more than five years, and the remainder maturing in 1-5 years. But, since early 2013, the fraction of the portfolio maturing in more than five years has declined to about 40%, with about 10% maturing in less than one year.
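The breakdown in that chart is just a bucketing of holdings by remaining maturity. Here is a sketch of the computation, applied to a made-up portfolio (the holdings below are hypothetical, not the Fed's actual positions).

```python
from collections import OrderedDict

# Bucket a (hypothetical) portfolio of Treasury holdings by remaining maturity,
# producing the kind of breakdown in the chart: <1 year, 1-5, 5-10, >10 years.

def maturity_shares(holdings):
    """holdings: list of (remaining_maturity_in_years, face_value) pairs.
    Returns each bucket's share of the total portfolio, in percent."""
    buckets = OrderedDict([("<1y", 0.0), ("1-5y", 0.0), ("5-10y", 0.0), (">10y", 0.0)])
    for maturity, value in holdings:
        if maturity < 1:
            buckets["<1y"] += value
        elif maturity < 5:
            buckets["1-5y"] += value
        elif maturity < 10:
            buckets["5-10y"] += value
        else:
            buckets[">10y"] += value
    total = sum(buckets.values())
    return {k: 100.0 * v / total for k, v in buckets.items()}

# Illustrative holdings (maturity in years, face value in $ billions) -- made up:
portfolio = [(0.5, 250), (3.0, 1200), (7.0, 700), (20.0, 300)]
print({k: round(v, 1) for k, v in maturity_shares(portfolio).items()})
```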

So, if one thought that the degree of monetary accommodation was related not only to the size of the Fed's portfolio but to average maturity, then there is considerably less accommodation than was the case three years ago (at least in terms of the Fed's Treasury holdings). Further, if the reinvestment program were halted now, the assets would run off more quickly than they would have had reinvestment ceased three years ago.

What of Bernanke's two other points? The first is that the stock of currency is growing, which increases the size of the balance sheet at which interest-bearing Fed liabilities disappear. The next chart shows the stock of currency as a percentage of nominal GDP:
That's remarkable. In 1990, US currency outstanding in the world was about 4.4% of US GDP, and today it is almost 8%. To get some idea of the implications for the Fed's balance sheet, we'll calculate the interest-bearing portion of Fed liabilities (that's total liabilities minus currency) and express it as a percentage of GDP:
So, you can see that the size of the balance sheet has actually been declining, measured as interest-bearing Fed liabilities relative to GDP. Again, if we took what is in the chart as a measure of accommodation, there is less of it than was the case late in 2014. But how long would it take for this ratio to decline to where it was (0.5%) prior to the financial crisis, if the reinvestment policy stays in place indefinitely? If my arithmetic is correct, about 55 years. Within 55 years, all kinds of things could happen, of course. Ken Rogoff could get his way and 80% of the currency stock could disappear, government currency could be replaced by private digital currencies, etc. So projecting that far into the future is pure speculation.
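The projection behind a figure like that is straightforward: hold the balance sheet fixed in nominal terms (reinvestment), let currency and nominal GDP grow, and count the years until interest-bearing liabilities hit 0.5% of GDP. Here is a back-of-the-envelope sketch; all the parameter values are my hypothetical assumptions, not the inputs behind the 55-year figure, so the answer it spits out will differ.

```python
# Back-of-the-envelope projection: years until interest-bearing Fed liabilities
# fall to 0.5% of GDP, if reinvestment keeps the balance sheet fixed in nominal
# terms while currency and nominal GDP grow. All parameter values below are
# hypothetical assumptions, not the inputs behind the 55-year figure in the text.

def years_to_target(balance_sheet, currency, gdp,
                    currency_growth, gdp_growth, target_ratio=0.005):
    years = 0
    while (balance_sheet - currency) / gdp > target_ratio:
        currency *= 1.0 + currency_growth
        gdp *= 1.0 + gdp_growth
        years += 1
        if currency >= balance_sheet:  # interest-bearing liabilities exhausted
            break
    return years

# $ trillions; annual growth rates (assumed for illustration)
print(years_to_target(balance_sheet=4.5, currency=1.5, gdp=19.0,
                      currency_growth=0.04, gdp_growth=0.04))
```

Faster currency growth shrinks interest-bearing liabilities sooner, which is the point of Bernanke's observation about currency demand.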

The final issue Bernanke raises is related to possible benefits for monetary policy implementation from a large Fed balance sheet. A case can be made that, in the U.S. institutional context, it is easier to implement monetary policy under a "floor" system than a "channel" or "corridor" system. There are some complications in the US case but, roughly, under a floor system the interest rate on reserves (IOER) should determine the overnight interest rate. To make the floor system work requires that there be adequate reserves in the financial system, so that a typical financial institution is indifferent between lending to the Fed (at IOER) and lending overnight to another financial institution. But once it's working, a floor system is easy for the Fed, as overnight interest rates are effectively set administratively, rather than through the hit-and-miss approach the Fed followed pre-financial crisis. But the key question is: How large does the stock of reserves need to be to make the floor system work? Here's what Bernanke says:
To ensure that the floor rate set by the central bank is always effective, the banking system must be saturated with reserves (that is, in the absence of the interest rate set and paid by the central bank, the market-determined return to reserves would be zero). In December 2008, when the federal funds rate first fell to zero and the Fed began to use the interest rate on bank reserves as a tool of monetary policy, bank reserves were about $800 billion. Taking into account growth in nominal GDP and bank liabilities, the critical level of bank reserves needed to implement monetary policy through a floor system seems likely to be well over $1 trillion today, and growing.
If you look at the very first chart, you can see what he's thinking. In October 2008 the Fed began paying interest on reserves in the midst of turmoil in financial markets. By the end of the year, overnight rates were essentially zero, and the size of the balance sheet had increased by a very large amount with reserves increasing about $800 billion. So, Bernanke is assuming that, at the end of 2008, it took $800 billion to make the overnight interest rate go to the floor - the IOER. If the balance sheet increase had occurred gradually in a relatively calm financial market, we might take that seriously, but I'm not buying it.
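The floor-system logic can be captured in a stylized reserve-demand curve: the overnight rate falls as reserves increase, until the system is saturated and the rate is pinned at IOER. The demand curve and saturation point below are illustrative assumptions, chosen only to show the mechanism.

```python
# Stylized reserve demand: the overnight rate falls as reserves increase, but
# cannot fall below the rate the central bank pays on reserves (the floor).
# The slope and saturation point below are illustrative assumptions.

def overnight_rate(reserves_billions, ioer=0.5, saturation=100.0, slope=0.02):
    """Market overnight rate; pinned at IOER once reserves exceed saturation."""
    return ioer + slope * max(saturation - reserves_billions, 0.0)

for r in (25.0, 50.0, 100.0, 800.0):
    print(r, overnight_rate(r))
```

The empirical question Bernanke and Williamson disagree about is where the saturation point sits; the Canadian episode suggests it may be far smaller than $1 trillion.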

To see this, go back to the fourth chart, which shows what the Bank of Canada was up to. The chart shows three interest rates: (i) the rate at which the central bank lends to private financial institutions (green); (ii) the overnight interest rate the Bank of Canada targets (blue); (iii) the interest rate on deposits at the Bank of Canada - the interest rate on reserves (orange). Normally, the Bank operates a channel system, under which the overnight rate falls between the other two rates. But, for about one year, from Spring 2009 to Spring 2010, the Bank operated a floor system. As you can see, the policy rate goes to the floor for this period of time. How large a quantity of reserves did it take to make the floor system work? The Bank targeted overnight reserves at $3 billion (Canadian) over this period. To get an idea of the order of magnitude, a rule of thumb is that the US economy is roughly ten times the size of the Canadian economy, so this quantity of reserves is roughly comparable to $30 billion in the US. We need to account for the fact that there are reserve requirements in the US, and none in Canada, and that the US institutional setup is very different (many more banks, for example). But I think it's hard to look at the Canadian experience and conclude that it takes as much as $1 trillion in interest-bearing Fed liabilities to make a floor system work in the US, as Bernanke is suggesting. I would be surprised if we needed as much as $100 billion.

So, in conclusion, I think Bernanke's arguments are weak. It's hard to make a case that QE is a big deal, or that stopping the Fed's reinvestment policy is risky or harmful - indeed it might improve economic welfare. Further, if one thinks that QE is accommodative, and that we can measure accommodation by the average maturity of the Fed's asset portfolio, or by the ratio of interest-bearing Fed liabilities to GDP, then withdrawal of accommodation has been underway for some time.

Addendum: This pertains to JP's comment below. Here's real GDP for Japan:
I don't see any break in the recovery from the recession associated with Abenomics. Do you? Just for good measure, we can look at the GDP price deflator for Japan:
So, you can see that it's the price behavior that's giving the nominal GDP increase. But, most of the price deflator increase is in 2014 - the price level increased by about 4% in a year's time. But, again, some of this is due to the direct effect of the consumption tax increase of three percentage points in April 2014. Note that inflation, measured by the increase in the GDP price deflator, has been about zero for the last two years.

Friday, February 3, 2017

Going North

I've accepted a new job, beginning in the 2017-18 academic year, as the Stephen A. Jarislowsky Chair in Central Banking in the Economics Department at Western University (a.k.a. the University of Western Ontario), London, Ontario. This is an exciting opportunity for me, but it comes with regrets, as I have to leave a lot behind here in St. Louis. The St. Louis Fed has treated me very well. Some of the best economists anywhere work at the St. Louis Fed, and the institution is run by first-rate people. I'll miss the advice, the economics, and the always-interesting policy work. But I'm very much looking forward to working with my new (and old) colleagues at Western. Special thanks go to Stephen Jarislowsky and the Jarislowsky Foundation for their generous support.

Sunday, January 15, 2017

What's a Macro Model Good For?

What's a macro model? It's a question, and an answer. If it's a good model, the question is well-defined, and the model gives a good answer. Olivier Blanchard has been pondering how we ask questions of models, and the answers we're getting, and he thinks it's useful to divide our models into two classes, each for answering different questions.

First, there are "theory models,"
...aimed at clarifying theoretical issues within a general equilibrium setting. Models in this class should build on a core analytical frame and have a tight theoretical structure. They should be used to think, for example, about the effects of higher required capital ratios for banks, or the effects of public debt management, or the effects of particular forms of unconventional monetary policy. The core frame should be one that is widely accepted as a starting point and that can accommodate additional distortions. In short, it should facilitate the debate among macro theorists.
At the extreme, "theory models" are purist exercises that, for example, Neil Wallace would approve of. Neil has spent his career working with tight, simple, economic models. These are models that are amenable to pencil-and-paper methods. Results are easily replicable, and the models are many steps removed from actual data - though to be at all interesting, they are designed to capture real economic phenomena. Neil has worked with fundamental models of monetary exchange - Samuelson's overlapping generations model, and the Kiyotaki-Wright (JPE 1989) model. He also approves of the Diamond-Dybvig (1983) model of banking. These models give us some insight into why and how we use money, what banks do, and (perhaps) why we have financial crises, but no one is going to estimate the parameters in such models, use them in calibration exercises, or use them at an FOMC meeting to argue why a 25 basis point increase in the fed funds rate target is better than a 50 basis point increase.

But Neil's tastes - as is well-known - are extreme. In general, what I think Blanchard means by "theory model" is something we can write up and publish in a good, mainstream, economics journal. In modern macro, that's a very broad class of work, including pure theory (no quantitative work), models with estimation (either classical or Bayesian), calibrated models, or some mix. These models are fit to increasingly sophisticated data.

Where I would depart from Blanchard is in asking that theory models have a "core frame...that is widely accepted..." It's of course useful that economists speak a common language that is easily translatable for lay people, but pathbreaking research is by definition not widely accepted. We want to make plenty of allowances for rule-breaking. That said, there are many people who break rules and write crap.

The second class of macro models, according to Blanchard, is the set of "policy models,"
...aimed at analyzing actual macroeconomic policy issues. Models in this class should fit the main characteristics of the data, including dynamics, and allow for policy analysis and counterfactuals. They should be used to think, for example, about the quantitative effects of a slowdown in China on the United States, or the effects of a US fiscal expansion on emerging markets.
This is the class of models that we would use to evaluate a particular policy option, write a memo, and present it at the FOMC meeting. Such models are not what PhD students in economics work on, and that was already the case 36 years ago, when Chris Sims wrote "Macroeconomics and Reality."
...though large-scale statistical macroeconomic models exist and are by some criteria successful, a deep vein of skepticism about the value of these models runs through that part of the economics profession not actively engaged in constructing or using them. It is still rare for empirical research in macroeconomics to be planned and executed within the framework of one of the large models.
The "large models" Sims had in mind are the macroeconometric models constructed by Lawrence Klein and others, beginning primarily in the 1960s. The prime example of such models is the FRB/MIT/Penn model, which reflected in part the work of Klein, Ando, and Modigliani, among others, including (I'm sure) many PhD students. There was indeed a time when a satisfactory PhD dissertation in economics could be an estimation of the consumption sector of the FRB/MIT/Penn model.

Old-fashioned large-scale macroeconometric models borrowed their basic structure from static IS/LM models. There were equations for the consumption, investment, government, and foreign sectors. There was money demand and money supply. There were prices and wages. Typically, such models included hundreds of equations, so the job of estimating and running the model was subdivided into manageable tasks, by sector. There was a consumption person, an investment person, a wage person, etc., with further subdivision depending on the degree of disaggregation. My job in 1979-80 at the Bank of Canada was to look after residential investment in the RDXF model of the Canadian economy. No one seemed worried that I didn't spend much time talking to the price people or the mortgage people (who worked on another floor). I looked after 6 equations, and entered add factors when we had to make a forecast.
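For readers who have never seen one, an "add factor" is just a judgmental constant the forecaster tacks onto an estimated equation's prediction. Here is a sketch of the practice; the equation and coefficients are made up for illustration and are not the actual RDXF residential investment specification.

```python
# An "add factor" in an old-style macroeconometric model: the forecaster shifts
# an estimated equation's prediction by a judgmental constant. The equation and
# coefficients below are purely illustrative -- not the actual RDXF residential
# investment specification.

def residential_investment_forecast(income, mortgage_rate, lagged_investment,
                                    add_factor=0.0):
    """Forecast from a (made-up) estimated linear equation, plus the add factor."""
    fitted = 0.5 + 0.03 * income - 1.2 * mortgage_rate + 0.8 * lagged_investment
    return fitted + add_factor

base = residential_investment_forecast(income=1000.0, mortgage_rate=5.0,
                                       lagged_investment=30.0)
# Judgmental adjustment: the forecaster believes housing will be stronger than
# the equation predicts, so shifts the forecast up by 2 units.
adjusted = residential_investment_forecast(income=1000.0, mortgage_rate=5.0,
                                           lagged_investment=30.0, add_factor=2.0)
print(base, adjusted)
```

The Lucas critique, discussed below, is precisely about why equations like this one cannot be trusted to stay put when policy changes.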

What happened to such models? Well, they are alive and well, and one of them lives at the Board of Governors in Washington D.C. - the FRB/US model. FRB/US is used as an explicit input to policy, as we can see in this speech by Janet Yellen at the last Jackson Hole conference:
A recent paper takes a different approach to assessing the FOMC's ability to respond to future recessions by using simulations of the FRB/US model. This analysis begins by asking how the economy would respond to a set of highly adverse shocks if policymakers followed a fairly aggressive policy rule, hypothetically assuming that they can cut the federal funds rate without limit. It then imposes the zero lower bound and asks whether some combination of forward guidance and asset purchases would be sufficient to generate economic conditions at least as good as those that occur under the hypothetical unconstrained policy. In general, the study concludes that, even if the average level of the federal funds rate in the future is only 3 percent, these new tools should be sufficient unless the recession were to be unusually severe and persistent.
So, that's an exercise that looks like what Blanchard has in mind, though he discusses "unconventional monetary policy" as an application of the "theory models."

It's no secret what's in the FRB/US model. The documentation is posted on the Board's web site, so you can look at the equations, and even run it, if you want to. There's some lip service to "optimization" and "expectations" in the documentation for the model, but the basic equations would be recognizable to Lawrence Klein. It's basically a kind of expanded IS/LM/Phillips curve model. And Blanchard seems to have a problem with it. He mentions FRB/US explicitly:
For example, in the main model used by the Federal Reserve, the FRB/US model, the dynamic equations are constrained to be solutions to optimization problems under high order adjustment cost structures. This strikes me as wrongheaded. Actual dynamics probably reflect many factors other than costs of adjustment. And the constraints that are imposed (for example, on the way the past and the expected future enter the equations) have little justification, theoretical or empirical.
Opinions seem to differ on how damning this is. The watershed in macroeconomists' views on large-scale macroeconometric models was of course Lucas's critique paper, which was aimed directly at the failures of such models. In the "Macroeconomics and Reality" paper, Sims sees Lucas's point, but he still thinks large-scale models could be useful, in spite of misidentification.

But, it's not clear that large-scale macroeconometric models are taken that seriously these days, even in policy circles, Janet Yellen aside. While simulation results are presented in policy discussions, it's not clear whether those results are changing any minds. Blanchard recognizes that we need different models to answer different questions, and one danger of the one-size-fits-all large-scale model is its use in applications for which it was not designed. Those who constructed FRB/US certainly did not envision the elements of modern unconventional monetary policy.

A modern macroeconometric approach is to scale down the models, and incorporate more theory - structure. The best-known such models, often called "DSGE" models, are the Smets-Wouters model and the Christiano/Eichenbaum/Evans model. Blanchard isn't so happy with these constructs either.
DSGE modelers, confronted with complex dynamics and the desire to fit the data, have extended the original structure to add, for example, external habit persistence (not just regular, old habit persistence), costs of changing investment (not just costs of changing capital), and indexing of prices (which we do not observe in reality), etc. These changes are entirely ad hoc, do not correspond to any micro evidence, and have made the theoretical structure of the models heavier and more opaque.
Indeed, in attempts to fit DSGE to disaggregated data, the models tend to suffer increasingly from the same problems as the original large-scale macroeconometric models. Chari, Kehoe, and McGrattan, for example, make a convincing case that DSGE models in current use are misidentified and not structural, rendering them useless for policy analysis. This has nothing to do with one's views on intervention vs. non-intervention - it's a question of how best to do policy intervention, once we've decided we're going to do it.

Are there other types of models on the horizon that might represent an improvement? One approach is the HANK model, constructed by Kaplan, Moll, and Violante. This is basically a heterogeneous-agent incomplete-markets model in the style of Aiyagari 1994, with sticky prices and monetary policy as in a Woodford model. That's interesting, but it's not doing much to help us understand how monetary policy works. It's assumed the central bank can dictate interest rates (as in a Woodford model), with no attention to the structure of central bank assets and liabilities, the intermediation done by the central bank, and the nature of central bank asset swaps. Like everyone, I'm a fan of my own work, which is more in the Blanchard "theory model" vein. For recent work on heterogeneous agent models of banking, secured credit, and monetary policy, see my web site.

Blanchard seems pessimistic about the future of policy modeling. In particular, he thinks the theory modelers and the policy modelers should go their own ways. I'd say that's bad advice. If quantitative models have any hope of being taken seriously by policymakers, this would have to come from integrating better theory in such models. Maybe the models should be small. Maybe they should be more specialized. But I don't think setting the policy modelers loose without guidance would be a good idea.