Economics

[See my previous post for context.]

I started out to research and write a post on why I disagreed with Scott Sumner about NGDP targeting, and discovered an important point of agreement: targeting nominal wage forecasts would probably be better than targeting either NGDP or CPI forecasts.

One drawback to targeting something other than CPI forecasts is that we’ve got good market forecasts of the CPI. It’s certainly possible to create markets to forecast other quantities that the Fed might target, but we don’t have a good way of predicting how much time and money those will require.

Problems with NGDP targets

The main long-term drawback to targeting NGDP (or other measures that incorporate the quantity of economic activity) rather than an inflation-like measure is that large changes in the trend rate of growth in economic activity are quite plausible.

We could have a large increase in our growth rate due to a technology change such as uploaded minds (ems). NGDP targeting would create unpleasant deflation in that scenario until the Fed figured out how to adjust to new NGDP targets.

I can also imagine a technology-induced slowdown in economic growth, for example: a switch to open-source hardware for things like food and clothing (3-d printers using open-source designs) could replace lots of transactions with free equivalents. That would mean a decline in NGDP without a decline in living standards. NGDP targeting would respond by creating high inflation. (This scenario seems less likely and less dangerous than the prior one.)

Basil Halperin has some historical examples where NGDP targeting would have produced similar problems.

Problems with inflation forecasts?

Critics of inflation targeting point to problems associated with oil shocks or with strange ways of calculating housing costs. Those cause many inflation measures to temporarily diverge from what I want the Fed to focus on, which is the problem of sticky wages interacting with weak nominal spending to create unnecessary unemployment.

Those problems with measuring inflation are serious if the Fed uses inflation that has already happened or uses forecasts of inflation that extend only a few months into the future.

Instead, I recommend using multi-year CPI forecasts based on several different time periods (e.g. in the 2 to 10 year range), and possibly forecasts for time periods that start a year or so in the future (this series shows how to infer such forecasts from existing markets). In the rare case where forecasts for different time periods say conflicting things about whether the Fed is too tight or loose, I’d encourage the Fed to use its judgment about which to follow.
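
To illustrate the arithmetic behind inferring a forward forecast from existing spot breakeven rates, here is a minimal sketch; the function name and the example numbers are hypothetical, not figures from the linked series.

# Infer a forward inflation forecast from two spot breakeven rates.
# Example numbers below are hypothetical, not market data.

def forward_breakeven(spot_short, spot_long, years_short, years_long):
    # Annualized forecast for the period between years_short and years_long,
    # implied by compounding the two spot breakeven rates.
    growth_long = (1.0 + spot_long) ** years_long
    growth_short = (1.0 + spot_short) ** years_short
    forward_years = years_long - years_short
    return (growth_long / growth_short) ** (1.0 / forward_years) - 1.0

# e.g. a 1.2% 5-year breakeven and a 1.6% 10-year breakeven imply roughly
# a 2.0% annualized forecast for the period from year 5 through year 10:
print('%.4f' % forward_breakeven(0.012, 0.016, 5, 10))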

The multi-year forecasts have historically shown only small reactions to phenomena such as the large spike in oil prices in mid 2008. I expect that pattern to continue: commodity price spikes happen when markets get evidence of their causes/symptoms (due to market efficiency), not at predictable future times. The multi-year forecasts typically tell us mainly whether the Fed will persistently miss its target.

Won’t using those long-term forecasts enable the Fed to make mistakes that it corrects (or over-corrects) for shorter time periods? Technically yes, but that doesn’t mean the Fed has a practical way to do that. It’s much easier for the Fed to hit its target if demand for money is predictable. Demand for money is more predictable if the value of money is more predictable. That’s one reason why long-term stability of inflation (or of wages or NGDP) implies short-term stability.

It would be a bit safer to target nominal wage rate forecasts rather than CPI forecasts if we had equally good markets forecasting both. But I expect it to be easier to convince the public to trust markets that are heavily traded for other reasons, than it is to get them to trust a brand new market of uncertain liquidity.

NGDP targeting has been gaining popularity recently. But targeting market-based inflation forecasts will be about as good under most conditions [1], and we have good markets that forecast the U.S. inflation rate [2].

Those forecasts have a track record that starts in 2003. The track record seems quite consistent with my impressions about when the Fed should have adopted a more inflationary policy (to promote growth and to get inflation expectations up to 2% [3]) and when it should have adopted a less inflationary policy (to avoid fueling the housing bubble). It’s probably a bit controversial to say that the Fed should have had a less inflationary policy from February through July or August of 2008. But my impression (from reading the stock market) is that NGDP futures would have said roughly the same thing. The inflation forecasts sent a clear signal starting in very early September 2008 that Fed policy was too tight, and that’s about when other forms of hindsight switch from muddled to saying clearly that Fed policy was dangerously tight.

Why do I mention this now? The inflation forecast dropped below 1 percent two weeks ago for the first time since May 2008. So the Fed’s stated policies conflict with what a more reputable source of information says the Fed will accomplish. This looks like what we’d see if the Fed was in the process of causing a mild recession to prevent an imaginary increase in inflation.

What does the Fed think it’s doing?

  • It might be relying on interest rates to estimate what its policies will produce. Interest rates this low after 6.5 years of economic expansion resemble historical examples of loose monetary policy more than they resemble the stereotype of tight monetary policy [4].
  • The Fed could be following a version of the Taylor Rule. Given standard guesses about the output gap and the equilibrium real interest rate [5], the Taylor Rule says interest rates ought to be rising now (a sketch of the rule's arithmetic appears after this list). The Taylor Rule has usually been at least as good as actual Fed policy at targeting inflation indirectly through targeting interest rates. But that doesn't explain why the Fed targets interest rates when doing so conflicts with targeting market forecasts of inflation.
  • The Fed could be influenced by status quo bias: interest rates and unemployment are familiar types of evidence to use, whereas unbiased inflation forecasts are slightly novel.
  • Could the Fed be reacting to money supply growth? Not in any obvious way: the monetary base stopped growing about two years ago, M1 and MZM growth are slowing slightly, and M2 accelerated recently (but only after much of the Fed’s tightening).
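
For reference, here is a minimal sketch of the standard Taylor Rule formula mentioned in the second bullet. The 0.5 coefficients are Taylor's original weights; the input values are illustrative guesses, not the Fed's actual estimates.

# Standard Taylor Rule: prescribed nominal rate = r* + inflation
#   + 0.5*(inflation - target) + 0.5*(output gap).
# All quantities are in percent. Inputs below are illustrative only.

def taylor_rule_rate(inflation, target_inflation, output_gap, equilibrium_real_rate):
    return (equilibrium_real_rate + inflation
            + 0.5 * (inflation - target_inflation)
            + 0.5 * output_gap)

# e.g. 1.5% inflation, a 2% target, a small negative output gap, and a 2% r*
# prescribe a rate of about 3%, well above zero:
print(taylor_rule_rate(1.5, 2.0, -0.5, 2.0))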

Scott Sumner’s rants against reasoning from interest rates explain why the Fed ought to be embarrassed to use interest rates to figure out whether Fed policy is loose or tight.

Yet some institutional incentives encourage the Fed to target interest rates rather than predicted inflation. It feels like an appropriate use of high-status labor to set interest rates once every few weeks based on new discussion of expert wisdom. Switching to more or less mechanical responses to routine bond price changes would undercut much of the reason for believing that the Fed’s leaders are doing high-status work.

The news media storytellers would have trouble finding entertaining ways of reporting adjustments that consisted of small hourly responses to bond market changes, whereas decisions made a few times per year are uncommon enough to be genuinely newsworthy. And meetings where hawks struggle against doves fit our instinctive stereotype for important news better than following a rule does. So I see little hope that storytellers will want to abandon their focus on interest rates. Do the Fed governors follow the storytellers closely enough that the storytellers’ attention strongly affects the Fed’s attention? Would we be better off if we could ban the Fed from seeing any source of daily stories?

Do any other interest groups prefer stable interest rates over stable inflation rates? I expect a wide range of preferences among Wall Street firms, but I don’t know which preferences dominate there.

Consumers presumably prefer that their banks, credit cards, etc. have predictable interest rates. But I’m skeptical that the Fed feels much pressure to satisfy those preferences.

We need to fight those pressures by laughing at people who claim that the Fed is easing when markets predict below-target inflation (as in the fall of 2008) or that the Fed is tightening when markets predict above-target inflation (e.g. much of 2004).

P.S. – The risk-reward ratio for the stock market today is much worse than normal. I’m not as bearish as I was in October 2008, but I’ve positioned myself much more cautiously than normal.

Notes:

[1] – They appear to produce nearly identical advice under most conditions that the U.S. has experienced recently.

I expect inflation targeting to be modestly safer than NGDP targeting. I may get around to explaining my reasons for that in a separate post.

[2] – The link above gives daily forecasts of the 5 year CPI inflation rate. See here for some longer time periods.

The markets used to calculate these forecasts have enough liquidity that it would be hard for critics to claim that they could be manipulated by entities less powerful than the Fed. I expect some critics to claim that anyway.

[3] – I’m accepting the standard assumption that 2% inflation is desirable, in order to keep this post simple. Figuring out the optimal inflation rate is too hard for me to tackle any time soon. A predictable inflation rate is clearly desirable, which creates some benefits to following a standard that many experts agree on.

[4] – provided that you don’t pay much attention to Japan since 1990.

[5] – guesses which are error-prone and, if a more direct way of targeting inflation is feasible, unnecessary. The conflict between the markets’ inflation forecast and the Taylor Rule’s implication that near-zero interest rates would cause inflation to rise suggests that we should doubt those guesses. I’m pretty sure that equilibrium interest rates are lower than the standard assumptions. I don’t know what to believe about the output gap.

I was quite surprised by a paper (The Surprising Alpha From Malkiel’s Monkey and Upside-Down Strategies [PDF] by Robert D. Arnott, Jason Hsu, Vitali Kalesnik, and Phil Tindall) about “inverted” or upside-down[*] versions of some good-looking strategies for better-than-market-cap weighting of index funds.

They show that the inverses of low-volatility and fundamental-weighting strategies do about as well as, or outperform, the original strategies. Low-volatility index funds still have better Sharpe ratios (risk-adjusted returns) than their inverses.

Their explanation is that most deviations from weighting by market capitalization will benefit from the size effect (small caps outperform large caps), and will also have some tendency to benefit from value effects. Weighting by market capitalization causes an index to have lots of Exxon and Apple stock. Fundamental weighting replaces some of that Apple stock with small companies. Weighting by anything that has little connection to company size (such as volatility) reduces the Exxon and Apple holdings by more than an order of magnitude. Both of those shifts exploit the benefits of investing in small-cap stocks.

Fundamental weighting outperforms most strategies. But inverting those weights adds slightly more than 1% per year to those already good returns. The only way that makes sense to me is if an inverse of market-cap weighting would also outperform fundamental weighting, by investing mostly in the smallest stocks.

They also show you can beat market-capitalization weighted indices by choosing stocks at random (i.e. simulating monkeys throwing darts at the list of companies). This highlights the perversity of weighting by market-caps, as the monkeys can’t beat the simple alternative of investing equal dollar amounts in each company.

This increases my respect for the size effect. I’ve reduced my respect for the benefits of low volatility investments, although the reduced risk they produce is still worth something. That hasn’t much changed my advice for investing in existing ETFs, but it does alter what I hope for in ETFs that will become available in the future.

[*] – They examine two different inverses:

  1. Taking the reciprocal of each stock’s original weight
  2. Taking the max(weight) and subtracting each stock’s original weight

In each case the resulting weights are then normalized to add to 1.
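
Here is a minimal sketch of those two inversions; the five-stock weights are made up to show the mechanics, not taken from the paper.

# Two ways of inverting a set of index weights, as described above.
# The example weights are hypothetical.

def normalize(weights):
    total = sum(weights.values())
    return {name: w / total for name, w in weights.items()}

def reciprocal_inverse(weights):
    # Inverse 1: weight each stock by the reciprocal of its original weight.
    return normalize({name: 1.0 / w for name, w in weights.items()})

def max_minus_inverse(weights):
    # Inverse 2: weight each stock by (max weight - its original weight),
    # which drives the largest original holding to zero.
    top = max(weights.values())
    return normalize({name: top - w for name, w in weights.items()})

original = {'A': 0.40, 'B': 0.30, 'C': 0.15, 'D': 0.10, 'E': 0.05}
print(reciprocal_inverse(original))
print(max_minus_inverse(original))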

Book review: Masters of the Word: How Media Shaped History, by William J. Bernstein.

This is a history of the world which sometimes focuses on how technology changed communication, and how those changes affected society.

Instead of carefully documenting a few good ideas, he wanders over a wide variety of topics (including too many descriptions of battles and of individual people).

His claims seem mostly correct, but he often failed to convince me that he has good reason for believing them. E.g. when trying to explain why the Soviet economy was inefficient (haven’t enough books explained that already?), he says the “absence of a meaningful price signal proved especially damaging in the labor market”, but supports that by mentioning peculiarities which aren’t clear signs of damage, and then by describing some blatant waste that wasn’t clearly connected to labor market problems (and, without numbers, he doesn’t tell us the magnitude of the problems).

I would have preferred that he devote more effort to evaluating the importance of changes in communication to the downfall of the Soviet Union. He documents increased ability of Soviet citizens to get news from sources that their government didn’t control at roughly the time Soviet power weakened. But it’s not obvious how that drove political change. It seems to me that there was an important decrease in the ruthlessness of Soviet rulers that isn’t well explained by communication changes.

I liked his description of how affordable printing presses depended on a number of technological advances, suggesting that printing could not easily have arisen at other times or places.

The claim I found most interesting was that the switch from reading aloud to reading silently, and the related ability to write alone (as opposed to needing a speaker and a scribe), made it easier to spread seditious and sexual writings due to increased privacy.

Bernstein is optimistic that improved communication technology will have good political effects in the future. I partly agree, but I see more risks than he does (e.g. his liking for the democratic features of the Arab Spring isn’t balanced by much concern over the risks of revolutionary violence).

Book review: Fragile by Design: The Political Origins of Banking Crises and Scarce Credit, by Charles W. Calomiris, and Stephen H. Haber.

This book starts out with some fairly dull theory, then switches to specific histories of banking in several countries, with moderately interesting claims about how differences in which interest groups acquired power influenced the stability of banks.

For much of U.S. history, banks were mostly constrained to a single location, due to farmers who feared that banks with many branches would shift their lending elsewhere when local crop failures made local farms risky to lend to. Yet comparison with Canada, where seemingly small political differences led to banks with many branches, makes it clear that U.S. banks were more fragile because of those restrictions, and that reduced competition in the U.S. left consumers with less desirable interest rates.

By the 1980s, improved communications eroded farmers’ ability to tie banks to one locale, so political opposition to multi-branch banks vanished, resulting in a big merger spree. The biggest problem with this merger spree was that the regulators who approved the mergers asked for more loans to risky low-income borrowers. As a result, banks (plus Fannie Mae and Freddie Mac) felt compelled to lower their standards for all borrowers (the book doesn’t explain what problems they would have faced if they had used different standards for loans the regulators pressured them to make).

These stories provide a clear and plausible explanation of why the U.S. has a pattern of banking crises that Canada and a few other well-run countries have almost entirely avoided over the past two centuries. But they suggest that U.S. banking crises should have been more of an outlier among mature democracies than they actually were.

The authors are overly dismissive of problems that don’t fit their narrative. Commenting on the failure of Citibank, Lehman, AIG, etc to sell more equity in early 2008, they say “Why go to the markets to raise new capital when you are confident that the government is going to bail you out?”. It seems likely bankers would have gotten better terms from the market as long as they didn’t wait until the worst part of the crisis. I’m pretty sure they gave little thought to bailouts, and relied instead on overly complacent expectations for housing prices.

The book has a number of asides that seem as important as their main points, such as claims that Britain’s greater ability to borrow money led to its military power, and its increased need for military manpower drove its expansion of the franchise.

Book review: Poor Economics: A Radical Rethinking of the Way to Fight Global Poverty by Abhijit V. Banerjee and Esther Duflo.

This book gives an interesting perspective on the obstacles to fixing poverty in the developing world. They criticize both Jeffrey Sachs and William Easterly for overstating how easy/hard it is to provide useful aid to the poor by attempting simple and sweeping generalizations, whereas Banerjee and Duflo want us to look carefully at evidence from mostly small-scale interventions which sometimes produce decent results.

They describe a few randomized controlled trials, but apparently there aren’t enough of those to occupy a full book, so they spend more time on less rigorous evidence of counter-intuitive ways that aid programs can fail.

They portray the poor as mostly rational and rarely making choices that are clearly stupid given the information that is readily available to them. But their cognitive abilities are sometimes suboptimal due to mediocre nutrition, disease, and/or stress from financial risks. Relieving any of those problems can sometimes enable them to become more productive workers.

The book advocates mild paternalism in the form of nudging weakly held beliefs about health-related questions where people can’t easily observe the results (e.g. vaccination, iodine supplementation), but probably not birth control (the poor generally choose how many children to have, although there are complex issues influencing those choices). They point out that the main reason people in developed countries make better health choices is due to better defaults, not more intelligence. I wish they’d gone a bit farther and speculated about how many of our current health practices will look pointlessly harmful to more advanced societies.

They give a lukewarm endorsement of microcredit, showing that it needs to be inflexible to avoid high default rates, and only provides small benefits overall. Most of the poor would be better off with a salaried job than borrowing money to run a shaky business.

The book fits in well with GiveWell’s approach.

Book review: How China Became Capitalist, by Ronald Coase and Ning Wang.

This is my favorite book about China so far, due to a combination of insights and readability.

They emphasize that growth happened rather differently from how China’s leaders planned, and that their encouragement of trial and error was more important than their ability to recognize good plans.

The most surprising features of China’s government after 1978 were a lack of powerful special interests and freedom from ideological rigidity. Mancur Olson’s book The Rise and Decline of Nations suggests how a revolution such as Mao’s might free a nation from special interest power for a good while.

I’m still somewhat puzzled by the rapid and nearly complete switch from a country blinded by ideology to a country pragmatically searching for a good economy. Coase and Wang attribute it to awareness of the harm Maoism caused, but I can easily imagine that such awareness could mainly cause a switch to a new ideology.

It ends with a cautiously optimistic outlook on China’s future, with some doubts about freedom of expression, and some hope that China will contribute to diversity of capitalist cultures.

Automated market-making software agents have been used in many prediction markets to deal with problems of low liquidity.

The simplest versions provide a fixed amount of liquidity. This either causes excessive liquidity when trading starts, or too little later.

For instance, in the first year that I participated in the Good Judgment Project, the market maker provided enough liquidity that there was lots of money to be made pushing the market maker price from its initial setting in a somewhat obvious direction toward the market consensus. That meant much of the reward provided by the market maker went to low-value information.

The next year, the market maker provided less liquidity, so the prices moved more readily to a crude estimate of the traders’ beliefs. But then there wasn’t enough liquidity for traders to have an incentive to refine that estimate.

One suggested improvement is to have liquidity increase with increasing trading volume.
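
As one way to picture that suggestion, here is a minimal sketch of an LMSR-style market maker (Hanson’s logarithmic market scoring rule) whose liquidity parameter grows with cumulative volume. The parameter values are arbitrary, and this is a generic illustration of the idea rather than the specific proposal alluded to above.

import math

# Sketch: an LMSR price rule for a two-outcome market where the liquidity
# parameter b grows with cumulative trading volume, so early trades move
# the price easily and later trades move it less.

def lmsr_price(q_yes, q_no, b):
    # Probability-like price of the "yes" outcome given outstanding shares.
    return math.exp(q_yes / b) / (math.exp(q_yes / b) + math.exp(q_no / b))

def liquidity(volume, b0=10.0, alpha=0.1):
    # Liquidity parameter that increases with cumulative volume.
    return b0 + alpha * volume

# The same inventory imbalance implies a less extreme price (i.e. the price
# is harder to move) as volume, and therefore liquidity, grows.
q_yes, q_no = 30.0, 10.0
for volume in (0.0, 100.0, 1000.0):
    print(volume, '%.3f' % lmsr_price(q_yes, q_no, liquidity(volume)))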

I present some sample Python code below (inspired by equation 18.44 in E.T. Jaynes’ Probability Theory) which uses the prices at which traders have traded against the market maker to generate probability-like estimates of how likely a price is to reflect the current consensus of traders.

This works more like human market makers, in that it provides the most liquidity near prices where there’s been the most trading. If the market settles near one price, liquidity rises. When the market is not trading near prices of prior trades (due to lack of trading or news that causes a significant price change), liquidity is low and prices can change more easily.

I assume that the possible prices a market maker can trade at are integers from 1 through 99 (percent).

When traders are pushing the price in one direction, this is taken as evidence that increases the weight assigned to the most recent price and all others farther in that direction. When traders reverse the direction, that is taken as evidence that increases the weight of the two most recent trade prices.

The resulting weights (p_px in the code) are fractions which should be multiplied by the maximum number of contracts the market maker is willing to offer when liquidity ought to be highest. There is one weight for each price at which the market maker might position itself (in practice there will actually be two prices; maybe the two weights ought to be averaged).

There is still room for improvement in this approach, such as giving less weight to old trades after the market acts like it has responded to news. But implementers should test simple improvements before worrying about finding the optimal rules.

# A set of made-up trades for testing. Each tuple is (direction, price):
# direction is +1 for a trader buying from the market maker, -1 for selling.
trades = [(1, 51), (1, 52), (1, 53), (-1, 52), (1, 53),
          (-1, 52), (1, 53), (-1, 52), (1, 53), (-1, 52)]

p_px = {}        # weight assigned to each possible price
num_agree = {}   # count of trades consistent with each price being the consensus

probability_list = range(1, 100)   # possible prices: 1 through 99 (percent)
num_probabilities = len(probability_list)

for i in probability_list:
    p_px[i] = 1.0 / num_probabilities
    num_agree[i] = 0

num_trades = 0
last_trade = 0
for (buy, price) in trades:
    num_trades += 1
    for i in probability_list:
        if last_trade * buy < 0:
            # Change of direction: count only the two most recent trade prices.
            if buy < 0 and (i == price or i == price + 1):
                num_agree[i] += 1
            if buy > 0 and (i == price or i == price - 1):
                num_agree[i] += 1
        else:
            # Same direction: count this price and all prices farther in the
            # direction the traders are pushing.
            if buy < 0 and i <= price:
                num_agree[i] += 1
            if buy > 0 and i >= price:
                num_agree[i] += 1
        # Laplace-style estimate (cf. equation 18.44 in Jaynes).
        p_px[i] = (num_agree[i] + 1.0) / (num_trades + num_probabilities)
    last_trade = buy

for i in probability_list:
    print(i, num_agree[i], '%.3f' % p_px[i])
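
As a hypothetical example of turning these weights into quotes (continuing from the code above; max_contracts is an arbitrary cap chosen by whoever runs the market maker, not something from the post), the offer size at each price could simply be the weight times that cap:

# Hypothetical usage: scale the weights into offer sizes near recent trades.
max_contracts = 200  # illustrative cap on contracts offered at the best-supported price
for i in (50, 51, 52, 53, 54):
    offer_size = int(round(max_contracts * p_px[i]))
    print(i, offer_size)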

Charity for Corporations

In his talk last week, Robin Hanson mentioned an apparently suboptimal level of charitable donations to for-profit companies.

My impression is that some of the money raised on Kickstarter and Indiegogo is motivated by charity.

Venture capitalists occasionally bias their investments towards more “worthy” causes.

I wonder whether there’s also some charitable component to people accepting lower salaries in order to work at jobs that sound like they produce positive externalities.

Charity for profitable companies isn’t likely to become a popular concept anytime soon, but that doesn’t keep subsets of it from becoming acceptable if framed differently.