Book Review: Knowledge and the Wealth Of Nations: A Story of Economic Discovery by David Warsh
This book is an entertaining (but sometimes long-winded) history of economic thought that focuses on the role of technological knowledge. It shows how sporadic attempts, starting with Adam Smith, to incorporate such knowledge into the mainstream of economic thought kept getting marginalized, until a 1990 paper by Paul Romer finally appears to have convinced the profession to include it in their models as a nonrival, partly excludable good.
Warsh writes in a style intended to be appropriate for laymen, but I find this rather frustrating: it leaves out a fair amount of technical detail that I would like to understand, yet it probably fails to satisfy laymen, since the subject of the book will only seem important to readers who already have enough familiarity with economics to handle a more technical discussion.
I liked an analogy that the book reports of the history of maps of Africa, where improved standards of accuracy sometimes caused mapmakers to produce less informative maps as they removed unverified reports of features from interior parts of Africa well before they were able to replace them with something more reliable. The book shows how similar processes in economic models have resulted in similar blank spots in economic thought.
He claims that Romer’s theory amounts to an argument against free markets and in favor of some poorly specified state management of some aspects of the economy. But I saw no analysis to support that conclusion. All I saw were arguments that classical economic theory is too simplistic, and that we probably need to study lots of messy empirical evidence before deciding what Romer’s theory says about state action.
His analysis of the Microsoft antitrust case provides a better argument than I’d previously heard for breaking up Microsoft into an OS company and an Apps company, but still leaves me wondering why it would make much difference – most of the causes of Microsoft’s OS monopoly power would remain unchanged. His claim (apparently reporting Romer’s remarks) that Microsoft solved the double marginalization problem in a way that a breakup wouldn’t alter seems confused. He is right to point out that those pricing effects weren’t the main issue, although he doesn’t seem to understand why (see Lessig’s The Future of Ideas for a good explanation of how monopolies stifle innovation).
He has a chapter titled “How the Dismal Science Got Its Name” which says nothing about the actual origin of that term (which was coined by a racist who hated Mill’s belief that blacks could be productive without being slaves).
Economics
While browsing through charts of various stocks, I came across a company (Manchester Inc., symbol MNCS) with a chart that’s unusual enough that I had to check around to reassure myself that my primary source for stock market prices wasn’t playing tricks on me.
It has a history of unusually steady increases with few signs of the randomness that I normally see in stock prices. If you had bought at the closing price any day this year and held for ten trading days, it would have closed higher than your purchase price (your average gain would have been over 3 percent), and it was almost as predictable the prior year.
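To make that holding-period claim concrete, here's a minimal sketch of the 10-trading-day check. The price series below is made up (the actual MNCS prices aren't reproduced here); the drift rate is a hypothetical chosen to roughly match the gains described.

```python
# Sketch of the 10-trading-day holding-period check described above.
def forward_returns(closes, horizon=10):
    """Gain from buying at each close and selling `horizon` trading
    days later, as a fraction of the purchase price."""
    return [(closes[i + horizon] - closes[i]) / closes[i]
            for i in range(len(closes) - horizon)]

# A toy series drifting up almost deterministically, like the pattern
# described: every 10-day holding period gains money.
closes = [100 * (1.003 ** day) for day in range(60)]
gains = forward_returns(closes)
print(all(g > 0 for g in gains))   # True: every holding period gained
print(sum(gains) / len(gains))     # average gain per 10-day hold, just over 3%
```

On real data the interesting part is how rarely `all(g > 0 ...)` holds; for a typical stock, roughly 40 to 50 percent of 10-day holding periods lose money.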
A paragraph in the middle of this Forbes story explains why its market value looks strange.
The only guess I have as to what might cause this is an unusual form of manipulation where the manipulators produce this phenomenon until traders who buy purely on price trends provide enough liquidity for the manipulators to cash out. But even that is pretty implausible – if that’s what’s happening, why wouldn’t they create a bit more day-to-day randomness to disguise it? And how could they afford to risk as large an investment as I suspect this would take on an approach that seems different enough from anything tried before that it ought to be hard to predict whether it would work?
Book review: The Undercover Economist: Exposing Why the Rich Are Rich, the Poor Are Poor–and Why You Can Never Buy a Decent Used Car! by Tim Harford
This book does an excellent job of describing economics in a way that laymen can understand, although experts won’t find much that is new in it.
Harford’s description of price discrimination is the best I’ve seen, and the first to describe how to tell the extent to which an instance of price discrimination has good effects (the extent to which it expands the number of sales).
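A toy numeric illustration of that test (all numbers hypothetical): price discrimination counts as beneficial to the extent it creates sales that the best single price would have excluded.

```python
# Hypothetical demand: two buyer groups with different willingness to pay.
# (name: (number of buyers, willingness to pay per unit))
buyers = {"affluent": (100, 10.0), "price_sensitive": (200, 4.0)}
unit_cost = 2.0

def uniform_outcome(price):
    """Profit and quantity sold if everyone is charged one price."""
    qty = sum(n for n, wtp in buyers.values() if wtp >= price)
    return (price - unit_cost) * qty, qty

# The best single price here is $10: 800 profit on 100 sales, versus
# 600 profit on 300 sales at $4 -- so only 100 units sell.
# Discrimination charges each group its willingness to pay, so all 300
# units sell: output expands, which is Harford's sign of a beneficial
# instance of price discrimination.
disc_qty = sum(n for n, _ in buyers.values())
disc_profit = sum(n * (wtp - unit_cost) for n, wtp in buyers.values())
print(uniform_outcome(10.0))   # (800.0, 100)
print(disc_profit, disc_qty)   # 1200.0 300
```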
His arguments that globalization reduces pollution are impressive for most types of pollution, but for carbon dioxide emissions I’m very disappointed. He hopes that energy use has peaked in the richest countries because he’s failed to imagine what will cause enough increased demand to offset increases in efficiency. For those of modest imagination, I suggest thinking about more realistic virtual reality (I want my Holodeck), personal robots, and increased air conditioning due to people moving to bigger houses in warmer climates. For those with more imagination, add in spacecraft and utility fog.
Some small complaints:
He refers to Howard Schultz as the owner of Starbucks, but he only owns about 2 percent of Starbucks’ stock.
His comment that Amazon stock price dropped below its IPO price fails to adjust for stock splits – a share bought at $18 in 1997 would have become 12 shares worth $8 each in the summer of 2001.
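The split arithmetic can be checked directly:

```python
# Split-adjusted value of one Amazon IPO share, using the figures above.
ipo_price = 18.0           # price of one share at the 1997 IPO
shares_after_splits = 12   # what one IPO share had become by mid-2001
price_2001 = 8.0           # per-share price in the summer of 2001

value_2001 = shares_after_splits * price_2001
print(value_2001)                   # 96.0 -- well above the $18 paid
print(value_2001 / ipo_price - 1)   # a gain of more than 400%
```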
His claim that “Google is the living proof that moving first counts for nothing on the Internet” is a big exaggeration. It’s quite possible that Google’s success was primarily due to being the first to reach some key threshold of quality, and that many small competitors have since matched its quality without taking measurable business away from it.
Chris Hibbert writes (in a post that is partly about the mess resulting from Tradesports’ contract on North Korean missile launches):
The fact that pay-outs are limited to the amount spent to purchase claims is integral to the institution of prediction markets. If market operators ever pay off both sides of a claim, that is likely to encourage investors to protest many more close calls.
I disagree. Having pay-outs equal to claim purchases is integral to the normal function of well-written claims, but there’s little reason to stick to that rule with a claim written as poorly as the North Korean missile claim was.
Paying off both sides was the most reasonable suggestion I’ve heard for what Tradesports should have done to limit the damage to their reputation. Experience with similar disputes (such as those on FX) suggests that traders already have sufficient motive to protest questionable decisions that it’s hard to see how disputes produced only by additional incentives could bear much resemblance to reasonable disputes. The increased incentive on Tradesports to word their claims so that fewer people misunderstand how they will be judged is likely to have some desirable effects on how Tradesports explains the meaning of their contracts.
Disputed judgments might be inevitable for exchanges that cover subjects as ambitious as Tradesports does, but there’s nothing inevitable about confusion about whether a contract was about DoD confirmation of where the missiles landed, or whether it was about what the missiles did, with DoD statements merely being used if needed to resolve any uncertainty.
(I didn’t trade any of the North Korean missile contracts).
For those investors who (unlike me) can’t afford to do fundamental analysis on a large number of companies (and if you can’t afford to analyze thousands of companies, you’re probably using a questionable method to select which ones to analyze), there’s a new class of ETFs which sounds like it fixes some of the worst problems with typical stock funds.
Most people invest in funds that are based on a capitalization weighted index, which means that any time there’s a bubble affecting some of the stocks in the index, the fund is buying those stocks at the peak. The more popular those funds are, the easier it is to create bubbles in the stocks they buy.
There’s a new ETF (symbol PRF) that weights its holdings on dividends instead, which will sell stocks that are affected by bubbles (except in the unusual case where the company increases its dividend in step with the bubble).
The Political Calculations blog mentions similar strategies which appear to work about as well (the dividend weighting selects against small immature companies, and it ought to be possible to avoid that).
Weighting on revenues sounds like it works well, although it overweights retailers and underweights successful pharmaceutical companies and oil producers that find cheap sources of oil.
Weighting on the number of employees should work (although that underweights companies that outsource).
I’m somewhat partial to weighting on book value, but instead of the standard book value, I’d use tangible book value plus an estimate of amortized R&D expenses.
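The difference between these weighting schemes is easy to sketch on made-up data (company names and all figures below are hypothetical):

```python
# Each company: market cap, total dividends paid, tangible book value.
# "BubbleCo" stands in for a stock whose price has been bid up well
# beyond what its fundamentals would justify.
companies = {
    "BubbleCo": {"cap": 500.0, "dividends": 1.0, "book": 20.0},
    "SteadyCo": {"cap": 100.0, "dividends": 4.0, "book": 60.0},
    "OilCo":    {"cap": 150.0, "dividends": 5.0, "book": 80.0},
}

def weights(metric):
    """Portfolio weights proportional to the chosen fundamental."""
    total = sum(c[metric] for c in companies.values())
    return {name: c[metric] / total for name, c in companies.items()}

# Cap weighting puts two thirds of the portfolio into BubbleCo;
# dividend or book weighting mostly ignores the bubble.
print(weights("cap"))        # BubbleCo gets ~0.67
print(weights("dividends"))  # BubbleCo gets 0.10
print(weights("book"))       # BubbleCo gets 0.125
```

A cap-weighted fund mechanically buys more of whatever has already gone up, while the fundamental weightings rebalance away from it, which is the mechanism behind the bubble-resistance claimed above.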
Shorting the 5 or 10 companies with the largest market capitalizations would probably be a good way to invest a modest portion of a portfolio in a way that would reduce risk and improve returns.
These strategies do have the potential to underperform if they become as popular as buying and holding S&P 500 funds was around 2000, but it will take some time for them to become that trendy, and even if they do there will probably still be funds using unpopular versions of fundamental weighting that will remain good investments.
Richard Timberlake’s article in the June 2006 issue of Liberty makes some arguments about the causes of the Great Depression that are tempting to believe but at best only partly convincing.
Much of the article is about the Fed becoming dominated by followers of the real bills doctrine. He presents evidence that leaders of the Fed liked the doctrine, and I can imagine that following it could explain much of the 1930-1932 contraction. But if the Fed had fully followed that doctrine and that had been the primary cause of the contraction, the narrow measures of the money supply (the ones most directly under the Fed’s control) would have contracted, when they actually expanded during 1930-1932. So I doubt that the Fed was as influenced by the doctrine as Timberlake suggests. But as a factor contributing to the Fed’s caution about expanding the money supply further, it’s fairly plausible, and it makes me a bit more skeptical of the Fed’s competence than I was before.
The more interesting part of the article is the attempt to deny that the gold standard did anything to cause the contraction. Timberlake notes that the Fed’s gold reserves remained well above the legally required minimum, and claims that shows the Fed wasn’t constrained from expanding the money supply by risks to the gold standard. But that’s true only if the legally required reserves were either sufficient to cover all potential claims or to convince holders of paper dollars that all likely claims would be satisfied. I’m not aware of any clear reason to think this was the case, and it’s easy to imagine that the Fed knew more than Timberlake does about how eager holders of paper dollars would have been to demand gold if the Fed’s gold reserves had dropped further. So I’m still inclined to think that the Fed’s restraint in late 1931 and 1932 resulted from a somewhat plausible belief that it couldn’t do more without taking excessive risk that the gold standard would fail and that we would be stuck with the kind of inflation-prone system that we ended up with anyway.
Book Review: When Genius Failed: The Rise and Fall of Long-Term Capital Management by Roger Lowenstein
This is a very readable and mostly convincing account of the rise and fall of Long-Term Capital Management. It makes it clear to me how the fairly common problem of success breeding overconfidence led LTCM to make unreasonable gambles, and why other financial institutions that risked their money by dealing with LTCM failed to require it to exercise a normal degree of caution.
The book occasionally engages in some minor exaggerations that suggest the author is a journalist rather than an expert in finance, but mostly the book appears a good deal more accurate and informed than I expect from a reporter. It is written so that both experts and laymen will enjoy it.
One passage stands out as unusually remarkable. “The traders hadn’t seen a move like that – ever. True, it had happened in 1987 and again in 1992. But Long-Term’s models didn’t go back that far.” This is a really peculiar mistake. The people involved appeared to have enough experience to realize the need to backtest their models better than that. I’m disappointed that the book fails to analyze how this misjudgment was possible.
Also, the author spends a bit too much analysis on LTCM’s overconfidence in their models, when his reporting suggests that a good deal of the problem was due to trading that wasn’t supported by any model.
Paul W.K. Rothemund’s cover article on DNA origami in the March 16 issue of Nature appears to represent an order of magnitude increase in the complexity of objects that can self-assemble to roughly atomic precision (whether it’s really atomic precision depends in part on the purposes you’re using it for – every atom is put in a predictable bond connecting it to neighbors, but there’s enough flexibility in the system that the distances between distant atoms generally aren’t what would be considered atomically precise).
It was interesting watching the delayed reaction in the stock price of Nanoscience Technologies Inc. (symbol NANS), which holds possibly relevant patents. Even though I’m a NANS stockholder, have been following the work in the field carefully, and was masochistic enough to read important parts of the relevant patents produced by Ned Seeman several years ago, I have little confidence in my ability to determine whether the Seeman patents cover Rothemund’s design. (If the patents were worded as broadly as many aggressive patents are these days, the answer would probably be yes, but they’re worded fairly responsibly to cover Seeman’s inventions fairly specifically. It’s clear that Seeman’s inventions at least had an important influence on Rothemund’s design.)
It’s pretty rare for a stock price to take days to start reacting to news, but this was an unusual case. Someone reading the Nature article would think the probability of the technique being covered by patents owned by a publicly traded company was too small to justify a nontrivial search. Hardly anyone was following the company (which I think is a one-person company). I put in bids on the 20th and 21st for some of the stock at prices that were cautious enough not to signal that I was reacting to potentially important news, and picked up a modest number of shares from people who seemed either not to know the news or to think it irrelevant. Then late on the 21st some heavy buying started. Now it looks like there’s massive uncertainty about what the news means.
Nick Szabo has a theory about the cause of the industrial revolution that’s well worth reading.
Arnold Kling reports that free markets are more popular in China than in the U.S.
That reminds me that I should give more thought to investing in China-related companies.