I’ve switched from b2evolution to WordPress for the software that runs this blog.
I had been planning to do this for several weeks because b2evolution was too spam-friendly, and WordPress seems to have a competent approach to comment spam (holding comments for moderation if they don’t come from someone who previously made an approved comment). I was forced to shut down the b2evolution software yesterday when the load it put on my system became excessive.
I think I’ve fixed all the permalinks to work as before.

Book Review: On Intelligence by Jeff Hawkins

This book presents strong arguments that prediction is a more important part of intelligence than most experts realize. It outlines a fairly simple set of general purpose rules that may describe some important aspects of how small groups of neurons interact to produce intelligent behavior. It provides a better theory of the role of the hippocampus than I’ve seen before.
I wouldn’t call this book a major breakthrough, but I expect that it will produce some nontrivial advances in the understanding of the human brain.
The most disturbing part of this book is the section on the risks of AI. He claims that AIs will just be tools, but he shows no sign of having given thought to any of the issues involved beyond deciding that an AI is unlikely to have human motives. But that leaves a wide variety of other possible goal systems, many of which would be as dangerous. It’s possible that he sees easy ways to ensure that an AI is always obedient, but there are many approaches to AI for which I don’t think this is possible (for instance, evolutionary programming looks like it would select for something resembling a survival instinct), and this book doesn’t clarify what goals Hawkins’ approach is likely to build into his software. It is easy to imagine that he would need to build in goals other than obedience in order to get his system to do any learning. If this is any indication of the care he is taking to ensure that his “tools” are safe, I hope he fails to produce intelligent software.
For more discussion of AI risks, see sl4.org. In particular, I have a description there of how one might go about safely implementing an obedient AI. At the time I was thinking of Pei Wang’s NARS as the best approach to AI, and with that approach it seems natural for an AI to have no goals that are inconsistent with obedience. Hawkins’ approach, however, seems approximately as powerful as NARS but more likely to tempt designers into building in goals other than obedience.

Robin Hanson has another interesting paper on human attitudes toward truth and on how they might be improved.
See also some related threads on the extropy-chat list here and here.
One issue that Robin raises involves disputes between us and future generations over how much we ought to constrain our descendants to be similar to us. He is correct that some of this disagreement results from what he calls “moral arrogance” (i.e. at least one group of people overestimating their ability to know what is best). But even if we and our descendants were objective about analyzing the costs and benefits of the alternatives, I would expect some disagreement to remain, because different generations will want to maximize the interests of different groups of beings. Conflicting interests between two groups that exist at the same time can in principle be resolved by one group paying the other to change its position. But when one group exists only in the future, and its existence is partly dependent on which policy is adopted now, it’s difficult to see how such disagreements could be resolved in a way that all could agree upon.

Book Review: Anthropic Bias: Observation Selection Effects in Science and Philosophy by Nick Bostrom

This book discusses selection effects as they affect reasoning on topics such as the Doomsday Argument, whether you will choose a lane of traffic that is slower than average, and whether we can get evidence for or against the Many Worlds Interpretation of quantum mechanics. Along the way it poses some unusual thought experiments that at first glance seem to prove some absurd conclusions. It then points out the questionable assumptions about what constitutes a “similar observer” upon which the absurd conclusions depend, and in doing so it convinced me that the Doomsday Argument is weaker than I had previously thought.
It says some interesting things about the implications of a spatially infinite universe, and of the possibility that the number of humans will be infinite.
It is not easy to read, but there’s little reason to expect a book on this subject could be both easy to read and correct.
The author has a web site for the book.

Tyler Cowen claims that market prices say the “demand for raw materials will continue to outstrip the supply”. But I don’t see the market prices saying that. Tyler seems to be extrapolating from trends of the past few years.
He seems to be ignoring what futures contracts for delivery several years out are saying. Here’s what I see for commodities with futures contracts several years out:

Commodity      Nearest futures contract    Farthest futures contract
Silver         $7.307                      $7.848  (Jul 2009)
Crude Oil      $51.15                      $42.41  (Dec 2011)
Natural Gas    $6.304                      $5.721  (Dec 2010)
Copper         $1.477                      $1.255  (Dec 2006)

Gold and silver prices are expected (as usual) to maintain their purchasing power, while prices of other commodities that have had big run-ups recently are expected to fall.
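As a rough sanity check on that reading, here’s a small sketch (a hypothetical calculation I made up, using the quotes hand-copied from the table above rather than any live data feed) of the price change each market implies between its nearest and farthest contracts:

```python
# Implied nearest-to-farthest price change for each commodity,
# using the futures quotes from the table above.
quotes = {
    "Silver":      (7.307, 7.848),   # farthest: Jul 2009
    "Crude Oil":   (51.15, 42.41),   # farthest: Dec 2011
    "Natural Gas": (6.304, 5.721),   # farthest: Dec 2010
    "Copper":      (1.477, 1.255),   # farthest: Dec 2006
}

for name, (near, far) in quotes.items():
    change = (far - near) / near * 100
    print(f"{name}: {change:+.1f}%")
# Silver: +7.4%
# Crude Oil: -17.1%
# Natural Gas: -9.2%
# Copper: -15.0%
```

Only silver is priced to rise; the commodities with recent run-ups are all priced to fall.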
I’ve been making some investments that are based on the belief that markets are underestimating Chinese/Indian demand over the next 5 years or so. But markets are clearly saying that the Hubbert Peak arguments are either wrong, or unimportant due to the likelihood of a switch to alternative fuels. And with metals, it sure looks like we are seeing merely a combination of Asian demand and a weak dollar.

This book provides a moderately strong argument that the production of cheap oil is peaking, although it isn’t as conclusive an argument as I’d hoped for, and is only a little bit better than the brief summaries of Hubbert’s ideas that I’d previously seen on the net.
Much of the book consists of marginally relevant stories of his career as a geologist. He occasionally slips in some valuable tidbits, such as that Texas once had an oil cartel.
He does a mediocre job of analyzing the consequences of scarcer oil. He provides a few hints of how natural gas could replace oil, but says much less about the costs of switching than I’d hoped for. His comments on how to protect yourself are misleading:

In the past, a useful way of insuring major producers and consumers against the effect of price changes was purchasing futures contracts. However, the ordinary futures contracts extend for a year or two. The oil problem extends for 10 years or more. Anyone who agrees to supply oil 10 years from now, for a price agreed on today, very likely will disappear into bankruptcy before the contract matures.

At the time the book was first published (2001), crude oil futures contracts extended about 7 years out. They weren’t liquid enough to hedge a large fraction of consumption, but if a desire to hedge had caused them to say in 2001 that crude would be at $60/barrel in 2008, rather than saying it would be in the low twenties, that would both have signaled a need to react and reduced the risks of doing so. The idea that bankruptcy would threaten such futures reflects his ignorance of the futures markets. An oil producer who sells futures as a hedge will almost certainly not sell more futures than it has oil to deliver. Speculators might lose their shirts, but futures brokers have the experience needed to ensure that the defaults are small enough for the brokers to absorb (see, for example, what happened in the gold mania of the late 70s).

The cover describes Stratfor (the intelligence company Friedman founded) as a “Shadow CIA”. By this book’s description of the CIA, this implies it has a lot of details right but misses many important broad trends. The book tends to have weaknesses of this nature, being better as a history of Al Qaeda’s conflict with the U.S. than as a guide to the future, but it’s probably a good deal more reliable than CIA analysis.
It describes a few important trends that I wasn’t aware of. The best theory the book proposes that I hadn’t heard before is the claim that the U.S. government is much more worried about Al Qaeda getting a nuclear bomb than the public realizes (for instance, the Axis of Evil is the set of nations that are unable or unwilling to prove they won’t help Al Qaeda get the bomb).
The explanation of the U.S. motives for invading Iraq as primarily to pressure the Saudi government is unconvincing.
The book’s biases are sufficiently subtle that I have some difficulty detecting them. It often paints Bush in as favorable a light as possible, but it also contains some harsh criticism of his mistakes, for example:

It is an extraordinary fact that in the U.S.-jihadist war, the only senior commander or responsible civilian to have been effectively relieved was Eric Shinseki, Chief of Staff of the U.S. Army, who was retired unceremoniously (although not ahead of schedule) after he accurately stated that more than 200,000 troops would be needed in Iraq.

The person selected, Tom Ridge, had no background in the field and had absolutely no idea what he was doing, but that was not a problem since, in fact, he would have nothing really to do. His job was simply to appear to be in control of an apparatus that did not yet exist.

But it’s hard to place a lot of confidence in theories that are backed mainly by eloquent stories. It’s unfortunate that the book is unable or unwilling to document the evidence needed to confirm them.

There’s a fair amount of agreement between this book and Imperial Hubris, but I’ve revised my opinion of that book a bit due to the disagreements between the two. The claims by Imperial Hubris that we don’t need to worry about a new Caliphate seem unpersuasive now that I see there is widespread disagreement with that claim and weak arguments on both sides. The two books disagree on who’s currently winning the war, but I see no sign that defeat for either side is near enough to be predictable.

Patri Friedman asks why websites often require users to deal with annoying pulldown menus such as those listing 50 states. I expect that the main reason is that users who are allowed to type in text will enter it in nonstandard forms. For example, Massachusetts will be entered as Mass or MA, or if limited to 2 characters the user might not remember the correct 2-letter code. Sites that need to calculate sales taxes differently for different states, or that think (not necessarily with good reason) they need to analyze customers by location for marketing reasons, need either standardized input or a good deal of imagination to predict all the variants they will get. Imagination isn’t cheap.
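To illustrate the problem, here’s a minimal sketch (the variant table and the `normalize_state` helper are hypothetical, invented for illustration) of what a site accepting free-text state input would have to maintain:

```python
# Mapping free-text state input to the standard 2-letter code.
# The variant list is illustrative, not exhaustive -- anticipating
# every variant users will type is exactly the expensive part.
STATE_CODES = {
    "massachusetts": "MA",
    "mass": "MA",
    "ma": "MA",
    "california": "CA",
    "calif": "CA",
    "ca": "CA",
}

def normalize_state(text):
    """Return a 2-letter state code, or None if the input isn't recognized."""
    return STATE_CODES.get(text.strip().lower().rstrip("."))

print(normalize_state("Mass."))   # MA
print(normalize_state("Texass"))  # None -- an unanticipated variant
```

A pulldown menu sidesteps the whole table: every possible input is already in standard form.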
I suspect there’s also a desire by some designers to show their status over users by preventing users from entering unexpected input.
I doubt these factors are enough to explain all examples of annoying pulldown menus, but I’d guess they explain at least half.