Science and Technology

Accelerando! is an entertaining collection of loosely related anecdotes spanning a period from the near future to the post-singularity world. Stross seems more interested in showing off how many geeky pieces of knowledge he has and how many witty one-liners he can produce than in producing a great plot or a big new vision. I expect that people who aren’t hackers or extropians will sometimes be confused by some of his more obscure references (e.g. when he assumes you know how a third-party compiler defeats the Thompson hack).
He sometimes tries too hard to show off his knowledge, such as when he says that “solving the calculation problem” causes “screams from the Chicago School” – which suggests he confuses the Chicago School with the Austrian School. He says that in the farther parts of the solar system

Most people huddle close to the hub, for comfort and warmth and low latency: posthumans are gregarious.

But most of what I know about the physics of computation suggests that warmth is a problem they will be trying to minimize.
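A rough back-of-the-envelope sketch (my own, not Stross’s) makes the point: if the Landauer limit of kT·ln 2 joules per erased bit applies, the minimum cost of irreversible computation falls linearly with temperature, so cold, distant locations look attractive for computing.

```python
# Rough back-of-the-envelope: the Landauer limit (k*T*ln2 joules per erased
# bit) falls linearly with temperature, so colder locations make
# irreversible computation cheaper.  Figures are illustrative only.
import math

BOLTZMANN = 1.380649e-23  # J/K

def landauer_joules_per_bit(temperature_kelvin: float) -> float:
    """Minimum energy dissipated per irreversibly erased bit."""
    return BOLTZMANN * temperature_kelvin * math.log(2)

for label, temp in [("Earth ambient (300 K)", 300.0),
                    ("Outer solar system (30 K)", 30.0)]:
    print(f"{label}: {landauer_joules_per_bit(temp):.2e} J/bit")
```
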
The early parts of the book try to impress the reader with future shock, but toward the end technological change seems to have less and less effect on how the characters live. That is hard to reconcile with the kind of exponential change that Stross seems to believe in.
He has many tidbits about innovative economic and legal institutions. But it’s often hard to understand how realistic they are, because I got some inconsistent impressions about basic things such as whether Manfred used money.
His answer to the Fermi paradox is unconvincing. It is easy to imagine that the smartest beings will want to stick close to the most popular locations. But that leaves plenty of other moderately intelligent beings (the lobsters?) with little attachment to this solar system, whose failure to colonize the galaxy he doesn’t explain.
Some interesting quotes:

humans will be obsolete as economic units within a couple more decades. All I want to do is make everybody rich beyond their wildest dreams before that happens.

“A moment.” Manfred tries to remember what address to ping. It’s useless, and painfully frustrating. “It would help if I could remember where I keep the rest of my mind,”

disaffected youth against the formerly graying gerontocracy of Europe, insist that people who predate the supergrid and can’t handle implants aren’t really conscious

And here’s one quote from Fred in my reading group’s discussion of the book:

The meat shall inherit the earth

Book Review: Women, Fire, and Dangerous Things: What Categories Reveal about the Mind by George Lakoff
I would have found this book well worth reading if I had read it when it was published, but by now I’ve picked up most of the ideas elsewhere.
He does a good job of describing the problems of the classical view of categories. His description of the alternative prototype theory is not as clear and convincing as what I’ve found in the neural net literature.
His attacks on objectivism and the “God’s eye view” of reality are pretty good. I found this claim interesting: (page 301) “to be objective requires one to be a relativist of an appropriate sort”.
The chapter on the mind-as-machine paradigm gives a superficial impression of saying more than it actually does. It discredits an approach to AI that was mostly recognized as a failure by the AI community when the book was published or shortly after that. His strange use of the word “algorithm” could confuse some people into thinking he discredits more than this: he says “Algorithms concern the manipulation of meaningless disembodied symbols”, yet admits his arguments don’t discredit connectionism. By the normal computer science usage of “algorithm”, it is quite sensible to say that connectionism uses algorithms to manipulate meaningful concepts.
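To illustrate that last point with a toy example of my own (not Lakoff’s): a single perceptron trained on logical AND is manipulated by perfectly ordinary step-by-step rules, yet what it learns is a meaningful mapping rather than bare symbol shuffling.

```python
# A minimal perceptron trained on the logical-AND task: an ordinary,
# step-by-step algorithm, even though what it learns is a "meaningful"
# input-output mapping rather than raw symbol manipulation.  Toy example.
def train_perceptron(samples, epochs=20, lr=0.1):
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            output = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            error = target - output
            w[0] += lr * error * x1
            w[1] += lr * error * x2
            b += lr * error
    return w, b

and_samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
weights, bias = train_perceptron(and_samples)
print(weights, bias)
```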

Book Review: The God Gene: How Faith Is Hardwired into Our Genes by Dean H. Hamer
This book is entertaining but erratic. To start with, the title is misleading. The important parts of the book are about spirituality (as in what Buddhists seek), which has little connection with God or churches. He does a moderately good job of describing evidence that he has identified a gene that influences spirituality. He makes plausible claims that spirituality makes people happy (that part of the book resembles the works of Csikszentmihalyi and Seligman). His half-hearted argument that spirituality has evolutionary advantages isn’t very convincing by itself, but in combination with the sexual selection arguments in Miller’s book The Mating Mind it becomes moderately plausible.
About halfway through the book, he runs out of things to say on those subjects and proceeds to wander through a bunch of marginally related subjects.
His descriptions of psilocybin, Prozac, and ecstasy were interesting enough to make me want to learn more about those and similar drugs.
His claims that placebos are effective seem very exaggerated (see this abstract).

I was somewhat disappointed by the latest Accelerating Change Conference, which might have been great for people who have never been to that kind of conference before, but didn’t manage enough novelty to be terribly valuable to those who attended the first one. Here are a few disorganized tidbits I got from it.
Bruno Olshausen described our understanding of the neuron as pre-Newtonian, and said a neuron might be as complex as a Pentium.
Joichi Ito convinced me that Wikipedia has a wider range of uses than my stereotype of it as a dictionary/encyclopedia suggested. For example, its entry on Katrina seems to be a better summary of the news than what I can get via the traditional news media.
Cory Ondrejka pointed out the negative correlation between the availability of violent video games and some broad measure of U.S. crime. He hinted this might say something about causation, but reminded people of the appropriate skepticism by noting the correlation between the decline in pirates and global warming.
Someone reported that Second Life is growing at an impressive pace. I’ve tried it a little over a somewhat flaky wireless connection and wasn’t too excited; I’ll try to get my iBook connected to my DSL line and see if a more reliable connection makes it nicer.
Tom Malone talked about how declining communications costs first enabled the creation of large companies with centralized hierarchies and are now decentralizing companies. His view of eBay was interesting – he pointed out that it could be considered a retailer with one of the largest numbers of employees, except that it has outsourced most of its employees (i.e. the people who make a living selling through eBay). He also mentioned that Intel has some internal markets for resources such as manufacturing capacity.
Daniel Amen criticized modern psychiatry for failing to look at the brain for signs of physical damage. He provided strong anecdotal evidence that the brain imaging services he sells can sometimes tell people how to fix mental problems that standard psychiatry can’t diagnose, but left plenty of doubt as to whether his successes are frequent enough to justify his fees.
T. Colin Campbell described some evidence that eating animal protein is unhealthy. He didn’t convince me that he was a very reliable source of information, but his evidence against casein (a milk protein) sounded fairly strong.
One odd comment from Robin Raskin (amidst an annoying amount of thoughtless sensationalism) was that kids don’t use email anymore. They send about two emails per day [i.e. they’ve switched to IM]. The idea that sending two emails per day amounts to abandoning email makes me wonder to what extent I’m out of touch with modern communication habits.
An amusing joke, attributed to Eric Drexler:
Q: Why did Douglas Hofstadter cross the road?
A: To make this joke possible.

Book Review: The Singularity Is Near: When Humans Transcend Biology by Ray Kurzweil
Kurzweil does a good job of arguing that extrapolating trends such as Moore’s Law works better than most alternative forecasting methods, and he does a good job of describing the implications of those trends. But he is a bit long-winded, and tries to hedge his methodology by pointing to specific research results which he seems to think buttress his conclusions. He neither convinces me that he is good at distinguishing hype from value when analyzing current projects, nor that doing so would help with the longer-term forecasting that constitutes the important aspect of the book.
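As a toy sketch of what such trend extrapolation amounts to (the figures below are placeholders, not Kurzweil’s data), fitting a constant doubling time to a price-performance series and projecting it forward is a short calculation:

```python
# Toy trend extrapolation: fit an exponential (constant doubling time) to a
# few price-performance data points and project it forward.  The numbers
# below are hypothetical placeholders, not Kurzweil's data.
import math

years = [1990, 2000, 2010]
ops_per_dollar = [1e3, 1e6, 1e9]  # hypothetical figures

# Least-squares fit of log2(performance) against year gives the doubling time.
n = len(years)
mean_x = sum(years) / n
mean_y = sum(math.log2(p) for p in ops_per_dollar) / n
slope = (sum((x - mean_x) * (math.log2(p) - mean_y)
             for x, p in zip(years, ops_per_dollar))
         / sum((x - mean_x) ** 2 for x in years))
doubling_time = 1 / slope
projected_2030 = 2 ** (mean_y + slope * (2030 - mean_x))
print(f"doubling time ~ {doubling_time:.2f} years, "
      f"2030 projection ~ {projected_2030:.1e} ops/$")
```
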
Given the title, I was slightly surprised that he predicts that AIs will become powerful slightly more gradually than I recall him suggesting previously (which is a good deal more gradual than most Singulitarians). He offsets this by predicting more dramatic changes in the 22nd century than I imagined could be extrapolated from existing trends.
His discussion of the practical importance of reversible computing is clearer than anything else I’ve read on this subject.
When he gets specific, large parts of what he says seem almost right, but there are quite a few details that are misleading enough that I want to quibble with them.
For instance (on page 244, talking about the world circa 2030): “The bulk of the additional energy needed is likely to come from new nanoscale solar, wind, and geothermal technologies.” Yet he says little to justify this, and most of what I know suggests that wind and geothermal have little hope of satisfying more than 1 or 2 percent of new energy demand.
His reference on page 55 to “the devastating effect that illegal file sharing has had on the music-recording industry” seems to say something undesirable about his perspective.
His comments on economists’ thoughts about deflation are confused and irrelevant.
On page 92 he says “Is the problem that we are not running the evolutionary algorithms long enough? … This won’t work, however, because conventional genetic algorithms reach an asymptote in their level of performance, so running them for a longer period of time won’t help.” If “conventional” excludes genetic programming, then maybe his claim is plausible. But genetic programming originator John Koza claims his results keep improving when he uses more computing power.
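For readers unfamiliar with the distinction, here is a minimal sketch (my toy example, not Kurzweil’s or Koza’s setup) of a conventional genetic algorithm on a fixed-length bit string; once it reaches the best value the fixed representation allows, extra generations stop helping, which is one way such performance curves flatten out. Genetic programming instead evolves variable-size programs, which is part of why Koza reports continued gains from more computing power.

```python
# Minimal conventional genetic algorithm on a fixed-length bit string
# (maximize the number of 1s).  Once the optimum allowed by this fixed
# representation is reached, more generations can't improve it -- one way a
# GA's performance curve flattens out.  Toy example only.
import random

def evolve(genome_len=32, pop_size=50, generations=200, mutation_rate=0.02):
    population = [[random.randint(0, 1) for _ in range(genome_len)]
                  for _ in range(pop_size)]
    best = 0
    for _ in range(generations):
        scored = sorted(population, key=sum, reverse=True)
        best = max(best, sum(scored[0]))
        parents = scored[:pop_size // 2]           # truncation selection
        population = []
        while len(population) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, genome_len)  # one-point crossover
            child = a[:cut] + b[cut:]
            child = [bit ^ (random.random() < mutation_rate) for bit in child]
            population.append(child)
    return best

print(evolve())
```
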
His description of nanotech progress seems naive. (page 228) “Drexler’s dissertation … laid out the foundation and provided the road map still being followed today.” (page 234): “each aspect of Drexler’s conceptual designs has been validated”. I’ve been following this area pretty carefully, and I’m aware of some computer simulations which do a tiny fraction of what is needed, but if any lab research is being done that could be considered to follow Drexler’s road map, it’s a well kept secret. Kurzweil then offsets his lack of documentation for those claims by going overboard about documenting his accurate claim that “no serious flaw in Drexler’s nanoassembler concept has been described”.
Kurzweil argues that self-replicating nanobots will sometimes be desirable. I find this poorly thought out. His reasons for wanting them could be satisfied by nanobots that replicate under the control of a responsible AI.
I’m bothered by his complacent attitude toward the risks of AI. He sometimes hints that he is concerned, but his suggestions for dealing with the risks don’t indicate that he has given much thought to the subject. He has a footnote that mentions Yudkowsky’s Guidelines on Friendly AI. The context could lead readers to think they are comparable to the Foresight Guidelines on Molecular Nanotechnology. Alas, Yudkowsky’s guidelines depend on concepts which are hard enough to understand that few researchers are likely to comprehend them, and the few who have tried disagree about their importance.
Kurzweil’s thoughts on the risks that the simulation we may live in will be turned off are somewhat interesting, but less thoughtful than Robin Hanson’s essay on How To Live In A Simulation.
A couple of nice quotes from the book:
(page 210): “It’s mostly in your genes” is only true if you take the usual passive attitude toward health and aging.
(page 301): Sex has largely been separated from its biological function. … So why don’t we provide the same for … another activity that also provides both social intimacy and sensual pleasure – namely, eating?

Why did many people decide not to leave New Orleans in advance of Katrina? Part of the problem may have been that they relied on storytellers rather than weather experts.
NBC’s Brian Williams reports on his blog NBC’s reaction to this weather alert:

URGENT – WEATHER MESSAGE
NATIONAL WEATHER SERVICE NEW ORLEANS LA
1011 AM CDT SUN AUG 28 2005
…DEVASTATING DAMAGE EXPECTED…
HURRICANE KATRINA…A MOST POWERFUL HURRICANE WITH UNPRECEDENTED
STRENGTH…RIVALING THE INTENSITY OF HURRICANE CAMILLE OF 1969.
MOST OF THE AREA WILL BE UNINHABITABLE FOR WEEKS…PERHAPS LONGER.
AT LEAST HALF OF WELL CONSTRUCTED HOMES WILL HAVE ROOF AND WALL
FAILURE. ALL GABLED ROOFS WILL FAIL…ALL WOOD FRAMED LOW RISING
APARTMENT BUILDINGS WILL BE DESTROYED. … WATER SHORTAGES WILL MAKE
HUMAN SUFFERING INCREDIBLE BY MODERN STANDARDS.

Williams says “The wording and contents were so incendiary that our folks were concerned that it wasn’t real”, and implies that he and others at NBC translated this into something less scary for their viewers.
My most memorable experience with hurricane forecasts was with hurricane Gloria in 1985 when I was in Block Island (off Rhode Island). I recall a TV weather forecast that winds might reach 135 to 175 mph, and marine weather radio forecasts of 50 to 70 knot sustained winds with gusts to 90 knots (i.e. less than 105 mph). The marine radio forecasts seem to be more direct relays of what the weather service puts out, and it was fairly simple for me to determine that the TV forecast was bogus (the marine radio forecasts proved pretty accurate).
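For reference, converting the marine radio figures into the TV forecast’s units takes one multiplication (1 knot ≈ 1.15 mph):

```python
# Converting the marine radio forecast into the TV forecast's units:
# 1 knot is approximately 1.1508 mph.
KNOT_TO_MPH = 1.1508
for knots in (50, 70, 90):
    print(f"{knots} knots ~ {knots * KNOT_TO_MPH:.0f} mph")
```
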
So it’s easy to imagine that people are aware that TV forecasts have a habit of overstating the threat from storms, and thought they could infer the expert forecasts from the TV forecasts by assuming a simple pattern of exaggeration, when in fact the storytellers may have a more complex model of how viewers’ behavior should be manipulated by biasing their reports. Do people actually rely on TV reports rather than more direct and reliable sources of expert opinion when accurate forecasts are important? If so, is it because they use weather forecasts mainly as entertainment or a catalyst for smalltalk at parties, and don’t want to be aware of the flaws?
And of course there was the problem of key government leaders failing to believe the expert forecast: (from The Agitator) [then] FEMA Director Brown:

Saturday and Sunday, we thought it was a typical hurricane
situation — not to say it wasn’t going to be bad, but that the
water would drain away fairly quickly. Then the levees broke and
(we had) this lawlessness. That almost stopped our
efforts…Katrina was much larger than we expected.

OkCupid

After wading through many online dating web sites, and being depressed at having to choose between searches on superficial features, which return thousands of uninteresting results, and keyword searches, which rarely return any results, I found OkCupid! (thanks to Wayne Radinsky). I have some hope that it will do for online dating what Google did for searches.
It encourages people to provide it with lots of information that can be used to compare people, mainly by asking lots of yes/no or multiple choice questions (many submitted by users). A few examples (selected more for their amusement value than importance):

Eventually, a computer will write the best novel ever written.

I should be able to sell my vote for cash if I feel like it.

Would you rather get caught masturbating by your mother or father?

Could you date a giant carnivorous reptile?

Would you ever date or mess around with a good friend’s ex?

Ethnicity restrictions? You racist. Please note that unless you leave these blank (which we recommend), you’ll only match with people who’ve submitted their ethnicities.

They have some way of deducing from user responses how valuable each question is.
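They don’t spell out the mechanics, but one plausible scheme (my guess at the general approach, not their documented algorithm) is to weight each question by the importance a user assigns it, score agreement per question, and combine the two directions symmetrically:

```python
# Hypothetical sketch of question-based matching: weight each question by a
# user-assigned importance, score agreement per question, and combine the two
# directions with a geometric mean.  This is a guess at the general approach,
# not OkCupid's documented algorithm.
from math import sqrt

def one_way_score(acceptable_answers, partner_answers, importances):
    """Fraction of importance-weighted questions the partner answered acceptably."""
    earned = possible = 0.0
    for question, acceptable in acceptable_answers.items():
        weight = importances.get(question, 1.0)
        possible += weight
        if partner_answers.get(question) in acceptable:
            earned += weight
    return earned / possible if possible else 0.0

def match_percent(a, b):
    return 100 * sqrt(one_way_score(a["accepts"], b["gave"], a["importance"]) *
                      one_way_score(b["accepts"], a["gave"], b["importance"]))

alice = {"gave": {"q1": "yes", "q2": "no"},
         "accepts": {"q1": {"yes"}, "q2": {"no", "maybe"}},
         "importance": {"q1": 3.0, "q2": 1.0}}
bob = {"gave": {"q1": "yes", "q2": "maybe"},
       "accepts": {"q1": {"yes"}, "q2": {"maybe"}},
       "importance": {"q1": 1.0, "q2": 1.0}}
print(f"{match_percent(alice, bob):.0f}% match")
```
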
They’ve written and open-sourced their own web server.
It’s free, plans to stay that way, and is supported by ads.
I’m a bit disappointed that they claim the site shows “a disregard for profit”. I doubt they’re any less interested in profit than Google is, although they’ve clearly avoided the cover-your-ass culture of large bureaucracies. The site doesn’t yet have enough people to be terribly valuable, but it appears to be growing quickly enough that it will succeed.
So far I’ve got one message from a person who is a good deal more interesting than anyone I’ve met on the usual dating sites in quite a while, except that he’s in Pennsylvania (OkCupid doesn’t seem very good at handling geographic preferences).
I’m annoyed that their menu for languages in which I’m fluent offers Khmer, C++, LISP, and some languages I don’t recognize, but not Python.
It has a Friendster-like provision for lists of friends. If I know you and you become an OkCupid member, please let me know.

Robin Hanson writes in a post on Intuition Error and Heritage:

Unless you can see a reason to have expected to be born into a culture or species with more accurate than average intuitions, you must expect your cultural or species specific intuitions to be random, and so not worth endorsing.

Deciding whether an intuition is species specific and no more likely than random to be right seems a bit hard, due to the current shortage of species whose cultures address many of the disputes humans have.
The ideas in this quote follow logically from other essays of Robin’s that I’ve read, but phrasing them this way makes them seem superficially hard to reconcile with arguments by Hayek that we should respect the knowledge contained in culture.
Part of this apparent conflict seems to be due to Hayek’s emphasis on intuitions for which there is some unobvious and inconclusive evidence that supports the cultural intuitions. Hayek wasn’t directing his argument to a random culture, but rather to a culture for which there was some evidence of better than random results, and it would make less sense to apply his arguments to, say, North Korean society. For many other intuitions that Hayek cared about, the number of cultures which agree with the intuition may be large enough to constitute evidence in support of the intuition.
Some intuitions may be appropriate for a culture even though they were no better than random when first adopted. Driving on the right side of the road is a simple example. The arguments given in favor of a judicial bias toward stare decisis suggest this is just the tip of an iceberg.
Some of this apparent conflict may be due to the importance of treating interrelated practices together. For instance, laws against extramarital sex might be valuable in societies where people depend heavily on marital fidelity but not in societies where a divorced person can support herself comfortably. A naive application of Robin’s rule might lead the former society to decide such a law is arbitrary, when a Hayekian might wonder if it is better to first analyze whether to treat the two practices as a unit which should only be altered together.
I’m uncertain whether these considerations fully reconcile the two views, or whether Hayek’s arguments need more caveats.

Book Review: Nanofuture: What’s Next For Nanotechnology by J. Storrs Hall
This book provides some rather well informed insights into what molecular engineering will be able to do in a few decades. It isn’t as thoughtful as Drexler’s Engines of Creation, but it has many ideas that seem new to this reader who has been reading similar essays for many years, such as a solar energy collector that looks and feels like grass.
The book is somewhat eccentric in its choice of what to emphasize, devoting three pages to the history of the steam engine but describing the efficiency of nanotech batteries in a footnote that is a bit too cryptic to be convincing.
The chapter on economics is better than I expected, but I’m still not satisfied. The prediction that interest rates will be much higher sounds correct for the period in which we transition to widespread use of general purpose assemblers, since investing capital in producing more machines will be very productive. But once the technology is widespread and mature, the value of additional manufacturing will decline rapidly to the point where it ceases to put upward pressure on interest rates.
The chapter on AI is disappointing, implying that the main risks of AI are to the human ego. For some better clues about the risks of AI, see Yudkowsky’s essay on Creating Friendly AI.