Artificial Intelligence

I was somewhat disappointed by the latest Accelerating Change Conference. It might have been great for people who had never been to that kind of conference before, but it didn't offer enough novelty to be terribly valuable to those of us who attended the first one. Here are a few disorganized tidbits I took away from it.
Bruno Olshausen described our understanding of the neuron as pre-Newtonian, and said a single neuron might be as complex as a Pentium.
Joichi Ito convinced me that Wikipedia has a wider range of uses than my stereotype of it as a dictionary/encyclopedia suggested. For example, its entry on Katrina seems to be a better summary of the news than what I can get via the traditional news media.
Cory Ondrejka pointed out the negative correlation between the availability of violent video games and some broad measure of U.S. crime. He hinted this might say something about causation, but reminded people of the appropriate skepticism by noting the correlation between the decline in pirates and global warming.
Someone reported that Second Life is growing at an impressive pace. I've tried it a little over a somewhat flaky wireless connection and wasn't too excited; I'll try to get my iBook connected to my DSL line and see whether a more reliable connection makes it nicer.
Tom Malone talked about how declining communications costs first enabled the creation of large companies with centralized hierarchies and are now decentralizing companies. His view of eBay was interesting: he pointed out that it could be considered a retailer with one of the largest numbers of employees, except that it has outsourced most of those employees (i.e. the people who make a living selling through eBay). He also mentioned that Intel has some internal markets for resources such as manufacturing capacity.
Daniel Amen criticized modern psychiatry for failing to look at the brain for signs of physical damage. He provided strong anecdotal evidence that the brain imaging services he sells can sometimes tell people how to fix mental problems that standard psychiatry can't diagnose, but left plenty of doubt as to whether his successes are frequent enough to justify his fees.
T. Colin Campbell described some evidence that eating animal protein is unhealthy. He didn’t convince me that he was a very reliable source of information, but his evidence against casein (a milk protein) sounded fairly strong.
One odd comment from Robin Raskin (amidst an annoying amount of thoughtless sensationalism) was that kids don't use email anymore: they send only about two emails per day, having switched to IM. The idea that sending two emails per day amounts to abandoning email makes me wonder to what extent I'm out of touch with modern communication habits.
An amusing joke, attributed to Eric Drexler:
Q: Why did Douglas Hofstadter cross the road?
A: To make this joke possible.

Book Review: The Singularity Is Near: When Humans Transcend Biology by Ray Kurzweil
Kurzweil does a good job of arguing that extrapolating trends such as Moore’s Law works better than most alternative forecasting methods, and he does a good job of describing the implications of those trends. But he is a bit long-winded, and tries to hedge his methodology by pointing to specific research results which he seems to think buttress his conclusions. He neither convinces me that he is good at distinguishing hype from value when analyzing current projects, nor that doing so would help with the longer-term forecasting that constitutes the important aspect of the book.
Given the title, I was slightly surprised that he predicts AIs will become powerful slightly more gradually than I recall him suggesting previously (which is still a good deal more gradual than most Singularitarians expect). He offsets this by predicting more dramatic changes in the 22nd century than I had imagined could be extrapolated from existing trends.
His discussion of the practical importance of reversible computing is clearer than anything else I’ve read on this subject.
When he gets specific, large parts of what he says seem almost right, but there are quite a few details that are misleading enough that I want to quibble with them.
For instance (on page 244, talking about the world circa 2030): “The bulk of the additional energy needed is likely to come from new nanoscale solar, wind, and geothermal technologies.” Yet he says little to justify this, and most of what I know suggests that wind and geothermal have little hope of satisfying more than 1 or 2 percent of new energy demand.
His reference on page 55 to “the devastating effect that illegal file sharing has had on the music-recording industry” seems to say something undesirable about his perspective.
His comments on economists' thoughts about deflation are confused and irrelevant.
On page 92 he says “Is the problem that we are not running the evolutionary algorithms long enough? … This won’t work, however, because conventional genetic algorithms reach an asymptote in their level of performance, so running them for a longer period of time won’t help.” If “conventional” excludes genetic programming, then maybe his claim is plausible. But genetic programming originator John Koza claims his results keep improving when he uses more computing power.
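To make the asymptote claim concrete, here is a minimal toy sketch (my own illustration, not from the book or from Koza's work) of the kind of conventional genetic algorithm Kurzweil presumably means: it evolves fixed-length bit strings, so its best fitness can only climb until the ceiling set by the representation is reached, after which more generations buy nothing. Genetic programming evolves variable-size programs and has no comparable fixed ceiling.

import random

# Toy conventional GA: fixed-length bit strings, OneMax fitness.
# The attainable fitness is capped by the representation, so the
# best-fitness curve flattens out no matter how long we run it.
GENOME_LEN = 40
POP_SIZE = 60
GENERATIONS = 200
MUTATION_RATE = 0.02

def fitness(genome):
    # OneMax: count the 1 bits; a stand-in for any fitness function
    # whose maximum is fixed by the chosen representation.
    return sum(genome)

def mutate(genome):
    return [bit ^ (random.random() < MUTATION_RATE) for bit in genome]

def crossover(a, b):
    point = random.randrange(1, GENOME_LEN)
    return a[:point] + b[point:]

def tournament(population, k=3):
    return max(random.sample(population, k), key=fitness)

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
              for _ in range(POP_SIZE)]
for generation in range(GENERATIONS):
    population = [mutate(crossover(tournament(population), tournament(population)))
                  for _ in range(POP_SIZE)]
    if generation % 20 == 0:
        best = max(fitness(g) for g in population)
        print(f"generation {generation:3d}: best fitness {best}/{GENOME_LEN}")

Whether Koza's genetic programming results really keep escaping this kind of plateau when given more computing power is exactly the empirical question Kurzweil glosses over.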
His description of nanotech progress seems naive. (page 228) “Drexler’s dissertation … laid out the foundation and provided the road map still being followed today.” (page 234): “each aspect of Drexler’s conceptual designs has been validated”. I’ve been following this area pretty carefully, and I’m aware of some computer simulations which do a tiny fraction of what is needed, but if any lab research is being done that could be considered to follow Drexler’s road map, it’s a well-kept secret. Kurzweil then offsets his lack of documentation for those claims by going overboard in documenting his accurate claim that “no serious flaw in Drexler’s nanoassembler concept has been described”.
Kurzweil argues that self-replicating nanobots will sometimes be desirable. I find this poorly thought out. His reasons for wanting them could be satisfied by nanobots that replicate under the control of a responsible AI.
I’m bothered by his complacent attitude toward the risks of AI. He sometimes hints that he is concerned, but his suggestions for dealing with the risks don’t indicate that he has given much thought to the subject. He has a footnote that mentions Yudkowsky’s Guidelines on Friendly AI. The context could lead readers to think they are comparable to the Foresight Guidelines on Molecular Nanotechnology. Alas, Yudkowsky’s guidelines depend on concepts which are hard enough to understand that few researchers are likely to comprehend them, and the few who have tried disagree about their importance.
Kurzweil’s thoughts on the risk that the simulation we may be living in will be turned off are somewhat interesting, but less thoughtful than Robin Hanson’s essay on How To Live In A Simulation.
A couple of nice quotes from the book:
(page 210): “It’s mostly in your genes” is only true if you take the usual passive attitude toward health and aging.
(page 301): Sex has largely been separated from its biological function. … So why don’t we provide the same for … another activity that also provides both social intimacy and sensual pleasure – namely, eating?

Book Review: On Intelligence by Jeff Hawkins

This book presents strong arguments that prediction is a more important part of intelligence than most experts realize. It outlines a fairly simple set of general purpose rules that may describe some important aspects of how small groups of neurons interact to produce intelligent behavior. It provides a better theory of the role of the hippocampus than I’ve seen before.
I wouldn’t call this book a major breakthrough, but I expect that it will produce some nontrivial advances in the understanding of the human brain.
The most disturbing part of this book is the section on the risks of AI. He claims that AIs will just be tools, but he shows no sign of having thought about the issues involved beyond deciding that an AI is unlikely to have human motives. That still leaves a wide variety of other possible goal systems, many of which would be just as dangerous. It’s possible that he sees easy ways to ensure that an AI is always obedient, but there are many approaches to AI for which I don’t think this is possible (for instance, evolutionary programming looks like it would select for something resembling a survival instinct), and this book doesn’t clarify what goals Hawkins’ approach is likely to build into his software. It is easy to imagine that he would need to build in goals other than obedience in order to get his system to do any learning. If this is any indication of the care he is taking to ensure that his “tools” are safe, I hope he fails to produce intelligent software.
For more discussion of AI risks, see sl4.org. In particular, I have a description there of how one might go about safely implementing an obedient AI. At the time I was thinking of Pei Wang’s NARS as the best approach to AI, and with that approach it seems natural for an AI to have no goals that are inconsistent with obedience. Hawkins’ approach seems approximately as powerful as NARS, but more likely to tempt designers into building in goals other than obedience.

Book Review: What is Thought? by Eric Baum

The first half of this book is an overview of the field of artificial intelligence that might be one of the best available introductions for people new to the subject, but it seemed fairly slow and only mildly interesting to me.

The parts of the book that are excellent for both amateurs and experts are chapters 11 through 13, dealing with how human intelligence evolved.

He presents strong, though not conclusive, arguments that the evolution of language did not involve dramatic new modes of thought (except insofar as improved communication improved learning), and that small catalysts created by humans might well be enough to spark the evolution of human-like language in other apes.

His recasting of the nature-versus-nurture debate in terms of biases that guide learning is likely to prove more resistant to the distortions of ideologues than more conventional framings (e.g. Pinker’s).

His arguments have important implications for how AI will progress. He convinced me that progress will be less sudden than I previously expected, by arguing persuasively that truly general-purpose learning machines won’t work, and that much of intelligence involves using large quantities of data about the real world to choose good biases with which to guide learning.
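As a rough illustration of that last point (my own toy example, not Baum's), a learner whose bias matches the structure of the world can generalize from a handful of examples, while a nearly unconstrained learner fit to the same data usually cannot:

import numpy as np

# Toy illustration: fit 8 noisy samples from a roughly linear "world"
# with a strongly biased model (a line) and a weakly biased one
# (a degree-7 polynomial), then compare how each generalizes.
rng = np.random.default_rng(0)
x_train = rng.uniform(-1, 1, 8)
y_train = 2.0 * x_train + rng.normal(0.0, 0.1, 8)

x_test = np.linspace(-1, 1, 200)
y_test = 2.0 * x_test

biased_fit = np.polyfit(x_train, y_train, deg=1)    # assumes the world is linear
flexible_fit = np.polyfit(x_train, y_train, deg=7)  # assumes almost nothing

for label, coeffs in [("biased (linear)", biased_fit),
                      ("flexible (degree 7)", flexible_fit)]:
    mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"{label:20s} test error: {mse:.4f}")

The biased learner typically does far better here precisely because its bias encodes real knowledge about this toy world; Baum's point, as I read him, is that evolution has already done an enormous amount of that bias-selection for us.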