At the recent AGI workshop, Michael Anissimov concisely summarized one of the reasons to worry about AI: the greatest risk is that there won’t be small risks leading up to it.
Book Review: The Singularity Is Near: When Humans Transcend Biology by Ray Kurzweil
Kurzweil does a good job of arguing that extrapolating trends such as Moore’s Law works better than most alternative forecasting methods, and he does a good job of describing the implications of those trends. But he is a bit long-winded, and tries to hedge his methodology by pointing to specific research results which he seems to think buttress his conclusions. He neither convinces me that he is good at distinguishing hype from value when analyzing current projects, nor that doing so would help with the longer-term forecasting that constitutes the important aspect of the book.
Given the title, I was slightly surprised that he predicts AIs will become powerful somewhat more gradually than I recall him suggesting previously (and a good deal more gradually than most Singularitarians expect). He offsets this by predicting more dramatic changes in the 22nd century than I had imagined could be extrapolated from existing trends.
His discussion of the practical importance of reversible computing is clearer than anything else I’ve read on this subject.
When he gets specific, large parts of what he says seem almost right, but there are quite a few details that are misleading enough that I want to quibble with them.
For instance (on page 244, talking about the world circa 2030): “The bulk of the additional energy needed is likely to come from new nanoscale solar, wind, and geothermal technologies.” Yet he says little to justify this, and most of what I know suggests that wind and geothermal have little hope of satisfying more than 1 or 2 percent of new energy demand.
His reference on page 55 to “the devastating effect that illegal file sharing has had on the music-recording industry” seems to say something undesirable about his perspective.
His comments on economists' thoughts about deflation are confused and irrelevant.
On page 92 he says “Is the problem that we are not running the evolutionary algorithms long enough? … This won’t work, however, because conventional genetic algorithms reach an asymptote in their level of performance, so running them for a longer period of time won’t help.” If “conventional” excludes genetic programming, then maybe his claim is plausible. But genetic programming originator John Koza claims his results keep improving when he uses more computing power.
His description of nanotech progress seems naive. (page 228): “Drexler’s dissertation … laid out the foundation and provided the road map still being followed today.” (page 234): “each aspect of Drexler’s conceptual designs has been validated”. I’ve been following this area pretty carefully, and I’m aware of some computer simulations which do a tiny fraction of what is needed, but if any lab research is being done that could be considered to follow Drexler’s road map, it’s a well-kept secret. Kurzweil then offsets his lack of documentation for those claims by going overboard about documenting his accurate claim that “no serious flaw in Drexler’s nanoassembler concept has been described”.
Kurzweil argues that self-replicating nanobots will sometimes be desirable. I find this poorly thought out. His reasons for wanting them could be satisfied by nanobots that replicate under the control of a responsible AI.
I’m bothered by his complacent attitude toward the risks of AI. He sometimes hints that he is concerned, but his suggestions for dealing with the risks don’t indicate that he has given much thought to the subject. He has a footnote that mentions Yudkowsky’s Guidelines on Friendly AI. The context could lead readers to think they are comparable to the Foresight Guidelines on Molecular Nanotechnology. Alas, Yudkowsky’s guidelines depend on concepts which are hard enough to understand that few researchers are likely to comprehend them, and the few who have tried disagree about their importance.
Kurzweil’s thoughts on the risk that the simulation we may be living in will be turned off are somewhat interesting, but less thoughtful than Robin Hanson’s essay on How To Live In A Simulation.
A couple of nice quotes from the book:
(page 210): “It’s mostly in your genes” is only true if you take the usual passive attitude toward health and aging.
(page 301): Sex has largely been separated from its biological function. … So why don’t we provide the same for … another activity that also provides both social intimacy and sensual pleasure – namely, eating?
Book Review: On Intelligence by Jeff Hawkins
This book presents strong arguments that prediction is a more important part of intelligence than most experts realize. It outlines a fairly simple set of general purpose rules that may describe some important aspects of how small groups of neurons interact to produce intelligent behavior. It provides a better theory of the role of the hippocampus than I’ve seen before.
I wouldn’t call this book a major breakthrough, but I expect that it will produce some nontrivial advances in the understanding of the human brain.
The most disturbing part of this book is the section on the risks of AI. He claims that AIs will just be tools, but he shows no sign of having given thought to any of the issues involved beyond deciding that an AI is unlikely to have human motives. That leaves a wide variety of other possible goal systems, many of which would be just as dangerous. It’s possible that he sees easy ways to ensure that an AI is always obedient, but there are many approaches to AI for which I don’t think this is possible (for instance, evolutionary programming looks like it would select for something resembling a survival instinct), and this book doesn’t clarify what goals Hawkins’ approach is likely to build into his software. It is easy to imagine that he would need to build in goals other than obedience in order to get his system to do any learning. If this is any indication of the care he is taking to ensure that his “tools” are safe, I hope he fails to produce intelligent software.
For more discussion of AI risks, see sl4.org. In particular, I have a description there of how one might go about safely implementing an obedient AI. At the time I was thinking of Pei Wang’s NARS as the best approach to AI, and with that approach it seems natural for an AI to have no goals that are inconsistent with obedience. Hawkins’ approach seems approximately as powerful as NARS, but more likely to tempt designers into building in goals other than obedience.
Book Review: Catastrophe: Risk And Response by Richard A. Posner
This book does a very good job of arguing that humans are doing an inadequate job of minimizing the expected harm associated with improbable but major disasters such as asteroid strikes and sudden climate changes. He provides a rather thorough and unbiased summary of civilization-threatening risks, and a good set of references to the relevant literature.
I am disappointed that he gave little attention to the risks of AI. Probably that is because his expertise in law and economics can do little to address what is largely an engineering problem, one unlikely to be solved by better laws.
I suspect he’s overly concerned about biodiversity loss. He tries to justify his concern by noting risks to our food chain, but those risks seem to depend on our food supply being less diverse than it actually is.
His solutions do little to fix the bad incentives which have prevented adequate preparations. The closest he comes to fixing them is his proposal for a center for catastrophic-risk assessment and response, which would presumably have some incentive to convince people of risks in order to justify its existence.
His criticisms of information markets (aka idea futures) ignore the best arguments on this subject. He attacks the straw man of using them to predict particular terrorist attacks, and ignores possibilities such as using them to predict whether invading Iraq would reduce or increase deaths due to terrorism over many years. And his claim that scientists need no monetary incentives naively ignores their bias to dismiss concerns about harm resulting from their research (bias which he notes elsewhere as a cause of recklessness). See Robin Hanson’s Idea Futures web pages for arguments suggesting that this is a major mistake on Posner’s part.
Rare Earth: Why Complex Life Is Uncommon in the Universe provides some fairly strong (and not well-known) arguments that animal life on earth has been very lucky, and that planetary surfaces are typically much more hostile to multicellular life than our experience leads us to expect.
The most convincing parts of the book deal with geological and astronomical phenomena that suggest that earth-like conditions are unstable, and that it would have been normal for animal life to have been wiped out by disasters such as asteroids, extreme temperatures, supernovae, etc.
The parts of the book that deal with biology and evolution are disappointing. The “enigma” of the Cambrian explosion seems to have been explained by Andrew Parker (see his book In the Blink of an Eye) in a way that undercuts Rare Earth’s use of it (dramatic changes of this nature seem very likely when eyes first evolve). This theory was apparently first published in a technical journal in 1998 (i.e. before Rare Earth).
They often assume that intelligence could only develop as it has in humans, even suggesting that it couldn’t evolve in the ocean, which is rather odd given how close the octopus is to qualifying. But the various arguments in the book are independent enough that the weak parts don’t have much effect on the rest of the arguments.
I was surprised that they never mentioned the Fermi Paradox, which I consider to be the strongest single argument for their position. Apparently they don’t give it much thought because they don’t expect technological growth to produce effects that encompass more than our planet and are visible at galactic distances.
Their concern over biodiversity seems rather misplaced. I can understand why people who overestimate mother nature’s benevolence think that preserving the status quo is a safe strategy for humanity, but it seems to me that anyone sharing Rare Earth’s belief that nature could wipe us out any time now should tend to prefer a strategy of putting more of our effort into creating technology that will allow us to survive natural disasters.
I am disappointed that they rarely attempt to quantify the range of probabilities they would consider reasonable for the risks they discuss.
Stephen Webb has written a book on roughly the same subject called Where Is Everybody? that is more carefully argued, but less entertaining, than Rare Earth.