
Artificial Superintelligence: A Futuristic Approach

Posted by Peter on August 17, 2015
Posted in: Artificial Intelligence, Book Reviews. Tagged: existential risks.

Book review: Artificial Superintelligence: A Futuristic Approach, by Roman V. Yampolskiy.

This strange book has some entertainment value, and might even enlighten you a bit about the risks of AI. It presents many ideas, with occasional attempts to distinguish the important ones from the jokes.

I had hoped for an analysis that reflected a strong understanding of which software approaches were most likely to work. Yampolskiy knows something about computer science, but doesn’t strike me as someone with experience writing useful code. His claim that “to increase their speed [AIs] will attempt to minimize the size of their source code” sounds like a misconception that wouldn’t occur to an experienced programmer. And his chapter “How to Prove You Invented Superintelligence So No One Else Can Steal It” seems like a cute game that someone might play if they cared more about passing a theoretical computer science class than about, say, making money on the stock market, or making sure the superintelligence didn’t destroy the world.
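To see why the source-code claim rings false, here’s a toy comparison (my example, nothing from the book): the longer of these two Python functions is dramatically faster, because speed comes from algorithmic choices like caching, not from brevity.

    # A minimal sketch, not from the book: two ways to compute
    # Fibonacci numbers. The shorter source runs in exponential time;
    # the longer, memoized version runs in linear time.
    import time
    from functools import lru_cache

    def fib_short(n):
        # Shortest natural recursive definition: exponential time.
        return n if n < 2 else fib_short(n - 1) + fib_short(n - 2)

    @lru_cache(maxsize=None)
    def fib_long(n):
        # More source code (decorator, cache machinery), linear time.
        return n if n < 2 else fib_long(n - 1) + fib_long(n - 2)

    for f in (fib_short, fib_long):
        start = time.perf_counter()
        f(30)
        print(f.__name__, "took", round(time.perf_counter() - start, 4), "s")

Most real speedups (loop unrolling, inlining, lookup tables, caches) make code longer, not shorter, which is why the claim wouldn’t occur to an experienced programmer.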

I’m still puzzling over some of his novel suggestions for reducing AI risks. How would “convincing robots to worship humans as gods” differ in practice from proposals for Friendly AI? Would such robots notice (and resolve, possibly in undesirable ways) contradictions in their models of human nature?

Other suggestions are easy to reject, such as hoping AIs will need us for our psychokinetic abilities (abilities that Yampolskiy says are shown by peer-reviewed experiments associated with the Global Consciousness Project).

The style is also weird. Some chapters were previously published as separate papers, and weren’t adapted to fit together. It was annoying to occasionally see sentences that seemed identical to ones in a prior chapter.

The author even has strange ideas about what needs footnoting. E.g., when discussing the physical limits to intelligence, he cites (Einstein 1905).

Only read this if you’ve read other authors on this subject first.
