
Artificial Superintelligence: A Futuristic Approach

Posted by Peter on August 17, 2015
Posted in: Artificial Intelligence, Book Reviews. Tagged: existential risks.

Book review: Artificial Superintelligence: A Futuristic Approach, by Roman V. Yampolskiy.

This strange book has some entertainment value, and might even enlighten you a bit about the risks of AI. It presents many ideas, with occasional attempts to distinguish the important ones from the jokes.

I had hoped for an analysis that reflected a strong understanding of which software approaches were most likely to work. Yampolskiy knows something about computer science, but doesn’t strike me as someone with experience writing useful code. His claim that “to increase their speed [AIs] will attempt to minimize the size of their source code” sounds like a misconception that wouldn’t occur to an experienced programmer: source code size has little bearing on execution speed, and common optimizations make code longer, not shorter. And his chapter “How to Prove You Invented Superintelligence So No One Else Can Steal It” reads like a cute game that someone might play if he cared more about passing a theoretical computer science class than about, say, making money on the stock market, or making sure the superintelligence didn’t destroy the world.
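To illustrate the point (this is my own minimal sketch, not anything from the book): a memoized Fibonacci function has strictly more source code than the naive version, yet runs exponentially faster.

    import functools
    import timeit

    def fib_naive(n):
        # The shortest natural implementation: exponential time.
        return n if n < 2 else fib_naive(n - 1) + fib_naive(n - 2)

    @functools.lru_cache(maxsize=None)
    def fib_memo(n):
        # More source code (an extra decorator line), but each
        # value is computed once, so this runs in linear time.
        return n if n < 2 else fib_memo(n - 1) + fib_memo(n - 2)

    print(timeit.timeit(lambda: fib_naive(25), number=10))  # slow
    print(timeit.timeit(lambda: fib_memo(25), number=10))   # fast

The cached version does more bookkeeping per call, and its source is longer, yet it turns an exponential-time computation into a linear-time one. Shrinking source code is simply not how programs get faster.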

I’m still puzzling over some of his novel suggestions for reducing AI risks. How would “convincing robots to worship humans as gods” differ from the proposed Friendly AI? Would such robots notice (and resolve in possibly undesirable ways) contradictions in their models of human nature?

Other suggestions are easy to reject, such as hoping AIs will need us for our psychokinetic abilities (abilities that Yampolskiy says are shown by peer-reviewed experiments associated with the Global Consciousness Project).

The style is also weird. Some chapters were previously published as separate papers, and weren’t adapted to fit together, so it was annoying to occasionally run across sentences that seemed identical to ones in a prior chapter.

The author even has strange ideas about what needs footnoting. E.g., when discussing the physical limits of intelligence, he cites (Einstein 1905).

Read this only if you’ve already read other authors on this subject.
