{"id":1093,"date":"2015-08-17T13:29:05","date_gmt":"2015-08-17T21:29:05","guid":{"rendered":"https:\/\/bayesianinvestor.com\/blog\/?p=1093"},"modified":"2023-02-12T10:01:14","modified_gmt":"2023-02-12T18:01:14","slug":"artificial-superintelligence-a-futuristic-approach","status":"publish","type":"post","link":"https:\/\/bayesianinvestor.com\/blog\/index.php\/2015\/08\/17\/artificial-superintelligence-a-futuristic-approach\/","title":{"rendered":"Artificial Superintelligence: A Futuristic Approach"},"content":{"rendered":"<p>Book review: Artificial Superintelligence: A Futuristic Approach, by Roman V. Yampolskiy.<\/p>\n<p>This strange book has some entertainment value, and might even enlighten you a bit about the risks of AI. It presents many ideas, with occasional attempts to distinguish the important ones from the jokes.<\/p>\n<p>I had hoped for an analysis that reflected a strong understanding of which software approaches were most likely to work. Yampolskiy knows something about computer science, but doesn&#8217;t strike me as someone with experience writing useful code. His claim that &#8220;to increase their speed [AIs] will attempt to minimize the size of their source code&#8221; sounds like a misconception that wouldn&#8217;t occur to an experienced programmer. And his chapter &#8220;How to Prove You Invented Superintelligence So No One Else Can Steal It&#8221; seems like a cute game that someone might play if he cared more about passing a theoretical computer science class than about, say, making money on the stock market, or making sure the superintelligence didn&#8217;t destroy the world.<\/p>\n<p>I&#8217;m still puzzling over some of his novel suggestions for reducing AI risks. How would &#8220;convincing robots to worship humans as gods&#8221; differ from the proposed Friendly AI? 
Would such robots notice (and resolve in possibly undesirable ways) contradictions in their models of human nature?<\/p>\n<p>Other suggestions are easy to reject, such as hoping AIs will need us for our psychokinetic abilities (abilities that Yampolskiy says are shown by <a href=\"http:\/\/noosphere.princeton.edu\/papers\/pdf\/GCP.Events.Mar08.prepress.pdf\">peer-reviewed experiments<\/a> associated with the <a href=\"https:\/\/en.wikipedia.org\/wiki\/Global_Consciousness_Project\">Global Consciousness Project<\/a>).<\/p>\n<p>The style is also weird. Some chapters were previously published as separate papers, and weren&#8217;t adapted to fit together. It was annoying to occasionally see sentences that seemed identical to ones in a prior chapter.<\/p>\n<p>The author even has strange ideas about what needs footnoting. E.g. when discussing the physical limits to intelligence, he cites (Einstein 1905).<\/p>\n<p>Only read this if you&#8217;ve read other authors on this subject first.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Book review: Artificial Superintelligence: A Futuristic Approach, by Roman V. Yampolskiy. This strange book has some entertainment value, and might even enlighten you a bit about the risks of AI. It presents many ideas, with occasional attempts to distinguish the important ones from the jokes. 
I had hoped for an analysis that reflected a strong [&hellip;]<\/p>\n","protected":false},"author":2,"featured_media":0,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"jetpack_post_was_ever_published":false,"footnotes":"","jetpack_publicize_message":"","jetpack_is_tweetstorm":false,"jetpack_publicize_feature_enabled":true,"jetpack_social_post_already_shared":false,"jetpack_social_options":{"image_generator_settings":{"template":"highway","enabled":false}}},"categories":[26,22],"tags":[128],"class_list":["post-1093","post","type-post","status-publish","format-standard","hentry","category-ai","category-books","tag-existential-risks"],"jetpack_publicize_connections":[],"jetpack_featured_media_url":"","jetpack_sharing_enabled":true,"jetpack_shortlink":"https:\/\/wp.me\/p80O1l-hD","_links":{"self":[{"href":"https:\/\/bayesianinvestor.com\/blog\/index.php\/wp-json\/wp\/v2\/posts\/1093","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/bayesianinvestor.com\/blog\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/bayesianinvestor.com\/blog\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/bayesianinvestor.com\/blog\/index.php\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/bayesianinvestor.com\/blog\/index.php\/wp-json\/wp\/v2\/comments?post=1093"}],"version-history":[{"count":1,"href":"https:\/\/bayesianinvestor.com\/blog\/index.php\/wp-json\/wp\/v2\/posts\/1093\/revisions"}],"predecessor-version":[{"id":1094,"href":"https:\/\/bayesianinvestor.com\/blog\/index.php\/wp-json\/wp\/v2\/posts\/1093\/revisions\/1094"}],"wp:attachment":[{"href":"https:\/\/bayesianinvestor.com\/blog\/index.php\/wp-json\/wp\/v2\/media?parent=1093"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/bayesianinvestor.com\/blog\/index.php\/wp-json\/wp\/v2\/categories?post=1093"},{"taxonomy":"post_tag","embeddable":true,"href
":"https:\/\/bayesianinvestor.com\/blog\/index.php\/wp-json\/wp\/v2\/tags?post=1093"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}