2 comments on “Will an Overconfident AGI Mistakenly Expect to Conquer the World?”

  1. Seems like a scenario worth considering. What leads you to believe this claim, though: “The first AGIs seem likely to be somewhat weak at creating these causal models compared to other IQ-like capabilities”?

  2. To reply to “Egg Syntax” — I can’t speak for Peter, but the quoted statement seems plausible to me as well, because I view it as a reasonable default assumption: causal models that are harder to test are more likely to be inaccurate. (Note that this also applies to causal models held by humans.)
