AI 2027 portrays two well-thought-out scenarios for how AI is likely to impact the world toward the end of this decade.
I expect those scenarios will prove to be moderately wrong, but close enough to be scary. I also expect that few people will manage to make forecasts that are significantly more accurate.
Here are some scattered thoughts that came to mind while I read AI 2027.
The authors are fairly pessimistic. I see four key areas where their assumptions seem to lead them to see more danger than do more mainstream experts. They see:
- a relatively small capabilities lead being enough for a group to conquer the world
- alignment being more difficult
- deception being harder to detect
- AI companies being less careful than is necessary
I expect that the authors are appropriately concerned about roughly two of these assumptions, and a bit too pessimistic about the others. I'm hesitant to bet on which assumptions belong in which category.
They don’t focus much on justifying those assumptions. That’s likely wise, since prior debates on those topics have not been very productive. Instead, they’ve focused more on when various changes will happen.
This post will focus on aspects of the first two assumptions for which I expect further analysis to be relatively valuable.