[I mostly wrote this to clarify my own thoughts; I'm unsure whether it will be valuable to readers.]
I expect that within a decade, AI will be able to do 90% of current human jobs. I don’t mean that 90% of humans will be obsolete. I mean that the average worker could delegate 90% of their tasks to an AGI.
I feel confused about what this implies for the kind of long-term planning and strategizing that would enable a poorly aligned AI to cause large-scale harm.
Is the ability to achieve long-term goals hard for an AI to develop?