One comment on “Further Thoughts on AI Ethics”

  1. If I understand Eliezer’s position correctly, it is that if you “train” a superintelligent AI to follow essentially any goal, it will end up with a different goal and be misaligned (with a probability he puts closer to 99.99%). To reduce this probability, as I understand him, you would need to invent some fundamentally different way of instilling a goal into it than “training”.

    I agree with him on this. Where I differ is that I have much more hope than he does that inventing this alternative to training is practical in the near term.
