One comment on “Human Compatible”

  1. > I’m not too clear how standard that model is. It’s not like there’s a consensus of experts who are promoting it as the primary way to think of AI. It’s more like people find the model to be a simple way to think about goals when they’re being fairly abstract. Few people seem to be defending the standard model against Russell’s criticism (and it’s unclear whether Russell is claiming they are doing so).

    It’s not that AI researchers are saying “clearly we should be writing down an objective function that captures our goal with certainty”. It’s that if you look at the actual algorithms that the field of AI produces, nearly all of them assume the existence of some kind of specification that says what the goal is, because that is just how AI research is done. There wasn’t a deliberate decision to use this “standard model”, but given that nearly all the work produced does fit this model, it seems pretty reasonable to call it “standard”.

    This is not specific to deep learning — it also applies to traditional AI algorithms like search, constraint satisfaction, logic, reinforcement learning, etc. The one exception I know of is the field of human-robot interaction, which has grappled with the problem that objectives are hard to write down.
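    To make the point concrete, here is a minimal illustrative sketch (not from the comment itself): a generic hill-climbing search, like most AI algorithms, is parameterized by an `objective` that it treats as a given, fixed specification. The function names and the toy objective are hypothetical, chosen only to show where the specification enters the algorithm.

    ```python
    def hill_climb(start, neighbors, objective, max_steps=100):
        """Greedy search: repeatedly move to the best-scoring neighbor.

        The `objective` argument is the "specification" in the standard
        model: the algorithm optimizes it as given, and has no notion of
        whether it actually captures what the designer wanted.
        """
        current = start
        for _ in range(max_steps):
            candidates = neighbors(current)
            if not candidates:
                break
            best = max(candidates, key=objective)
            if objective(best) <= objective(current):
                break  # local optimum under the supplied objective
            current = best
        return current

    # Hypothetical usage: maximize f(x) = -(x - 3)^2 over the integers.
    result = hill_climb(
        start=0,
        neighbors=lambda x: [x - 1, x + 1],
        objective=lambda x: -(x - 3) ** 2,
    )
    # The search optimizes whatever objective it is handed, right or wrong.
    ```

    The same shape recurs across search, constraint satisfaction, and reinforcement learning: the goal specification (objective, constraints, reward) is an input to the algorithm, not something the algorithm is uncertain about.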
