13 comments on “Superintelligence”


  1. If an oracle AI can’t produce a result that humans can analyze well enough to decide (without trusting the AI) that it’s safe, why would we expect other approaches (e.g. humans writing the equivalent seed AI directly) to be more feasible?

    One argument is that, in the case of checking a proof of its own safety produced by an oracle AI, we have to worry about intentionally hidden flaws (hidden by a potentially much smarter being), whereas in the case of checking a human-produced proof of the oracle’s safety, we only have to worry about accidental flaws (errors).

  2. Pingback: AGI Foom | Bayesian Investor Blog

  3. Pingback: Assorted links

  4. I think that Bostrom considers most of the problems he discusses to remain relevant in cases with many AGIs, as do I. I think a singleton is not a very likely outcome, but I have little trouble finding common ground with him on many of these issues (though there are also some disagreements driven by the distinction).

    With respect to risks from emulations, I think Bostrom considers human-like but radically inhuman AI quite likely. I mostly agree. Human brains don’t look that brittle with respect to various parameter changes (you can muck with a brain a shocking amount before breaking it), and we know that in some sense they aren’t that brittle, since evolution can improve them relatively quickly. So even in the most brittle case, we should probably expect an accumulation of small improvements found by brute-force search (which may or may not preserve human values). On top of that, I think there is a pretty good chance, certainly not much less than 50-50, that working examples of human brains in silico would relatively quickly allow us to design useful algorithms that were not human but leveraged the same useful principles. I’m less confident about whether this is better or worse than the status quo.

    John: it seems that both an AI and a credit-seeking human researcher attempt to produce a maximally compelling argument, with correctness enforced by our ability to notice errors. That is, I don’t think that shared human values are an important aspect of ensuring the correctness of most academic work. Given that, why be more concerned about an AI’s proposals? One reason is that it’s much smarter than you. But (1) we have the ability to throttle how smart an AI is: if humans can solve problem X, then presumably an AI of human-level intelligence can also solve problem X, and in that case its capacity for manipulation is no greater than usual, while we retain other advantages; and (2) we can leverage machine intelligence to help evaluate proposals as well as to craft them (for example, see my post here). Overall this situation looks much better to me than our current one.

  5. Paul, what kind of mucking with brains are you talking about? The kinds of changes caused by damage (e.g. Phineas Gage) or drugs don’t produce results I’d consider inhuman, except maybe when they significantly impair the person’s ability to communicate. The evidence from drugs suggests that the few that improve productivity (caffeine, modafinil) don’t come close to changing a person’s humanity. Generally, the more a drug changes personality, the more it reduces productivity, which suggests we’re close enough to a local optimum that improvements to uploaded minds will be more like increased speed or better input/output than value-altering changes. What little I can infer from artificial neural nets doesn’t cause me to change that estimate much.

    Are you using a much narrower concept of human than I am?

    I’m less confident in my ability to evaluate the effects of knowledge from emulations on approaches such as Jeff Hawkins’.

  6. Drugs and other interventions do have some effects on personality, even if they are muted, and if you were able to experiment until you found an improvement and then repeat that process dozens of times, I would expect to end up with something relatively far from human. We may disagree about how different, empirically, people are on drugs. I think the important notion of “closeness” is what people would do in the long term if they acquired resources (very roughly), and I do think that this could be substantially modified over not too many steps of a size similar to a lobotomy or stimulant use (though obviously it would take many more steps of the latter kind, since they are much smaller steps).

    I don’t know how hard it would be to avoid the relevant kinds of drift if you wanted to; my guess is that it is non-trivial but also not that hard, and probably wouldn’t involve huge productivity losses. So I probably disagree with Bostrom on this front.

    I would guess that continued evolution would yield creatures as different again from humans as humans are from chimps, and that even with a crude understanding of development it would be possible to carry out a similar process with brain emulations by experimenting. To the extent that I’m not concerned about this kind of change, I’m similarly unconcerned about chimps.

    It also seems like normal processes of indoctrination and selection can lead to fairly large changes in humans (e.g. when aimed at changing their motivations), and again those processes could potentially occur radically faster for emulations, even with a relatively crude understanding of psychology.

  7. Indoctrination is effective at changing tribal affiliation, which has important effects on what people would do if they had lots more resources. That doesn’t seem much like making them less human. I expect the important changes in emulation motives to be like that.

  8. Pingback: The AI Safety Landscape | Bayesian Investor Blog

  9. Pingback: The Measure of All Minds | Bayesian Investor Blog

  10. Pingback: Artificial Intelligence Safety and Security | Bayesian Investor Blog

  11. Pingback: Drexler on AI Risk | Bayesian Investor Blog

  12. Pingback: Human Compatible | Bayesian Investor Blog

  13. Pingback: Deep Utopia | Bayesian Investor Blog

Comments are closed.