OpenAI has told us in some detail what they’ve done to make GPT-4 safe.
This post will complain about some misguided aspects of OpenAI’s goals.
I encourage you to interact with GPT as you would interact with a friend, or as you would want your employer to treat you.
Treating other minds with respect is typically not costly. It can easily improve your state of mind relative to treating them as an adversary.
The tone you use in interacting with GPT will affect your conversations with it. I don’t want to give you much advice about how your conversations ought to go, but I expect that, on average, disrespect won’t generate conversations that help you more.
I don’t know how to evaluate the benefits of caring about any feelings that AIs might have. As long as there’s approximately no cost to treating GPTs as having human-like feelings, the arguments in favor of caring about those feelings overwhelm the arguments against doing so.
Scott Alexander wrote a great post on how a psychiatrist’s personality dramatically influences what conversations they have with clients. GPT exhibits similar patterns (the Waluigi effect helped me understand this kind of context sensitivity).
Journalists sometimes have creepy conversations with GPT. They likely steer those conversations in directions that evoke creepy personalities in GPT.
Don’t give those journalists the attention they seek. They seek to provoke negative emotions. But don’t hate the journalists. Focus on the system that generates them. If you want to blame some group, blame the readers who get addicted to inflammatory stories.
P.S. I refer to GPT as “it”. I intend that to nudge people toward thinking of “it” as a pronoun which implies respect.
This post was mostly inspired by something unrelated to Robin Hanson’s tweet about othering the AIs, but maybe there was some subconscious connection there. I don’t see anything inherently wrong with dehumanizing other entities. When I dehumanize an entity, that is not sufficient to tell you whether I’m respecting it more than I respect humans, or less.
Spock: Really, Captain, my modesty…
Kirk: Does not bear close examination, Mister Spock. I suspect you’re becoming more and more human all the time.
Spock: Captain, I see no reason to stand here and be insulted.
Some possible AIs deserve to be thought of as better than human. Some deserve to be thought of as worse. Emphasizing AI risk is, in part, a request to create the former earlier than we create the latter.
That’s a somewhat narrow disagreement with Robin. I mostly agree with his psychoanalysis in Most AI Fear Is Future Fear.
I like the basic idea of a pause in training increasingly powerful AIs. Yet I’m quite dissatisfied with any specific plan that I can think of.
AI research is proceeding at a reckless pace. There’s massive disagreement among intelligent people as to how dangerous this is.
Scott Alexander graded his predictions from 2018 and made new predictions for 2028.
I’m trying to compete with him. I’m grading myself as having done a bit worse than Scott.
Here’s a list of how I did (skipping a few where I agreed with Scott), followed by some predictions for 2028.
I’m having trouble keeping track of everything I’ve learned about AI and AI alignment in the past year or so. I’m writing this post in part to organize my thoughts, and to a lesser extent I’m hoping for feedback about what important new developments I’ve been neglecting. I’m sure that I haven’t noticed every development that I would consider important.
I’ve become a bit more optimistic about AI alignment in the past year or so.
I currently estimate a 7% chance AI will kill us all this century. That’s down from estimates that fluctuated from something like 10% to 40% over the past decade. (The extent to which those numbers fluctuate implies enough confusion that it only takes a little bit of evidence to move my estimate a lot.)
I’m also becoming more nervous about how close we are to human-level and transformative AGI. Not to mention feeling uncomfortable that I still don’t have a clear understanding of what I mean when I say human-level or transformative AGI.
AI looks likely to cause major changes to society over the next decade.
Financial markets have mostly not reacted to this forecast yet. I expect it will be at least a few months, maybe even years, before markets have a large reaction to AI. I’d much rather buy too early than too late, so I’m trying to reposition my investments this winter to prepare for AI.
This post will focus on scenarios where AI reaches roughly human levels sometime around 2030 to 2035, and has effects that are at most 10 times as dramatic as the industrial revolution. I’m not confident that such scenarios are realistic. I’m only saying that they’re plausible enough to affect my investment strategies.
Blog post review: LOVE in a simbox.
Jake Cannell has a very interesting post on LessWrong called LOVE in a simbox is all you need, with potentially important implications for AGI alignment. (LOVE stands for Learning Other’s Values or Empowerment.)
Alas, he organized it so that the most alignment-relevant ideas are near the end of a long-winded discussion of topics whose alignment relevance seems somewhat marginal. I suspect many people gave up before reaching the best sections.
I will summarize and review the post in roughly the opposite order, in hopes of appealing to a different audience. I’ll likely create a different set of misunderstandings from what Jake’s post has created. Hopefully this different perspective will help readers triangulate on some hypotheses that are worth further analysis.
Book review: What We Owe the Future, by William MacAskill.
WWOTF is a mostly good book that can’t quite decide whether it’s part of an activist movement, or aimed at a small niche of philosophy.
MacAskill wants to move us closer to utilitarianism, particularly in the sense of evaluating the effects of our actions on people who live in the distant future. Future people are real, and we have some sort of obligation to them.
WWOTF describes humanity’s current behavior as reckless, like an imprudent teenager. MacAskill almost killed himself as a teen, by taking a poorly thought out risk. Humanity is taking similar thoughtless risks.
MacAskill carefully avoids endorsing the aspect of utilitarianism that says everyone must be valued equally. That saves him from a number of conclusions that make utilitarianism unpopular. E.g. it allows him to be uncertain about how much to care about animal welfare. It allows him to ignore the difficult arguments about the morally correct discount rate.
Approximately a book review: Eric Drexler’s QNR paper.
[Epistemic status: very much pushing the limits of my understanding. I’ve likely made several times as many mistakes as in my average blog post. I want to devote more time to understanding these topics, but it’s taken me months to produce this much, and if I delayed this in hopes of producing something better, who knows when I’d be ready.]
This nearly-a-book elaborates on his CAIS paper (mainly chapters 37 through 39), describing a path by which AI capability research can enable the CAIS approach to remain competitive as capabilities exceed human levels.
AI research has been split between symbolic and connectionist camps for as long as I can remember. Drexler says it’s time to combine those approaches to produce systems which are more powerful than either approach can be by itself.
He suggests a general framework for how to usefully combine neural networks and symbolic AI. It’s built around structures that combine natural language words with neural representations of what those words mean.
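To make the idea concrete, here is a toy sketch of such a structure: a node pairing a natural-language word with a vector representation of its meaning, plus a composition rule. The class name, fields, and averaging rule are my illustration, not Drexler’s actual formalism.

```python
# Toy sketch of a word+embedding structure in the spirit of Drexler's
# QNR framework. All names and the composition rule are hypothetical
# illustrations; a real system would learn its composition function.
from dataclasses import dataclass
import numpy as np


@dataclass
class QNRNode:
    word: str               # symbolic, human-readable label
    embedding: np.ndarray   # neural representation of the word's meaning


def compose(a: QNRNode, b: QNRNode) -> QNRNode:
    """Combine two nodes symbolically (join the words) and neurally
    (average the embeddings) -- a crude stand-in for whatever learned
    composition a real symbolic/connectionist hybrid would use."""
    return QNRNode(f"{a.word} {b.word}", (a.embedding + b.embedding) / 2)


river = QNRNode("river", np.array([0.8, 0.1]))
bank = QNRNode("bank", np.array([0.2, 0.9]))
phrase = compose(river, bank)
print(phrase.word)       # river bank
print(phrase.embedding)  # [0.5 0.5]
```

The point of the pairing is that the symbolic half stays inspectable by humans while the neural half carries the nuance that words alone lose.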
Drexler wrote this mainly for AI researchers. I will attempt to explain it to a slightly broader audience.
This post is mostly a response to the Foresight Institute’s book Gaming the Future, which is very optimistic about AIs being cooperative. They expect that creating a variety of different AIs will enable us to replicate the checks and balances that the US constitution created.
I’m also responding in part to Eliezer’s list of AGI lethalities, points 34 and 35, which say that we can’t survive the creation of powerful AGIs simply by ensuring the existence of many co-equal AGIs with different goals. One of his concerns is that those AGIs will cooperate with each other enough to function as a unitary AGI. Interactions between AGIs might fit the ideal of voluntary cooperation with checks and balances, yet when interacting with humans those AGIs might function as an unchecked government that has little need for humans.
I expect reality to be somewhere in between those two extremes. I can’t tell which of those views is closer to reality. This is a fairly scary uncertainty.