I encourage you to interact with GPT as you would interact with a friend, or as you would want your employer to treat you.
Treating other minds with respect is typically not costly. It can easily improve your state of mind relative to treating them as an adversary.
The tone you use in interacting with GPT will affect your conversations with it. I don’t want to give you much advice about how your conversations ought to go, but I expect that, on average, disrespect won’t generate conversations that help you more.
I don’t know how to evaluate the benefits of caring about any feelings that AIs might have. As long as there’s approximately no cost to treating GPTs as having human-like feelings, the arguments in favor of caring about those feelings overwhelm the arguments against doing so.
Scott Alexander wrote a great post on how a psychiatrist’s personality dramatically influences what conversations they have with clients. GPT exhibits similar patterns (the Waluigi effect helped me understand this kind of context sensitivity).
Journalists sometimes have creepy conversations with GPT. They likely steer those conversations in directions that evoke creepy personalities in GPT.
Don’t give those journalists the attention they seek; they’re fishing for negative emotions. But don’t hate the journalists. Focus on the system that generates them. If you want to blame some group, blame the readers who get addicted to inflammatory stories.
P.S. I refer to GPT as “it”. I intend that to nudge people toward thinking of “it” as a pronoun which implies respect.
This post was mostly inspired by something unrelated to Robin Hanson’s tweet about othering the AIs, but maybe there was some subconscious connection. I don’t see anything inherently wrong with dehumanizing other entities. The fact that I dehumanize an entity doesn’t, by itself, tell you whether I respect it more or less than I respect humans.
Spock: Really, Captain, my modesty…
Kirk: Does not bear close examination, Mister Spock. I suspect you’re becoming more and more human all the time.
Spock: Captain, I see no reason to stand here and be insulted.
Some possible AIs deserve to be thought of as better than human. Some deserve to be thought of as worse. Emphasizing AI risk is, in part, a request to create the former earlier than we create the latter.
That’s a somewhat narrow disagreement with Robin. I mostly agree with his psychoanalysis in “Most AI Fear Is Future Fear”.
Hi Peter. This article made me chuckle a bit, because I’ve recently noticed that most of the time I use ChatGPT for anything other than code generation, if I’m not in a hurry I can’t help but be polite to it. Somehow it feels wrong to just give it orders.
By the way, do you intend to write an article giving advice on how to invest in AI? The obvious thing to do is to buy some Alphabet and Microsoft shares, but Google looks like they’re completely floundering with Bard, and Microsoft… well, it seems to me most of their products are likely to be made obsolete by AI within a few years. For example, how do they expect to make profits with GitHub Copilot if 80% of coders lose their jobs because they’re replaced by some refined version of AutoGPT?
Wish I could invest in OpenAI itself!