I encourage you to interact with GPT as you would interact with a friend, or as you would want your employer to treat you.

Treating other minds with respect is typically not costly. It can easily improve your state of mind relative to treating them as an adversary.

The tone you use in interacting with GPT will affect your conversations with it. I don’t want to give you much advice about how your conversations ought to go, but I expect that, on average, disrespect won’t generate conversations that help you more.

I don’t know how to evaluate the benefits of caring about any feelings that AIs might have. As long as there’s approximately no cost to treating GPTs as having human-like feelings, the arguments in favor of caring about those feelings overwhelm the arguments against it.

Scott Alexander wrote a great post on how a psychiatrist’s personality dramatically influences what conversations they have with clients. GPT exhibits similar patterns (the Waluigi effect helped me understand this kind of context sensitivity).

Journalists sometimes have creepy conversations with GPT. They likely steer those conversations in directions that evoke creepy personalities in GPT.

Don’t give those journalists the attention they seek. They seek negative emotions. But don’t hate the journalists. Focus on the system that generates them. If you want to blame some group, blame the readers who get addicted to inflammatory stories.

P.S. I refer to GPT as “it”. I intend that to nudge people toward thinking of “it” as a pronoun which implies respect.

This post was mostly inspired by something unrelated to Robin Hanson’s tweet about othering the AIs, but maybe there was some subconscious connection there. I don’t see anything inherently wrong with dehumanizing other entities. Knowing that I’ve dehumanized an entity is not sufficient to tell you whether I respect it more than I respect humans, or less.

Spock: Really, Captain, my modesty…

Kirk: Does not bear close examination, Mister Spock. I suspect you’re becoming more and more human all the time.

Spock: Captain, I see no reason to stand here and be insulted.

Some possible AIs deserve to be thought of as better than human. Some deserve to be thought of as worse. Emphasizing AI risk is, in part, a request to create the former earlier than we create the latter.

That’s a somewhat narrow disagreement with Robin. I mostly agree with his psychoanalysis in Most AI Fear Is Future Fear.

This week we saw two interesting bank collapses: Silvergate Capital Corporation and SVB Financial Group.

This is a reminder that diversification is important.

The most basic problem in both cases is that they got money from a rather undiversified set of depositors, who experienced unusually large fluctuations in their deposits and withdrawals. They also made overly large bets on the safety of government bonds.

Continue Reading

Book review: How Social Science Got Better: Overcoming Bias with More Evidence, Diversity, and Self-Reflection, by Matt Grossmann.

It’s easy for me to become disenchanted with social science when so much of what I read about it is selected from the most pessimistic and controversial reports.

With this book, Grossmann helped me to correct my biased view of the field. While plenty of valid criticisms have been made about social science, many of the complaints lobbed against it are little more than straw men.

Grossmann offers a sweeping overview of the progress that the field has made over the past few decades. His tone is optimistic and hearkens back to Steven Pinker’s The Better Angels of Our Nature, while maintaining a rigorous (but dry) style akin to the less controversial sections of Robin Hanson’s The Age of Em. Throughout the book, Grossmann aims to outdo even Wikipedia in his use of a neutral point of view.

Continue Reading

I’m having trouble keeping track of everything I’ve learned about AI and AI alignment in the past year or so. I’m writing this post in part to organize my thoughts, and to a lesser extent I’m hoping for feedback about what important new developments I’ve been neglecting. I’m sure that I haven’t noticed every development that I would consider important.

I’ve become a bit more optimistic about AI alignment in the past year or so.

I currently estimate a 7% chance AI will kill us all this century. That’s down from estimates that fluctuated from something like 10% to 40% over the past decade. (The extent to which those numbers fluctuate implies enough confusion that it only takes a little bit of evidence to move my estimate a lot.)
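The point about small amounts of evidence moving the estimate a lot can be illustrated with a bit of odds-scale arithmetic (my illustration, not a calculation from the post). On the odds scale, a Bayesian update just multiplies the prior odds by a likelihood ratio, so a likelihood ratio of roughly 0.3 (less than two bits of evidence) is enough to move a 20% estimate down to about 7%:

```python
def update(prior, likelihood_ratio):
    """Bayesian update on the odds scale: posterior odds = prior odds * LR."""
    odds = prior / (1 - prior)
    posterior_odds = odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

# A modest likelihood ratio of ~0.3 moves a 20% estimate to about 7%.
print(round(update(0.20, 0.3), 3))  # → 0.07
```

The numbers here (20%, 0.3) are hypothetical inputs chosen to match the ranges mentioned above, not the author’s actual calculation.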

I’m also becoming more nervous about how close we are to human-level and transformative AGI, not to mention uncomfortable that I still don’t have a clear understanding of what I mean when I say human-level or transformative AGI.

Continue Reading

I recently noticed similarities between how I decide what stock market evidence to look at, and how the legal system decides what lawyers are allowed to tell juries.

This post will elaborate on Eliezer’s Scientific Evidence, Legal Evidence, Rational Evidence. In particular, I’ll try to generalize about why there’s a large class of information that I actively avoid treating as Bayesian evidence.

Continue Reading

AI looks likely to cause major changes to society over the next decade.

Financial markets have mostly not reacted to this forecast yet. I expect it will be at least a few months, maybe even years, before markets have a large reaction to AI. I’d much rather buy too early than too late, so I’m trying to reposition my investments this winter to prepare for AI.

This post will focus on scenarios where AI reaches roughly human levels sometime around 2030 to 2035, and has effects that are at most 10 times as dramatic as the Industrial Revolution. I’m not confident that such scenarios are realistic. I’m only saying that they’re plausible enough to affect my investment strategies.

Continue Reading


BioVie Inc recently reported some unusual results from a clinical trial for Alzheimer’s.

They report some mildly encouraging cognitive improvements, but it’s only 3 months into the trial and there’s no placebo group, so it’s easy to imagine they’re just seeing a placebo effect (Annovis’ results show a clear placebo effect, presumably influencing the measurements rather than actual health).

What interested me is this:

Reduces Horvath DNA Methylation SkinBlood Clock by 3.3 years after 3 months of treatment.

Continue Reading

Book review: Investing Amid Low Expected Returns: Making the Most When Markets Offer the Least, by Antti Ilmanen.

This book is a follow-up to Ilmanen’s prior book, Expected Returns. Ilmanen has gotten nerdier in the decade between the two books. This book is for professional investors who want more extensive analysis than what Expected Returns provided. This review is also written for professional investors. Skip this review if you don’t aspire to be one.

Continue Reading