I have canceled my OpenAI subscription in protest over OpenAI’s lack of ethics.

In particular, I object to:

  • threats to confiscate departing employees’ equity unless those employees signed a lifelong non-disparagement agreement
  • Sam Altman’s pattern of lying about important topics

I’m trying to hold AI companies to higher standards than I use for typical companies, due to the risk that AI companies will exert unusual power.

A boycott of OpenAI subscriptions seems unlikely to gain enough attention to meaningfully influence OpenAI. Where I hope to make a difference is by discouraging competent researchers from joining OpenAI unless they clearly reform (e.g. by firing Altman). A few good researchers choosing not to work at OpenAI could make the difference between OpenAI being the leader in AI 5 years from now versus being, say, a distant 3rd place.

A year ago, I thought that OpenAI equity would be a great investment, but that I had no hope of buying any. But the value of equity is heavily dependent on trust that a company will treat equity holders fairly. The legal system helps somewhat with that, but it can be expensive to rely on the legal system. OpenAI’s equity is nonstandard in ways that should create some unusual uncertainty. Potential employees ought to question whether there’s much connection between OpenAI’s future profits and what equity holders will get.

How does OpenAI’s behavior compare to other leading AI companies?

I’m unsure whether Elon Musk’s xAI deserves a boycott, partly because I’m unsure whether it’s a serious company. Musk has a history of breaking contracts that resembles OpenAI’s attitude toward its commitments. Musk also bears some responsibility for SpaceX requiring non-disparagement agreements.

Google has shown some signs of being evil. As far as I can tell, DeepMind has been relatively ethical. I’ve heard clear praise of Demis Hassabis’s character from Aubrey de Grey, who knew Hassabis back in the 1990s. Probably parts of Google ought to be boycotted, but I encourage good researchers to work at DeepMind.

Anthropic seems to be a good deal more ethical than OpenAI. I feel comfortable paying them for a subscription to Claude Opus. My evidence concerning their ethics is too weak to say more than that.

P.S. Some of the better sources to start with for evidence against Sam Altman / OpenAI:

But if you’re thinking of working at OpenAI, please look at more than just those sources.

Book review: Deep Utopia: Life and Meaning in a Solved World, by Nick Bostrom.

Bostrom’s previous book, Superintelligence, triggered expressions of concern. In his latest work, he describes his hopes for the distant future, presumably to limit the risk that fear of AI will lead to a Butlerian Jihad-like scenario.

While Bostrom is relatively cautious about endorsing specific features of a utopia, he clearly expresses his dissatisfaction with the current state of the world. For instance, in a footnoted rant about preserving nature, he writes:

Imagine that some technologically advanced civilization arrived on Earth … Imagine they said: “The most important thing is to preserve the ecosystem in its natural splendor. In particular, the predator populations must be preserved: the psychopath killers, the fascist goons, the despotic death squads … What a tragedy if this rich natural diversity were replaced with a monoculture of healthy, happy, well-fed people living in peace and harmony.” … this would be appallingly callous.

The book begins as if addressing a broad audience, then drifts into philosophy that seems obscure, leading me to wonder if it’s intended as a parody of aimless academic philosophy.


I’ve been dedicating a fair amount of my time recently to investigating whole brain emulation (WBE).

As computational power continues to grow, emulating a human brain at a reasonable speed looks increasingly feasible.

While the connectome data alone seems insufficient to fully capture and replicate human behavior, recent advancements in scanning technology have provided valuable insights into distinguishing different types of neural connections. I’ve heard suggestions that combining this neuron-scale data with higher-level information, such as fMRI or EEG, might hold the key to unlocking WBE. However, the evidence is not yet conclusive enough for me to make any definitive statements.

I’ve heard some talk about a new company aiming to achieve WBE within the next five years. While this timeline aligns suspiciously with the typical venture capital horizon for industries with weak patent protection, I believe there is a non-negligible chance of success within the next decade – perhaps exceeding 10%. As a result, I’m actively exploring investment opportunities in this company.

There has also been speculation about the potential of WBE to aid AI alignment efforts. However, I remain skeptical about this prospect. For WBE to make a significant impact on AI alignment, we would need not only an acceleration in WBE progress, but also either a slowdown in AI capability advances as they approach human levels, or the assumption that the primary risks from AI emerge only when it substantially surpasses human intelligence.

My primary motivation for delving into WBE stems from a personal desire to upload my own mind. The potential benefits of WBE for those who choose not to upload remain unclear, and I don’t know how to predict its broader societal implications.

Here are some videos that influenced my recent increased interest. Note that I’m relying heavily on the reputations of the speakers when deciding how much weight to give to their opinions.

Some relevant prediction markets:

Additionally, I’ve been working on some of the suggestions mentioned in the first video. I’m sharing my code and analysis on Colab. My aim is to evaluate the resilience of language models to the types of errors that might occur during the brain scanning process. While the results provide some reassurance, their value heavily relies on assumptions about the importance of low-confidence guesses made by the emulated mind.
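To make that concrete, here is a minimal sketch of the kind of test I have in mind, not the actual Colab code: load a small open model, add Gaussian noise to its weights as a crude stand-in for scanning errors, and watch how perplexity degrades. The choice of gpt2, the noise scales, and perplexity as the metric are illustrative assumptions on my part.

    # Sketch (assumptions: gpt2 as the subject, Gaussian weight noise as
    # the scan-error model, perplexity as the robustness measure).
    import torch
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
    text = "The capital of France is Paris. " * 30
    inputs = tokenizer(text, return_tensors="pt")

    for noise_scale in [0.0, 0.001, 0.01, 0.05]:
        # Reload clean weights for each noise level.
        model = GPT2LMHeadModel.from_pretrained("gpt2").eval()
        with torch.no_grad():
            for param in model.parameters():
                # Scale the noise to each tensor's own spread, so every
                # layer is corrupted proportionally.
                param.add_(torch.randn_like(param) * noise_scale * param.std())
            loss = model(**inputs, labels=inputs["input_ids"]).loss
        print(f"noise={noise_scale:.3f}  perplexity={loss.exp().item():.1f}")

What I’d watch for is how gracefully the curve degrades: a sharp cliff at small noise levels would imply tight error tolerances for any scan, while a gentle slope would be mildly reassuring.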

Manifold Markets is a prediction market platform where I’ve been trading since September. This post will compare it to other prediction markets that I’ve used.

Play Money

The most important fact about Manifold is that traders bet mana, which for most purposes is not real money. You can buy mana, and you can use mana to donate real money to charity. That’s not attractive enough for most of us to treat it as anything other than play money.

Play money has the important advantage of not being subject to CFTC regulation or gambling laws. That enables a good deal of innovation that is stifled in real-money platforms that are open to US residents.


Book review: A Theory of Everyone: The New Science of Who We Are, How We Got Here, and Where We’re Going, by Michael Muthukrishna.

I found this book disappointing. An important part of that is because Muthukrishna set my expectations too high.

I had previously blogged about a paper that he co-authored with Henrich on cultural influences on IQ. If those ideas had been new to me in this book, I’d be eagerly writing about them here. But I’ve already written enough about them in that blog post.

Another source of disappointment was that the book’s title is misleading. To the limited extent that the book focuses on a theory, it’s the theory that’s more clearly described in Henrich’s The Secret of Our Success. A Theory of Everyone feels more like a collection of blog posts than a well-organized book.


Book review: Dark Skies: Space Expansionism, Planetary Geopolitics, and the Ends of Humanity, by Daniel Deudney.

Dark Skies is an unusually good and bad book.

Good in the sense that 95% of the book consists of uncontroversial, scholarly, mundane claims that accurately describe the views that Deudney is attacking. These parts of the book are careful to distinguish between value differences and claims about objective facts.

Bad in the senses that the good parts make the occasional unfair insult more gratuitous, and that Deudney provides little support for his predictions that his policies will produce better results than those of his adversaries. I count myself as one of his adversaries.

Dark Skies is the opposite of Where Is My Flying Car? in both style and substance.


Book review: The Accidental Superpower: The Next Generation of American Preeminence and the Coming Global Disorder, by Peter Zeihan.

Are you looking for an entertaining set of geopolitical forecasts that will nudge you out of the frameworks of mainstream pundits? This might be just the right book for you.

Zeihan often sounds more like a real estate salesman than a scholar: The US has more miles of internal waterways than the rest of the world combined! US mountain ranges have passes that are easy enough to use that the mountains barely impede traffic. Transportation options like that guarantee sufficient political unity!


[I mostly wrote this to clarify my thoughts. I’m unsure whether it will be valuable for readers.]

I expect that within a decade, AI will be able to do 90% of current human jobs. I don’t mean that 90% of humans will be obsolete. I mean that the average worker could delegate 90% of their tasks to an AGI.

I feel confused about what this implies for the kind of long-term planning and strategizing that would enable a poorly aligned AI to create large-scale harm.

Is the ability to achieve long-term goals hard for an AI to develop?
