All posts tagged risks

Two months ago I attended Eric Drexler’s launch of MSEP.one. It’s open source software, written by people with professional game design experience, intended to catalyze better designs for atomically precise manufacturing (or generative nanotechnology, as he now calls it).

Drexler wants to draw more attention to the benefits of nanotech, which involve exponents large enough that our intuition boggles at handling them. Those benefits include permanent health (Drexler’s new framing of life extension and cures for aging).

He hopes that a decentralized network of users will create a rich library of open-source components that might be used to build a nanotech factory. With enough effort, it could then become possible to design a complete enough factory that critics would have to shift from their current practice of claiming nanotech is impossible, to arguing with expert chemists over how well it would work.

Continue Reading

Book review: On the Edge: The Art of Risking Everything, by Nate Silver.

Nate Silver’s latest work straddles the line between journalistic inquiry and subject matter expertise.

“On the Edge” offers a valuable lens through which to understand analytical risk-takers.

The River versus The Village

Silver divides the interesting parts of the world into two tribes.

On his side, we have “The River” – a collection of eccentrics typified by Silicon Valley entrepreneurs and professional gamblers, who tend to be analytical, abstract, decoupling, competitive, critical, independent-minded (contrarian), and risk-tolerant.

Continue Reading

I have canceled my OpenAI subscription in protest over OpenAI’s lack of ethics.

In particular, I object to:

  • threats to confiscate departing employees’ equity unless those employees signed a lifelong non-disparagement contract
  • Sam Altman’s pattern of lying about important topics

I’m trying to hold AI companies to higher standards than I use for typical companies, due to the risk that AI companies will exert unusual power.

A boycott of OpenAI subscriptions seems unlikely to gain enough attention to meaningfully influence OpenAI. Where I hope to make a difference is by discouraging competent researchers from joining OpenAI unless they clearly reform (e.g. by firing Altman). A few good researchers choosing not to work at OpenAI could make the difference between OpenAI being the leader in AI 5 years from now versus being, say, a distant 3rd place.

A year ago, I thought that OpenAI equity would be a great investment, but that I had no hope of buying any. But the value of equity is heavily dependent on trust that a company will treat equity holders fairly. The legal system helps somewhat with that, but it can be expensive to rely on the legal system. OpenAI’s equity is nonstandard in ways that should create some unusual uncertainty. Potential employees ought to question whether there’s much connection between OpenAI’s future profits and what equity holders will get.

How does OpenAI’s behavior compare to other leading AI companies?

I’m unsure whether Elon Musk’s xAI deserves a boycott, partly because I’m unsure whether it’s a serious company. Musk’s history of breaking contracts bears some similarity to OpenAI’s attitude toward its agreements. Musk also bears some responsibility for SpaceX requiring non-disparagement agreements.

Google has shown some signs of being evil. As far as I can tell, DeepMind has been relatively ethical. I’ve heard clear praise of Demis Hassabis’s character from Aubrey de Grey, who knew Hassabis back in the 1990s. Probably parts of Google ought to be boycotted, but I encourage good researchers to work at DeepMind.

Anthropic seems to be a good deal more ethical than OpenAI. I feel comfortable paying them for a subscription to Claude Opus. My evidence concerning their ethics is too weak to say more than that.

P.S. Some of the better sources to start with for evidence against Sam Altman / OpenAI:

But if you’re thinking of working at OpenAI, please look at more than just those sources.

Book review: Dark Skies: Space Expansionism, Planetary Geopolitics, and the Ends of Humanity, by Daniel Deudney.

Dark Skies is an unusually good and bad book.

Good in the sense that 95% of the book consists of uncontroversial, scholarly, mundane claims that accurately describe the views that Deudney is attacking. These parts of the book are careful to distinguish between value differences and claims about objective facts.

Bad in the senses that the good parts make the occasional unfair insult more gratuitous, and that Deudney provides little support for his predictions that his policies will produce better results than those of his adversaries. I count myself as one of his adversaries.

Dark Skies is an opposite of Where Is My Flying Car? in both style and substance.

Continue Reading

This week we saw two interesting bank collapses: Silvergate Capital Corporation and SVB Financial Group.

This is a reminder that diversification is important.

The most basic problem in both cases is that they got money from a rather undiverse set of depositors, who experienced unusually large fluctuations in their deposits and withdrawals. They also made overly large bets on the safety of government bonds. Those bonds are safe from default, but their market value falls when interest rates rise, and that matters when large withdrawals force a bank to sell before maturity.
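Here’s a minimal sketch of that interest-rate risk. All of the numbers are made up for illustration, not either bank’s actual holdings:

```python
# Hypothetical illustration: how a default-free long-dated bond
# loses market value when interest rates rise.

def bond_price(face, coupon_rate, market_yield, years):
    """Present value of a bond's coupons plus principal."""
    coupon = face * coupon_rate
    pv_coupons = sum(coupon / (1 + market_yield) ** t for t in range(1, years + 1))
    pv_principal = face / (1 + market_yield) ** years
    return pv_coupons + pv_principal

# A 10-year bond bought at par when yields were 1.5%.
purchase_price = bond_price(100, 0.015, 0.015, 10)  # ~100.0

# Yields rise to 4%: the same bond now trades well below par.
current_price = bond_price(100, 0.015, 0.04, 10)    # ~79.7

loss = purchase_price - current_price
print(f"mark-to-market loss if forced to sell early: {loss:.1f}%")
```

A roughly 20% paper loss on “safe” assets is survivable if the bonds can be held to maturity; it isn’t if a concentrated set of depositors withdraws all at once.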

Continue Reading

Book review: Now It Can Be Told: The Story Of The Manhattan Project, by Leslie R. Groves.

This is the story of a desperate arms race, against what turned out to be a mostly imaginary opponent. I read it for a perspective on how future arms races and large projects might work.

What Surprised Me

It seemed strange that a large fraction of the book described how to produce purified U-235 and plutonium, and that the process of turning those fuels into bombs seemed anticlimactic.

Continue Reading

I’ve been pondering whether we’ll get any further warnings before AI(s) exceed human levels at general-purpose tasks, a threshold that would entail enough risk that AI researchers ought to take some precautions. I feel pretty uncertain about this.

I haven’t even been able to make useful progress at clarifying what I mean by that threshold of general intelligence.

As a weak substitute, I’ve brainstormed a bunch of scenarios describing not-obviously-wrong ways in which people might notice, or fail to notice, that AI is transforming the world.

I’ve given probabilities for each scenario, which I’ve pulled out of my ass and don’t plan to defend.

Continue Reading

Book review: The Alignment Problem: Machine Learning and Human Values, by Brian Christian.

I was initially skeptical of Christian’s focus on problems with AI as it exists today. Most writers with this focus miss the scale of catastrophe that could result from AIs that are smart enough to subjugate us.

Christian mostly writes about problems that are visible in existing AIs. Yet he organizes his discussion of near-term risks in ways that don’t pander to near-sighted concerns, and which nudge readers in the direction of wondering whether today’s mistakes represent the tip of an iceberg.

Most of the book carefully avoids alarmist or emotional tones. It’s hard to tell whether he has an opinion on how serious a threat unaligned AI will be – presumably it’s serious enough to write a book about?

Could the threat be more serious than that implies? Christian notes, without indicating his own opinion, that some people think so:

A growing chorus within the AI community … believes, if we are not sufficiently careful, that this is literally how the world will end. And – for today at least – the humans have lost the game.

Continue Reading

Dirt

TL;DR: loss of topsoil is a problem, but not a crisis. I’m unsure whether fixing it qualifies as a great opportunity for mitigating global warming.

This post will loosely resemble a review of the book Dirt: The Erosion of Civilizations, by David R. Montgomery. If you want a real review, see Colby Moorberg’s review on Goodreads.

Depletion of topsoil has been an important cause of the collapse of large civilizations. Farmers are often tempted to maximize this year’s production, at the cost of declining crop yields. When declining yields leave an empire unable to feed everyone, farmers are unwilling to adopt techniques that restore the topsoil, because doing so will temporarily decrease production further. The Mayan civilization seems to have experienced three cycles of soil-driven boom and bust lasting around 1000 years per cycle.

Continue Reading

In The problem with rapid Covid testing, Mayank Gupta writes:

The absolute number of false positives would rise dramatically under slightly inaccurate, broad surveillance testing. At least initially, the number of people going to the doctor to ask what to do would also rise. One can imagine if doctors truly flubbed and didn’t know how to advise patients accurately, a lot of individual patients would lose trust in the medical system (testing, doctors, or both). The consequence of this would be more resistance to public health policy measures in the future.
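To make that arithmetic concrete, here’s a quick sketch of the base-rate effect Gupta is pointing at. The prevalence, sensitivity, and specificity figures are assumptions for illustration, not numbers from his post:

```python
# Hypothetical surveillance-testing numbers illustrating the base-rate effect:
# at low prevalence, even a fairly specific test yields mostly false positives.

population = 1_000_000  # people screened per day (assumed)
prevalence = 0.005      # 0.5% actually infected (assumed)
sensitivity = 0.90      # P(positive | infected) (assumed)
specificity = 0.98      # P(negative | not infected) (assumed)

infected = population * prevalence
healthy = population - infected

true_positives = infected * sensitivity        # 4,500
false_positives = healthy * (1 - specificity)  # 19,900

ppv = true_positives / (true_positives + false_positives)
print(f"false positives per day: {false_positives:,.0f}")
print(f"chance a positive result is real: {ppv:.0%}")  # ~18%
```

Under those assumptions, false positives outnumber true positives more than four to one, which is the flood of confused patients Gupta describes.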

For a reminder of why rapid testing is valuable, see Alex Tabarrok. Note also the evidence from the NBA that people who need useful tests can be more innovative than the medical system.

This seems like the tip of an important iceberg.

Continue Reading