All posts tagged risks

Book review: On the Edge: The Art of Risking Everything, by Nate Silver.

Nate Silver’s latest work straddles the line between journalistic inquiry and subject matter expertise.

“On the Edge” offers a valuable lens through which to understand analytical risk-takers.

The River versus The Village

Silver divides the interesting parts of the world into two tribes.

On his side, we have “The River” – a collection of eccentrics typified by Silicon Valley entrepreneurs and professional gamblers, who tend to be analytical, abstract, decoupling, competitive, critical, independent-minded (contrarian), and risk-tolerant.


I have canceled my OpenAI subscription in protest over OpenAI’s lack of ethics.

In particular, I object to:

  • threats to confiscate departing employees’ equity unless those employees signed a life-long non-disparagement contract
  • Sam Altman’s pattern of lying about important topics

I’m trying to hold AI companies to higher standards than I use for typical companies, due to the risk that AI companies will exert unusual power.

A boycott of OpenAI subscriptions seems unlikely to gain enough attention to meaningfully influence OpenAI. Where I hope to make a difference is by discouraging competent researchers from joining OpenAI unless they clearly reform (e.g. by firing Altman). A few good researchers choosing not to work at OpenAI could make the difference between OpenAI being the leader in AI 5 years from now versus being, say, a distant 3rd place.

A year ago, I thought that OpenAI equity would be a great investment, but that I had no hope of buying any. But the value of equity is heavily dependent on trust that a company will treat equity holders fairly. The legal system helps somewhat with that, but it can be expensive to rely on the legal system. OpenAI’s equity is nonstandard in ways that should create some unusual uncertainty. Potential employees ought to question whether there’s much connection between OpenAI’s future profits and what equity holders will get.

How does OpenAI’s behavior compare to other leading AI companies?

I’m unsure whether Elon Musk’s xAI deserves a boycott, partly because I’m unsure whether it’s a serious company. Musk has a history of breaking contracts that bears some similarity to OpenAI’s attitude. Musk also bears some responsibility for SpaceX requiring non-disparagement agreements.

Google has shown some signs of being evil. As far as I can tell, DeepMind has been relatively ethical. I’ve heard clear praise of Demis Hassabis’s character from Aubrey de Grey, who knew Hassabis back in the 1990s. Probably parts of Google ought to be boycotted, but I encourage good researchers to work at DeepMind.

Anthropic seems to be a good deal more ethical than OpenAI. I feel comfortable paying them for a subscription to Claude Opus. My evidence concerning their ethics is too weak to say more than that.

P.S. Some of the better sources to start with for evidence against Sam Altman / OpenAI:

But if you’re thinking of working at OpenAI, please look at more than just those sources.

Book review: Dark Skies: Space Expansionism, Planetary Geopolitics, and the Ends of Humanity, by Daniel Deudney.

Dark Skies is an unusually good and bad book.

Good in the sense that 95% of the book consists of uncontroversial, scholarly, mundane claims that accurately describe the views that Deudney is attacking. These parts of the book are careful to distinguish between value differences and claims about objective facts.

Bad in the senses that the good parts make the occasional unfair insult more gratuitous, and that Deudney provides little support for his predictions that his policies will produce better results than those of his adversaries. I count myself as one of his adversaries.

Dark Skies is an opposite of Where Is My Flying Car? in both style and substance.


This week we saw two interesting bank collapses: Silvergate Capital Corporation, and SVB Financial Group.

This is a reminder that diversification is important.

The most basic problem in both cases is that they got money from a rather undiverse set of depositors, who experienced unusually large fluctuations in their deposits and withdrawals. They also made overly large bets on the safety of government bonds.


Book review: Now It Can Be Told: The Story Of The Manhattan Project, by Leslie R. Groves.

This is the story of a desperate arms race, against what turned out to be a mostly imaginary opponent. I read it for a perspective on how future arms races and large projects might work.

What Surprised Me

It seemed strange that a large fraction of the book described how to produce purified U-235 and plutonium, and that the process of turning those fuels into bombs seemed anticlimactic.


I’ve been pondering whether we’ll get any further warnings about when AI(s) will exceed human levels at general-purpose tasks, and whether doing so would entail enough risk that AI researchers ought to take some precautions. I feel pretty uncertain about this.

I haven’t even been able to make useful progress at clarifying what I mean by that threshold of general intelligence.

As a weak substitute, I’ve brainstormed a bunch of scenarios describing not-obviously-wrong ways in which people might notice, or fail to notice, that AI is transforming the world.

I’ve given probabilities for each scenario, which I’ve pulled out of my ass and don’t plan to defend.


Book review: The Alignment Problem: Machine Learning and Human Values, by Brian Christian.

I was initially skeptical of Christian’s focus on problems with AI as it exists today. Most writers with this focus miss the scale of catastrophe that could result from AIs that are smart enough to subjugate us.

Christian mostly writes about problems that are visible in existing AIs. Yet he organizes his discussion of near-term risks in ways that don’t pander to near-sighted concerns, and which nudge readers in the direction of wondering whether today’s mistakes represent the tip of an iceberg.

Most of the book carefully avoids alarmist or emotional tones. It’s hard to tell whether he has an opinion on how serious a threat unaligned AI will be – presumably it’s serious enough to write a book about?

Could the threat be more serious than that implies? Christian notes, without indicating his own opinion, that some people think so:

A growing chorus within the AI community … believes that, if we are not sufficiently careful, this is literally how the world will end. And – for today at least – the humans have lost the game.


Dirt

TL;DR: loss of topsoil is a problem, but not a crisis. I’m unsure whether fixing it qualifies as a great opportunity for mitigating global warming.

This post will loosely resemble a review of the book Dirt: The Erosion of Civilizations, by David R. Montgomery. If you want a real review, see Colby Moorberg’s review on Goodreads.

Depletion of topsoil has been an important cause of the collapse of large civilizations. Farmers are often tempted to maximize this year’s production, at the cost of declining crop yields. When declining yields leave an empire unable to feed everyone, farmers are unwilling to adopt techniques that restore the topsoil, because doing so will temporarily decrease production further. The Mayan civilization seems to have experienced three cycles of soil-driven boom and bust lasting around 1000 years per cycle.


In The problem with rapid Covid testing, Mayank Gupta writes:

The absolute number of false positives would rise dramatically under slightly inaccurate, broad surveillance testing. At least initially, the number of people going to the doctor to ask what to do would also rise. One can imagine if doctors truly flubbed and didn’t know how to advise patients accurately, a lot of individual patients would lose trust in the medical system (testing, doctors, or both). The consequence of this would be more resistance to public health policy measures in the future.
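The arithmetic behind that concern is base-rate reasoning: when prevalence is low, even a highly specific test screening everyone produces more false positives than true positives. A minimal sketch – the prevalence, sensitivity, and specificity below are illustrative assumptions of mine, not figures from Gupta’s piece:

```python
# Illustrative numbers only (assumed, not from the quoted article).
def surveillance_false_positives(population, prevalence, sensitivity, specificity):
    """Expected true and false positives when screening an entire population."""
    infected = population * prevalence
    healthy = population - infected
    true_pos = infected * sensitivity          # infected people correctly flagged
    false_pos = healthy * (1 - specificity)    # healthy people incorrectly flagged
    return true_pos, false_pos

# Screening 1 million people at 0.5% prevalence with a 90%-sensitive,
# 98%-specific rapid test:
tp, fp = surveillance_false_positives(1_000_000, 0.005, 0.9, 0.98)
# False positives (19,900) far outnumber true positives (4,500),
# even though the test sounds "98% accurate".
```

This is why broad screening needs confirmatory follow-up testing, and why doctors fielding the resulting questions matter so much.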

For a reminder of why rapid testing is valuable, see Alex Tabarrok. Note also the evidence from the NBA that people who need useful tests can be more innovative than the medical system.

This seems like the tip of an important iceberg.


Is R0 < 1 yet?

I recently made a bet with Robin Hanson that US COVID-19 deaths will be less than 250,000 by Jan 1, 2022 (details hiding in these Facebook comments).

I gave a few hints here about my reasons for optimism (based on healthweather.us). I’ll add some more thoughts here, but won’t try to fully explain my intuitions. Note that these are more carefully thought out than my reasoning at the time of the bet, and the evidence has been steadily improving between then and now.

First, a quick sanity check. Metaculus has been estimating about 2 million deaths from COVID-19 worldwide this year. It also predicts that diagnosed cases will decline each quarter from this quarter through at least Q4 2020, and stabilize in Q1 2021 at 1/10 the rate of the current quarter, suggesting that most deaths will occur this year.

U.S. population is roughly 4% of the world, suggesting a bit over 80k deaths if the U.S. is fairly average. The U.S. looks about a factor of 5 worse than average as measured by currently confirmed deaths, but a bit of that is due to a few countries doing a poorer job of confirming the deaths that happen (Iran?), and more importantly, the Metaculus forecasts likely anticipate that countries such as India, Brazil, and Indonesia will eventually have a much higher fraction of the world’s deaths than is the case now. So I’m fairly comfortable with betting that the U.S. will end up well within a factor of 3 of the world per capita average.
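The back-of-the-envelope estimate above can be written out explicitly. A minimal sketch using the post’s own round numbers (2 million worldwide deaths, 4% population share); the comparison against the bet’s 250k threshold is my addition:

```python
# Round numbers from the post itself.
world_deaths_2020 = 2_000_000        # Metaculus estimate cited above
us_population_share = 0.04           # U.S. is roughly 4% of world population

us_deaths_if_average = world_deaths_2020 * us_population_share
print(us_deaths_if_average)          # 80000.0 -> "a bit over 80k deaths"

# The bet's 250k threshold corresponds to roughly a factor of 3 worse
# than the world per-capita average:
print(250_000 / us_deaths_if_average)  # 3.125
```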

I was about 75% confident in late March that R0 had dropped below 1, and my confidence has been slowly increasing since then.

Note a contrary opinion here. It appears to produce results that are slightly pessimistic, due to assuming that testing effort hasn’t increased.

Yet even if it’s currently a little bit above 1, there’s still a fair amount of reason for hope.

Many people have been talking as if strict shelter-in-place rules (lockdowns) are the main tools for keeping R0 < 1. That’s a misleading half-truth. Something like those rules may have been critical last month for generating quick coordination around some drastic and urgent changes. But the best longer-term strategies are less drastic and more effective.

One obstacle to lowering R0 is that hospitals are a source of infection. I’m pretty sure that will be solved, on a lousy schedule that’s unconnected with the lockdowns.

Within-home transmission likely has a significant effect on R0. Lockdowns didn’t cause any immediate drop in that transmission, but that transmission drops a good deal as the fraction of people who have been staying at home for 2+ weeks rises, so R0 is likely declining now due to that effect.

Most buildings that are open to the public should soon require good masks for anyone to enter. It wasn’t feasible to include such a rule in the initial lockdown orders, but there’s a steady move toward following that rule.

I expect those 3 changes to reduce R0 at least 20%, and probably more, between late March and late April.
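If those three channels (hospital transmission, within-home transmission, mask requirements) act roughly independently, their effects on R0 multiply rather than add. A minimal sketch where the per-channel percentages are hypothetical assumptions of mine; only the combined “at least 20%” figure comes from the text above:

```python
# Hypothetical per-channel reductions in transmission (assumed values);
# the post only claims a combined reduction of at least 20%.
reductions = [0.08, 0.07, 0.08]   # hospitals, within-home, masks

remaining = 1.0
for r in reductions:
    remaining *= (1 - r)          # each channel scales R0 multiplicatively

combined_reduction = 1 - remaining
print(combined_reduction)         # ~0.213, i.e. a bit over 20%
```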

Robin is right to be concerned about the competence of institutions that we relied on to prevent the pandemic. Yet I see modest reasons for optimism that the U.S. will mostly use different institutions for test and trace: Google, Apple, LabCorp, etc., and they’re moderately competent. Also, most institutions are more competent at handling problems which they recall vividly than they are at handling problems which have been insignificant in the lifetimes of most executives.

We can be pretty sure based on China’s results that R0 < 1 is not a narrow target. Wuhan got R0 lower than the key threshold by a factor of something like two. They did that in roughly the worst weather conditions – most of the time, warmer (or occasionally colder) weather will modestly reduce R0. So we’ll be able to survive a fair amount of incompetence.

But there’s still plenty of uncertainty about whether next week’s R0 will be just barely acceptable, or comfortably below 1.

Deliberate Infection?

The challenges of adapting to the most likely scenarios took nearly all of my attention in March. So I had no remaining slack to adequately prepare for a scenario that looked unlikely to me, but which looked likely to Robin. For one thing, I ought to have evaluated the possibility that money would be significantly more valuable to me if Robin wins the bet than if he loses.

It is certainly possible to imagine circumstances where deliberate coronavirus infection is quite valuable. But it looks rather low value in the scenario I think we’re in.

I don’t have much hope of getting a sensible program of deliberate infection in a society that couldn’t even stockpile facemasks in February.

I also see only a small chance that talking about deliberate infection now will help in a future pandemic. I expect this to be humanity’s last major natural pandemic (note: I’m too lazy today to evaluate the relevance of bioterrorist risks). I don’t know exactly how we’ll deal with future pandemics, but the current crisis is likely to speed up some approaches that could prevent a future virus from becoming a crisis. Some conjectures about what might be possible within a decade:

  • Better approaches to vaccination, such that vaccines could become widely available within a week of identifying the virus.
  • Medical tricorders that are as ubiquitous as phones, and which can be quickly updated to detect any new virus.

Still, I do think deliberate infection should be tried in a few places, in case the situation is as desperate as Robin believes. I’ll suggest Australia as a top choice. It has weather-related reasons for worrying that the peak will come in a few months. It has substantial tuberculosis vaccination, which may reduce the death rate among infected people by a large margin (see Correlation between universal BCG vaccination policy and reduced morbidity and mortality for COVID-19: an epidemiological study).

Note that tuberculosis vaccination looks a good deal more promising than deliberate infection, so it should be getting more attention.

Other odds and ends

Some of the concerns about a lasting economic slowdown are due to expectations that the restaurant industry will be shut down for years. I expect many other businesses to reopen within months with strict requirements that everyone wear masks, but it’s rather hard to eat while wearing a mask. So I see a large uncertainty about which year the restaurant business will return to normal. Yet I also don’t see people who used to rely on restaurants putting up with cooking at home for long. I see plenty of room for improvement in providing restaurant-like food to the home.

Current apps for delivery from restaurants seem like clumsy attempts to tack on a service as an afterthought. There’s plenty of room to redesign food preparation around home delivery, in ways that more efficiently and conveniently handle more of the volume that restaurants were handling before.

We have significant unemployment among restaurant workers, combined with food being hard to acquire for reasons which often boil down to labor shortages (combined with rules against price gouging). That’s not the kind of disruption that causes a lasting depression. The widespread opposition to price gouging is slowing down the adjustments a bit, but even so, it shouldn’t be long before the unemployed food service workers manage to become redeployed in whatever roles are appropriate to this year’s food preparation and delivery needs.

Finally, what should we think about this news: SuperCom Ships Coronavirus Quarantine Compliance Technology for Immediate Pilot?