
All posts tagged risks

Is R0 < 1 yet?

I recently made a bet with Robin Hanson that US COVID-19 deaths will be less than 250,000 by Jan 1, 2022 (details hiding in these Facebook comments).

I gave a few hints about my reasons for optimism here (based on healthweather.us). I'll add some more thoughts in this post, but won't try to fully explain my intuitions. Note that these thoughts are more carefully worked out than my reasoning at the time of the bet, and the evidence has been steadily improving since then.

First, a quick sanity check. Metaculus has been estimating about 2 million deaths from COVID-19 worldwide this year. It also predicts that diagnosed cases will decline each quarter from this quarter through at least Q4 2020, and stabilize in Q1 2021 at 1/10 the rate of the current quarter, suggesting that most deaths will occur this year.

U.S. population is roughly 4% of the world, suggesting a bit over 80k deaths if the U.S. is fairly average. The U.S. looks about a factor of 5 worse than average as measured by currently confirmed deaths, but a bit of that is due to a few countries doing a poorer job of confirming the deaths that happen (Iran?), and more importantly, the Metaculus forecasts likely anticipate that countries such as India, Brazil, and Indonesia will eventually have a much higher fraction of the world’s deaths than is the case now. So I’m fairly comfortable with betting that the U.S. will end up well within a factor of 3 of the world per capita average.
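
For reference, here is a minimal back-of-envelope sketch of that arithmetic; the worldwide total, the 4% population share, and the factor-of-3 bound are the figures quoted above, and the rest is just multiplication:

```python
# Back-of-envelope check of the bet, using the figures quoted above.
world_deaths_2020 = 2_000_000      # Metaculus estimate for worldwide deaths this year
us_population_share = 0.04         # U.S. is roughly 4% of world population

baseline_us = world_deaths_2020 * us_population_share
print(f"U.S. deaths if exactly average: {baseline_us:,.0f}")          # ~80,000

# Allow the U.S. to be worse than the per-capita world average by a factor of 3.
pessimistic_us = baseline_us * 3
print(f"U.S. deaths at 3x the world average: {pessimistic_us:,.0f}")  # ~240,000

bet_threshold = 250_000
print("Under the bet threshold?", pessimistic_us < bet_threshold)     # True
```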

I was about 75% confident in late March that R0 had dropped below 1, and my confidence has been slowly increasing since then.

Note a contrary opinion here. It appears to produce results that are slightly pessimistic, due to assuming that testing effort hasn’t increased.

Yet even if it’s currently a little bit above 1, there’s still a fair amount of reason for hope.

Many people have been talking as if strict shelter-in-place rules (lockdowns) are the main tools for keeping R0 < 1. That’s a misleading half-truth. Something like those rules may have been critical last month for generating quick coordination around some drastic and urgent changes. But the best longer-term strategies are less drastic and more effective.

One obstacle to lowering R0 is that hospitals are a source of infection. I’m pretty sure that will be solved, on a lousy schedule that’s unconnected with the lockdowns.

Within-home transmission likely has a significant effect on R0. Lockdowns didn't cause any immediate drop in that transmission, but it drops a good deal as the fraction of people who have been staying at home for 2+ weeks rises, so R0 is likely declining now due to that effect.

Most buildings that are open to the public should soon require good masks for anyone to enter. It wasn’t feasible to include such a rule in the initial lockdown orders, but there’s a steady move toward following that rule.

I expect those 3 changes to reduce R0 at least 20%, and probably more, between late March and late April.
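
As a rough illustration of how several modest changes can compound, here is a minimal sketch; the individual percentages are purely hypothetical placeholders, not estimates from any data source:

```python
# Hypothetical per-channel reductions in transmission (NOT empirical estimates).
reductions = {
    "hospital infection control": 0.05,
    "decaying within-home transmission": 0.10,
    "mask requirements in public buildings": 0.10,
}

# If the channels act roughly independently, their effects multiply.
remaining = 1.0
for channel, r in reductions.items():
    remaining *= (1 - r)

combined_reduction = 1 - remaining
print(f"Combined reduction in R0: {combined_reduction:.1%}")  # about 23% under these guesses
```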

Robin is right to be concerned about the competence of institutions that we relied on to prevent the pandemic. Yet I see modest reasons for optimism that the U.S. will mostly use different institutions for test and trace: Google, Apple, LabCorp, etc., and they’re moderately competent. Also, most institutions are more competent at handling problems which they recall vividly than they are at handling problems which have been insignificant in the lifetimes of most executives.

We can be pretty sure based on China’s results that R0 < 1 is not a narrow target. Wuhan got R0 lower than the key threshold by a factor of something like two. They did that in roughly the worst weather conditions – most of the time, warmer (or occasionally colder) weather will modestly reduce R0. So we’ll be able to survive a fair amount of incompetence.

But there’s still plenty of uncertainty about whether next week’s R0 will be just barely acceptable, or comfortably below 1.

Deliberate Infection?

The challenges of adapting to the most likely scenarios took nearly all of my attention in March. So I had no remaining slack to adequately prepare for a scenario that looked unlikely to me, but which looked likely to Robin. For one thing, I ought to have evaluated the possibility that money will be significantly more valuable to me if Robin wins the bet than if he loses.

It is certainly possible to imagine circumstances where deliberate coronavirus infection is quite valuable. But it looks rather low value in the scenario I think we’re in.

I don’t have much hope of getting a sensible program of deliberate infection in a society that couldn’t even stockpile facemasks in February.

I also see only a small chance that talking about deliberate infection now will help in a future pandemic. I expect this to be humanity’s last major natural pandemic (note: I’m too lazy today to evaluate the relevance of bioterrorist risks). I don’t know exactly how we’ll deal with future pandemics, but the current crisis is likely to speed up some approaches that could prevent a future virus from becoming a crisis. Some conjectures about what might be possible within a decade:

  • Better approaches to vaccination, such that vaccines could become widely available within a week of identifying the virus.
  • Medical tricorders that are as ubiquitous as phones, and which can be quickly updated to detect any new virus.

Still, I do think deliberate infection should be tried in a few places, in case the situation is as desperate as Robin believes. I’ll suggest Australia as a top choice. It has weather-related reasons for worrying that the peak will come in a few months. It has substantial tuberculosis vaccination, which may reduce the death rate among infected people by a large margin (see Correlation between universal BCG vaccination policy and reduced morbidity and mortality for COVID-19: an epidemiological study).

Note that tuberculosis vaccination looks a good deal more promising than deliberate infection, so it should be getting more attention.

Other odds and ends

Some of the concerns about a lasting economic slowdown are due to expectations that the restaurant industry will be shut down for years. I expect many other businesses to reopen within months with strict requirements that everyone wear masks, but it’s rather hard to eat while wearing a mask. So I see a large uncertainty about which year the restaurant business will return to normal. Yet I also don’t see people who used to rely on restaurants putting up with cooking at home for long. I see plenty of room for improvement in providing restaurant-like food to the home.

Current apps for delivery from restaurants seem like clumsy attempts to tack on a service as an afterthought. There’s plenty of room to redesign food preparation around home delivery, in ways that more efficiently and conveniently handle more of the volume that restaurants were handling before.

We have significant unemployment among restaurant workers, combined with food being hard to acquire for reasons which often boil down to labor shortages (combined with rules against price gouging). That’s not the kind of disruption that causes a lasting depression. The widespread opposition to price gouging is slowing down the adjustments a bit, but even so, it shouldn’t be long before the unemployed food service workers manage to become redeployed in whatever roles are appropriate to this year’s food preparation and delivery needs.

Finally, what should we think about this news: SuperCom Ships Coronavirus Quarantine Compliance Technology for Immediate Pilot?

The stock market crash of the past two weeks looks like an over-reaction to COVID-19.

Is COVID-19 really the reason for the crash? I can’t find any other news that would explain the timing and which stocks were hit hardest.

Here’s a sample of some of the harder-hit stocks, all travel-related (Friday’s close compared to the highest close in February; a short sketch of the calculation follows the list):

  • -37% Hertz (HTZ)
  • -36% Avis (CAR)
  • -29% World Fuel Services Corp (INT)
  • -24% Carnival (cruise line) (CCL)
  • -22% Delta Air Lines (DAL)
  • (compare to the S&P 500: -12.4%)
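
For reference, this is how those percentages are computed; the prices below are hypothetical placeholders used only to show the calculation, not the actual quotes:

```python
# Percent decline from the highest February close to Friday's close.
# These prices are hypothetical placeholders, not real quotes.
prices = {
    "HTZ": {"feb_high_close": 20.00, "friday_close": 12.60},
    "CCL": {"feb_high_close": 51.00, "friday_close": 38.76},
}

for ticker, p in prices.items():
    decline = (p["friday_close"] / p["feb_high_close"] - 1) * 100
    print(f"{ticker}: {decline:+.1f}%")
# HTZ: -37.0%
# CCL: -24.0%
```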

It is, of course, possible that the market was in a mild bubble in early February, and the virus merely triggered a return to sanity. There were enough high-priced stocks that I’ll guess that explains a little of what happened. Hertz and Avis are arguably high-risk stocks due to the risks associated with the upcoming transition to robocars. But the others that I listed did not at all fit my stereotype of overpriced stocks.

And the stocks that I had been thinking were overpriced, in industries that don’t look to be especially hurt by the virus, declined roughly in line with the market.

Outside of travel-related stocks, it mostly looks like a general shift in preferences to more cash, and away from stocks. I.e. a general increase in risk aversion.

The gold market is confused as to which direction a pandemic should move it. I agree. I’m confused as to how gold should react.

What scenario could explain the decline? Maybe a two month shutdown of 90+% of U.S. air travel? A multi-year reduction in travel of 10%? It would take something like that for the market reaction to make much sense. Yet I’d bet at roughly 10:1 odds against any one of those scenarios happening.

Metaculus is currently predicting 195k COVID-19 deaths this year.

Metaculus forecast trends ought to look a good deal like random walks, yet the charts I see there look more like exponential growth.

Metaculus is likely to be a more objective source of information than the news media storyteller industry or social media. But it’s likely more susceptible to selection effects and hype than are markets that have lots of money at stake. (Metaculus has token prizes, structured in a way that may encourage more extreme bets than a regular market would).

None of this implies much about whether other reactions to the virus are sensible. There’s a much different asymmetry between getting sick versus being paranoid than there is between losing money due to a pandemic versus losing money due to selling on a false alarm.

I’ve got about a month’s supply of food, but that’s my normal preparation for a variety of disasters. I have no special insights about whether the current risks justify staying home.

P.S. Chinese stocks are supporting the view that the situation in China has improved over the past month.

Eric Drexler has published a book-length paper on AI risk, describing an approach that he calls Comprehensive AI Services (CAIS).

His primary goal seems to be reframing AI risk discussions to use a rather different paradigm than the one that Nick Bostrom and Eliezer Yudkowsky have been promoting. (There isn’t yet any paradigm that’s widely accepted, so this isn’t a Kuhnian paradigm shift; it’s better characterized as an amorphous field that is struggling to establish its first paradigm.) Dueling paradigms seem to be the best that the AI safety field can manage to achieve for now.

I’ll start by mentioning some important claims that Drexler doesn’t dispute:

  • an intelligence explosion might happen somewhat suddenly, in the fairly near future;
  • it’s hard to reliably align an AI’s values with human values;
  • recursive self-improvement, as imagined by Bostrom / Yudkowsky, would pose significant dangers.

Drexler likely disagrees about some of the claims made by Bostrom / Yudkowsky on those points, but he shares enough of their concerns about them that those disagreements don’t explain why Drexler approaches AI safety differently. (Drexler is more cautious than most writers about making any predictions concerning these three claims).

CAIS isn’t a full solution to AI risks. Instead, it’s better thought of as an attempt to reduce the risk of world conquest by the first AGI that reaches some threshold, preserve existing corrigibility somewhat past human-level AI, and postpone the need for a permanent solution until we have more intelligence.

Continue Reading

Descriptions of AI-relevant ontological crises typically choose examples where it seems moderately obvious how humans would want to resolve the crises. I describe here a scenario where I don’t know how I would want to resolve the crisis.

I will incidentally express distaste for some philosophical beliefs.

Suppose a powerful AI is programmed to have an ethical system with a version of the person-affecting view: a version which says only persons who exist are morally relevant, and “exist” refers only to the present time. [Note that the most sophisticated advocates of the person-affecting view are willing to treat future people as real, and only object to comparing those people to other possible futures where those people don’t exist.]

Suppose also that it is programmed by someone who thinks in Newtonian models. Then something happens which prevents the programmer from correcting any flaws in the AI. (For simplicity, I’ll say the programmer dies, and the AI was programmed to accept changes to its ethical system only from the programmer.)

What happens when the AI tries to make ethical decisions about people in distant galaxies (hereinafter “distant people”) using a model of the universe that works like relativity?

Continue Reading

Book review: Artificial Intelligence Safety and Security, by Roman V. Yampolskiy.

This is a collection of papers, with highly varying topics, quality, and importance.

Many of the papers focus on risks that are specific to superintelligence, some assuming that a single AI will take over the world, and some assuming that there will be many AIs of roughly equal power. Others focus on problems that are associated with current AI programs.

I’ve tried to arrange my comments on individual papers in roughly descending order of how important the papers look for addressing the largest AI-related risks, while also sometimes putting similar topics in one group. The result feels a little more organized than the book, but I worry that the papers are too dissimilar to be usefully grouped. I’ve ignored some of the less important papers.

The book’s attempt at organizing the papers consists of dividing them into “Concerns of Luminaries” and “Responses of Scholars”. Alas, I see few signs that many of the authors are even aware of what the other authors have written, much less that the later papers are attempts at responding to the earlier papers. It looks like the papers are mainly arranged in order of when they were written. There’s a modest cluster of authors who agree enough with Bostrom to constitute a single scientific paradigm, but half the papers demonstrate about as much of a consensus on what topic they’re discussing as I would expect to get from asking medieval peasants about airplane safety.

Continue Reading

Book review: Warnings: Finding Cassandras to Stop Catastrophes, by Richard A. Clarke and R.P. Eddy.

This book is a moderately addictive, softcore version of outrage porn. Only small portions of the book attempt to describe how to recognize which warnings are valuable and which should be ignored. Large parts of the book seem written mainly to tell us which of the people portrayed in the book we should be outraged at, and which we should praise.

Normally I wouldn’t get around to finishing and reviewing a book with this little informational value, but this one was entertaining enough that I couldn’t stop.

The authors show above-average competence at selecting which warnings to investigate, but they don’t convince me that they can articulate how they accomplished that.

I’ll start with warnings on which I have the most expertise. I’ll focus a majority of my review on their advice for deciding which warnings matter, even though that may give the false impression that much of the book is about such advice.
Continue Reading

Book review: Feeding Everyone No Matter What: Managing Food Security After Global Catastrophe, by David Denkenberger and Joshua M. Pearce.

I have very mixed feelings about this book.

It discusses some moderately unlikely risks – scenarios where most crops fail worldwide for several years, due to inadequate sunlight.

It’s hard to feel emotionally satisfied about a tolerable but uncomfortable response to disasters, when ideally we’d prevent those disasters in the first place. And the disasters seem sufficiently improbable that I don’t feel comfortable thinking frequently about them. But we don’t yet have a foolproof way of preventing catastrophic climate changes, and there are things we can do to survive them. So logic tells me that we ought to devote a few resources to preparing.

The authors sketch a set of strategies which could conceivably ensure that nobody starves (Wikipedia has a good summary). There might even be a bit of room for mistakes, but not much.

The book focuses on the technical problems, with the hope that others will solve the political problems. This makes some sense, as the feasibility of various political solutions is very different if the best political strategy saves 95% of people than if it saves 30%.

It’s a bit disturbing that this seems to be the most expert analysis available for these scenarios – the authors appear fairly competent, but seem to have done less research than I expect from a technical book. They may have made the right choice to publish early, in order to attract more support. I’m mainly disturbed by what the lack of expertise says about societal competence.

The book leaves me with lots of uncertainty about how hard it is to improve on the meager preparations that have been done so far.

For example, I expect there are a moderate number of people who know something about rapidly scaling up mushroom production. Are they already capable of handling the needed changes? Or are drastically different preparations needed? It’s hard for me to tell without developing significant expertise in growing mushrooms.

There’s probably an urgent need for a bit more preparation for extracting nutrition from ordinary leaves. In particular, I expect it to matter what kinds of leaves to use. The book mostly talks of leaves from trees, but careless people in my area might include poison hemlock leaves, with disastrous results. A small amount of advance preparation should be able to cause large reductions in this kind of mistake.

Another simple preparation that’s needed is a better awareness of where to look in a crisis. The news media in particular ought to be able to quickly find this kind of information even when they’re overwhelmed with problems.

I’m guessing that a few hundred thousand dollars of additional effort in this area would have high expected value, with strongly diminishing returns after that. I’ve donated a small amount to ALLFED, and I encourage you to donate a little bit as well.

Or, why I don’t fear the p-zombie apocalypse.

This post analyzes concerns about how evolution, in the absence of a powerful singleton, might, in the distant future, produce what Nick Bostrom calls a “Disneyland without children”. I.e. a future with many agents, whose existence we don’t value because they are missing some important human-like quality.

The most serious description of this concern is in Bostrom’s The Future of Human Evolution. Bostrom is cautious enough that it’s hard to disagree with anything he says.

Age of Em has prompted a batch of similar concerns. Scott Alexander at SlateStarCodex has one of the better discussions (see section IV of his review of Age of Em).

People sometimes sound like they want to use this worry as an excuse to oppose the age of em scenario, but it applies to just about any scenario with human-in-a-broad-sense actors. If uploading never happens, biological evolution could produce slower paths to the same problem(s) [1]. Even in the case of a singleton AI, the singleton will need to solve the tension between evolution and our desire to preserve our values, although in that scenario it’s more important to focus on how the singleton is designed.

These concerns often assume something like the age of em lasts forever. The scenario which Age of Em analyzes seems unstable, in that it’s likely to be altered by stranger-than-human intelligence. But concerns about evolution only depend on control being sufficiently decentralized that there’s doubt about whether a central government can strongly enforce rules. That situation seems sufficiently stable to be worth analyzing.

I’ll refer to this thing we care about as X (qualia? consciousness? fun?), but I expect people will disagree on what matters for quite some time. Some people will worry that X is lost in uploading, others will worry that some later optimization process will remove X from some future generation of ems.

I’ll first analyze scenarios in which X is a single feature (in the sense that it would be lost in a single step). Later, I’ll try to analyze the other extreme, where X is something that could be lost in millions of tiny steps. Neither extreme seems likely, but I expect that analyzing the extremes will illustrate the important principles.

Continue Reading

NGDP targeting has been gaining popularity recently. But targeting market-based inflation forecasts will be about as good under most conditions [1], and we have good markets that forecast the U.S. inflation rate [2].

Those forecasts have a track record that starts in 2003. The track record seems quite consistent with my impressions about when the Fed should have adopted a more inflationary policy (to promote growth and to get inflation expectations up to 2% [3]) and when it should have adopted a less inflationary policy (to avoid fueling the housing bubble). It’s probably a bit controversial to say that the Fed should have had a less inflationary policy from February through July or August of 2008. But my impression (from reading the stock market) is that NGDP futures would have said roughly the same thing. The inflation forecasts sent a clear signal starting in very early September 2008 that Fed policy was too tight, and that’s about when other forms of hindsight switch from muddled to saying clearly that Fed policy was dangerously tight.

Why do I mention this now? The inflation forecast dropped below 1 percent two weeks ago for the first time since May 2008. So the Fed’s stated policies conflict with what a more reputable source of information says the Fed will accomplish. This looks like what we’d see if the Fed was in the process of causing a mild recession to prevent an imaginary increase in inflation.
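
One common market-based measure of expected inflation is the TIPS breakeven rate: the nominal Treasury yield minus the real yield on an inflation-protected Treasury of the same maturity. Whether that is exactly the measure behind the link above is my assumption, and the yields below are hypothetical placeholders; this is just a minimal sketch of the calculation:

```python
# TIPS breakeven inflation: nominal yield minus real (TIPS) yield, same maturity.
# The yields below are hypothetical placeholders, not quotes from any date.
nominal_5yr_yield = 1.70   # percent, 5-year nominal Treasury (hypothetical)
tips_5yr_yield    = 0.75   # percent, 5-year TIPS (hypothetical)

breakeven_inflation = nominal_5yr_yield - tips_5yr_yield
print(f"5-year breakeven inflation: {breakeven_inflation:.2f}%")  # 0.95%, i.e. below 1%
```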

What does the Fed think it’s doing?

  • It might be relying on interest rates to estimate what its policies will produce. Interest rates this low after 6.5 years of economic expansion resemble historical examples of loose monetary policy more than they resemble the stereotype of tight monetary policy [4].
  • The Fed could be following a version of the Taylor Rule (a sketch of the standard rule appears after this list). Given standard guesses about the output gap and equilibrium real interest rate [5], the Taylor Rule says interest rates ought to be rising now. The Taylor Rule has usually been at least as good as actual Fed policy at targeting inflation indirectly through targeting interest rates. But that doesn’t explain why the Fed targets interest rates when that conflicts with targeting market forecasts of inflation.
  • The Fed could be influenced by status quo bias: interest rates and unemployment are familiar types of evidence to use, whereas unbiased inflation forecasts are slightly novel.
  • Could the Fed be reacting to money supply growth? Not in any obvious way: the monetary base stopped growing about two years ago, M1 and MZM growth are slowing slightly, and M2 accelerated recently (but only after much of the Fed’s tightening).
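
For reference, here is a minimal sketch of the standard Taylor (1993) rule mentioned in the second bullet above; the inflation, output-gap, and equilibrium-rate inputs are hypothetical stand-ins for the "standard guesses" that note [5] says are contested:

```python
# Standard Taylor (1993) rule:
#   i = r* + pi + 0.5*(pi - pi_target) + 0.5*output_gap
def taylor_rule(inflation, output_gap, r_star=2.0, inflation_target=2.0):
    return r_star + inflation + 0.5 * (inflation - inflation_target) + 0.5 * output_gap

# Hypothetical inputs, not estimates from any particular date:
implied_rate = taylor_rule(inflation=1.3, output_gap=-1.0, r_star=2.0)
print(f"Taylor-rule interest rate: {implied_rate:.2f}%")  # about 2.45% under these guesses
```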

Scott Sumner’s rants against reasoning from interest rates explain why the Fed ought to be embarrassed to use interest rates to figure out whether Fed policy is loose or tight.

Yet some institutional incentives encourage the Fed to target interest rates rather than predicted inflation. It feels like an appropriate use of high-status labor to set interest rates once every few weeks based on new discussion of expert wisdom. Switching to more or less mechanical responses to routine bond price changes would undercut much of the reason for believing that the Fed’s leaders are doing high-status work.

The news media storytellers would have trouble finding entertaining ways of reporting adjustments that consisted of small hourly responses to bond market changes. Whereas decisions made a few times per year are uncommon enough to be genuinely newsworthy. And meetings where hawks struggle against doves fit our instinctive stereotype for important news better than following a rule does. So I see little hope that storytellers will want to abandon their focus on interest rates. Do the Fed governors follow the storytellers closely enough that the storytellers’ attention strongly affects the Fed’s attention? Would we be better off if we could ban the Fed from seeing any source of daily stories?

Do any other interest groups prefer stable interest rates over stable inflation rates? I expect a wide range of preferences among Wall Street firms, but I’m unaware which preferences are dominant there.

Consumers presumably prefer that their banks, credit cards, etc have predictable interest rates. But I’m skeptical that the Fed feels much pressure to satisfy those preferences.

We need to fight those pressures by laughing at people who claim that the Fed is easing when markets predict below-target inflation (as in the fall of 2008) or that the Fed is tightening when markets predict above-target inflation (e.g. much of 2004).

P.S. – The risk-reward ratio for the stock market today is much worse than normal. I’m not as bearish as I was in October 2008, but I’ve positioned myself much more cautiously than normal.

Notes:

[1] – They appear to produce nearly identical advice under most conditions that the U.S. has experienced recently.

I expect inflation targeting to be modestly safer than NGDP targeting. I may get around to explaining my reasons for that in a separate post.

[2] – The link above gives daily forecasts of the 5 year CPI inflation rate. See here for some longer time periods.

The markets used to calculate these forecasts have enough liquidity that it would be hard for critics to claim that they could be manipulated by entities less powerful than the Fed. I expect some critics to claim that anyway.

[3] – I’m accepting the standard assumption that 2% inflation is desirable, in order to keep this post simple. Figuring out the optimal inflation rate is too hard for me to tackle any time soon. A predictable inflation rate is clearly desirable, which creates some benefits to following a standard that many experts agree on.

[4] – provided that you don’t pay much attention to Japan since 1990.

[5] – guesses which are error-prone and, if a more direct way of targeting inflation is feasible, unnecessary. The conflict between the markets’ inflation forecast and the Taylor Rule’s implication that near-zero interest rates would cause inflation to rise suggests that we should doubt those guesses. I’m pretty sure that equilibrium interest rates are lower than the standard assumptions. I don’t know what to believe about the output gap.

Hive Mind

Book review: Hive Mind: How your nation’s IQ matters so much more than your own, by Garett Jones.

Hive Mind is a solid, easy-to-read discussion of why high-IQ nations are more successful than low-IQ nations.

There’s a pretty clear correlation between national IQ and important results such as income. It’s harder to tell how much of the correlation is caused by IQ differences. The Flynn Effect hints that high IQ could instead be a symptom of increased wealth.

The best evidence for IQ causing wealth (more than being caused by wealth) is that Hong Kong and Taiwan had high IQs back in the 1960s, before becoming rich.

Another piece of similar evidence (which Hive Mind doesn’t point to) is that Saudi Arabia is the most conspicuous case of a country that became wealthy via luck. Its IQ is lower than countries of comparable wealth, and lower than neighbors of similar culture/genes.

Much of the book is devoted to speculations about how IQ could affect a nation’s success.

High IQ is associated with more patience, probably due to better ability to imagine the future:

Imagine two societies: one in which the future feels like a dim shadow, the other in which the future seems as real as now. Which society will have more restaurants that care about repeat customers? Which society will have more politicians who turn down bribes because they worry about eventually getting caught?

Hive Mind describes many possible causes of the Flynn Effect, without expressing much of a preference between them. Flynn’s explanation still seems strongest to me. The most plausible alternative that Hive Mind mentions is anxiety and stress from poverty-related problems distracting people during tests (and possibly also from developing abstract cognitive skills). But anxiety / stress explanations seem less likely to produce the Hong Kong/Taiwan/Saudi Arabia results.

Hive Mind talks about the importance of raising national IQ, especially in less-developed countries. That goal would be feasible if differences in IQ were mainly caused by stress or nutrition. Flynn’s cultural explanation points to causes that are harder for governments or charities to influence (how do you legislate an increased desire to think abstractly?).

What about the genetic differences that contribute to IQ differences? The technology needed to fix that contributing factor to low IQs is not ready today, but looks near enough that we should pay attention. Hive Mind implies [but avoids saying] that potentially large harm from leaving IQ unchanged could outweigh the risks of genetic engineering. Fears about genetic engineering of IQ often involve fears of competition, but Hive Mind shows that higher IQ means more cooperation. More cooperation suggests less war, less risk of dangerous nanotech arms races, etc.

It shouldn’t sound paradoxical to say that aggregate IQ matters more than individual IQ. It should start to seem ordinary if more people follow the example of Hive Mind and focus more attention on group success than on individual success as they relate to IQ.