Reading Superintelligence prompted me to think more carefully about the speed of AGI takeoff. There are many ways to get somewhat relevant evidence. I’ll focus on two categories:
- How has intelligence increased in the past?
- How has software been improved in the past?
Methods
I will focus on two important dimensions on which people disagree: the likelihood of a few big insights versus many small insights, and the extent to which we should expect diminishing returns to some measure of effort.
Note that this debate can sound quite different depending on whether speed is measured in standard calendar time or in subjective time as observed by the fastest whole brain emulation(s). I plan to use the latter, as it seems more useful for estimating the extent to which frontrunner advantage matters.
Eliezer complains about “reference class tennis” – people treating one “good” reference class as sufficient reason to pay little attention to others. To the extent that he’s defending inside views, he seems to be flatly opposing lessons on which the heuristics and biases literature is clear.
But to the extent he’s complaining about Robin claiming to have found “the” reference class to use, he could be making a wise claim that Robin (or a strawman of him) is taking a hedgehog-like approach. Taking an outside view doesn’t say we should only use one reference class.
We’ve got Tetlock’s Expert Political Judgment documenting that using a fox-like approach is the most important feature of good political forecasters.
Holden Karnofsky’s sequence thinking versus cluster thinking distinction makes a similar point for another domain – relying on a single model makes us more vulnerable to the effects of a single big error.
Also, my own experience with models I use to evaluate stock market decisions has driven me away from models that I find tempting but that allow a single belief or number to dominate my conclusion, and toward a more clearly fox-like approach where multiple lines of reasoning need to support any decision.
So this post will try to use as many reference classes as possible to evaluate outside views.
Evidence from Biological Intelligence
Robin Hanson hypothesizes (see Long-Term Growth As A Sequence of Exponential Modes) mostly steady exponential growth with infrequent big increases in the growth rate, and shows evidence consistent with that hypothesis. He analyzes two million years of human history, with footnotes about extending it back to the Cambrian explosion.
If AGI takeoff causes the next transition to a faster growth rate, this model suggests a takeoff time of a few years. If (as seems more likely in this model) it causes the transition after that, I’d expect a fast takeoff in calendar time, but with plausible assumptions about the speed of whole brain emulations between transitions I’d expect a subjective takeoff time of more than a decade. Under this model growth is mostly a steady rate of small insights, but the transitions that Robin looks at don’t differ enough from predictions made by fast takeoff models to justify more than a weak prediction of a slow to moderate takeoff.
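Here is a minimal sketch of that calendar-versus-subjective conversion. Both numbers are illustrative assumptions of mine, not values taken from Robin’s model:

```python
# Minimal calendar-time vs. subjective-time conversion. Both numbers below
# are illustrative assumptions, not values from Robin's model.
transition_calendar_years = 0.2   # assume a takeoff lasting a few calendar months
em_speedup = 100                  # assume the fastest ems run 100x human speed

subjective_years = transition_calendar_years * em_speedup
print(f"{transition_calendar_years} calendar years is roughly "
      f"{subjective_years:.0f} subjective years for the fastest ems")
```

Under assumptions like these, a takeoff that looks fast in calendar time can still leave the fastest emulations more than a decade of subjective time in which to react.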
Eliezer suggests paying more attention to rarer transitions such as the first replicator and human language. What we know of how software differs from wetware does suggest these are more appropriate reference classes than the ones Robin selects. This is some evidence against the value of Robin’s approach. It’s unclear whether the human language transition implies faster or slower growth than Robin’s approach does. Robin attempts (using poor data) to cover the period in which that transition happened, without showing a change in growth then. Conjectures that something sudden happened when language started don’t tell me whether it caused a fast change in intelligence or whether it enabled many small insights that steadily increased intelligence. When I try to compare the replicator transition to AGI, I don’t see how to model it as increasing the growth of some prior phenomenon. I imagine any comparable transition would involve growth on some new dimension sufficiently different from intelligence that my reasons for expecting AGI don’t lead me to expect growth on this mystery dimension.
Another source of evidence about past intelligence increases is the Flynn Effect. Flynn’s book on the subject, What is Intelligence?, attributes this to liberation from concrete thought due to cultural changes. I expect this process to have a somewhat different basis from the longer-term change that distinguished humans from other primates, in that the Flynn Effect hasn’t involved Darwinian processes and is more clearly a software-like behavior change, with intelligence becoming more general-purpose. The pattern shows a steady rate of improvement that strongly suggests many small insights.
Evidence from Software
Comments from several quasi-experts suggest that data compression is one of the best software models for intelligence:
- AI researcher Marcus Hutter established a prize for improved data compression to encourage AI research.
- AI researcher Eric Baum emphasizes compact representations of the world as an important part of intelligence.
- Max Tegmark mentions in Our Mathematical Universe that one of his favorite definitions of science is data compression.
Baum predicts a slow takeoff, while Tegmark suggests in his book a takeoff of “hours or seconds”. This indicates that a focus on data compression isn’t strongly associated with either extreme of the foom debate.
The Calgary Corpus provides the longest-running history of data compression progress that I’ve seen. It indicates strongly diminishing returns over time (20% improvement over the first 7 years, dropping to less than 2% over the 4-year period between the last 2 entries). It provides less clear evidence about the value of big insights.
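As a rough check on how sharply those returns are diminishing, here is the annualized version of the numbers above (treating them as total fractional improvements in compression; the exact metric is my assumption):

```python
def annualized(total_improvement, years):
    """Convert a total fractional improvement over a period into a per-year rate."""
    return 1 - (1 - total_improvement) ** (1 / years)

early = annualized(0.20, 7)  # ~20% improvement over the first 7 years
late = annualized(0.02, 4)   # <2% improvement over the last 4-year gap
print(f"early: ~{early:.1%}/yr, late: ~{late:.1%}/yr, "
      f"slowdown: roughly {early / late:.0f}x")
```

That works out to roughly a sixfold drop in the annual rate of improvement.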
Katja Grace surveyed six other domains for evidence concerning patterns of software improvements:
- SAT solvers seem moderately general-purpose, and have good data showing a 19% per year reduction in the time taken to solve problems over three competitions in 2007, 2009, and 2011, with increasing returns (see the compounding sketch after this list). Some lower-quality data going back to 1994 show much faster improvements early on, with clear signs of diminishing returns. The somewhat erratic pattern suggests a moderate number of medium-sized insights.
- Computer chess seems to show improvements in software that are similar to hardware improvements, with no clear signs of diminishing returns. The very erratic patterns over periods of a few years show that insights are fewer and bigger than for hardware, but not big compared to the overall improvement over the last 50 years. Computer Go probably has a similar pattern, but Katja found less data.
- Several physics simulations and linear programming problems show patterns that are probably similar, with one report of little change in linear programming since 2004.
- Natural language understanding and computer vision are said to be experiencing diminishing returns. It’s unclear how well the numbers used to measure them are connected to power or value. One large jump in computer vision performance is reported. My impression is that insights producing large jumps account for only a modest fraction of improvements.
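Here is the compounding sketch referred to in the SAT bullet above. The 2-year hardware doubling time used for comparison is an assumed Moore’s-Law-style benchmark, not something taken from Katja’s data:

```python
import math

annual_reduction = 0.19                      # 19%/yr reduction in solve time
years = 4                                    # 2007 through 2011
software_speedup = 1 / (1 - annual_reduction) ** years
halving_time = math.log(2) / -math.log(1 - annual_reduction)
hardware_speedup = 2 ** (years / 2)          # assumed doubling every 2 years

print(f"software: ~{software_speedup:.1f}x faster over {years} years "
      f"(solve time halves every ~{halving_time:.1f} years)")
print(f"hardware benchmark: ~{hardware_speedup:.1f}x over the same period")
```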
Since the brainpower going into writing software has been increasing, probably at an exponential rate due to exponential growth in industry size, I expect that exponential progress in a typical field implies returns that are neither clearly increasing nor clearly decreasing.
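Here is a toy version of that reasoning. If cumulative effort and measured performance both grow exponentially, performance is a power law in effort, and the ratio of the two growth rates determines whether returns look increasing, constant, or diminishing. The specific rates below are arbitrary illustrations:

```python
import math

g = 0.07   # assumed annual growth rate of effort (e.g. industry size)
r = 0.07   # assumed annual growth rate of measured performance

# With E(t) = E0*exp(g*t) and P(t) = P0*exp(r*t), we get P = P0*(E/E0)**(r/g):
# r/g > 1 looks like increasing returns to effort, r/g < 1 like diminishing
# returns, and r/g near 1 like roughly constant returns per unit of effort.
for t in (0, 10, 20, 30):
    effort = math.exp(g * t)
    performance = math.exp(r * t)
    print(f"year {t:2d}: effort x{effort:5.2f}, performance x{performance:5.2f}, "
          f"exponent r/g = {r / g:.2f}")
```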
The most interesting part of the chapter on takeoff speed in Superintelligence described a model of software with two subsystems – a collection of domain-specific techniques, and a general-purpose reasoner, where improvements in the latter are masked by the domain-specific techniques being better.
This led me to think about Katja’s claim that software speedups cause nearly as much improvement as Moore’s Law in the domains she looked at. If that’s something more than selection effects (software with interesting trends getting more attention), then my intuition suggests a tendency for general-purpose software to be harder to speed up than special-purpose software. The evidence mentioned here seems roughly consistent with that hypothesis. So I consider the scenario from Superintelligence unlikely.
Evaluating the evidence
The data compression evidence appears to provide the best combination of similarity to AGI and well-quantified data. It clearly discourages us from expecting anything close to increasing returns, and weakly suggests neither extreme in the big versus small insight dimension is plausible.
The Flynn effect seems like the second best type of evidence, combining increases in intelligence at the human-equivalent level with clear data measuring an important subset of intelligence. It provides evidence against strongly increasing or diminishing returns, and provides clear evidence against the few big insights view.
The other evidence suggests moderate and fairly uncertain forecasts about takeoff speed. The difference between the evidence from software and the evidence from biological intelligence suggests an important one-time transition to faster growth when software reaches human-level intelligence.
There is much more evidence from software that ought to be able to increase our confidence in these forecasts, but it doesn’t seem to be available in convenient forms.
Postscripts
Whether intelligence can work like a single skill influences not just the work needed to improve it, but also how big an IQ-like difference would be needed for an AI to go from having abilities similar to a single human’s to the ability to conquer the world. The less uniform the cognitive improvement, the longer the AI will have cognitive weaknesses that offer some chance of humans or less general-purpose software finding an Achilles’ heel.
Humans have weaknesses due to heuristics that cause biases, and those weaknesses appear to be due to a tradeoff between speed and accuracy. I’m unaware of any reason to expect human-equivalent AGIs to avoid this tradeoff, so I expect AGIs which are on average equal to or somewhat more powerful than humans to need significant improvement before they can conquer a world of humans plus almost-human-level AIs.
Even though this post doesn’t say much about Nick Bostrom directly, I wrote it to explain why I disagree with him. With others who attach a high probability to AI foom, I expect I disagree about a larger number of claims which are hard to disentangle.
Eliezer tries to apply the Lucas critique (“it is naive to try to predict the effects of a change in economic policy entirely on the basis of relationships observed in historical data”) against the outside view. I don’t see how the Lucas critique says much about the outside view. It’s an argument against models without microfoundations, and against models which depend on faulty microfoundations.
I’ve tried to focus on two dimensions of microfoundations on which AGI takeoff models differ. The slowest takeoff models assume that increased intelligence requires solving a diverse set of small problems with varying difficulties. Because agents have some tendency to prioritize the easiest problems, the remaining problems are on average harder.
In slow-to-moderate takeoff models that are more like Robin’s model, something like “technology” increases (in a steady series of small steps) agents’ ability to solve problems. These models resemble many growth models used by economists, and the model of intelligence that Eric Baum describes in What is Thought?. We have examples of feedback loops where improved general-purpose abilities were used to improve general-purpose abilities:
- Sexual selection improving the way evolution selects better intelligence.
- Scientific methods applied to creating better scientific methods (e.g. adding randomized controlled trials, and ways of detecting publication bias).
These examples provide evidence about recursive self-improvement.
The paper How Intelligible is Intelligence? is also worth reading.
Comments
“The difference between the evidence from software and the evidence from biological intelligence suggests an important one-time transition to faster growth when software reaches human-level intelligence.” Could you explain your reasoning there a bit more?
Robin,
Some of the evidence about software suggests it’s improving faster than I expected. There are enough domains experiencing close to Moore’s-Law-like improvements on top of hardware improvements that I’d expect an AI that’s a collection of many domain-specific modules to improve faster than Moore’s Law in many domains, even before accounting for the increased ability to copy the intelligence.
I’m being vague about whether that implies something faster or slower than what you’ve suggested for the next transition, since I’m rather uncertain about that.