Artificial Intelligence

A new paper titled When Will AI Exceed Human Performance? Evidence from AI Experts reports some bizarre results. From the abstract:

Researchers believe there is a 50% chance of AI outperforming humans in all tasks in 45 years and of automating all human jobs in 120 years, with Asian respondents expecting these dates much sooner than North Americans.

So we should expect a 75 year period in which machines can perform all tasks better and more cheaply than humans, but can’t automate all occupations. Huh?

I suppose there are occupations that consist mostly of having status rather than doing tasks (queen of England, or waiter at a classy restaurant that won’t automate service due to the high status of serving food the expensive way). Or occupations protected by law, such as gas station attendants who pump gas in New Jersey, decades after most drivers switched to pumping for themselves.

But I’d be rather surprised if machine learning researchers would think of those points when answering a survey in connection with a machine learning conference.

Maybe the actual wording of the survey questions caused a difference that got lost in the abstract? Hmmm …

“High-level machine intelligence” (HLMI) is achieved when unaided machines can accomplish every task better and more cheaply than human workers

versus

when all occupations are fully automatable. That is, when for any occupation, machines could be built to carry out the task better and more cheaply than human workers.

I tried to convince myself that the second version got interpreted as referring to actually replacing humans, while the first version referred to merely being qualified to replace humans. But the more I compared the two, the more that felt like wishful thinking. If anything, the “unaided” in the first version should make that version look farther in the future.

Can I find any other discrepancies between the abstract and the details? The 120 years in the abstract turns into 122 years in the body of the paper. So the authors seem to be downplaying the weirdness of the results.

There’s even a prediction of a 50% chance that the occupation “AI researcher” will be automated in about 88 years (I’m reading that from figure 2; I don’t see an explicit number for it). I suspect some respondents said this would take longer than for machines to “accomplish every task better and more cheaply”, but I don’t see data in the paper to confirm that [1].

A more likely hypothesis is that researchers alter their answers based on what they think people want to hear. Researchers might want to convince their funders that AI deals with problems that can be solved within the career of the researcher [2], while also wanting to reassure voters that AI won’t create massive unemployment until the current generation of workers has retired.

That would explain the general pattern of results, although the magnitude of the effect still seems strange. And it would imply that most machine learning researchers are liars, or have so little understanding of when HLMI will arrive that they don’t notice a 50% shift in their time estimates.

The ambiguity in terms such as “tasks” and “better” could conceivably explain confusion over the meaning of HLMI. I keep intending to write a blog post that would clarify concepts such as human-level AI and superintelligence, but then procrastinating because my thoughts on those topics are unclear.

It’s hard to avoid the conclusion that I should reduce my confidence in any prediction of when AI will reach human-level competence. My prior 90% confidence interval was something like 10 to 300 years. I guess I’ll broaden it to maybe 8 to 400 years [3].

P.S. – See also Katja’s comments on prior surveys.

[1] – the paper says most participants were asked the question that produced the estimate of 45 years to HLMI, the rest got the question that produced the 122 year estimate. So the median for all participants ought to be less than about 84 years, unless there are some unusual quirks in the data.
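To illustrate the arithmetic behind that bound, here is a minimal sketch with made-up numbers (not the survey’s data): the pooled median of the two differently-framed subgroups lands between the two subgroup medians, and closer to the larger group’s.

```python
import statistics

# Made-up numbers for illustration only -- not the survey's data.
# Larger subgroup: answers to the "HLMI" framing (median ~45 years).
group_a = [20, 30, 40, 45, 45, 50, 60, 90]
# Smaller subgroup: answers to the "full automation" framing (median ~122 years).
group_b = [80, 100, 122, 150, 200]

pooled = group_a + group_b
print(statistics.median(group_a))  # 45.0
print(statistics.median(group_b))  # 122
print(statistics.median(pooled))   # 60 -- between the two, nearer the larger group
```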

[2] – but then why do experienced researchers say human-level AI is farther in the future than new researchers do, when the new researchers presumably will be around longer? Maybe the new researchers are chasing fads or get-rich-quick schemes, and will mostly quit before becoming senior researchers?

[3] – years of subjective time as experienced by the fastest ems. So probably nowhere near 400 calendar years.

Book review: The Measure of All Minds: Evaluating Natural and Artificial Intelligence, by José Hernández-Orallo.

Much of this book consists of surveys of the psychometric literature. But the best parts of the book involve original results that bring more rigor and generality to the field. Those parts approach the quality that I saw in Judea Pearl’s Causality and E.T. Jaynes’ Probability Theory, but Measure of All Minds achieves a smaller fraction of its author’s ambitions, and is sometimes poorly focused.

Hernández-Orallo has an impressive ambition: measure intelligence for any agent. The book mentions a wide variety of agents, such as normal humans, infants, deaf-blind humans, human teams, dogs, bacteria, Q-learning algorithms, etc.

The book is aimed at a narrow and fairly unusual target audience. Much of it reads like it’s directed at psychology researchers, but the more original parts of the book require thinking like a mathematician.

The survey part seems pretty comprehensive, but I wasn’t satisfied with his ability to distinguish the valuable parts of that literature from the rest (although he did a good job of ignoring the politicized rants that plague many discussions of this subject).

For nearly the first 200 pages of the book, I was mostly wondering whether the book would address anything important enough for me to want to read to the end. Then I reached an impressive part: a description of an objective IQ-like measure. Hernández-Orallo offers a test (called the C-test) which:

  • measures a well-defined concept: sequential inductive inference,
  • defines the correct responses using an objective rule (based on Kolmogorov complexity),
  • with essentially no arbitrary cultural bias (the main feature that looks like an arbitrary cultural bias is the choice of alphabet and its order)[1],
  • and gives results in objective units (based on Levin’s Kt; see the rough definition sketched below).
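As a rough sketch of that last unit (my paraphrase, not the book’s exact formulation): for a fixed universal machine $U$, Levin’s measure is

$$Kt(x) = \min_{p \,:\, U(p) = x} \big( \ell(p) + \log_2 \mathrm{time}(U, p) \big),$$

i.e. the length of a program that produces $x$ plus the log of the time it takes to run. Charging for running time is what keeps the measure approximable by bounded search, unlike plain Kolmogorov complexity, which is uncomputable.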

Yet just when I got my hopes up for a major improvement in real-world IQ testing, he points out that what the C-test measures is too narrow to be called intelligence: there’s a 960-line Perl program that exhibits human-level performance on this kind of test, without resembling a breakthrough in AI.

I’ve recently noticed some possibly important confusion about machine learning (ML)/deep learning. I’m quite uncertain how much harm the confusion will cause.

On MIRI’s Intelligent Agent Foundations Forum:

If you don’t do cognitive reductions, you will put your confusion in boxes and hide the actual problem. … E.g. if neural networks are used to predict math, then the confusion about how to do logical uncertainty is placed in the black box of “what this neural net learns to do”

On SlateStarCodex:

Imagine a future inmate asking why he was denied parole, and the answer being “nobody knows and it’s impossible to find out even in principle” … (DeepMind employs a Go master to help explain AlphaGo’s decisions back to its own programmers, which is probably a metaphor for something)

A possibly related confusion, from a conversation that I observed recently: philosophers have tried to understand how concepts work for centuries, but have made little progress; therefore deep learning isn’t very close to human-level AGI.

I’m unsure whether any of the claims I’m criticizing reflect actually mistaken beliefs, or whether they’re just communicated carelessly. I’m confident that at least some people at MIRI are wise enough to avoid this confusion [1]. I’ve omitted some ensuing clarifications from my description of the deep learning conversation – maybe if I remembered those sufficiently well, I’d see that I was reacting to a straw man of that discussion. But it seems likely that some people were misled by at least the SlateStarCodex comment.

There’s an important truth that people refer to when they say that neural nets (and machine learning techniques in general) are opaque. But that truth gets seriously obscured when rephrased as “black box” or “impossible to find out even in principle”.
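To make that concrete, here is a minimal sketch (a toy two-layer network with random weights, nothing DeepMind-scale): every number that determines the output can be printed and examined, so “impossible to find out even in principle” is wrong; what’s hard is mapping those numbers onto human-level explanations.

```python
import numpy as np

# A toy network, for illustration only: every parameter is fully inspectable,
# so nothing is hidden "in principle". The difficulty is that the numbers
# don't map cleanly onto human concepts.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), np.zeros(3)   # first layer weights/biases
W2, b2 = rng.normal(size=(3, 1)), np.zeros(1)   # second layer weights/biases

def forward(x):
    h = np.tanh(x @ W1 + b1)   # hidden activations are also inspectable
    return h @ W2 + b2

x = np.array([1.0, 0.0, -1.0, 0.5])
print(forward(x))   # the output
print(W1)           # every "reason" for the output is sitting right here...
# ...but reading off *why* the net decided something from these numbers is
# the hard part that "opaque" actually refers to.
```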

Two and a half years ago, Eliezer was (somewhat plausibly) complaining that virtually nobody outside of MIRI was working on AI-related existential risks.

This year (at EAGlobal) one of MIRI’s talks was a bit hard to distinguish from an AI safety talk given by someone with pretty mainstream AI affiliations.

What happened in that time to cause that shift?

A large change was catalyzed by the publication of Superintelligence. I’ve been mildly disappointed about how little it affected discussions among people who were already interested in the topic. But Superintelligence caused a large change in how many people are willing to express concern over AI risks. That’s presumably because Superintelligence looks sufficiently academic and neutral to make many people comfortable about citing it, whereas similar arguments by Eliezer/MIRI didn’t look sufficiently prestigious within academia.

A smaller part of the change was MIRI shifting its focus somewhat to be more in line with how mainstream machine learning (ML) researchers expect AI to reach human levels.

Also, OpenAI has been quietly shifting in a more MIRI-like direction (I’m very unclear on how big a change this is). (Paul Christiano seems to deserve some credit for both the MIRI and OpenAI shifts in strategies.)

Given those changes, it seems like MIRI ought to be able to attract more donations than before. Especially since it has demonstrated evidence of increasing competence, and also because HPMoR seemed to draw significantly more people into the community of people who are interested in MIRI.

MIRI has gotten one big grant from the Open Philanthropy Project that it probably couldn’t have gotten when mainstream AI researchers were treating MIRI’s concerns as too far-fetched to be worth commenting on. But donations from MIRI’s usual sources have stagnated.

That pattern suggests that MIRI was previously benefiting from a polarization effect, where the perception of two distinct “tribes” (those who care about AI risks versus those who promote AI) energized people to care about “their tribe”.

Whereas now there’s no clear dividing line between MIRI and mainstream researchers. Also, there’s lots of money going into other organizations that plan to do something about AI safety. (Most of those haven’t yet articulated enough of a strategy to make me optimistic that that money is well spent. I still endorse the ideas I mentioned last year in How much Diversity of AGI-Risk Organizations is Optimal?. I’m unclear on how much diversity of approaches we’re getting from the recent proliferation of AI safety organizations.)

That kind of donation pattern creates perverse incentives for charities to at least market themselves as fighting a powerful group of people, rather than (as an ideal charity would) addressing a neglected problem. Even if that marketing doesn’t distort a charity’s operations, the charity will be tempted to use counterproductive alarmism. AI risk organizations have resisted those temptations (at least recently), but it seems risky to keep tempting them.

That’s part of why I recently made a modest donation to MIRI, in spite of the uncertainty over the value of their efforts (I had last donated to them in 2009).

Book review: Notes on a New Philosophy of Empirical Science (Draft Version), by Daniel Burfoot.

Standard views of science focus on comparing theories by finding examples where they make differing predictions, and rejecting the theory that made worse predictions.

Burfoot describes a better view of science, called the Compression Rate Method (CRM), which replaces the “make prediction” step with “make a compression program”, and compares theories by how much they compress a standard (large) database.

These views of science produce mostly equivalent results(!), but CRM provides a better perspective.
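Here is a toy illustration of the comparison step (off-the-shelf compressors standing in for “theories”; note that Burfoot’s actual proposal also charges for the size of the compression program itself, which this sketch omits):

```python
import bz2, lzma, zlib

# Toy stand-in for CRM, not Burfoot's actual benchmark: treat each compressor
# as embodying a "theory" of the data, and score theories by how small the
# compressed output is on a fixed corpus.
corpus = b"the cat sat on the mat. the dog sat on the log. " * 500

scores = {
    "zlib": len(zlib.compress(corpus, 9)),
    "bz2": len(bz2.compress(corpus, 9)),
    "lzma": len(lzma.compress(corpus)),
}
for name, size in sorted(scores.items(), key=lambda kv: kv[1]):
    print(name, size)  # smaller output = the "theory" captured more structure
```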

Machine Learning (ML) is potentially science, and this book focuses on how ML will be improved by viewing its problems through the lens of CRM. Burfoot complains about the toolkit mentality of traditional ML research, arguing that the CRM approach will turn ML into an empirical science.

This should generate a Kuhnian paradigm shift in ML, with more objective measures of the research quality than any branch of science has achieved so far.

Burfoot focuses on compression as encoding empirical knowledge of specific databases / domains. He rejects the standard goal of a general-purpose compression tool. Instead, he proposes creating compression algorithms that are specialized for each type of database, to reflect what we know about topics (such as images of cars) that are important to us.

MIRI has produced a potentially important result (called Garrabrant induction) for dealing with uncertainty about logical facts.

The paper is somewhat hard for non-mathematicians to read. This video provides an easier overview, and more context.

It uses prediction markets! “It’s a financial solution to the computer science problem of metamathematics”.

It shows that we can evade disturbing conclusions such as Gödel incompleteness and the liar paradox by settling for being very confident about logically deducible facts, rather than mathematically certain of them. That’s similar to treating beliefs about empirical facts as probabilities rather than as boolean values.
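A concrete example of that kind of logical uncertainty (a standard one in this literature, not my invention): before doing the computation, it seems reasonable to assign

$$P(\text{the } 10^{100}\text{th decimal digit of } \pi \text{ is a } 7) \approx 0.1,$$

and to move that number toward 0 or 1 as more deduction gets done, rather than insisting the claim is already simply true or false.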

I’m somewhat skeptical that it will have an important effect on AI safety, but my intuition says it will produce enough benefits somewhere that it will become at least as famous as Pearl’s work on causality.

One of the weakest claims in The Age of Em was that AI progress has not been accelerating.

J Storrs Hall (aka Josh) has a hypothesis that AI progress accelerated about a decade ago due to a shift from academia to industry. (I’m puzzled why the title describes it as a coming change, when it appears to have already happened).

I find it quite likely that something important happened then, including an acceleration in the rate at which AI affects people.

I find it less clear whether that indicates a change in how fast AI is approaching human intelligence levels.

Josh points to airplanes as an example of a phase change being important.

I tried to compare AI progress to other industries which might have experienced a similar phase change, driven by hardware progress. But I was deterred by the difficulty of estimating progress in industries when they were driven by academia.

One industry I tried to compare to was photovoltaics, which seemed to be hyped for a long time before becoming commercially important (10-20 years ago?). But I see only weak signs of a phase change around 2007, from looking at Swanson’s Law. It’s unclear whether photovoltaic progress was ever dominated by academia enough for a phase change to be important.
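For reference, Swanson’s Law is the learning-curve observation that photovoltaic module prices fall by roughly 20% for each doubling of cumulative shipped volume, i.e. roughly

$$\text{price}(V) \approx \text{price}(V_0) \cdot (V / V_0)^{\log_2 0.8} \approx \text{price}(V_0) \cdot (V / V_0)^{-0.32},$$

so a genuine phase change around 2007 should show up as a sustained break from that exponent.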

Hypertext is a domain where a clear phase change happened in the early 1990s. It experienced a nearly foom-like rate of adoption when internet availability altered the problem from one that required a big company to finance the hardware and marketing to one that could be solved by simply giving away a small amount of code. But this change in adoption was not accompanied by a change in the power of hypertext software (beyond changes due to network effects). So this seems like weak evidence against accelerating progress toward human-level AI.

What other industries should I look at?

Book review: Made-Up Minds: A Constructivist Approach to Artificial Intelligence, by Gary L. Drescher.

It’s odd to call a book boring when it uses the pun “ontology recapitulates phylogeny” [1] to describe a surprising feature of its model. About 80% of the book is dull enough that I barely forced myself to read it, yet the occasional good idea persuaded me not to give up.

Drescher gives a detailed model of how Piaget-style learning in infants could enable them to learn complex concepts starting with minimal innate knowledge.

One of the most important assumptions in The Age of Em is that non-em AGI will take a long time to develop.

1.

Scott Alexander at SlateStarCodex complains that Robin rejects survey data that uses validated techniques, and instead uses informal surveys whose results better fit Robin’s biases [1]. Robin clearly explains one reason why he does that: to get the outside view of experts.

Whose approach to avoiding bias is better?

  • Minimizing sampling error and carefully documenting one’s sampling technique are two of the most widely used criteria to distinguish science from wishful thinking.
  • Errors due to ignoring the outside view have been documented to be large, yet forecasters are reluctant to use the outside view.

So I rechecked advice from forecasting experts such as Philip Tetlock and Nate Silver, and the clear answer I got was … that was the wrong question.

Tetlock and Silver mostly focus on attitudes that are better captured by the advice to be a fox, not a hedgehog.

The strongest predictor of rising into the ranks of superforecasters is perpetual beta, the degree to which one is committed to belief updating and self-improvement.

Tetlock’s commandment number 3 says “Strike the right balance between inside and outside views”. Neither Tetlock nor Silver offers hope that either more rigorous sampling of experts or dogmatically choosing the outside view over the inside view will help us win a forecasting contest.

So instead of asking who is right, we should be glad to have two approaches to ponder, and should want more. (Robin only uses one approach for quantifying the time to non-em AGI, but is more fox-like when giving qualitative arguments against fast AGI progress).

2.

What Robin downplays is that there’s no consensus of the experts on whom he relies, not even about whether progress is steady, accelerating, or decelerating.

Robin uses the median expert estimate of progress in various AI subfields. This makes sense if AI progress depends on success in many subfields. It makes less sense if success in one subfield can make the other subfields obsolete. If “subfield” means a guess about what strategy best leads to intelligence, then I expect the median subfield to be rendered obsolete by a small number of good subfields [2]. If “subfield” refers to a subset of tasks that AI needs to solve (e.g. vision, or natural language processing), then it seems reasonable to look at the median (and I can imagine that slower subfields matter more). Robin appears to use both meanings of “subfield”, with fairly similar results for each, so it’s somewhat plausible that the median is informative.
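To make the aggregation issue concrete, here is a toy sketch (the numbers are invented, not Robin’s data):

```python
import statistics

# Invented "years of progress still needed" figures, purely to illustrate
# the aggregation issue -- not estimates from Robin's data.
subfield_estimates = [30, 60, 80, 120, 200, 350, 500]

# If AGI needs success in *all* subfields (vision, language, ...), the slow
# ones dominate, and the median is a reasonable (maybe optimistic) summary.
print(statistics.median(subfield_estimates))  # 120

# If one successful strategy renders the rest obsolete, the *fastest*
# subfield is what matters, and the median badly overstates the time needed.
print(min(subfield_estimates))                # 30
```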

3.

Scott also complains that Robin downplays the importance of research spending while citing only a paper dealing with government funding of agricultural research. But Robin also cites another paper (Ulku 2004), which covers total R&D expenditures in 30 countries (versus 16 countries in the paper that Scott cites) [3].

4.

Robin claims that AI progress will slow (relative to economic growth) due to slowing hardware progress and reduced dependence on innovation. Even if I accept Robin’s claims about these factors, I have trouble believing that AI progress will slow.

I expect higher em IQ will be one factor that speeds up AI progress. Garrett Jones suggests that a 40 IQ point increase in intelligence causes a 50% increase in a country’s productivity. I presume that AI researcher productivity is more sensitive to IQ than is, say, truck driver productivity. So it seems fairly plausible to imagine that increased em IQ will cause more than a factor of two increase in the rate of AI progress. (Robin downplays the effects of IQ in contexts where a factor of two wouldn’t much affect his analysis; he appears to ignore them in this context).
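To make that arithmetic explicit (the elasticity number here is my assumption for illustration, not Jones’s or Hanson’s): if national productivity scales roughly as $1.5^{\Delta IQ / 40}$, then a 40-point em IQ gain alone gives about a 1.5x speedup, and if AI-researcher productivity is, say, twice as IQ-elastic, the same gain gives roughly $1.5^2 \approx 2.25\times$, which is the kind of calculation behind the “more than a factor of two” guess.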

I expect that other advantages of ems will contribute additional speedups – maybe ems who work on AI will run relatively fast, maybe good training/testing data will be relatively cheap to create, or maybe knowledge from experimenting on ems will better guide AI research.

5.

Robin’s arguments against an intelligence explosion are weaker than they appear. I mostly agree with those arguments, but I want to discourage people from having strong confidence in them.

The most suspicious of those arguments is that gains in software algorithmic efficiency “remain surprisingly close to the rate at which hardware costs have fallen. This suggests that algorithmic gains have been enabled by hardware gains”. He cites only (Grace 2013) in support of this. That paper doesn’t comment on whether hardware changes enable software changes. The evidence seems equally consistent with that or with the hypothesis that both are independently caused by some underlying factor. I’d say there’s less than a 50% chance that Robin is correct about this claim.

Robin lists 14 other reasons for doubting there will be an intelligence explosion: two claims about AI history (no citations), eight claims about human intelligence (one citation), and four about what causes progress in research (with the two citations mentioned earlier). Most of those 14 claims are probably true, but it’s tricky to evaluate their relevance.

Conclusion

I’d say there’s maybe a 15% chance that Robin is basically right about the timing of non-em AI given his assumptions about ems. His book is still pretty valuable if an em-dominated world lasts for even one subjective decade before something stranger happens. And “something stranger happens” doesn’t necessarily mean his analysis becomes obsolete.

Footnotes

[1] – I can’t find any SlateStarCodex complaint about Bostrom doing something similar in Superintelligence: Bostrom’s survey of experts shows an expected time of decades for human-level AI to become superintelligent, yet Bostrom wants to focus on a much faster takeoff scenario, and disagrees with the experts, without identifying reasons for thinking his approach reduces biases.

[2] – One example is that genetic algorithms are looking fairly obsolete compared to neural nets, now that they’re being compared on bigger problems than when genetic algorithms were trendy.

Robin wants to avoid biases from recent AI fads by looking at subfields as they were defined 20 years ago. Some recent changes in AI are fads, but some are increased wisdom. I expect many subfields to be dead ends, given how immature AI was 20 years ago (and may still be today).

[3] – Scott quotes from one of three places that Robin mentions this subject (an example of redundancy that is quite rare in the book), and that’s the one place out of three where Robin neglects to cite (Ulku 2004). Age of Em is the kind of book where it’s easy to overlook something important like that if you don’t read it more carefully than you’d read a normal book.

I tried comparing (Ulku 2004) to the OECD paper that Scott cites, and failed to figure out whether they disagree. The OECD paper is probably consistent with Robin’s “less than proportionate increases” claim that Scott quotes. But Scott’s doubts are partly about Robin’s bolder prediction that AI progress will slow down, and academic papers don’t help much in evaluating that prediction.

If you’re tempted to evaluate how well the Ulku paper supports Robin’s views, beware that this quote is one of its easier to understand parts:

In addition, while our analysis lends support for endogenous growth theories in that it confirms a significant relationship between R&D stock and innovation, and between innovation and per capita GDP, it lacks the evidence for constant returns to innovation in terms of R&D stock. This implies that R&D models are not able to explain sustainable economic growth, i.e. they are not fully endogenous.

Book review: The Age of Em: Work, Love and Life when Robots Rule the Earth, by Robin Hanson.

This book analyzes a possible future era when software emulations of humans (ems) dominate the world economy. It is too conservative to tackle longer-term prospects for eras when more unusual intelligent beings may dominate the world.

Hanson repeatedly tackles questions that scare away mainstream academics, and gives relatively ordinary answers (guided as much as possible by relatively standard, but often obscure, parts of the academic literature).

Assumptions

Hanson’s scenario relies on a few moderately controversial assumptions. The assumptions which I find most uncertain are related to human-level intelligence being hard to understand (because it requires complex systems), enough so that ems will experience many subjective centuries before artificial intelligence is built from scratch. For similar reasons, ems are opaque enough that it will be quite a while before they can be re-engineered to be dramatically different.

Hanson is willing to allow that ems can be tweaked somewhat quickly to produce moderate enhancements (at most doubling IQ) before reaching diminishing returns. He gives somewhat plausible reasons for believing this will only have small effects on his analysis. But few skeptics will be convinced.

Some will focus on potential trillions of dollars worth of benefits that higher IQs might produce, but that wealth would not much change Hanson’s analysis.

Others will prefer an inside view analysis which focuses on the chance that higher IQs will better enable us to handle risks of superintelligent software. Hanson’s analysis implies we should treat that as an unlikely scenario, but doesn’t say what we should do about modest probabilities of huge risks.

Another way that Hanson’s assumptions could be partly wrong is if tweaking the intelligence of emulated Bonobos produces super-human entities. That seems to only require small changes to his assumptions about how tweakable human-like brains are. But such a scenario is likely harder to analyze than Hanson’s scenario, and it probably makes more sense to understand Hanson’s scenario first.

Wealth

Wages in this scenario are somewhat close to subsistence levels. Ems have some ability to restrain wage competition, but less than they want. Does that mean wages are 50% above subsistence levels, or 1%? Hanson hints at the former. The difference feels important to me. I’m concerned that sound-bite versions of the book will obscure the difference.

Hanson claims that “wealth per em will fall greatly”. It would be possible to construct a measure by which ems are less wealthy than humans are today. But I expect it will be at least as plausible to use a measure under which ems are rich compared to humans of today, but have high living expenses. I don’t believe there’s any objective unit of value that will falsify one of those perspectives [1].

Style / Organization

The style is more like a reference book than a story or an attempt to persuade us of one big conclusion. Most chapters (except for a few at the start and end) can be read in any order. If the section on physics causes you to doubt whether the book matters, skip to chapter 12 (labor), and return to the physics section later.

The style is very concise. Hanson rarely repeats a point, so understanding him requires more careful attention than with most authors.

It’s odd that the future of democracy gets less than twice as much space as the future of swearing. I’d have preferred that Hanson cut out a few of his less important predictions, to make room for occasional restatements of important ideas.

Many little-known results that are mentioned in the book are relevant to the present, such as: how the pitch of our voice affects how people perceive us, how vacations affect productivity, and how bacteria can affect fluid viscosity.

I was often tempted to say that Hanson sounds overconfident, but he is clearly better than most authors at admitting appropriate degrees of uncertainty. If he devoted much more space to caveats, I’d probably get annoyed at the repetition. So it’s hard to say whether he could have done any better.

Conclusion

Even if we should expect a much less than 50% chance of Hanson’s scenario becoming real, it seems quite valuable to think about how comfortable we should be with it and how we could improve on it.

Footnote

[1] – The difference matters only in one paragraph, where Hanson discusses whether ems deserve charity more than do humans living today. Hanson sounds like he’s claiming ems deserve our charity because they’re poor. Most ems in this scenario are comfortable enough for this to seem wrong.

Hanson might also be hinting that our charity would be effective at increasing the number of happy ems, and that basic utilitarianism says that’s preferable to what we can do by donating to today’s poor. That argument deserves more respect and more detailed analysis.