rationality


I recently noticed similarities between how I decide what stock market evidence to look at, and how the legal system decides what lawyers are allowed to tell juries.

This post will elaborate on Eliezer’s Scientific Evidence, Legal Evidence, Rational Evidence. In particular, I’ll try to generalize about why there’s a large class of information that I actively avoid treating as Bayesian evidence.


Book review: Why Everyone (Else) Is a Hypocrite: Evolution and the Modular Mind, by Robert Kurzban.

Minds are Modular

Many people explain minds by positing that they’re composed of parts:

  • the id, ego, and super-ego
  • the left side and the right side of the brain
  • System 1 and System 2
  • the triune brain
  • Marvin Minsky’s Society of Mind

Minsky’s proposal is the only one of these that resembles Kurzban’s notion of modularity enough to earn Kurzban’s respect. The modules Kurzban talks about are much more numerous, and more specialized, than most people are willing to imagine.

Here’s Kurzban’s favorite Minsky quote:

The mind is a community of “agents.” Each has limited powers and can communicate only with certain others. The powers of mind emerge from their interactions for none of the Agents, by itself, has significant intelligence. […] Everyone knows what it feels like to be engaged in a conversation with oneself. In this book, we will develop the idea that these discussions really happen, and that the participants really “exist.” In our picture of the mind we will imagine many “sub-persons”, or “internal agents”, interacting with one another. Solving the simplest problem—seeing a picture—or remembering the experience of seeing it—might involve a dozen or more—perhaps very many more—of these agents playing different roles. Some of them bear useful knowledge, some of them bear strategies for dealing with other agents, some of them carry warnings or encouragements about how the work of others is proceeding. And some of them are concerned with discipline, prohibiting or “censoring” others from thinking forbidden thoughts.

Let’s take the US government as a metaphor. Instead of saying it’s composed of the legislative, executive, and judicial modules, Kurzban would describe it as being made up of modules such as a White House press secretary, Anthony Fauci, a Speaker of the House, more generals than I can name, even more park rangers, etc.

In What Is It Like to Be a Bat?, Nagel says “our own mental activity is the only unquestionable fact of our experience”. In contrast, Kurzban denies that we know more than a tiny fraction of our mental activity. We don’t ask “what is it like to be an edge detector?”, because there was no evolutionary pressure to enable us to answer that question. It could be that most human experience is as mysterious to our conscious minds as bat experiences are. Most of our introspection involves examining a mental model that we construct for propaganda purposes.

Is Self-Deception Mysterious?

There’s been a good deal of confusion about self-deception and self-control. Kurzban attributes the confusion to attempts at modeling the mind as a unitary agent. If there’s a single homunculus in charge of all of the mind’s decisions, then it’s genuinely hard to explain phenomena that look like conflicts between agents.

With a sufficiently modular model of minds, the confusion mostly vanishes.

A good deal of what gets called self-deception is better described as being strategically wrong.

For example, when President Trump had COVID, the White House press secretary had a strong incentive not to be aware of any evidence that Trump’s health was worse than expected, in order to reassure voters without being clearly dishonest. Whereas the White House doctor had some reason to err a bit on the side of overestimating Trump’s risk of dying. So it shouldn’t surprise us if they had rather different beliefs. I don’t describe that situation as “the US government is deceiving itself”, but I’d be confused as to whether to describe it that way if I could only imagine the government as a unitary agent.

Minds work much the same way. E.g. the cancer patient who buys space on a cruise that his doctor says he won’t live to enjoy (presumably to persuade allies that he’ll be around long enough to be worth allying with), while still following the doctor’s advice about how to treat the cancer. A modular model of the mind isn’t surprised that his mind holds inconsistent beliefs about how serious the cancer is. The patient’s press-secretary-like modules are pursuing a strategy of getting friends to make long-term plans to support the patient. They want accurate enough knowledge of the patient’s health to sound credible. Why would they want to be more accurate than that?

Self-Control

Kurzban sees less value in the concept of a self than do most Buddhists.

almost any time you come across a theory with the word “self” in it, you should check your wallet.

Self-control has problems that are similar to the problems with the concept of self-deception. It’s best thought of as conflicts between modules.

We should expect context-sensitive influences on which modules exert the most influence on decisions. E.g. we should expect a calorie-acquiring module to have more influence when a marshmallow is in view than if a path to curing cancer is in view. That makes it hard for a mind to have a stable preference about how to value eating a marshmallow or curing cancer.

If I think I see a path to curing cancer that is certain to succeed, my cancer-research modules ought to get more attention than my calorie-acquiring modules. I’m pretty sure that’s what would happen if I had good evidence that I’m about to cure cancer. But a more likely situation is that my press-secretary-like modules say I’ll succeed, and some less eloquent modules say I’ll fail. That will look like a self-control problem to those who want the press secretary to be in charge, and look more like politics to those who take Kurzban’s view.

I could identify some of my brain’s modules as part of my “self”, and say that self-control refers to those modules overcoming the influence of the non-self parts of my brain. But the more I think like Kurzban, the more arbitrary it seems to treat some modules as more privileged than others.

The Rest

Along the way, Kurzban makes fun of the literature on self-esteem, and of models that say self-control is a function of resources.

The book consists mostly of easy-to-read polemics for ideas that ought to be obvious, but which our culture resists.

Warning: you should skip the chapter titled Morality and Contradictions. Kurzban co-authored a great paper called A Solution to the Mysteries of Morality. But in this book, his controversial examples of hypocrisy will distract attention of most readers from the rather unremarkable wisdom that the examples illustrate.

Book review: Noise: A Flaw in Human Judgment, by Daniel Kahneman, Olivier Sibony, and Cass R. Sunstein.

Doctors are more willing to order a test for patients they see in the morning than for those they see late in the day.

Asylum applicants’ chances of prevailing may be as low as 5% or as high as 88%, purely due to which judge hears their case.

Clouds Make Nerds Look Good, in the sense that university admissions officers give higher weight to academic attributes on cloudy days.

These are examples of what the authors describe as an important and neglected problem.

A more precise description of the book’s topic is variations in judgment, with judgment defined as “measurement in which the instrument is a human mind”.


Book review: The AI Does Not Hate You: Superintelligence, Rationality and the Race to Save the World, by Tom Chivers.

This book is a sympathetic portrayal of the rationalist movement by a quasi-outsider. It includes a well-organized explanation of why some people expect that AI will create large risks sometime this century, written in simple language that is suitable for a broad audience.

Caveat: I know many of the people who are described in the book. I’ve had some sort of connection with the rationalist movement since before it became distinct from transhumanism, and I’ve been mostly an insider since 2012. I read this book mainly because I was interested in how the rationalist movement looks to outsiders.

Chivers is a science writer. I normally avoid books by science writers, due to an impression that they mostly focus on telling interesting stories, without developing a deep understanding of the topics they write about.

Chivers’ understanding of the rationalist movement doesn’t quite qualify as deep, but he was surprisingly careful to read a lot about the subject, and to write only things he did understand.

Many times I reacted to something he wrote with “that’s close, but not quite right”. Usually when I reacted that way, Chivers did a good job of describing the rationalist message in question, and the main problem was either that rationalists haven’t figured out how to explain their ideas in a way that a broad audience can understand, or that rationalists are confused. So the complaints I make in the rest of this review are at most weakly directed in Chivers’ direction.

I saw two areas where Chivers overlooked something important.

Rationality

One involves CFAR.

Chivers wrote seven chapters on biases, and how rationalists view them, ending with “the most important bias”: knowing about biases can make you more biased. (italics his).

I get the impression that Chivers is sweeping this problem under the rug (Do we fight that bias by being aware of it? Didn’t we just read that that doesn’t work?). That is roughly what happened with many people who learned rationalism solely via written descriptions.

Then much later, when describing how he handled his conflicting attitudes toward the risks from AI, he gives a really great description of maybe 3% of what CFAR teaches (internal double crux), much like a blind man giving a really clear description of the upper half of an elephant’s trunk. He prefaces this narrative with the apt warning: “I am aware that this all sounds a bit mystical and self-helpy. It’s not.”

Chivers doesn’t seem to connect this exercise with the goal of overcoming biases. Maybe he was too busy applying the technique on an important problem to notice the connection with his prior discussions of Bayes, biases, and sanity. It would be reasonable for him to argue that CFAR’s ideas have diverged enough to belong in a separate category, but he seems to put them in a different category by accident, without realizing that many of us consider CFAR to be an important continuation of rationalists’ interest in biases.

World conquest

Chivers comes very close to covering all of the layman-accessible claims that Yudkowsky and Bostrom make. My one complaint here is that he only gives vague hints about why one bad AI can’t be stopped by other AIs.

A key claim of many leading rationalists is that AI will have some winner-take-all dynamics that will lead to one AI having a decisive strategic advantage after it crosses some key threshold, such as human-level intelligence.

This is a controversial position that is somewhat connected to foom (fast takeoff), but which might be correct even without foom.

Utility functions

“If I stop caring about chess, that won’t help me win any chess games, now will it?” – That chapter title provides a good explanation of why a simple AI would continue caring about its most fundamental goals.

Is that also true of an AI with more complex, human-like goals? Chivers is partly successful at explaining how to apply the concept of a utility function to a human-like intelligence. Rationalists (or at least those who actively research AI safety) have a clear meaning here, at least as applied to agents that can be modeled mathematically. But when laymen try to apply that to humans, confusion abounds, due to the ease of conflating subgoals with ultimate goals.

Chivers tries to clarify, using the story of Odysseus and the Sirens, and claims that the Sirens would rewrite Odysseus’ utility function. I’m not sure how we can verify that the Sirens work that way, or whether they would merely persuade Odysseus to make false predictions about his expected utility. Chivers at least states clearly that the Sirens try to prevent Odysseus (by making him run aground) from doing what his pre-Siren utility function advises. Chivers’ point could be a bit clearer if he specified that in his (nonstandard?) version of the story, the Sirens make Odysseus want to run aground.
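
To make that distinction concrete, here is a minimal sketch: a toy agent of my own construction, with made-up numbers, not anything from the book. Rewriting the utility function changes what the agent values; corrupting its beliefs changes its probability estimates while leaving its values intact.

```python
# Toy illustration (not from the book): two ways the Sirens could change
# Odysseus' behavior. The agent picks the action with the highest expected
# utility, computed from its beliefs (probabilities) and its utility function.
def expected_utility(action, beliefs, utility):
    """Sum of P(outcome | action) * U(outcome)."""
    return sum(p * utility[outcome] for outcome, p in beliefs[action].items())

def best_action(beliefs, utility):
    return max(beliefs, key=lambda a: expected_utility(a, beliefs, utility))

# Pre-Siren Odysseus: he values getting home, and correctly believes that
# steering toward the Sirens' island means shipwreck.
utility = {"home": 10, "shipwreck": -100}
beliefs = {
    "sail_past": {"home": 0.9, "shipwreck": 0.1},
    "steer_toward_sirens": {"home": 0.05, "shipwreck": 0.95},
}
print(best_action(beliefs, utility))            # sail_past

# Option 1: the Sirens rewrite his utility function (he now wants the rocks).
rewritten_utility = {"home": 10, "shipwreck": 50}
print(best_action(beliefs, rewritten_utility))  # steer_toward_sirens

# Option 2: the Sirens leave his values alone but corrupt his predictions
# (he falsely expects to survive the detour).
corrupted_beliefs = dict(beliefs)
corrupted_beliefs["steer_toward_sirens"] = {"home": 0.99, "shipwreck": 0.01}
print(best_action(corrupted_beliefs, utility))  # steer_toward_sirens
```

Both manipulations produce the same observable action, which is part of why it’s hard to tell from the story alone which one the Sirens perform.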

Philosophy

“Essentially, he [Yudkowsky] (and the Rationalists) are thoroughgoing utilitarians.” – That’s a bit misleading. Leading rationalists are predominantly consequentialists, but mostly avoid committing to a moral system as specific as utilitarianism. Leading rationalists also mostly endorse moral uncertainty. Rationalists mostly endorse utilitarian-style calculation (which entails some of the controversial features of utilitarianism), but are careful to combine that with worry about whether we’re optimizing the quantity that we want to optimize.

I also recommend Utilitarianism and its discontents as an example of one rationalist’s nuanced partial endorsement of utilitarianism.

Political solutions to AI risk?

Chivers describes Holden Karnofsky as wanting “to get governments and tech companies to sign treaties saying they’ll submit any AGI designs to outside scrutiny before switching them on. It wouldn’t be iron-clad, because firms might simply lie”.

Most rationalists seem pessimistic about treaties such as this.

Lying is hardly the only problem. This idea assumes that there will be a tiny number of attempts, each with a very small number of launches that look like the real thing, as happened with the first moon landing and the first atomic bomb. Yet the history of software development suggests it will be something more like hundreds of attempts that look like they might succeed. I wouldn’t be surprised if there are millions of times when an AI is turned on, and the developer has some hope that this time it will grow into a human-level AGI. There’s no way that a large number of designs will get sufficient outside scrutiny to be of much use.

And if a developer is trying new versions of their system once a day (e.g. making small changes to a number that controls, say, openness to new experience), any requirement to submit all new versions for outside scrutiny would cause large delays, creating large incentives to subvert the requirement.

So any realistic treaty would need provisions that identify a relatively small set of design choices that need to be scrutinized.

I see few signs that any experts are close to developing a consensus about what criteria would be appropriate here, and I expect that doing so would require a significant fraction of the total wisdom needed for AI safety. I discussed my hope for one such criterion in my review of Drexler’s Reframing Superintelligence paper.

Rationalist personalities

Chivers mentions several plausible explanations for what he labels the “semi-death of LessWrong”, the most obvious being that Eliezer Yudkowsky finished most of the blogging that he had wanted to do there. But I’m puzzled by one explanation that Chivers reports: “the attitude … of thinking they can rebuild everything”. Quoting Robin Hanson:

At Xanadu they had to do everything different: they had to organize their meetings differently and orient their screens differently and hire a different kind of manager, everything had to be different because they were creative types and full of themselves. And that’s the kind of people who started the Rationalists.

That seems like a partly apt explanation for the demise of the rationalist startups MetaMed and Arbital. But LessWrong mostly copied existing sites, such as Reddit, and was only ambitious in the sense that Eliezer was ambitious about what ideas to communicate.

Culture

I guess a book about rationalists can’t resist mentioning polyamory. “For instance, for a lot of people it would be difficult not to be jealous.” Yes, when I lived in a mostly monogamous culture, jealousy seemed pretty standard. That attitude melted away when the bay area cultures that I associated with started adopting polyamory or something similar (shortly before the rationalists became a culture). Jealousy has much more purpose if my partner is flirting with monogamous people than if he’s flirting with polyamorists.

Less dramatically, “We all know people who are afraid of visiting their city centres because of terrorist attacks, but don’t think twice about driving to work.”

This suggests some weird filter bubbles somewhere. I thought that fear of cities got forgotten within a month or so after 9/11. Is this a difference between London and the US? Am I out of touch with popular concerns? Does Chivers associate more with paranoid people than I do? I don’t see any obvious answer.

Conclusion

It would be really nice if Chivers and Yudkowsky could team up to write a book, but this book is a close substitute for such a collaboration.

See also Scott Aaronson’s review.

Book review: Principles: Life and Work, by Ray Dalio.

Most popular books get that way by having an engaging style. Yet this book’s style is mundane, almost forgettable.

Some books become bestsellers by being controversial. Others become bestsellers by manipulating readers’ emotions, e.g. by being fun to read, or by getting the reader to overestimate how profound the book is. Principles definitely doesn’t fit those patterns.

Some books become bestsellers because the author became famous for reasons other than his writings (e.g. Stephen Hawking, Donald Trump, and Bill Gates). Principles fits this pattern somewhat well: if an obscure person had published it, nothing about it would have triggered a pattern of readers enthusiastically urging their friends to read it. I suspect the average book in this category is rather pathetic, but I also expect there’s a very large variance in the quality of books in this category.

Principles contains an unusual amount of wisdom. But it’s unclear whether that’s enough to make it a good book, because it’s unclear whether it will convince readers to follow the advice. Much of the advice sounds like ideas that most of us agree with already. The wisdom comes more in selecting the most underutilized ideas, without being particularly novel. The main benefit is likely to be that people who were already on the verge of adopting the book’s advice will get one more nudge from an authority, providing the social reassurance they need.

Advice

Some of why I trust the book’s advice is that it overlaps a good deal with other sources from which I’ve gotten value, e.g. CFAR.

Key ideas include:

  • be honest with yourself
  • be open-minded
  • focus on identifying and fixing your most important weaknesses


The point of this blog post feels almost too obvious to be worth saying, yet I doubt that it’s widely followed.

People often avoid doing projects that have a low probability of success, even when the expected value is high. To counter this bias, I recommend that you mentally combine many such projects into a strategy of trying new things, and evaluate the strategy’s probability of success.
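
As a quick illustration of why the bundled evaluation feels so different (made-up numbers): if each experiment independently has a 10% chance of success, any single one will probably fail, but a batch of twenty will almost certainly produce at least one win.

```python
# Made-up numbers: each independent experiment has a 10% chance of producing
# a noticeable long-term benefit. Individually they're long shots; as a
# strategy, the bundle is likely to produce at least one success.
p_success = 0.10
for n in (1, 5, 10, 20):
    p_at_least_one = 1 - (1 - p_success) ** n
    print(f"{n:2d} experiments -> P(at least one success) = {p_at_least_one:.0%}")
# 1 -> 10%, 5 -> 41%, 10 -> 65%, 20 -> 88%
```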

1.

Eliezer says in On Doing the Improbable:

I’ve noticed that, by my standards and on an Eliezeromorphic metric, most people seem to require catastrophically high levels of faith in what they’re doing in order to stick to it. By this I mean that they would not have stuck to writing the Sequences or HPMOR or working on AGI alignment past the first few months of real difficulty, without assigning odds in the vicinity of 10x what I started out assigning that the project would work. … But you can’t get numbers in the range of what I estimate to be something like 70% as the required threshold before people will carry on through bad times. “It might not work” is enough to force them to make a great effort to continue past that 30% failure probability. It’s not good decision theory but it seems to be how people actually work on group projects where they are not personally madly driven to accomplish the thing.

I expect this reluctance to work on projects with a large chance of failure is a widespread problem for individual self-improvement experiments.

2.

One piece of advice I got from my CFAR workshop was to try lots of things. Their reasoning involved the expectation that we’d repeat the things that worked, and forget the things that didn’t work.

I’ve been hesitant to apply this advice to things that feel unlikely to work, and I expect other people have similar reluctance.

The relevant “things” are experiments that cost maybe 10 to 100 hours to try, which don’t risk much other than wasting time, and for which I should expect on the order of a 10% chance of noticeable long-term benefits.

Here are some examples of the kind of experiments I have in mind:

  • gratitude journal
  • morning pages
  • meditation
  • vitamin D supplements
  • folate supplements
  • a low carb diet
  • the Plant Paradox diet
  • an anti-anxiety drug
  • ashwagandha
  • whole fruit coffee extract
  • piracetam
  • phenibut
  • modafinil
  • a circling workshop
  • Auditory Integration Training
  • various self-help books
  • yoga
  • sensory deprivation chamber

I’ve cheated slightly, by being more likely to add something to this list if it worked for me than if it was a failure that I’d rather forget. So my success rate with these was around 50%.

The simple practice of forgetting about the failures and mostly repeating the successes is almost enough to cause the net value of these experiments to be positive. More importantly, I kept the costs of these experiments low, so the benefits of the top few outweighed the costs of the failures by a large factor.
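
A rough back-of-the-envelope version of that claim, with numbers invented purely for illustration:

```python
# Hypothetical numbers, chosen only to illustrate the shape of the tradeoff:
# 20 cheap experiments, each costing ~30 hours, each with a 10% chance of a
# long-term benefit worth ~500 hours. Dropping the failures and repeating
# the successes lets a handful of wins dominate the total cost.
n_experiments = 20
cost_per_experiment = 30       # hours
p_benefit = 0.10
benefit_if_success = 500       # hours of long-term value

total_cost = n_experiments * cost_per_experiment                    # 600 hours
expected_benefit = n_experiments * p_benefit * benefit_if_success   # 1000 hours
print(f"total cost: {total_cost} hours, expected benefit: {expected_benefit:.0f} hours")
```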

3.

I face a similar situation when I’m investing.

The probability that I’ll make any profit on a given investment is close to 50%, and the probability of beating the market on a given investment is lower. I don’t calculate actual numbers for that, because doing so would be more likely to bias me than to help me.

I would find it rather discouraging to evaluate each investment separately. Doing so would focus my attention on the fact that any individual result is indistinguishable from luck.

Instead, I focus my evaluations much more on bundles of hundreds of trades, often associated with a particular strategy. Aggregating evidence in that manner smooths out the good and bad luck to make my skill (or lack thereof) more conspicuous. I’m focusing in this post not on the logical interpretation of evidence, but on how the subconscious parts of my mind react. This mental bundling of tasks is particularly important for my subconscious impressions of whether I’m being productive.
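
Here’s a toy simulation of that effect (invented parameters, not my actual trading strategy or results): trades with a small real edge look like coin flips individually, but the edge becomes conspicuous when averaged over thousands of trades.

```python
# Toy simulation with invented parameters: each trade wins 53% of the time,
# a small real edge worth +0.06 per trade in expectation. A handful of
# trades is dominated by luck; thousands of trades make the edge visible.
import random

random.seed(0)
EDGE = 0.53  # hypothetical per-trade win probability

def trade():
    """Return +1 for a winning trade, -1 for a losing one."""
    return 1 if random.random() < EDGE else -1

small_sample = sum(trade() for _ in range(10)) / 10
large_sample = sum(trade() for _ in range(5000)) / 5000
print(f"mean result over 10 trades:   {small_sample:+.3f}")  # mostly luck
print(f"mean result over 5000 trades: {large_sample:+.3f}")  # close to the true +0.06
```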

I believe this is a well-known insight (possibly from poker?), but I can’t figure out where I’ve seen it described.

I’ve partly applied this approach to self-improvement tasks (not quite as explicitly as I ought to), and it has probably helped.

Time Biases

Book review: Time Biases: A Theory of Rational Planning and Personal Persistence, by Meghan Sullivan.

I was very unsure about whether this book would be worth reading, as it could easily have been focused on complaints about behavior that experts have long known are mistaken.

I was pleasantly surprised when it quickly got to some of the really hard questions, and was thoughtful about what questions deserved attention. I disagree with enough of Sullivan’s premises that I have significant disagreements with her conclusions. Yet her reasoning is usually good enough that I’m unsure what to make of our disagreements – they’re typically due to differences of intuition that she admits are controversial.

I had hoped for some discussion of ethics (e.g. what discount rate to use in evaluating climate change), whereas the book focuses purely on prudential rationality (i.e. what’s rational for a self-interested person). Still, the discussion of prudential rationality covers most of the issues that make the ethical choices hard.

Personal identity

A key issue is the nature of personal identity – does one’s identity change over time?


Book review: Inadequate Equilibria, by Eliezer Yudkowsky.

This book (actually halfway between a book and a series of blog posts) attacks the goal of epistemic modesty, which I’ll loosely summarize as reluctance to believe that one knows better than the average person.

1.

The book starts by focusing on the base rate for high-status institutions having harmful incentive structures, charting a middle ground between the excessive respect for those institutions that we see in mainstream sources, and the cynicism of most outsiders.

There’s a weak sense in which this is arrogant, namely that if it were obvious to the average voter how to improve on these problems, then I’d expect the problems to be fixed. So people who claim to detect such problems ought to have decent evidence that they’re above average in the relevant skills. There are plenty of people who can rationally decide that applies to them. (Eliezer doubts that advising the rest to be modest will help; I suspect there are useful approaches to instilling modesty in people who should be more modest, but it’s not easy). Also, below-average people rarely seem to be attracted to Eliezer’s writings.

Later parts of the book focus on more personal choices, such as choosing a career.

Some parts of the book seem designed to show off Eliezer’s lack of need for modesty – sometimes successfully, sometimes leaving me suspecting he should be more modest (usually in ways that are somewhat orthogonal to his main points; i.e. his complaints about “reference class tennis” suggest overconfidence in his understanding of his debate opponents).

2.

Eliezer goes a bit overboard in attacking the outside view. He starts with legitimate complaints about people misusing it to justify rejecting theory and adopting “blind empiricism” (a mistake that I’ve occasionally made). But he partly rejects the advice that Tetlock gives in Superforecasting. I’m pretty sure Tetlock knows more about this domain than Eliezer does.

E.g. Eliezer says “But in novel situations where causal mechanisms differ, the outside view fails—there may not be relevantly similar cases, or it may be ambiguous which similar-looking cases are the right ones to look at.”, but Tetlock says ‘Nothing is 100% “unique” … So superforecasters conduct creative searches for comparison classes even for seemingly unique events’.

Compare Eliezer’s “But in many contexts, the outside view simply can’t compete with a good theory” with Tetlock’s commandment number 3 (“Strike the right balance between inside and outside views”). Eliezer seems to treat the approaches as antagonistic, whereas Tetlock advises us to find a synthesis in which the approaches cooperate.

3.

Eliezer provides a decent outline of what causes excess modesty. He classifies the two main failure modes as anxious underconfidence, and status regulation. Anxious underconfidence definitely sounds like something I’ve felt somewhat often, and status regulation seems pretty plausible, but harder for me to detect.

Eliezer presents a clear model of why status regulation exists, but his explanation for anxious underconfidence doesn’t seem complete. Here are some of my ideas about possible causes of anxious underconfidence:

  • People evaluate mistaken career choices and social rejection as if they meant death (which was roughly true until quite recently), so extreme risk aversion made sense;
  • Inaction (or choosing the default action) minimizes blame. If I carefully consider an option, my choice says more about my future actions than if I neglect to think about the option;
  • People often evaluate their success at life by counting the number of correct and incorrect decisions, rather than adding up the value produced (see the sketch after this list);
  • People who don’t grok the Bayesian meaning of the word “evidence” are likely to privilege the scientific and legal meanings of evidence. So beliefs based on more subjective evidence get treated as second class citizens.
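
For the third item above, a small made-up example of how the two scorekeeping methods diverge:

```python
# Made-up outcomes: a cautious strategy wins often but gains little, while a
# bolder strategy loses more often yet produces far more total value.
# Counting correct decisions favors caution; adding up value favors boldness.
cautious = [+1] * 9 + [-1]        # 9 small wins, 1 small loss
bold = [+20] * 3 + [-2] * 7       # 3 big wins, 7 small losses

for name, outcomes in (("cautious", cautious), ("bold", bold)):
    wins = sum(1 for x in outcomes if x > 0)
    print(f"{name}: {wins}/10 decisions correct, total value {sum(outcomes):+d}")
# cautious: 9/10 correct, total value +8
# bold:     3/10 correct, total value +46
```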

I suspect that most harm from excess modesty (and also arrogance) happens in evolutionarily novel contexts. Decisions such as creating a business plan for a startup, or writing a novel that sells a million copies, are sufficiently different from what we evolved to do that we should expect over/underconfidence to cause more harm.

4.

Another way to summarize the book would be: don’t aim to overcompensate for overconfidence; instead, aim to eliminate the causes of overconfidence.

This book will be moderately popular among Eliezer’s fans, but it seems unlikely to greatly expand his influence.

It didn’t convince me that epistemic modesty is generally harmful, but it does provide clues to identifying significant domains in which epistemic modesty causes important harm.

A new paper titled When Will AI Exceed Human Performance? Evidence from AI Experts reports some bizarre results. From the abstract:

Researchers believe there is a 50% chance of AI outperforming humans in all tasks in 45 years and of automating all human jobs in 120 years, with Asian respondents expecting these dates much sooner than North Americans.

So we should expect a 75 year period in which machines can perform all tasks better and more cheaply than humans, but can’t automate all occupations. Huh?

I suppose there are occupations that consist mostly of having status rather than doing tasks (queen of England, or waiter at a classy restaurant that won’t automate service due to the high status of serving food the expensive way). Or occupations protected by law, such as gas station attendants who pump gas in New Jersey, decades after most drivers switched to pumping for themselves.

But I’d be rather surprised if machine learning researchers would think of those points when answering a survey in connection with a machine learning conference.

Maybe the actual wording of the survey questions caused a difference that got lost in the abstract? Hmmm …

“High-level machine intelligence” (HLMI) is achieved when unaided machines can accomplish every task better and more cheaply than human workers

versus

when all occupations are fully automatable. That is, when for any occupation, machines could be built to carry out the task better and more cheaply than human workers.

I tried to convince myself that the second version got interpreted as referring to actually replacing humans, while the first version referred to merely being qualified to replace humans. But the more I compared the two, the more that felt like wishful thinking. If anything, the “unaided” in the first version should make that version look farther in the future.

Can I find any other discrepancies between the abstract and the details? The 120 years in the abstract turns into 122 years in the body of the paper. So the authors seem to be downplaying the weirdness of the results.

There’s even a prediction of a 50% chance that the occupation “AI researcher” will be automated in about 88 years (I’m reading that from figure 2; I don’t see an explicit number for it). I suspect some respondents said this would take longer than for machines to “accomplish every task better and more cheaply”, but I don’t see data in the paper to confirm that [1].

A more likely hypothesis is that researchers alter their answers based on what they think people want to hear. Researchers might want to convince their funders that AI deals with problems that can be solved within the career of the researcher [2], while also wanting to reassure voters that AI won’t create massive unemployment until the current generation of workers has retired.

That would explain the general pattern of results, although the magnitude of the effect still seems strange. And it would imply that most machine learning researchers are liars, or have so little understanding of when HLMI will arrive that they don’t notice a 50% shift in their time estimates.

The ambiguity in terms such as “tasks” and “better” could conceivably explain confusion over the meaning of HLMI. I keep intending to write a blog post that would clarify concepts such as human-level AI and superintelligence, but then procrastinating because my thoughts on those topics are unclear.

It’s hard to avoid the conclusion that I should reduce my confidence in any prediction of when AI will reach human-level competence. My prior 90% confidence interval was something like 10 to 300 years. I guess I’ll broaden it to maybe 8 to 400 years [3].

P.S. – See also Katja’s comments on prior surveys.

[1] – the paper says most participants were asked the question that produced the estimate of 45 years to HLMI; the rest got the question that produced the 122-year estimate. So the median for all participants ought to be less than about 84 years (roughly the midpoint of 45 and 122), unless there are some unusual quirks in the data.

[2] – but then why do experienced researchers say human-level AI is farther in the future than new researchers, who presumably will be around longer? Maybe the new researchers are chasing fads or get-rich-quick schemes, and will mostly quit before becoming senior researchers?

[3] – years of subjective time as experienced by the fastest ems. So probably nowhere near 400 calendar years.

Book review: Superforecasting: The Art and Science of Prediction, by Philip E. Tetlock and Dan Gardner.

This book reports on the Good Judgment Project (GJP).

Much of the book recycles old ideas: 40% of the book is a rerun of Thinking, Fast and Slow, 15% of the book repeats The Wisdom of Crowds, and 15% of the book rehashes How to Measure Anything. Those three books were good enough that it’s very hard to improve on them. Superforecasting nearly matches their quality, but most people ought to read those three books instead. (Anyone who still wants more after reading them will get decent value out of reading the last 4 or 5 chapters of Superforecasting).

The book’s style is very readable, using an almost Gladwell-like style (a large contrast to Tetlock’s previous, more scholarly book), at a moderate cost in substance. It contains memorable phrases, such as “a fox with the bulging eyes of a dragonfly” (to describe looking at the world through many perspectives).
