evolution

All posts tagged evolution

[I mostly wrote this to clarify my thoughts. I’m unsure whether it will be valuable for readers.]

I expect that within a decade, AI will be able to do 90% of current human jobs. I don’t mean that 90% of humans will be obsolete. I mean that the average worker could delegate 90% of their tasks to an AGI.

I feel confused about what this implies for the kind of long-term planning and strategizing that would enable a poorly aligned AI to create large-scale harm.

Is the ability to achieve long-term goals hard for an AI to develop?


Disagreements about what we value seem to explain maybe 10% of the disagreements over AI safety. This post will try to explain how I think about which values I care about perpetuating into the distant future.

Robin Hanson helped to clarify the choices in Which Of Your Origins Are You?:

The key hard question here is this: what aspects of the causal influences that lead to you do you now embrace, and which do you instead reject as “random” errors that you want to cut out? Consider two extremes.
At one extreme, one could endorse absolutely every random element that contributed to any prior choice or intuition.

At the other extreme, you might see yourself as primarily the result of natural selection, both of genes and of memes, and see your core non-random value as that of doing the best you can to continue to “win” at that game. … In this view, everything about you that won’t help your descendants be selected in the long run is a random error that you want to detect and reject.

In other words, the more idiosyncratic our criteria for what we want to preserve into the distant future, the less we should expect to succeed at preserving it.


Book review: Why Everyone (Else) Is a Hypocrite: Evolution and the Modular Mind, by Robert Kurzban.

Minds are Modular

Many people explain minds by positing that they’re composed of parts:

  • the id, ego, and super-ego
  • the left side and the right side of the brain
  • System 1 and System 2
  • the triune brain
  • Marvin Minsky’s Society of Mind

Minsky’s proposal is the only one of these that resembles Kurzban’s notion of modularity closely enough to earn his respect. The modules Kurzban describes are far more numerous, and more specialized, than most people are willing to imagine.

Here’s Kurzban’s favorite Minsky quote:

The mind is a community of “agents.” Each has limited powers and can communicate only with certain others. The powers of mind emerge from their interactions for none of the Agents, by itself, has significant intelligence. […] Everyone knows what it feels like to be engaged in a conversation with oneself. In this book, we will develop the idea that these discussions really happen, and that the participants really “exist.” In our picture of the mind we will imagine many “sub-persons”, or “internal agents”, interacting with one another. Solving the simplest problem—seeing a picture—or remembering the experience of seeing it—might involve a dozen or more—perhaps very many more—of these agents playing different roles. Some of them bear useful knowledge, some of them bear strategies for dealing with other agents, some of them carry warnings or encouragements about how the work of others is proceeding. And some of them are concerned with discipline, prohibiting or “censoring” others from thinking forbidden thoughts.

Let’s take the US government as a metaphor. Instead of saying it’s composed of the legislative, executive, and judicial modules, Kurzban would describe it as being made up of modules such as a White House press secretary, Anthony Fauci, a Speaker of the House, more generals than I can name, even more park rangers, etc.

In What Is It Like to Be a Bat?, Nagel says “our own mental activity is the only unquestionable fact of our experience”. In contrast, Kurzban denies that we know more than a tiny fraction of our mental activity. We don’t ask “what is it like to be an edge detector?”, because there was no evolutionary pressure to enable us to answer that question. It could be that most human experience is as mysterious to our conscious minds as bat experiences are. Most of our introspection involves examining a mental model that we construct for propaganda purposes.

Is Self-Deception Mysterious?

There’s been a good deal of confusion about self-deception and self-control. Kurzban attributes the confusion to attempts at modeling the mind as a unitary agent. If there’s a single homunculus in charge of all of the mind’s decisions, then it’s genuinely hard to explain phenomena that look like conflicts between agents.

With a sufficiently modular model of minds, the confusion mostly vanishes.

A good deal of what gets called self-deception is better described as being strategically wrong.

For example, when President Trump had COVID, the White House press secretary had a strong incentive not to be aware of any evidence that Trump’s health was worse than expected, in order to reassure voters without being clearly dishonest. The White House doctor, in contrast, had some reason to err a bit on the side of overestimating Trump’s risk of dying. So it shouldn’t surprise us if they had rather different beliefs. I don’t describe that situation as “the US government is deceiving itself”, but I’d be confused about whether to describe it that way if I could only imagine the government as a unitary agent.

Minds work much the same way. Consider the cancer patient who buys space on a cruise that his doctor says he won’t live to enjoy (presumably to persuade allies that he’ll be around long enough to be worth allying with), while still following the doctor’s advice about how to treat the cancer. A modular model of the mind isn’t surprised that his mind holds inconsistent beliefs about how serious the cancer is. The patient’s press-secretary-like modules are pursuing a strategy of getting friends to make long-term plans to support the patient. They want knowledge of the patient’s health that is accurate enough to sound credible. Why would they want to be more accurate than that?

Self-Control

Kurzban sees less value in the concept of a self than do most Buddhists.

almost any time you come across a theory with the word “self” in it, you should check your wallet.

Self-control has problems that are similar to the problems with the concept of self-deception. It’s best thought of as conflicts between modules.

We should expect context to influence which modules exert the most control over decisions. E.g. we should expect a calorie-acquiring module to have more influence when a marshmallow is in view than when a path to curing cancer is in view. That makes it hard for a mind to maintain a stable preference between eating a marshmallow and working toward curing cancer.
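To make that concrete, here’s a toy sketch in Python (my own illustration, not a model from Kurzban’s book) of modules whose influence over a decision depends on context. All of the module names, actions, and weights are invented:

    # Toy model of modular decision-making. Each module ranks the available
    # actions; the current context determines how much influence each module
    # gets. Every name and number here is invented for illustration.

    CONTEXT_INFLUENCE = {
        "marshmallow_in_view": {"calorie_acquisition": 0.8, "long_term_planning": 0.2},
        "cancer_cure_in_view": {"calorie_acquisition": 0.1, "long_term_planning": 0.9},
    }

    MODULE_PREFERENCES = {
        "calorie_acquisition": {"eat_marshmallow": 1.0, "keep_working": 0.0},
        "long_term_planning": {"eat_marshmallow": 0.1, "keep_working": 1.0},
    }

    def decide(context):
        """Return the action with the highest influence-weighted support."""
        scores = {}
        for module, influence in CONTEXT_INFLUENCE[context].items():
            for action, strength in MODULE_PREFERENCES[module].items():
                scores[action] = scores.get(action, 0.0) + influence * strength
        return max(scores, key=scores.get)

    print(decide("marshmallow_in_view"))  # -> eat_marshmallow
    print(decide("cancer_cure_in_view"))  # -> keep_working

The point of the sketch is that no single module holds “the mind’s preference”: the same modules, given different contexts, produce opposite choices.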

If I think I see a path to curing cancer that is certain to succeed, my cancer-research modules ought to get more attention than my calorie-acquiring modules. I’m pretty sure that’s what would happen if I had good evidence that I’m about to cure cancer. But a more likely situation is that my press-secretary-like modules say I’ll succeed, and some less eloquent modules say I’ll fail. That will look like a self-control problem to those who want the press secretary to be in charge, and look more like politics to those who take Kurzban’s view.

I could identify some of my brain’s modules as part of my “self”, and say that self-control refers to those modules overcoming the influence of the non-self parts of my brain. But the more I think like Kurzban, the more arbitrary it seems to treat some modules as more privileged than others.

The Rest

Along the way, Kurzban makes fun of the literature on self-esteem, and of models that say self-control is a function of resources.

The book consists mostly of easy-to-read polemics for ideas that ought to be obvious, but which our culture resists.

Warning: you should skip the chapter titled Morality and Contradictions. Kurzban co-authored a great paper called A Solution to the Mysteries of Morality. But in this book, his controversial examples of hypocrisy will distract most readers from the rather unremarkable wisdom that those examples illustrate.

Lifespan

Book review: Lifespan: Why We Age – and Why We Don’t Have To, by David A. Sinclair.

A decade ago, the belief that aging could be cured was just barely starting to get attention from mainstream science, and the main arguments for a cure came from people with somewhat marginal formal credentials.

Now we have a book by an author who’s a co-chief editor of the scientific journal Aging. He’s the cofounder of 14 biotech companies (probably more than he could have worked at full time, so I’m guessing some of them list him as a cofounder more for prestige than for actual work). He’s even respected enough by some supplement companies that they use his name, even after he sends them cease-and-desist letters.

I’m glad that Sinclair published a book that says aging can be cured, since there’s still a shortage of eminent scientists who are willing to take that position.


Book review: The Longevity Diet: Discover the New Science Behind Stem Cell Activation and Regeneration to Slow Aging, Fight Disease, and Optimize Weight, by Valter Longo.

Longo is a moderately competent researcher whose ideas about nutrition and fasting are mostly heading in the right general direction, but many of his details look suspicious.

He convinced me to become more serious about occasional, longer fasts, but I probably won’t use his products.

[Warning: long post, of uncertain value, with annoyingly uncertain conclusions.]

This post will focus on how hardware (CPU power) will affect AGI timelines. I will undoubtedly overlook some important considerations; this is just a model of the effects that I understand how to analyze.

I’ll make some effort to approach this as if I were thinking about AGI timelines for the first time, using strategies that I apply in other domains.

I’m something like 60% confident that the most important factor in the speed of AI takeoff will be the availability of computing power.

I’ll focus here on the time to human-level AGI, but I suspect this reasoning implies getting from there to superintelligence at speeds that Bostrom would classify as slow or moderate.

Book review: The Causes of War and the Spread of Peace: But Will War Rebound?, by Azar Gat.

This book provides a good synthesis of the best ideas about why wars happen.

It overlaps a good deal with Pinker’s The Better Angels of Our Nature. Pinker provides much more detailed evidence, but Gat has a much better understanding than Pinker of the theories behind the trends.

Book review: Darwin’s Unfinished Symphony: How Culture Made the Human Mind, by Kevin N. Laland.

This book is a mostly good complement to Henrich’s The Secret of Our Success. The two books provide different, but strongly overlapping, perspectives on how cultural transmission of information played a key role in the evolution of human intelligence.

The first half of the book describes the importance of copying behavior in many animals.

I was a bit surprised that animals as simple as fruit flies are able to copy some behaviors of other fruit flies. Laland provides good evidence that a wide variety of species have evolved some ability to copy behavior, and that ability is strongly connected to the benefits of acquiring knowledge from others and the costs of alternative ways of acquiring that knowledge.

Yet I was also surprised that, except in humans, the value of copying is strongly limited by the low fidelity with which behavior is copied. Laland makes plausible claims that the need for high-fidelity copying of behavior was an important driving force behind the evolution of bigger and more sophisticated brains.

Laland claims that humans have a unique ability to teach, and that teaching is an important adaptation. He means teaching in a much broader sense than we see in schooling – he includes basic stuff that could have preceded language, such as a parent directing a child’s attention to things that the child ought to learn. This seems like a good extension to Henrich’s ideas.

The most interesting chapter theorizes about the origin of human language. Laland’s theory that language evolved for teaching implies maybe a bit stronger selection pressure than competing theories do, but he doesn’t provide much reason to reject those competitors.

Laland presents seven criteria for a good explanation of the evolution of language. But these criteria look somewhat biased toward his theory.

Laland’s first two criteria are that language should have been initially honest and cooperative. He implies that it must have been more honest and cooperative than modern language use is, but he isn’t as clear about that as I would like. Those two criteria seem designed as arguments against the theory that language evolved to impress potential mates. The mate-selection theory involves plenty of competition, and presumably a fair amount of deception. But better communicators do convey important evidence about the quality of their genes, even if they’re engaging in some deception. That seems sufficient to drive the evolution of language via mate-selection pressures.

Laland’s theory seems to provide a somewhat better explanation of when language evolved than most other theories do, so I’m inclined to treat it as one of the top theories. But I don’t expect any consensus on this topic anytime soon.

The book’s final four chapters seemed much less interesting. I recommend skipping them.

Henrich’s book emphasized evidence that humans are pretty similar to other apes. Laland emphasizes ways in which humans are unique (language and teaching ability). I didn’t notice any cases where they directly contradicted each other, but it’s a bit disturbing that they left quite different impressions while saying mostly accurate things.

Henrich claimed that increasing climate variability increased the rewards for the fast adaptation that culture enabled. Laland disagrees, saying that cultural change itself is a more plausible explanation for the kind of environmental change that incentivized faster adaptation. My intuition says that Laland’s conclusion is correct, but he seems a bit overconfident about it.

Overall, Laland’s book is less comprehensive and less impressive than Henrich’s book, but is still good enough to be in my top ten list of books on the evolution of intelligence.

Update on 2017-08-18: I just read another theory about the evolution of language which directly contradicts Laland’s claim that early language needed to be honest and cooperative. Wild Voices: Mimicry, Reversal, Metaphor, and the Emergence of Language claims that an important role of initial human vocal flexibility was to deceive other species.