evolution

Book review: Why Everyone (Else) Is a Hypocrite: Evolution and the Modular Mind, by Robert Kurzban.

Minds are Modular

Many people explain minds by positing that they’re composed of parts:

  • the id, ego, and super-ego
  • the left side and the right side of the brain
  • System 1 and System 2
  • the triune brain
  • Marvin Minsky’s Society of Mind

Minsky’s proposal is the only one of these that resembles Kurzban’s notion of modularity enough to earn Kurzban’s respect. The modules Kurzban talks about are much more numerous, and more specialized, than most people are willing to imagine.

Here’s Kurzban’s favorite Minsky quote:

The mind is a community of “agents.” Each has limited powers and can communicate only with certain others. The powers of mind emerge from their interactions for none of the Agents, by itself, has significant intelligence. […] Everyone knows what it feels like to be engaged in a conversation with oneself. In this book, we will develop the idea that these discussions really happen, and that the participants really “exist.” In our picture of the mind we will imagine many “sub-persons”, or “internal agents”, interacting with one another. Solving the simplest problem—seeing a picture—or remembering the experience of seeing it—might involve a dozen or more—perhaps very many more—of these agents playing different roles. Some of them bear useful knowledge, some of them bear strategies for dealing with other agents, some of them carry warnings or encouragements about how the work of others is proceeding. And some of them are concerned with discipline, prohibiting or “censoring” others from thinking forbidden thoughts.

Let’s take the US government as a metaphor. Instead of saying it’s composed of the legislative, executive, and judicial modules, Kurzban would describe it as being made up of modules such as a White House press secretary, Anthony Fauci, a Speaker of the House, more generals than I can name, even more park rangers, etc.

In What Is It Like to Be a Bat?, Nagel says “our own mental activity is the only unquestionable fact of our experience”. In contrast, Kurzban denies that we know more than a tiny fraction of our mental activity. We don’t ask “what is it like to be an edge detector?”, because there was no evolutionary pressure to enable us to answer that question. It could be that most human experience is as mysterious to our conscious minds as bat experiences are. Most of our introspection involves examining a mental model that we construct for propaganda purposes.
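To make the edge-detector example concrete, here is a minimal sketch (my illustration, not anything from the book) of the kind of narrow, special-purpose module Kurzban has in mind. It answers exactly one question about its input, and exposes nothing about its internals to whatever consumes its output:

```python
import numpy as np

def sobel_edges(image: np.ndarray) -> np.ndarray:
    """A tiny 'module': it detects edges and knows nothing else.

    Like the modules Kurzban describes, it answers one narrow
    question (where does brightness change sharply?) without
    reporting anything about how it computed the answer.
    """
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T  # vertical-gradient kernel
    h, w = image.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = image[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(kx * patch)
            gy[i, j] = np.sum(ky * patch)
    return np.hypot(gx, gy)  # edge strength at each interior pixel

edges = sobel_edges(np.random.rand(8, 8))  # stand-in for a retinal patch
```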

Is Self-Deception Mysterious?

There’s been a good deal of confusion about self-deception and self-control. Kurzban attributes the confusion to attempts at modeling the mind as a unitary agent. If there’s a single homunculus in charge of all of the mind’s decisions, then it’s genuinely hard to explain phenomena that look like conflicts between agents.

With a sufficiently modular model of minds, the confusion mostly vanishes.

A good deal of what gets called self-deception is better described as being strategically wrong.

For example, when President Trump had COVID, the White House press secretary had a strong incentive not to be aware of any evidence that Trump’s health was worse than expected, in order to reassure voters without being clearly dishonest. The White House doctor, in contrast, had some reason to err a bit on the side of overestimating Trump’s risk of dying. So it shouldn’t surprise us if they had rather different beliefs. I don’t describe that situation as “the US government is deceiving itself”, but I’d be confused about whether to describe it that way if I could only imagine the government as a unitary agent.

Minds work much the same way. E.g. the cancer patient who buys space on a cruise that his doctor says he won’t live to enjoy (presumably to persuade allies that he’ll be around long enough to be worth allying with), while still following the doctor’s advice about how to treat the cancer. A modular model of the mind isn’t surprised that his mind holds inconsistent beliefs about how serious the cancer is. The patient’s press-secretary-like modules are pursuing a strategy of getting friends to make long-term plans to support the patient. They want accurate enough knowledge of the patient’s health to sound credible. Why would they want to be more accurate than that?

Self-Control

Kurzban sees less value in the concept of a self than most Buddhists do. As he puts it:

almost any time you come across a theory with the word “self” in it, you should check your wallet.

Self-control has problems that are similar to the problems with the concept of self-deception. It’s best thought of as conflicts between modules.

We should expect context-sensitive influences on which modules exert the most influence on decisions. E.g. we should expect a calorie-acquiring module to have more influence when a marshmallow is in view than when a path to curing cancer is in view. That makes it hard for a mind to have a stable preference about the relative value of eating a marshmallow versus curing cancer.
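Here’s a toy model (my sketch, not Kurzban’s formalism) of that kind of context-sensitive influence: each module scores the available options, context determines how loudly each module is heard, and the “preference” that emerges can flip when the context changes:

```python
# Toy model of context-sensitive module influence: each module scores
# the options, and context determines how much weight its scores get.

def decide(options, modules, context):
    """Pick the option with the highest context-weighted total score."""
    def total(option):
        return sum(m["weight"](context) * m["score"](option) for m in modules)
    return max(options, key=total)

modules = [
    {   # calorie-acquiring module: loud when food is in view
        "weight": lambda ctx: 5.0 if ctx["marshmallow_in_view"] else 0.5,
        "score": lambda opt: 1.0 if opt == "eat marshmallow" else 0.0,
    },
    {   # long-term-project module: a steadier, quieter voice
        "weight": lambda ctx: 2.0,
        "score": lambda opt: 1.0 if opt == "work on curing cancer" else 0.0,
    },
]

options = ["eat marshmallow", "work on curing cancer"]
print(decide(options, modules, {"marshmallow_in_view": True}))   # eat marshmallow
print(decide(options, modules, {"marshmallow_in_view": False}))  # work on curing cancer
```

The flip between the two calls is the point: nothing in the model corresponds to a single stable preference over marshmallows versus cancer research.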

If I think I see a path to curing cancer that is certain to succeed, my cancer-research modules ought to get more attention than my calorie-acquiring modules. I’m pretty sure that’s what would happen if I had good evidence that I’m about to cure cancer. But a more likely situation is that my press-secretary-like modules say I’ll succeed, and some less eloquent modules say I’ll fail. That will look like a self-control problem to those who want the press secretary to be in charge, and look more like politics to those who take Kurzban’s view.

I could identify some of my brain’s modules as part of my “self”, and say that self-control refers to those modules overcoming the influence of the non-self parts of my brain. But the more I think like Kurzban, the more arbitrary it seems to treat some modules as more privileged than others.

The Rest

Along the way, Kurzban makes fun of the literature on self-esteem, and of models that say self-control is a function of resources.

The book consists mostly of easy-to-read polemics for ideas that ought to be obvious, but which our culture resists.

Warning: you should skip the chapter titled Morality and Contradictions. Kurzban co-authored a great paper called A Solution to the Mysteries of Morality. But in this book, his controversial examples of hypocrisy will distract most readers from the rather unremarkable wisdom that the examples illustrate.

Lifespan

Book review: Lifespan: Why We Age – and Why We Don’t Have To, by David A. Sinclair.

A decade ago, the belief that aging could be cured was just barely starting to get attention from mainstream science, and the main arguments for a cure came from people with somewhat marginal formal credentials.

Now we have a book by an author who’s a co-chief editor of the scientific journal Aging. He’s the cofounder of 14 biotech companies (probably more than he’s had time to work at full time, so I’m guessing some of them list him as a cofounder more for prestige than for his labor). He’s even respected enough that some supplement companies market products using his name, and keep doing so after he sends them cease-and-desist letters.

I’m glad that Sinclair published a book that says aging can be cured, since there’s still a shortage of eminent scientists who are willing to take that position.


Book review: The Longevity Diet: Discover the New Science Behind Stem Cell Activation and Regeneration to Slow Aging, Fight Disease, and Optimize Weight, by Valter Longo.

Longo is a moderately competent researcher whose ideas about nutrition and fasting are mostly heading in the right general direction, but many of his details look suspicious.

He convinced me to become more serious about occasional, longer fasts, but I probably won’t use his products.

[Warning: long post, of uncertain value, with annoyingly uncertain conclusions.]

This post will focus on how hardware (CPU power) will affect AGI timelines. I will undoubtedly overlook some important considerations; this is just a model of some important effects that I understand how to analyze.

I’ll make some effort to approach this as if I were thinking about AGI timelines for the first time, and focusing on strategies that I use in other domains.

I’m something like 60% confident that the most important factor in the speed of AI takeoff will be the availability of computing power.

I’ll focus here on the time to human-level AGI, but I suspect this reasoning implies getting from there to superintelligence at speeds that Bostrom would classify as slow or moderate.

Book review: The Causes of War and the Spread of Peace: But Will War Rebound?, by Azar Gat.

This book provides a good synthesis of the best ideas about why wars happen.

It overlaps a good deal with Pinker’s The Better Angels of Our Nature. Pinker provides much more detailed evidence, but Gat has a much better understanding than Pinker of the theories behind the trends.

Book review: Darwin’s Unfinished Symphony: How Culture Made the Human Mind, by Kevin N. Laland.

This book is a mostly good complement to Henrich’s The Secret of our Success. The two books provide different, but strongly overlapping, perspectives on how cultural transmission of information played a key role in the evolution of human intelligence.

The first half of the book describes the importance of copying behavior in many animals.

I was a bit surprised that animals as simple as fruit flies are able to copy some behaviors of other fruit flies. Laland provides good evidence that a wide variety of species have evolved some ability to copy behavior, and that ability is strongly connected to the benefits of acquiring knowledge from others and the costs of alternative ways of acquiring that knowledge.

Yet I was also surprised that, except in humans, the value of copying is strongly limited by the low reliability with which behavior is copied. Laland makes plausible claims that the need for high-fidelity copying of behavior was an important driving force behind the evolution of bigger and more sophisticated brains.

Laland claims that humans have a unique ability to teach, and that teaching is an important adaptation. He means teaching in a much broader sense than we see in schooling – he includes basic stuff that could have preceded language, such as a parent directing a child’s attention to things that the child ought to learn. This seems like a good extension to Henrich’s ideas.

The most interesting chapter theorizes about the origin of human language. Laland’s theory that language evolved for teaching posits a somewhat stronger selection pressure than competing theories do, but he doesn’t provide much reason to reject those competitors.

Laland presents seven criteria for a good explanation of the evolution of language. But these criteria look somewhat biased toward his theory.

Laland’s first two criteria are that language should have been initially honest and cooperative. He implies that it must have been more honest and cooperative than modern language use is, but he isn’t as clear about that as I would like. Those two criteria seem designed as arguments against the theory that language evolved to impress potential mates. The mate-selection theory involves plenty of competition, and presumably a fair amount of deception. But better communicators do convey important evidence about the quality of their genes, even if they’re engaging in some deception. That seems sufficient to drive the evolution of language via mate-selection pressures.

Laland’s theory seems to provide a somewhat better explanation of when language evolved than most other theories do, so I’m inclined to treat it as one of the top theories. But I don’t expect any consensus on this topic anytime soon.

The book’s final four chapters seemed much less interesting. I recommend skipping them.

Henrich’s book emphasized evidence that humans are pretty similar to other apes. Laland emphasizes ways in which humans are unique (language and teaching ability). I didn’t notice any cases where they directly contradicted each other, but it’s a bit disturbing that they left quite different impressions while saying mostly appropriate things.

Henrich claimed that increasing climate variability created increased rewards for the fast adaptation that culture enabled. Laland disagrees, saying that cultural change itself is a more plausible explanation for the kind of environmental change that incentivized faster adaptation. My intuition says that Laland’s conclusion is correct, but he seems a bit overconfident about it.

Overall, Laland’s book is less comprehensive and less impressive than Henrich’s book, but is still good enough to be in my top ten list of books on the evolution of intelligence.

Update on 2017-08-18: I just read another theory about the evolution of language which directly contradicts Laland’s claim that early language needed to be honest and cooperative. Wild Voices: Mimicry, Reversal, Metaphor, and the Emergence of Language claims that an important role of initial human vocal flexibility was to deceive other species.

Or, why I don’t fear the p-zombie apocalypse.

This post analyzes concerns about how evolution, in the absence of a powerful singleton, might, in the distant future, produce what Nick Bostrom calls a “Disneyland without children”. I.e. a future with many agents, whose existence we don’t value because they are missing some important human-like quality.

The most serious description of this concern is in Bostrom’s The Future of Human Evolution. Bostrom is cautious enough that it’s hard to disagree with anything he says.

Age of Em has prompted a batch of similar concerns. Scott Alexander at SlateStarCodex has one of the better discussions (see section IV of his review of Age of Em).

People sometimes sound like they want to use this worry as an excuse to oppose the age of em scenario, but it applies to just about any scenario with human-in-a-broad-sense actors. If uploading never happens, biological evolution could produce slower paths to the same problem(s) [1]. Even in the case of a singleton AI, the singleton will need to solve the tension between evolution and our desire to preserve our values, although in that scenario it’s more important to focus on how the singleton is designed.

These concerns often assume something like the age of em lasts forever. The scenario which Age of Em analyzes seems unstable, in that it’s likely to be altered by stranger-than-human intelligence. But concerns about evolution only depend on control being sufficiently decentralized that there’s doubt about whether a central government can strongly enforce rules. That situation seems sufficiently stable to be worth analyzing.

I’ll refer to this thing we care about as X (qualia? consciousness? fun?), but I expect people will disagree on what matters for quite some time. Some people will worry that X is lost in uploading, others will worry that some later optimization process will remove X from some future generation of ems.

I’ll first analyze scenarios in which X is a single feature (in the sense that it would be lost in a single step). Later, I’ll try to analyze the other extreme, where X is something that could be lost in millions of tiny steps. Neither extreme seems likely, but I expect that analyzing the extremes will illustrate the important principles.


Book review: Other Minds: The Octopus, the Sea, and the Deep Origins of Consciousness, by Peter Godfrey-Smith.

This book describes some interesting mysteries, but provides little help at solving them.

It provides some pieces of a long-term perspective on the evolution of intelligence.

Cephalopods’ most recent common ancestor with vertebrates lived way back before the Cambrian explosion. Nervous systems back then were primitive enough that minds didn’t need to react to other minds, and predation was a rare accident, not something animals prepared carefully to cause and avoid.

So cephalopod intelligence evolved rather independently of most of the minds we observe. We could learn something about alien minds by understanding cephalopod minds.

Intelligence may even have evolved more than once in cephalopods – nobody seems to know whether octopuses evolved intelligence separately from squids/cuttlefish.

An octopus has a much less centralized mind than vertebrates do. Does an octopus have a concept of self? The book presents evidence that octopuses sometimes seem to think of their arms as parts of their self, yet hints that their concept of self is a good deal weaker than in humans, and maybe the octopus treats its arms as semi-autonomous entities.

2.

Does an octopus have color vision? Not via its photoreceptors the way many vertebrates do. Simple tests of octopuses’ ability to discriminate color also say no.

Yet octopuses clearly change color to camouflage themselves. They also change color in ways that suggest they’re communicating via a visual language. But to whom?

One speculative guess is that the color-producing parts act as color filters, with monochrome photoreceptors in the skin evaluating the color of the incoming light by how much the light is attenuated by the filters. So they “see” color with their skin, but not their eyes.
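To see why that guess is at least coherent, here is a toy numerical sketch (my illustration of the general idea, not a model from the book): monochrome intensity readings taken under several filters with known, different transmission profiles can be inverted to estimate the color of the incoming light:

```python
import numpy as np

# Hypothetical transmission profiles: how much of each wavelength band
# (red, green, blue) each of three colored filters lets through.
filters = np.array([
    [0.9, 0.3, 0.1],   # reddish filter: passes red, blocks blue
    [0.2, 0.9, 0.2],   # greenish filter
    [0.1, 0.3, 0.9],   # bluish filter
])

def readings(light):
    """Monochrome intensity sensed under each filter (total transmitted light)."""
    return filters @ light

def infer_color(sensed):
    """Invert the known filter profiles to estimate the incoming spectrum."""
    return np.linalg.solve(filters, sensed)

incoming = np.array([0.8, 0.1, 0.1])   # mostly-red light
print(infer_color(readings(incoming)))  # recovers ~[0.8, 0.1, 0.1]
```

A monochrome sensor under one filter says almost nothing about color; several differently-filtered readings of the same light, compared against each other, say a lot.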

That would still leave plenty of mystery about what they’re communicating.

3.

The author’s understanding of aging implies that few organisms die of aging in the wild. He sees evidence in octopuses that conflicts with this prediction, yet that doesn’t alert him to the growing evidence of problems with the standard theories of aging.

He says octopuses are subject to much predation. Why doesn’t this cause them to be scared of humans? He has surprising anecdotes of octopuses treating humans as friends, e.g. grabbing one and leading him on a ten-minute “tour”.

He mentions possible REM sleep in cuttlefish. That would almost certainly have evolved independently from vertebrate REM sleep, which must indicate something important.

I found the book moderately entertaining, but I was underwhelmed by the author’s expertise. The subtitle’s reference to “the Deep Origins of Consciousness” led me to expect more than I got.