The Human Mind

Book review: A Theory of Everyone: The New Science of Who We Are, How We Got Here, and Where We’re Going, by Michael Muthukrishna.

I found this book disappointing. Partly that’s because Muthukrishna had set my expectations too high.

I had previously blogged about a paper that he co-authored with Henrich on cultural influences on IQ. If the book’s ideas on that topic had been new to me, I’d be eagerly writing about them here. But I’ve already written enough about them in that blog post.

Another source of disappointment was that the book’s title is misleading. To the limited extent that the book focuses on a theory, it’s the theory that’s more clearly described in Henrich’s The Secret of Our Success. A Theory of Everyone feels more like a collection of blog posts than like a well-organized book.

Continue Reading

[I mostly wrote this to clarify my thoughts. I’m unclear whether this will be valuable for readers.]

I expect that within a decade, AI will be able to do 90% of current human jobs. I don’t mean that 90% of humans will be obsolete. I mean that the average worker could delegate 90% of their tasks to an AGI.

I’m confused about what this implies for an AI’s capacity for the kind of long-term planning and strategizing that would enable a poorly aligned AI to cause large-scale harm.

Is the ability to achieve long-term goals hard for an AI to develop?

Continue Reading

Disagreements related to what we value seem to explain maybe 10% of the disagreements over AI safety. This post will try to explain how I think about which values I care about perpetuating to the distant future.

Robin Hanson helped to clarify the choices in Which Of Your Origins Are You?:

The key hard question here is this: what aspects of the causal influences that lead to you do you now embrace, and which do you instead reject as “random” errors that you want to cut out? Consider two extremes.
At one extreme, one could endorse absolutely every random element that contributed to any prior choice or intuition.

At the other extreme, you might see yourself as primarily the result of natural selection, both of genes and of memes, and see your core non-random value as that of doing the best you can to continue to “win” at that game. … In this view, everything about you that won’t help your descendants be selected in the long run is a random error that you want to detect and reject.

In other words, the more idiosyncratic our criteria are for what we want to preserve into the distant future, the less we should expect to succeed.

Continue Reading

Book review: Why Everyone (Else) Is a Hypocrite: Evolution and the Modular Mind, by Robert Kurzban.

Minds are Modular

Many people explain minds by positing that they’re composed of parts:

  • the id, ego, and super-ego
  • the left side and the right side of the brain
  • System 1 and System 2
  • the triune brain
  • Marvin Minsky’s Society of Mind

Minsky’s proposal is the only one of these that resembles Kurzban’s notion of modularity enough to earn Kurzban’s respect. The modules Kurzban talks about are much more numerous, and more specialized, than most people are willing to imagine.

Here’s Kurzban’s favorite Minsky quote:

The mind is a community of “agents.” Each has limited powers and can communicate only with certain others. The powers of mind emerge from their interactions for none of the Agents, by itself, has significant intelligence. […] Everyone knows what it feels like to be engaged in a conversation with oneself. In this book, we will develop the idea that these discussions really happen, and that the participants really “exist.” In our picture of the mind we will imagine many “sub-persons”, or “internal agents”, interacting with one another. Solving the simplest problem—seeing a picture—or remembering the experience of seeing it—might involve a dozen or more—perhaps very many more—of these agents playing different roles. Some of them bear useful knowledge, some of them bear strategies for dealing with other agents, some of them carry warnings or encouragements about how the work of others is proceeding. And some of them are concerned with discipline, prohibiting or “censoring” others from thinking forbidden thoughts.

Let’s take the US government as a metaphor. Instead of saying it’s composed of the legislative, executive, and judicial modules, Kurzban would describe it as being made up of modules such as a White House press secretary, Anthony Fauci, a Speaker of the House, more generals than I can name, even more park rangers, etc.

In What Is It Like to Be a Bat?, Nagel says “our own mental activity is the only unquestionable fact of our experience”. In contrast, Kurzban denies that we know more than a tiny fraction of our mental activity. We don’t ask “what is it like to be an edge detector?”, because there was no evolutionary pressure to enable us to answer that question. It could be that most human experience is as mysterious to our conscious minds as bat experiences are. Most of our introspection involves examining a mental model that we construct for propaganda purposes.

Is Self-Deception Mysterious?

There’s been a good deal of confusion about self-deception and self-control. Kurzban attributes the confusion to attempts at modeling the mind as a unitary agent. If there’s a single homunculus in charge of all of the mind’s decisions, then it’s genuinely hard to explain phenomena that look like conflicts between agents.

With a sufficiently modular model of minds, the confusion mostly vanishes.

A good deal of what gets called self-deception is better described as being strategically wrong.

For example, when President Trump had COVID, the White House press secretary had a strong incentive not to be aware of any evidence that Trump’s health was worse than expected, in order to reassure voters without being clearly dishonest. Whereas the White House doctor had some reason to err a bit on the side of overestimating Trump’s risk of dying. So it shouldn’t surprise us if they had rather different beliefs. I don’t describe that situation as “the US government is deceiving itself”, but I’d be confused as to whether to describe it that way if I could only imagine the government as a unitary agent.

Minds work much the same way. E.g. the cancer patient who buys space on a cruise that his doctor says he won’t live to enjoy (presumably to persuade allies that he’ll be around long enough to be worth allying with), while still following the doctor’s advice about how to treat the cancer. A modular model of the mind isn’t surprised that his mind holds inconsistent beliefs about how serious the cancer is. The patient’s press-secretary-like modules are pursuing a strategy of getting friends to make long-term plans to support the patient. They want accurate enough knowledge of the patient’s health to sound credible. Why would they want to be more accurate than that?

Self-Control

Kurzban sees less value in the concept of a self than do most Buddhists.

almost any time you come across a theory with the word “self” in it, you should check your wallet.

Self-control has problems that are similar to the problems with the concept of self-deception. It’s best thought of as conflicts between modules.

We should expect context-sensitive influences on which modules exert the most influence on decisions. E.g. we should expect a calorie-acquiring module to have more influence when a marshmallow is in view than if a path to curing cancer is in view. That makes it hard for a mind to have a stable preference about how to value eating a marshmallow or curing cancer.

If I think I see a path to curing cancer that is certain to succeed, my cancer-research modules ought to get more attention than my calorie-acquiring modules. I’m pretty sure that’s what would happen if I had good evidence that I’m about to cure cancer. But a more likely situation is that my press-secretary-like modules say I’ll succeed, and some less eloquent modules say I’ll fail. That will look like a self-control problem to those who want the press secretary to be in charge, and look more like politics to those who take Kurzban’s view.

I could identify some of my brain’s modules as part of my “self”, and say that self-control refers to those modules overcoming the influence of the non-self parts of my brain. But the more I think like Kurzban, the more arbitrary it seems to treat some modules as more privileged than others.

The Rest

Along the way, Kurzban makes fun of the literature on self-esteem, and of models that say self-control is a function of resources.

The book consists mostly of easy-to-read polemics for ideas that ought to be obvious, but which our culture resists.

Warning: you should skip the chapter titled Morality and Contradictions. Kurzban co-authored a great paper called A Solution to the Mysteries of Morality. But in this book, his controversial examples of hypocrisy will distract most readers’ attention from the rather unremarkable wisdom that the examples illustrate.

Book review: Noise: A Flaw in Human Judgment, by Daniel Kahneman, Olivier Sibony, and Cass R. Sunstein.

Doctors are more willing to order a test for patients they see in the morning than for those they see late in the day.

Asylum applicants’ chances of prevailing may be as low as 5% or as high as 88%, purely due to which judge hears their case.

Clouds Make Nerds Look Good, in the sense that university admissions officers give higher weight to academic attributes on cloudy days.

These are examples of what the authors describe as an important and neglected problem.

A more precise description of the book’s topic is variations in judgment, with judgment defined as “measurement in which the instrument is a human mind”.

Continue Reading

Book review: The Geography of Thought: How Asians and Westerners Think Differently… and Why, by Richard E. Nisbett.

It is often said that travel is a good way to improve one’s understanding of other cultures.

The Geography of Thought discredits that saying, by being full of examples of cultural differences that 99.9% of travelers will overlook.

Here are a few of the insights I got from the book, but I’m pretty sure I wouldn’t have gotten from visiting Asia frequently:

Continue Reading

I said in my review of WEIRDest People that the Flynn effect seems like a natural consequence of thinking styles that became more analytical, abstract, reductionist, and numerical.

I’ll expand here on some questions which I swept under the rug, so that I could keep that review focused on the book’s most important aspects.

Cultural Bias

After reading WEIRDest People, I find that the goal of a culture-neutral IQ test looks strange (and, of course, WEIRD). At least as strange as trying to fix basketball to stop favoring tall people.

Continue Reading

Book review: The WEIRDest People in the World, by Joseph Henrich.

Wow!

Henrich previously wrote one of the best books of the last decade. Normally, I expect such an author’s future books to, at best, exhibit regression toward the mean. But Henrich’s grand overview of humanity’s first few million years was merely a modest portion of the ideas that he originally tried to fit into this magnum opus. Henrich couldn’t quite explain in one volume how humanity got all the way to industrial empires, so he split the explanation into two books.

The cartoon version of the industrial revolution: Protestant culture made the West more autistic.

However, explaining the most important event in history makes up only about 25% of this book’s focus and value.

Continue Reading

Book review: The Finders, by Jeffery A. Martin.

This book is about the states of mind that Martin labels Fundamental Wellbeing.

These seem to be what people seek through meditation, but Martin carefully avoids focusing on Buddhism, and says that other spiritual approaches produce similar states of mind.

Martin approaches the subject as if he were an anthropologist. I expect that’s about as rigorous as we should hope for on many of the phenomena that he studies.

The most important change associated with Fundamental Wellbeing involves the weakening or disappearance of the Narrative-Self (i.e. the voice that seems to be the center of attention in most human minds).

I’ve experienced a weak version of that. Through a combination of meditation and CFAR ideas (and maybe The Mating Mind, which helped me think of the Narrative-Self as more of a press secretary than as a leader), I’ve substantially reduced the importance that my brain attaches to my Narrative-Self, and that has significantly reduced how much I’m bothered by negative stimuli.

Some more “advanced” versions of Fundamental Wellbeing also involve a loss of “self” – something along the lines of being one with the universe, or having no central locus or vantage point from which to observe the world. I don’t understand this very well. Martin suggests an analogy which describes this feeling as “zoomed-out”, i.e. the opposite extreme from Hyperfocus or a state of Flow. I guess that gives me enough hints to say that I haven’t experienced anything that’s very close to it.

I’m tempted to rephrase this as turning off what Dennett calls the Cartesian Theater. Many of the people that Martin studied seem to have discarded this illusion.

Alas, the book says little about how to achieve Fundamental Wellbeing. The people he studied tended to have achieved it via some spiritual path, but it sounds like there was typically a good deal of luck involved. Martin has developed an allegedly more reliable path, available at FindersCourse.com, but it requires a rather inflexible commitment to a time-consuming schedule, and a fair amount of money.

Should I want to experience Fundamental Wellbeing?

Most people who experience it show a clear preference for remaining in that state. That’s a clear, medium-strength reason to suspect that I should want it, and it’s hard to see any counter to that argument.

The weak version of Fundamental Wellbeing that I’ve experienced tends to confirm that conclusion, although I see signs that some aspects require continuing attention to maintain, and the time required to do so sometimes seems large compared to the benefits.

Martin briefly discusses people who experienced Fundamental Wellbeing, and then rejected it. It reminds me of my reaction to an SSRI – it felt like I got a nice vacation, but vacation wasn’t what I wanted, since it conflicted with some of my goals for achieving life satisfaction. Those who reject Fundamental Wellbeing disliked the lack of agency and emotion (I think this refers only to some of the harder-to-achieve versions of Fundamental Wellbeing). That sounds like it overlaps a fair amount with what I experienced on the SSRI.

Martin reports that some of the people he studied have unusual reactions to pain, feeling bliss under circumstances that appear to involve lots of pain. I can sort of see how this is a plausible extreme of the effects that I understand, but it still sounds pretty odd.

Will the world be better if more people achieve Fundamental Wellbeing?

The world would probably be somewhat better. Some people become more willing and able to help others when they reduce their own suffering. But that’s partly offset by people with Fundamental Wellbeing feeling less need to improve themselves, and feeling less bothered by the suffering of others. So the net effect is likely just a minor benefit.

I expect that even in the absence of people treating each other better, the reduced suffering that’s associated with Fundamental Wellbeing would mean that the world is a better place.

However, it’s tricky to determine how important that is. Martin mentions a clear case of a person who said he felt no stress, but exhibited many physical signs of being highly stressed. Is that better or worse than being conscious of stress? I think my answer is very context-dependent.

If it’s so great, why doesn’t everyone learn how to do it?

  • Achieving Fundamental Wellbeing often causes people to have diminished interest in interacting with other people. Only a modest fraction of people who experience it attempt to get others to do so.
  • I presume it has been somewhat hard to understand how to achieve Fundamental Wellbeing, and why we should think it’s valuable.
  • The benefits are somewhat difficult to observe, and there are sometimes visible drawbacks. E.g. one anecdote describes a manager who became more generous with his company’s resources; that was likely good for some people, but at some cost to the company and/or his career.

Conclusion

The ideas in this book deserve to be more widely known.

I’m unsure whether that means lots of people should read this book. Maybe it’s more important just to repeat simple summaries of the book, and to practice more meditation.

[Note: I read a pre-publication copy that was distributed at the Transformative Technology conference.]