The Human Mind

Book review: The Finders, by Jeffery A Martin.

This book is about the states of mind that Martin labels Fundamental Wellbeing.

These seem to be what people seek through meditation, but Martin carefully avoids focusing on Buddhism, and says that other spiritual approaches produce similar states of mind.

Martin approaches the subject as if he were an anthropologist. I expect that’s about as rigorous as we should hope for on many of the phenomena that he studies.

The most important change associated with Fundamental Wellbeing involves the weakening or disappearance of the Narrative-Self (i.e. the voice that seems to be the center of attention in most human minds).

I’ve experienced a weak version of that. Through a combination of meditation and CFAR ideas (and maybe The Mating Mind, which helped me think of the Narrative-Self as more of a press secretary than as a leader), I’ve substantially reduced the importance that my brain attaches to my Narrative-Self, and that has significantly reduced how much I’m bothered by negative stimuli.

Some more “advanced” versions of Fundamental Wellbeing also involve a loss of “self” – something along the lines of being one with the universe, or having no central locus or vantage point from which to observe the world. I don’t understand this very well. Martin suggests an analogy which describes this feeling as “zoomed-out”, i.e. the opposite extreme from Hyperfocus or a state of Flow. I guess that gives me enough hints to say that I haven’t experienced anything that’s very close to it.

I’m tempted to rephrase this as turning off what Dennett calls the Cartesian Theater. Many of the people that Martin studied seem to have discarded this illusion.

Alas, the book says little about how to achieve Fundamental Wellbeing. The people he studied tended to achieve it via some spiritual path, but it sounds like there was typically a good deal of luck involved. Martin has developed an allegedly more reliable path, available at FindersCourse.com, but that requires a rather inflexible commitment to a time-consuming schedule, and a fair amount of money.

Should I want to experience Fundamental Wellbeing?

Most people who experience it show a clear preference for remaining in that state. That’s a medium-strength reason to suspect that I should want it, and it’s hard to see any counter to that argument.

The weak version of Fundamental Wellbeing that I’ve experienced tends to confirm that conclusion, although I see signs that some aspects require continuing attention to maintain, and the time required to do so sometimes seems large compared to the benefits.

Martin briefly discusses people who experienced Fundamental Wellbeing, and then rejected it. It reminds me of my reaction to an SSRI – it felt like I got a nice vacation, but vacation wasn’t what I wanted, since it conflicted with some of my goals for achieving life satisfaction. Those who reject Fundamental Wellbeing disliked the lack of agency and emotion (I think this refers only to some of the harder-to-achieve versions of Fundamental Wellbeing). That sounds like it overlaps a fair amount with what I experienced on the SSRI.

Martin reports that some of the people he studied have unusual reactions to pain, feeling bliss under circumstances that appear to involve lots of pain. I can sort of see how this is a plausible extreme of the effects that I understand, but it still sounds pretty odd.

Will the world be better if more people achieve Fundamental Wellbeing?

The world would probably be somewhat better. Some people become more willing and able to help others when they reduce their own suffering. But that’s partly offset by people with Fundamental Wellbeing feeling less need to improve themselves, and feeling less bothered by the suffering of others. So the net effect is likely just a minor benefit.

I expect that even in the absence of people treating each other better, the reduced suffering that’s associated with Fundamental Wellbeing would mean that the world is a better place.

However, it’s tricky to determine how important that is. Martin mentions a clear case of a person who said he felt no stress, but exhibited many physical signs of being highly stressed. Is that better or worse than being conscious of stress? I think my answer is very context-dependent.

If it’s so great, why doesn’t everyone learn how to do it?

  • Achieving Fundamental Wellbeing often causes people to have diminished interest in interacting with other people. Only a modest fraction of people who experience it attempt to get others to do so.
  • I presume it has been somewhat hard to understand how to achieve Fundamental Wellbeing, and why we should think it’s valuable.
  • The benefits are somewhat difficult to observe, and there are sometimes visible drawbacks. E.g. one anecdote of a manager who became more generous with his company’s resources – that was likely good for some people, but likely at some cost to the company and/or his career.

Conclusion

The ideas in this book deserve to be more widely known.

I’m unsure whether that means lots of people should read this book. Maybe it’s more important just to repeat simple summaries of the book, and to practice more meditation.

[Note: I read a pre-publication copy that was distributed at the Transformative Technology conference.]

Book review: Principles: Life and Work, by Ray Dalio.

Most popular books get that way by having an engaging style. Yet this book’s style is mundane, almost forgettable.

Some books become bestsellers by being controversial. Others become bestsellers by manipulating readers’ emotions, e.g. by being fun to read, or by getting the reader to overestimate how profound the book is. Principles definitely doesn’t fit those patterns.

Some books become bestsellers because the author became famous for reasons other than his writings (e.g. Stephen Hawking, Donald Trump, and Bill Gates). Principles fits this pattern somewhat well: if an obscure person had published it, nothing about it would have triggered a pattern of readers enthusiastically urging their friends to read it. I suspect the average book in this category is rather pathetic, but I also expect there’s a very large variance in the quality of books in this category.

Principles contains an unusual amount of wisdom. But it’s unclear whether that’s enough to make it a good book, because it’s unclear whether it will convince readers to follow the advice. Much of the advice sounds like ideas that most of us agree with already. The wisdom comes more in selecting the most underutilized ideas, without being particularly novel. The main benefit is likely to be that people who were already on the verge of adopting the book’s advice will get one more nudge from an authority, providing the social reassurance they need.

Advice

Some of why I trust the book’s advice is that it overlaps a good deal with other sources from which I’ve gotten value, e.g. CFAR.

Key ideas include:

  • be honest with yourself
  • be open-minded
  • focus on identifying and fixing your most important weaknesses

Continue Reading

Book review: Surfing Uncertainty: Prediction, Action, and the Embodied Mind, by Andy Clark.

Surfing Uncertainty describes minds as hierarchies of prediction engines. Most behavior involves interactions between a stream of information that uses low-level sensory data to adjust higher level predictive models of the world, and another stream of data coming from high-level models that guides low-level sensory processes to better guess the most likely interpretations of ambiguous sensory evidence.
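
The two interacting streams can be caricatured in a few lines of code. This is my own toy sketch, not a model from the book: a single “high-level” belief issues top-down predictions, and bottom-up prediction errors nudge that belief until the sensory input is explained away.

```python
# Toy one-layer predictive processing loop (an illustration, not Clark's model).
# The high level holds a belief about a hidden cause; it predicts the sensory
# input from that belief, and the prediction error flows back up to update it.

def run_pp(observations, learning_rate=0.1):
    belief = 0.0  # high-level estimate of the hidden cause
    for obs in observations:
        prediction = belief              # top-down prediction of sensory input
        error = obs - prediction         # bottom-up prediction error
        belief += learning_rate * error  # update belief to shrink future error
    return belief

# With a steady signal, the belief converges toward the true cause.
estimate = run_pp([1.0] * 100)
```

A real PP hierarchy stacks many such layers and weights errors by their expected precision; this sketch only shows the core predict-compare-update loop.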

Clark calls this a predictive processing (PP) model; others refer to it as predictive coding.

The book is full of good ideas, presented in a style that sapped my curiosity.

Jeff Hawkins has a more eloquent book about PP (On Intelligence), which focuses on how PP might be used to create artificial intelligence. The underwhelming progress of the company Hawkins started to capitalize on these ideas suggests it wasn’t the breakthrough that AI researchers were groping for. In contrast, Clark focuses on how PP helps us understand existing minds.

The PP model clearly has some value. The book was a bit more thorough than I wanted at demonstrating that. Since I didn’t find that particularly new or surprising, I’ll focus most of this review on a few loose threads that the book left dangling. So don’t treat this as a summary of the book (see Slate Star Codex if you want that, or if my review is too cryptic to understand), but rather as an exploration of the questions that the book provoked me to think about.

Continue Reading

Book review: The Elephant in the Brain, by Kevin Simler and Robin Hanson.

This book is a well-written analysis of human self-deception.

Only small parts of this book will seem new to long-time readers of Overcoming Bias. It’s written more to bring those ideas to a wider audience.

Large parts of the book will seem obvious to cynics, but few cynics have attempted to explain the breadth of patterns that this book does. Most cynics focus on complaints about some group of people having worse motives than the rest of us. This book sends a message that’s much closer to “We have met the enemy, and he is us.”

The authors claim to be neutrally describing how the world works (“We aren’t trying to put our species down or rub people’s noses in their own shortcomings.”; “… we need this book to be a judgment-free zone”). It’s less judgmental than the average book that I read, but it’s hardly neutral. The authors are criticizing, in the sense that they’re rubbing our noses in evidence that humans are less virtuous than many people claim humans are. Darwin unavoidably put our species down in the sense of discrediting beliefs that we were made in God’s image. This book continues in a similar vein.

This suggests the authors haven’t quite resolved the conflict between their dreams of upholding the highest ideals of science (pursuit of pure knowledge for its own sake) and their desire to solve real-world problems.

The book needs to be (and mostly is) non-judgmental about our actual motives, in order to maximize our comfort with acknowledging those motives. The book is appropriately judgmental about people who pretend to have more noble motives than they actually have.

The authors do a moderately good job of admitting to their own elephants, but I get the sense that they’re still pretty hesitant about doing so.

Impact

Most people will underestimate the effects that the book describes.

Continue Reading

Book review: Inadequate Equilibria, by Eliezer Yudkowsky.

This book (actually halfway between a book and a series of blog posts) attacks the goal of epistemic modesty, which I’ll loosely summarize as reluctance to believe that one knows better than the average person.

1.

The book starts by focusing on the base rate for high-status institutions having harmful incentive structures, charting a middle ground between the excessive respect for those institutions that we see in mainstream sources, and the cynicism of most outsiders.

There’s a weak sense in which this is arrogant, namely that if it were obvious to the average voter how to improve on these problems, then I’d expect the problems to be fixed. So people who claim to detect such problems ought to have decent evidence that they’re above average in the relevant skills. There are plenty of people who can rationally decide that applies to them. (Eliezer doubts that advising the rest to be modest will help; I suspect there are useful approaches to instilling modesty in people who should be more modest, but it’s not easy.) Also, below-average people rarely seem to be attracted to Eliezer’s writings.

Later parts of the book focus on more personal choices, such as choosing a career.

Some parts of the book seem designed to show off Eliezer’s lack of need for modesty – sometimes successfully, sometimes leaving me suspecting he should be more modest (usually in ways that are somewhat orthogonal to his main points; i.e. his complaints about “reference class tennis” suggest overconfidence in his understanding of his debate opponents).

2.

Eliezer goes a bit overboard in attacking the outside view. He starts with legitimate complaints about people misusing it to justify rejecting theory and adopting “blind empiricism” (a mistake that I’ve occasionally made). But he partly rejects the advice that Tetlock gives in Superforecasting. I’m pretty sure Tetlock knows more about this domain than Eliezer does.

E.g. Eliezer says “But in novel situations where causal mechanisms differ, the outside view fails—there may not be relevantly similar cases, or it may be ambiguous which similar-looking cases are the right ones to look at.”, but Tetlock says ‘Nothing is 100% “unique” … So superforecasters conduct creative searches for comparison classes even for seemingly unique events’.

Compare Eliezer’s “But in many contexts, the outside view simply can’t compete with a good theory” with Tetlock’s commandment number 3 (“Strike the right balance between inside and outside views”). Eliezer seems to treat the approaches as antagonistic, whereas Tetlock advises us to find a synthesis in which the approaches cooperate.

3.

Eliezer provides a decent outline of what causes excess modesty. He classifies the two main failure modes as anxious underconfidence, and status regulation. Anxious underconfidence definitely sounds like something I’ve felt somewhat often, and status regulation seems pretty plausible, but harder for me to detect.

Eliezer presents a clear model of why status regulation exists, but his explanation for anxious underconfidence doesn’t seem complete. Here are some of my ideas about possible causes of anxious underconfidence:

  • People evaluate mistaken career choices and social rejection as if they meant death (which was roughly true until quite recently), so extreme risk aversion made sense;
  • Inaction (or choosing the default action) minimizes blame. If I carefully consider an option, my choice says more about my future actions than if I neglect to think about the option;
  • People often evaluate their success at life by counting the number of correct and incorrect decisions, rather than adding up the value produced;
  • People who don’t grok the Bayesian meaning of the word “evidence” are likely to privilege the scientific and legal meanings of evidence. So beliefs based on more subjective evidence get treated as second class citizens.
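
That last bullet is easy to make concrete. In the Bayesian sense, any observation that is more likely under a hypothesis than under its negation counts as evidence, however subjective. A toy calculation (mine, not Eliezer’s):

```python
# Bayes' rule: P(H | E) from a prior P(H) and the likelihoods P(E | H) and
# P(E | not H). Any likelihood ratio above 1 shifts belief toward H.

def posterior(prior, likelihood_h, likelihood_not_h):
    numerator = prior * likelihood_h
    return numerator / (numerator + (1 - prior) * likelihood_not_h)

# A vague hunch that is merely twice as likely if H is true still moves a
# 50% prior to about 67% -- evidence in the Bayesian sense, even though no
# court or journal would accept it.
p = posterior(0.5, 0.6, 0.3)
```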

I suspect that most harm from excess modesty (and also arrogance) happens in evolutionarily novel contexts. Decisions such as creating a business plan for a startup, or writing a novel that sells a million copies, are sufficiently different from what we evolved to do that we should expect over/underconfidence to cause more harm.

4.

Another way to summarize the book would be: don’t aim to overcompensate for overconfidence; instead, aim to eliminate the causes of overconfidence.

This book will be moderately popular among Eliezer’s fans, but it seems unlikely to greatly expand his influence.

It didn’t convince me that epistemic modesty is generally harmful, but it does provide clues to identifying significant domains in which epistemic modesty causes important harm.

Book review: Into the Gray Zone: A Neuroscientist Explores the Border Between Life and Death, by Adrian Owen.

Too many books and talks have gratuitous displays of fMRIs and neuroscience. At last, here’s a book where fMRIs are used with fairly good reason, and neuroscience is explained only when that’s appropriate.

Owen provides evidence of near-normal brain activity in a modest fraction of people who had been classified as being in a persistent vegetative state. They are capable of answering yes or no to most questions, and show signs of understanding the plots of movies.

Owen believes this evidence is enough to say they’re conscious. I suspect he’s mostly right about that, and that they do experience much of the brain function that is typically associated with consciousness. Owen doesn’t have any special insights into what we mean by the word consciousness. He mostly just investigates how to distinguish between near-normal mental activity and seriously impaired mental activity.

So what were neurologists previously using to classify people as vegetative? As far as I can tell, they were diagnosing based on a lack of motor responses, even though they were aware of an alternate diagnosis, total locked-in syndrome, with identical symptoms. The terms locked-in syndrome and persistent vegetative state were both coined (in part) by the same person (though I’m unclear who coined the term total locked-in syndrome).

My guess is that the diagnoses have been influenced by a need for certainty (whose need? family members’? doctors’? it’s not obvious).

The book has a bunch of mostly unremarkable comments about ethics. But I was impressed by Owen’s observation that people misjudge whether they’d want to die if they end up in a locked-in state. So how likely is it they’ll mispredict what they’d want in other similar conditions? I should have deduced this from the book Stumbling on Happiness, but I failed to think about it.

I’m a bit disturbed by Owen’s claim that late-stage Alzheimer’s patients have no sense of self. He doesn’t cite evidence for this conclusion, and his research should hint to him that it would be quite hard to get good evidence on this subject.

Most books written by scientists who made interesting discoveries attribute the author’s success to their competence. This book provides clear evidence for the accidental nature of at least some science. Owen could easily have gotten no signs of consciousness from the first few patients he scanned. Given the effort needed for the scans, I can imagine that that would have resulted in a mistaken consensus of experts that vegetative states were being diagnosed correctly.

Book review: Darwin’s Unfinished Symphony: How Culture Made the Human Mind, by Kevin N. Laland.

This book is a mostly good complement to Henrich’s The Secret of our Success. The two books provide different, but strongly overlapping, perspectives on how cultural transmission of information played a key role in the evolution of human intelligence.

The first half of the book describes the importance of copying behavior in many animals.

I was a bit surprised that animals as simple as fruit flies are able to copy some behaviors of other fruit flies. Laland provides good evidence that a wide variety of species have evolved some ability to copy behavior, and that ability is strongly connected to the benefits of acquiring knowledge from others and the costs of alternative ways of acquiring that knowledge.

Yet I was also surprised that the value of copying is strongly limited by the low reliability with which behavior is copied, except with humans. Laland makes plausible claims that the need for high-fidelity copying of behavior was an important driving force behind the evolution of bigger and more sophisticated brains.

Laland claims that humans have a unique ability to teach, and that teaching is an important adaptation. He means teaching in a much broader sense than we see in schooling – he includes basic stuff that could have preceded language, such as a parent directing a child’s attention to things that the child ought to learn. This seems like a good extension to Henrich’s ideas.

The most interesting chapter theorizes about the origin of human language. Laland’s theory that language evolved for teaching provides maybe a bit stronger selection pressure than other theories, but he doesn’t provide much reason to reject competing theories.

Laland presents seven criteria for a good explanation of the evolution of language. But these criteria look somewhat biased toward his theory.

Laland’s first two criteria are that language should have been initially honest and cooperative. He implies that it must have been more honest and cooperative than modern language use is, but he isn’t as clear about that as I would like. Those two criteria seem designed as arguments against the theory that language evolved to impress potential mates. The mate-selection theory involves plenty of competition, and presumably a fair amount of deception. But better communicators do convey important evidence about the quality of their genes, even if they’re engaging in some deception. That seems sufficient to drive the evolution of language via mate-selection pressures.

Laland’s theory seems to provide a somewhat better explanation of when language evolved than most other theories do, so I’m inclined to treat it as one of the top theories. But I don’t expect any consensus on this topic anytime soon.

The book’s final four chapters seemed much less interesting. I recommend skipping them.

Henrich’s book emphasized evidence that humans are pretty similar to other apes. Laland emphasizes ways in which humans are unique (language and teaching ability). I didn’t notice any cases where they directly contradicted each other, but it’s a bit disturbing that they left quite different impressions while saying mostly appropriate things.

Henrich claimed that increasing climate variability created increased rewards for the fast adaptation that culture enabled. Laland disagrees, saying that cultural change itself is a more plausible explanation for the kind of environmental change that incentivized faster adaptation. My intuition says that Laland’s conclusion is correct, but he seems a bit overconfident about it.

Overall, Laland’s book is less comprehensive and less impressive than Henrich’s book, but is still good enough to be in my top ten list of books on the evolution of intelligence.

Update on 2017-08-18: I just read another theory about the evolution of language which directly contradicts Laland’s claim that early language needed to be honest and cooperative. Wild Voices: Mimicry, Reversal, Metaphor, and the Emergence of Language claims that an important role of initial human vocal flexibility was to deceive other species.

Book review: The Hungry Brain: Outsmarting the Instincts That Make Us Overeat, by Stephan Guyenet.

Researchers who studied obesity in rats used to have trouble coaxing their rats to overeat. The obvious approaches (a high fat diet, or a high sugar diet) were annoyingly slow. Then they stumbled on the approach of feeding human junk food to the rats, and made much faster progress.

What makes something “junk food”? The best parts of this book help to answer this, although some ambiguity remains. It mostly boils down to palatability (is it yummier than what our ancestors evolved to expect? If so, it’s somewhat addictive) and caloric density.

Presumably designers of popular snack foods have more sophisticated explanations of what makes people obese, since that’s apparently identical to what they’re paid to optimize (with maybe a few exceptions, such as snacks that are marketed as healthy or ethical). Yet researchers who officially study obesity seem reluctant to learn from snack food experts. (Because they’re the enemy? Because they’re low status? Because they work for evil corporations? Your guess is likely as good as mine.)

Guyenet provides fairly convincing evidence that it’s simple to achieve a healthy weight while feeling full. (E.g. the 20 potatoes a day diet). To the extent that we need willpower, it’s to avoid buying convenient/addictive food, and to avoid restaurants.

My experience is that I need a moderate amount of willpower to follow Guyenet’s diet ideas, and that it would require a large amount of willpower if I attended many social events involving food. But for full control over my weight, it seemed like I needed to supplement a decent diet with some form of intermittent fasting (e.g. alternate day calorie restriction); Guyenet says little about that.

Guyenet’s practical advice boils down to a few simple rules: eat whole foods that resemble what our ancestors ate; don’t have other “food” anywhere that you can quickly grab it; sleep well; exercise; avoid stress. That’s sufficiently similar to advice I’ve heard before that I’m confident The Hungry Brain won’t revolutionize many people’s understanding of obesity. But it’s got a pretty good ratio of wisdom to questionable advice, and I’m unaware of reasons to expect much more than that.

Guyenet talks a lot about neuroscience. That would make sense if readers wanted to learn how to fix obesity via brain surgery. The book suggests that, in the absence of ethical constraints, it might be relatively easy to cure obesity by brain surgery. Yet I doubt such a solution would become popular, even given optimistic assumptions about safety.

An alternate explanation is that Guyenet is showing off his knowledge of brains, in order to show that he’s smart enough to have trustworthy beliefs about diets. But that effect is likely small, due to competition among diet-mongers for comparable displays of smartness.

Or maybe he’s trying to combat dualism, in order to ridicule the “just use willpower” approach to diet? Whatever the reason is, the focus on neuroscience implies something unimpressive about the target audience.

You should read this book if you eat a fairly healthy diet but are still overweight. Otherwise, read Guyenet’s blog instead, for a wider variety of health advice.

Book review: Daring Greatly: How the Courage to Be Vulnerable Transforms the Way We Live, Love, Parent, and Lead, by Brene Brown.

I almost didn’t read this because I was unimpressed by the TEDx video version of it, but parts of the book were pretty good (mainly chapters 3 and 4).

The book helped clarify my understanding of shame: how it differs from guilt, how it often constrains us without accomplishing anything useful, and how to reduce it.

She emphasizes that we can reduce shame by writing down or talking about shameful thoughts. She doesn’t give a strong explanation of what would cause that effect, but she prompted me to generate one: parts of my subconscious mind initially want to hide the shameful thoughts, and that causes them to fight the parts of my mind that want to generate interesting ideas. The act of communicating those ideas to the outside world convinces those censor-like parts of my mind to worry less about the ideas (because it’s too late? or because the social response is evidence that the censor was mistakenly worried? I don’t know).

I was a bit confused by her use of the phrase “scarcity culture”. I was initially tempted to imagine she wanted us to take a Panglossian view in which we ignore the resource constraints that keep us from eliminating poverty. But the context suggests she’s thinking more along the lines of “a culture of envy”. Or maybe a combination of perfectionism plus status seeking? Her related phrase “never enough” makes sense if I interpret it as “never impressive enough”.

I find it hard to distinguish those “bad” attitudes from the attitudes that seem important for me to strive for self-improvement.

She attempts to explain that distinction in a section on perfectionism. She compares perfectionism to healthy striving by noting that perfectionism focuses on what other people will think of us, whereas healthy striving is self-focused. Yet I’m pretty sure I’ve managed to hurt myself with perfectionism while focusing mostly on worries about how I’ll judge myself.

I suspect that healthy striving requires more focus on the benefits of success, and less attention to fear of failure, than is typical of perfectionism. The book hints at this, but doesn’t say it clearly when talking about perfectionism. Maybe she describes perfectionism better in her book The Gifts of Imperfection. Should I read that?

Her claim “When we stop caring about what people think, we lose our capacity for connection” feels important, and points to an area where I have trouble.

The book devotes too much attention to gender-stereotypical problems with shame. Those stereotypes are starting to look outdated. And it shouldn’t require two whole chapters to say that advice on how to have healthy interactions with people should also apply to relations at work, and to relations between parents and children.

The book was fairly easy to read, and parts of it are worth rereading.

Book review: The Measure of All Minds: Evaluating Natural and Artificial Intelligence, by José Hernández-Orallo.

Much of this book consists of surveys of the psychometric literature. But the best parts of the book involve original results that bring more rigor and generality to the field. Those parts approach the quality that I saw in Judea Pearl’s Causality and E.T. Jaynes’ Probability Theory, but Measure of All Minds achieves a smaller fraction of its author’s ambitions, and is sometimes poorly focused.

Hernández-Orallo has an impressive ambition: measure intelligence for any agent. The book mentions a wide variety of agents, such as normal humans, infants, deaf-blind humans, human teams, dogs, bacteria, Q-learning algorithms, etc.

The book is aimed at a narrow and fairly unusual target audience. Much of it reads like it’s directed at psychology researchers, but the more original parts of the book require thinking like a mathematician.

The survey part seems pretty comprehensive, but I wasn’t satisfied with his ability to distinguish the valuable parts (although he did a good job of ignoring the politicized rants that plague many discussions of this subject).

For nearly the first 200 pages of the book, I was mostly wondering whether the book would address anything important enough for me to want to read to the end. Then I reached an impressive part: a description of an objective IQ-like measure. Hernández-Orallo offers a test (called the C-test) which:

  • measures a well-defined concept: sequential inductive inference,
  • defines the correct responses using an objective rule (based on Kolmogorov complexity),
  • with essentially no arbitrary cultural bias (the main feature that looks like an arbitrary cultural bias is the choice of alphabet and its order)[1],
  • and gives results in objective units (based on Levin’s Kt).

Yet just when I got my hopes up for a major improvement in real-world IQ testing, he points out that what the C-test measures is too narrow to be called intelligence: there’s a 960-line Perl program that exhibits human-level performance on this kind of test, without resembling a breakthrough in AI.
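
To get a feel for the C-test’s approach without implementing Kt, here’s a crude stand-in that I wrote: score each candidate continuation of a sequence by how well the whole string compresses, using zlib’s output length as a rough proxy for Kolmogorov complexity.

```python
import zlib

def best_continuation(sequence, alphabet):
    # Prefer the next symbol that leaves the whole sequence most compressible.
    # zlib's compressed length is only a loose stand-in for the Kolmogorov /
    # Levin Kt measures that the real C-test is defined in terms of.
    def cost(candidate):
        return len(zlib.compress((sequence + candidate).encode()))
    return min(alphabet, key=cost)

# For a simple alternating pattern, compression favors continuing the pattern.
answer = best_continuation("ab" * 20, "abc")
```

This captures only the flavor of the idea; the actual C-test controls item difficulty via Kt and avoids the arbitrariness of an off-the-shelf compressor.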

Continue Reading