psychology


Book review: Outlive: The Science and Art of Longevity, by Peter Attia.

This year’s book on aging focuses mostly on healthspan rather than lifespan, in an effort to combat the tendency of people in the developed world to have a wasted decade around age 80.

Attia calls his approach Medicine 3.0. He wants people to pay a lot more attention to their lifestyle starting a couple of decades before problems such as diabetes and Alzheimer’s create obvious impacts.

He complains about Medicine 2.0 (i.e. mainstream medicine) treating disease as a binary phenomenon. There’s lots of evidence suggesting that age-related diseases develop slowly over periods of more than a decade.

He’s not aiming to cure aging. He aims to enjoy life until age 100 or 120.


I encourage you to interact with GPT as you would interact with a friend, or as you would want your employer to treat you.

Treating other minds with respect is typically not costly. It can easily improve your state of mind relative to treating them as an adversary.

The tone you use in interacting with GPT will affect your conversations with it. I don’t want to give you much advice about how your conversations ought to go, but I expect that, on average, disrespect won’t generate conversations that help you more.

I don’t know how to evaluate the benefits of caring about any feelings that AIs might have. As long as there’s approximately no cost to treating GPTs as having human-like feelings, the arguments in favor of caring about those feelings overwhelm the arguments against it.

Scott Alexander wrote a great post on how a psychiatrist’s personality dramatically influences what conversations they have with clients. GPT exhibits similar patterns (the Waluigi effect helped me understand this kind of context sensitivity).

Journalists sometimes have creepy conversations with GPT. They likely steer those conversations in directions that evoke creepy personalities in GPT.

Don’t give those journalists the attention they seek. They seek negative emotions. But don’t hate the journalists. Focus on the system that generates them. If you want to blame some group, blame the readers who get addicted to inflammatory stories.

P.S. I refer to GPT as “it”. I intend that to nudge people toward thinking of “it” as a pronoun which implies respect.

This post was mostly inspired by something unrelated to Robin Hanson’s tweet about othering the AIs, but maybe there was some subconscious connection there. I don’t see anything inherently wrong with dehumanizing other entities. When I dehumanize an entity, that is not sufficient to tell you whether I’m respecting it more than I respect humans, or less.

Spock: Really, Captain, my modesty…

Kirk: Does not bear close examination, Mister Spock. I suspect you’re becoming more and more human all the time.

Spock: Captain, I see no reason to stand here and be insulted.

Some possible AIs deserve to be thought of as better than human. Some deserve to be thought of as worse. Emphasizing AI risk is, in part, a request to create the former earlier than we create the latter.

That’s a somewhat narrow disagreement with Robin. I mostly agree with his psychoanalysis in Most AI Fear Is Future Fear.

Book review: Why Everyone (Else) Is a Hypocrite: Evolution and the Modular Mind, by Robert Kurzban.

Minds are Modular

Many people explain minds by positing that they’re composed of parts:

  • the id, ego, and super-ego
  • the left side and the right side of the brain
  • System 1 and System 2
  • the triune brain
  • Marvin Minsky’s Society of Mind

Minsky’s proposal is the only one of these that resembles Kurzban’s notion of modularity enough to earn Kurzban’s respect. The modules Kurzban talks about are much more numerous, and more specialized, than most people are willing to imagine.

Here’s Kurzban’s favorite Minsky quote:

The mind is a community of “agents.” Each has limited powers and can communicate only with certain others. The powers of mind emerge from their interactions for none of the Agents, by itself, has significant intelligence. […] Everyone knows what it feels like to be engaged in a conversation with oneself. In this book, we will develop the idea that these discussions really happen, and that the participants really “exist.” In our picture of the mind we will imagine many “sub-persons”, or “internal agents”, interacting with one another. Solving the simplest problem—seeing a picture—or remembering the experience of seeing it—might involve a dozen or more—perhaps very many more—of these agents playing different roles. Some of them bear useful knowledge, some of them bear strategies for dealing with other agents, some of them carry warnings or encouragements about how the work of others is proceeding. And some of them are concerned with discipline, prohibiting or “censoring” others from thinking forbidden thoughts.

Let’s take the US government as a metaphor. Instead of saying it’s composed of the legislative, executive, and judicial modules, Kurzban would describe it as being made up of modules such as a White House press secretary, Anthony Fauci, a Speaker of the House, more generals than I can name, even more park rangers, etc.

In What Is It Like to Be a Bat?, Nagel says “our own mental activity is the only unquestionable fact of our experience”. In contrast, Kurzban denies that we know more than a tiny fraction of our mental activity. We don’t ask “what is it like to be an edge detector?”, because there was no evolutionary pressure to enable us to answer that question. It could be that most human experience is as mysterious to our conscious minds as bat experiences. Most of our introspection involves examining a mental model that we construct for propaganda purposes.

Is Self-Deception Mysterious?

There’s been a good deal of confusion about self-deception and self-control. Kurzban attributes the confusion to attempts at modeling the mind as a unitary agent. If there’s a single homunculus in charge of all of the mind’s decisions, then it’s genuinely hard to explain phenomena that look like conflicts between agents.

With a sufficiently modular model of minds, the confusion mostly vanishes.

A good deal of what gets called self-deception is better described as being strategically wrong.

For example, when President Trump had COVID, the White House press secretary had a strong incentive not to be aware of any evidence that Trump’s health was worse than expected, in order to reassure voters without being clearly dishonest. Whereas the White House doctor had some reason to err a bit on the side of overestimating Trump’s risk of dying. So it shouldn’t surprise us if they had rather different beliefs. I don’t describe that situation as “the US government is deceiving itself”, but I’d be confused as to whether to describe it that way if I could only imagine the government as a unitary agent.

Minds work much the same way. E.g. the cancer patient who buys space on a cruise that his doctor says he won’t live to enjoy (presumably to persuade allies that he’ll be around long enough to be worth allying with), while still following the doctor’s advice about how to treat the cancer. A modular model of the mind isn’t surprised that his mind holds inconsistent beliefs about how serious the cancer is. The patient’s press-secretary-like modules are pursuing a strategy of getting friends to make long-term plans to support the patient. They want accurate enough knowledge of the patient’s health to sound credible. Why would they want to be more accurate than that?

Self-Control

Kurzban sees less value in the concept of a self than do most Buddhists.

almost any time you come across a theory with the word “self” in it, you should check your wallet.

Self-control has problems that are similar to the problems with the concept of self-deception. It’s best thought of as conflicts between modules.

We should expect context-sensitive influences on which modules exert the most influence on decisions. E.g. we should expect a calorie-acquiring module to have more influence when a marshmallow is in view than if a path to curing cancer is in view. That makes it hard for a mind to have a stable preference about how to value eating a marshmallow or curing cancer.

If I think I see a path to curing cancer that is certain to succeed, my cancer-research modules ought to get more attention than my calorie-acquiring modules. I’m pretty sure that’s what would happen if I had good evidence that I’m about to cure cancer. But a more likely situation is that my press-secretary-like modules say I’ll succeed, and some less eloquent modules say I’ll fail. That will look like a self-control problem to those who want the press secretary to be in charge, and look more like politics to those who take Kurzban’s view.

I could identify some of my brain’s modules as part of my “self”, and say that self-control refers to those modules overcoming the influence of the non-self parts of my brain. But the more I think like Kurzban, the more arbitrary it seems to treat some modules as more privileged than others.

The Rest

Along the way, Kurzban makes fun of the literature on self-esteem, and of models that say self-control is a function of resources.

The book consists mostly of easy-to-read polemics for ideas that ought to be obvious, but which our culture resists.

Warning: you should skip the chapter titled Morality and Contradictions. Kurzban co-authored a great paper called A Solution to the Mysteries of Morality. But in this book, his controversial examples of hypocrisy will distract most readers’ attention from the rather unremarkable wisdom that the examples illustrate.

Book review: Noise: A Flaw in Human Judgment, by Daniel Kahneman, Olivier Sibony, and Cass R. Sunstein.

Doctors are more willing to order a test for patients they see in the morning than for those they see late in the day.

Asylum applicants’ chances of prevailing may be as low as 5% or as high as 88% purely due to which judge hears their case.

Clouds Make Nerds Look Good, in the sense that university admissions officers give higher weight to academic attributes on cloudy days.

These are examples of what the authors describe as an important and neglected problem.

A more precise description of the book’s topic is variations in judgment, with judgment defined as “measurement in which the instrument is a human mind”.


Book review: The Geography of Thought: How Asians and Westerners Think Differently… and Why, by Richard E. Nisbett.

It is often said that travel is a good way to improve one’s understanding of other cultures.

The Geography of Thought discredits that saying, by being full of examples of cultural differences that 99.9% of travelers will overlook.

Here are a few of the insights I got from the book, but I’m pretty sure I wouldn’t have gotten from visiting Asia frequently:


Book review: The WEIRDest People in the World, by Joseph Henrich.

Wow!

Henrich previously wrote one of the best books of the last decade. Normally, I expect such an author’s future books to, at best, exhibit regression toward the mean. But Henrich’s grand overview of humanity’s first few million years was merely a modest portion of the ideas that he originally tried to fit into this magnum opus. Henrich couldn’t quite explain in one volume how humanity got all the way to industrial empires, so he split the explanation into two books.

The cartoon version of the industrial revolution: Protestant culture made the West more autistic.

However, explaining the most important event in history makes up only about 25% of this book’s focus and value.


Book review: Surfing Uncertainty: Prediction, Action, and the Embodied Mind, by Andy Clark.

Surfing Uncertainty describes minds as hierarchies of prediction engines. Most behavior involves interactions between a stream of information that uses low-level sensory data to adjust higher level predictive models of the world, and another stream of data coming from high-level models that guides low-level sensory processes to better guess the most likely interpretations of ambiguous sensory evidence.

Clark calls this a predictive processing (PP) model; others refer to it as predictive coding.
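The core loop of the PP model is simple enough to sketch in a few lines. Here is a toy single-level version (my own illustration, not from the book, with made-up numbers): a higher-level “belief” predicts each incoming sensory sample, and the bottom-up prediction error is all that flows upward, nudging the belief toward the data.

```python
import random

def predictive_coding_estimate(observations, lr=0.1):
    """Toy predictive coding loop: the model predicts each sensory sample,
    and only the prediction error is used to update the model. The belief
    converges toward the true mean of the signal."""
    belief = 0.0  # the higher level's current prediction
    for obs in observations:
        error = obs - belief   # bottom-up prediction error signal
        belief += lr * error   # top-down model update from the error
    return belief

random.seed(0)
# Noisy sensor readings around a true value of 5.0 (illustrative).
data = [5.0 + random.gauss(0, 0.5) for _ in range(500)]
estimate = predictive_coding_estimate(data)
print(round(estimate, 1))
```

The full PP story stacks many such levels, with each level’s belief serving as the “sensory data” for the level above, but the error-driven update is the same at every layer.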

The book is full of good ideas, presented in a style that sapped my curiosity.

Jeff Hawkins has a more eloquent book about PP (On Intelligence), which focuses on how PP might be used to create artificial intelligence. The underwhelming progress of the company Hawkins started to capitalize on these ideas suggests it wasn’t the breakthrough that AI researchers were groping for. In contrast, Clark focuses on how PP helps us understand existing minds.

The PP model clearly has some value. The book was a bit more thorough than I wanted at demonstrating that. Since I didn’t find that particularly new or surprising, I’ll focus most of this review on a few loose threads that the book left dangling. So don’t treat this as a summary of the book (see Slate Star Codex if you want that, or if my review is too cryptic to understand), but rather as an exploration of the questions that the book provoked me to think about.


Book review: Into the Gray Zone: A Neuroscientist Explores the Border Between Life and Death, by Adrian Owen.

Too many books and talks have gratuitous displays of fMRIs and neuroscience. At last, here’s a book where fMRIs are used with fairly good reason, and neuroscience is explained only when that’s appropriate.

Owen provides evidence of near-normal brain activity in a modest fraction of people who had been classified as being in a persistent vegetative state. They are capable of answering yes or no to most questions, and show signs of understanding the plots of movies.

Owen believes this evidence is enough to say they’re conscious. I suspect he’s mostly right about that, and that they do experience much of the brain function that is typically associated with consciousness. Owen doesn’t have any special insights into what we mean by the word consciousness. He mostly just investigates how to distinguish between near-normal mental activity and seriously impaired mental activity.

So what were neurologists previously using to classify people as vegetative? As far as I can tell, they were diagnosing based on a lack of motor responses, even though they were aware of an alternate diagnosis, total locked-in syndrome, with identical symptoms. Locked-in syndrome and persistent vegetative state were both coined (in part) by the same person (but I’m unclear who coined the term total locked-in syndrome).

My guess is that the diagnoses have been influenced by a need for certainty (whose need, the family members’ or the doctors’, is not obvious).

The book has a bunch of mostly unremarkable comments about ethics. But I was impressed by Owen’s observation that people misjudge whether they’d want to die if they end up in a locked-in state. So how likely is it they’ll mispredict what they’d want in other similar conditions? I should have deduced this from the book Stumbling on Happiness, but I failed to think about it.

I’m a bit disturbed by Owen’s claim that late-stage Alzheimer’s patients have no sense of self. He doesn’t cite evidence for this conclusion, and his research should hint to him that it would be quite hard to get good evidence on this subject.

Most books written by scientists who made interesting discoveries attribute the author’s success to their competence. This book provides clear evidence for the accidental nature of at least some science. Owen could easily have gotten no signs of consciousness from the first few patients he scanned. Given the effort needed for the scans, I can imagine that this would have resulted in a mistaken consensus of experts that vegetative states were being diagnosed correctly.

Book review: Darwin’s Unfinished Symphony: How Culture Made the Human Mind, by Kevin N. Laland.

This book is a mostly good complement to Henrich’s The Secret of our Success. The two books provide different, but strongly overlapping, perspectives on how cultural transmission of information played a key role in the evolution of human intelligence.

The first half of the book describes the importance of copying behavior in many animals.

I was a bit surprised that animals as simple as fruit flies are able to copy some behaviors of other fruit flies. Laland provides good evidence that a wide variety of species have evolved some ability to copy behavior, and that ability is strongly connected to the benefits of acquiring knowledge from others and the costs of alternative ways of acquiring that knowledge.

Yet I was also surprised that the value of copying is strongly limited by the low reliability with which behavior is copied, except with humans. Laland makes plausible claims that the need for high-fidelity copying of behavior was an important driving force behind the evolution of bigger and more sophisticated brains.

Laland claims that humans have a unique ability to teach, and that teaching is an important adaptation. He means teaching in a much broader sense than we see in schooling – he includes basic stuff that could have preceded language, such as a parent directing a child’s attention to things that the child ought to learn. This seems like a good extension to Henrich’s ideas.

The most interesting chapter theorizes about the origin of human language. Laland’s theory that language evolved for teaching provides maybe a bit stronger selection pressure than other theories, but he doesn’t provide much reason to reject competing theories.

Laland presents seven criteria for a good explanation of the evolution of language. But these criteria look somewhat biased toward his theory.

Laland’s first two criteria are that language should have been initially honest and cooperative. He implies that it must have been more honest and cooperative than modern language use is, but he isn’t as clear about that as I would like. Those two criteria seem designed as arguments against the theory that language evolved to impress potential mates. The mate-selection theory involves plenty of competition, and presumably a fair amount of deception. But better communicators do convey important evidence about the quality of their genes, even if they’re engaging in some deception. That seems sufficient to drive the evolution of language via mate-selection pressures.

Laland’s theory seems to provide a somewhat better explanation of when language evolved than most other theories do, so I’m inclined to treat it as one of the top theories. But I don’t expect any consensus on this topic anytime soon.

The book’s final four chapters seemed much less interesting. I recommend skipping them.

Henrich’s book emphasized evidence that humans are pretty similar to other apes. Laland emphasizes ways in which humans are unique (language and teaching ability). I didn’t notice any cases where they directly contradicted each other, but it’s a bit disturbing that they left quite different impressions while saying mostly appropriate things.

Henrich claimed that increasing climate variability created increased rewards for the fast adaptation that culture enabled. Laland disagrees, saying that cultural change itself is a more plausible explanation for the kind of environmental change that incentivized faster adaptation. My intuition says that Laland’s conclusion is correct, but he seems a bit overconfident about it.

Overall, Laland’s book is less comprehensive and less impressive than Henrich’s book, but is still good enough to be in my top ten list of books on the evolution of intelligence.

Update on 2017-08-18: I just read another theory about the evolution of language which directly contradicts Laland’s claim that early language needed to be honest and cooperative. Wild Voices: Mimicry, Reversal, Metaphor, and the Emergence of Language claims that an important role of initial human vocal flexibility was to deceive other species.