OpenAI has told us in some detail what they’ve done to make GPT-4 safe.
This post will complain about some misguided aspects of OpenAI’s goals.
I recently noticed similarities between how I decide what stock market evidence to look at and how the legal system decides what lawyers are allowed to tell juries.
This post will elaborate on Eliezer’s Scientific Evidence, Legal Evidence, Rational Evidence. In particular, I’ll try to generalize about why there’s a large class of information that I actively avoid treating as Bayesian evidence.
Book review: Why Everyone (Else) Is a Hypocrite: Evolution and the Modular Mind, by Robert Kurzban.
Many people explain minds by positing that they’re composed of parts. Of the various proposals along those lines, Minsky’s is the only one that resembles Kurzban’s notion of modularity enough to earn Kurzban’s respect. The modules Kurzban talks about are much more numerous, and more specialized, than most people are willing to imagine.
Here’s Kurzban’s favorite Minsky quote:
The mind is a community of “agents.” Each has limited powers and can communicate only with certain others. The powers of mind emerge from their interactions for none of the Agents, by itself, has significant intelligence. […] Everyone knows what it feels like to be engaged in a conversation with oneself. In this book, we will develop the idea that these discussions really happen, and that the participants really “exist.” In our picture of the mind we will imagine many “sub-persons”, or “internal agents”, interacting with one another. Solving the simplest problem—seeing a picture—or remembering the experience of seeing it—might involve a dozen or more—perhaps very many more—of these agents playing different roles. Some of them bear useful knowledge, some of them bear strategies for dealing with other agents, some of them carry warnings or encouragements about how the work of others is proceeding. And some of them are concerned with discipline, prohibiting or “censoring” others from thinking forbidden thoughts.
Let’s take the US government as a metaphor. Instead of saying it’s composed of the legislative, executive, and judicial modules, Kurzban would describe it as being made up of modules such as a White House press secretary, Anthony Fauci, a Speaker of the House, more generals than I can name, even more park rangers, etc.
In What Is It Like to Be a Bat?, Nagel says “our own mental activity is the only unquestionable fact of our experience”. In contrast, Kurzban denies that we know more than a tiny fraction of our mental activity. We don’t ask “what is it like to be an edge detector?”, because there was no evolutionary pressure to enable us to answer that question. It could be that most human experience is as mysterious to our conscious minds as bat experiences are. Most of our introspection involves examining a mental model that we construct for propaganda purposes.
There’s been a good deal of confusion about self-deception and self-control. Kurzban attributes the confusion to attempts at modeling the mind as a unitary agent. If there’s a single homunculus in charge of all of the mind’s decisions, then it’s genuinely hard to explain phenomena that look like conflicts between agents.
With a sufficiently modular model of minds, the confusion mostly vanishes.
A good deal of what gets called self-deception is better described as being strategically wrong.
For example, when President Trump had COVID, the White House press secretary had a strong incentive not to be aware of any evidence that Trump’s health was worse than expected, in order to reassure voters without being clearly dishonest. Whereas the White House doctor had some reason to err a bit on the side of overestimating Trump’s risk of dying. So it shouldn’t surprise us if they had rather different beliefs. I don’t describe that situation as “the US government is deceiving itself”, but I’d be confused as to whether to describe it that way if I could only imagine the government as a unitary agent.
Minds work much the same way. E.g. the cancer patient who buys space on a cruise that his doctor says he won’t live to enjoy (presumably to persuade allies that he’ll be around long enough to be worth allying with), while still following the doctor’s advice about how to treat the cancer. A modular model of the mind isn’t surprised that his mind holds inconsistent beliefs about how serious the cancer is. The patient’s press-secretary-like modules are pursuing a strategy of getting friends to make long-term plans to support the patient. They want accurate enough knowledge of the patient’s health to sound credible. Why would they want to be more accurate than that?
Kurzban sees less value in the concept of a self than do most Buddhists.
almost any time you come across a theory with the word “self” in it, you should check your wallet.
Self-control has problems that are similar to the problems with the concept of self-deception. It’s best thought of as conflicts between modules.
We should expect context-sensitive influences on which modules exert the most influence on decisions. E.g. we should expect a calorie-acquiring module to have more influence when a marshmallow is in view than if a path to curing cancer is in view. That makes it hard for a mind to have a stable preference about how to value eating a marshmallow or curing cancer.
If I think I see a path to curing cancer that is certain to succeed, my cancer-research modules ought to get more attention than my calorie-acquiring modules. I’m pretty sure that’s what would happen if I had good evidence that I’m about to cure cancer. But a more likely situation is that my press-secretary-like modules say I’ll succeed, and some less eloquent modules say I’ll fail. That will look like a self-control problem to those who want the press secretary to be in charge, and look more like politics to those who take Kurzban’s view.
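To make that context-sensitivity concrete, here’s a minimal toy sketch in Python. This is my own illustration, not anything Kurzban proposes; every module name, weight, and salience function in it is a made-up assumption:

```python
# A toy model (my own illustration, not Kurzban's formalism) of how
# context-dependent weighting of modules can produce unstable preferences.
# All module names and numbers here are hypothetical.

def decide(modules, context):
    """Each module scores each option; a module's influence (weight)
    depends on the current context, so the same modules can rank
    options differently in different contexts."""
    options = ["eat_marshmallow", "work_on_cancer_cure"]
    scores = {option: 0.0 for option in options}
    for module in modules:
        weight = module["salience"](context)  # context-sensitive influence
        for option in options:
            scores[option] += weight * module["value"](option)
    return max(scores, key=scores.get)

calorie_module = {
    # much more influential when food is actually in view
    "salience": lambda ctx: 5.0 if ctx["marshmallow_visible"] else 0.5,
    "value": lambda opt: 1.0 if opt == "eat_marshmallow" else 0.0,
}
research_module = {
    "salience": lambda ctx: 2.0,  # roughly constant influence
    "value": lambda opt: 1.0 if opt == "work_on_cancer_cure" else 0.0,
}

modules = [calorie_module, research_module]
print(decide(modules, {"marshmallow_visible": True}))   # eat_marshmallow
print(decide(modules, {"marshmallow_visible": False}))  # work_on_cancer_cure
```

The point of the sketch is that neither output reflects a stable, unitary preference; which option wins is a fact about the context as much as about the agent.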
I could identify some of my brain’s modules as part of my “self”, and say that self-control refers to those modules overcoming the influence of the non-self parts of my brain. But the more I think like Kurzban, the more arbitrary it seems to treat some modules as more privileged than others.
Along the way, Kurzban makes fun of the literature on self-esteem, and of models that say self-control is a function of resources.
The book consists mostly of easy-to-read polemics for ideas that ought to be obvious, but which our culture resists.
Warning: you should skip the chapter titled Morality and Contradictions. Kurzban co-authored a great paper called A Solution to the Mysteries of Morality. But in this book, his controversial examples of hypocrisy will distract attention of most readers from the rather unremarkable wisdom that the examples illustrate.
Book review: Principles: Life and Work, by Ray Dalio.
Most popular books get that way by having an engaging style. Yet this book’s style is mundane, almost forgettable.
Some books become bestsellers by being controversial. Others become bestsellers by manipulating readers’ emotions, e.g. by being fun to read, or by getting the reader to overestimate how profound the book is. Principles definitely doesn’t fit those patterns.
Some books become bestsellers because the author became famous for reasons other than his writings (e.g. Stephen Hawking, Donald Trump, and Bill Gates). Principles fits this pattern somewhat well: if an obscure person had published it, nothing about it would have triggered a pattern of readers enthusiastically urging their friends to read it. I suspect the average book in this category is rather pathetic, but I also expect there’s a very large variance in the quality of books in this category.
Principles contains an unusual amount of wisdom. But it’s unclear whether that’s enough to make it a good book, because it’s unclear whether it will convince readers to follow the advice. Much of the advice sounds like ideas that most of us agree with already. The wisdom comes more in selecting the most underutilized ideas, without being particularly novel. The main benefit is likely to be that people who were already on the verge of adopting the book’s advice will get one more nudge from an authority, providing the social reassurance they need.
Some of why I trust the book’s advice is that it overlaps a good deal with other sources from which I’ve gotten value, e.g. CFAR.
Key ideas include:
Book review: The Elephant in the Brain, by Kevin Simler and Robin Hanson.
This book is a well-written analysis of human self-deception.
Only small parts of this book will seem new to long-time readers of Overcoming Bias. It’s written more to bring those ideas to a wider audience.
Large parts of the book will seem obvious to cynics, but few cynics have attempted to explain the breadth of patterns that this book does. Most cynics focus on complaints about some group of people having worse motives than the rest of us. This book sends a message that’s much closer to “We have met the enemy, and he is us.”
The authors claim to be neutrally describing how the world works (“We aren’t trying to put our species down or rub people’s noses in their own shortcomings.”; “… we need this book to be a judgment-free zone”). It’s less judgmental than the average book that I read, but it’s hardly neutral. The authors are criticizing, in the sense that they’re rubbing our noses in evidence that humans are less virtuous than many people claim humans are. Darwin unavoidably put our species down in the sense of discrediting beliefs that we were made in God’s image. This book continues in a similar vein.
This suggests the authors haven’t quite resolved the conflict between their dreams of upholding the highest ideals of science (pursuit of pure knowledge for its own sake) and their desire to solve real-world problems.
The book needs to be (and mostly is) non-judgmental about our actual motives, in order to maximize our comfort with acknowledging those motives. The book is appropriately judgmental about people who pretend to have more noble motives than they actually have.
The authors do a moderately good job of admitting to their own elephants, but I get the sense that they’re still pretty hesitant about doing so.
Most people will underestimate the effects which the book describes.
Book review: Daring Greatly: How the Courage to Be Vulnerable Transforms the Way We Live, Love, Parent, and Lead, by Brene Brown.
I almost didn’t read this because I was unimpressed by the TEDx video version of it, but parts of the book were pretty good (mainly chapters 3 and 4).
The book helped clarify my understanding of shame: how it differs from guilt, how it often constrains us without accomplishing anything useful, and how to reduce it.
She emphasizes that we can reduce shame by writing down or talking about shameful thoughts. She doesn’t give a strong explanation of what would cause that effect, but she prompted me to generate one: parts of my subconscious mind initially want to hide the shameful thoughts, and that causes them to fight the parts of my mind that want to generate interesting ideas. The act of communicating those ideas to the outside world convinces those censor-like parts of my mind to worry less about the ideas (because it’s too late? or because the social response is evidence that the censor was mistakenly worried? I don’t know).
I was a bit confused by her use of the phrase “scarcity culture”. I was initially tempted to imagine she wanted us to take a Panglossian view in which we ignore the resource constraints that keep us from eliminating poverty. But the context suggests she’s thinking more along the lines of “a culture of envy”. Or maybe a combination of perfectionism plus status seeking? Her related phrase “never enough” makes sense if I interpret it as “never impressive enough”.
I find it hard to distinguish those “bad” attitudes from the attitudes that seem important for me to strive for self-improvement.
She attempts to explain that distinction in a section on perfectionism. She compares perfectionism to healthy striving by noting that perfectionism focuses on what other people will think of us, whereas healthy striving is self-focused. Yet I’m pretty sure I’ve managed to hurt myself with perfectionism while focusing mostly on worries about how I’ll judge myself.
I suspect that healthy striving requires more focus on the benefits of success, and less attention to fear of failure, than is typical of perfectionism. The book hints at this, but doesn’t say it clearly when talking about perfectionism. Maybe she describes perfectionism better in her book The Gifts of Imperfection. Should I read that?
Her claim “When we stop caring about what people think, we lose our capacity for connection” feels important, and it describes an area where I have trouble.
The book devotes too much attention to gender-stereotypical problems with shame. Those stereotypes are starting to look outdated. And it shouldn’t require two whole chapters to say that advice on how to have healthy interactions with people should also apply to relations at work, and to relations between parents and children.
The book was fairly easy to read, and parts of it are worth rereading.
[An unimportant book that I read for ARC; feel free to skip this.]
Book review: Be Yourself, Everyone Else is Already Taken: Transform Your Life with the Power of Authenticity, by Mike Robbins.
This book’s advice mostly feels half-right, and mostly directed at people who have somewhat different problems than I have.
The book’s exercises range from things I’ve already done enough of, to things I ought to practice more but which feel hard (such as the self-love exercise).
Book review: The Rationality Quotient: Toward a Test of Rational Thinking, by Keith E. Stanovich, Richard F. West and Maggie E. Toplak.
This book describes an important approach to measuring individual rationality: an RQ test that loosely resembles an IQ test. But it pays inadequate attention to the most important problems with tests of rationality.
My biggest concern about rationality testing is what happens when people anticipate the test and are motivated to maximize their scores (as is the case with IQ tests). Do they become genuinely more rational, or do they merely learn which answers the test rewards?
Alas, the book treats these issues as an afterthought. The authors’ test knowingly uses questions for which cheating would be straightforward, such as asking whether the test subject believes in science, and whether they prefer to get $85 now rather than $100 in three months. (If the test used real money, that would drastically reduce my concerns about cheating. I’m almost tempted to advocate doing that, but it would hinder widespread adoption of the test, even if using real money added enough value to pay for itself.)
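To see why that money question measures anything, here’s a quick back-of-the-envelope calculation (mine, not the authors’) of the annualized discount rate implied by preferring $85 now:

```python
# Back-of-the-envelope calculation (my own, not from the book) of the
# annualized discount rate implied by preferring $85 now to $100 in
# three months.

now, later, months = 85.0, 100.0, 3

quarterly_growth = later / now                      # 100/85 ≈ 1.176 per quarter
annual_rate = quarterly_growth ** (12 / months) - 1  # compound to a full year

print(f"Implied annual discount rate: {annual_rate:.0%}")  # ≈ 92%
```

Few people face borrowing or investment opportunities anywhere near a 92% annual rate, which is why the impatient answer looks irrational. But a test-taker who knows that choosing the $100 scores as “rational” can pick it costlessly, which is the cheating problem in a nutshell.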
Book review: Bonds That Make Us Free: Healing Our Relationships, Coming to Ourselves, by C. Terry Warner.
This book consists mostly of well-written anecdotes demonstrating how to recognize common kinds of self-deception and motivated cognition that cause friction in interpersonal interactions. He focuses on ordinary motives that lead to blaming others for disputes in order to avoid blaming ourselves.
He shows that a willingness to accept responsibility for negative feelings about personal relationships usually makes everyone happier, by switching from zero-sum or negative-sum competitions to cooperative relationships.
He describes many examples where my gut reaction is that person B has done something that justifies person A’s decision to get upset, and then explains why person A should act nicer anyway. He does this without the “don’t be judgmental” attitude that often accompanies advice to be more understanding.
Most of the book focuses on the desire to blame others when something goes wrong, but he also notes that blaming nature (or oneself) can produce similar problems and have similar solutions. That insight describes me better than the typical anecdotes do, and has been a bit of help at enabling me to stop wasting effort fighting reality.
I expect that there are a moderate number of abusive relationships where the book’s advice would be counterproductive, but that most people (even many who have apparently abusive spouses or bosses) will be better off following the book’s advice.
Book review: Leadership and Self-Deception: Getting out of the Box, by the Arbinger Institute.
In spite of being marketed as mainly for corporate executives, this book’s advice is important for most interactions between people. Executives have more to gain from it, but I suspect they’re somewhat less willing to believe it.
I had already learned a lot about self-deception before reading this, but this book clarifies how to recognize and correct common instances in which I’m tempted to deceive myself. More importantly, it provides a way to explain self-deception to a number of people. I had previously despaired of explaining my understanding of self-deception to people who hadn’t already sought out the ideas I’d found. Now I can point people to this book. But I still can’t summarize it in a way that would change many people’s minds.
It’s written mostly as a novel, which makes it very readable without sacrificing much substance.
Some of the book’s descriptions don’t sound completely right to me. They describe people as acting “inside the box” or “outside the box” with respect to another person (not the same as the standard meaning of “thinking outside the box”) as if people normally did one or the other, whereas I think I often act somewhere in between those two modes. Also, the term “self-betrayal”, which I’d describe as acting selfishly and rationalizing the act as selfless, should not be portrayed as if the selfishness automatically causes self-deception. If people felt a little freer to admit that they act selfishly, they’d be less tempted to deceive themselves about their motives.
The book seems a bit too rosy about the benefits of following its advice. For instance, the book leaves the reader to imagine that Semmelweis benefited from admitting that he had been killing patients. Other accounts of Semmelweis suggest that he suffered, and the doctors who remained in denial prospered. Maybe he would have done much better if he had understood this book and been able to adopt its style. But it’s important to remember that self-deception isn’t an accident. It happens because it has sometimes worked.