Book Reviews

Book review: The Charisma Myth: How Anyone Can Master the Art and Science of Personal Magnetism, by Olivia Fox Cabane.

This book provides clear and well-organized instructions on how to become more charismatic.

It does not make the process sound easy. My experience with some of her suggestions (gratitude journalling and meditation) seems typical of her ideas – they took a good deal of attention, and probably caused gradual improvements in my life, but the effects were subtle enough to leave lots of uncertainty about how effective they were.

Many parts of the book talk as if more charisma is clearly better, but occasionally she talks about downsides such as being convincing even when you’re wrong. The chapter that distinguishes four types of charisma (focus, kindness, visionary, and authority) helped me clarify what I want and don’t want from charisma. Yet I still feel a good deal of conflict about how much charisma I want, due to doubts about whether I can separate the good from the bad. I’ve had some bad experiences in which feeling and sounding confident about specific stocks caused me to lose money by holding those stocks too long. I don’t think I can increase my visionary or authority charisma without repeating that kind of mistake unless I can somehow avoid talking about investments when I turn on those types of charisma.

I’ve been trying the exercises that are designed to boost self-compassion, but my doubts about the effort required for good charisma and about the desirability of being charismatic have limited the energy I’m willing to put into them.

Book review: A Universe from Nothing: Why There Is Something Rather than Nothing by Lawrence M. Krauss.

This book has a few worthwhile sections, such as a good explanation of how virtual particles imply that matter came to exist in a previously empty region of space. But the book has much less substance than Tegmark’s Our Mathematical Universe (in particular, Tegmark comes much closer to answering why there is something rather than nothing).

One puzzling claim he makes is that scientists of the far future (when the visible universe is 100 times the current size) are likely to falsely conclude there was no big bang. Whatever problems they might have with new experiments, why wouldn’t they use results of experiments done when the universe was younger?

Book review: Fragile by Design: The Political Origins of Banking Crises and Scarce Credit, by Charles W. Calomiris and Stephen H. Haber.

This book starts out with some fairly dull theory, then switches to specific histories of banking in several countries, with moderately interesting claims about how differences in which interest groups acquired power influenced the stability of banks.

For much of U.S. history, banks were mostly constrained to a single location, because farmers feared that banks with many branches would shift their lending elsewhere when local crop failures made local farms risky to lend to. Yet a comparison with Canada, where seemingly small political differences led to banks with many branches, makes it clear that those restrictions left U.S. banks more fragile, and that the resulting lack of competition left U.S. consumers with less desirable interest rates.

By the 1980s, improved communications eroded farmers’ ability to tie banks to one locale, so political opposition to multi-branch banks vanished, resulting in a big merger spree. The biggest problem with this merger spree was that the regulators who approved the mergers asked for more loans to risky low-income borrowers. As a result, banks (plus Fannie Mae and Freddie Mac) felt compelled to lower their standards for all borrowers (the book doesn’t explain what problems they would have faced if they had used different standards for loans the regulators pressured them to make).

These stories provide a clear and plausible explanation of why the U.S. has a pattern of banking crises that Canada and a few other well-run countries have almost entirely avoided over the past two centuries. But they imply that U.S. banking crises should have been more of an outlier among mature democracies than they actually were.

The authors are overly dismissive of problems that don’t fit their narrative. Commenting on the failure of Citibank, Lehman, AIG, etc. to sell more equity in early 2008, they say “Why go to the markets to raise new capital when you are confident that the government is going to bail you out?” It seems likely bankers would have gotten better terms from the market as long as they didn’t wait until the worst part of the crisis. I’m pretty sure they gave little thought to bailouts, and relied instead on overly complacent expectations for housing prices.

The book has a number of asides that seem as important as their main points, such as claims that Britain’s greater ability to borrow money led to its military power, and its increased need for military manpower drove its expansion of the franchise.

Book review: Poor Economics: A Radical Rethinking of the Way to Fight Global Poverty by Abhijit V. Banerjee and Esther Duflo.

This book gives an interesting perspective on the obstacles to fixing poverty in the developing world. They criticize both Jeffrey Sachs and William Easterly for overstating how easy/hard it is to provide useful aid to the poor by attempting simple and sweeping generalizations, whereas Banerjee and Duflo want us to look carefully at evidence from mostly small-scale interventions which sometimes produce decent results.

They describe a few randomized controlled trials, but apparently there aren’t enough of those to occupy a full book, so they spend more time on less rigorous evidence of counter-intuitive ways that aid programs can fail.

They portray the poor as mostly rational and rarely making choices that are clearly stupid given the information that is readily available to them. But their cognitive abilities are sometimes suboptimal due to mediocre nutrition, disease, and/or stress from financial risks. Relieving any of those problems can sometimes enable them to become more productive workers.

The book advocates mild paternalism in the form of nudging weakly held beliefs about health-related questions where people can’t easily observe the results (e.g. vaccination, iodine supplementation), but probably not birth control (the poor generally choose how many children to have, although there are complex issues influencing those choices). They point out that the main reason people in developed countries make better health choices is due to better defaults, not more intelligence. I wish they’d gone a bit farther and speculated about how many of our current health practices will look pointlessly harmful to more advanced societies.

They give a lukewarm endorsement of microcredit, showing that it needs to be inflexible to avoid high default rates, and only provides small benefits overall. Most of the poor would be better off with a salaried job than borrowing money to run a shaky business.

The book fits in well with GiveWell’s approach.

Book review: How China Became Capitalist, by Ronald Coase and Ning Wang.

This is my favorite book about China so far, due to a combination of insights and readability.

They emphasize that growth happened rather differently from how China’s leaders planned, and that their encouragement of trial and error was more important than their ability to recognize good plans.

The most surprising features of China’s government after 1978 were a lack of powerful special interests and freedom from ideological rigidity. Mancur Olson’s book The Rise and Decline of Nations suggests how a revolution such as Mao’s might free a nation from special interest power for a good while.

I’m still somewhat puzzled by the rapid and nearly complete switch from a country blinded by ideology to a country pragmatically searching for a good economy. Coase and Wang attribute it to awareness of the harm Maoism caused, but I can easily imagine that such awareness could mainly cause a switch to a new ideology.

The book ends with a cautiously optimistic outlook on China’s future, with some doubts about freedom of expression, and some hope that China will contribute to the diversity of capitalist cultures.

Book review: Bonds That Make Us Free: Healing Our Relationships, Coming to Ourselves, by C. Terry Warner.

This book consists mostly of well-written anecdotes demonstrating how to recognize common kinds of self-deception and motivated cognition that cause friction in interpersonal interactions. He focuses on ordinary motives that lead to blaming others for disputes in order to avoid blaming ourselves.

He shows that a willingness to accept responsibility for negative feelings about personal relationships usually makes everyone happier, by switching from zero-sum or negative-sum competitions to cooperative relationships.

He describes many examples where my gut reaction is that person B has done something that justifies person A’s decision to get upset, and then explains why person A should act nicer. He does this without the “don’t be judgmental” attitude that often accompanies advice to be more understanding.

Most of the book focuses on the desire to blame others when something goes wrong, but he also notes that blaming nature (or oneself) can produce similar problems and have similar solutions. That insight describes me better than the typical anecdotes do, and has been a bit of help at enabling me to stop wasting effort fighting reality.

I expect that there are a moderate number of abusive relationships where the book’s advice would be counterproductive, but that most people (even many who have apparently abusive spouses or bosses) will be better off following the book’s advice.

Book review: Value-Focused Thinking: A Path to Creative Decisionmaking, by Ralph L. Keeney.

This book argues for focusing on values (goals/objectives) when making decisions, as opposed to the more usual alternative-focused decisionmaking.

The basic idea seems good. Alternative-focused thinking draws our attention away from our values and discourages us from creatively generating new possibilities to choose from. It tends to have us frame decisions as responses to problems, which leads us to associate decisions with undesirable emotions, when we could view decisions as opportunities.

A good deal of the book describes examples of good decisionmaking, but those rarely provide insight into how to avoid common mistakes or to do unusually well.

Occasionally the book switches to some dull math, without clear explanations of what benefit the rigor provides.

The book also includes good descriptions of how to measure the things that matter, but How to Measure Anything by Douglas Hubbard does that much better.

Book review: How the West Won: The Neglected Story of the Triumph of Modernity, by Rodney Stark.

This book is a mostly entertaining defense of Christian and libertarian cultures’ contribution to Western civilization’s dominance.

He wants us to believe that the industrial revolution resulted from mostly steady progress starting with Greek city-states, interrupted only by the Roman empire.

He defends the Catholic church’s record of helping scientific progress and denies that the Reformation was needed, although he suggests the Catholic church’s reaction to the Reformation created harmful anti-capitalist sentiments.

His ideas resemble those in Fukuyama’s The Origins of Political Order, yet there’s little overlap between the content of the two books.

The early parts of the book have too many descriptions of battles and other killings whose relevance is unclear.

I was annoyed at how much space he devoted to attacking political correctness toward the end of the book.

In spite of those problems, he presents many interesting ideas. Some are fairly minor, such as changes in privacy due to the Little Ice Age triggering the invention of chimneys. Others provide potentially important insights into differences between religions, e.g. “many influential Muslim scholars have held that efforts to formulate natural laws are blasphemy because they would seem to deny Allah’s freedom to act.”

Alas, I can only give the book a half-hearted endorsement because I suspect many of his claims are poorly supported. E.g. he thinks increased visibility of child labor in the 1800s caused child labor laws via shocked sensibilities. Two alternatives that seem much more plausible to me are that the increased visibility made the laws feasible to enforce, and the increased concentration of employers into a separate class made them easier scapegoats.

Book review: The Depths: The Evolutionary Origins of the Depression Epidemic, by Jonathan Rottenberg.

This book presents a clear explanation of why the basic outlines of depression look like an evolutionary adaptation to problems such as famine or humiliation. But he ignores many features that still puzzle me. Evolution seems unlikely to select for suicide. Why does loss of a child cause depression rather than some higher-energy negative emotion? What influences the breadth of learned helplessness?

He claims depression has been increasing over the last generation or so, but the evidence he presents can easily be explained by increased willingness to admit to and diagnose depression. He has at least one idea why it’s increasing (increased pressure to be happy), but I can come up with ideas that have the opposite effect (e.g. increased ease of finding a group where one can fit in).

Much of the book has little to do with the origins of depression, and is dominated by descriptions of and anecdotes about how depression works.

He spends a fair amount of time talking about the frequently overlooked late stages of depression recovery, where antidepressants aren’t much use and people can easily fall back into depression.

The book includes a bit of self-help advice to use positive psychology, and to not rely on drugs for much more than an initial nudge in the right direction.

Book review: Superintelligence: Paths, Dangers, Strategies, by Nick Bostrom.

This book is substantially more thoughtful than previous books on AGI risk, and substantially better organized than the previous thoughtful writings on the subject.

Bostrom’s discussion of AGI takeoff speed is disappointingly philosophical. Many sources (most recently CFAR) have told me to rely on the outside view to forecast how long something will take. We’ve got lots of weak evidence about the nature of intelligence, how it evolved, and about how various kinds of software improve, providing data for an outside view. Bostrom assigns a vague but implausibly high probability to AI going from human-equivalent to more powerful than humanity as a whole in days, with little thought of this kind of empirical check.

I’ll discuss this more in a separate post which is more about the general AI foom debate than about this book.

Bostrom’s discussion of how takeoff speed influences the chance of a winner-take-all scenario makes it clear that disagreements over takeoff speed are pretty much the only cause of my disagreement with him over the likelihood of a winner-take-all outcome. Other writers aren’t nearly as clear about this. I suspect those who assign substantial probability to a winner-take-all outcome even if takeoff is slow will wish he’d analyzed this in more detail.

I’m less optimistic than Bostrom about monitoring AGI progress. He says “it would not be too difficult to identify most capable individuals with a long-standing interest in [AGI] research”. AGI might require enough expertise for that to be true, but if AGI surprises me by only needing modest new insights, I’m concerned by the precedent of Tim Berners-Lee creating a global hypertext system while barely being noticed by the “leading” researchers in that field. Also, the large number of people who mistakenly think they’ve been making progress on AGI may obscure the competent ones.

He seems confused about the long-term trends in AI researcher beliefs about the risks: “The pioneers of artificial intelligence … mostly did not contemplate the possibility of greater-than-human AI” seems implausible; it’s much more likely they expected it but were either overconfident about it producing good results or fatalistic about preventing bad results (“If we’re lucky, they might decide to keep us as pets” – Marvin Minsky, LIFE Nov 20, 1970).

The best parts of the book clarify many issues related to ensuring that an AGI does what we want.

He catalogs more approaches to controlling AGI than I had previously considered, including tripwires, oracles, and genies, and clearly explains many limits to what they can accomplish.

He briefly mentions the risk that the operator of an oracle AI would misuse it for her personal advantage. Why should we have less concern about the designers of other types of AGI giving them goals that favor the designers?

If an oracle AI can’t produce a result that humans can analyze well enough to decide (without trusting the AI) that it’s safe, why would we expect other approaches (e.g. humans writing the equivalent seed AI directly) to be more feasible?

He covers a wide range of ways we can imagine handling AI goals, including strange ideas such as telling an AGI to use the motivations of superintelligences created by other civilizations.

He does a very good job of discussing what values we should and shouldn’t install in an AGI: the best decision theory plus a “do what I mean” dynamic, but not a complete morality.

I’m somewhat concerned by his use of “final goal” without careful explanation. People who anthropomorphise goals are likely to confuse at least the first few references to “final goal” as if it worked like a human goal, i.e. something that the AI might want to modify if it conflicted with other goals.

It’s not clear how much these chapters depend on a winner-take-all scenario. I get the impression that Bostrom doubts we can do much about the risks associated with scenarios where multiple AGIs become superhuman. This seems strange to me. I want people who write about AGI risks to devote more attention to whether we can influence whether multiple AGIs become a singleton, and how they treat lesser intelligences. Designing AGI to reflect values we want seems almost as desirable in scenarios with multiple AGIs as in the winner-take-all scenario (I’m unsure what Bostrom thinks about that). In a world with many AGIs with unfriendly values, what can humans do to bargain for a habitable niche?

He has a chapter on worlds dominated by whole brain emulations (WBE), probably inspired by Robin Hanson’s writings but with more focus on evaluating risks than on predicting the most probable outcomes. Since it looks like we should still expect an em-dominated world to be replaced at some point by AGI(s) that are designed more cleanly and able to self-improve faster, this isn’t really an alternative to the scenarios discussed in the rest of the book.

He treats starting with “familiar and human-like motivations” (in an augmentation route) as an advantage. Judging from our experience with humans who take over large countries, a human-derived intelligence that conquered the world wouldn’t be safe or friendly, although it would be closer to my goals than a smiley-face maximizer. The main advantage I see in a human-derived superintelligence would be a lower risk of it self-improving fast enough for the frontrunner advantage to be large. But that also means it’s more likely to be eclipsed by a design more amenable to self-improvement.

I’m suspicious of the implication (figure 13) that the risks of WBE will be comparable to AGI risks.

  • Is that mainly due to “neuromorphic AI” risks? Bostrom’s description of neuromorphic AI is vague, but my intuition is that human intelligence isn’t flexible enough to easily get the intelligence part of WBE without getting something moderately close to human behavior.
  • Is the risk of uploaded chimp(s) important? I have some concerns there, but Bostrom doesn’t mention it.
  • How about the risks of competitive pressures driving out human traits (discussed more fully/verbosely at Slate Star Codex)? If WBE and AGI happen close enough together in time that we can plausibly influence which comes first, I don’t expect the time between the two to be long enough for that competition to have large effects.
  • The risk that many humans won’t have enough resources to survive? That’s scary, but wouldn’t cause the astronomical waste of extinction.

Also, I don’t accept his assertion that AGI before WBE eliminates the risks of WBE. Some scenarios with multiple independently designed AGIs forming a weakly coordinated singleton (which I consider more likely than Bostrom does) appear to leave the last two risks in that list unresolved.

This book represents progress toward clear thinking about AGI risks, but much more work still needs to be done.