Book Reviews

The Paleo Cure

Book review: The Paleo Cure, by Chris Kresser.

I wish I had read this when I went paleo 7 years ago. It’s more balanced than the sources I used. Alas, it was published shortly after I finished a big spurt of learning on the subject.

It still has a modest number of ideas that seem new to me, and many ideas that I’d have liked to have known when the book was first published, but which I found through less organized sources.


Book review: The AI Does Not Hate You: Superintelligence, Rationality and the Race to Save the World, by Tom Chivers.

This book is a sympathetic portrayal of the rationalist movement by a quasi-outsider. It includes a well-organized explanation of why some people expect that AI will create large risks sometime this century, written in simple language that is suitable for a broad audience.

Caveat: I know many of the people who are described in the book. I’ve had some sort of connection with the rationalist movement since before it became distinct from transhumanism, and I’ve been mostly an insider since 2012. I read this book mainly because I was interested in how the rationalist movement looks to outsiders.

Chivers is a science writer. I normally avoid books by science writers, due to an impression that they mostly focus on telling interesting stories, without developing a deep understanding of the topics they write about.

Chivers’ understanding of the rationalist movement doesn’t quite qualify as deep, but he was surprisingly careful to read a lot about the subject, and to write only things he did understand.

Many times I reacted to something he wrote with “that’s close, but not quite right”. Usually when I reacted that way, Chivers did a good job of describing the rationalist message in question, and the main problem was either that rationalists haven’t figured out how to explain their ideas in a way that a broad audience can understand, or that rationalists are confused. So the complaints I make in the rest of this review are at most weakly directed in Chivers’ direction.

I saw two areas where Chivers overlooked something important.

Rationality

One involves CFAR.

Chivers wrote seven chapters on biases, and how rationalists view them, ending with “the most important bias”: knowing about biases can make you more biased (italics his).

I get the impression that Chivers is sweeping this problem under the rug (Do we fight that bias by being aware of it? Didn’t we just read that that doesn’t work?). That is roughly what happened with many people who learned rationalism solely via written descriptions.

Then much later, when describing how he handled his conflicting attitudes toward the risks from AI, he gives a really great description of maybe 3% of what CFAR teaches (internal double crux), much like a blind man giving a really clear description of the upper half of an elephant’s trunk. He prefaces this narrative with the apt warning: “I am aware that this all sounds a bit mystical and self-helpy. It’s not.”

Chivers doesn’t seem to connect this exercise with the goal of overcoming biases. Maybe he was too busy applying the technique on an important problem to notice the connection with his prior discussions of Bayes, biases, and sanity. It would be reasonable for him to argue that CFAR’s ideas have diverged enough to belong in a separate category, but he seems to put them in a different category by accident, without realizing that many of us consider CFAR to be an important continuation of rationalists’ interest in biases.

World conquest

Chivers comes very close to covering all of the layman-accessible claims that Yudkowsky and Bostrom make. My one complaint here is that he gives only vague hints about why one bad AI can’t be stopped by other AIs.

A key claim of many leading rationalists is that AI will have some winner-take-all dynamics that will lead to one AI having a decisive strategic advantage after it crosses some key threshold, such as human-level intelligence.

This is a controversial position that is somewhat connected to foom (fast takeoff), but which might be correct even without foom.

Utility functions

“If I stop caring about chess, that won’t help me win any chess games, now will it?” – That chapter title provides a good explanation of why a simple AI would continue caring about its most fundamental goals.

Is that also true of an AI with more complex, human-like goals? Chivers is partly successful at explaining how to apply the concept of a utility function to a human-like intelligence. Rationalists (or at least those who actively research AI safety) have a clear meaning here, at least as applied to agents that can be modeled mathematically. But when laymen try to apply that to humans, confusion abounds, due to the ease of conflating subgoals with ultimate goals.

Chivers tries to clarify, using the story of Odysseus and the Sirens, and claims that the Sirens would rewrite Odysseus’ utility function. I’m not sure how we can verify that the Sirens work that way, or whether they would merely persuade Odysseus to make false predictions about his expected utility. Chivers at least states clearly that the Sirens try to prevent Odysseus (by making him run aground) from doing what his pre-Siren utility function advises. Chivers’ point could be a bit clearer if he specified that in his (nonstandard?) version of the story, the Sirens make Odysseus want to run aground.
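That distinction can be made concrete with a toy expected-utility maximizer. This is a minimal sketch of my own, not anything from the book; all the names (`original_utility`, `corrupted_predict`, etc.) are hypothetical and chosen purely for illustration:

```python
# Two ways the Sirens could subvert Odysseus: rewrite his utility function,
# or leave it intact and corrupt his predictions about outcomes.

ACTIONS = ["sail_past", "steer_toward_sirens"]

# Odysseus' original preferences: reaching home is what he terminally values.
def original_utility(outcome):
    return {"sail_past": 10, "steer_toward_sirens": -100}[outcome]

def choose(utility, predict):
    # A simple expected-utility maximizer: score each action by the utility
    # of the outcome the agent predicts it will produce.
    return max(ACTIONS, key=lambda a: utility(predict(a)))

accurate_predict = lambda action: action

# Case 1: the Sirens rewrite the utility function itself, so the agent
# now genuinely wants to run aground.
rewritten_utility = lambda outcome: -original_utility(outcome)
case1 = choose(rewritten_utility, accurate_predict)

# Case 2: the utility function is untouched, but predictions are corrupted --
# the agent falsely expects steering toward the Sirens to end well.
def corrupted_predict(action):
    return "sail_past" if action == "steer_toward_sirens" else "steer_toward_sirens"
case2 = choose(original_utility, corrupted_predict)

print(case1, case2)  # both agents steer toward the Sirens
```

Both agents behave identically, which is why the two stories are easy to conflate. But they differ in a way that matters: only the second agent would endorse being tied to the mast if its predictions were later corrected, since its underlying goals never changed.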

Philosophy

“Essentially, he [Yudkowsky] (and the Rationalists) are thoroughgoing utilitarians.” – That’s a bit misleading. Leading rationalists are predominantly consequentialists, but mostly avoid committing to a moral system as specific as utilitarianism. Leading rationalists also mostly endorse moral uncertainty. Rationalists mostly endorse utilitarian-style calculation (which entails some of the controversial features of utilitarianism), but are careful to combine that with worry about whether we’re optimizing the quantity that we want to optimize.

I also recommend Utilitarianism and its discontents as an example of one rationalist’s nuanced partial endorsement of utilitarianism.

Political solutions to AI risk?

Chivers describes Holden Karnofsky as wanting “to get governments and tech companies to sign treaties saying they’ll submit any AGI designs to outside scrutiny before switching them on. It wouldn’t be iron-clad, because firms might simply lie”.

Most rationalists seem pessimistic about treaties such as this.

Lying is hardly the only problem. This idea assumes that there will be a tiny number of attempts, each with a very small number of launches that look like the real thing, as happened with the first moon landing and the first atomic bomb. Yet the history of software development suggests it will be something more like hundreds of attempts that look like they might succeed. I wouldn’t be surprised if there are millions of times when an AI is turned on, and the developer has some hope that this time it will grow into a human-level AGI. There’s no way that a large number of designs will get sufficient outside scrutiny to be of much use.

And if a developer is trying new versions of their system once a day (e.g. making small changes to a number that controls, say, openness to new experience), any requirement to submit all new versions for outside scrutiny would cause large delays, creating large incentives to subvert the requirement.

So any realistic treaty would need provisions that identify a relatively small set of design choices that need to be scrutinized.

I see few signs that any experts are close to developing a consensus about what criteria would be appropriate here, and I expect that doing so would require a significant fraction of the total wisdom needed for AI safety. I discussed my hope for one such criterion in my review of Drexler’s Reframing Superintelligence paper.

Rationalist personalities

Chivers mentions several plausible explanations for what he labels the “semi-death of LessWrong”, the most obvious being that Eliezer Yudkowsky finished most of the blogging that he had wanted to do there. But I’m puzzled by one explanation that Chivers reports: “the attitude … of thinking they can rebuild everything”. Quoting Robin Hanson:

At Xanadu they had to do everything different: they had to organize their meetings differently and orient their screens differently and hire a different kind of manager, everything had to be different because they were creative types and full of themselves. And that’s the kind of people who started the Rationalists.

That seems like a partly apt explanation for the demise of the rationalist startups MetaMed and Arbital. But LessWrong mostly copied existing sites, such as Reddit, and was only ambitious in the sense that Eliezer was ambitious about what ideas to communicate.

Culture

I guess a book about rationalists can’t resist mentioning polyamory. “For instance, for a lot of people it would be difficult not to be jealous.” Yes, when I lived in a mostly monogamous culture, jealousy seemed pretty standard. That attitude melted away when the Bay Area cultures that I associated with started adopting polyamory or something similar (shortly before the rationalists became a culture). Jealousy has much more purpose if my partner is flirting with monogamous people than if he’s flirting with polyamorists.

Less dramatically, “We all know people who are afraid of visiting their city centres because of terrorist attacks, but don’t think twice about driving to work.”

This suggests some weird filter bubbles somewhere. I thought that fear of cities got forgotten within a month or so after 9/11. Is this a difference between London and the US? Am I out of touch with popular concerns? Does Chivers associate more with paranoid people than I do? I don’t see any obvious answer.

Conclusion

It would be really nice if Chivers and Yudkowsky could team up to write a book, but this book is a close substitute for such a collaboration.

See also Scott Aaronson’s review.

Book review: Prediction Machines: The Simple Economics of Artificial Intelligence, by Ajay Agrawal, Joshua Gans, and Avi Goldfarb.

Three economists decided to write about AI. They got excited about AI, and that distracted them enough that they only said a modest amount about the standard economics principles that laymen need to better understand. As a result, the book ended up mostly being simple descriptions of topics on which the authors had limited expertise. I noticed fewer amateurish mistakes than I expected from this strategy, and they mostly end up doing a good job of describing AI in ways that are mildly helpful to laymen who only want a very high-level view.

The book’s main goal is to advise business on how to adopt current types of AI (“reading this book is almost surely an excellent predictor of being a manager who will use prediction machines”), with a secondary focus on how jobs will be affected by AI.

The authors correctly conclude that a modest extrapolation of current trends implies at most some short-term increases in unemployment.


The Good Gut

Book review: The Good Gut: Taking Control of Your Weight, Your Mood, and Your Long-term Health, by Justin Sonnenburg and Erica Sonnenburg.

I had hoped this book would help me improve my gut health. Alas, their advice is of limited value, mostly focusing on changes that I’d already adopted based on other types of nutritional ideas, such as eating more fiber from diverse sources. That limited value is probably due mostly to the shortage of useful research on this subject, rather than to any failing of the authors. Research on these topics seems hard due to the complexity of the microbiome, and the large variation between people.

The book convinced me to eat more kimchi, and left me wondering whether to try consuming more bacteria in pill form.

The book repeats warnings that I’d read elsewhere about the dangers of antibiotics, and the problems that arise from having an insufficiently diverse microbiome, such as autoimmune diseases.

I have been placing heavy emphasis on fiber in my nutritional strategies, while having a gut feeling that the concept of fiber left something to be desired. The book pointed me to an alternative concept: microbiota accessible carbohydrates (MACs), which mostly means carbs that aren’t absorbed by the small intestine. A diverse set of MACs feeds a diverse set of microbiota, which at least correlates with good health.

Alas, it seems impossible to reliably measure MACs by analyzing food in isolation – different people’s small intestines absorb different substances. There are also complications such as erythritol, which is mostly absorbed in the small intestine (and is then removed without doing much), but about 10% of which ends up feeding microbiota in the colon. So I’m still stuck with estimating my MAC consumption via the standard fiber estimates, and taking care to get it from diverse sources.

The Sonnenburgs explain that food preparation affects absorption. Flour is absorbed faster than less-processed grain, and the meaning of “flour” has changed over the past century or so, from something that was ground coarsely and eaten soon after, to something that is ground very fine, and stays on a shelf long enough to go rancid if it is whole-grain flour. That nudged me toward a more nuanced position on grains. The “grains are not food” rule was a simple way to improve my diet, but there are clearly big differences between the best whole grains and the worst grain-derived products.

It also helps me understand how grains, as typically used, gradually morphed into mostly being junk food without an easy way to detect the worst effects. More sophisticated machines to grind the grains led to a texture that was more quickly absorbed, leaving less for microbiota. The switch away from whole grain flour was likely, in part, a gradual adaptation to a system where the flour was ground at an increasing distance from the home, and became more likely to go rancid if the germ wasn’t discarded.

The book has a section on how infants get a microbiome, which explains why it’s really hard to find or create a good substitute for human milk.

The Sonnenburgs have unusual heuristics about when they wash their hands, designed to reduce pathogens while welcoming good bacteria. They avoid washing after gardening or petting the family dog, but are careful to wash after going to places where they could get germs from many other people – malls, petting zoos, etc.

I’m discouraged by the news that microbiome treatments such as Fecal Microbiota Transplantation (FMT) may be regulated as drugs. It seems like regulations should be modeled somewhat more closely on food, or blood transfusion, regulation. Like food, FMT should have broader goals than just combating specific diseases, should provide diverse inputs, and should bear some resemblance to what naturally enters our bodies. Like blood transfusions, FMT should be reasonably safe unless there’s something unusual about the donor.

The book’s advice overlaps a lot with paleo-like advice to go back to how our ancestors ate, played, etc., with a rather balanced approach to borrowing from our grandparents’ lifestyle versus borrowing from hunter-gatherer lifestyles. The book is solid, often at the expense of being exciting.

Book review: The Finders, by Jeffery A. Martin.

This book is about the states of mind that Martin labels Fundamental Wellbeing.

These seem to be what people seek through meditation, but Martin carefully avoids focusing on Buddhism, and says that other spiritual approaches produce similar states of mind.

Martin approaches the subject as if he were an anthropologist. I expect that’s about as rigorous as we should hope for on many of the phenomena that he studies.

The most important change associated with Fundamental Wellbeing involves the weakening or disappearance of the Narrative-Self (i.e. the voice that seems to be the center of attention in most human minds).

I’ve experienced a weak version of that. Through a combination of meditation and CFAR ideas (and maybe The Mating Mind, which helped me think of the Narrative-Self as more of a press secretary than as a leader), I’ve substantially reduced the importance that my brain attaches to my Narrative-Self, and that has significantly reduced how much I’m bothered by negative stimuli.

Some more “advanced” versions of Fundamental Wellbeing also involve a loss of “self” – something along the lines of being one with the universe, or having no central locus or vantage point from which to observe the world. I don’t understand this very well. Martin suggests an analogy which describes this feeling as “zoomed-out”, i.e. the opposite extreme from Hyperfocus or a state of Flow. I guess that gives me enough hints to say that I haven’t experienced anything that’s very close to it.

I’m tempted to rephrase this as turning off what Dennett calls the Cartesian Theater. Many of the people that Martin studied seem to have discarded this illusion.

Alas, the book says little about how to achieve Fundamental Wellbeing. The people who he studied tend to have achieved it via some spiritual path, but it sounds like there was typically a good deal of luck involved. Martin has developed an allegedly more reliable path, available at FindersCourse.com, but that requires a rather inflexible commitment to a time-consuming schedule, and a fair amount of money.

Should I want to experience Fundamental Wellbeing?

Most people who experience it show a clear preference for remaining in that state. That’s a clear, medium-strength reason to suspect that I should want it, and it’s hard to see any counter to that argument.

The weak version of Fundamental Wellbeing that I’ve experienced tends to confirm that conclusion, although I see signs that some aspects require continuing attention to maintain, and the time required to do so sometimes seems large compared to the benefits.

Martin briefly discusses people who experienced Fundamental Wellbeing, and then rejected it. It reminds me of my reaction to an SSRI – it felt like I got a nice vacation, but vacation wasn’t what I wanted, since it conflicted with some of my goals for achieving life satisfaction. Those who reject Fundamental Wellbeing disliked the lack of agency and emotion (I think this refers only to some of the harder-to-achieve versions of Fundamental Wellbeing). That sounds like it overlaps a fair amount with what I experienced on the SSRI.

Martin reports that some of the people he studied have unusual reactions to pain, feeling bliss under circumstances that appear to involve lots of pain. I can sort of see how this is a plausible extreme of the effects that I understand, but it still sounds pretty odd.

Will the world be better if more people achieve Fundamental Wellbeing?

The world would probably be somewhat better. Some people become more willing and able to help others when they reduce their own suffering. But that’s partly offset by people with Fundamental Wellbeing feeling less need to improve themselves, and feeling less bothered by the suffering of others. So the net effect is likely just a minor benefit.

I expect that even in the absence of people treating each other better, the reduced suffering that’s associated with Fundamental Wellbeing would mean that the world is a better place.

However, it’s tricky to determine how important that is. Martin mentions a clear case of a person who said he felt no stress, but exhibited many physical signs of being highly stressed. Is that better or worse than being conscious of stress? I think my answer is very context-dependent.

If it’s so great, why doesn’t everyone learn how to do it?

  • Achieving Fundamental Wellbeing often causes people to have diminished interest in interacting with other people. Only a modest fraction of people who experience it attempt to get others to do so.
  • I presume it has been somewhat hard to understand how to achieve Fundamental Wellbeing, and why we should think it’s valuable.
  • The benefits are somewhat difficult to observe, and there are sometimes visible drawbacks. E.g. one anecdote of a manager who became more generous with his company’s resources – that was likely good for some people, but likely at some cost to the company and/or his career.

Conclusion

The ideas in this book deserve to be more widely known.

I’m unsure whether that means lots of people should read this book. Maybe it’s more important just to repeat simple summaries of the book, and to practice more meditation.

[Note: I read a pre-publication copy that was distributed at the Transformative Technology conference.]

Book review: The Longevity Diet: Discover the New Science Behind Stem Cell Activation and Regeneration to Slow Aging, Fight Disease, and Optimize Weight, by Valter Longo.

Longo is a moderately competent researcher whose ideas about nutrition and fasting are mostly heading in the right general direction, but many of his details look suspicious.

He convinced me to become more serious about occasional, longer fasts, but I probably won’t use his products.

Book review: Tripping over the Truth: the Return of the Metabolic Theory of Cancer Illuminates a New and Hopeful Path to a Cure, by Travis Christofferson.

This book is mostly a history of cancer research, focusing on competing grand theories, and the treatments suggested by the author’s preferred theory. That’s a simple theory where the prime cause of cancer is a switch to fermentation (known as the metabolic theory, or the Warburg hypothesis).

He describes in detail two promising treatments that were inspired by this theory: a drug based on 3-bromopyruvate (3BP), and a ketogenic diet.


Book(?) review: Microbial Burden: A Major Cause Of Aging And Age-Related Disease, by Michael Lustgarten.

This minibook has highly variable quality.

Lustgarten demonstrates clear associations between microbes and aging. That’s hardly newsworthy.

He’s much less clear when he switches to talking about causality. He says microbes are the root cause of aging, and occasionally provides weak evidence to support that.

I still have plenty of reason to suspect that many of those associations are due to frailty and declining immune systems, which let microbes take over more. Lustgarten doesn’t make the kind of argument that would convince me that the microbe –> senility causal path is more important than the senility –> microbe causal path.

He has a decent amount of practical advice that is likely to be quite healthy even if he’s wrong about the root cause of aging, including: eat lots of leaves, green peppers, and mushrooms, and use low-pH soap.

One confusing recommendation is to limit our protein intake to moderate levels.

He provides a nice graph of mortality as a function of BUN (see here for more evidence about BUN), which hints that we should reduce BUN by reducing protein intake.

He also notes that methionine restriction has significant evidence behind it, and methionine restriction requires restricting protein, especially animal proteins.

Yet I see some suggestions that protein (methionine) restriction is likely only helpful in people with kidney disease.

My impression is that high BUN mostly indicates poor health when it’s caused by kidney problems, and doesn’t provide much reason for reducing protein consumption, at least in people with healthy kidneys.

Lustgarten has since blogged about evidence (see the 7/11/2018 update) that higher protein intake helps reduce his homocysteine.

I have also noticed a (noisy) negative correlation between my protein consumption and my homocysteine levels. But that might be due to riboflavin – when I reduce my protein intake, I also reduce my riboflavin intake, since crickets are an important source of riboflavin for me. So I want to do more research into dietary protein before deciding to reduce it.

The book is too quick to dive into technical references, with limited descriptions of why they’re relevant. In many cases, I decided they provided only marginal support to his important points.

Read his blog before deciding whether to read the minibook. The blog focuses more on quantified-self-style reporting, and less on promoting a grand theory.

Book review: Principles: Life and Work, by Ray Dalio.

Most popular books get that way by having an engaging style. Yet this book’s style is mundane, almost forgettable.

Some books become bestsellers by being controversial. Others become bestsellers by manipulating readers’ emotions, e.g. by being fun to read, or by getting the reader to overestimate how profound the book is. Principles definitely doesn’t fit those patterns.

Some books become bestsellers because the author became famous for reasons other than his writings (e.g. Stephen Hawking, Donald Trump, and Bill Gates). Principles fits this pattern somewhat well: if an obscure person had published it, nothing about it would have triggered a pattern of readers enthusiastically urging their friends to read it. I suspect the average book in this category is rather pathetic, but I also expect there’s a very large variance in the quality of books in this category.

Principles contains an unusual amount of wisdom. But it’s unclear whether that’s enough to make it a good book, because it’s unclear whether it will convince readers to follow the advice. Much of the advice sounds like ideas that most of us agree with already. The wisdom comes more in selecting the most underutilized ideas, without being particularly novel. The main benefit is likely to be that people who were already on the verge of adopting the book’s advice will get one more nudge from an authority, providing the social reassurance they need.

Advice

Some of why I trust the book’s advice is that it overlaps a good deal with other sources from which I’ve gotten value, e.g. CFAR.

Key ideas include:

  • be honest with yourself
  • be open-minded
  • focus on identifying and fixing your most important weaknesses


Time Biases

Book review: Time Biases: A Theory of Rational Planning and Personal Persistence, by Meghan Sullivan.

I was very unsure about whether this book would be worth reading, as it could easily have been focused on complaints about behavior that experts have long known are mistaken.

I was pleasantly surprised when it quickly got to some of the really hard questions, and was thoughtful about what questions deserved attention. I disagree with enough of Sullivan’s premises that I have significant disagreements with her conclusions. Yet her reasoning is usually good enough that I’m unsure what to make of our disagreements – they’re typically due to differences of intuition that she admits are controversial.

I had hoped for some discussion of ethics (e.g. what discount rate to use in evaluating climate change), whereas the book focuses purely on prudential rationality (i.e. what’s rational for a self-interested person). Still, the discussion of prudential rationality covers most of the issues that make the ethical choices hard.

Personal identity

A key issue is the nature of personal identity – does one’s identity change over time?
