
Book review: The AI Does Not Hate You: Superintelligence, Rationality and the Race to Save the World, by Tom Chivers.

This book is a sympathetic portrayal of the rationalist movement by a quasi-outsider. It includes a well-organized explanation of why some people expect that AI will create large risks sometime this century, written in simple language that is suitable for a broad audience.

Caveat: I know many of the people who are described in the book. I’ve had some sort of connection with the rationalist movement since before it became distinct from transhumanism, and I’ve been mostly an insider since 2012. I read this book mainly because I was interested in how the rationalist movement looks to outsiders.

Chivers is a science writer. I normally avoid books by science writers, due to an impression that they mostly focus on telling interesting stories, without developing a deep understanding of the topics they write about.

Chivers’ understanding of the rationalist movement doesn’t quite qualify as deep, but he was surprisingly careful to read a lot about the subject, and to write only about the parts he understood.

Many times I reacted to something he wrote with “that’s close, but not quite right”. Usually when I reacted that way, Chivers did a good job of describing the rationalist message in question, and the main problem was either that rationalists haven’t figured out how to explain their ideas in a way that a broad audience can understand, or that rationalists are confused. So the complaints I make in the rest of this review are at most weakly directed at Chivers.

I saw two areas where Chivers overlooked something important.

Rationality

One involves CFAR.

Chivers wrote seven chapters on biases, and how rationalists view them, ending with “the most important bias”: knowing about biases can make you more biased (italics his).

I get the impression that Chivers is sweeping this problem under the rug (Do we fight that bias by being aware of it? Didn’t we just read that that doesn’t work?). That is roughly what happened with many people who learned rationalism solely via written descriptions.

Then much later, when describing how he handled his conflicting attitudes toward the risks from AI, he gives a really great description of maybe 3% of what CFAR teaches (internal double crux), much like a blind man giving a really clear description of the upper half of an elephant’s trunk. He prefaces this narrative with the apt warning: “I am aware that this all sounds a bit mystical and self-helpy. It’s not.”

Chivers doesn’t seem to connect this exercise with the goal of overcoming biases. Maybe he was too busy applying the technique on an important problem to notice the connection with his prior discussions of Bayes, biases, and sanity. It would be reasonable for him to argue that CFAR’s ideas have diverged enough to belong in a separate category, but he seems to put them in a different category by accident, without realizing that many of us consider CFAR to be an important continuation of rationalists’ interest in biases.

World conquest

Chivers comes very close to covering all of the layman-accessible claims that Yudkowsky and Bostrom make. My one complaint here is that he only gives vague hints about why one bad AI can’t be stopped by other AIs.

A key claim of many leading rationalists is that AI will have some winner-take-all dynamics that will lead to one AI having a decisive strategic advantage after it crosses some key threshold, such as human-level intelligence.

This is a controversial position that is somewhat connected to foom (fast takeoff), but which might be correct even without foom.

Utility functions

“If I stop caring about chess, that won’t help me win any chess games, now will it?” – That chapter title provides a good explanation of why a simple AI would continue caring about its most fundamental goals.

Is that also true of an AI with more complex, human-like goals? Chivers is partly successful at explaining how to apply the concept of a utility function to a human-like intelligence. Rationalists (or at least those who actively research AI safety) have a clear meaning here, at least as applied to agents that can be modeled mathematically. But when laymen try to apply that to humans, confusion abounds, due to the ease of conflating subgoals with ultimate goals.

Chivers tries to clarify, using the story of Odysseus and the Sirens, and claims that the Sirens would rewrite Odysseus’ utility function. I’m not sure how we can verify that the Sirens work that way, or whether they would merely persuade Odysseus to make false predictions about his expected utility. Chivers at least states clearly that the Sirens try to prevent Odysseus from doing what his pre-Siren utility function advises (by making him run aground). Chivers’ point could be a bit clearer if he specified that in his (nonstandard?) version of the story, the Sirens make Odysseus want to run aground.

Philosophy

“Essentially, he [Yudkowsky] (and the Rationalists) are thoroughgoing utilitarians.” – That’s a bit misleading. Leading rationalists are predominantly consequentialists, but mostly avoid committing to a moral system as specific as utilitarianism. Leading rationalists also mostly endorse moral uncertainty. Rationalists mostly endorse utilitarian-style calculation (which entails some of the controversial features of utilitarianism), but are careful to combine that with worry about whether we’re optimizing the quantity that we want to optimize.

I also recommend Utilitarianism and its discontents as an example of one rationalist’s nuanced partial endorsement of utilitarianism.

Political solutions to AI risk?

Chivers describes Holden Karnofsky as wanting “to get governments and tech companies to sign treaties saying they’ll submit any AGI designs to outside scrutiny before switching them on. It wouldn’t be iron-clad, because firms might simply lie”.

Most rationalists seem pessimistic about treaties such as this.

Lying is hardly the only problem. This idea assumes that there will be a tiny number of attempts, each with a very small number of launches that look like the real thing, as happened with the first moon landing and the first atomic bomb. Yet the history of software development suggests it will be something more like hundreds of attempts that look like they might succeed. I wouldn’t be surprised if there are millions of times when an AI is turned on, and the developer has some hope that this time it will grow into a human-level AGI. There’s no way that a large number of designs will get sufficient outside scrutiny to be of much use.

And if a developer is trying new versions of their system once a day (e.g. making small changes to a number that controls, say, openness to new experience), any requirement to submit all new versions for outside scrutiny would cause large delays, creating large incentives to subvert the requirement.

So any realistic treaty would need provisions that identify a relatively small set of design choices that need to be scrutinized.

I see few signs that any experts are close to developing a consensus about what criteria would be appropriate here, and I expect that doing so would require a significant fraction of the total wisdom needed for AI safety. I discussed my hope for one such criterion in my review of Drexler’s Reframing Superintelligence paper.

Rationalist personalities

Chivers mentions several plausible explanations for what he labels the “semi-death of LessWrong”, the most obvious being that Eliezer Yudkowsky finished most of the blogging that he had wanted to do there. But I’m puzzled by one explanation that Chivers reports: “the attitude … of thinking they can rebuild everything”. Quoting Robin Hanson:

At Xanadu they had to do everything different: they had to organize their meetings differently and orient their screens differently and hire a different kind of manager, everything had to be different because they were creative types and full of themselves. And that’s the kind of people who started the Rationalists.

That seems like a partly apt explanation for the demise of the rationalist startups MetaMed and Arbital. But LessWrong mostly copied existing sites, such as Reddit, and was only ambitious in the sense that Eliezer was ambitious about what ideas to communicate.

Culture

I guess a book about rationalists can’t resist mentioning polyamory. “For instance, for a lot of people it would be difficult not to be jealous.” Yes, when I lived in a mostly monogamous culture, jealousy seemed pretty standard. That attitude melted away when the Bay Area cultures that I associated with started adopting polyamory or something similar (shortly before the rationalists became a culture). Jealousy has much more purpose if my partner is flirting with monogamous people than if he’s flirting with polyamorists.

Less dramatically, “We all know people who are afraid of visiting their city centres because of terrorist attacks, but don’t think twice about driving to work.”

This suggests some weird filter bubbles somewhere. I thought that fear of cities got forgotten within a month or so after 9/11. Is this a difference between London and the US? Am I out of touch with popular concerns? Does Chivers associate more with paranoid people than I do? I don’t see any obvious answer.

Conclusion

It would be really nice if Chivers and Yudkowsky could team up to write a book, but this book is a close substitute for such a collaboration.

See also Scott Aaronson’s review.

[I have medium confidence in the broad picture, and somewhat lower confidence in the specific pieces of evidence. I’m likely biased by my commitment to an ETG strategy.]

Earning to Give (ETG) should be the default strategy for most Effective Altruists (EAs).

Five years ago, EA goals were clearly constrained a good deal by funding. Today, there’s almost enough money going into far-future causes that vetting and talent constraints have become at least as important as funding. That led to a multi-year trend of increasingly downplaying ETG, which was initially appropriate, but has gone too far.


Mouse Chow Questions

[Highly speculative, and rather near the fringes of my expertise].

I’ve previously mentioned that medical studies on mice may have produced poor results due to the use of unnatural environments (cold stress, and the lack of burrow-like protection from predators).

Now I notice that standard rodent food has suspiciously high methionine levels.


There are a number of investment ideas that pop up about once per generation, work well for years, and then remind investors why they’re not so good, after which they get ignored for long enough that the average investor doesn’t remember that the idea has been tried.

The idea I’m remembering this month is known by the phrase Nifty Fifty, meaning that there were about 50 stocks that were considered safe investments, whose reliable growth enabled investors to ignore standard valuation measures such as price/earnings ratios, dividend yields, and price to book value.

The spirit behind the Nifty Fifty was characterized by this line from a cryonaut in Woody Allen’s Sleeper (1973): “I bought Polaroid at seven, it’s probably up millions by now!”.

There was nothing particularly wrong with the belief that those were good companies. The main mistakes were to believe that their earnings would grow forever, and/or that growing earnings would imply growing stock prices, no matter how high the current stock price is.

I’ve seen a number of stocks recently that seem to fit this pattern, with Amazon and Salesforce most clearly fitting the stereotype. I also ran into one person a few months ago who believed that Amazon was a good investment because it’s a reliable source of 15+% growth. I also visited Salesforce Park last month, and the wealth that it radiated weakly suggests the kind of overconfidence that’s associated with an overpriced stock market.

I took a stab at quantifying my intuitions, and came up with a list of 50 companies (shown below) based on data from SI Pro as of 2019-09-20, filtered by these criteria:

  • pe_ey1 > 30 (price more than 30 times next year’s forecast earnings)
  • mktcap > 5000 (market capitalization more than $5 billion)
  • prp_2yh > 75 (price more than 75% of its 2 year high)
  • rsales_g5f > 50 (5 year sales growth above the median stock in the database)
  • sales_y1 < 0.33333*mktcap (market capitalization more than 3 times last year’s sales)
  • yield < 3 (dividend yield less than 3%)
  • pbvps > 5 (price more than 5 times book value)
  • epsdc_y2 > 0 (it earned money the year before last)
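
For concreteness, here’s roughly how that screen looks as code. This is a minimal sketch, assuming the SI Pro data has been loaded into a pandas DataFrame with the field names shown above; the function name and DataFrame are mine, not anything from SI Pro:

```python
import pandas as pd

def nifty_fifty_screen(df: pd.DataFrame) -> pd.DataFrame:
    """Apply the screening criteria listed above, given one row
    per stock with SI Pro-style column names."""
    mask = (
        (df["pe_ey1"] > 30)                    # > 30x next year's forecast earnings
        & (df["mktcap"] > 5000)                # market cap > $5 billion (units: $M)
        & (df["prp_2yh"] > 75)                 # price > 75% of its 2-year high
        & (df["rsales_g5f"] > 50)              # 5-year sales growth above the median
        & (df["sales_y1"] < df["mktcap"] / 3)  # market cap > 3x last year's sales
        & (df["yield"] < 3)                    # dividend yield < 3%
        & (df["pbvps"] > 5)                    # price > 5x book value
        & (df["epsdc_y2"] > 0)                 # earned money the year before last
    )
    return df[mask]
```

Something like this filter, run over older snapshots of the database, is what the historical comparison below amounts to.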

I did a half-assed search over the past 20 years, and it looks like there were more companies meeting these criteria in the dot com bubble (my data for that period isn’t fully comparable), but during 2005-2015 there were generally fewer than a dozen companies meeting these criteria.

The companies on this list aren’t as widely known as I’d expected, which weakens the stereotype a bit, but otherwise they fit the Nifty Fifty pattern of the market seeming confident that their earnings will grow something like 20% per year for the next decade.

There were some other companies that arguably belonged on the list, but which the filter excluded mainly due to their forward price/earnings ratio being less than 30: BABA (Alibaba Group Holding Ltd), FB (Facebook), and GOOGL (Alphabet). Maybe I should have used a threshold less than 30, or maybe I should take their price/earnings ratio as evidence that the market is evaluating them sensibly.

This looks like a stock market bubble, but a significantly less dramatic one than the dot com bubble. The market is doing a decent job of distinguishing good companies from bad ones (much more so than in the dot com era), and is merely getting a bit overconfident about how long the good ones will be able to maintain their relative quality.

How much longer will these stocks rise? I’m guessing until the next major bear market. No, I’m sorry, I don’t have any prediction for when that bear market will occur or what will trigger it. It will likely be triggered by something that’s not specific to the new nifty fifty.

I’m currently short EQIX. I expect to short more of these stocks someday, but probably not this year.

ticker company pe_ey1 mktcap ($M) sales_y1 ($M) yield (%) pbvps
AMT American Tower Corp 52 100520 7440.1 1.7 18.21
AMZN Amazon.com, Inc. 54 901016 232887 0 16.67
ANSS ANSYS, Inc. 31.8 18422.3 1293.6 0 6.46
AZPN Aspen Technology, Inc. 30.5 8820.8 598.3 0 22.2
BFAM Bright Horizons Family Solutio 37.2 9181.8 1903.2 0 10.21
CMG Chipotle Mexican Grill, Inc. 48 23067.9 4865 0 15.04
CRM salesforce.com, inc. 50.1 134707 13282 0 7.02
CSGP CoStar Group Inc 49.3 21878.4 1191.8 0 6.74
DASTY Dassault Systemes SE (ADR) 32.9 37683.3 3839 1 7.05
DXCM DexCom, Inc. 110.3 14132.9 1031.6 0 20.44
EQIX Equinix Inc 70.2 48264.7 5071.7 1.7 5.46
ETSY Etsy Inc 61.7 7115.6 603.7 0 16.42
EW Edwards Lifesciences Corp 36.7 44736.3 3722.8 0 13.06
FICO Fair Isaac Corporation 36.5 9275.5 1032.5 0 33.71
FIVE Five Below Inc 33.6 7152.1 1559.6 0 10.86
FTNT Fortinet Inc 31.6 13409.2 1801.2 0 11.93
GDDY Godaddy Inc 67.2 11976.7 2660.1 0 12.41
GWRE Guidewire Software Inc 70.3 8835.9 652.8 0 5.84
HEI Heico Corp 49.3 15277.3 1777.7 0.1 10.58
HUBS HubSpot Inc 94.6 6877.4 513 0 10.93
IAC IAC/InterActiveCorp 38.3 19642.5 4262.9 0 6.51
IDXX IDEXX Laboratories, Inc. 49.3 23570.6 2213.2 0 138.03
ILMN Illumina, Inc. 44.3 44864.4 3333 0 10.5
INTU Intuit Inc. 31.6 70225.1 6784 0.8 18.67
INXN InterXion Holding NV 106.8 5995.1 620.2 0 7.95
ISRG Intuitive Surgical, Inc. 38.8 61020.9 3724.2 0 8.44
LULU Lululemon Athletica inc. 33.7 25222.7 3288.3 0 16.36
MA Mastercard Inc 30.1 276768 14950 0.5 55.23
MASI Masimo Corporation 41.8 8078.9 858.3 0 7.8
MDSO Medidata Solutions Inc 43.4 5734.1 635.7 0 8.35
MELI Mercadolibre Inc 285.4 27306.2 1439.7 0 12.59
MKTX MarketAxess Holdings Inc. 56.7 12941.1 435.6 0.6 18.93
MPWR Monolithic Power Systems, Inc. 31.5 6761.2 582.4 1 9.44
MTCH Match Group Inc 38.4 22264.7 1729.9 0 105.51
OLED Universal Display Corporation 45.5 8622.8 247.4 0.2 11.36
PAYC Paycom Software Inc 51.3 12812.8 566.3 0 28.6
PCTY Paylocity Holding Corp 47.6 5150.5 467.6 0 17.01
PEGA Pegasystems Inc. 167.2 5686.4 891.6 0.2 10.14
PEN Penumbra Inc 134 5142.8 444.9 0 11.3
RMD ResMed Inc. 30.6 19181.5 2606.6 1.2 9.33
RNG RingCentral Inc 139.4 11062.4 673.6 0 30.93
ROL Rollins, Inc. 43.6 11383.4 1821.6 1.2 15.04
RP RealPage Inc 31.3 6084.5 869.5 0 5.3
TAL TAL Education Group (ADR) 39.4 21326.2 2563 0 8.45
TECH BIO-TECHNE Corp 34.6 7483.8 714 0.6 6.52
TREX Trex Company Inc 30.2 5074.2 684.3 0 13.05
TYL Tyler Technologies, Inc. 43.5 10004.5 935.3 0 6.98
VEEV Veeva Systems Inc 62 21366.2 862.2 0 15.27
VRSK Verisk Analytics, Inc. 32.3 25948.4 2395.1 0.6 11.73
ZAYO Zayo Group Holdings Inc 42.5 7997.8 2578 0 5.95

Book review: Prediction Machines: The Simple Economics of Artificial Intelligence, by Ajay Agrawal, Joshua Gans, and Avi Goldfarb.

Three economists decided to write about AI. They got excited about AI, and that distracted them enough that they only said a modest amount about the standard economics principles that laymen need to better understand. As a result, the book ended up mostly being simple descriptions of topics on which the authors had limited expertise. I noticed fewer amateurish mistakes than I expected from this strategy, and they mostly do a good job of describing AI in ways that are mildly helpful to laymen who only want a very high-level view.

The book’s main goal is to advise business on how to adopt current types of AI (“reading this book is almost surely an excellent predictor of being a manager who will use prediction machines”), with a secondary focus on how jobs will be affected by AI.

The authors correctly conclude that a modest extrapolation of current trends implies at most some short-term increases in unemployment.


Are Blue Zones Healthy?

I’ve mentioned Blue Zones approvingly several times on this blog (here, here, and here).

Alas, there are reasons to doubt that they’re unusually healthy. The paper Supercentenarians and the oldest-old are concentrated into regions with no birth certificates and short lifespans makes a decent case that they’re mostly just areas where ages have been overstated. There are some relatively unhelpful arguments about who’s right on Andrew Gelman’s blog and on Bluezones.com.

As a consequence, I’m slightly decreasing my opinion of some foods that I was encouraged to eat by the Blue Zone memes: whole grains, beans, olive oil, and sweet potatoes. Sweet potatoes still seem likely to be quite healthy compared to the average American food, but I’m now uncertain whether they’re better or worse than the average paleo food (I previously considered them one of the best foods available). The rest of those foods seem no worse than the average American food, but I’m less optimistic about the safety of the average American food than I previously was.

I’ve also become less confident in the safety of a diet with less than 10% of calories from protein (Blue Zone Okinawans in 1949 got 9% of calories from protein), but I’d already decided not to pursue a low protein diet.

I’ve slightly decreased my opinion of Steven Gundry and Valter Longo.

H/T William Eden.

The Good Gut

Book review: The Good Gut: Taking Control of Your Weight, Your Mood, and Your Long-term Health, by Justin Sonnenburg and Erica Sonnenburg.

I had hoped this book would help me improve my gut health. Alas, their advice is of limited value, mostly focusing on changes that I’d already adopted based on other types of nutritional ideas, such as eating more fiber from diverse sources. That limited value is probably due mostly to the shortage of useful research on this subject, rather than to any failing of the authors. Research on these topics seems hard due to the complexity of the microbiome, and the large variation between people.

The book convinced me to eat more kimchi, and left me wondering whether to try consuming more bacteria in pill form.

The book repeats warnings that I’d read elsewhere about the dangers of antibiotics, and the problems that arise from having an insufficiently diverse microbiome, such as autoimmune diseases.

I have been placing heavy emphasis on fiber in my nutritional strategies, while having a gut feeling that the concept of fiber left something to be desired. The book pointed me to an alternative concept: microbiota accessible carbohydrates (MACs), which mostly means carbs that aren’t absorbed by the small intestine. A diverse set of MACs feeds a diverse set of microbiota, which at least correlates with good health.

Alas, it seems impossible to reliably measure MACs by analyzing food in isolation – different people’s small intestines absorb different substances. There are also complications such as erythritol, which is mostly absorbed in the small intestine (and is then removed without doing much), but about 10% of which ends up feeding microbiota in the colon. So I’m still stuck with estimating my MAC consumption via the standard fiber estimates, and taking care to get it from diverse sources.

The Sonnenburgs explain that food preparation affects absorption. Flour is absorbed faster than less-processed grain, and the meaning of “flour” has changed over the past century or so, from something that was ground coarsely and eaten soon after, to something that is ground very fine, and stays on a shelf long enough to go rancid if it is whole-grain flour. That nudged me toward a more nuanced position on grains. The “grains are not food” rule was a simple way to improve my diet, but there are clearly big differences between the best whole grains and the worst grain-derived products.

It also helps me understand how grains, as typically used, gradually morphed into mostly being junk food without an easy way to detect the worst effects. More sophisticated machines to grind the grains led to a texture that was more quickly absorbed, leaving less for microbiota. The switch away from whole grain flour was likely, in part, a gradual adaptation to a system where the flour was ground at an increasing distance from the home, and became more likely to go rancid if the germ wasn’t discarded.

The book has a section on how infants get a microbiome, which explains why it’s really hard to find or create a good substitute for human milk.

The Sonnenburgs have unusual heuristics about when they wash their hands, designed to reduce pathogens while welcoming good bacteria. They avoid washing after gardening or petting the family dog, but are careful to wash after going to places where they could get germs from many other people – malls, petting zoos, etc.

I’m discouraged by the news that microbiome treatments such as Fecal Microbiota Transplantation (FMT) may be regulated as drugs. It seems like regulations should be modeled somewhat more closely on food, or blood transfusion, regulation. Like food, FMT should have broader goals than just combating specific diseases, should provide diverse inputs, and should bear some resemblance to what naturally enters our bodies. Like blood transfusions, FMT should be reasonably safe unless there’s something unusual about the donor.

The book’s advice overlaps a lot with paleo-like advice to go back to how our ancestors ate, played, etc., with a rather balanced approach to borrowing from our grandparents’ lifestyle versus borrowing from hunter-gatherer lifestyles. The book is solid, often at the expense of being exciting.

Food Tidbits

Fruit

Most fruits have been genetically engineered (via selective breeding, not via the methods that people have been complaining about) to have more sugar relative to fiber than wild fruit. So I seek fruits that have been neglected by agriculture and have nutrient levels that are more like the fruit that our ancestors ate … hmm, I’m probably oversimplifying dangerously there. I suspect there are a number of wild fruits that aren’t especially nutritious, but my heuristic of getting more wild-caught fruit is at least slightly healthier than eating exclusively factory-farmed fruit.

I’m currently trying to get a paleo level of fiber using the heuristics that good food should have at least 25% of its carbs as fiber, and more than 50g of fiber per 2000 calories.
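
Those two heuristics are easy to check mechanically. Here’s a minimal sketch (the function is mine; the calorie, carb, and fiber numbers have to come from a label or a nutrition database):

```python
def meets_fiber_heuristics(calories: float, carbs_g: float, fiber_g: float) -> bool:
    """Check the two heuristics above: at least 25% of carbs as fiber,
    and more than 50 g of fiber per 2000 calories."""
    fiber_share = fiber_g / carbs_g            # fraction of carbs that is fiber
    fiber_density = fiber_g / calories * 2000  # grams of fiber per 2000 calories
    return fiber_share >= 0.25 and fiber_density > 50
```

By that test, the saskatoon berry numbers below (140 grams of fiber per 2000 calories, 32% of total carbs) pass easily.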

Saskatoon berries (sometimes known as serviceberry, shadberry, or juneberry) are my favorite fruit.

I’ve seen some highly conflicting reports about their nutritional value, but the most reliable-looking version says 140 grams of fiber per 2000 calories (32% of total carbs). The most convenient way to get saskatoon berries is in pie filling that has too much added sugar, but still has around 67g of fiber per 2000 calories.

I recommend adding generous amounts of nutmeg and cinnamon, and a few walnuts, and eating without any further preparation.

Baobab fruit has about 320g of fiber per 2000 calories (64% of carbs come from fiber!). I find the taste to be slightly less pleasant than that of a typical fruit, so I mostly use baobab powder as something in between food and medicine, and also eat various baobab bites from nuts.com (alas, those contain fruit juice that dilutes their nutritional value a lot).

The Hadza get 7-15% of their calories from baobab, and get little Western disease.

Fruits that are less healthy, but have interesting taste:

Freeze dried durian

Freeze dried mangosteen

Freeze dried dragonfruit

A fruit that breaks my category system:
Avocados have a well-rounded set of nutrients, and Trader Joe’s Avocado’s Number Guacamole is almost pure avocado, and more convenient than whole avocados.

Roots

Taro root:
if you shop at a grocery store that caters to Asian customers, you should be able to buy taro roots that weigh several pounds, for a dollar a pound. Taro has more nutrition per pound (or dollar) than most other starchy roots, although not as much nutrition per calorie as sweet potatoes (I presume you already know that sweet potatoes are a good source of nutrition).

I just slice off a piece, microwave it, and add salt/potassium chloride. I find it especially valuable for feeling full/getting plenty of fiber on days when I’m doing protein fasts, as it’s unusually low in protein.

Nomato sauce:
is made primarily from carrots and beets. It’s mainly a tomato sauce substitute for those who are allergic to tomato. It’s likely a bit healthier than tomato sauce, but if you’re currently satisfied with tomato sauce, then nomato sauce is likely not tasty enough to get you to switch (but it’s fairly close to being that tasty). I use it for paleo-friendly versions of pizza (crust: shredded carrot, egg, olive oil, and various flours) and spaghetti (with sweet potato noodles).

Flours

I use a variety of flours in baking that are generally healthier than grain-based flours:

  • Almond
  • Arrowroot
  • Baobab
  • Cassava
  • Chestnut
  • Coconut
  • Cricket
  • Garbanzo
  • Green Banana
  • Green Pea
  • Potato
  • Sweet Potato
  • Tigernut

Nuts/seeds

Baru nuts:
interesting, and backed by some trendy sounding hype. I don’t know whether there’s any reason to prefer them to other nuts. I eat a modest amount, for variety.

I’ve moderated my anti-grain stance a bit, and now cook millet occasionally. It’s got the nutritional features of a whole grain, and tastes more like a refined grain.

Popped sorghum:
like popcorn, but with more fiber and other nutrients.

Lupin:
yet another legume that isn’t particularly remarkable compared to familiar legumes. It’s a bit like a lima bean or soybean. Brami sells a lightly fermented version that’s a bit more convenient than a typical bean, and fermenting likely adds some nutritional value.

For canned beans, I recommend Eden Foods, since they’re soaked overnight and pressure cooked, eliminating some possibly harmful lectins.

Milked Almonds:
like almond milk, but with a higher nut to water ratio, and minus the vitamins/minerals that get added to almond milk in order to make its nutritional profile more infant-oriented.

Processed foods

Not whole foods, but likely still fairly healthy:

Pegan thin bars, chocolate lava:
Each of these bars has a whopping 26 g of fiber. It probably has a good omega-6/omega-3 ratio, but I can’t find good evidence about that – it uses de-fatted Sacha Inchi seeds, which have a great percentage of omega-3 in any fat that remains, but I don’t see any info about whether the de-fatting leaves much fat compared to the poorer sunflower oil that they add.

Perfect Keto Bars – fiber, and plenty of collagen from grass-fed cows:
if you get most of your protein from animals, you likely have a poor glycine to methionine ratio. If you eat lots of milk or eggs (Mealsquares?), that ratio is even more likely to be poor. It only takes a little collagen to fix that ratio. (The Pegan thin bars also have a good glycine to methionine ratio).

Swerve or pure erythritol:
fairly natural ways to sweeten foods while adding almost no digestible calories. It’s hard to know whether these are as safe as not using sweeteners. They taste fairly similar to regular sugar, maybe cause mild digestive problems in some people, and work in some but not all baked goods.

Potassium chloride:
Most people should be getting more potassium relative to sodium (with important exceptions for people who take some trendy(?) drugs). Regulations restrict many convenient ways to do this, but it’s pretty easy to replace table salt with potassium chloride salt.

Green mussel pills:
if you want something healthier than a vegan diet without causing animals to suffer, but don’t like oysters, green mussel pills seem convenient.

Liver pills (from grass fed beef):
a relatively natural way to get some B12, B2 (riboflavin) and folate.

Wink Frozen Desserts:
taste a bit like ice cream, but have almost no calories. They’re mostly water, with some inulin and pea protein.

Miscellaneous

I bought a scale that weighs food to the nearest gram for my alternate day calorie restriction diet, and that has been better than measuring by volume for a variety of cooking tasks.

Finally, one food that I likely won’t get around to trying: Hákarl (aka rotten shark). It apparently takes months to make it non-poisonous. How did someone have the patience to discover that process?

Book review: The Finders, by Jeffery A Martin.

This book is about the states of mind that Martin labels Fundamental Wellbeing.

These seem to be what people seek through meditation, but Martin carefully avoids focusing on Buddhism, and says that other spiritual approaches produce similar states of mind.

Martin approaches the subject as if he were an anthropologist. I expect that’s about as rigorous as we should hope for on many of the phenomena that he studies.

The most important change associated with Fundamental Wellbeing involves the weakening or disappearance of the Narrative-Self (i.e. the voice that seems to be the center of attention in most human minds).

I’ve experienced a weak version of that. Through a combination of meditation and CFAR ideas (and maybe The Mating Mind, which helped me think of the Narrative-Self as more of a press secretary than as a leader), I’ve substantially reduced the importance that my brain attaches to my Narrative-Self, and that has significantly reduced how much I’m bothered by negative stimuli.

Some more “advanced” versions of Fundamental Wellbeing also involve a loss of “self” – something along the lines of being one with the universe, or having no central locus or vantage point from which to observe the world. I don’t understand this very well. Martin suggests an analogy which describes this feeling as “zoomed-out”, i.e. the opposite extreme from Hyperfocus or a state of Flow. I guess that gives me enough hints to say that I haven’t experienced anything that’s very close to it.

I’m tempted to rephrase this as turning off what Dennett calls the Cartesian Theater. Many of the people that Martin studied seem to have discarded this illusion.

Alas, the book says little about how to achieve Fundamental Wellbeing. The people he studied tend to have achieved it via some spiritual path, but it sounds like there was typically a good deal of luck involved. Martin has developed an allegedly more reliable path, available at FindersCourse.com, but that requires a rather inflexible commitment to a time-consuming schedule, and a fair amount of money.

Should I want to experience Fundamental Wellbeing?

Most people who experience it show a clear preference for remaining in that state. That’s a clear, medium-strength reason to suspect that I should want it, and it’s hard to see any counter to that argument.

The weak version of Fundamental Wellbeing that I’ve experienced tends to confirm that conclusion, although I see signs that some aspects require continuing attention to maintain, and the time required to do so sometimes seems large compared to the benefits.

Martin briefly discusses people who experienced Fundamental Wellbeing, and then rejected it. It reminds me of my reaction to an SSRI – it felt like I got a nice vacation, but vacation wasn’t what I wanted, since it conflicted with some of my goals for achieving life satisfaction. Those who reject Fundamental Wellbeing disliked the lack of agency and emotion (I think this refers only to some of the harder-to-achieve versions of Fundamental Wellbeing). That sounds like it overlaps a fair amount with what I experienced on the SSRI.

Martin reports that some of the people he studied have unusual reactions to pain, feeling bliss under circumstances that appear to involve lots of pain. I can sort of see how this is a plausible extreme of the effects that I understand, but it still sounds pretty odd.

Will the world be better if more people achieve Fundamental Wellbeing?

The world would probably be somewhat better. Some people become more willing and able to help others when they reduce their own suffering. But that’s partly offset by people with Fundamental Wellbeing feeling less need to improve themselves, and feeling less bothered by the suffering of others. So the net effect is likely just a minor benefit.

I expect that even in the absence of people treating each other better, the reduced suffering that’s associated with Fundamental Wellbeing would mean that the world is a better place.

However, it’s tricky to determine how important that is. Martin mentions a clear case of a person who said he felt no stress, but exhibited many physical signs of being highly stressed. Is that better or worse than being conscious of stress? I think my answer is very context-dependent.

If it’s so great, why doesn’t everyone learn how to do it?

  • Achieving Fundamental Wellbeing often causes people to have diminished interest in interacting with other people. Only a modest fraction of people who experience it attempt to get others to do so.
  • I presume it has been somewhat hard to understand how to achieve Fundamental Wellbeing, and why we should think it’s valuable.
  • The benefits are somewhat difficult to observe, and there are sometimes visible drawbacks. E.g. one anecdote of a manager who became more generous with his company’s resources – that was likely good for some people, but likely at some cost to the company and/or his career.

Conclusion

The ideas in this book deserve to be more widely known.

I’m unsure whether that means lots of people should read this book. Maybe it’s more important just to repeat simple summaries of the book, and to practice more meditation.

[Note: I read a pre-publication copy that was distributed at the Transformative Technology conference.]

Book review: The Longevity Diet: Discover the New Science Behind Stem Cell Activation and Regeneration to Slow Aging, Fight Disease, and Optimize Weight, by Valter Longo.

Longo is a moderately competent researcher whose ideas about nutrition and fasting are mostly heading in the right general direction, but many of his details look suspicious.

He convinced me to become more serious about occasional, longer fasts, but I probably won’t use his products.