Archives

All posts by Peter

Food Delivery Reviews

New food delivery services are springing up like weeds.

Hopes

I’m primarily interested now in a substitute for restaurants. As I currently use restaurants, they provide variety in my food, but aren’t particularly convenient or healthy. Restaurant delivery services have been improving, but the user interfaces for ordering still seem clumsy and primitive (few restaurants seem to care enough to interface well with delivery services, and even fewer restaurants have both healthy food and adequate nutritional labeling).

Book review: Tripping over the Truth: The Return of the Metabolic Theory of Cancer Illuminates a New and Hopeful Path to a Cure, by Travis Christofferson.

This book is mostly a history of cancer research, focusing on competing grand theories, and on the treatments suggested by the author’s preferred theory. That theory is simple: the prime cause of cancer is cells switching their energy production to fermentation (known as the metabolic theory, or the Warburg hypothesis).

He describes in detail two promising treatments that were inspired by this theory: a drug based on 3-bromopyruvate (3BP), and a ketogenic diet.


Book(?) review: Microbial Burden: A Major Cause Of Aging And Age-Related Disease, by Michael Lustgarten.

This minibook has highly variable quality.

Lustgarten demonstrates clear associations between microbes and aging. That’s hardly newsworthy.

He’s much less clear when he switches to talking about causality. He says microbes are the root cause of aging, and occasionally provides weak evidence to support that.

I still have plenty of reason to suspect that many of those associations are due to frailty and declining immune systems, which let microbes take over more. Lustgarten doesn’t make the kind of argument that would convince me that the microbe → senility causal path is more important than the senility → microbe causal path.

He has a decent amount of practical advice that is likely to be quite healthy even if he’s wrong about the root cause of aging, including: eat lots of leaves, green peppers, and mushrooms, and use low-pH soap.

One confusing recommendation is to limit our protein intake to moderate levels.

He provides a nice graph of mortality as a function of BUN (blood urea nitrogen; see here for more evidence about BUN), which hints that we should reduce BUN by reducing protein intake.

He also notes that methionine restriction has significant evidence behind it, and methionine restriction requires restricting protein, especially animal proteins.

Yet I see some suggestions that protein (methionine) restriction is likely only helpful in people with kidney disease.

My impression is that high BUN mostly indicates poor health when it’s caused by kidney problems, and doesn’t provide much reason for reducing protein consumption, at least in people with healthy kidneys.

Lustgarten has since blogged about evidence (see the 7/11/2018 update) that higher protein intake helps reduce his homocysteine.

I have also noticed a (noisy) negative correlation between my protein consumption and my homocysteine levels. But that might be due to riboflavin – when I reduce my protein intake, I also reduce my riboflavin intake, since crickets are an important source of riboflavin for me. So I want to do more research into dietary protein before deciding to reduce it.

The book is too quick to dive into technical references, with limited descriptions of why they’re relevant. In many cases, I decided they provided only marginal support for his important points.

Read his blog before deciding whether to read the minibook. The blog focuses more on quantified-self-style reporting, and less on promoting a grand theory.

Book review: Principles: Life and Work, by Ray Dalio.

Most popular books get that way by having an engaging style. Yet this book’s style is mundane, almost forgettable.

Some books become bestsellers by being controversial. Others become bestsellers by manipulating readers’ emotions, e.g. by being fun to read, or by getting the reader to overestimate how profound the book is. Principles definitely doesn’t fit those patterns.

Some books become bestsellers because the author became famous for reasons other than his writings (e.g. Stephen Hawking, Donald Trump, and Bill Gates). Principles fits this pattern somewhat well: if an obscure person had published it, nothing about it would have triggered a pattern of readers enthusiastically urging their friends to read it. I suspect the average book in this category is rather pathetic, but I also expect the variance in quality within this category is very large.

Principles contains an unusual amount of wisdom. But it’s unclear whether that’s enough to make it a good book, because it’s unclear whether it will convince readers to follow the advice. Much of the advice sounds like ideas that most of us already agree with. The wisdom lies more in selecting the most underutilized ideas than in saying anything particularly novel. The main benefit is likely that people who were already on the verge of adopting the book’s advice will get one more nudge from an authority, providing the social reassurance they need.

Advice

Part of the reason I trust the book’s advice is that it overlaps a good deal with other sources from which I’ve gotten value, e.g. CFAR.

Key ideas include:

  • be honest with yourself
  • be open-minded
  • focus on identifying and fixing your most important weaknesses


Eric Drexler has published a book-length paper on AI risk, describing an approach that he calls Comprehensive AI Services (CAIS).

His primary goal seems to be reframing AI risk discussions to use a rather different paradigm than the one that Nick Bostrom and Eliezer Yudkowsky have been promoting. (There isn’t yet any widely accepted paradigm, so this isn’t a Kuhnian paradigm shift; the field is better characterized as amorphous and struggling to establish its first paradigm.) Dueling paradigms seem to be the best that the AI safety field can manage for now.

I’ll start by mentioning some important claims that Drexler doesn’t dispute:

  • an intelligence explosion might happen somewhat suddenly, in the fairly near future;
  • it’s hard to reliably align an AI’s values with human values;
  • recursive self-improvement, as imagined by Bostrom / Yudkowsky, would pose significant dangers.

Drexler likely disagrees about some of the claims made by Bostrom / Yudkowsky on those points, but he shares enough of their concerns about them that those disagreements don’t explain why Drexler approaches AI safety differently. (Drexler is more cautious than most writers about making any predictions concerning these three claims).

CAIS isn’t a full solution to AI risks. Instead, it’s better thought of as an attempt to reduce the risk of world conquest by the first AGI that reaches some threshold, to preserve existing corrigibility somewhat past human-level AI, and to postpone the need for a permanent solution until we have more intelligence available.


The point of this blog post feels almost too obvious to be worth saying, yet I doubt that it’s widely followed.

People often avoid doing projects that have a low probability of success, even when the expected value is high. To counter this bias, I recommend that you mentally combine many such projects into a strategy of trying new things, and evaluate the strategy’s probability of success.

1.

Eliezer says in On Doing the Improbable:

I’ve noticed that, by my standards and on an Eliezeromorphic metric, most people seem to require catastrophically high levels of faith in what they’re doing in order to stick to it. By this I mean that they would not have stuck to writing the Sequences or HPMOR or working on AGI alignment past the first few months of real difficulty, without assigning odds in the vicinity of 10x what I started out assigning that the project would work. … But you can’t get numbers in the range of what I estimate to be something like 70% as the required threshold before people will carry on through bad times. “It might not work” is enough to force them to make a great effort to continue past that 30% failure probability. It’s not good decision theory but it seems to be how people actually work on group projects where they are not personally madly driven to accomplish the thing.

I expect this reluctance to work on projects with a large chance of failure is a widespread problem for individual self-improvement experiments.

2.

One piece of advice I got from my CFAR workshop was to try lots of things. Their reasoning involved the expectation that we’d repeat the things that worked, and forget the things that didn’t work.

I’ve been hesitant to apply this advice to things that feel unlikely to work, and I expect other people have similar reluctance.

The relevant kind of “things” are experiments that cost maybe 10 to 100 hours to try, that don’t risk much other than wasted time, and for which I should expect on the order of a 10% chance of noticeable long-term benefits.

Here are some examples of the kind of experiments I have in mind:

  • gratitude journal
  • morning pages
  • meditation
  • vitamin D supplements
  • folate supplements
  • a low-carb diet
  • the Plant Paradox diet
  • an anti-anxiety drug
  • ashwagandha
  • whole fruit coffee extract
  • piracetam
  • phenibut
  • modafinil
  • a circling workshop
  • Auditory Integration Training
  • various self-help books
  • yoga
  • a sensory deprivation chamber

I’ve cheated slightly, by being more likely to add something to this list if it worked for me than if it was a failure that I’d rather forget. So my success rate with these was around 50%.

The simple practice of forgetting about the failures and mostly repeating the successes is almost enough to cause the net value of these experiments to be positive. More importantly, I kept the costs of these experiments low, so the benefits of the top few outweighed the costs of the failures by a large factor.
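
To make that arithmetic concrete, here’s a minimal sketch of evaluating the experiments as one bundled strategy rather than one at a time. All of the numbers are illustrative assumptions – the cost and success chance roughly match what I described above, and the benefit figure is simply made up:

```python
# Bundled evaluation of self-improvement experiments.
# All numbers are assumptions for illustration, not measurements.
n = 20                # experiments tried over a few years
p = 0.10              # chance each yields a noticeable long-term benefit
cost_hours = 30       # typical cost per experiment
benefit_hours = 1000  # assumed value of one lasting success, in hour-equivalents

p_any_success = 1 - (1 - p) ** n          # chance the strategy pays off at all
expected_net = n * p * benefit_hours - n * cost_hours

print(f"P(at least one success): {p_any_success:.0%}")   # ~88%
print(f"Expected net value: {expected_net:+.0f} hours")  # +1400 hours
```

Evaluated one at a time, each experiment is 90% likely to fail; evaluated as a bundle, the strategy is very likely to produce at least one lasting success, and its expected value is comfortably positive.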

3.

I face a similar situation when I’m investing.

The probability that I’ll make any profit on a given investment is close to 50%, and the probability of beating the market on a given investment is lower. I don’t calculate actual numbers for that, because doing so would be more likely to bias me than to help me.

I would find it rather discouraging to evaluate each investment separately. Doing so would focus my attention on the fact that any individual result is indistinguishable from luck.

Instead, I focus my evaluations much more on bundles of hundreds of trades, often associated with a particular strategy. Aggregating evidence in that manner smooths out the good and bad luck to make my skill (or lack thereof) more conspicuous. I’m focusing in this post not on the logical interpretation of evidence, but on how the subconscious parts of my mind react. This mental bundling of tasks is particularly important for my subconscious impressions of whether I’m being productive.
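
As a rough illustration of why that aggregation helps (the edge here is a hypothetical assumption, not my actual results): with a 52% per-trade win rate, any single trade looks like a coin flip, but a bundle of a few hundred trades beats break-even most of the time.

```python
import random

random.seed(0)
EDGE = 0.52       # hypothetical per-trade probability of profit
N_TRADES = 300    # size of one evaluated bundle
N_BUNDLES = 1000  # number of simulated bundles

# Count how often a 300-trade bundle ends up above break-even.
beat_even = sum(
    sum(random.random() < EDGE for _ in range(N_TRADES)) > N_TRADES / 2
    for _ in range(N_BUNDLES)
)
print(f"Bundles beating break-even: {beat_even / N_BUNDLES:.0%}")  # roughly 75%
```

The per-trade signal is buried in noise; the per-bundle signal is conspicuous enough for my subconscious to learn from.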

I believe this is a well-known insight (possibly from poker?), but I can’t figure out where I’ve seen it described.

I’ve partly applied this approach to self-improvement tasks (not quite as explicitly as I ought to), and it has probably helped.

Time Biases

Book review: Time Biases: A Theory of Rational Planning and Personal Persistence, by Meghan Sullivan.

I was very unsure about whether this book would be worth reading, as it could easily have focused on complaints about behavior that experts have long known to be mistaken.

I was pleasantly surprised when it quickly got to some of the really hard questions, and was thoughtful about what questions deserved attention. I disagree with enough of Sullivan’s premises that I have significant disagreements with her conclusions. Yet her reasoning is usually good enough that I’m unsure what to make of our disagreements – they’re typically due to differences of intuition that she admits are controversial.

I had hoped for some discussion of ethics (e.g. what discount rate to use in evaluating climate change), whereas the book focuses purely on prudential rationality (i.e. what’s rational for a self-interested person). Still, the discussion of prudential rationality covers most of the issues that make the ethical choices hard.

Personal identity

A key issue is the nature of personal identity – does one’s identity change over time?


Descriptions of AI-relevant ontological crises typically choose examples where it seems moderately obvious how humans would want to resolve the crises. I describe here a scenario where I don’t know how I would want to resolve the crisis.

I will incidentally express distaste for some philosophical beliefs.

Suppose a powerful AI is programmed to have an ethical system with a version of the person-affecting view – a version which says that only persons who exist are morally relevant, and that “exist” refers only to the present time. [Note that the most sophisticated advocates of the person-affecting view are willing to treat future people as real, and only object to comparing those people to other possible futures in which those people don’t exist.]

Suppose also that it is programmed by someone who thinks in Newtonian models. Then something happens which prevents the programmer from correcting any flaws in the AI. (For simplicity, I’ll say the programmer dies, and that the AI was programmed to accept changes to its ethical system only from the programmer.)

What happens when the AI tries to make ethical decisions about people in distant galaxies (hereinafter “distant people”) using a model of the universe that works like relativity?
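
The crux, which this excerpt only gestures at, is the relativity of simultaneity: there is no frame-independent fact about which events at a distant galaxy are happening “now”. A back-of-the-envelope sketch (the velocity and distance below are arbitrary choices for illustration):

```python
# Relativity of simultaneity, back of the envelope. Under a Lorentz
# boost, events at distance x that are simultaneous in one frame are
# offset by roughly dt = v * x / c**2 in another frame.
C = 299_792_458.0   # speed of light, m/s
LY = 9.4607e15      # one light-year, in meters
YEAR = 3.156e7      # one year, in seconds

def now_shift_years(v_mps: float, distance_ly: float) -> float:
    """How far 'now' at the given distance shifts between frames, in years."""
    return v_mps * distance_ly * LY / C**2 / YEAR

# A walking-speed difference in velocity, for a galaxy a billion
# light-years away:
print(now_shift_years(1.5, 1e9))  # about 5 years
```

So which distant people exist “at the present time” depends on an arbitrary choice of reference frame – exactly the sort of concept that fails to carry over from the programmer’s Newtonian model.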


I wrote this post to try to clarify my thoughts about donating to the Longevity Research Institute (LRI).

Much of that thought involves asking: is there a better approach to cures for aging? Will a better aging-related charity be created soon?

I started to turn this post into an overview of all approaches to curing aging, but I saw that it would sidetrack me into doing too much research, so I’ve ended up skimping on some of them.

I’ve ordered the various approaches that I mention from most directly focused on the underlying causes of aging, to most focused on mitigating the symptoms.

I’ve been less careful than usual to distinguish my intuitions from solid research. I’m mainly trying here to summarize lots of information that I’ve accumulated over the years, and I’m not trying to do new research.

Book review: Artificial Intelligence Safety and Security, by Roman V. Yampolskiy.

This is a collection of papers, with highly varying topics, quality, and importance.

Many of the papers focus on risks that are specific to superintelligence, some assuming that a single AI will take over the world, and some assuming that there will be many AIs of roughly equal power. Others focus on problems that are associated with current AI programs.

I’ve tried to arrange my comments on individual papers in roughly descending order of how important the papers look for addressing the largest AI-related risks, while also sometimes putting similar topics in one group. The result feels a little more organized than the book, but I worry that the papers are too dissimilar to be usefully grouped. I’ve ignored some of the less important papers.

The book’s attempt at organizing the papers consists of dividing them into “Concerns of Luminaries” and “Responses of Scholars”. Alas, I see few signs that many of the authors are even aware of what the other authors have written, much less that the later papers are attempts at responding to the earlier papers. It looks like the papers are mainly arranged in order of when they were written. There’s a modest cluster of authors who agree enough with Bostrom to constitute a single scientific paradigm, but half the papers demonstrate about as much of a consensus on what topic they’re discussing as I would expect to get from asking medieval peasants about airplane safety.
