willpower

The point of this blog post feels almost too obvious to be worth saying, yet I doubt that it’s widely followed.

People often avoid doing projects that have a low probability of success, even when the expected value is high. To counter this bias, I recommend that you mentally combine many such projects into a strategy of trying new things, and evaluate the strategy’s probability of success.
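To make the arithmetic concrete, here's a minimal sketch (the 10%-per-project figure is an illustrative assumption, not a claim from this post) of how quickly the probability of at least one success grows when you evaluate a whole bundle of long shots rather than each project alone:

```python
# Probability that at least one of n independent projects succeeds,
# when each has success probability p: 1 - (1 - p)**n.

def prob_at_least_one_success(p: float, n: int) -> float:
    return 1 - (1 - p) ** n

# With an (assumed) 10% chance per project, ten attempts give
# roughly a 65% chance that at least one pays off.
print(round(prob_at_least_one_success(0.10, 10), 3))  # -> 0.651
```

Evaluated as a bundle, a strategy of ten 10% long shots is more likely than not to produce at least one success, even though each project looks discouraging on its own.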

1.

Eliezer says in On Doing the Improbable:

I’ve noticed that, by my standards and on an Eliezeromorphic metric, most people seem to require catastrophically high levels of faith in what they’re doing in order to stick to it. By this I mean that they would not have stuck to writing the Sequences or HPMOR or working on AGI alignment past the first few months of real difficulty, without assigning odds in the vicinity of 10x what I started out assigning that the project would work. … But you can’t get numbers in the range of what I estimate to be something like 70% as the required threshold before people will carry on through bad times. “It might not work” is enough to force them to make a great effort to continue past that 30% failure probability. It’s not good decision theory but it seems to be how people actually work on group projects where they are not personally madly driven to accomplish the thing.

I expect this reluctance to work on projects with a large chance of failure is a widespread problem for individual self-improvement experiments.

2.

One piece of advice I got from my CFAR workshop was to try lots of things. Their reasoning involved the expectation that we’d repeat the things that worked, and forget the things that didn’t work.

I’ve been hesitant to apply this advice to things that feel unlikely to work, and I expect other people have similar reluctance.

The relevant kind of “things” are experiments that cost maybe 10 to 100 hours to try, which don’t risk much other than wasting time, and for which I should expect on the order of a 10% chance of noticeable long-term benefits.
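Taking those numbers roughly at face value, a quick expected-value sketch (the value assigned to a success is an illustrative assumption, not a figure from this post):

```python
# Expected value of one self-improvement experiment, in hours.
# The cost and success probability come from the text above
# (10-100 hours, ~10% chance); the benefit figure is assumed.

cost_hours = 50        # midpoint of the 10-100 hour range
p_success = 0.10       # ~10% chance of noticeable long-term benefits
benefit_hours = 1000   # assumed value of a lasting improvement

expected_value = p_success * benefit_hours - cost_hours
print(expected_value)  # -> 50.0
```

Under these assumptions each experiment is worth trying despite the 90% failure rate, and the conclusion is fairly robust: the expected value stays positive as long as a success is worth more than about ten times the cost of one attempt.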

Here are some examples of the kind of experiments I have in mind:

  • gratitude journal
  • morning pages
  • meditation
  • vitamin D supplements
  • folate supplements
  • a low carb diet
  • the Plant Paradox diet
  • an anti-anxiety drug
  • ashwagandha
  • whole fruit coffee extract
  • piracetam
  • phenibut
  • modafinil
  • a circling workshop
  • Auditory Integration Training
  • various self-help books
  • yoga
  • sensory deprivation chamber

I’ve cheated slightly, by being more likely to add something to this list if it worked for me than if it was a failure that I’d rather forget. So my success rate with these was around 50%.

The simple practice of forgetting about the failures and mostly repeating the successes is almost enough to cause the net value of these experiments to be positive. More importantly, I kept the costs of these experiments low, so the benefits of the top few outweighed the costs of the failures by a large factor.

3.

I face a similar situation when I’m investing.

The probability that I’ll make any profit on a given investment is close to 50%, and the probability of beating the market on a given investment is lower. I don’t calculate actual numbers for that, because doing so would be more likely to bias me than to help me.

I would find it rather discouraging to evaluate each investment separately. Doing so would focus my attention on the fact that any individual result is indistinguishable from luck.

Instead, I focus my evaluations much more on bundles of hundreds of trades, often associated with a particular strategy. Aggregating evidence in that manner smooths out the good and bad luck to make my skill (or lack thereof) more conspicuous. I’m focusing in this post not on the logical interpretation of evidence, but on how the subconscious parts of my mind react. This mental bundling of tasks is particularly important for my subconscious impressions of whether I’m being productive.
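A small calculation shows why the bundling helps: the standard error of a win rate shrinks with the square root of the number of trades, so an edge that is invisible in any single trade becomes conspicuous over a few hundred. The 55% win rate below is an illustrative assumption:

```python
import math

# How distinguishable is a modest edge from luck, as a function of how
# many trades get bundled together? Assume (illustratively) a trader
# who wins 55% of trades, versus a no-skill baseline of 50%.

def z_score(win_rate: float, n_trades: int, baseline: float = 0.5) -> float:
    """Standard deviations separating the observed win rate from luck."""
    std_err = math.sqrt(baseline * (1 - baseline) / n_trades)
    return (win_rate - baseline) / std_err

print(round(z_score(0.55, 1), 2))    # -> 0.1 : one trade says nothing
print(round(z_score(0.55, 400), 2))  # -> 2.0 : 400 trades reveal the edge
```

One trade leaves the edge buried a tenth of a standard deviation above chance; a bundle of 400 pushes it out to two standard deviations, which is roughly where skill starts to look distinguishable from luck.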

I believe this is a well-known insight (possibly from poker?), but I can’t figure out where I’ve seen it described.

I’ve partly applied this approach to self-improvement tasks (not quite as explicitly as I ought to), and it has probably helped.

[Warning: this post contains lots of guesses based on weak evidence. I’d be surprised if I got more than 80% of it right.]

I’ve long acted as if a good diet is fairly important, and I’ve gathered lots of relevant evidence. But until recently I classified that evidence into many small topics related to specific nutrients and health problems, and never organized those ideas into an overall assessment of how important a good diet is.

Comments by Jim Babcock prompted me to investigate a broader overview.

This post will mainly focus on evaluating the importance of nutrition for adults in wealthy nations, then will summarize my guesses about how to achieve a good diet.


Book review: The Hungry Brain: Outsmarting the Instincts That Make Us Overeat, by Stephan Guyenet.

Researchers who studied obesity in rats used to have trouble coaxing their rats to overeat. The obvious approaches (a high fat diet, or a high sugar diet) were annoyingly slow. Then they stumbled on the approach of feeding human junk food to the rats, and made much faster progress.

What makes something “junk food”? The best parts of this book help to answer this, although some ambiguity remains. It mostly boils down to palatability (is it yummier than what our ancestors evolved to expect? If so, it’s somewhat addictive) and caloric density.

Presumably designers of popular snack foods have more sophisticated explanations of what makes people obese, since that’s apparently identical to what they’re paid to optimize (with maybe a few exceptions, such as snacks that are marketed as healthy or ethical). Yet researchers who officially study obesity seem reluctant to learn from snack food experts. (Because they’re the enemy? Because they’re low status? Because they work for evil corporations? Your guess is likely as good as mine.)

Guyenet provides fairly convincing evidence that it’s simple to achieve a healthy weight while feeling full (e.g. the 20-potatoes-a-day diet). To the extent that we need willpower, it’s to avoid buying convenient/addictive food, and to avoid restaurants.

My experience is that I need a moderate amount of willpower to follow Guyenet’s diet ideas, and that it would require a large amount of willpower if I attended many social events involving food. But for full control over my weight, it seemed like I needed to supplement a decent diet with some form of intermittent fasting (e.g. alternate-day calorie restriction); Guyenet says little about that.

Guyenet’s practical advice boils down to a few simple rules: eat whole foods that resemble what our ancestors ate; don’t have other “food” anywhere that you can quickly grab it; sleep well; exercise; avoid stress. That’s sufficiently similar to advice I’ve heard before that I’m confident The Hungry Brain won’t revolutionize many people’s understanding of obesity. But it’s got a pretty good ratio of wisdom to questionable advice, and I’m unaware of reasons to expect much more than that.

Guyenet talks a lot about neuroscience. That would make sense if readers wanted to learn how to fix obesity via brain surgery. The book suggests that, in the absence of ethical constraints, it might be relatively easy to cure obesity by brain surgery. Yet I doubt such a solution would become popular, even given optimistic assumptions about safety.

An alternate explanation is that Guyenet is showing off his knowledge of brains, in order to show that he’s smart enough to have trustworthy beliefs about diets. But that effect is likely small, due to competition among diet-mongers for comparable displays of smartness.

Or maybe he’s trying to combat dualism, in order to ridicule the “just use willpower” approach to diet? Whatever the reason is, the focus on neuroscience implies something unimpressive about the target audience.

You should read this book if you eat a fairly healthy diet but are still overweight. Otherwise, read Guyenet’s blog instead, for a wider variety of health advice.

[Caveat: this post involves abstract theorizing whose relevance to practical advice is unclear. ]

What we call willpower mostly derives from conflicts between parts of our minds, often over what discount rate to use.

An additional source of willpower-like conflicts comes from social desirability biases.

I model the mind as having many mental sub-agents, each focused on a fairly narrow goal. Different goals produce different preferences for caring about the distant future versus caring only about the near future.

The sub-agents typically are about as smart and sophisticated as a three-year-old (probably with lots of variation). E.g. my hunger-minimizing sub-agent is willing to accept calorie restriction days with few complaints now that I have a reliable pattern of respecting it the next day, but complained impatiently when calorie restriction days seemed abnormal.

We have beliefs about how safe we are from near-term dangers, often reflected in changes to the autonomic nervous system (causing relaxation or the fight-or-flight reflex). Those changes cause quick, crude shifts in something resembling a global discount rate. In addition, each sub-agent has some ability to demand that its goals be treated fairly.

We neglect sub-agents whose goals are most long-term when many sub-agents say their goals have been neglected, and/or when the autonomic nervous system says immediate problems deserve attention.

Our willpower is high when we feel safe and are satisfied with our progress at short-term goals.

Social status

The time-discounting effects are sometimes obscured by social signaling.

Writing a will hints at health problems, whereas doing something about global warming can signal wealth. We have sub-agents that steer us to signal health and wealth, but without doing so in a deliberate enough way that people see that we are signaling. That leads us to exaggerate how much of our failure to write a will is due to the time-discounting type of low willpower.

Video games convince parts of our minds that we’re gaining status (in a virtual society) and/or training to win status-related games in real life. That satisfies some sub-agents who care about status. (Video games deceive us about status effects, but that has limited relevance to this post.) Yet as with most play, we suppress awareness of the zero-sum competitions we’re aiming to win. So we get confused about whether we’re being short-sighted here, because we’re pursuing somewhat long-term benefits, probably deceiving ourselves somewhat about them, and pretending not to care about them.

Time asymmetry?

Why do we feel an asymmetry in effects of neglecting distant goals versus neglecting immediate goals?

The fairness to sub-agents metaphor suggests that neglecting the distant future ought to produce emotional reactions comparable to what happens when we neglect the near future.

Neglecting the distant future does produce some discomfort that somewhat resembles willpower problems. If I spend lots of time watching TV, I end up feeling declining life-satisfaction, which tends to eventually cause me to pay more attention to long-term goals.

But the relevant emotions still don’t seem symmetrical.

One reason for asymmetry is that different goals imply different things for what constitutes neglecting a goal: neglecting sleep or food for a day implies something more unfair to the relevant sub-agents than does neglecting one’s career skills.

Another reason is that for both time-preference and social desirability conflicts, we have instincts that aren’t optimized for our current environment.

Our hunter-gatherer ancestors needed to devote most of their time to tasks that paid off within days, and didn’t know how to devote more than a few percent of their time to usefully preparing for events that were several years in the future. Our farmer ancestors needed to devote more time to 3-12 month planning horizons, but not much more than hunter-gatherers did. Today many of us can productively spend large fractions of our time on tasks (such as getting a college degree) that take more than 5 years to pay off. Social desirability biases show (less clear) versions of that same pattern.

That means we need to override our system 1 level heuristics with system 2 level analysis. That requires overriding the instinctive beliefs of some sub-agents about how much attention their goals deserve. The long-term goals we override to deal with hunger, by contrast, have less firmly established “rights” to fairness.

Also, there may be some fairness rules about how often system 2 can override system 1 agents – doing that too often may cause coalitions within system 1 to treat system 2 as a politician who has grabbed too much power. [Does this explain decision fatigue? I’m unsure.]

Other Models of Willpower

The depletion model

Willpower depletion captures a nontrivial effect of key sub-agents rebelling when their goals have been overlooked for too long.

But I’m confused – the depletion model doesn’t seem like it’s trying to be a complete model of willpower. In particular, it either isn’t trying to explain the evolutionary sources of willpower problems, or is trying to explain them via the clearly inadequate claim that willpower is a simple function of current blood glucose levels.

It would be fine if the depletion model were just a heuristic that helped us develop more willpower. But if anything it seems more likely to reduce willpower.

Kurzban’s opportunity costs model

Kurzban et al. have a model involving the opportunity costs of using cognitive resources for a given task.

It seems more realistic than most models I’ve seen. It describes some important mental phenomena more clearly than I can, but doesn’t quite seem to be about willpower. In particular, it seems uninformative about differing time horizons. Also, it focuses on cognitive resource constraints, whereas I’d expect some non-cognitive resource constraints to be equally important.

Ainslie’s Breakdown of Will

George Ainslie wrote a lot about willpower, describing it as intertemporal bargaining, with hyperbolic discounting. I read that book 6 years ago, but don’t remember it very clearly, and I don’t recall how much it influenced my current beliefs. I think my model looks a good deal like what I’d get if I had set out to combine the best parts of Ainslie’s ideas and Kurzban’s ideas, but I wrote 90% of this post before remembering that Ainslie’s book was relevant.

Ainslie apparently wrote his book before it became popular to generate simple models of willpower, so he didn’t put much thought into comparing his views to others.

Hyperbolic discounting seems to be a real phenomenon that would be sufficient to cause willpower-like conflicts. But I’m unclear on why it should be a prominent part of a willpower model.

Distractible

This “model” isn’t designed to say much beyond pointing out that willpower doesn’t reliably get depleted.

Hot/cool

A Hot/cool-system model sounds like an attempt to generalize the effects of the autonomic nervous system to explain all of willpower. I haven’t found it to be very informative.

Muscle

Some say that willpower works like a muscle, in that using it strengthens it.

My model implies that we should expect this result when preparing for the longer-term future causes our future self to be safer and/or to more easily satisfy near-term goals.

I expect this effect to be somewhat observable with using willpower to save money, because having more money makes us feel safer and better able to satisfy our goals.

I expect this effect to be mostly absent after using willpower to lose weight or to write a will, since those produce benefits that are less intuitive and less observable.

Why do drugs affect willpower?

Scott at SlateStarCodex asks why drugs have important effects on willpower.

Many drugs affect the autonomic nervous system, thereby influencing our time preferences. I’d certainly expect that drugs which reduce anxiety will enable us to give higher priority to far future goals.

I expect stimulants make us feel less concern about depleting our available calories, and less concern about our need for sleep, thereby satisfying a few short-term sub-agents. I expect this to cause small increases in willpower.

But this is probably incomplete. I suspect the effect of SSRIs on willpower varies quite widely between people. I suspect that’s due to an anti-anxiety effect which increases willpower, plus an anti-obsession effect which reduces willpower in a way that my model doesn’t explain.

And Scott implies that some drugs have larger effects on willpower than I can explain.

My model implies that placebos can be mildly effective at increasing willpower, by convincing some short-sighted sub-agents that resources are being applied toward their goals. A quick search suggests this prediction has been poorly studied so far, with one low-quality study confirming this.

Conclusion

I’m more puzzled than usual about whether these ideas are valuable. Is this model profound, or too obvious to matter?

I presume part of the answer is that people who care about improving willpower care less about theory, and focus on creating heuristics that are easy to apply.

CFAR does a decent job of helping people develop more willpower, not by explaining a clear theory of what willpower is, but by focusing more on how to resolve conflicts between sub-agents.

And I recommend that most people start with practical advice, such as the advice in The Willpower Instinct, and worry about theory later.

I started writing morning pages a few months ago. That means writing three pages, on paper, before doing anything else [1].

I’ve only been doing this on weekends and holidays, because on weekdays I feel a need to do some stock market work close to when the market opens.

It typically takes me one hour to write three pages. At first, it felt like I needed 75 minutes but wanted to finish faster. After a few weeks, it felt like I could finish in about 50 minutes when I was in a hurry, but often preferred to take more than an hour.

That suggests I’m doing much less stream-of-consciousness writing than is typical for morning pages. It’s unclear whether that matters.

It feels like devoting an hour per day to morning pages ought to be costly. Yet I never observed it crowding out anything I valued (except maybe once or twice when I woke up before getting an optimal amount of sleep in order to get to a hike on time – that was due to scheduling problems, not to morning pages reducing the available time per day).

Why do people knowingly follow bad investment strategies?

I won’t ask (in this post) about why people hold foolish beliefs about investment strategies. I’ll focus on people who intend to follow a decent strategy, and fail. I’ll illustrate this with a stereotype from a behavioral economist (Procrastination in Preparing for Retirement):[1]

For instance, one of the authors has kept an average of over $20,000 in his checking account over the last 10 years, despite earning an average of less than 1% interest on this account and having easy access to very liquid alternative investments earning much more.

A more mundane example is a person who holds most of their wealth in stock of a single company, for reasons of historical accident (they acquired it via employee stock options or inheritance), but admits to preferring a more diversified portfolio.

An example from my life is that, until this year, I often borrowed money from Schwab to buy stock, when I could have borrowed at lower rates in my Interactive Brokers account to do the same thing. (Partly due to habits that I developed while carelessly unaware of the difference in rates; partly due to a number of trivial inconveniences).

Behavioral economists are somewhat correct to attribute such mistakes to questionable time discounting. But I see more patterns than such a model can explain: e.g. people procrastinate more over some decisions, such as whether to make a “boring” trade, than over others, such as whether to read news about investments.[2]

Instead, I use CFAR-style models that focus on conflicting motives of different agents within our minds.


I use Beeminder occasionally. The site’s emails normally suffice to bug me into accomplishing whatever I’ve committed to doing. But I only use it for a few tasks for which my motivation is marginal. Most of the times that I consider using Beeminder, I either figure out how to motivate myself properly, or (more often) decide that my goal isn’t important.

The real value of Beeminder is that if I want to compel future-me to do something, I can’t give up by using the excuse that future-me is lazy or unreliable. Instead, I find myself wondering why I’m unwilling to risk $X to make myself likely to complete the task. That typically causes me to notice legitimate doubts about how highly I value the result.

My alternate day calorie restriction diet is going well. My body and/or habits are adapting. But the visible benefits are still small.

  • I normally do three restricted days per week (very rarely only two). I eat 800-1000 calories on those days (or 1200-1400 when I burn more than 1000 calories by hiking). On unrestricted days, I try to eat a little more than feels natural.
  • I have an improved ability to bring my weight to a particular target, but the range of weights that feel good is much narrower than I expected. My weight has stabilized to a range of 142-145 pounds, compared to 145-148 last year and an erratic 138-148 in the first few weeks of my new diet. If I reduce my weight below 142, I feel irritable in the afternoon or evening of a restricted day. At 145, I’m on the verge of that too-full feeling that was common in prior years.
  • My resting heart rate has declined from about 70 to about 65.
  • For many years I’ve been waking in the middle of the night feeling too warm, with little apparent pattern. A byproduct of my new diet is that I’ve noticed it’s connected to having eaten protein.
  • I’m using less willpower now than in prior years to eat the right amount. My understanding of the willpower effect is influenced by CFAR’s attitude, which is that occasionally using willpower to fight the goals of one of my mind’s sub-agents is reasonable, but the longer I continue it, the more power and thought that sub-agent will devote to accomplishing its goals. My sub-agent in charge of getting me to eat lots to prepare for a famine can now rely on me, if I’m resisting it today, to encourage it tomorrow; whereas in prior years I was continually pressuring it to do less than it wanted. That makes it more cooperative.

The only drawbacks are the increased attention I need to pay to what I eat on restricted days, and the difficulties of eating out on restricted days (due to my need to control portion sizes and to time my main meals near the middle of the day). I find it fairly easy to schedule my restricted days so that I’m almost always eating at home, but I expect many people to find that hard.

Alternate day calorie restriction seems to be one of the most effective ways of increasing my life expectancy, but it isn’t easy. I tried it about three years ago, but gave up because it interfered with my sleep. I started it again three weeks ago, and this time I seem to be adjusting to it.

One important difference is that this time I’m better informed about what it takes to adjust to the diet. I planned a strict induction phase of 7 down days (about 550 calories) and 7 up days (unlimited food), followed by a less strict pattern of 2-3 days per week on which I’m limited to around 1000 calories a day. (I ended up adding an extra up day after each of the first two down days, then switched to strict alternation for the remainder of the induction phase). The severity of the induction phase may be important at triggering adaptation to this kind of diet.

The second difference is that this time I’ve been obsessive about measuring my food intake to the nearest gram. I suspect that when I intended to eat 1200 calories a day in my prior attempt, I was actually getting at least 1400 calories and fooling myself into thinking I was following the diet. This time I’m using a good scale to weigh each serving.

After the first down day, I slept poorly (as expected), getting impatient for sunrise to bring me an excuse to get up for food. Since about the fourth down day, waking with an empty stomach has seemed normal enough that it doesn’t provide a motive to get out of bed, or to get food quickly when I do get out of bed. I hardly notice the feelings of hunger then, even though I ought to be hungrier than late in the previous day, when I did notice some of the standard hunger feelings. My sleep isn’t quite back to normal, but it seems close to normal and improving.

I’ve been feeling full about 50% of the time. I felt noticeably hungry about 30% of the time at first, and now it’s more like 20% of the time. Hunger feels a bit less important now than it used to feel (i.e. it affects my attention less).

Weight loss wasn’t an important motive for changing my diet, but I hoped I would lose about 7 pounds. I lost at least 5 pounds by the end of the 5th down day (my weight fluctuated enough that it’s hard to evaluate it precisely). I couldn’t comfortably eat enough on the up days to make up for what I lost on the 550 calorie days, even when I became mildly alarmed at my rate of weight loss.

Then my weight rebounded within a few days, without any apparent change in my diet, to roughly what it was at the start. The obvious guess is that my metabolism slowed down to compensate for the reduced calories. I did feel noticeably colder in bed after down days. I also felt less mental energy, and when doing an easy hike on the day after the 6th down day I felt a need to take rest breaks that was unusual in that it wasn’t caused by anything like muscle fatigue.

During the induction phase, I practiced strict protein fasting (< 15 grams of protein per day) on down days, due to guesses that protein restriction is more effective at causing beneficial metabolic changes, which might cause faster psychological adaptation. My results seem to provide weak evidence in support of this guess. My diet on down days was mostly sweet potato and lettuce, with modest amounts of other vegetables and sugar-free chocolate. This provided more bulk to fill my gut than is typical for this kind of diet, but that was likely offset by the lack of protein-related satiety. I’m not restricting protein now that I’m out of the induction phase (although I expect to do so maybe once a month).

My heart rate variability mysteriously increased after the first down day, then declined to a much lower than average level after the fourth down day, and has fluctuated a lot since then (averaging somewhat below normal).

Why did I have enough willpower to get this far, when I probably didn’t have the willpower needed to do it right three years ago?

One factor is that I now consider the CFAR community to be an important tribe to belong to, so my sense of self-identity has changed to attach more importance to being able to make big changes to my life.

Another factor is having information that led me to be somewhat confident that by a specific, not too distant, date it would become a good deal easier.

A third factor is being more obsessive about measuring how well I was complying with the rules I set down.

The induction phase cost me a fair amount of productivity. For 17 days I wasn’t close to having enough willpower/ambition to start writing a blog post (and had similar problems with most other non-routine tasks). But now I feel that writing this post is easier than normal. It’s too early to tell whether that means I have more mental energy than before.

I don’t know how to get strong evidence about whether it is worth the effort. I seem to feel more self-efficacy. I now think I can set my weight to any reasonable target simply by changing my calorie target on 2 or 3 down days per week. But in order to be clearly worthwhile it needs to improve my long-term health. I won’t know that for quite a while.

Book review: The Willpower Instinct: How Self-Control Works, Why It Matters, and What You Can Do To Get More of It, by Kelly McGonigal.

This book starts out seeming to belabor ideas that seem obvious to me, but before too long it offers counterintuitive approaches that I ought to try.

The approach that I find hardest to reconcile with my intuition is that self-forgiveness over giving into temptations helps increase willpower, while feeling guilt or shame about having failed reduces willpower, so what seems like an incentive to avoid temptation is likely to reduce our ability to resist the temptation.

Another important but counterintuitive claim is that trying to suppress thoughts about a temptation (e.g. candy) makes it harder to resist the temptation. Whereas accepting that part of my mind wants candy (while remembering that I ought to follow a rule of eating less candy) makes it easier for me to resist the candy.

A careless author could have failed to convince me this is plausible. But McGonigal points out the similarities to trying to follow an instruction to not think of white bears – how could I suppress thoughts of white bears if some part of my mind didn’t activate a concept of white bears to monitor my compliance with the instruction? Can I think of candy without attracting the attention of the candy-liking parts of my mind?

As a result of reading the book, I have started paying attention to whether the pleasure I feel when playing computer games lives up to the anticipation I feel when I’m tempted to start one. I haven’t been surprised to observe that I sometimes feel no pleasure after starting the game. But it now seems easier to remember those times of pleasureless playing, and I expect that is weakening my anticipation of rewards.