[Caveat: this post involves abstract theorizing whose relevance to practical advice is unclear. ]
What we call willpower mostly derives from conflicts between parts of our minds, often over what discount rate to use.
An additional source of willpower-like conflicts comes from social desirability biases.
I model the mind as having many mental sub-agents, each focused on a fairly narrow goal. Different goals produce different preferences for caring about the distant future versus caring only about the near future.
The sub-agents are typically about as smart and sophisticated as a three-year-old (probably with lots of variation). E.g. my hunger-minimizing sub-agent now accepts calorie-restriction days with few complaints, because I have a reliable pattern of respecting it the next day; it complained impatiently back when calorie-restriction days seemed abnormal.
We have beliefs about how safe we are from near-term dangers, often reflected in changes to the autonomic nervous system (causing relaxation or the fight-or-flight reflex). Those changes cause quick, crude shifts in something resembling a global discount rate. In addition, each sub-agent has some ability to demand that its goals be treated fairly.
We neglect sub-agents whose goals are most long-term when many sub-agents say their goals have been neglected, and/or when the autonomic nervous system says immediate problems deserve attention.
Our willpower is high when we feel safe and are satisfied with our progress at short-term goals.
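The model above can be sketched as a toy simulation. This is my illustration, not the author's formalism: the agent names, the grievance numbers, and the particular discount formula are all hypothetical. The idea it captures is that attention goes to the sub-agent with the strongest accumulated demand, and that perceived danger discounts long-horizon sub-agents more steeply.

```python
# Toy sketch (illustrative only): sub-agents compete for attention.
# Each has a planning 'horizon' (days until its goal pays off) and a
# 'neglect' score (accumulated grievance). Perceived danger in [0, 1]
# steepens the discount applied to long-horizon agents, so short-term
# goals crowd out long-term ones when we feel unsafe.

def pick_agent(agents, danger=0.0):
    """Return the sub-agent whose demand for attention is strongest."""
    def demand(agent):
        # Longer horizons are discounted; danger steepens the discount.
        discount = 1.0 / (1.0 + (0.1 + danger) * agent["horizon"])
        return agent["neglect"] * discount
    return max(agents, key=demand)

agents = [
    {"name": "hunger", "horizon": 0,    "neglect": 2},
    {"name": "status", "horizon": 30,   "neglect": 5},
    {"name": "career", "horizon": 2000, "neglect": 500},
]

# Feeling safe: the career agent's large backlog wins attention.
print(pick_agent(agents, danger=0.0)["name"])  # career
# Feeling threatened: immediate needs dominate.
print(pick_agent(agents, danger=0.9)["name"])  # hunger
```

Nothing here is meant to be realistic; it just makes concrete the claim that "willpower is high when we feel safe and short-term goals are satisfied" can fall out of a fairness-plus-discounting mechanism.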
The time-discounting effects are sometimes obscured by social signaling.
Writing a will hints at health problems, whereas doing something about global warming can signal wealth. We have sub-agents that steer us to signal health and wealth, but without doing so in a deliberate enough way that people see that we are signaling. That leads us to exaggerate how much of our failure to write a will is due to the time-discounting type of low willpower.
Video games convince parts of our minds that we’re gaining status (in a virtual society) and/or training to win status-related games in real life. That satisfies some sub-agents who care about status. (Video games deceive us about status effects, but that has limited relevance to this post.) Yet as with most play, we suppress awareness of the zero-sum competitions we’re aiming to win. So we get confused about whether we’re being short-sighted here, because we’re pursuing somewhat long-term benefits, probably deceiving ourselves somewhat about them, and pretending not to care about them.
Why do we feel an asymmetry in effects of neglecting distant goals versus neglecting immediate goals?
The fairness-to-sub-agents metaphor suggests that neglecting the distant future ought to produce emotional reactions comparable to what happens when we neglect the near future.
Neglecting the distant future does produce some discomfort that somewhat resembles willpower problems. If I spend lots of time watching TV, I end up feeling declining life-satisfaction, which tends to eventually cause me to pay more attention to long-term goals.
But the relevant emotions still don’t seem symmetrical.
One reason for asymmetry is that different goals imply different things for what constitutes neglecting a goal: neglecting sleep or food for a day implies something more unfair to the relevant sub-agents than does neglecting one’s career skills.
Another reason is that for both time-preference and social desirability conflicts, we have instincts that aren’t optimized for our current environment.
Our hunter-gatherer ancestors needed to devote most of their time to tasks that paid off within days, and didn’t know how to devote more than a few percent of their time to usefully preparing for events that were several years in the future. Our farmer ancestors needed to devote more time to 3-12 month planning horizons, but not much more than hunter-gatherers did. Today many of us can productively spend large fractions of our time on tasks (such as getting a college degree) that take more than 5 years to pay off. Social desirability biases show a (less clear) version of the same pattern.
That means we need to override our system 1 heuristics with system 2 analysis, which requires overriding the instinctive beliefs of some sub-agents about how much attention their goals deserve. The long-term goals we override to deal with hunger, by contrast, have less firmly established “rights” to fairness.
Also, there may be some fairness rules about how often system 2 can override system 1 agents – doing that too often may cause coalitions within system 1 to treat system 2 as a politician who has grabbed too much power. [Does this explain decision fatigue? I’m unsure.]
Other Models of Willpower
The depletion model
Willpower depletion captures a nontrivial effect of key sub-agents rebelling when their goals have been overlooked for too long.
But I’m confused – the depletion model doesn’t seem like it’s trying to be a complete model of willpower. In particular, it either isn’t trying to explain the evolutionary sources of willpower problems, or is trying to explain them via the clearly inadequate claim that willpower is a simple function of current blood glucose levels.
It would be fine if the depletion model were just a heuristic that helped us develop more willpower. But if anything it seems more likely to reduce willpower.
Kurzban’s opportunity costs model
Kurzban et al. have a model involving the opportunity costs of using cognitive resources for a given task.
It seems more realistic than most models I’ve seen. It describes some important mental phenomena more clearly than I can, but doesn’t quite seem to be about willpower. In particular, it seems uninformative about differing time horizons. Also, it focuses on cognitive resource constraints, whereas I’d expect some non-cognitive resource constraints to be equally important.
Ainslie’s Breakdown of Will
George Ainslie wrote a lot about willpower, describing it as intertemporal bargaining, with hyperbolic discounting. I read that book 6 years ago, but don’t remember it very clearly, and I don’t recall how much it influenced my current beliefs. I think my model looks a good deal like what I’d get if I had set out to combine the best parts of Ainslie’s ideas and Kurzban’s ideas, but I wrote 90% of this post before remembering that Ainslie’s book was relevant.
Ainslie apparently wrote his book before it became popular to generate simple models of willpower, so he didn’t put much thought into comparing his views to others.
Hyperbolic discounting seems to be a real phenomenon that would be sufficient to cause willpower-like conflicts. But I’m unclear on why it should be a prominent part of a willpower model.
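For readers unfamiliar with the term: hyperbolic discounting values a reward of size A delayed by D as roughly A / (1 + kD), and unlike exponential discounting it produces preference reversals as a reward gets close. The dollar amounts and k value below are my own illustrative choices, not anything from Ainslie.

```python
# Illustrative sketch of hyperbolic discounting: V = A / (1 + k*D).
# The reversal below is the willpower-like conflict: from a distance
# we prefer the larger, later reward; up close we grab the smaller,
# sooner one.

def hyperbolic(amount, delay, k=1.0):
    """Present value of `amount` received after `delay` days."""
    return amount / (1 + k * delay)

# Up close: $50 now vs $100 in 5 days.
early_now = hyperbolic(50, 0)    # 50.0
late_now  = hyperbolic(100, 5)   # ~16.7 -> the early reward wins

# The same pair viewed 30 days in advance: $50 at day 30 vs $100 at day 35.
early_far = hyperbolic(50, 30)   # ~1.61
late_far  = hyperbolic(100, 35)  # ~2.78 -> the preference reverses
```

Exponential discounting (V = A · d^D) can never reverse like this, since shifting both rewards by the same delay multiplies both values by the same factor.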
This “model” isn’t designed to say much beyond pointing out that willpower doesn’t reliably get depleted.
The hot/cool-system model sounds like an attempt to generalize the effects of the autonomic nervous system to explain all of willpower. I haven’t found it very informative.
Some say that willpower works like a muscle, in that using it strengthens it.
My model implies that we should expect this result when preparing for the longer-term future causes our future self to be safer and/or to more easily satisfy near-term goals.
I expect this effect to be somewhat observable when willpower is used to save money, because having more money makes us feel safer and better able to satisfy our goals.
I expect this effect to be mostly absent after using willpower to lose weight or to write a will, since those produce benefits that are less intuitive and less observable.
Why do drugs affect willpower?
Scott at SlateStarCodex asks why drugs have important effects on willpower.
Many drugs affect the autonomic nervous system, thereby influencing our time preferences. I’d certainly expect that drugs which reduce anxiety will enable us to give higher priority to far future goals.
I expect stimulants to make us feel less concerned about depleting our available calories, and less concerned about our need for sleep, thereby satisfying a few short-term sub-agents. I expect this to cause small increases in willpower.
But this is probably incomplete. I suspect the effect of SSRIs on willpower varies quite widely between people. I suspect that’s due to an anti-anxiety effect which increases willpower, plus an anti-obsession effect which reduces willpower in a way that my model doesn’t explain.
And Scott implies that some drugs have larger effects on willpower than I can explain.
My model implies that placebos can be mildly effective at increasing willpower, by convincing some short-sighted sub-agents that resources are being applied toward their goals. A quick search suggests this prediction has been poorly studied so far, with one low-quality study confirming this.
I’m more puzzled than usual about whether these ideas are valuable. Is this model profound, or too obvious to matter?
I presume part of the answer is that people who care about improving willpower care less about theory, and focus on creating heuristics that are easy to apply.
CFAR does a decent job of helping people develop more willpower, not by explaining a clear theory of what willpower is, but by focusing more on how to resolve conflicts between sub-agents.
And I recommend that most people start with practical advice, such as the advice in The Willpower Instinct, and worry about theory later.