In this post, I’ll describe features of the moral system that I use. I expect that it’s similar enough to Robin Hanson’s views that I’ll use his name for it, dealism, but I haven’t seen a well-organized description of dealism. (See a partial description here.)
It’s also pretty similar to the system that Drescher described in Good and Real, combined with Anna Salamon’s description of causal models for Newcomb’s problem (which describes how to replace Drescher’s confused notion of “subjunctive relations” with a causal model). Good and Real eloquently describes why people should want to follow a dealist-like moral system; my post will be easier to understand if you understand Good and Real.
The most similar mainstream system is contractarianism. Dealism applies to a broader set of agents, and depends less on the initial conditions. I haven’t read enough about contractarianism to decide whether dealism is a special type of contractarianism or whether it should be classified as something separate. Gauthier’s writings look possibly relevant, but I haven’t found time to read them.
Scott Aaronson’s eigenmorality also overlaps a good deal with dealism, and is maybe a bit easier to understand.
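To give a flavor of the eigenvector idea behind eigenmorality, here is a toy sketch: an agent’s score is high to the extent that it cooperates with other high-scoring agents, computed by power iteration. The cooperation matrix is made up purely for illustration, not taken from Aaronson’s post.

```python
import numpy as np

# Toy cooperation matrix (made up for illustration):
# C[i][j] = 1 means agent i cooperates with agent j.
C = np.array([
    [0, 1, 1, 0],   # agent 0 cooperates with agents 1 and 2
    [1, 0, 1, 0],   # agent 1 cooperates with agents 0 and 2
    [1, 1, 0, 0],   # agent 2 cooperates with agents 0 and 1
    [0, 0, 0, 0],   # agent 3 cooperates with no one
], dtype=float)

# Power iteration: repeatedly set each agent's score to the sum of
# the scores of the agents it cooperates with, then normalize.
scores = np.ones(4)
for _ in range(100):
    scores = C @ scores
    scores /= scores.sum()

print(scores)  # agents 0-2 split the weight evenly; agent 3 scores 0
```

The circular-looking definition (“moral agents are those who cooperate with moral agents”) is resolved the same way PageRank resolves “important pages are linked from important pages”: as a fixed point of the iteration.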
Under dealism, morality consists of rules / agreements / deals, especially those that can be universalized. We become more civilized as we coordinate better to produce more cooperative deals. I’m being somewhat ambiguous about what “deal” and “universalized” mean, but those ambiguities don’t seem important to the major disagreements over moral systems, and I want to focus in this post on high-level disagreements.
I want to emphasize that I’m using a broad category of deals, not just explicit contracts. These may include:
- agreements between humans and cats (the cat’s side being roughly: I’ll express appreciation/affection if you pet me)
- superrationality, which I consider a fairly typical form of a morally important deal
- the confusingly-named acausal trade (but I don’t want to argue for that here)
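The superrationality item can be made concrete with a toy one-shot prisoner’s dilemma. The payoff numbers below are standard illustrative values, not from any source; the point is only the contrast between treating the other player’s move as fixed and recognizing that an identical agent will reach the same conclusion you do.

```python
# One-shot prisoner's dilemma payoffs to the row player,
# with the usual ordering T > R > P > S (values are illustrative).
PAYOFFS = {                 # (my move, your move) -> my payoff
    ("C", "C"): 3,          # R: mutual cooperation
    ("C", "D"): 0,          # S: sucker's payoff
    ("D", "C"): 5,          # T: temptation to defect
    ("D", "D"): 1,          # P: mutual defection
}

def classical_best_reply(their_move):
    """Treat the opponent's move as fixed; defection dominates."""
    return max("CD", key=lambda m: PAYOFFS[(m, their_move)])

def superrational_choice():
    """A superrational agent facing an identical agent reasons that
    both will output the same move, so only the symmetric outcomes
    (C, C) and (D, D) are achievable; it picks the better one."""
    return max("CD", key=lambda m: PAYOFFS[(m, m)])

print(classical_best_reply("C"))  # "D": defect whatever the other does
print(superrational_choice())     # "C": mutual cooperation beats mutual defection
```

This is the sense in which superrationality works like a deal: neither agent signs anything, but the symmetry of their decision procedures does the work an explicit contract would do.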
Most attempts at morality have either a good answer for why morality is good for the world, or why an individual ought to follow it, but not both. [citation needed?]
Dealism seems like the best available attempt to reconcile those desires. Broad deals don’t get adopted unless they’re mostly good. And people (with a few exceptions, such as despots) have incentives to agree to increasingly universal rules.
I expect that most people underestimate our ability to make agreements that approximate “I’ll act responsibly when nobody’s looking if you’ll also do that”. That is, we agree to alter the algorithms we’re running, at a level fairly close to our utility function, so that we intend to act more morally. This seems pretty different from how our formal contracts work, but moderately close to how normal social pressure works.
Note that dealism attaches less importance than most moral systems to the threshold between moral rules and other rules. Such thresholds may be useful if they cause people to take important rules more seriously, but otherwise the distinction seems fairly arbitrary. Dealism may be a bit less extreme in this regard than utilitarianism, but not by much.
Dealism implies that we have weak pressures to expand our notion of which agents qualify as relevant to morality. We have important interactions with members of our tribe, which makes it important to consider their interests. We have limited cooperative interactions with animals and with people who will be born millions of years in the future, so we only have tiny incentives to include them in our moral sphere.
Most other moral systems sound strange when dealing with those topics. Some use ad hoc rules to effectively exclude most animals and most of the far future from their moral sphere. The moral systems that take the distant future and animal welfare seriously produce weird results that people are reluctant to follow.
Under dealism, it shouldn’t be very surprising that many people are upset by cruelty to cats (since many have cooperative interactions with cats), but substantially fewer care about cruelty to pigs (since typical human-pig interactions are much less cooperative).
Utilitarianism is the stereotypical example of a moral system that I’d like distant strangers to adopt, but which most people reject. This rejection usually seems consistent with the hypothesis that people want to act, and have their friends and allies act, as if they are more valuable than distant strangers.
I don’t expect any of us to be sufficiently altruistic to become pure utilitarians in the foreseeable future. But I expect us to improve the world by continuing to find new agreements that come closer to approximating that ideal, because such agreements mean that we help each other more. Dealism seems like what we get when we aim for the benefits of utilitarianism, but admit that people won’t be that altruistic without good incentives to be altruistic.
Why are people attracted to moral systems that are more deontological than dealism?
DeScioli and Kurzban present evidence that moral systems were developed mostly for somewhat selfish reasons: people wanted predictable rules for how to choose sides in disputes. We’re biased toward rules for which it’s easy to predict how others will choose sides. That limits how complex the rules can be, and causes us to prefer rules that depend on readily observable evidence. E.g. people are reluctant to give the utilitarian answer to the trolley problem because “don’t kill” is a simple rule, and it’s easy to observe that inaction in the trolley problem satisfies that rule, whereas it’s hard to observe that someone is only able to save five people by killing one (how would the observer evaluate whether you could have saved the five people some other way?).
Under this model, moral systems are created primarily for selfish reasons, but when there are multiple options to choose from, people tend to choose the option under which society works better. [citation needed?] We want to look more moral than we actually are, so we often claim that this weak altruistic component is our only reason for choosing moral rules.
We can see this effect at work in rules regarding animal welfare. Vegetarianism is almost certainly not a great rule for improving animal welfare: it allows followers to eat eggs from chickens who are kept in cruel conditions, while preventing the existence of some animals who would live somewhat happy lives before becoming meat. Even if you imagine that the act of killing a chicken matters much more than the conditions under which it lives, the egg-eater is still harming chickens, because it’s predictable that most egg-buying practices will cause farmers to kill chickens once they’re past egg-laying age. The main sense in which typical egg-eating vegetarians are being more moral than meat-eaters is that they’re obeying a rule which is relatively easy to enforce, and that rule has some tendency to help animals. The rule seems to be valued mainly because it obscures responsibility for the chicken’s death.
I see a spectrum of motives for moral rules, from caring mostly about having clear boundaries (low cost of analyzing?) to mostly caring about benefits of good rules (at a potentially high cost of detecting infractions).
Technological progress provides better ability to observe evidence of which rules we’re obeying, so we now have a wider set of potential rules that we can select from to produce our current moral agreements. And maybe technology is enabling us to understand slightly more complex rules. That is enabling us to shift our moral systems closer to utilitarian ones.
But doesn’t dealism endorse immoral things such as slavery under some conditions?
Sort of. If slaves in 1800 didn’t have enough influence to offer a way out of their slavery, then dealism doesn’t provide a way to abolish slavery that would have worked in 1800. But that’s also true of any moral system that doesn’t have a convincing argument for why we should be moral. If you want to convince me that your moral system is better than dealism, then convince me that people weren’t aware of it in 1800, but would have used it to abolish slavery if they were aware of it then.
But that doesn’t mean that slavery needed to be unprofitable for slaveowners in order to abolish it – the behavior of people who had no direct interest in slavery suggests they valued a more universalizable ethical system, enough so that they did a bit to subvert the laws about runaway slaves. The costs associated with runaway slaves helped erode support for slavery.
Adopting dealism will mostly have little impact on most people’s day-to-day lives. It is mainly important for how it affects our analysis of how to improve our moral agreements.
Has dealism or something equivalent been described better elsewhere? I’m sure there are vast amounts of loosely related writings, and I have little idea how successful I’ve been at locating the best ones.
How well does dealism describe the moral systems used by LWers / CFARians / EAs? I typically get the impression that people in these communities have beliefs which are mostly compatible with dealism, but I’d like clearer evidence.
– Good and Real describes this better than I expect I can.
 – Some examples:
- The internet has prompted changes in attitudes toward keeping information proprietary. People now feel some social pressure to make many types of source code and research data available freely, whereas it was normal 25 years ago to restrict that access so that only a few privileged people would see them.
- The book The Institutional Revolution argues that improved measurement prompted society to agree not to fight duels. The book convinced me that this is part of a broad pattern of how social agreements change. Alas, I don’t expect to create a convincing summary of that argument.
- Some societies have recently (i.e. in the past few centuries) developed the ability to create competent organizations that are much larger than the Dunbar number. I’m mainly thinking of corporations, but this also applies to other organizations. Fukuyama’s book Trust provides some insights into how culture affects the feasibility of large organizations. Some of our large corporations (e.g. Intel or solar panel companies) wouldn’t be able to achieve the economies of scale that they have achieved unless investors trusted both the culture and the legal rules to give them a predictable share of profits. Making that work is tricky – mainland China is trying, and it’s unclear whether investors should trust their current system.
- Futarchy is an example of an option which would have been impractical a few centuries ago, but which could probably be implemented today if people wanted it.
None of these examples point as clearly as I would like to the underlying pattern that I’m trying to describe, but together they should provide a decent outline.
 – some people seem to imagine that moral arguments caused the abolition of slavery. I expect that moral reasons played some role in the abolition of slavery, but not in the sense that the timing could be explained by the availability of new moral understanding. Has anyone documented new moral arguments that were introduced shortly before the abolition of slavery which changed people’s minds?
I suspect the timing of abolitionism was much more connected to technological changes – e.g. railroads made it easier for slaves to escape; slaveowners tried to adjust by making more draconian laws for escaped slaves, which imposed annoying costs on regions with few slaves.
 – see the Fugitive Slave Act, and this commentary on northern aid to runaway slaves. Also see Hummel and Weingast’s paper for (controversial?) arguments that slaveowners had reason to worry about this.