In this post, I’ll describe features of the moral system that I use. I expect it’s similar enough to Robin Hanson’s views that I’ll use his name for it, dealism, but I haven’t seen a well-organized description of dealism. (See a partial description here.)
It’s also pretty similar to the system that Drescher described in Good and Real, combined with Anna Salamon’s description of causal models for Newcomb’s problem (which shows how to replace Drescher’s confused notion of “subjunctive relations” with a causal model). Good and Real eloquently describes why people should want to follow a dealist-like moral system; my post will be easier to understand if you understand Good and Real.
The most similar mainstream system is contractarianism. Dealism applies to a broader set of agents, and depends less on the initial conditions. I haven’t read enough about contractarianism to decide whether dealism is a special type of contractarianism or whether it should be classified as something separate. Gauthier’s writings look possibly relevant, but I haven’t found time to read them.
Scott Aaronson’s eigenmorality also overlaps a good deal with dealism, and is maybe a bit easier to understand.
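To make the eigenmorality idea concrete, here is a minimal toy sketch (my own simplification, not Aaronson’s exact construction; the cooperation matrix and scoring rule are illustrative assumptions): an agent’s morality score is its cooperation with others, weighted by their scores, found as a fixed point via power iteration, much like PageRank.

```python
import numpy as np

def eigenmorality(coop, iters=100):
    """coop[i][j] = how much agent i cooperated with agent j (nonnegative)."""
    coop = np.asarray(coop, dtype=float)
    scores = np.ones(coop.shape[0]) / coop.shape[0]  # start with uniform scores
    for _ in range(iters):
        scores = coop @ scores  # cooperating with high-scoring agents raises your score
        scores /= scores.sum()  # renormalize each round
    return scores

# Hypothetical 3-agent example: agents 0 and 1 cooperate with each other;
# agent 2 cooperates with nobody but itself.
coop = [[1.0, 1.0, 0.0],
        [1.0, 1.0, 0.0],
        [0.0, 0.0, 1.0]]
print(eigenmorality(coop))  # -> approximately [0.5, 0.5, 0.0]
```

The circularity (“moral agents are those who cooperate with moral agents”) is resolved the same way PageRank resolves “important pages are those linked to by important pages”: as the leading eigenvector of the matrix.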
Under dealism, morality consists of rules / agreements / deals, especially those that can be universalized. We become more civilized as we coordinate better to produce more cooperative deals. I’m being somewhat ambiguous about what “deal” and “universalized” mean, but those ambiguities don’t seem important to the major disagreements over moral systems, and I want to focus in this post on high-level disagreements.
2.
I want to emphasize that I’m using a broad category of deals, not just explicit contracts. These may include:
- agreements between humans and cats (e.g. I’ll express appreciation/affection if you pet me)
- superrationality, which I consider a fairly typical form of a morally important deal (see the toy sketch after this list)
- the confusingly-named acausal trade (but I don’t want to argue for that here)
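As a toy sketch of the superrationality point (my framing and payoff numbers, not from the post): in a one-shot Prisoner’s Dilemma between two agents known to run the same decision procedure, a superrational agent compares (C, C) against (D, D) rather than holding the other player’s move fixed, and so cooperates.

```python
# Illustrative payoffs I chose; PAYOFFS[(my_move, your_move)] is my payoff.
PAYOFFS = {
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def nash_move():
    # Holding the opponent's move fixed, D dominates C (5 > 3 and 1 > 0).
    return "D"

def superrational_move():
    # Both players are known to run this same procedure, so the only
    # reachable outcomes are (C, C) and (D, D); cooperating wins since 3 > 1.
    return "C" if PAYOFFS[("C", "C")] > PAYOFFS[("D", "D")] else "D"

for decide in (nash_move, superrational_move):
    m = decide()
    print(decide.__name__, "->", m, "payoff each:", PAYOFFS[(m, m)])
```

The “deal” here is implicit: neither agent signs anything, but the shared decision procedure plays the role of an enforceable agreement.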
Most attempts at morality have a good answer either for why morality is good for the world, or for why an individual ought to follow it, but not both. [citation needed?]
Dealism seems like the best available attempt to reconcile those desires. Broad deals don’t get adopted unless they’re mostly good. And people (with a few exceptions, such as despots) have incentives to agree to increasingly universal rules.
I expect that most people underestimate our ability to make agreements that approximate “I’ll act responsibly when nobody’s looking if you’ll also do that”. I.e. agree to alter the algorithms we’re running, at a level fairly close to our utility function, so that we intend to act more morally [1]. This seems pretty different from how our formal contracts work, but moderately close to how normal social pressure works.
Note that dealism attaches less importance than most moral systems to the threshold between moral rules and other rules. Such thresholds may be useful if they cause people to take important rules more seriously, but otherwise the distinction seems fairly arbitrary. Dealism may be a bit less extreme in this regard than utilitarianism, but not by much.
Dealism implies that we have weak pressures to expand our notion of which agents qualify as relevant to morality. We have important interactions with members of our tribe, which makes it important to consider their interests. We have limited cooperative interactions with animals and with people who will be born millions of years in the future, so we only have tiny incentives to include them in our moral sphere.
Most other moral systems sound strange when dealing with those topics. Some use ad hoc rules to effectively exclude most animals and most of the far future from their moral sphere. The moral systems that take the distant future and animal welfare seriously produce weird results that people are reluctant to follow.
Under dealism, it shouldn’t be very surprising that many people are upset by cruelty to cats (since many have cooperative interactions with cats), but substantially fewer care about cruelty to pigs (since typical human-pig interactions are much less cooperative).
3.
Utilitarianism is the stereotypical example of a moral system that I’d like distant strangers to adopt [2], but which most people reject. It usually seems like this rejection is consistent with the hypothesis that they want to act, and have their friends and allies act, as if they are more valuable than distant strangers.
I don’t expect any of us to be sufficiently altruistic to become pure utilitarians in the foreseeable future. But I expect us to improve the world by continuing to find new agreements that come closer to approximating that ideal, because such agreements mean that we help each other more. Dealism seems like what we get when we aim for the benefits of utilitarianism, but admit that people won’t be that altruistic without ideal incentives to be altruistic.
4.
Why are people attracted to moral systems that are more deontological than dealism?
DeScioli and Kurzban present evidence that moral systems were developed mostly for somewhat selfish reasons: people wanted predictable rules for how to choose sides in disputes. We’re biased toward rules for which it’s easy to predict how others will choose sides. That limits how complex the rules can be, and causes us to prefer rules that depend on readily observable evidence. E.g. people are reluctant to give the utilitarian answer to the trolley problem because “don’t kill” is a simple rule, and it’s easy to observe that inaction in the trolley problem satisfies that rule, whereas it’s hard to observe that someone is only able to save five people by killing one (how would the observer evaluate whether you could have saved the five people some other way?).
Under this model, moral systems are created primarily for selfish reasons, but when there are multiple options to choose from, people tend to choose the option under which society works better. [citation needed?] We want to look more moral than we actually are, so we often claim that this weak altruistic component is our only reason for choosing moral rules.
We can see this effect at work in rules regarding animal welfare. Vegetarianism is almost certainly not a great rule for improving animal welfare: it allows followers to eat eggs from chickens who are kept in cruel conditions, while preventing the existence of some animals who would live somewhat happy lives before becoming meat. Even if you imagine that the act of killing a chicken matters much more than the conditions under which it lives, the egg-eater is still harming chickens, because it’s predictable that most egg-buying practices will cause farmers to kill chickens once they’re past egg-laying age. The main sense in which typical egg-eating vegetarians are being more moral than meat-eaters is that they’re obeying a rule which is relatively easy to enforce, and that rule has some tendency to help animals. The rule seems to be valued mainly because it obscures responsibility for the chicken’s death.
I see a spectrum of motives for moral rules, from caring mostly about having clear boundaries (low cost of analyzing?) to mostly caring about benefits of good rules (at a potentially high cost of detecting infractions).
Technological progress provides better ability to observe evidence of which rules we’re obeying, so we now have a wider set of potential rules that we can select from to produce our current moral agreements [3]. And maybe technology is enabling us to understand slightly more complex rules. That is enabling us to shift our moral systems closer to utilitarian ones.
5.
But doesn’t dealism endorse immoral things such as slavery under some conditions?
Sort of. If slaves in 1800 didn’t have enough bargaining power to offer a deal that ended their slavery, then dealism doesn’t provide a way to abolish slavery that would have worked in 1800. But that’s also true of any moral system that doesn’t have a convincing argument for why we should be moral [4]. If you want to convince me that your moral system is better than dealism, then convince me that people weren’t aware of it in 1800, but would have used it to abolish slavery if they were aware of it then.
But that doesn’t mean that slavery needed to be unprofitable to slaveowners in order for it to be abolished – the behavior of people who had no direct interest in slavery suggests they valued a more universalizable ethical system. Enough so that they did a bit to subvert the laws about runaway slaves [5]. The costs associated with runaway slaves helped erode support for slavery.
6.
Adopting dealism will have little impact on most people’s day-to-day lives. It is mainly important for how it affects our analysis of how to improve our moral agreements.
Has dealism or something equivalent been described better elsewhere? I’m sure there are vast amounts of loosely related writings, and I have little idea how successful I’ve been at locating the best ones.
How well does dealism describe the moral systems used by LWers / CFARians / EAs? I typically get the impression that people in these communities have beliefs which are mostly compatible with dealism, but I’d like clearer evidence.
footnotes
[1] – Good and Real describes this better than I expect I can.
[2] – unless I can get them to adopt a moral system more favorable to me. The veil of ignorance view suggests I shouldn’t expect that.
[3] – Some examples:
- The internet has prompted changes in attitudes toward keeping information proprietary. People now feel some social pressure to make many types of source code and research data available freely, whereas it was normal 25 years ago to restrict that access so that only a few privileged people would see them.
- The book The Institutional Revolution argues that improved measurement prompted society to agree not to fight duels. The book convinced me that this is part of a broad pattern of how social agreements change. Alas, I don’t expect to create a convincing summary of that argument.
- Some societies have recently (i.e. in the past few centuries) developed the ability to create competent organizations that are much larger than the Dunbar number. I’m mainly thinking of corporations, but this also applies to other organizations. Fukuyama’s book Trust provides some insights into how culture affects the feasibility of large organizations. Some of our large corporations (e.g. Intel or solar panel companies) wouldn’t be able to achieve the economies of scale that they have achieved unless investors trusted both the culture and the legal rules to give them a predictable share of profits. Making that work is tricky – mainland China is trying, and it’s unclear whether investors should trust its current system.
- Futarchy is an example of an option which would have been impractical a few centuries ago, but which could probably be implemented today if people wanted it.
None of these examples point as clearly as I would like to the underlying pattern that I’m trying to describe, but together they should provide a decent outline.
[4] – some people seem to imagine that moral arguments caused the abolition of slavery. I expect that moral reasons played some role in the abolition of slavery, but not in the sense that the timing could be explained by the availability of new moral understanding. Has anyone documented new moral arguments that were introduced shortly before the abolition of slavery which changed people’s minds?
I suspect the timing of abolitionism was much more connected to technological changes – e.g. railroads made it easier for slaves to escape; slaveowners tried to adjust by making more draconian laws for escaped slaves, which imposed annoying costs on regions with few slaves.
[5] – see the Fugitive Slave Act, and this commentary on northern aid to runaway slaves. Also see Hummel and Weingast’s paper for (controversial?) arguments that slaveowners had reason to worry about this.
Updated 2018-06-06: Harsanyi’s Theorem (see also here) provides a much clearer explanation of why moral systems should look increasingly like utilitarianism (H/t Abram Demski).
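Roughly (my own informal paraphrase of the theorem, not taken from either link):

```latex
% Harsanyi's aggregation theorem, informally stated:
% if each individual's utility u_i and the social ranking W all satisfy
% the VNM axioms over lotteries, and W respects unanimous preferences
% (Pareto), then W must be a non-negative weighted sum of the u_i:
W(x) \;=\; \sum_{i=1}^{n} c_i \, u_i(x), \qquad c_i \ge 0
```

I.e. any consistent, Pareto-respecting way of aggregating individual preferences already looks like (weighted) utilitarianism.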
comments
I agree a lot with your model of morality. I am currently writing a series of posts outlining a similar idea. Understanding the function of morality is key. For the longest time I was unhappy (and confused) that people are not trying to answer the “Why be moral to begin with?” question.
Though I think a couple more ingredients are required for a full working model, primarily: why is there so much self-deception around morality? For example, admitting that you’re behaving morally in order to gain something somehow diminishes the goodness. Thus people will deny that dealism approximates their morality well.
Because of this, I feel EAs will generally not agree with your position. Dealism (or at least my naive interpretation of it) will argue that using animals in whatever way we want (including torture for fun) is fine, because they cannot strike back. This, on its own, does not map to our moral intuitions well.
Morality is clearly about signalling. Agents like to cooperate with other agents that are selfless and non-Machiavellian. But how do you prove that you are selfless? If it’s obvious to others that your good actions were taken to improve your perceived trustworthiness or status, then they will not assume you’re actually non-Machiavellian. You somehow have to prove to everyone that you do good for goodness’ sake. That your algorithm cannot act unkindly.
So how do agents show that they are selfless? By using:
(1) self-deception: don’t ask “why be moral?”; only Machiavellians do that
(2) emotions (for some reason difficult for humans to fake) for verifiably precommitting to agreements (friendships, marriages, tribalism)
Re: [4]
You know that the “Underground Railroad” was a metaphor, don’t you? Slaves didn’t use actual railroads to escape because they were a choke point, too easily policed. It was only the last leg, Baltimore to Philadelphia, where free blacks provided sufficient camouflage. Railroads made it easier to get to Canada, but it was the Fugitive Slave Act that spurred them to Canada, not vice versa. (Henry “Box” Brown started in Virginia and relied on high speed transit, but I don’t think he was typical.)
You shouldn’t think just in terms of North vs. South, because that isn’t the only place the debate played out. England had few slaves and no fugitives, but clarified its ban on slavery c. 1770, banned the slave trade in 1807, and slavery in the Empire in 1833, all before railroads. Maybe other technological changes made communication easier, and maybe railroads had the same effect in America.
Maybe railroads brought the North and South into conflict, but why did the North abolish slavery? Agricultural slavery turned out not to be viable, but Northerners kept domestic slaves for a long time. Around 1800 they began a slow process of emancipation, declaring the children of slaves to be free.
Ok Douglas, I guess I don’t have good evidence for why runaway slaves became an important problem for the South. Although the Hummel and Weingast paper points out that runaway slaves mattered, and were somewhat common, in states such as Maryland (if Maryland stopped supporting southern slavery, the slaveowners’ influence on Congress would be crippled).
And maybe slavery became doomed when England banned it, and I don’t have a good understanding of why that happened.
LoopyBeliever, I don’t expect any consensus among EAs about the nature of morality. I expect the EA movement to continue trying to generate compromises between multiple moral systems. I’m guessing that dealism is about as popular among EAs as any other comparable-level moral system. https://concepts.effectivealtruism.org/concepts/moral-trade/
Yes, people pretend to be more moral than they actually are, for signalling reasons. People aren’t able to prove themselves to be perfectly selfless or perfectly honest. But we can provide plenty of relevant evidence, and humans have evolved to devote a good deal of brainpower to evaluating such evidence.
I think it is important to separate the questions. Why did the rate of fugitives increase? Why did the South care so much that it accepted the Fugitive Slave Act in place of a slave state? Why did the North adopt abolition, and did fugitives play a role? I don’t think fugitives were very important for moral change in America, because they weren’t important anywhere else.
Sorry, I hadn’t read your links, so I didn’t know it was just about Maryland. So, yes, railroads could have made a difference. I don’t think that they did, because they were a chokepoint, but I don’t have numbers. I ran across the claim that the reason to keep slaves illiterate was to prevent them from forging freedom papers to use on the railroad. They did keep slaves illiterate, but those on the Underground Railroad had the help of literate conductors who could have done the forging. But I don’t think that they exploited this. I think rural fugitives avoided cities and those in Baltimore more often sought refuge with black shipping crews. But I don’t know.
As for why the South accepted the Fugitive Slave Act, I’ll stick with the historical consensus that it was “symbolic,” which you can dress up as “signaling,” but I don’t think that helps much. Maybe Hummel and Weingast are right that the Deep South should have worried about fugitives leading to the collapse of slavery in Maryland, but that is very weak as evidence that they did worry about it, that it motivated the passage of the Act. I think they did worry a bit about the slow decline of Maryland slavery because it was uncompetitive, like in the North.