ethics

All posts tagged ethics

Some of Robin Hanson’s Malthusian-sounding posts prompted me to wonder how we can create a future that is better than the repugnant conclusion. It struck me that there’s no reason to accept the assumption that increasing the number of living minds to the limit of available resources implies that the quality of the lives those minds live will decrease to where they’re barely worth living.

If we imagine the minds to be software, then a mind that barely has enough resources to live could be designed so that it is very happy with the CPU cycles or negentropy it gets, even if those are negligible compared to what other minds get. Or if there is some need for life to be biological, a variant of hibernation might accomplish the same result.

If this is possible, then what I find repugnant about the repugnant conclusion is that it perpetuates the cruelty of evolution, which produces suffering in beings that have fewer resources than they evolved to use. Any respectable civilization will engineer away the conflict between average utilitarianism and total utilitarianism.

If instead the most important limit on the number of minds is the supply of matter, then there is a tradeoff between more minds and more atoms per mind. But there is no mere addition paradox to create concerns about a repugnant conclusion if the creation of new minds reduces the utility of other minds.
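
To make this concrete, here is a toy sketch (my own illustration with arbitrary numbers, not anything from Hanson’s posts): if per-mind utility saturates at small resource shares, total and average utility stop pulling in opposite directions as population grows.

```python
import math

RESOURCES = 1000.0  # arbitrary fixed resource budget

def evolved_utility(share):
    """Diminishing returns: total utility favors spreading resources
    ever thinner, while average quality of life collapses."""
    return math.sqrt(share)

def engineered_utility(share):
    """Saturates quickly: a mind designed to be very happy even with
    a negligible share of resources."""
    return 10.0 * share / (share + 0.01)

for n in (10, 1_000, 100_000):
    share = RESOURCES / n
    for name, u in (("evolved", evolved_utility), ("engineered", engineered_utility)):
        print(f"n={n:6d}  {name:10s}  average={u(share):8.3f}  total={n * u(share):12.1f}")
```

With the diminishing-returns function, total utility keeps rising as resources are spread thinner while average quality of life collapses – the dynamic behind the repugnant conclusion. With the saturating function, average utility never falls below half its maximum even as total utility grows enormously, so the two criteria stop conflicting.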

(Douglas W. Portmore has a similar but less ambitious conclusion (pdf)).

Book review: Human Enhancement, edited by Julian Savulescu and Nick Bostrom.

This book starts out with relatively uninteresting articles, and only the last quarter or so of it is worth reading.

Because I agree with most of the arguments for enhancement, I skipped some of the pro-enhancement arguments and tried to read the anti-enhancement arguments carefully. They mostly boil down to the claim that people’s preference for natural things is sufficient to justify broad prohibitions on enhancing human bodies and human nature. That isn’t enough of an argument to deserve as much discussion as it gets.

A few of the concerns discussed by advocates of enhancement are worth more thought. The question of whether unenhanced humans would retain political equality and rights enables us to imagine dystopian results of enhancement. Daniel Walker provides a partly correct analysis of conditions under which enhanced beings ought to paternalistically restrict the choices and political power of the unenhanced. But he’s overly complacent about assuming the paternalists will have the interests of the unenhanced at heart. The biggest problem with paternalism to date is that it’s done by people who are less thoughtful about the interests of the people they’re controlling than they are about finding ways to serve their own self-interest. It is possible that enhanced beings will be perfect altruists, but it is far from being a natural consequence of enhancement.

The final chapter points out the risks of being overconfident about our ability to improve on nature. Its authors describe questions we should ask about why evolution would have produced a result that is different from what we want. One example that they give suggests they remain overconfident – they repeat a standard claim about the human appendix being a result of evolution getting stuck in a local optimum. Recent evidence suggests that the appendix performs a valuable function in recovery from diarrhea (still a major cause of death in some parts of the world), and harm from appendicitis seems rare outside of industrialized nations (maybe due to differences in dietary fiber?).

The newest and most provocative ideas in the book have little to do with the medical enhancements that the title evokes. Robin Hanson’s call for mechanisms to make people more truthful probably won’t gather much support, as people are clever about finding objections to any specific method that would be effective. Still, asking the question the way he does may encourage some people to think more clearly about their goals.

Nick Bostrom and Anders Sandberg describe an interesting (original?) hypothesis about why placebos (sometimes) work. It involves signaling that there is relatively little need to conserve the body’s resources for fighting future injuries and diseases. Could this understanding lead to insights about how to more directly and reliably trigger this effect? More effective placebos have been proposed as jokes. Why is it so unusual to ask about serious research into this subject?

Book review: Good and Real: Demystifying Paradoxes from Physics to Ethics by Gary Drescher.

This book tries to derive ought from is. The more important steps explain why we should choose the one-box answer to Newcomb’s problem, then argue that the same reasoning should provide better support for Hofstadter’s idea of superrationality than has previously been demonstrated, and that superrationality can be generalized to provide morality. He comes close to the right approach to these problems, and I agree with the conclusions he reaches, but I don’t find his reasoning convincing.

He uses a concept which he calls a subjunctive relation, which is intermediate between a causal relation and a correlation, to explain why a choice that seems to happen after its goal has been achieved can be rational. That is the part of his argument that I find unconvincing. The subjunctive relation behaves a lot like a causal relation, and I can’t figure out why it should be treated as more than a correlation unless it’s equivalent to a causal relation.

I say that the one-box choice in Newcomb’s problem causes money to be placed in the box, and that superrationality and morality should be followed for similar reasons involving counterintuitive types of causality. It looks like Drescher is reluctant to accept this type of causality because he doesn’t think clearly enough about the concept of choice. It often appears that he is using something like a folk-psychology notion of choice that seems incompatible with the assumptions of Newcomb’s problem. I expect that with a sufficiently sophisticated concept of choice, Newcomb’s problem and similar situations cease to seem paradoxical. That concept should reflect a counterintuitive difference between the time at which a choice is made and the time at which it is introspectively observed as being irrevocable. When describing Kavka’s toxin problem, he talks more clearly about the concept of choice, and almost finds a better answer than subjunctive relations, but backs off without adequate analysis.
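
For readers who want the numbers behind the one-box intuition, here is a minimal expected-value sketch (standard Newcomb payoffs; the predictor accuracy p is a free parameter of my illustration, and this is the simple evidential calculation rather than Drescher’s subjunctive argument):

```python
M = 1_000_000  # opaque box: filled iff the predictor expected one-boxing
K = 1_000      # transparent box: always contains this much

def expected_payoff(one_box, p):
    """Expected payoff when the predictor is right with probability p."""
    if one_box:
        return p * M             # box was filled with probability p
    return (1 - p) * M + K       # box filled only when the predictor erred

for p in (0.5, 0.5006, 0.9, 0.99):
    print(f"p={p:.4f}  one-box={expected_payoff(True, p):>11,.1f}"
          f"  two-box={expected_payoff(False, p):>11,.1f}")
```

Setting the two expressions equal gives a breakeven accuracy of (M+K)/2M, about 0.5005, so even a barely-better-than-chance predictor makes one-boxing the higher-expected-value choice. The philosophical dispute is over whether this calculation misuses a mere correlation, which is exactly where the subjunctive relation is supposed to do its work.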

The book also has a long section explaining why the Everett interpretation of quantum mechanics is better than the Copenhagen interpretation. The beginning and end of this section are good, but there’s a rather dense section in the middle that takes much effort to follow without adding much.

Book review: Why Humans Cooperate: A Cultural and Evolutionary Explanation by Joseph Henrich and Natalie Henrich.
This book provides a clear and informative summary of the evolutionary theories that explain why people cooperate (but few novel ideas), and some good but unexciting evidence that provides a bit of support for the theories.
One nice point they make is that unconditional altruism discourages cooperation – it’s important to have some sort of reciprocity (possibly indirect) for a society to prevent non-cooperators from outcompeting cooperators.
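
A toy iterated prisoner’s dilemma makes this concrete (my illustration with standard textbook payoffs, not the Henrichs’ model):

```python
ROUNDS = 20
# standard payoffs: mutual cooperation 3/3, mutual defection 1/1,
# lone defector 5 vs. exploited cooperator 0
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def altruist(opponent_moves): return "C"          # unconditional cooperator
def defector(opponent_moves): return "D"
def tit_for_tat(opponent_moves):                  # reciprocator
    return opponent_moves[-1] if opponent_moves else "C"

def play(a, b):
    """Total payoffs for strategies a and b over ROUNDS rounds."""
    moves_a, moves_b, score_a, score_b = [], [], 0, 0
    for _ in range(ROUNDS):
        ma, mb = a(moves_b), b(moves_a)  # each strategy sees the other's history
        pa, pb = PAYOFF[(ma, mb)]
        score_a += pa; score_b += pb
        moves_a.append(ma); moves_b.append(mb)
    return score_a, score_b

print("defector vs altruist:      ", play(defector, altruist))       # (100, 0)
print("defector vs tit-for-tat:   ", play(defector, tit_for_tat))    # (24, 19)
print("altruist vs altruist:      ", play(altruist, altruist))       # (60, 60)
print("tit-for-tat vs tit-for-tat:", play(tit_for_tat, tit_for_tat)) # (60, 60)
```

In a population of unconditional altruists, defecting earns 100 per pairing against the 60 that cooperation earns, so defectors spread; among reciprocators, defecting earns only 24, so it stops paying once cooperation is conditional.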
The one surprising fact uncovered in their field studies is that people are more generous in the Dictator Game than in the Ultimatum Game (games where one player decides how to divide money between himself and another player; in the Ultimatum Game the second player can reject the division, in which case neither gets anything). It appears that the Ultimatum Game encourages people to think in terms of business-like interactions, but in the Dictator Game a noncompetitive mode of thought dominates.

Book review: Beyond AI: Creating the Conscience of the Machine by J. Storrs Hall.
The first two thirds of this book survey current knowledge of AI and make some guesses about when and how it will take off. This part is more eloquent than the corresponding parts of most books on similar subjects, and its somewhat unconventional perspective makes it worth reading if you are reading several books on the subject. But ease of reading is the only criterion by which this section stands out as better than competing books.
The last five chapters are surprisingly good, and should shame most professional philosophers, whose writings by comparison are a waste of time.
His chapter on consciousness, qualia, and related issues is more concise and persuasive than anything else I’ve read on these subjects. It’s unlikely to change the opinions of people who have already thought about these subjects, but it’s an excellent place for people who are unfamiliar with them to start.
His discussion of ethics in terms of game theory and evolutionary pressures is an excellent way to frame the subject.
My biggest disappointment was that he starts to recognize a possibly important risk of AI when he says “disparities among the abilities of AIs … could negate the evolutionary pressure to reciprocal altruism”, but then seems to dismiss that thoughtlessly (“The notion of one single AI taking off and obtaining hegemony over the whole world by its own efforts is ludicrous”).
He probably has semi-plausible grounds for dismissing some of the scenarios of this nature that have been proposed (e.g. the speed at which some people imagine an AI would take off is improbable). But if AIs with sufficiently general purpose intelligence enhance their intelligence at disparate rates for long enough, the results would render most of the book’s discussion of ethics irrelevant. The time it took humans to accumulate knowledge didn’t give Neanderthals much opportunity to adapt. Would the result have been different if Neanderthals had learned to trade with humans? The answer is not obvious, and probably depends on Neanderthal learning abilities in ways that I don’t know how to analyze.
Also, his arguments for optimism aren’t quite as strong as he thinks. His point that career criminals are generally of low intelligence is reassuring if the number of criminals is all that matters. But when the harm done by one relatively smart criminal can be very large (e.g. Mao), it’s hard to say that the number of criminals is all that matters.
Here’s a nice Mencken quote, part of which the book quotes:

Moral certainty is always a sign of cultural inferiority. The more uncivilized the man, the surer he is that he knows precisely what is right and what is wrong. All human progress, even in morals, has been the work of men who have doubted the current moral values, not of men who have whooped them up and tried to enforce them. The truly civilized man is always skeptical and tolerant, in this field as in all others. His culture is based on ‘I am not too sure.’

Another interesting tidbit is the anecdote that H.G. Wells predicted in 1907 that flying machines would be built. In spite of knowing a lot about attempts to build them, he wasn’t aware that the Wright brothers had succeeded in 1903.
If an AI started running in 2003 that has accumulated the knowledge of a 4-year old human and has the ability to continue learning at human or faster speeds, would we have noticed? Or would the reports we see about it sound too much like the reports of failed AIs for us to pay attention?

Book review: Reasons and Persons by Derek Parfit.
This book does a very good job of pointing out inconsistencies in common moral intuitions, and does a very mixed job of analyzing how to resolve them.
The largest section of the book deals with personal identity, using a bit of neuroscience plus scenarios such as a Star Trek transporter to show that nonreductionist approaches produce conclusions which are strange enough to disturb most people. I suspect this analysis was fairly original when it was written, but I’ve seen most of the ideas elsewhere. His analysis is more compelling than most other versions, but it’s not concise enough for many readers.
The most valuable part of the book is the last section, weighing conflicts of interest between actual people and people who could potentially exist in the future. His description of the mere addition paradox convinced me that it’s harder than I thought to specify plausible beliefs which don’t lead to the Repugnant Conclusion (i.e. that some very large number of people with lives barely worth living can be a morally better result than some smaller number of very happy people). He ends by concluding he hasn’t found a way to resolve the conflicts between the principles he thinks morality ought to satisfy.
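
The mere addition steps are easy to illustrate with made-up numbers (mine, not Parfit’s):

```python
def total(pop):   return sum(pop)
def average(pop): return sum(pop) / len(pop)

A      = [100] * 10               # 10 very happy people
A_plus = [100] * 10 + [10] * 10   # mere addition: 10 extra lives worth living
B      = [60] * 20                # redistribute: everyone equal, total higher

for name, pop in (("A", A), ("A+", A_plus), ("B", B)):
    print(f"{name:3s} total={total(pop):5d} average={average(pop):6.1f}")
```

Going from A to A+ harms no one and adds lives worth living; going from A+ to B raises both total (1100 to 1200) and average (55 to 60) utility. Yet B has a much lower average than A, and repeating the two steps drives quality of life down toward the barely-worth-living level of the Repugnant Conclusion.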
It appears that if he had applied the critical analysis that makes up most of the book to the principle of impersonal ethics, he would see signs that his dilemma results from trying to satisfy incompatible intuitions. Human desire for ethical rules that are more impersonal is widespread when the changes are close to Pareto improvements, but human intuition seems to be generally incompatible with impersonal ethical rules that are as far from Pareto improvements as the Repugnant Conclusion appears to be. Thus it appears Parfit could only resolve the dilemma by finding a source of morality that transcends human intuition and logical consistency (he wisely avoids looking for non-human sources of morality, but intuition doesn’t seem quite the right way to find a human source) or by resolving the conflicting intuitions people seem to have about impersonal ethics.
The most disappointing part of the book is the argument that consequentialism is self-defeating. The critical part of his argument involves a scenario where a mother must choose between saving her child and saving two strangers. His conclusion depends on an assumption about the special relationship between parent and child which consequentialists have no obvious obligation to agree with. He isn’t clear enough about what that assumption is for me to figure out why we disagree.
I find it especially annoying that the book’s index only covers names, since it’s a long book whose subjects aren’t simple enough for me to fully remember.

Nick Bostrom has a good paper on Astronomical Waste: The Opportunity Cost of Delayed Technological Development, which argues that under most reasonable ethical systems that aren’t completely selfish or very parochial, our philanthropic activities ought to be devoted primarily toward preventing disasters that would cause the extinction of intelligent life.
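
The core of the paper is a back-of-the-envelope comparison. The sketch below uses round numbers of my own choosing (Bostrom derives his own estimates), but any remotely similar magnitudes give the same answer:

```python
stars          = 1e13   # assumed stars in the reachable volume
lives_per_star = 1e9    # assumed population each star's resources could support
delay_years    = 100    # suppose beneficial development is delayed a century

# Each year of delay forgoes a year of all those potential lives.
life_years_lost = stars * lives_per_star * delay_years
print(f"life-years foregone by the delay:  {life_years_lost:.1e}")  # 1.0e+24

# Compare: a tiny cut in extinction risk protects the entire future stream.
risk_reduction      = 1e-6   # one-in-a-million lower chance of extinction
usable_future_years = 1e9    # assumed duration the resources remain usable
expected_saved = risk_reduction * stars * lives_per_star * usable_future_years
print(f"expected life-years from risk cut: {expected_saved:.1e}")   # 1.0e+25
```

Under these assumptions a full century of delay costs an order of magnitude less than a one-in-a-million reduction in extinction risk gains, which is why the argument favors safety over speed.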
Some people who haven’t thought about the Fermi Paradox carefully may overestimate the probability that most of the universe is already occupied by intelligent life. Very high estimates for that probability would invalidate Bostrom’s conclusion, but I haven’t found any plausible arguments that would justify that high a probability.
I don’t want to completely dismiss Malthusian objections that life in the distant future will be barely worth living, but the risk of a Malthusian future would need to be well above 50 percent to substantially alter the optimal focus of philanthropy, and the strongest Malthusian arguments that I can imagine leave much more uncertainty than that. (If I thought I could alter the probability of a Malthusian future, maybe I should devote effort to that. But I don’t currently know where to start.)
Thus the conclusion seems like it ought to be too obvious to need repeating, but it’s far enough from our normal experiences that most of us tend to pay inadequate attention to it. So I’m mentioning it in order to remind people (including myself) of the need to devote more of our time to thinking about risks such as those associated with AI or asteroid impacts.

Voluntary Slavery

Jeff Hummel recently gave an interesting talk on how the law should treat a contract that involves one person becoming enslaved to another. The title made the talk sound like a quaint subject of little importance to present-day politics, and I attended because Hummel has a reputation for being interesting, not because of the title. But his arguments were designed to apply not just to the status that was outlawed by the 13th Amendment, but also to military service (not just the draft, but any type where the soldier can’t quit at will) and marriage.
Hummel argues that instead of asking whether slavery ought to be illegal, we should ask what a legal system ought to do when presented with a dispute between two people over enforcement of a contract requiring slavery.
Some simplistic notions of contracts assume that valid contracts should always be honored and enforced at all costs, but the term efficient breach describes exceptions to that rule (if you’re unfamiliar with this subject, I recommend David Friedman’s book Law’s Order as valuable background to this post).
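
A standard toy example of efficient breach (illustrative numbers of my own, not Friedman’s):

```python
value_to_buyer  = 100   # what performance is worth to the buyer
cost_to_perform = 150   # the seller's cost has unexpectedly risen
damages         = value_to_buyer   # expectation damages make the buyer whole

print("seller if performing:", -cost_to_perform)  # -150
print("seller if breaching: ", -damages)          # -100, so breach is cheaper
print("buyer either way:    ", value_to_buyer)    # made whole by the damages
```

Since losing 100 beats losing 150 and the buyer is made whole either way, breach plus damages is the efficient outcome; forcing performance at all costs would just destroy value. Hummel extends this logic to the slavery contract.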
In the distant past, people who failed to fulfill contracts and had insufficient assets to compensate were often put in debtors’ prisons. The practice was generally abolished around the time that traditional slavery was abolished, and replaced with a more forgiving bankruptcy procedure. Hummel suggests that this wasn’t a random coincidence, and that a slave breaching his promise to his master ought to be treated like most other breaches of contract. If bankruptcy is the appropriate worst-case result of a breach of contract, then the same reasoning ought to imply that bankruptcy is the worst result a legal system should impose on a person who reneges on a promise to be a slave.
Hummel noted that the change from debtors’ prisons to bankruptcy happened around the time that the industrial revolution took off, and suggested that we should wonder whether the timing implies that we became wealthy enough to afford to abolish debtors’ prisons, and/or whether the change helped to cause the industrial revolution. Neither he nor I have a good argument for or against those possibilities.
If this rule were applied to military service, unpopular wars would become harder to fight, as many more soldiers would quit the military. As far as I can tell, peer pressure would have kept soldiers fighting in any war that I think ought to have been fought, so I think this would be a clear improvement.
The talk ended with some disagreement between Hummel and some audience members about what should happen if people want to have a legal system that provides harsher penalties for breach of contract (assume for simplicity that they’re forming a new country in order to do this). Hummel disapproved of this, but it wasn’t clear whether he was doing more than just predicting nobody would want this. I think he should have once again rephrased the question in terms of what existing legal systems should do about such a new legal system. Military/police action to stop the new legal system seems excessive. Social pressure for it to change seems desirable. It was unclear whether anyone there had an opinion about intermediate responses such as economic boycotts, or whether the apparent disagreement was just posturing.

Robin Hanson writes in a post on Intuition Error and Heritage:

Unless you can see a reason to have expected to be born into a culture or species with more accurate than average intuitions, you must expect your cultural or species specific intuitions to be random, and so not worth endorsing.

Deciding whether an intuition is species-specific and no more likely than random to be right seems a bit hard, due to the current shortage of species whose cultures address many of the disputes humans have.
The ideas in this quote follow logically from other essays of Robin’s that I’ve read, but phrasing them this way makes them seem superficially hard to reconcile with arguments by Hayek that we should respect the knowledge contained in culture.
Part of this apparent conflict seems to be due to Hayek’s emphasis on intuitions for which there is some unobvious and inconclusive evidence that supports the cultural intuitions. Hayek wasn’t directing his argument to a random culture, but rather to a culture for which there was some evidence of better than random results, and it would make less sense to apply his arguments to, say, North Korean society. For many other intuitions that Hayek cared about, the number of cultures which agree with the intuition may be large enough to constitute evidence in support of the intuition.
Some intuitions may be appropriate for a culture even though they were no better than random when first adopted. Driving on the right side of the road is a simple example. The arguments given in favor of a judicial bias toward stare decisis suggest this is just the tip of an iceberg.
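
Game theorists model conventions like driving sides as pure coordination games; here is a minimal sketch (a standard textbook framing, though the code is mine):

```python
def payoff(mine, theirs):
    """One driver's payoff: matching the other driver avoids a crash."""
    return 1 if mine == theirs else -10

for convention in ("left", "right"):
    deviate = "right" if convention == "left" else "left"
    print(f"convention={convention}: conform={payoff(convention, convention)},"
          f" deviate={payoff(deviate, convention)}")
```

Both all-left and all-right are equilibria with identical payoffs, so the original choice carried no information; the intuition that enforces whichever convention got adopted is valuable anyway.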
Some of this apparent conflict may be due to the importance of treating interrelated practices together. For instance, laws against extramarital sex might be valuable in societies where people depend heavily on marital fidelity, but not in societies where a divorced person can support herself comfortably. A naive application of Robin’s rule might lead the former society to decide such a law is arbitrary, when a Hayekian might first ask whether the two practices form a unit which should only be altered together.
I’m uncertain whether these considerations fully reconcile the two views, or whether Hayek’s arguments need more caveats.

Robin Hanson has another interesting paper on human attitudes toward truth and on how they might be improved.
See also some related threads on the extropy-chat list here and here.
One issue that Robin raises involves disputes between us and future generations over how much we ought to constrain our descendants to be similar to us. He is correct that some of this disagreement results from what he calls “moral arrogance” (i.e. at least one group of people overestimating their ability to know what is best). But even if we and our descendants were objective about analyzing the costs and benefits of the alternatives, I would expect some disagreement to remain, because different generations will want to maximize the interests of different groups of beings. Conflicting interests between two groups that exist at the same time can in principle be resolved by one group paying the other to change its position. But when one group exists only in the future, and its existence is partly dependent on which policy is adopted now, it’s difficult to see how such disagreements could be resolved in a way that all could agree upon.