
Book review: Dark Skies: Space Expansionism, Planetary Geopolitics, and the Ends of Humanity, by Daniel Deudney.

Dark Skies is an unusually good and bad book.

Good in the sense that 95% of the book consists of uncontroversial, scholarly, mundane claims that accurately describe the views that Deudney is attacking. These parts of the book are careful to distinguish between value differences and claims about objective facts.

Bad in the senses that the good parts make the occasional unfair insult more gratuitous, and that Deudney provides little support for his predictions that his policies will produce better results than those of his adversaries. I count myself as one of his adversaries.

Dark Skies is an opposite of Where Is My Flying Car? in both style and substance.

Book review: Where Is My Flying Car? A Memoir of Future Past, by J. Storrs Hall (aka Josh).

If you only read the first 3 chapters, you might imagine that this is the history of just one industry (or the mysterious lack of an industry).

But this book attributes the absence of that industry to a broad set of problems that are keeping us poor. He looks at the post-1970 slowdown in innovation that Cowen describes in The Great Stagnation[1]. The two books agree on many symptoms, but describe the causes differently: where Cowen says we ate the low hanging fruit, Josh says it’s due to someone “spraying paraquat on the low-hanging fruit”.

The book is full of mostly good insights. It significantly changed my opinion of the Great Stagnation.

The book jumps back and forth between polemics about the Great Strangulation (with a bit too much outrage porn), and nerdy descriptions of engineering and piloting problems. I found those large shifts in tone to be somewhat disorienting – it’s like the author can’t decide whether he’s an autistic youth who is eagerly describing his latest obsession, or an angry old man complaining about how the world is going to hell (I’ve met the author at Foresight conferences, and got similar but milder impressions there).

Josh’s main explanation for the Great Strangulation is the rise of Green fundamentalism[2], but he also describes other cultural / political factors that seem related. But before looking at those, I’ll look in some depth at three industries that exemplify the Great Strangulation.

Book review: The Age of Em: Work, Love and Life when Robots Rule the Earth, by Robin Hanson.

This book analyzes a possible future era when software emulations of humans (ems) dominate the world economy. It is too conservative to tackle longer-term prospects for eras when more unusual intelligent beings may dominate the world.

Hanson repeatedly tackles questions that scare away mainstream academics, and gives relatively ordinary answers (guided as much as possible by relatively standard, but often obscure, parts of the academic literature).


Hanson’s scenario relies on a few moderately controversial assumptions. The assumptions which I find most uncertain are related to human-level intelligence being hard to understand (because it requires complex systems), enough so that ems will experience many subjective centuries before artificial intelligence is built from scratch. For similar reasons, ems are opaque enough that it will be quite a while before they can be re-engineered to be dramatically different.

Hanson is willing to allow that ems can be tweaked somewhat quickly to produce moderate enhancements (at most doubling IQ) before reaching diminishing returns. He gives somewhat plausible reasons for believing this will only have small effects on his analysis. But few skeptics will be convinced.

Some will focus on the potential trillions of dollars’ worth of benefits that higher IQs might produce, but that wealth would not much change Hanson’s analysis.

Others will prefer an inside view analysis which focuses on the chance that higher IQs will better enable us to handle risks of superintelligent software. Hanson’s analysis implies we should treat that as an unlikely scenario, but doesn’t say what we should do about modest probabilities of huge risks.

Another way that Hanson’s assumptions could be partly wrong is if tweaking the intelligence of emulated bonobos produces super-human entities. That seems to require only small changes to his assumptions about how tweakable human-like brains are. But such a scenario is likely harder to analyze than Hanson’s scenario, and it probably makes more sense to understand Hanson’s scenario first.


Wages in this scenario are somewhat close to subsistence levels. Ems have some ability to restrain wage competition, but less than they want. Does that mean wages are 50% above subsistence levels, or 1%? Hanson hints at the former. The difference feels important to me. I’m concerned that sound-bite versions of the book will obscure the difference.

Hanson claims that “wealth per em will fall greatly”. It would be possible to construct a measure by which ems are less wealthy than humans are today. But I expect it will be at least as plausible to use a measure under which ems are rich compared to humans of today, but have high living expenses. I don’t believe there’s any objective unit of value that will falsify one of those perspectives [1].

Style / Organization

The style is more like a reference book than a story or an attempt to persuade us of one big conclusion. Most chapters (except for a few at the start and end) can be read in any order. If the section on physics causes you to doubt whether the book matters, skip to chapter 12 (labor), and return to the physics section later.

The style is very concise. Hanson rarely repeats a point, so understanding him requires more careful attention than with most authors.

It’s odd that the future of democracy gets less than twice as much space as the future of swearing. I’d have preferred that Hanson cut out a few of his less important predictions, to make room for occasional restatements of important ideas.

Many little-known results that are mentioned in the book are relevant to the present, such as: how the pitch of our voice affects how people perceive us, how vacations affect productivity, and how bacteria can affect fluid viscosity.

I was often tempted to say that Hanson sounds overconfident, but he is clearly better than most authors at admitting appropriate degrees of uncertainty. If he devoted much more space to caveats, I’d probably get annoyed at the repetition. So it’s hard to say whether he could have done any better.


Even if we should expect a much less than 50% chance of Hanson’s scenario becoming real, it seems quite valuable to think about how comfortable we should be with it and how we could improve on it.


[1] – The difference matters only in one paragraph, where Hanson discusses whether ems deserve charity more than do humans living today. Hanson sounds like he’s claiming ems deserve our charity because they’re poor. Most ems in this scenario are comfortable enough for this to seem wrong.

Hanson might also be hinting that our charity would be effective at increasing the number of happy ems, and that basic utilitarianism says that’s preferable to what we can do by donating to today’s poor. That argument deserves more respect and more detailed analysis.

I’d like to see more discussion of uploaded ape risks.

There is substantial disagreement over how fast an uploaded mind (em) would improve its abilities or the abilities of its progeny. I’d like to start by analyzing a scenario where it takes between one and ten years for an uploaded bonobo to achieve human-level cognitive abilities. This scenario seems plausible, although I’ve selected it more to illustrate a risk that can be mitigated than because of arguments about how likely it is.

I claim we should anticipate at least a 20% chance that a human-level, bonobo-derived em would improve at least as quickly as a human who uploaded later.

Considerations that weigh in favor of this: bonobo minds seem to be about as general-purpose as human minds, including near-human language ability; and ems’ likely ease of interfacing with other software should enable them to learn new skills faster than biological minds can.

The most concrete evidence that weighs against this is the modest correlation between IQ and brain size. It’s somewhat plausible that it’s hard to usefully add many neurons to an existing mind, and that bonobo brain size represents an important cognitive constraint.

I’m not happy about analyzing what happens when another species develops more powerful cognitive abilities than humans, so I’d prefer to have some humans upload before the bonobos become superhuman.

A few people worry that uploading a mouse brain will generate enough understanding of intelligence to quickly produce human-level AGI. I doubt that biological intelligence is simple / intelligible enough for that to work. So I focus more on small tweaks: the kind of social pressures which caused the Flynn Effect in humans, selective breeding (in the sense of making many copies of the smartest ems, with small changes to some copies), and faster software/hardware.

The risks seem dependent on the environment in which the ems live and on the incentives that might drive their owners to improve em abilities. The most obvious motives for uploading bonobos (research into problems affecting humans, and into human uploading) create only weak incentives to improve the ems. But there are many other possibilities: military use, interesting NPCs, or financial companies looking for interesting patterns in large databases. No single one of those looks especially likely, but with many ways for things to go wrong, the risks add up.

What could cause a long window between bonobo uploading and human uploading? Ethical and legal barriers to human uploading, motivated by risks to the humans being uploaded and by concerns about human ems driving human wages down.

What could we do about this risk?

Political activism may mitigate the risks of hostility to human uploading, but if done carelessly it could create a backlash which worsens the problem.

Conceivably safety regulations could restrict em ownership/use to people with little incentive to improve the ems, but rules that looked promising would still leave me worried about risks such as irresponsible people hacking into computers that run ems and stealing copies.

A more sophisticated approach is to improve the incentives to upload humans. I expect the timing of the first human uploads to be fairly sensitive to whether we have legal rules which enable us to predict who will own em labor. But just writing clear rules isn’t enough – how can we ensure political support for them at a time when we should expect disputes over whether ems are people?

We could also find ways to delay ape uploading. But most ways of doing that would also delay human uploading, which creates tradeoffs that I’m not too happy with (partly due to my desire to upload before aging damages me too much).

If a delay between bonobo and human uploading is dangerous, then we should also ask about dangers from other uploaded species. My intuition says the risks are much lower, since it seems like there are few technical obstacles to uploading a bonobo brain shortly after uploading mice or other small vertebrates.

But I get the impression that many people associated with MIRI worry about risks of uploaded mice, and I don’t have strong evidence that I’m wiser than they are. I encourage people to develop better analyses of this issue.

Book review: Singularity Hypotheses: A Scientific and Philosophical Assessment.

This book contains papers of widely varying quality on superhuman intelligence, plus some fairly good discussions of what ethics we might hope to build into an AGI. Several chapters resemble cautious versions of LessWrong; others come from a worldview totally foreign to LessWrong.

The chapter I found most interesting was Richard Loosemore and Ben Goertzel’s attempt to show there are no likely obstacles to a rapid “intelligence explosion”.

I expect what they label as the “inherent slowness of experiments and environmental interaction” to be an important factor limiting the rate at which an AGI can become more powerful. They think they see evidence from current science that this is an unimportant obstacle compared to a shortage of intelligent researchers: “companies complain that research staff are expensive and in short supply; they do not complain that nature is just too slow.”

Some explanations that come to mind are:

  • Complaints about nature being slow are not very effective at speeding up nature.
  • Complaints about specific tools being slow probably aren’t very unusual, but there are plenty of cases where people know complaints aren’t effective (e.g. complaints about spacecraft traveling slower than the theoretical maximum [*]).
  • Hiring more researchers can increase the status of a company even if the additional staff don’t advance knowledge.

They also find it hard to believe that we have independently reached the limit of the physical rate at which experiments can be done at the same time we’ve reached the limits of how many intelligent researchers we can hire. For literal meanings of physical limits this makes sense, but if it’s as hard to speed up experiments as it is to throw more intelligence into research, then the apparent coincidence could be due to wise allocation of resources to whichever bottleneck they’re better used in.

None of this suggests that it would be hard for an intelligence explosion to produce the 1000x increase in intelligence they talk about over a century, but it seems like an important obstacle to the faster timescales some people believe in (days or weeks).

Some shorter comments on other chapters:

James Miller describes some disturbing incentives that investors would create for companies developing AGI if AGI is developed by companies large enough that no single investor has much influence on the company. I’m not too concerned about this because if AGI were developed by such a company, I doubt that small investors would have enough awareness of the project to influence it. The company might not publicize the project, or might not be honest about it. Investors might not believe accurate reports if they got them, since the reports won’t sound much different from projects that have gone nowhere. It seems very rare for small investors to understand any new software project well enough to distinguish between an AGI that goes foom and one that merely makes some people rich.

David Pearce expects the singularity to come from biological enhancements, because computers don’t have human qualia. He expects it would be intractable for computers to analyze qualia. It’s unclear to me whether this is supposed to limit AGI power because it would be hard for AGI to predict human actions well enough, or because the lack of qualia would prevent an AGI from caring about its goals.

Itamar Arel believes AGI is likely to be dangerous, and suggests dealing with the danger by limiting the AGI’s resources (without saying how it can be prevented from outsourcing its thought to other systems), and by “educational programs that will help mitigate the inevitable fear humans will have” (if the dangers are real, why is less fear desirable?).

[*] – No, that example isn’t very relevant to AGI. Better examples would be atomic force microscopes, or the stock market (where it can take a generation to get a new test of an important pattern), but it would take lots of effort to convince you of that.

Some comments on last weekend’s Foresight Conference:

At lunch on Sunday I was in a group dominated by a discussion between Robin Hanson and Eliezer Yudkowsky over the relative plausibility of new intelligences having a variety of different goal systems versus a single goal system (as in a society of uploads versus Friendly AI). Some of the debate focused on how unified existing minds are, with Eliezer claiming that dogs mostly don’t have conflicting desires in different parts of their minds, and Robin and others claiming such conflicts are common (e.g. when deciding whether to eat food the dog has been told not to eat).

One test Eliezer suggested for the power of systems with a unified goal system is that if Robin were right, bacteria would have outcompeted humans. That got me wondering whether there’s an appropriate criterion by which humans can be said to have outcompeted bacteria. The most obvious criterion on which humans and bacteria are trying to compete is how many copies of their DNA exist. Using biomass as a proxy, bacteria are winning by several orders of magnitude. Another possible criterion is impact on large-scale features of Earth. Humans have not yet done anything that seems as big as the catastrophic changes to the atmosphere (“the oxygen crisis”) produced by bacteria. Am I overlooking other appropriate criteria?

Kartik Gada described two humanitarian innovation prizes that bear some resemblance to a valuable approach to helping the world’s poorest billion people, but will be hard to turn into something with a reasonable chance of success. The Water Liberation Prize would be pretty hard to judge. Suppose I submit a water filter that I claim qualifies for the prize. How will the judges test the drinkability of the water and the reusability of the filter under common third world conditions (which I suspect vary a lot and which probably won’t be adequately duplicated where the judges live)? Will they ship sample devices to a number of third world locations and ask whether they produce water that tastes good, or will they do rigorous tests of water safety? With a hoped-for prize of $50,000, I doubt they can afford very good tests. The Personal Manufacturing Prizes seem somewhat more carefully thought out, but need some revision. The “three different materials” criterion is not enough to rule out overly specialized devices without some clear guidelines about which differences are important and which are trivial. Setting specific award dates appears to assume an implausible ability to predict how soon such a device will become feasible. The possibility that some parts of the device are patented is tricky to handle, as it isn’t cheap to verify the absence of crippling patents.

There was a debate on futarchy between Robin Hanson and Mencius Moldbug. Moldbug’s argument seems to boil down to the absence of a guarantee that futarchy will avoid problems related to manipulation/conflicts of interest. It’s unclear whether he thinks his preferred form of government would guarantee any solution to those problems, and he rejects empirical tests that might compare the extent of those problems under the alternative systems. Still, Moldbug concedes enough that it should be possible to incorporate most of the value of futarchy within his preferred form of government without rejecting his views. He wants to limit trading to the equivalent of the government’s stockholders. Accepting that limitation isn’t likely to impair the markets much, and may make futarchy more palatable to people who share Moldbug’s superstitions about markets.

Book review: Human Enhancement, edited by Julian Savulescu and Nick Bostrom.

This book starts out with relatively uninteresting articles; only the last quarter or so of it is worth reading.

Because I agree with most of the arguments for enhancement, I skipped some of the pro-enhancement arguments and tried to read the anti-enhancement arguments carefully. They mostly boil down to the claim that people’s preference for natural things is sufficient to justify broad prohibitions on enhancing human bodies and human nature. That isn’t enough of an argument to deserve as much discussion as it gets.

A few of the concerns discussed by advocates of enhancement are worth more thought. The question of whether unenhanced humans would retain political equality and rights enables us to imagine dystopian results of enhancement. Daniel Walker provides a partly correct analysis of conditions under which enhanced beings ought to paternalistically restrict the choices and political power of the unenhanced. But he’s overly complacent about assuming the paternalists will have the interests of the unenhanced at heart. The biggest problem with paternalism to date is that it’s done by people who are less thoughtful about the interests of the people they’re controlling than they are about finding ways to serve their own self-interest. It is possible that enhanced beings will be perfect altruists, but it is far from being a natural consequence of enhancement.

The final chapter points out the risks of being overconfident about our ability to improve on nature. The authors describe questions we should ask about why evolution would have produced a result that is different from what we want. One example they give suggests they remain overconfident – they repeat a standard claim about the human appendix being a result of evolution getting stuck in a local optimum. Recent evidence suggests that the appendix performs a valuable function in recovery from diarrhea (still a major cause of death in some places), and harm from appendicitis seems rare outside of industrialized nations (maybe due to differences in dietary fiber?).

The newest and most provocative ideas in the book have little to do with the medical enhancements that the title evokes. Robin Hanson’s call for mechanisms to make people more truthful probably won’t gather much support, as people are clever about finding objections to any specific method that would be effective. Still, asking the question the way he does may encourage some people to think more clearly about their goals.

Nick Bostrom and Anders Sandberg describe an interesting (original?) hypothesis about why placebos (sometimes) work. It involves signaling that there is relatively little need to conserve the body’s resources for fighting future injuries and diseases. Could this understanding lead to insights about how to more directly and reliably trigger this effect? More effective placebos have been proposed as jokes. Why is it so unusual to ask about serious research into this subject?

Convergence08 had an amazing number of interesting people in attendance. No one person stood out as unusually impressive – it was more that the average was unusually high for a 300 person gathering. I’ll list many small ideas, which partly reflects the fact that I was trying to sample a wide enough variety of sessions that I didn’t manage to absorb any one presentation in depth.

Genescient is a new company whose founders include SF author Greg Benford. It has a strain of fruit flies bred for lifespans more than 4 times normal, and has used their DNA to identify substances that might improve human lifespan. It sounds like they will soon offer dietary supplements which have little risk and a hope of slowing down aging by some hard to predict (probably small) amount.

Advice from Eliezer Yudkowsky (responding to a concern that transhumanists have few children): don’t reproduce until you can code your child from scratch.

Several ideas from a session run by Anders Sandberg:

  • AntiGroupware is designed to remove many social pressures from group decision-making
  • Once it’s easy to make copies of people, political campaigns will be run by large numbers of copies. [This assumes that democracy will attempt to survive – are copies going to be denied votes?]
  • Politicians should be selected from losers of the game Diplomacy. [It might be hard to keep them from deliberately losing, but with big incentives for winning plus a low probability of any one loser becoming a politician, it might work.]

Ideas from a session run by Milton Huang:

  • Keeping Skype video connections open for hours at a time changes remote interactions between two people in ways that make them seem very different from telephone conversations, and more like being physically together
  • We should try to implement a way to transmit hugs remotely
  • We might be able to make people (especially those with autistic tendencies) experience more empathy via an “empathy machine” that measures and reports on what others are feeling

The conference on Human Enhancement Technologies and Human Rights this past weekend had many boring parts and a few interesting tidbits.

Many of the speakers were left-wing ideologues who seemed to be directing their speeches only to others from the same small set of left-wing academics. There were fewer libertarians at the conference than I expected, but still enough that it was strange how much of a disconnect there was between the ideology shown in the speeches and the ideology I knew from elsewhere that many people held but were being quiet about.

There was plenty of concern about whether increased control over one’s body would decrease diversity, but I heard little that enlightened me on that subject. There have clearly been many technologies that increased diversity, such as tattoos. There are some that have decreased diversity because there is a substantial consensus about what’s best (e.g. eyesight – it’s unclear why we should be concerned about a shortage of people who can’t see well enough to drive). Then there are a few traits such as degree of autism where there’s some uncertainty whether reduced diversity would be good. There are some pontificators (I didn’t hear anyone this focused at the conference) who think they know better than the masses what the right amount of diversity is, and that their opinions should be imposed on the masses. But the evidence for the pontificators’ expertise and the masses’ propensity to make mistakes is generally underwhelming, so I can’t find much reason to be as concerned about the effects of enhancement technology as I am about the desire to impose expert opinion on those who don’t want it.

Hank Greely pointed out that the letter of the law authorizes the FDA to regulate anything that could be considered a body enhancement, including clothing. So only the FDA’s interest in obeying the spirit of the law will deter them from regulating external enhancements.

One amusing report of unwanted side effects of an enhancement technology is the increase in sexually transmitted diseases in seniors following the introduction of Viagra.

Aubrey de Grey made an interesting argument that the most effective approach to convincing people to support a cure for aging is to persuade them that they are being logically inconsistent when they fail to do so. He has a point, but it’s weaker than he thinks. He gave several examples of problems that were allegedly solved by persuading society to be more logically consistent, but I generally doubt that’s what happened. One example was tolerance of homosexuality. I see few signs that logical arguments had much effect on that. I think the biggest change came from peer pressure, which became increasingly popular as gays became able to migrate to places where there were enough gays to safely start exerting peer pressure. Another factor was the shift away from the belief that the main purpose of sex should be reproduction. That initially happened due to changing circumstances (reduced reliance on children to support elderly parents). I’d say that has generally produced beliefs that are more inconsistent as people abandon the least convenient symptoms of the belief (e.g. contraception) but are much slower to abandon symptoms that are remote from their experience. I think similar theories could be made about some other examples he gave (slavery becoming more expensive to enforce when railroads made it easier for slaves to escape to a non-slave state).