mind uploading


Convergence08 had an amazing number of interesting people in attendance. No one person stood out as unusually impressive – it was more that the average was unusually high for a 300-person gathering. I’ll list many small ideas below, partly because I sampled a wide enough variety of sessions that I didn’t absorb any one presentation in depth.

Genescient is a new company whose founders include SF author Greg Benford. It has a strain of fruit flies bred for lifespans more than 4 times normal, and has used their DNA to identify substances that might improve human lifespan. It sounds like the company will soon offer dietary supplements that carry little risk and some hope of slowing aging by a hard-to-predict (probably small) amount.

Advice from Eliezer Yudkowsky (responding to a concern that transhumanists have few children): don’t reproduce until you can code your child from scratch.

Several ideas from a session run by Anders Sandberg:

  • AntiGroupware is designed to remove many social pressures from group decision-making
  • Once it’s easy to make copies of people, political campaigns will be run by large numbers of copies. [This assumes that democracy manages to survive – are copies going to be denied votes?]
  • Politicians should be selected from losers of the game Diplomacy [It might be hard to keep them from deliberately losing, but with big incentives for winning plus a low probability of any one loser becoming a politician, it might work.]

Ideas from a session run by Milton Huang:

  • Keeping Skype video connections open for hours at a time changes remote interactions between two people, making them feel very different from telephone conversations and more like being physically together
  • We should try to implement a way to transmit hugs remotely
  • We might be able to make people (especially those with autistic tendencies) experience more empathy via an “empathy machine” that measures and reports on what others are feeling

Book review: Reasons and Persons by Derek Parfit.
This book does a very good job of pointing out inconsistencies in common moral intuitions, and does a very mixed job of analyzing how to resolve them.
The largest section of the book deals with personal identity, using a bit of neuroscience plus scenarios such as a Star Trek transporter to show that nonreductionist approaches produce conclusions strange enough to disturb most people. I suspect this analysis was fairly original when it was written, but I’ve seen most of the ideas elsewhere. His analysis is more compelling than most other versions, but it’s not concise enough for many people to read it.
The most valuable part of the book is the last section, weighing conflicts of interest between actual people and people who could potentially exist in the future. His description of the mere addition paradox convinced me that it’s harder than I thought to specify plausible beliefs which don’t lead to the Repugnant Conclusion (i.e. that some very large number of people with lives barely worth living can be a morally better result than some smaller number of very happy people; the sketch below shows the arithmetic). He ends by concluding that he hasn’t found a way to resolve the conflicts between the principles he thinks morality ought to satisfy.
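To make the Repugnant Conclusion concrete, here is the total-utilitarian arithmetic in a minimal sketch; the numbers are illustrative choices of mine, not Parfit’s:

    # Illustrative total-utilitarian arithmetic -- my numbers, not Parfit's.
    # Welfare is on a scale where 0 marks a life barely worth living.
    happy_world   = 10_000 * 100.0      # 10,000 very happy people
    crowded_world = 10_000_000 * 0.2    # 10 million people barely above 0
    print(happy_world, crowded_world)   # 1000000.0 2000000.0

Summing welfare ranks the crowded world higher – the Repugnant Conclusion. Switching to average welfare reverses that ranking, but then runs into the mere addition paradox instead.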
It appears that if he had applied the critical analysis that makes up most of the book to the principle of impersonal ethics, he would have seen signs that his dilemma results from trying to satisfy incompatible intuitions. The human desire for more impersonal ethical rules is widespread when the changes involved are close to Pareto improvements, but human intuition seems generally incompatible with impersonal ethical rules that are as far from Pareto improvements as the Repugnant Conclusion appears to be. Thus it appears Parfit could resolve the dilemma only by finding a source of morality that transcends human intuition and logical consistency (he wisely avoids looking for non-human sources of morality, but intuition doesn’t seem quite the right way to find a human source), or by resolving the conflicting intuitions people seem to have about impersonal ethics.
The most disappointing part of the book is the argument that consequentialism is self-defeating. The critical part of his argument involves a scenario where a mother must choose between saving her child and saving two strangers. His conclusion depends on an assumption about the special relationship between parent and child which consequentialists have no obvious obligation to agree with. He isn’t clear enough about what that assumption is for me to figure out why we disagree.
I find it especially annoying that the book’s index only covers names, since it’s a long book whose subjects aren’t simple enough for me to fully remember.

Several posts on EconLog recently have assumed that human capital will be sufficient for the authors’ children to prosper in a Kurzweilian future.
That is a very risky assumption. Human capital has historically been a good investment largely because there have been few innovations that made it easier to produce more humans. Kurzweil’s forecasts imply that around 2040 or 2050 the cost of duplicating a human-equivalent intelligence will plunge. That means that for most kinds of jobs the supply of labor should be expected to become nearly unlimited, and in the absence of substantial monopoly, the price of labor under a Kurzweil scenario should approach zero (a toy extrapolation follows below). Maybe something will guarantee everyone a luxurious lifestyle in a world where there’s little reason for salaries, but I’d rather hedge my bets and accumulate financial assets.
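A minimal sketch of that extrapolation; every parameter is an illustrative assumption of mine, not a figure from Kurzweil:

    # Toy cost extrapolation -- all numbers are illustrative assumptions.
    cost_now = 1e9        # assumed cost today of human-equivalent compute, $
    halving_years = 2.0   # assumed price-performance halving time, years
    wage_floor = 2e4      # assumed annual wage a copy must undercut, $

    year, cost = 2008, cost_now
    while cost > wage_floor:
        cost /= 2
        year += halving_years
    print(f"crossover around {year:.0f}")  # ~2040 with these assumptions

The exact crossover date moves around a lot with the assumed starting cost and halving time, which is consistent with the width of Kurzweil’s 2040–2050 window.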
See Robin Hanson’s analysis for a more detailed argument.

Book Review: The Singularity Is Near: When Humans Transcend Biology by Ray Kurzweil
Kurzweil does a good job of arguing that extrapolating trends such as Moore’s Law works better than most alternative forecasting methods, and he does a good job of describing the implications of those trends. But he is a bit long-winded, and tries to hedge his methodology by pointing to specific research results which he seems to think buttress his conclusions. He convinces me neither that he is good at distinguishing hype from value when analyzing current projects, nor that doing so would help with the longer-term forecasting that constitutes the important aspect of the book.
Given the title, I was slightly surprised that he predicts AIs will become powerful slightly more gradually than I recall him suggesting previously (which is a good deal more gradually than most Singularitarians expect). He offsets this by predicting more dramatic changes in the 22nd century than I had imagined could be extrapolated from existing trends.
His discussion of the practical importance of reversible computing is clearer than anything else I’ve read on this subject.
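The key number behind that discussion is Landauer’s limit: erasing a bit of information dissipates at least kT·ln 2 of energy, while reversible operations face no such floor. A quick back-of-the-envelope calculation (my sketch, not the book’s):

    # Landauer's principle: erasing one bit costs at least k*T*ln(2).
    # Reversible operations avoid this floor entirely.
    import math

    k = 1.380649e-23   # Boltzmann constant, J/K
    T = 300.0          # room temperature, K
    e_bit = k * T * math.log(2)
    print(f"{e_bit:.2e} J per erased bit")   # ~2.87e-21 J
    # At 10^20 irreversible bit erasures per second, the floor alone is:
    print(f"{e_bit * 1e20:.2f} W of unavoidable heat")  # ~0.29 W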
When he gets specific, large parts of what he says seem almost right, but there are quite a few details that are misleading enough that I want to quibble with them.
For instance (on page 244, talking about the world circa 2030): “The bulk of the additional energy needed is likely to come from new nanoscale solar, wind, and geothermal technologies.” Yet he says little to justify this, and most of what I know suggests that wind and geothermal have little hope of satisfying more than 1 or 2 percent of new energy demand.
His reference on page 55 to “the devastating effect that illegal file sharing has had on the music-recording industry” seems to say something undesirable about his perspective.
His comments on economists’ thoughts about deflation are confused and irrelevant.
On page 92 he says “Is the problem that we are not running the evolutionary algorithms long enough? … This won’t work, however, because conventional genetic algorithms reach an asymptote in their level of performance, so running them for a longer period of time won’t help.” If “conventional” excludes genetic programming, then maybe his claim is plausible (a toy illustration of such a plateau follows below). But genetic programming originator John Koza claims his results keep improving when he uses more computing power.
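A minimal sketch of the kind of plateau Kurzweil describes, on a deliberately easy problem of my own choosing (one-max with a small fixed population; nothing here comes from the book or from Koza’s work):

    # Toy genetic algorithm on one-max: best fitness climbs, then flatlines.
    import random

    GENOME, POP, GENS = 64, 30, 300
    MUTATION = 1.0 / GENOME

    def fitness(g):
        return sum(g)  # one-max: count the 1 bits

    def mutate(g):
        return [b ^ (random.random() < MUTATION) for b in g]

    def crossover(a, b):
        cut = random.randrange(1, GENOME)
        return a[:cut] + b[cut:]

    pop = [[random.randint(0, 1) for _ in range(GENOME)] for _ in range(POP)]
    for gen in range(GENS):
        pop.sort(key=fitness, reverse=True)
        if gen % 50 == 0:
            print(gen, fitness(pop[0]))
        parents = pop[:POP // 2]
        pop = parents + [mutate(crossover(*random.sample(parents, 2)))
                         for _ in range(POP - len(parents))]

Once the population converges near the 64-bit optimum, extra generations buy nothing – the asymptote Kurzweil describes. Koza’s point is that genetic programming on open-ended representations doesn’t hit this kind of ceiling as readily.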
His description of nanotech progress seems naive. (page 228): “Drexler’s dissertation … laid out the foundation and provided the road map still being followed today.” (page 234): “each aspect of Drexler’s conceptual designs has been validated”. I’ve been following this area pretty carefully, and I’m aware of some computer simulations which do a tiny fraction of what is needed, but if any lab research is being done that could be considered to follow Drexler’s road map, it’s a well-kept secret. Kurzweil then offsets his lack of documentation for those claims by going overboard in documenting his accurate claim that “no serious flaw in Drexler’s nanoassembler concept has been described”.
Kurzweil argues that self-replicating nanobots will sometimes be desirable. I find this poorly thought out. His reasons for wanting them could be satisfied by nanobots that replicate under the control of a responsible AI.
I’m bothered by his complacent attitude toward the risks of AI. He sometimes hints that he is concerned, but his suggestions for dealing with the risks don’t indicate that he has given much thought to the subject. He has a footnote that mentions Yudkowsky’s Guidelines on Friendly AI. The context could lead readers to think they are comparable to the Foresight Guidelines on Molecular Nanotechnology. Alas, Yudkowsky’s guidelines depend on concepts which are hard enough to understand that few researchers are likely to comprehend them, and the few who have tried disagree about their importance.
Kurzweil’s thoughts on the risk that the simulation we may be living in will be turned off are somewhat interesting, but less thoughtful than Robin Hanson’s essay on How To Live In A Simulation.
A couple of nice quotes from the book:
(page 210): “It’s mostly in your genes” is only true if you take the usual passive attitude toward health and aging.
(page 301): Sex has largely been separated from its biological function. … So why don’t we provide the same for … another activity that also provides both social intimacy and sensual pleasure – namely, eating?