Science and Technology

Rob Freitas has a good report analyzing how molecular nanotechnology could be used to return atmospheric CO2 to pre-industrial levels by roughly 2060 or 2070.

My only complaint is that his attempt to estimate the equivalent of Moore’s Law for photovoltaics looks too optimistic, as it puts too much weight on the 2006-2008 trend, which was influenced by an abnormal rise in energy prices. If the y-axis on that graph were logarithmic instead of linear, it would be easier to visualize the lower long-term trend.
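To see why a log y-axis helps here: steady exponential improvement (a Moore's-Law-style trend) plots as a straight line on a log scale, so a short-lived spike like 2006-2008 shows up as a deviation from that line rather than dominating the picture. A minimal sketch, with all growth numbers invented for illustration:

```python
import numpy as np

# Hypothetical improvement series: steady 7%/year exponential gain,
# plus a temporary 2006-2008 spike (all numbers illustrative).
years = np.arange(1990, 2011)
rate = 0.07
trend = np.exp(rate * (years - years[0]))
spike = np.where((years >= 2006) & (years <= 2008), 1.5, 1.0)
series = trend * spike

# On a log scale the steady trend is an exactly straight line:
log_trend = np.log(trend)
assert np.allclose(np.diff(log_trend), rate)

# Fitting the slope in log space with the spike years excluded
# recovers the long-term growth rate:
mask = spike == 1.0
slope = np.polyfit(years[mask], np.log(series[mask]), 1)[0]
print(f"fitted long-term growth rate: {slope:.3f}")
```

A straight-line fit on a linear axis, by contrast, is pulled strongly toward the most recent (largest) values, which is exactly the distortion a spike in 2006-2008 would exploit.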

(HT Brian Wang).

Impro

Book review: Impro: Improvisation and the Theatre, by Keith Johnstone.

This book describes aspects of the human mind and social interactions that actors often need to analyze more explicitly than other people do: actors need to be aware of the differences between the various roles and personalities they play, whereas unconscious understanding is adequate for people who only ever interact as a single personality.

The best chapter is the one on status. It emphasizes the important role that status games play in most social situations, and how hard it is to be aware of one's own status-related behavior.

One disturbing claim he makes is that “acquaintances become friends when they agree to play status games together”. I’m very tempted to deny that I do that (as he predicts most people will deny acting). But I know there’s more happening in social interactions than I’m aware of, so I’m hesitant to dismiss his claim.

The chapter on spontaneity contains what appear to be important insights about the role self-censorship plays in spontaneity and creativity. But I find it hard enough to change my behavior in response to those insights that I can't be confident he's correct.

He has the insight that “personality” functions as a public-relations department for the mind. Personality doesn’t seem like quite the right word here, but this is remarkably similar to an idea that Geoffrey Miller later developed from evolutionary theory in his excellent book The Mating Mind.

The chapter on masks and trance is strange and hard to evaluate.

Some comments on last weekend’s Foresight Conference:

At lunch on Sunday I was in a group dominated by a discussion between Robin Hanson and Eliezer Yudkowsky over the relative plausibility of new intelligences having a variety of different goal systems versus a single goal system (as in a society of uploads versus Friendly AI). Some of the debate focused on how unified existing minds are, with Eliezer claiming that dogs mostly don’t have conflicting desires in different parts of their minds, and Robin and others claiming such conflicts are common (e.g. when deciding whether to eat food the dog has been told not to eat).

One test Eliezer suggested of the power of a unified goal system: if Robin were right, bacteria would have outcompeted humans. That got me wondering whether there's an appropriate criterion by which humans can be said to have outcompeted bacteria. The most obvious criterion on which humans and bacteria compete is how many copies of their DNA exist. Using biomass as a proxy, bacteria are winning by several orders of magnitude. Another possible criterion is impact on large-scale features of Earth. Humans have not yet done anything that seems as big as the catastrophic changes to the atmosphere (the "oxygen crisis") produced by bacteria. Am I overlooking other appropriate criteria?
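The biomass gap can be made concrete with a back-of-the-envelope calculation. The figures below are rough published estimates (on the order of 70 gigatonnes of carbon for bacteria versus well under 0.1 for humans); treat them as assumptions for illustration:

```python
import math

# Rough global biomass estimates in gigatonnes of carbon
# (illustrative figures from published surveys; treat as assumptions).
bacteria_gt_c = 70.0
humans_gt_c = 0.06

ratio = bacteria_gt_c / humans_gt_c
orders_of_magnitude = math.log10(ratio)
print(f"bacteria/humans ≈ {ratio:.0f}x, ~{orders_of_magnitude:.1f} orders of magnitude")
```

So on these numbers "several orders of magnitude" works out to roughly three.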

Kartik Gada described two humanitarian innovation prizes that bear some resemblance to a valuable approach to helping the world's poorest billion people, but that will be hard to turn into something with a reasonable chance of success.

The Water Liberation Prize would be pretty hard to judge. Suppose I submit a water filter that I claim qualifies for the prize. How will the judges test the drinkability of the water and the reusability of the filter under common third-world conditions (which I suspect vary a lot, and which probably won't be adequately duplicated where the judges live)? Will they ship sample devices to a number of third-world locations and ask whether they produce water that tastes good, or will they do rigorous tests of water safety? With a hoped-for prize of $50,000, I doubt they can afford very good tests.

The Personal Manufacturing Prizes seem somewhat more carefully thought out, but need some revision. The "three different materials" criterion is not enough to rule out overly specialized devices without clear guidelines about which differences are important and which are trivial. Setting specific award dates appears to assume an implausible ability to predict how soon such a device will become feasible. The possibility that some parts of the device are patented is tricky to handle, as it isn't cheap to verify the absence of crippling patents.

There was a debate on futarchy between Robin Hanson and Mencius Moldbug. Moldbug’s argument seems to boil down to the absence of a guarantee that futarchy will avoid problems related to manipulation/conflicts of interest. It’s unclear whether he thinks his preferred form of government would guarantee any solution to those problems, and he rejects empirical tests that might compare the extent of those problems under the alternative systems. Still, Moldbug concedes enough that it should be possible to incorporate most of the value of futarchy within his preferred form of government without rejecting his views. He wants to limit trading to the equivalent of the government’s stockholders. Accepting that limitation isn’t likely to impair the markets much, and may make futarchy more palatable to people who share Moldbug’s superstitions about markets.

Book review: Moral Machines: Teaching Robots Right from Wrong by Wendell Wallach and Colin Allen.

This book combines the ideas of leading commentators on ethics, methods of implementing AI, and the risks of AI, into a set of ideas on how machines ought to achieve ethical behavior.

The book mostly provides an accurate survey of what those commentators agree and disagree about. But there’s enough disagreement that we need some insights into which views are correct (especially about theories of ethics) in order to produce useful advice to AI designers, and the authors don’t have those kinds of insights.

The book focuses mainly on near-term risks from software that is much less intelligent than humans, and is complacent about the risks of superhuman AI.

Thinking about superhuman AIs ought to illuminate flaws in theories of ethics that aren't obvious when considering purely human-level intelligence. For example, the authors mention an argument that any AI would value humans for their diversity of ideas, which would help AIs search the space of possible ideas. This argument has serious problems; for instance, what stops an AI from fiddling with human minds to increase their diversity? Yet the authors are too focused on human-like minds to imagine an intelligence that would do that.

Their discussion of the advocates of friendly AI seems a bit confused. The authors wonder whether those advocates are trying to quell apprehension about AI risks, when I've observed pretty consistent efforts by those advocates to create apprehension among AI researchers.

Book review: What Intelligence Tests Miss – The Psychology of Rational Thought by Keith E. Stanovich.

Stanovich presents extensive evidence that rationality is very different from what IQ tests measure, and the two are only weakly related. He describes good reasons why society would be better if people became more rational.

He is too optimistic that becoming more rational will help most people who accomplish it. Overconfidence provides widespread benefits to people who use it in job interviews, political discussions, etc.

He gives some advice on how to be more rational, such as thinking the opposite of each new hypothesis you are about to start believing. But will training yourself to do that on test problems cause you to do it when it matters? I don’t see signs that Stanovich practiced it much while writing the book. The most important implication he wants us to draw from the book is that we should develop and use Rationality Quotient (RQ) tests for at least as many purposes as IQ tests are used. But he doesn’t mention any doubts that I’d expect him to have if he thought about how rewarding high RQ scores might affect the validity of those scores.

He reports that high-IQ people can avoid some framing effects and overconfidence, but only when told to do so. Also, the sunk-cost-bias test looks easy to learn to score well on, even when it's hard to practice the right behavior. The Bruine de Bruin, Parker and Fischhoff paper that Stanovich implies is the best attempt so far to produce an RQ test lists a sample sunk-cost question that involves abandoning food when you're too full at a restaurant. It's obvious which answer produces a higher RQ score, but that doesn't say much about how I'd behave when the food is in front of me.

He sometimes writes as if rationality were as close to being a single mental ability as IQ is, but at other times he implies it isn’t. I needed to read the Bruine de Bruin, Parker and Fischhoff paper to get real evidence. Their path independence component looks unrelated to the others. The remaining components have enough correlation with each other that there may be connections between them, but those correlations are lower than the correlations between the overall rationality score and IQ tests. So it’s far from clear whether a single RQ score is better than using the components as independent tests.
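The pattern described above (component tests correlating only modestly with each other, yet the composite score correlating more strongly with an outside measure like IQ) is statistically unsurprising: averaging components cancels out their independent noise. A toy simulation, with all scores and loadings invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# One shared "rationality" factor, plus independent noise in each
# of four hypothetical component tests.
factor = rng.normal(size=n)
components = np.array([0.5 * factor + rng.normal(size=n) for _ in range(4)])
iq = 0.5 * factor + rng.normal(size=n)  # IQ also loads on the shared factor

# Mean pairwise correlation between the component tests
# (theoretical value here: 0.25 / 1.25 = 0.20).
corr = np.corrcoef(components)
pairwise = corr[np.triu_indices(4, k=1)].mean()

# Correlation of the composite (mean of components) with IQ
# (theoretical value here: about 0.32).
composite = components.mean(axis=0)
composite_iq = np.corrcoef(composite, iq)[0, 1]

print(f"mean pairwise component corr: {pairwise:.2f}")
print(f"composite-IQ corr:            {composite_iq:.2f}")
```

So the correlations Stanovich reports don't by themselves settle whether rationality is one ability or several; they're consistent with either reading.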

Given the importance he attaches to testing for and rewarding rationality, it’s disappointing that he devotes so little attention to how to do that.

He has some good explanations of why evolution would have produced minds with the irrational features we observe. He’s much less impressive when he describes how we should classify various biases.

I was occasionally annoyed that he treats disrespect for scientific authority as if it were equivalent to irrationality. The evidence for Bigfoot or extraterrestrial visitors may be too flimsy to belong in scientific papers, but when he says there's "not a shred of evidence" for them, he's either using a meaning of "evidence" that's inappropriate when discussing the rationality of people who may be sensibly lazy about gathering relevant data, or he's simply wrong.

Book review: Create Your Own Economy: The Path to Prosperity in a Disordered World by Tyler Cowen.

This somewhat misleadingly titled book is mainly about the benefits of neurodiversity, how changing technology is altering our styles of thought, and how we ought to improve them.

His perspective on these subjects usually reflects a unique way of ordering his thoughts about the world. Few things he says seem particularly profound, but he persistently provides new ways to frame our understanding of the human mind that will sometimes yield better insights than conventional ways of looking at these subjects. Even if you think you know a good deal about autism, he’ll illuminate some problems with your stereotypes of autistics.

Even though it is marketed as an economics book, it has only about one page on financial matters, but that page is an eloquent summary of two factors that are important causes of our recent problems.

He’s an extreme example of an infovore who processes more information than most people can imagine. E.g. “Usually a blog will fail if the blogger doesn’t post … at least every weekday.” His idea of failure must be quite different from mine, as I more often stop reading a blog because it has too many posts than because it goes a few weeks without a post.

One interesting tidbit hints that healthcare costs might be high because telling patients their treatment was expensive may enhance the placebo effect, much like charging more for a given bottle of wine makes it taste better.

The book’s footnotes aren’t as specific as I would like, and sometimes leave me wondering whether he’s engaging in wild speculation or reporting careful research. His conjecture that “self-aware autistics are especially likely to be cosmopolitans in their thinking” sounds like something that results partly from the selection biases that come from knowing more autistics who like economics than autistics who hate economics. I wish he’d indicated whether he found a way to avoid that bias.

This review by Cosma Shalizi of James Flynn’s book What Is Intelligence? provides some interesting criticisms of Flynn (while agreeing with much of what Flynn says).

Shalizi’s most important argument is that Flynn and others who attach a good deal of importance to g haven’t made much of an argument that it measures a single phenomenon.

After a century of IQ testing, there is still no theory that says which questions belong on an intelligence test, just correlational analyses and tradition.

Flynn and others have good arguments that whatever g measures is important. But Shalizi leaves me with the impression that the only way to decide whether it's a single phenomenon is to compare its usefulness to models which describe multiple flavors of intelligence. So far, the attempts at such models that I've looked at seem underwhelming. Maybe that means trying to break down intelligence into components which deserve separate measures isn't fruitful, but it might also mean that the people who might do a good job of it have been scared away by the political controversies over IQ.

HT Kenny Easwaran.

Book review: Human Enhancement, edited by Julian Savulescu and Nick Bostrom.

This book starts out with relatively uninteresting articles; only the last quarter or so of it is worth reading.

Because I agree with most of the arguments for enhancement, I skipped some of the pro-enhancement arguments and tried to read the anti-enhancement arguments carefully. They mostly boil down to the claim that people’s preference for natural things is sufficient to justify broad prohibitions on enhancing human bodies and human nature. That isn’t enough of an argument to deserve as much discussion as it gets.

A few of the concerns discussed by advocates of enhancement are worth more thought. The question of whether unenhanced humans would retain political equality and rights enables us to imagine dystopian results of enhancement. Daniel Walker provides a partly correct analysis of conditions under which enhanced beings ought to paternalistically restrict the choices and political power of the unenhanced. But he’s overly complacent about assuming the paternalists will have the interests of the unenhanced at heart. The biggest problem with paternalism to date is that it’s done by people who are less thoughtful about the interests of the people they’re controlling than they are about finding ways to serve their own self-interest. It is possible that enhanced beings will be perfect altruists, but it is far from being a natural consequence of enhancement.

The final chapter points out the risks of being overconfident about our ability to improve on nature. The authors describe questions we should ask about why evolution would have produced a result different from what we want. One example they give suggests that they remain overconfident: they repeat a standard claim about the human appendix being a result of evolution getting stuck in a local optimum. Recent evidence suggests that the appendix performs a valuable function in recovery from diarrhea (still a major cause of death in some places), and harm from appendicitis seems rare outside of industrialized nations (maybe due to differences in dietary fiber?).

The most new and provocative ideas in the book have little to do with the medical enhancements that the title evokes. Robin Hanson’s call for mechanisms to make people more truthful probably won’t gather much support, as people are clever about finding objections to any specific method that would be effective. Still, asking the question the way he does may encourage some people to think more clearly about their goals.

Nick Bostrom and Anders Sandberg describe an interesting (original?) hypothesis about why placebos (sometimes) work. It involves signaling that there is relatively little need to conserve the body’s resources for fighting future injuries and diseases. Could this understanding lead to insights about how to more directly and reliably trigger this effect? More effective placebos have been proposed as jokes. Why is it so unusual to ask about serious research into this subject?

Book review: Greatness: Who Makes History and Why by Dean Keith Simonton.

This broad and mediocre survey of the psychology of people who stand out in history probably contains a fair number of good ideas, but it's hard to separate them from the many ideas that are questionable guesses. He's inconsistent about distinguishing his guesses from claims backed by good evidence.

One of the clearest examples is his assertion that childhood adversity builds character. He presents evidence that eminent figures were unusually likely to have had a parent die early, and describes this as the "most impressive proof" of his claim. He ignores the possibility that those people come from families with a pattern of taking risks unusual enough to explain that evidence.

In other places, he makes mistakes that seemed reasonable when the book was published, such as "Mendelian laws of inheritance are blind to whether an individual is first-born or later-born" (parental age has a measurable effect on mutation rates).

He avoids some of the worst mistakes that a psychology of history could make, such as trying to psychoanalyze individuals without having enough information about them.

He mentions some approaches to analyzing presidential addresses and corporate letters to stockholders, which have some potential to be used in predicting whether leaders have the appropriate personality for their jobs. I wonder what would happen if many voters/stockholders demanded that leaders pass tests of this nature (I'm assuming the tests can be scored objectively, but that may be a shaky assumption). I'm confident that we'd get leaders whose rhetoric passes those tests. Would that simply mean the leaders change their rhetoric, or would it be hard enough to maintain a mismatch between rhetoric and thought patterns that we'd get leaders with better thought patterns?