
All posts by Peter

Book review: Moral Machines: Teaching Robots Right from Wrong by Wendell Wallach and Colin Allen.

This book combines the ideas of leading commentators on ethics, methods of implementing AI, and the risks of AI, into a set of ideas on how machines ought to achieve ethical behavior.

The book mostly provides an accurate survey of what those commentators agree and disagree about. But there’s enough disagreement that we need some insights into which views are correct (especially about theories of ethics) in order to produce useful advice to AI designers, and the authors don’t have those kinds of insights.

The book focuses more on the near-term risks of software that is much less intelligent than humans, and is complacent about the risks of superhuman AI.

The implications of superhuman AIs for theories of ethics ought to illuminate flaws in them that aren’t obvious when considering purely human-level intelligence. For example, the authors mention an argument that any AI would value humans for their diversity of ideas, which would help AIs to search the space of possible ideas. This argument has serious problems, such as: what stops an AI from fiddling with human minds to increase their diversity? Yet the authors are too focused on human-like minds to imagine an intelligence that would do that.

Their discussion of the advocates of friendly AI seems a bit confused. The authors wonder if those advocates are trying to quell apprehension about AI risks, when I’ve observed pretty consistent efforts by those advocates to create apprehension among AI researchers.

Book review: What Intelligence Tests Miss – The Psychology of Rational Thought by Keith E. Stanovich.

Stanovich presents extensive evidence that rationality is very different from what IQ tests measure, and the two are only weakly related. He describes good reasons why society would be better if people became more rational.

He is too optimistic that becoming more rational will help most people who accomplish it. Overconfidence provides widespread benefits to people who use it in job interviews, political discussions, etc.

He gives some advice on how to be more rational, such as thinking the opposite of each new hypothesis you are about to start believing. But will training yourself to do that on test problems cause you to do it when it matters? I don’t see signs that Stanovich practiced it much while writing the book. The most important implication he wants us to draw from the book is that we should develop and use Rationality Quotient (RQ) tests for at least as many purposes as IQ tests are used. But he doesn’t mention any doubts that I’d expect him to have if he thought about how rewarding high RQ scores might affect the validity of those scores.

He reports that high IQ people can avoid some framing effects and overconfidence, but only when told to do so. Also, the sunk cost bias test looks easy to learn how to score well on, even when it’s hard to practice the right behavior – the Bruine de Bruin, Parker and Fischhoff paper that Stanovich implies is the best attempt so far to produce an RQ test lists a sample question for the sunk costs bias that involves abandoning food when you’re too full at a restaurant. It’s obvious what answer produces a higher RQ score, but that doesn’t say much about how I’d behave when the food is in front of me.

He sometimes writes as if rationality were as close to being a single mental ability as IQ is, but at other times he implies it isn’t. I needed to read the Bruine de Bruin, Parker and Fischhoff paper to get real evidence. Their path independence component looks unrelated to the others. The remaining components have enough correlation with each other that there may be connections between them, but those correlations are lower than the correlations between the overall rationality score and IQ tests. So it’s far from clear whether a single RQ score is better than using the components as independent tests.

Given the importance he attaches to testing for and rewarding rationality, it’s disappointing that he devotes so little attention to how to do that.

He has some good explanations of why evolution would have produced minds with the irrational features we observe. He’s much less impressive when he describes how we should classify various biases.

I was occasionally annoyed that he treats disrespect for scientific authority as if it were equivalent to irrationality. The evidence for Bigfoot or extraterrestrial visitors may be too flimsy to belong in scientific papers, but when he says there’s “not a shred of evidence” for them, he’s either using a meaning of “evidence” that’s inappropriate when discussing the rationality of people who may be sensibly lazy about gathering relevant data, or he’s simply wrong.

Book review: Finding Alpha: The Search for Alpha When Risk and Return Break Down by Eric Falkenstein.

This book presents mostly convincing arguments that refute the basic principle of CAPM that riskier investments are rewarded with higher returns, and the relation between risk and returns is better explained by modeling investors as wanting high returns relative to other investors rather than high absolute returns. But the quality of the arguments is quite variable. Much of the book assumes a good understanding of finance theory. If you don’t understand the importance of a Sharpe ratio, you’re not in his target audience.
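For readers who want a refresher before diving in: the Sharpe ratio is a portfolio’s excess return over the risk-free rate, divided by the volatility of those excess returns. A minimal sketch of the computation, using made-up monthly return numbers purely for illustration:

```python
# Sharpe ratio: mean excess return over the risk-free rate,
# divided by the standard deviation of those excess returns.
import statistics

def sharpe_ratio(returns, risk_free_rate=0.0):
    excess = [r - risk_free_rate for r in returns]
    return statistics.mean(excess) / statistics.stdev(excess)

# Made-up monthly returns, with a 0.3% monthly risk-free rate.
monthly_returns = [0.02, -0.01, 0.03, 0.01, -0.02, 0.04]
print(round(sharpe_ratio(monthly_returns, risk_free_rate=0.003), 3))
```

Two portfolios with the same average return can thus have very different Sharpe ratios if one is much more volatile – which is why the ratio, rather than raw return, is the natural yardstick in arguments like Falkenstein’s.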

I was not convinced by his most heavily emphasized empirical claim, that returns on equities are unrelated to beta because controlling for size eliminates the apparent relation. There’s enough connection between size and risk that this raises many questions he doesn’t answer (e.g. JB Berk, A critique of size-related anomalies). But later on he devotes a chapter to a wide variety of evidence that overcomes these concerns, and somewhat supports his claim that for riskier investments, the correlation between risk and return is negative (for the safest investments, it’s positive). And the authoritative Fama and French paper has more convincing evidence about beta – even without controlling for size, the correlation between beta and returns vanished during the 1963 to 1990 period.

He claims that the equity risk premium is effectively zero for a typical investor. His attempt to add up the different adjustments is confusing. He concludes with a table showing size adjustments to the standard estimate that add up to a mind-boggling 15 percentage points, which, subtracted from a standard estimate of roughly 6 percent, would result in a “premium” of -9 percent or so. But adding them is clearly wrong – the tax adjustment assumes the absence of some of the other adjustments. Still, the arguments he assembles from other researchers imply a good chance that the sign of the equity risk premium varies with the time period over which it’s measured.

He suggests some strategies to invest more wisely as a result of the ideas he presents, which he aptly summarizes as “selling hope relative to the market” (i.e. treating volatile stocks as overpriced due to a hope premium). But claiming this produces “superior returns, with less risk however measured” is too strong. Financial risk is not the only relevant measure of risk. Following his advice has social risks that he hints at elsewhere. Being invested in boring stocks in a bubble impairs your ability to engage in some interesting conversations, and you won’t make up for that by mentioning how you outperform the market in times when others want to avoid remembering their investments. Is it possible to minimize both kinds of risks by investing token amounts in ways that trendy folks are talking about, and investing most of your money to maximize your Sharpe ratio? Or does that require too much cognitive dissonance?

The book encourages pessimism, especially about the effects of people wanting relative wealth, and makes disturbing claims such as “Envy is necessary for compassion”.

He provides a number of other good ideas about investing, such as the possibility that the internet bubble adds a big anomaly to many data sets used for backtesting.

Book review: Outliers: The Story of Success by Malcolm Gladwell.

Gladwell has taken what would be a few ordinary blog posts and added enough eloquent fluff to them to make them into a book. There is probably a good deal of truth to his conclusions, but the evidence is much weaker than he wants you to think.

For his claim that 10,000 hours of practice are needed to become an expert, he doesn’t discuss the possibility that the causality often runs the opposite way: having the talent to become an expert creates a desire to practice a lot. He gives at least one example where the person seemed to lack expertise before getting the 10,000 hours of practice, but it’s not hard to imagine a variety of immaturity-related reasons why that might happen without the amount of practice causing the expertise.

I’m confused by his claims about how much practice he thinks the Beatles had before becoming successful. He points to somewhere between 1,200 and 1,800 hours of practice they had by late 1962 (which is about when Wikipedia indicates they became successful in the UK). Gladwell seems to say they weren’t successful until they came to the US in February 1964. He implies that they had 10,000 hours of practice by then, but I don’t see how he could claim they had much more than 3,000 hours of practice by then. So calling the 10,000 hour estimate a rule appears to involve a good deal of exaggeration.

Book review: Capitalism with Chinese Characteristics: Entrepreneurship and the State by Yasheng Huang.

This is the most insightful book I’ve read so far on the Chinese economy. Most commentators only look at the most readily available data, but Huang dug through many obscure detailed records that were less likely to be manipulated.

The most important point of the book is to show that the widely held view of China as having gradual, steady improvement since 1978 is wrong. There was a dramatic political change in 1978 that allowed the rural parts of China (which still account for a large part of the economy, and where entrepreneurial culture had not been stamped out by communism) to prosper. Then starting in 1989 urban-focused leaders stifled rural businesses, causing stagnation there until 2002, when leaders more friendly to rural business gained power and allowed fairly healthy growth to resume.

Meanwhile urban areas have been dominated by crony capitalism which produced a good deal of gdp growth through massive state-directed investment in large companies, especially in the 1990s. This growth has produced fewer benefits to the average person than gdp numbers would lead us to expect.

Most of China’s success has been due to private enterprise. Beliefs that state-run businesses have produced growth are partly due to confusing reports about which companies are private.

I’m fairly impressed by the documentation of the changes in the rural political climate, but since the author seems to be the only one reading his sources of data and since it would be very time consuming to check them, it would be easy for errors to go unnoticed. For urban issues, he appears to be overstating the importance of problems that are not unique to China.

He partly clears up the puzzle of China doing better than should be expected for a country whose legal system doesn’t provide much rule of law. He provides evidence that some of the most important successes depend on British law imported via Hong Kong. But he doesn’t provide enough evidence to tell us how important this effect has been.

He leaves unanswered many questions I’d like answered. Why did government policies undergo these changes? Is the surprisingly steady reported gdp growth mostly the result of manipulated statistics? How much of the growth has been an investment bubble, and how much is sustainable? How did entrepreneurial culture survive communism in rural China so much better than in other countries?

How can a hospital-like business operating outside of existing territorial jurisdictions avoid harassment by governments whose medical lobbies want to spread FUD?

Given that these businesses will initially have no track record to point to and less protection than existing medical tourism providers from whatever government provides a flag of convenience to the business, merely providing comparable quality medical care won’t be enough for such businesses to thrive. So I’m proposing practices which could enable those businesses to argue that current U.S. hospitals are more dangerous. I’m not suggesting this just for marketing purposes – I want safe hospitals to be available, and regulatory costs in the U.S. make it easier to start an innovative hospital offshore than in the U.S. (especially for types of innovation that don’t respect doctors’ prestige).

It has been known since 1847 that doctors kill patients by failing to wash their hands often enough. Yet this threat is still common. An offshore hospital could offer patients documentation showing when medical personnel who touch the patient washed their hands (e.g. by providing the patient with video recordings of the procedures sufficient for the patient to verify cleanliness), with a double your money back guarantee. There are many other less common errors that patients could use such videos to check for.

The book Counting Sheep argues that hospitals often impair patients’ health by disturbing their sleep. Paying patients if night-time noise or light levels exceed some pre-specified limits should reduce this problem.

Next, I want the hospital’s fee structure to give it increased incentives to avoid failure. For procedures with objectively measurable results, I want the hospital to charge the patient only if those results are achieved, and to pay the patient some pre-specified amount if results leave the patient measurably worse off. (For hard to measure results such as change in pain, this approach won’t work).

The article You Get What You Pay For: Result-Based Compensation for Health Care has more extensive discussion of incentives and of strategies that hospitals might use to reduce the rate at which they harm patients.

I once proposed using life expectancy as the primary indicator of what society should try to maximize.

Recently there have been reports that life expectancy is negatively correlated with standard measures of economic growth. I accept the conclusion that depressions and recessions are less harmful than is commonly believed, but I want to point out the dangers of looking at only the life expectancy in the same year as an event that influences life expectancy. Depressions may have harmful effects that take a decade to show up in life expectancy figures (e.g. long-term wealth effects, effects on willingness to wage war, etc). So I’d like to see how life expectancy averaged over the ensuing 10 or 15 years correlates with a year’s gdp change.

I attended about 2/3 of the recent Seasteading conference. There were plenty of interesting people there. But I became less optimistic that seasteading will be implemented within the next decade.

The most discouraging news was that floating breakwaters probably won’t work when propulsion is used to control location. They might work if anchored (which requires shallow water, providing only a little usable area outside territorial waters), and should still work with seasteads that drift where the currents take them (suitable only for people comfortable with being isolated).

The medical tourism ship business idea had last year seemed the most promising stepping stone on the way to seasteading. This year’s talk by Na’ama Moran on that subject provided better talking points that might be used to interest investors, but had nothing resembling a business plan. A year ago there was some hope that moderate changes to SurgiCruise’s business plan could turn it into something viable. The seasteaders who were involved in that gave up on working with SurgiCruise recently, and no progress appears to have been made yet on creating an alternative.

I was also disappointed that she described no plans for dealing with the U.S. medical establishment’s ability to smear competitors. A company with no track record and weak regulation by, say, Panama can be made to sound dangerous to patients even if it provides care as good as U.S. hospitals. Could a medical cruise company hope to get accreditation early enough? There are large uncertainties about how much that costs and how soon it would be needed. I want a medical tourism company to prepare to demonstrate ways in which it provides higher quality care than U.S. hospitals (more on this in a later post).

Kevin Overman presented a vaguely promising idea for using RepRap and products from algae to build (print) structures at a cost that he hopes will be an order of magnitude less than with the materials currently envisioned to build a seastead. If he’s right, he should be able to make a nice profit building things on land before anyone is ready to build a seastead. The one drawback that I noticed is that it requires thicker structures (2X?) to get the same strength.

I also stopped by Ephemerisle for Saturday afternoon. It shows some promise as a competitor to Burning Man, but it’s unclear whether anything people learned there is related to skills needed to hold a festival in international waters. Possibly the design of the main platform can be adapted to the ocean without radical changes, but virtually all the other activity was done without any apparent regard for whether it could be repeated in the ocean.

Some of Robin Hanson’s Malthusian-sounding posts prompted me to wonder how we can create a future that is better than the repugnant conclusion. It struck me that there’s no reason to accept the assumption that increasing the number of living minds to the limit of available resources implies that the quality of the lives those minds live will decrease to where they’re barely worth living.

If we imagine the minds to be software, then a mind that barely has enough resources to live could be designed so that it is very happy with the cpu cycles or negentropy it gets even if those are negligible compared to other minds. Or if there is some need for life to be biological, a variant of hibernation might accomplish the same result.

If this is possible, then what I find repugnant about the repugnant conclusion is that it perpetuates the cruelty of evolution which produces suffering in beings with fewer resources than they were evolved to use. Any respectable civilization will engineer away the conflict between average utilitarianism and total utilitarianism.

If instead the most important limit on the number of minds is the supply of matter, then there is a tradeoff between more minds and more atoms per mind. But there is no mere addition paradox to create concerns about a repugnant conclusion if the creation of new minds reduces the utility of other minds.

(Douglas W. Portmore has a similar but less ambitious conclusion (pdf)).