Science and Technology

Book review: Singularity Hypotheses: A Scientific and Philosophical Assessment.

This book contains papers of widely varying quality on superhuman intelligence, plus some fairly good discussions of what ethics we might hope to build into an AGI. Several chapters resemble cautious versions of LessWrong, while others come from a worldview totally foreign to LessWrong.

The chapter I found most interesting was Richard Loosemore and Ben Goertzel’s attempt to show there are no likely obstacles to a rapid “intelligence explosion”.

I expect what they label as the “inherent slowness of experiments and environmental interaction” to be an important factor limiting the rate at which an AGI can become more powerful. They think they see evidence from current science that this is an unimportant obstacle compared to a shortage of intelligent researchers: “companies complain that research staff are expensive and in short supply; they do not complain that nature is just too slow.”

Some explanations that come to mind are:

  • Complaints about nature being slow are not very effective at speeding up nature.
  • Complaints about specific tools being slow probably aren’t very unusual, but there are plenty of cases where people know complaints aren’t effective (e.g. complaints about spacecraft traveling slower than the theoretical maximum [*]).
  • Hiring more researchers can increase the status of a company even if the additional staff don’t advance knowledge.

They also find it hard to believe that we have independently reached the limit of the physical rate at which experiments can be done at the same time we’ve reached the limits of how many intelligent researchers we can hire. For literal meanings of physical limits this makes sense, but if it’s as hard to speed up experiments as it is to throw more intelligence into research, then the apparent coincidence could be due to wise allocation of resources to whichever bottleneck they’re better used in.

None of this suggests that it would be hard for an intelligence explosion to produce the 1000x increase in intelligence they talk about over a century, but it seems like an important obstacle to the faster timescales that some people expect (days or weeks).

Some shorter comments on other chapters:

James Miller describes some disturbing incentives that investors would create for companies developing AGI if AGI is developed by companies large enough that no single investor has much influence on the company. I’m not too concerned about this because if AGI were developed by such a company, I doubt that small investors would have enough awareness of the project to influence it. The company might not publicize the project, or might not be honest about it. Investors might not believe accurate reports if they got them, since the reports won’t sound much different from projects that have gone nowhere. It seems very rare for small investors to understand any new software project well enough to distinguish between an AGI that goes foom and one that merely makes some people rich.

David Pearce expects the singularity to come from biological enhancements, because computers don’t have human qualia. He expects it would be intractable for computers to analyze qualia. It’s unclear to me whether this is supposed to limit AGI power because it would be hard for AGI to predict human actions well enough, or because the lack of qualia would prevent an AGI from caring about its goals.

Itamar Arel believes AGI is likely to be dangerous, and suggests dealing with the danger by limiting the AGI’s resources (without saying how it can be prevented from outsourcing its thought to other systems), and by “educational programs that will help mitigate the inevitable fear humans will have” (if the dangers are real, why is less fear desirable?).

* No, that example isn’t very relevant to AGI. Better examples would be atomic force microscopes, or the stock market (where it can take a generation to get a new test of an important pattern), but it would take lots of effort to convince you of that.

Book review: Radical Abundance: How a Revolution in Nanotechnology Will Change Civilization, by K. Eric Drexler.

Radical Abundance is more cautious than his prior books, and targeted at a very nontechnical audience. It accurately describes many likely ways in which technology will create orders of magnitude more material wealth.

Much of it repackages old ideas, and it focuses too much on the history of nanotechnology.

He defines the subject of the book to be atomically precise manufacturing (APM), and doesn’t consider nanobots to have much relevance to the book.

One new idea that I liked is that rare elements will become unimportant to manufacturing. In particular, solar energy can be made entirely out of relatively common elements (unlike current photovoltaics). Alas, he doesn’t provide enough detail for me to figure out how confident I should be about that.

He predicts that progress toward APM will accelerate someday, but doesn’t provide convincing arguments. I don’t recall him pointing out the likelihood that investment in APM companies will increase dramatically when VCs realize that a few years of effort will produce commercial products.

He doesn’t do a good job of documenting his claims that APM has advanced far. I’m pretty sure that the million-atom DNA scaffolds he mentions have as much programmable complexity as he hints, but if I relied only on this book to analyze that, I’d suspect that those structures were simpler and filled with redundancy.

He wants us to believe that APM will largely eliminate pollution, and that waste heat will “have little adverse impact”. I’m disappointed that he doesn’t quantify the global impact of increasing waste heat. Why does he seem to disagree with Rob Freitas about this?

Book review: The Motivation Hacker, by Nick Winter.

This is a productivity book that might improve some people’s motivation.

It provides an entertaining summary (with clear examples) of how to use tools such as precommitment to accomplish an absurd number of goals.

But it mostly fails at explaining how to feel enthusiastic about doing so.

The section on Goal Picking Exercises exemplifies the problems I have with the book. The most realistic sounding exercise had me rank a bunch of goals by how much the goal excites me times the probability of success divided by the time required. I found that the variations in the last two terms overwhelmed the excitement term, leaving me with the advice that I should focus on the least exciting goals. (Modest changes to the arbitrary scale of excitement might change that conclusion).
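The ranking described above can be sketched with made-up numbers (these goals and values are hypothetical, not from the book); note how the probability and time terms swamp the excitement term:

```python
# A sketch of the book's goal-ranking formula:
# score = excitement * P(success) / time_required
def goal_score(excitement, p_success, years):
    return excitement * p_success / years

# Hypothetical goals: excitement (1-10), probability of success, years needed.
goals = {
    "organize desk":  goal_score(2, 0.95, 0.01),   # dull, quick, near-certain
    "learn Spanish":  goal_score(5, 0.60, 1.0),
    "mind uploading": goal_score(10, 0.02, 30.0),  # exciting, slow, uncertain
}
for name, score in sorted(goals.items(), key=lambda kv: -kv[1]):
    print(f"{name:15s} score = {score:9.4f}")
```

Even a fivefold difference in excitement cannot offset a hundredfold difference in time required, which is the problem described above.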

Which leaves me wondering whether I should focus on goals that I’m likely to achieve soon but which I have trouble caring about, or whether I should focus on longer term goals such as mind uploading (where I might spend years on subgoals which turn out to be mistaken).

The author doesn’t seem to have gotten enough out of his experience to motivate me to imitate the way he picks goals.

I tried O2Amp glasses to correct for my colorblindness. They’re very effective at enabling me to notice some shades of red that I’ve found hard to see. In particular, two species of wildflowers (Indian Paintbrush and Cardinal Larkspur) look bright orange through the glasses, whereas without the glasses my vision usually fills in their color by guessing it’s similar to the surrounding colors unless I look very closely.

But this comes at the cost of having green look much duller. The net effect causes vegetation to be less scenic.

The glasses are supposed to have some benefits for observing emotions via better recognition of blood concentration and oxygenation near the skin. But this effect seems too small to help me.

O2Amp is a small step toward enhanced sensory processing that is likely to become valuable someday, but for now it seems mainly valuable for a few special medical uses.

Book review: Error and the Growth of Experimental Knowledge by Deborah Mayo.

This book provides a fairly thoughtful theory of how scientists work, drawing on
Popper and Kuhn while improving on them. It also tries to describe a quasi-frequentist philosophy (called Error Statistics, abbreviated as ES) which poses a more serious challenge to the Bayesian Way than I’d seen before.

Mayo’s attacks on Bayesians are focused more on subjective Bayesians than objective Bayesians, and they show some real problems with the subjectivists’ willingness to treat arbitrary priors as valid. The criticisms that apply to objective Bayesians (such as E.T. Jaynes) helped me understand why frequentism is taken seriously, but didn’t convince me to change my view that the Bayesian interpretation is more rigorous than the alternatives.

Mayo shows that much of the disagreement stems from differing goals. ES is designed for scientists whose main job is generating better evidence via new experiments. ES uses statistics for generating severe tests of hypotheses. Bayesians take evidence as a given and don’t think experiments deserve special status within probability theory.

The most important difference between these two philosophies is how they treat experiments with “stopping rules” (e.g. tossing a coin until it produces a pre-specified pattern instead of doing a pre-specified number of tosses). Each philosophy tells us to analyze the results in ways that seem bizarre to people who only understand the other philosophy. This subject is sufficiently confusing that I’ll write a separate post about it after reading other discussions of it.
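A toy calculation (my own sketch, not from the book) shows why Bayesians find the stopping rule irrelevant. If 10 tosses produce 6 heads, the likelihood function is the same up to a constant whether the experimenter planned 10 tosses or planned to stop at the 6th head, so a Bayesian update is identical in both cases, while ES-style error probabilities differ:

```python
from math import comb

def binomial_lik(p, n=10, k=6):
    # Fixed design: n tosses planned in advance, k heads observed.
    return comb(n, k) * p**k * (1 - p)**(n - k)

def negbinom_lik(p, k=6, n=10):
    # Stopping rule: toss until the k-th head, which arrives on toss n.
    return comb(n - 1, k - 1) * p**k * (1 - p)**(n - k)

# The ratio of the two likelihoods is the same constant for every p,
# so they lead to identical Bayesian posteriors.
for p in (0.3, 0.5, 0.7):
    print(f"p={p}: ratio = {binomial_lik(p) / negbinom_lik(p):.4f}")
```

The constant is just the ratio of the combinatorial factors, which cancels when normalizing a posterior; the frequentist analysis, by contrast, computes error probabilities over the different sample spaces the two designs generate.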

She constructs a superficially serious disagreement where Bayesians say that evidence increases the probability of a hypothesis while ES says the evidence provides no support for the (Gellerized) hypothesis. Objective Bayesians seem to handle this via priors which reflect the use of old evidence. Marcus Hutter has a description of a general solution in his paper On Universal Prediction and Bayesian Confirmation, but I’m concerned that Bayesians may be more prone to mistakes in implementing such an approach than people who use ES.

Mayo occasionally dismisses the Bayesian Way as wrong due to what look to me like differing uses of concepts such as evidence. The Bayesian notion of very weak evidence only seems wrong if one assumes, as she does, that the ES concept of scientific evidence is the “right” one. This kind of confusion makes me wish Bayesians had invented a different word for the non-prior information that gets fed into Bayes Theorem.

One interesting and apparently valid criticism Mayo makes is that Bayesians treat the evidence that they feed into Bayes Theorem as if it had a probability of one, contrary to the usual Bayesian mantra that all data have a probability and the use of zero or one as a probability is suspect. This is clearly just an approximation for ease of use. Does it cause problems in practice? I haven’t seen a good answer to this.
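One standard Bayesian response to this criticism (not one I recall Mayo discussing in this form) is Jeffrey conditioning, which updates on uncertain evidence instead of assigning it probability one. A minimal sketch with made-up numbers:

```python
# Jeffrey conditioning: update belief in hypothesis H on uncertain evidence E,
# where q = P_new(E) is the agent's new credence in E (q < 1 allows doubt).
# All numbers below are hypothetical.
def jeffrey_update(p_h_given_e, p_h_given_not_e, q):
    return p_h_given_e * q + p_h_given_not_e * (1 - q)

# Strict conditioning treats the evidence as certain (q = 1):
print(jeffrey_update(0.8, 0.2, 1.0))  # 0.8
# Allowing 10% doubt about the evidence softens the update:
print(jeffrey_update(0.8, 0.2, 0.9))  # 0.74
```

So treating evidence as probability one is a convenience, recoverable as the q = 1 special case, rather than a fundamental commitment.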

Mayo claims that ES can apportion blame for an anomalous test result (does it disprove the hypothesis? or did an instrument malfunction?) without dealing with prior probabilities. For example, in the classic 1919 eclipse test of relativity, supporters of Newton’s theory agreed with supporters of relativity about which data to accept and which to reject, whereas Bayesians would have disagreed about the probabilities to assign to the evidence. If I understand her correctly, this also means that if the data had shown light being deflected at a 90 degree angle to what both theories predict, ES scientists wouldn’t look any harder for instrument malfunctions.

Mayo complains that when different experimenters reach different conclusions (due to differing experimental results) “Lindley says all the information resides in an agent’s posterior probability”. This may be true in the unrealistic case where each one perfectly incorporates all relevant evidence into their priors. But a much better Bayesian way to handle differing experimental results is to find all the information created by experiments in the likelihood ratios that they produce.
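The point about likelihood ratios can be made concrete: each experiment’s information is summarized by a likelihood ratio, and anyone can combine those ratios with their own prior, without needing access to any experimenter’s posterior. A minimal sketch with hypothetical numbers:

```python
# Combine independent experimental results via likelihood ratios:
# posterior odds = prior odds * LR1 * LR2 * ...
# The ratios carry all the experiments' information; the prior stays personal.
def posterior_prob(prior, *likelihood_ratios):
    odds = prior / (1 - prior)
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1 + odds)

# Two experiments: one favors H by a factor of 3, one mildly disfavors it (0.5).
print(round(posterior_prob(0.2, 3.0, 0.5), 3))  # 0.273
```

Two experimenters with different priors who share their likelihood ratios will disagree about the posterior, but they agree about exactly what each experiment contributed.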

Many of the disagreements could be resolved by observing which approach to statistics produced better results. The best Mayo offers is an obscure claim by Peirce that Bayesian methods had a consistently poor track record in (19th century?) archaeology. I’m disappointed that I haven’t seen a good comparison of more recent uses of the competing approaches.

Book review: The Willpower Instinct: How Self-Control Works, Why It Matters, and What You Can Do To Get More of It, by Kelly McGonigal.

This book starts out seeming to belabor ideas that seem obvious to me, but before too long it offers counterintuitive approaches that I ought to try.

The approach that I find hardest to reconcile with my intuition is that self-forgiveness for giving in to temptations increases willpower, while feeling guilt or shame about having failed reduces it. So what seems like an incentive to avoid temptation is likely to reduce our ability to resist it.

Another important but counterintuitive claim is that trying to suppress thoughts about a temptation (e.g. candy) makes it harder to resist the temptation. Whereas accepting that part of my mind wants candy (while remembering that I ought to follow a rule of eating less candy) makes it easier for me to resist the candy.

A careless author could have failed to convince me this is plausible. But McGonigal points out the similarities to trying to follow an instruction to not think of white bears – how could I suppress thoughts of white bears if some part of my mind didn’t activate a concept of white bears to monitor my compliance with the instruction? Can I think of candy without attracting the attention of the candy-liking parts of my mind?

As a result of reading the book, I have started paying attention to whether the pleasure I feel when playing computer games lives up to the anticipation I feel when I’m tempted to start one. I haven’t been surprised to observe that I sometimes feel no pleasure after starting the game. But it now seems easier to remember those times of pleasureless playing, and I expect that is weakening my anticipation of rewards.

The recent Quantified Self conference was my first QS event, and was one of the best conferences I’ve attended.

I had been hesitant to attend QS events because they seem to attract large crowds, where I usually find it harder to be social. But this conference was arranged so that there was no real center where crowds gathered, so people spread out into smaller groups where I found it easier to join a conversation.

Kevin Kelly called this “The Measured Century”. People still underestimate how much improved measurement contributed to the industrial revolution. If we’re seeing a much larger improvement in measurement, people will likely underestimate the importance of that for quite a while.

The conference had many more ideas than I had time to hear, and I still need to evaluate many of the ideas I did hear. Here are a few:

I finally got around to looking at DIYgenomics, and have signed up for their empathy study (not too impressive so far) and their microbiome study (probiotics) which is waiting for more people before starting.

LUMOback looks like it will be an easy way to improve my posture. The initial version will require a device I don’t have, but it sounds like they’ll have an Android version sometime next year.

Steve Fowkes’ talk about urine pH testing sounds worth trying out.

Book review: The Righteous Mind: Why Good People Are Divided by Politics and Religion, by Jonathan Haidt.

This book carefully describes the evolutionary origins of human moralizing, explains why tribal attitudes toward morality have both good and bad effects, and how people who want to avoid moral hostility can do so.

Parts of the book are arranged to describe the author’s transition away from standard delusions about morality being the product of the narratives we use to justify it, and about why other people held alien-sounding ideologies. His description of how studying psychology led him to overcome those delusions makes it hard for those who agree with him to feel very superior to those who disagree.

He hints at personal benefits from abandoning partisanship (“It felt good to be released from partisan anger.”), so he doesn’t rely on altruistic motives for people to accept his political advice.

One part of the book that surprised me was the comparison between human morality and human taste buds. Some ideologies are influenced a good deal by all 6 types of human moral intuitions. But the ideology that pervades most of academia only respects 3 types (care, liberty, and fairness). That creates a difficult communication gap between them and cultures that employ others, such as sanctity, in their moral systems, much like people who only experience sweet and salty foods would have trouble imagining a desire for sourness in some foods.

He sometimes gives the impression of being more of a moral relativist than I’d like, but a careful reading of the book shows that there are a fair number of contexts in which he believes some moral tastes produce better results than others.

His advice could be interpreted as encouraging us to replace our existing notions of “the enemy” with Manichaeans. Would his advice polarize societies into Manichaeans and non-Manichaeans? Maybe, but at least the non-Manichaeans would have a decent understanding of why Manichaeans disagreed with them.

The book also includes arguments that group selection played an important role in human evolution, and that an increase in cooperation (group-mindedness, somewhat like the cooperation among bees) had to evolve before language could become valuable enough to evolve. This is an interesting but speculative alternative to the common belief that language was the key development that differentiated humans from other apes.

Book review: The Intelligence Paradox: Why the Intelligent Choice Isn’t Always the Smart One, by Satoshi Kanazawa.

This book is entertaining and occasionally thought-provoking, but not very well thought out.

The main idea is that intelligence (what IQ tests measure) is an adaptation for evolutionarily novel situations, and shouldn’t be positively correlated with cognitive abilities that are specialized for evolutionarily familiar problems. He defines “smart” so that it’s very different from intelligence. His notion of smart includes a good deal of common sense that is unconnected with IQ.

He only provides one example of an evolutionarily familiar skill which I assumed would be correlated with IQ but which isn’t: finding your way in situations such as woods where there’s some risk of getting lost.

He does make and test many odd predictions about high IQ people being more likely to engage in evolutionarily novel behavior, such as high IQ people going to bed later than low IQ people. But I’m a bit concerned at the large number of factors he controls for before showing associations (e.g. 19 factors for alcohol use). How hard would it be to try many combinations and only report results when he got conclusions that fit his prediction? On the other hand, he can’t be trying too hard to reject all evidence that conflicts with his predictions, since he occasionally reports evidence that conflicts with his predictions (e.g. tobacco use).

He reports that fertility is heritable, and finds that puzzling. He gives a kin-selection-based argument saying that someone with many siblings ought to put more effort into their siblings’ reproductive success and less into personally reproducing. But I see no puzzle – I expect people to have varying intuitions about whether the current abundance of food will last, and to pursue different strategies, some of which will be better if food remains abundant, and others better if overpopulation produces a famine.

He’s eager to sound controversial, and his chapter titles will certainly offend some people. Sometimes those are backed up by genuinely unpopular claims, sometimes the substance is less interesting. E.g. the chapter title “Why Homosexuals Are More Intelligent than Heterosexuals” says there’s probably no connection between intelligence and homosexual desires, but there’s a connection between intelligence and how willing people are to act on those desires (yawn).

Here is some evidence against his main hypothesis.

Book review: The Beginning of Infinity by David Deutsch.

This is an ambitious book centered around the nature of explanation, why it has been an important part of science (misunderstood by many who think of science as merely prediction), and why it is important for the future of the universe.

He provides good insights on the jump during the Enlightenment to thinking in universals (e.g. laws of nature that apply to a potentially infinite scope). But he overstates some of its implications. He seems confident that greater-than-human intelligences will view his concept of “universal explainers” as the category that identifies which beings have the rights of people. I find this about as convincing as attempts to find a specific time when a fetus acquires the rights of personhood. I can imagine AIs deciding that humans fail often enough at universalizing their thought to be less than a person, or that they will decide that monkeys are on a trajectory toward the same kind of universality.

He neglects to mention some interesting evidence of the spread of universal thinking – James Flynn’s explanation of the Flynn Effect documents that low IQ cultures don’t use the abstract thought that we sometimes take for granted, and describes IQ increases as an escape from concrete thinking.

Deutsch has a number of interesting complaints about people who attempt science but are confused about the philosophy of science, such as people who imagine that measuring heritability of a trait tells us something important without further inquiry – he notes that being enslaved was heritable in 1860, but that was useless for telling us how to change slavery.

He has interesting explanations for why anthropic arguments, the simulation argument, and the doomsday argument are weaker in a spatially infinite universe. But I was disappointed that he didn’t provide good references for his claim that the universe is infinite – a claim which I gather is controversial and hasn’t gotten as much attention as it deserves.

He sometimes gets carried away with his ambition and seems to forget his rule that explanations should be hard to vary in order to make it hard to fool ourselves.

He focuses on the beauty of flowers in an attempt to convince us that beauty is partially objective. But he doesn’t describe this objective beauty in a way that would make it hard to alter to fit whatever evidence he wants it to fit. I see an obvious alternative explanation for humans finding flowers beautiful – they indicate where fruit will be.

He argues that creativity evolved to help people find better ways of faithfully transmitting knowledge (understanding someone can require creative interpretation of the knowledge that they are imperfectly expressing). That might be true, but I can easily create other explanations that fit the evidence he’s trying to explain, such as that creativity enabled people to make better choices about when to seek a new home.

He imagines that he has a simple way to demonstrate that hunter-gatherer societies could not have lived in a golden age (the lack of growth of their knowledge):

Since static societies cannot exist without effectively extinguishing the growth of knowledge, they cannot allow their members much opportunity to pursue happiness.

But that requires implausible assumptions such as that happiness depends more on the pursuit of knowledge than availability of sex. And it’s not clear that hunter-gatherer societies were stable – they may have been just a few mistakes away from extinction, and accumulating knowledge faster than any previous species had. (I think Deutsch lives in a better society than hunter-gatherers, but it would take a complex argument to show that the average person today does).

But I generally enjoyed his arguments even when I thought they were wrong.

See also the review in the New York Times.