Book review: Radical Abundance: How a Revolution in Nanotechnology Will Change Civilization, by K. Eric Drexler.

Radical Abundance is more cautious than Drexler’s prior books, and is targeted at a very nontechnical audience. It accurately describes many likely ways in which technology will create orders of magnitude more material wealth.

Much of it repackages old ideas, and it focuses too much on the history of nanotechnology.

He defines the subject of the book to be atomically precise manufacturing (APM), and doesn’t consider nanobots to have much relevance to the book.

One new idea that I liked is that rare elements will become unimportant to manufacturing. In particular, solar collectors could be made entirely out of relatively common elements (unlike current photovoltaics). Alas, he doesn’t provide enough detail for me to figure out how confident I should be about that.

He predicts that progress toward APM will accelerate someday, but doesn’t provide convincing arguments. I don’t recall him pointing out the likelihood that investment in APM companies will increase dramatically when VCs realize that a few years of effort will produce commercial products.

He doesn’t do a good job of documenting his claims that APM has advanced far. I’m pretty sure that the million-atom DNA scaffolds he mentions have as much programmable complexity as he hints, but if I relied only on this book to analyze that, I’d suspect that those structures were simpler and filled with redundancy.

He wants us to believe that APM will largely eliminate pollution, and that waste heat will “have little adverse impact”. I’m disappointed that he doesn’t quantify the global impact of increasing waste heat. Why does he seem to disagree with Rob Freitas about this?

Book review: The Motivation Hacker, by Nick Winter.

This is a productivity book that might improve some people’s motivation.

It provides an entertaining summary (with clear examples) of how to use tools such as precommitment to accomplish an absurd number of goals.

But it mostly fails at explaining how to feel enthusiastic about doing so.

The section on Goal Picking Exercises exemplifies the problems I have with the book. The most realistic-sounding exercise had me rank a bunch of goals by how much the goal excites me, times the probability of success, divided by the time required. I found that the variations in the last two terms overwhelmed the excitement term, leaving me with the advice that I should focus on the least exciting goals. (Modest changes to the arbitrary scale of excitement might change that conclusion.)
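The scoring rule can be sketched in a few lines; the goal names and numbers below are my own invented examples, not from the book:

```python
# Rank goals by excitement * P(success) / time, as described above.
# The goals and their numbers are invented for illustration.
goals = [
    # (name, excitement 1-10, probability of success, months required)
    ("organize desk", 2, 0.95, 0.1),
    ("write a novel", 8, 0.30, 18),
    ("learn Spanish", 5, 0.70, 12),
]

def score(goal):
    name, excitement, p_success, months = goal
    return excitement * p_success / months

for name, *_ in sorted(goals, key=score, reverse=True):
    print(name)
```

With these (arbitrary) numbers, the quick low-excitement goal outscores the most exciting goal by about two orders of magnitude, which illustrates the problem described above.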

Which leaves me wondering whether I should focus on goals that I’m likely to achieve soon but which I have trouble caring about, or whether I should focus on longer term goals such as mind uploading (where I might spend years on subgoals which turn out to be mistaken).

The author doesn’t seem to have gotten enough out of his experience to motivate me to imitate the way he picks goals.

Automated market-making software agents have been used in many prediction markets to deal with problems of low liquidity.

The simplest versions provide a fixed amount of liquidity. That provides either excessive liquidity when trading starts or too little later.

For instance, in the first year that I participated in the Good Judgment Project, the market maker provided enough liquidity that there was lots of money to be made pushing the market maker’s price from its initial setting in a somewhat obvious direction toward the market consensus. That meant much of the reward provided by the market maker went to low-value information.

The next year, the market maker provided less liquidity, so the prices moved more readily to a crude estimate of the traders’ beliefs. But then there wasn’t enough liquidity for traders to have an incentive to refine that estimate.

One suggested improvement is to have liquidity increase with increasing trading volume.

I present some sample Python code below (inspired by equation 18.44 in E.T. Jaynes’ Probability Theory) which uses the prices at which traders have traded against the market maker to generate probability-like estimates of how likely a price is to reflect the current consensus of traders.

This works more like human market makers, in that it provides the most liquidity near prices where there’s been the most trading. If the market settles near one price, liquidity rises. When the market is not trading near prices of prior trades (due to lack of trading or news that causes a significant price change), liquidity is low and prices can change more easily.

I assume that the possible prices a market maker can trade at are integers from 1 through 99 (percent).

When traders are pushing the price in one direction, this is taken as evidence that increases the weight assigned to the most recent price and all others farther in that direction. When traders reverse the direction, that is taken as evidence that increases the weight of the two most recent trade prices.

The resulting weights (p_px in the code) are fractions that should be multiplied by the maximum number of contracts the market maker is willing to offer when liquidity ought to be highest. There is one weight for each price at which the market maker might position itself (in practice the market maker will quote two prices, a bid and an ask; maybe the two weights ought to be averaged).

There is still room for improvement in this approach, such as giving less weight to old trades after the market acts like it has responded to news. But implementers should test simple improvements before worrying about finding the optimal rules.

trades = [(1, 51), (1, 52), (1, 53), (-1, 52), (1, 53),
          (-1, 52), (1, 53), (-1, 52), (1, 53), (-1, 52)]  # (direction, price): 1 = buy, -1 = sell
p_px = {}        # probability-like weight for each price
num_agree = {}   # count of trades consistent with each price

probability_list = range(1, 100)  # possible prices, in percent
num_probabilities = len(probability_list)

for i in probability_list:
    p_px[i] = 1.0 / num_probabilities
    num_agree[i] = 0

num_trades = 0
last_trade = 0
for (buy, price) in trades:  # test on a set of made-up trades
    num_trades += 1
    for i in probability_list:
        if last_trade * buy < 0:  # change of direction
            if buy < 0 and (i == price or i == price + 1):
                num_agree[i] += 1
            if buy > 0 and (i == price or i == price - 1):
                num_agree[i] += 1
        else:  # continued direction supports all prices farther that way
            if buy < 0 and i <= price:
                num_agree[i] += 1
            if buy > 0 and i >= price:
                num_agree[i] += 1
        p_px[i] = (num_agree[i] + 1.0) / (num_trades + num_probabilities)  # Laplace-style smoothing
    last_trade = buy

for i in probability_list:
    print(i, num_agree[i], '%.3f' % p_px[i])
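As a sketch of the final step described above, converting the weights into quote sizes; the weights and the max_contracts parameter here are my own assumptions, not part of the original code:

```python
# Sketch: convert the probability-like weights (p_px in the code above)
# into per-price quote sizes. max_contracts is an assumed parameter giving
# the depth offered at the best-supported price; the weights are made up.
max_contracts = 100
p_px = {51: 0.02, 52: 0.30, 53: 0.40, 54: 0.02}  # illustrative weights

peak = max(p_px.values())
offers = {price: round(max_contracts * weight / peak)
          for price, weight in p_px.items()}
print(offers)  # deepest quotes where trading has concentrated
```

This scales the deepest quote to max_contracts and thins out the quotes at prices where little trading has happened, which is the behavior described above.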

Paleofantasy

Book review: Paleofantasy: What Evolution Really Tells Us about Sex, Diet, and How We Live, by Marlene Zuk

This book refutes some myths about what would happen if we adopted the lifestyle of some imaginary hunter-gatherer ancestor who some imagine was perfectly adapted to his environment.

I’m a bit disappointed that it isn’t as provocative as the hype around it suggested. It mostly just points out that there’s no single environment that we’re adapted to, plus uncertainty about what our ancestors’ lifestyle was.

She spends a good deal of the book demonstrating what ought to be the well-known fact that we’re still evolving and have partly adapted to an agricultural lifestyle. A more surprising point is that we still have problems stemming from not yet having fully evolved to be land animals rather than fish (e.g. hiccups).

She provides a reference to a study disputing the widely held belief that the transition from hunter-gatherer to farmer made people less healthy.

She cites evidence that humans haven’t evolved much adaptation to specific diets, and do about equally well on a wide variety of diets involving wild foods, so that looking at plant to animal ratios in hunter-gatherer diets isn’t useful.

Her practical lifestyle advice is mostly consistent with an informed guess about how we can imitate our ancestors’ lifestyle (e.g. eat less processed food), and mainly serves to counteract some of the overconfident claims of the less thoughtful paleo lifestyle promoters.

Charity for Corporations

In his talk last week, Robin Hanson mentioned an apparently suboptimal level of charitable donations to for-profit companies.

My impression is that some of the money raised on Kickstarter and Indiegogo is motivated by charity.

Venture capitalists occasionally bias their investments towards more “worthy” causes.

I wonder whether there’s also some charitable component to people accepting lower salaries in order to work at jobs that sound like they produce positive externalities.

Charity for profitable companies isn’t likely to become a popular concept anytime soon, but that doesn’t keep subsets of it from becoming acceptable if framed differently.

I tried O2Amp glasses to correct for my colorblindness. They’re very effective at enabling me to notice some shades of red that I’ve found hard to see. In particular, two species of wildflowers (Indian Paintbrush and Cardinal Larkspur) look bright orange through the glasses, whereas without the glasses my vision usually fills in their color by guessing it’s similar to the surrounding colors unless I look very close.

But this comes at the cost of having green look much duller. The net effect causes vegetation to be less scenic.

The glasses are supposed to have some benefits for observing emotions via better recognition of blood concentration and oxygenation near the skin. But this effect seems too small to help me.

O2Amp is a small step toward enhanced sensory processing that is likely to become valuable someday, but for now it seems mainly valuable for a few special medical uses.

Book review: Why Nations Fail: The Origins of Power, Prosperity, and Poverty, by Daron Acemoglu and James Robinson.

This book claims that “extractive institutions” prevent nations from becoming wealthy, and “inclusive institutions” favor wealth creation. It is full of anecdotes that occasionally have some relevance to their thesis. (The footnotes hint that they’ve written something more rigorous elsewhere).

The stereotypical extractive institutions certainly do harm that the stereotypical inclusive institutions don’t. But they describe those concepts in ways that do a mediocre job of generalizing to non-stereotypical governments.

They define “extractive institutions” broadly to include regions that don’t have “sufficiently centralized and pluralistic” political institutions. That enables them to classify regions such as Somalia as extractive without having to identify anything that would fit the normal meaning of extractive.

Their description of Somalia as having an “almost constant state of warfare” is strange. Their only attempt to quantify this warfare is a reference to a 1955 incident where 74 people were killed (if that’s a memorable incident, it would suggest war kills few people there; do they ignore the early 90’s because it was an aberration?). Wikipedia lists Somalia’s most recently reported homicide rate as 1.5 per 100,000 (compare to 14.5 for their favorite African nation Botswana, and 4.2 for the U.S.).

They don’t discuss the success of Dubai and Hong Kong because those governments don’t come very close to fitting their stereotype of a pluralistic and centralized nation.

They describe Mao’s China as “highly extractive”, but it looks to me more like ignorant destruction than an attempt at extracting anything. They say China’s current growth is unsustainable, somewhat like the Soviet Union (but they hedge and say it might succeed by becoming inclusive as South Korea did). Whereas I predict that China’s relatively decentralized planning will be enough to sustain modest growth, but it will be held back somewhat by the limits to the rule of law.

They do a good (but hardly novel) job of explaining why elites often fear that increased prosperity would threaten their position.

They correctly criticize some weak alternative explanations of poverty such as laziness. But they say little about explanations that partly overlap with theirs, such as Fukuyama’s Trust (a bit odd given that the book contains a blurb from Fukuyama). Fukuyama doesn’t seem to discuss Africa much, but the slave trade seems to have had large, long-lasting consequences for social capital there.

For a good introduction to some more thoughtful explanations of national growth such as the rule of law and the scientific method, see William Bernstein’s The Birth of Plenty.

Why Nations Fail may be useful for correcting myths among people who are averse to math, but for people who are already familiar with this subject, it will just add a few anecdotes without adding much insight.

Anti-Paleo Diet

Soylent is an almost pure chemical diet, whose most natural looking ingredients are olive oil and whey protein. It provides the FDA recommended nutrients from mostly purified sources of the individual nutrients. The creator claims to have experienced improved health after adopting it (after previously eating something slightly better than a typical US diet).

This seems like a very effective way to minimize the poisons in our diet.

It’s also cheaper than most diets (he claims less than $2/day, but that seems questionable). He claims it tastes good, although eating the same thing day after day would seem a bit monotonous.

FDA recommendations are known to be suboptimal – too little vitamin D, too much calcium.

He seems confused about the fiber requirements, and is a bit reckless about his omega-6/omega-3 ratio. But these are easily improved.

He almost certainly misses some important nutrients that haven’t yet been identified, but that can be partly compensated for by adding a few low-risk foods such as salmon, seaweed, spinach, and sweet potatoes (the four S’s?).

I’m giving some thought to replacing 25-50% of my calories with something along these lines.

Book review: Error and the Growth of Experimental Knowledge by Deborah Mayo.

This book provides a fairly thoughtful theory of how scientists work, drawing on Popper and Kuhn while improving on them. It also tries to describe a quasi-frequentist philosophy (called Error Statistics, abbreviated as ES) which poses a more serious challenge to the Bayesian Way than I’d seen before.

Mayo’s attacks on Bayesians are focused more on subjective Bayesians than objective Bayesians, and they show some real problems with the subjectivists’ willingness to treat arbitrary priors as valid. The criticisms that apply to objective Bayesians (such as E.T. Jaynes) helped me understand why frequentism is taken seriously, but didn’t convince me to change my view that the Bayesian interpretation is more rigorous than the alternatives.

Mayo shows that much of the disagreement stems from differing goals. ES is designed for scientists whose main job is generating better evidence via new experiments. ES uses statistics for generating severe tests of hypotheses. Bayesians take evidence as a given and don’t think experiments deserve special status within probability theory.

The most important difference between these two philosophies is how they treat experiments with “stopping rules” (e.g. tossing a coin until it produces a pre-specified pattern instead of doing a pre-specified number of tosses). Each philosophy tells us to analyze the results in ways that seem bizarre to people who only understand the other philosophy. This subject is sufficiently confusing that I’ll write a separate post about it after reading other discussions of it.
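To make the stopping-rule point concrete, here is a standard textbook example (my addition, not from Mayo’s book): 9 heads and 3 tails are observed, either by tossing a fixed 12 times (binomial likelihood) or by tossing until the 3rd tail (negative binomial likelihood).

```python
from math import comb

# Fixed-n design: toss 12 times, count heads.
def binom_lik(theta, heads=9, n=12):
    return comb(n, heads) * theta**heads * (1 - theta)**(n - heads)

# Stopping-rule design: toss until the 3rd tail appears.
def negbinom_lik(theta, heads=9, tails=3):
    # the final toss must be the 3rd tail
    return comb(heads + tails - 1, heads) * theta**heads * (1 - theta)**tails

# The combinatorial constants differ between designs, but they cancel in
# any likelihood ratio, so a Bayesian update is the same under either rule.
print(binom_lik(0.75) / binom_lik(0.5))
print(negbinom_lik(0.75) / negbinom_lik(0.5))
```

A frequentist significance test, by contrast, computes tail probabilities that depend on which design generated the data, which is exactly where the two philosophies part ways.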

She constructs a superficially serious disagreement where Bayesians say that evidence increases the probability of a hypothesis while ES says the evidence provides no support for the (Gellerized) hypothesis. Objective Bayesians seem to handle this via priors which reflect the use of old evidence. Marcus Hutter has a description of a general solution in his paper On Universal Prediction and Bayesian Confirmation, but I’m concerned that Bayesians may be more prone to mistakes in implementing such an approach than people who use ES.

Mayo occasionally dismisses the Bayesian Way as wrong due to what look to me like differing uses of concepts such as evidence. The Bayesian notion of very weak evidence seems wrong only given her assumption that the scientific concept of evidence is the “right” one. This kind of confusion makes me wish Bayesians had invented a different word for the non-prior information that gets fed into Bayes’ Theorem.

One interesting and apparently valid criticism Mayo makes is that Bayesians treat the evidence that they feed into Bayes’ Theorem as if it had a probability of one, contrary to the usual Bayesian mantra that all data have a probability and that using zero or one as a probability is suspect. This is clearly just an approximation for ease of use. Does it cause problems in practice? I haven’t seen a good answer to this.
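One standard Bayesian remedy is Jeffrey conditionalization, which updates on uncertain evidence by mixing the two conditional probabilities; a minimal sketch with invented numbers:

```python
# Jeffrey conditionalization: update on evidence E that is itself uncertain.
# All numbers here are invented for illustration.
p_h_given_e = 0.8      # P(H | E)
p_h_given_not_e = 0.3  # P(H | not E)
p_e = 0.9              # probability that the evidence report is correct

# Ordinary conditioning would jump to P(H) = 0.8 by treating P(E) as 1.
# Jeffrey's rule mixes the two conditionals by the evidence's probability:
p_h = p_h_given_e * p_e + p_h_given_not_e * (1 - p_e)
print(p_h)
```

Here the posterior lands at 0.75 rather than 0.8, so the cost of the probability-one approximation shrinks as the evidence becomes more reliable.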

Mayo claims that ES can apportion blame for an anomalous test result (does it disprove the hypothesis? or did an instrument malfunction?) without dealing with prior probabilities. For example, in the classic 1919 eclipse test of relativity, supporters of Newton’s theory agreed with supporters of relativity about which data to accept and which to reject, whereas Bayesians would have disagreed about the probabilities to assign to the evidence. If I understand her correctly, this also means that if the data had shown light being deflected at a 90 degree angle to what both theories predict, ES scientists wouldn’t look any harder for instrument malfunctions.

Mayo complains that when different experimenters reach different conclusions (due to differing experimental results) “Lindley says all the information resides in an agent’s posterior probability”. This may be true in the unrealistic case where each one perfectly incorporates all relevant evidence into their priors. But a much better Bayesian way to handle differing experimental results is to find all the information created by experiments in the likelihood ratios that they produce.
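That proposal can be sketched concretely; the prior and the likelihood ratios below are invented for illustration:

```python
# Sketch: two experimenters report likelihood ratios for hypothesis H
# against its alternative; anyone can combine them with their own prior.
# The numbers are invented for illustration.
prior_odds = 1.0                  # a 50/50 prior on H
likelihood_ratios = [3.0, 0.5]    # one supportive result, one mildly contrary

posterior_odds = prior_odds
for lr in likelihood_ratios:
    posterior_odds *= lr          # independent experiments multiply

posterior_prob = posterior_odds / (1 + posterior_odds)
print(posterior_prob)  # 0.6: the combined evidence favors H modestly
```

The point is that the likelihood ratios summarize everything the experiments contribute, so experimenters can publish them without agreeing on anyone’s prior.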

Many of the disagreements could be resolved by observing which approach to statistics produced better results. The best Mayo can do seems to be when she mentions an obscure claim by Peirce that Bayesian methods had a consistently poor track record in (19th century?) archaeology. I’m disappointed that I haven’t seen a good comparison of more recent uses of the competing approaches.

Talking20

Talking20 is an ambitious startup attempting to make a wide variety of blood tests available at the surprisingly cheap price of $2 per test. Getting the drop of blood needed will still be a pain, but doing it at home and mailing in a postcard will simplify the process a lot.

If this succeeds it would dramatically increase our knowledge of things such as our cholesterol levels.

But I get the impression that they are being rather optimistic about how quickly they can get enough sales volume to make money.

Their attempt to use Indiegogo doesn’t appear to be as appropriate to their needs as seeking angel or VC investment would be.

I’m also concerned that the institutions they would compete with will try to get them regulated in ways that would drastically increase their costs.

I’m somewhat tempted to order something from them via Indiegogo, but I’m not confident in their ability to deliver.