The Quantified Self 2013 Global Conference attracted many interesting people.

There were lots of new devices to measure the usual things more easily or to integrate multiple kinds of data.

Airo is an ambitious attempt to detect a wide variety of things, including food intake (by sensing metabolites).

TellSpec plans to detect food nutrients and allergens through Raman spectroscopy.

OMsignal has a t-shirt with embedded sensors.

The M1nd should enable users to find more connections and spurious correlations between electromagnetic fields and health.

iOS is becoming a more important platform for trendy tools. As an Android user who wants to stick to devices with a large screen and traditional keyboard, I feel a bit left out.

The Human Locomotome Project is an ambitious attempt to produce an accurate and easy-to-measure biomarker of aging, using accelerometer data from devices such as FitBit. They’re measuring something that was previously not well measured, but there doesn’t appear to be any easy way to tell whether that information is valuable.

The hug brigade that was at last year’s conference (led by Paul Grasshoff?) was missing this year.

Attempts to attract a critical mass to the QS Forum seem to be having little effect.

Book review: Reinventing Philanthropy: A Framework for More Effective Giving, by Eric Friedman.

This book will spread the ideas behind effective altruism to a modestly wider set of donors than other efforts I’m aware of. It understates how much the effective altruism movement differs from traditional charity and how hard it is to implement, but given the shortage of books on this subject any addition is valuable. It focuses on how to ask good questions about philanthropy rather than attempting to find good answers.

The author provides a list of objections he’s heard to maximizing the effectiveness of charity, a majority of which seem to boil down to the claim that “diversification of nonprofit goals would be drastically reduced”, leading to many existing benefits being canceled. He tries to argue that people have extremely diverse goals which would result in an extremely diverse set of charities. He later argues that the subjectivity of determining the effectiveness of charities will maintain that diversity. Neither of these arguments seems remotely plausible. When individuals explicitly compare how they value their own pleasure, life expectancy, dignity, freedom, etc., I don’t see more than a handful of different goals. How could it be much different for recipients of charity? There exist charities whose value can’t easily be compared to GiveWell’s recommended ones (stopping nuclear war?), but they seem to get a small fraction of the money that goes to charities that GiveWell has decent reasons for rejecting.

So I conclude that widespread adoption of effective giving would drastically reduce the diversity of charitable goals (limited mostly by the fact that spending large amounts on a single goal is subject to diminishing returns). The only plausible explanation I see for people’s discomfort with that is that people are attached to beliefs which are inconsistent with treating all potential recipients as equally deserving.

He’s reluctant to criticize “well-intentioned” donors who use traditional emotional reasoning. I prefer to think of them as normally-intentioned (i.e. acting on a mix of selfish and altruistic motives).

I still have some concerns that asking average donors to objectively maximize the impact of their donations would backfire by reducing the emotional benefit they get from giving more than it increases the effectiveness of their giving. But since I don’t expect more than a few percent of the population to be analytical enough to accept the arguments in this book, this doesn’t seem like an important concern.

He tries to argue that effective giving can increase the emotional benefit we get from giving. This mostly seems to depend on getting more warm fuzzy feelings from helping more people. But as far as I can tell, those feelings are very insensitive to the number of people helped. I haven’t noticed any improved feelings as I alter my giving to increase its impact, and the literature on scope insensitivity suggests that’s typical.

He wants donors to treat potentially deserving recipients as equally deserving regardless of how far away they are, but he fails to include people who are distant in time. He might have good reasons for not wanting to donate to people of the distant future, but not analyzing those reasons risks making the same kind of mistake he criticizes donors for making about distant continents.

War

Book review: War in Human Civilization by Azar Gat.

This ambitious book has some valuable insights into what influences the frequency of wars, but is sufficiently long-winded that I wasn’t willing to read much more than half of it (I skipped part 2).

Part 1 describes the evolutionary pressures which lead to war, most of which ought to be fairly obvious.

One point that seemed new to me in that section was the observation that for much of the human past, group selection was almost equivalent to kin selection because tribes were fairly close kin.

Part 3 describes how the industrial revolution altered the nature of war.

The best section of the book contains strong criticisms of the belief that democracy makes war unlikely (at least with other democracies).

Part of the reason for the myth that democracies don’t fight each other was people relying on a database of wars that only covers the period starting in 1815. That helped people overlook many wars between democracies in ancient Greece, the War of 1812 between the US and Britain, etc.

A more tenable claim is that something associated with modern democracies is deterring war.

But in spite of the number of countries involved and the number of years in which we can imagine some of them fighting, there’s little reason to consider the available evidence for the past century to be much more than one data point. There was a good deal of cultural homogeneity across democracies in that period. And those democracies were part of an alliance that was unified by the threat of communism.

He suggests some alternate explanations for modern peace that are only loosely connected to democracy, including:

  • increased wealth makes people more risk averse
  • war has become less profitable
  • young males are a smaller fraction of the population
  • increased availability of sex made men less desperate to get sex by raping the enemy (“Make love, not war” wasn’t just a slogan)

He has an interesting idea about why trade wasn’t very effective at preventing wars between wealthy nations up to 1945 – there was an expectation that the world would be partitioned into a few large empires with free trade within but limited trade between empires. Being part of a large empire was expected to imply greater wealth than a small empire. After 1945, the expectation that trade would be global meant that small nations appeared viable.

Another potentially important historical change was that before the 1500s, power was an effective way of gaining wealth, but wealth was not very effective at generating power. After the 1500s, wealth became important to being powerful, and military power became less effective at acquiring wealth.

Book review: Singularity Hypotheses: A Scientific and Philosophical Assessment.

This book contains papers of widely varying quality on superhuman intelligence, plus some fairly good discussions of what ethics we might hope to build into an AGI. Several chapters resemble cautious versions of LessWrong, others come from a worldview totally foreign to LessWrong.

The chapter I found most interesting was Richard Loosemore and Ben Goertzel’s attempt to show there are no likely obstacles to a rapid “intelligence explosion”.

I expect what they label as the “inherent slowness of experiments and environmental interaction” to be an important factor limiting the rate at which an AGI can become more powerful. They think they see evidence from current science that this is an unimportant obstacle compared to a shortage of intelligent researchers: “companies complain that research staff are expensive and in short supply; they do not complain that nature is just too slow.”

Some explanations that come to mind are:

  • Complaints about nature being slow are not very effective at speeding up nature.
  • Complaints about specific tools being slow probably aren’t very unusual, but there are plenty of cases where people know complaints aren’t effective (e.g. complaints about spacecraft traveling slower than the theoretical maximum [*]).
  • Hiring more researchers can increase the status of a company even if the additional staff don’t advance knowledge.

They also find it hard to believe that we have independently reached the limit of the physical rate at which experiments can be done at the same time we’ve reached the limits of how many intelligent researchers we can hire. For literal meanings of physical limits this makes sense, but if it’s as hard to speed up experiments as it is to throw more intelligence into research, then the apparent coincidence could be due to wise allocation of resources to whichever bottleneck they’re better used in.

None of this suggests that it would be hard for an intelligence explosion to produce the 1000x increase in intelligence they talk about over a century, but it seems like an important obstacle to the much faster timescales (days or weeks) that some people expect.

Some shorter comments on other chapters:

James Miller describes some disturbing incentives that investors would create for companies developing AGI if AGI is developed by companies large enough that no single investor has much influence on the company. I’m not too concerned about this because if AGI were developed by such a company, I doubt that small investors would have enough awareness of the project to influence it. The company might not publicize the project, or might not be honest about it. Investors might not believe accurate reports if they got them, since the reports won’t sound much different from projects that have gone nowhere. It seems very rare for small investors to understand any new software project well enough to distinguish between an AGI that goes foom and one that merely makes some people rich.

David Pearce expects the singularity to come from biological enhancements, because computers don’t have human qualia. He expects it would be intractable for computers to analyze qualia. It’s unclear to me whether this is supposed to limit AGI power because it would be hard for AGI to predict human actions well enough, or because the lack of qualia would prevent an AGI from caring about its goals.

Itamar Arel believes AGI is likely to be dangerous, and suggests dealing with the danger by limiting the AGI’s resources (without saying how it can be prevented from outsourcing its thought to other systems), and by “educational programs that will help mitigate the inevitable fear humans will have” (if the dangers are real, why is less fear desirable?).

* No, that example isn’t very relevant to AGI. Better examples would be atomic force microscopes, or the stock market (where it can take a generation to get a new test of an important pattern), but it would take lots of effort to convince you of that.

Book review: The Origins of Political Order: From Prehuman Times to the French Revolution, by Francis Fukuyama.

This ambitious attempt to explain the rise of civilization (especially the rule of law) is partly successful.

The most important idea in the book is that the Catholic church (in the Gregorian Reforms) played a critical role in creating important institutions.

The church differed from religions in other cultures in that it was sufficiently organized to influence political policy, but not strong enough to become a state. This led it to acquire resources by creating rules that enabled people to leave property to the church (often via wills, which hardly existed before then). This turned what had been resources belonging to some abstract group (families or ancestors) into things owned by individuals, and created rules for transferring those resources.

In the process, it also weakened the extended family; that weakening was essential to having a state that impartially promoted the welfare of a society larger than a family.

He also provides a moderately good description of China’s earlier partial adoption of something similar in its merit-selected bureaucracy.

I recommend reading the first 7 chapters plus chapter 16. The rest of the book contains more ordinary history, including some not-too-convincing explanations of why northwest Europe did better than the rest of Christianity.

More Ancestral Diet Evidence

There was a large shift in our ancestors’ diet about 3.5 million years ago toward food derived from grasses and/or sedges. This has potentially important implications for what diet we’re adapted to. Unfortunately, the evidence isn’t specific enough to be very useful:

The isotope method cannot distinguish what parts of grasses and sedges human ancestors ate – leaves, stems, seeds and/or underground storage organs such as roots or rhizomes. The method also can’t determine when human ancestors began getting much of their grass by eating grass-eating insects or meat from grazing animals.

Book review: Radical Abundance: How a Revolution in Nanotechnology Will Change Civilization, by K. Eric Drexler.

Radical Abundance is more cautious than his prior books, and targeted at a very nontechnical audience. It accurately describes many likely ways in which technology will create orders of magnitude more material wealth.

Much of it repackages old ideas, and it focuses too much on the history of nanotechnology.

He defines the subject of the book to be atomically precise manufacturing (APM), and doesn’t consider nanobots to have much relevance to the book.

One new idea that I liked is that rare elements will become unimportant to manufacturing. In particular, solar energy collectors can be made entirely out of relatively common elements (unlike current photovoltaics). Alas, he doesn’t provide enough detail for me to figure out how confident I should be about that.

He predicts that progress toward APM will accelerate someday, but doesn’t provide convincing arguments. I don’t recall him pointing out the likelihood that investment in APM companies will increase dramatically when VCs realize that a few years of effort will produce commercial products.

He doesn’t do a good job of documenting his claims that APM has advanced far. I’m pretty sure that the million-atom DNA scaffolds he mentions have as much programmable complexity as he hints, but if I relied only on this book I’d suspect that those structures were simpler and filled with redundancy.

He wants us to believe that APM will largely eliminate pollution, and that waste heat will “have little adverse impact”. I’m disappointed that he doesn’t quantify the global impact of increasing waste heat. Why does he seem to disagree with Rob Freitas about this?

Book review: The Motivation Hacker, by Nick Winter.

This is a productivity book that might improve some people’s motivation.

It provides an entertaining summary (with clear examples) of how to use tools such as precommitment to accomplish an absurd number of goals.

But it mostly fails at explaining how to feel enthusiastic about doing so.

The section on Goal Picking Exercises exemplifies the problems I have with the book. The most realistic sounding exercise had me rank a bunch of goals by how much the goal excites me times the probability of success divided by the time required. I found that the variations in the last two terms overwhelmed the excitement term, leaving me with the advice that I should focus on the least exciting goals. (Modest changes to the arbitrary scale of excitement might change that conclusion).
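Here’s that arithmetic with made-up numbers (the goals, scales, and estimates are mine, not the book’s):

# Rank goals by excitement (1-10) * probability of success / hours needed.
# All numbers below are invented for illustration.
goals = [("clear my email backlog", 2, 0.9, 20),
         ("run a marathon", 6, 0.5, 300),
         ("work toward mind uploading", 10, 0.001, 100000)]

for name, excitement, p_success, hours in goals:
    print('%-28s %.2g' % (name, excitement * p_success / hours))
# The dull-but-quick goal scores orders of magnitude higher than the
# exciting long shot, showing how the last two terms dominate.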

Which leaves me wondering whether I should focus on goals that I’m likely to achieve soon but which I have trouble caring about, or whether I should focus on longer term goals such as mind uploading (where I might spend years on subgoals which turn out to be mistaken).

The author doesn’t seem to have gotten enough out of his experience to motivate me to imitate the way he picks goals.

Automated market-making software agents have been used in many prediction markets to deal with problems of low liquidity.

The simplest versions provide a fixed amount of liquidity. This either causes excessive liquidity when trading starts, or too little later.

For instance, in the first year that I participated in the Good Judgment Project, the market maker provided enough liquidity that there was lots of money to be made pushing the market maker price from its initial setting in a somewhat obvious direction toward the market consensus. That meant much of the reward provided by the market maker went to low-value information.

The next year, the market maker provided less liquidity, so the prices moved more readily to a crude estimate of the traders’ beliefs. But then there wasn’t enough liquidity for traders to have an incentive to refine that estimate.

One suggested improvement is to have liquidity increase with increasing trading volume.
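Here’s a minimal sketch of what that might look like using a logarithmic market scoring rule (LMSR); the parameter names and the specific growth rule b = b0 + alpha * volume are my own arbitrary choices, not taken from any deployed market maker:

import math

def lmsr_cost(q_yes, q_no, b):
    # Standard LMSR cost function; larger b means prices move less in
    # response to a trade of a given size, i.e. more liquidity.
    return b * math.log(math.exp(q_yes / b) + math.exp(q_no / b))

def cost_of_trade(q_yes, q_no, delta_yes, b0=10.0, alpha=0.05):
    # Cost of buying delta_yes YES contracts when q_yes and q_no
    # contracts are already outstanding. The liquidity parameter b
    # grows with cumulative volume (an assumption for illustration).
    b = b0 + alpha * (q_yes + q_no)
    return lmsr_cost(q_yes + delta_yes, q_no, b) - lmsr_cost(q_yes, q_no, b)

Note that naively recomputing b between trades can create opportunities to pump money out of the market maker; Othman and Sandholm’s liquidity-sensitive LMSR is a more careful version of the same idea.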

I present some sample Python code below (inspired by equation 18.44 in E.T. Jaynes’ Probability Theory) which uses the prices at which traders have traded against the market maker to generate probability-like estimates of how likely a price is to reflect the current consensus of traders.
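Concretely, the estimates have the form of a rule of succession: after num_trades trades, each of the 99 price levels i gets weight p_px[i] = (num_agree[i] + 1) / (num_trades + 99), so every level starts with the same small weight and gains weight with each trade that agrees with it.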

This works more like human market makers, in that it provides the most liquidity near prices where there’s been the most trading. If the market settles near one price, liquidity rises. When the market is not trading near prices of prior trades (due to lack of trading or news that causes a significant price change), liquidity is low and prices can change more easily.

I assume that the possible prices a market maker can trade at are integers from 1 through 99 (percent).

When traders are pushing the price in one direction, this is taken as evidence that increases the weight assigned to the most recent price and all others farther in that direction. When traders reverse the direction, that is taken as evidence that increases the weight of the two most recent trade prices.

The resulting weights (p_px in the code) are fractions which should be multiplied by the maximum number of contracts the market maker is willing to offer when liquidity ought to be highest. There is one weight for each price at which the market maker might position itself (in practice the market maker will quote two prices, so maybe the two weights ought to be averaged).

There is still room for improvement in this approach, such as giving less weight to old trades after the market acts like it has responded to news. But implementers should test simple improvements before worrying about finding the optimal rules.

# Each trade is (direction, price): direction is +1 when a trader buys
# from the market maker, -1 when a trader sells; prices are integer
# percents from 1 through 99.
trades = [(1, 51), (1, 52), (1, 53), (-1, 52), (1, 53), (-1, 52),
          (1, 53), (-1, 52), (1, 53), (-1, 52)]

p_px = {}       # probability-like weight for each price level
num_agree = {}  # number of trades that "agree" with each price level

probability_list = range(1, 100)
num_probabilities = len(probability_list)

for i in probability_list:
    p_px[i] = 1.0 / num_probabilities  # start from a uniform prior
    num_agree[i] = 0

num_trades = 0
last_trade = 0
for (buy, price) in trades:  # test on a set of made-up trades
    num_trades += 1
    for i in probability_list:
        if last_trade * buy < 0:
            # Change of direction: only the two prices nearest the
            # most recent trade count as agreeing.
            if buy < 0 and (i == price or i == price + 1):
                num_agree[i] += 1
            if buy > 0 and (i == price or i == price - 1):
                num_agree[i] += 1
        else:
            # Same direction: the trade price and all prices beyond it
            # in that direction count as agreeing.
            if buy < 0 and i <= price:
                num_agree[i] += 1
            if buy > 0 and i >= price:
                num_agree[i] += 1
        # Rule-of-succession style estimate:
        # (agreements + 1) / (trades + number of alternatives).
        p_px[i] = (num_agree[i] + 1.0) / (num_trades + num_probabilities)
    last_trade = buy

for i in probability_list:
    print(i, num_agree[i], '%.3f' % p_px[i])
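
To turn the weights into actual offers, multiply by the liquidity ceiling. These helpers (the names offer_size and two_sided_size, the max_contracts value, and the averaging of the two quoted prices suggested above) are my own additions:

def offer_size(p_px, price, max_contracts=200):
    # p_px[price] approaches 1.0 when nearly every trade agrees with
    # this price, so the offer approaches max_contracts; with no
    # trading history it stays near max_contracts / 99.
    return int(max_contracts * p_px[price])

def two_sided_size(p_px, bid, ask, max_contracts=200):
    # Average the weights at the market maker's two quoted prices.
    return int(max_contracts * (p_px[bid] + p_px[ask]) / 2.0)

print(offer_size(p_px, 53), two_sided_size(p_px, 52, 53))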