
Nutritional Meals

I’ve been thinking more about convenient, healthy alternatives to Soylent or MealSquares that are closer to the kind of food we’ve evolved to eat.

Here’s some food that exceeds the recommended daily intake of most vitamins and minerals with only about 1300 calories (leaving room for less healthy snacks):

  • 4 bags of Brad’s Raw Chips, Indian
  • 1.5 bags of Brad’s Raw Chips, Sweet Pepper
  • 6 crackers, Lydia’s Green Crackers (vitamin E)
  • 1 oz Atlantic oysters (B12, zinc) (one 3 oz tin every 3 days)
  • 1 brazil nut (selenium)

Caveats: I’m unsure how accurately I estimated the nutrition in the processed foods (I made guesses based on the list of ingredients).
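For anyone who wants to redo this kind of estimate, here's a minimal sketch of the tally I have in mind. The per-serving numbers below are placeholders that only show the structure of the calculation (my actual estimates came from guessing at ingredient lists); the RDI figures are the standard US reference values.

    # Sketch of a daily-nutrient tally; per-serving numbers are placeholders,
    # not my actual estimates.
    RDI = {"calories": 2000, "vitamin_e_mg": 15, "selenium_ug": 55, "b12_ug": 2.4}

    foods = {
        # food: (servings per day, per-serving nutrient estimates)
        "raw chips, Indian": (4, {"calories": 130, "vitamin_e_mg": 1.0}),
        "green crackers":    (6, {"calories": 45,  "vitamin_e_mg": 2.0}),
        "oysters, 1 oz":     (1, {"calories": 50,  "b12_ug": 5.0}),
        "brazil nut":        (1, {"calories": 33,  "selenium_ug": 90.0}),
    }

    totals = {nutrient: 0.0 for nutrient in RDI}
    for servings, nutrients in foods.values():
        for nutrient, amount in nutrients.items():
            totals[nutrient] += servings * amount

    for nutrient, total in totals.items():
        print(f"{nutrient}: {total:g} ({100 * total / RDI[nutrient]:.0f}% of RDI)")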

This diet has little vitamin D (which I expect to get from supplements and sun).

It’s slightly low in calcium, sodium, B12, and saturated fat. I consider it important to get more B12 from other animal sources (sardines, salmon, or pastured eggs). I’m not concerned about the calcium or sodium because this diet would provide more than hunter-gatherers got and because I don’t have much trouble getting more from other food. And it’s hard not to get more saturated fat from other foods I like (e.g. chocolate).

I don’t know whether it has enough iodine, so when I’m not having much fish it’s probably good to add a little seaweed (I’m careful to avoid the common kinds that have added oil that’s been subjected to questionable processing).

It has just barely 100% of the recommended intake of vitamin E, B3, and B5 (in practice I get more of those from eggs and sweet potatoes).

It’s possibly too high in omega-3 (10+ grams?) from flax seeds in the Raw Chips (my estimate here is more uncertain than with the other nutrients).

The only convenient way to get oysters that keep well and don’t need preparation is cans of smoked oysters, and smoking seems to be an unhealthy way to process food.

Note that I chose this list without trying to make it affordable, and it ended up costing about $50 per day. I don’t plan to spend that much unless I become too busy to cook cheaper foods such as sweet potatoes, mushrooms, bean sprouts, fish, and eggs.

In practice, I’ve been relying more on Questbars for convenient food, but I’m trying to cut down on those as I eat more Brad’s Raw Chips.

Book review: The Great Degeneration: How Institutions Decay and Economies Die, by Niall Ferguson.

Read (or skim) Reinhart and Rogoff’s book This Time is Different instead. The Great Degeneration contains little of value beyond a summary of that book.

The only other part that comes close to analyzing US decay relies on a World Bank report about governance quality from 1996 to 2011, which shows the US in decline from 2000 to 2009. He makes some half-hearted attempts to argue for a longer trend using anecdotes that don’t really say much.

Large parts of the book are just standard ideological fluff.

Book review: Our Final Invention: Artificial Intelligence and the End of the Human Era by James Barrat.

This book describes the risk that artificial general intelligence will cause human extinction, presenting the ideas propounded by Eliezer Yudkowsky in a slightly more organized but less rigorous style than Eliezer’s own.

Barrat is insufficiently curious about why many people who claim to be AI experts disagree, so he’ll do little to change the minds of people who already have opinions on the subject.

He dismisses critics as unable or unwilling to think clearly about the arguments. My experience suggests that while any one critic has usually neglected some argument, that’s often because they have, after some thought, rejected a different step in Eliezer’s reasoning and concluded that the step they’re ignoring wouldn’t influence their conclusions.

The weakest claim in the book is that an AGI might become superintelligent in hours. A large fraction of people who have worked on AGI (e.g. Eric Baum, author of What is Thought?) dismiss this as too improbable to be worth much attention, and Barrat doesn’t offer them any reason to reconsider. The rapid takeoff scenarios influence how plausible it is that the first AGI will take over the world. Barrat seems interested only in talking to readers who can be convinced we’re almost certainly doomed if we don’t build the first AGI right. Why not also pay some attention to the more complex situation where an AGI takes years to become superhuman? Should people who think there’s a 1% chance of the first AGI conquering the world worry about that risk?

Some people don’t approve of trying to build an immutable utility function into an AGI, often pointing to changes in human goals without clearly analyzing whether those are subgoals that are being altered to achieve a stable supergoal/utility function. Barrat mentions one such person, but does little to analyze this disagreement.

Would an AGI that has been designed without careful attention to safety blindly follow a narrow interpretation of its programmed goal(s), or would it (after achieving superintelligence) figure out and follow the intentions of its authors? People seem to jump to whatever conclusion supports their attitude toward AGI risk without much analysis of why others disagree, and Barrat follows that pattern.

I can imagine either possibility. If the easiest way to encode a goal system in an AGI is something like “output chess moves which according to the rules of chess will result in checkmate”, then the narrow interpretation seems likely (turning the planet into computronium might help satisfy that goal).

An apparently harder approach would have the AGI consult a human arbiter to figure out whether it wins the chess game – “human arbiter” isn’t easy to encode in typical software. But AGI wouldn’t be typical software. It’s not obviously wrong to believe that software smart enough to take over the world would be smart enough to handle hard concepts like that. I’d like to see someone pin down people who think this is the obvious result and get them to explain how they imagine the AGI handling the goal before it reaches human-level intelligence.
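To make the contrast concrete, here’s a toy sketch (entirely my own illustration, with hypothetical names; nothing like this appears in the book):

    from typing import Callable

    def literal_goal_satisfied(is_checkmate: bool) -> bool:
        # Narrow interpretation: the goal is a formal predicate over the game
        # state; nothing in it refers to what the programmers intended.
        return is_checkmate

    def arbiter_goal_satisfied(ask_arbiter: Callable[[str], bool]) -> bool:
        # Intended interpretation: the goal defers to a human arbiter's judgment.
        # The hard part is hidden inside ask_arbiter: recognizing a human,
        # asking the question, and interpreting the answer, none of which is
        # easy to encode in typical software.
        return ask_arbiter("Did I win this game in the sense you intended?")

The interesting question is what fills in ask_arbiter before the AGI is smart enough to handle concepts like “human arbiter”.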

He mentions some past events that might provide analogies for how AGI will interact with us, but I’m disappointed by how little thought he puts into this.

His examples of contact between technologically advanced beings and less advanced ones all refer to Europeans contacting Native Americans. I’d like to have seen a wider variety of analogies, e.g.:

  • Japan’s contact with the west after centuries of isolation
  • the interaction between neanderthals and humans
  • the contact that resulted in mitochondria becoming part of our cells

He quotes Vinge saying an AGI ‘would not be humankind’s “tool” – any more than humans are the tools of rabbits or robins or chimpanzees.’ I’d say that humans are sometimes the tools of human DNA, which raises more complex questions of how well the DNA’s interests are served.

The book contains many questionable digressions which seem to be designed to entertain.

He claims Google must have an AGI project in spite of denials by Google’s Peter Norvig (this was before it bought DeepMind). But the evidence he uses to back up this claim is that Google thinks something like AGI would be desirable. The obvious conclusion would be that Google did not then think it had the skill to usefully work on AGI, which would be a sensible position given the history of AGI.

He thinks there’s something paradoxical about Eliezer Yudkowsky wanting to keep some information about himself private while putting lots of personal information on the web. The specific examples Barrat gives strongly suggest that Eliezer doesn’t value the standard notion of privacy, but wants to limit people’s ability to distract him. Barrat also says Eliezer “gave up reading for fun several years ago”, which will surprise those who see him frequently mention works of fiction in his Author’s Notes.

All this makes me wonder who the book’s target audience is. It seems to be someone less sophisticated than a person who could write an AGI.

A somewhat new hypothesis:

The Intense World Theory states that autism is the consequence of a supercharged brain that makes the world painfully intense and that the symptoms are largely because autistics are forced to develop strategies to actively avoid the intensity and pain.

Here’s a more extensive explanation.

This hypothesis connects many of the sensory peculiarities of autism with the attentional and social ones. Those had seemed like puzzling correlations to me until now.

However, it still leaves me wondering why the variation in sensory sensitivities seems much larger with autism. The researchers suggest an explanation involving increased plasticity, but I don’t see a strong connection between the Intense World hypothesis and that.

One implication (from this page):

According to the intense world perspective, however, warmth isn’t incompatible with autism. What looks like antisocial behavior results from being too affected by others’ emotions—the opposite of indifference.

Indeed, research on typical children and adults finds that too much distress can dampen ordinary empathy as well. When someone else’s pain becomes too unbearable to witness, even typical people withdraw and try to soothe themselves first rather than helping—exactly like autistic people. It’s just that autistic people become distressed more easily, and so their reactions appear atypical.

Book review: Self Comes to Mind: Constructing the Conscious Brain by Antonio R. Damasio.

This book describes many aspects of human minds in ways that aren’t wrong, but the parts that seem novel don’t have important implications.

He devotes a sizable part of the book to describing how memory works, but I don’t understand memory any better than I did before.

His perspective often seems slightly confusing or wrong. The clearest example I noticed was his belief (in the context of pre-historic humans) that “it is inconceivable that concern [as expressed in special treatment of the dead] or interpretation could arise in the absence of a robust self”. There may be good reasons for considering it improbable that humans developed burial rituals before developing Damasio’s notion of self, but anyone who is familiar with Julian Jaynes (as Damasio is) ought to be able to imagine that (and stranger ideas).

He pays a lot of attention to the location in the brain of various mental processes (e.g. his somewhat surprising claim that the brainstem plays an important role in consciousness), but rarely suggests how we could draw any inferences from that about how normal minds behave.

The Quantified Self 2013 Global Conference attracted many interesting people.

There were lots of new devices to measure the usual things more easily or to integrate multiple kinds of data.

Airo is an ambitious attempt to detect a wide variety of things, including food via sensing metabolites.

TellSpec plans to detect food nutrients and allergens through Raman spectroscopy.

OMsignal has a t-shirt with embedded sensors.

The M1nd should enable users to find more connections and spurious correlations between electromagnetic fields and health.

iOS is becoming a more important platform for trendy tools. As an Android user who wants to stick to devices with a large screen and traditional keyboard, I feel a bit left out.

The Human Locomotome Project is an ambitious attempt to produce an accurate and easy-to-measure biomarker of aging, using accelerometer data from devices such as the FitBit. They’re measuring something that was previously not well measured, but there doesn’t appear to be any easy way to tell whether that information is valuable.

The hug brigade that was at last year’s conference (led by Paul Grasshoff?) was missing this year.

Attempts to attract a critical mass to the QS Forum seem to be having little effect.

Book review: Reinventing Philanthropy: A Framework for More Effective Giving, by Eric Friedman.

This book will spread the ideas behind effective altruism to a modestly wider set of donors than other efforts I’m aware of. It understates how much the effective altruism movement differs from traditional charity and how hard it is to implement, but given the shortage of books on this subject any addition is valuable. It focuses on how to ask good questions about philanthropy rather than attempting to find good answers.

The author provides a list of objections he’s heard to maximizing the effectiveness of charity, a majority of which seem to boil down to the worry that “diversification of nonprofit goals would be drastically reduced”, leading to many existing benefits being canceled. He tries to argue that people have extremely diverse goals which would result in an extremely diverse set of charities. He later argues that the subjectivity of determining the effectiveness of charities will maintain that diversity. Neither of these arguments seems remotely plausible. When individuals explicitly compare how they value their own pleasure, life expectancy, dignity, freedom, etc., I don’t see more than a handful of different goals. How could it be much different for recipients of charity? There exist charities whose value can’t easily be compared to GiveWell’s recommended ones (stopping nuclear war?), but they seem to get a small fraction of the money that goes to charities that GiveWell has decent reasons for rejecting.

So I conclude that widespread adoption of effective giving would drastically reduce the diversity of charitable goals (limited mostly by the fact that spending large amounts on a single goal is subject to diminishing returns). The only plausible explanation I see for people’s discomfort with that is that people are attached to beliefs which are inconsistent with treating all potential recipients as equally deserving.

He’s reluctant to criticize “well-intentioned” donors who use traditional emotional reasoning. I prefer to think of them as normally-intentioned (i.e. acting on a mix of selfish and altruistic motives).

I still have some concerns that asking average donors to objectively maximize the impact of their donations would backfire by reducing the emotional benefit they get from giving more than it increases the effectiveness of their giving. But since I don’t expect more than a few percent of the population to be analytical enough to accept the arguments in this book, this doesn’t seem like an important concern.

He tries to argue that effective giving can increase the emotional benefit we get from giving. This mostly seems to depend on getting more warm fuzzy feelings from helping more people. But as far as I can tell, those feelings are very insensitive to the number of people helped. I haven’t noticed any improved feelings as I alter my giving to increase its impact, and the literature on scope insensitivity suggests that’s typical.

He wants donors to treat potentially deserving recipients as equally deserving regardless of how far away they are, but he fails to include people who are distant in time. He might have good reasons for not wanting to donate to people of the distant future, but not analyzing those reasons risks making the same kind of mistake he criticizes donors for making about distant continents.

War

Book review: War in Human Civilization by Azar Gat.

This ambitious book has some valuable insights into what influences the frequency of wars, but is sufficiently long-winded that I wasn’t willing to read much more than half of it (I skipped part 2).

Part 1 describes the evolutionary pressures which lead to war, most of which ought to be fairly obvious.

One point that seemed new to me in that section was the observation that for much of the human past, group selection was almost equivalent to kin selection because tribes were fairly close kin.

Part 3 describes how the industrial revolution altered the nature of war.

The best section of the book contains strong criticisms of the belief that democracy makes war unlikely (at least with other democracies).

Part of the reason for the myth that democracies don’t fight each other is that people relied on a database of wars that only covers the period starting in 1815. That helped them overlook many wars between democracies in ancient Greece, the War of 1812 between the US and Britain, etc.

A more tenable claim is that something associated with modern democracies is deterring war.

But in spite of the number of countries involved and the number of years in which we can imagine some of them fighting, there’s little reason to consider the available evidence for the past century to be much more than one data point. There was a good deal of cultural homogeneity across democracies in that period. And those democracies were part of an alliance that was unified by the threat of communism.

He suggests some alternate explanations for modern peace that are only loosely connected to democracy, including:

  • increased wealth makes people more risk averse
  • war has become less profitable
  • young males are a smaller fraction of the population
  • increased availability of sex made men less desperate to get sex by raping the enemy (“Make love, not war” wasn’t just a slogan)

He has an interesting idea about why trade wasn’t very effective at preventing wars between wealthy nations up to 1945 – there was an expectation that the world would be partitioned into a few large empires with free trade within but limited trade between empires. Being part of a large empire was expected to imply greater wealth than being part of a small one. After 1945, the expectation that trade would be global meant that small nations appeared viable.

Another potentially important historical change was that before the 1500s, power was an effective way of gaining wealth, but wealth was not very effective at generating power. After the 1500s, wealth became important to being powerful, and military power became less effective at acquiring wealth.

Book review: Singularity Hypotheses: A Scientific and Philosophical Assessment.

This book contains papers of widely varying quality on superhuman intelligence, plus some fairly good discussions of what ethics we might hope to build into an AGI. Several chapters resemble cautious versions of LessWrong, while others come from a worldview totally foreign to LessWrong.

The chapter I found most interesting was Richard Loosemore and Ben Goertzel’s attempt to show there are no likely obstacles to a rapid “intelligence explosion”.

I expect what they label as the “inherent slowness of experiments and environmental interaction” to be an important factor limiting the rate at which an AGI can become more powerful. They think they see evidence from current science that this is an unimportant obstacle compared to a shortage of intelligent researchers: “companies complain that research staff are expensive and in short supply; they do not complain that nature is just too slow.”

Some explanations that come to mind are:

  • Complaints about nature being slow are not very effective at speeding up nature.
  • Complaints about specific tools being slow probably aren’t very unusual, but there are plenty of cases where people know complaints aren’t effective (e.g. complaints about spacecraft traveling slower than the theoretical maximum [*]).
  • Hiring more researchers can increase the status of a company even if the additional staff don’t advance knowledge.

They also find it hard to believe that we have independently reached the limit of the physical rate at which experiments can be done at the same time we’ve reached the limits of how many intelligent researchers we can hire. For literal meanings of physical limits this makes sense, but if it’s as hard to speed up experiments as it is to throw more intelligence into research, then the apparent coincidence could be due to wise allocation of resources to whichever bottleneck they’re better used in.
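A toy allocation model (my own, not from their chapter) shows why reaching both limits at once needn’t be a coincidence: if a fixed budget can buy either faster experiments or more researchers, a sensible allocator keeps buying whichever input currently has the higher marginal return, so both inputs end up near their effective limits together.

    # Toy model: research output depends on experiment capacity E and
    # researcher effort R, with diminishing returns; a greedy allocator
    # spends each budget unit on the input with the higher marginal gain.
    def output(e: float, r: float) -> float:
        return (e ** 0.5) * (r ** 0.5)

    def allocate(budget_units: int, cost_e: float = 2.0, cost_r: float = 1.0):
        e, r = 1.0, 1.0
        for _ in range(budget_units):
            gain_e = (output(e + 1, r) - output(e, r)) / cost_e
            gain_r = (output(e, r + 1) - output(e, r)) / cost_r
            if gain_e > gain_r:
                e += 1
            else:
                r += 1
        return e, r, output(e, r)

    e, r, out = allocate(1000)
    # Neither input lags far behind the other, so hitting both limits at the
    # same time is what good budgeting looks like, not a surprise.
    print(f"experiments={e:.0f}, researchers={r:.0f}, output={out:.1f}")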

None of this suggests that it would be hard for an intelligence explosion to produce the 1000x increase in intelligence they talk about over a century, but it seems like an important obstacle to the faster time periods some people believe (days or weeks).
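To see how different those time periods are, here’s the doubling-time arithmetic (my own back-of-the-envelope numbers, not the authors’):

    import math

    # 1000x is about 10 doublings; spread them over different time spans.
    doublings = math.log2(1000)  # ~9.97
    for label, hours in [("a century", 100 * 365.25 * 24),
                         ("a year", 365.25 * 24),
                         ("a week", 7 * 24)]:
        print(f"1000x over {label}: one doubling every {hours / doublings:,.0f} hours")

A century allows roughly a decade per doubling; a week requires a doubling every 17 hours or so.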

Some shorter comments on other chapters:

James Miller describes some disturbing incentives that investors would create for companies developing AGI if AGI is developed by companies large enough that no single investor has much influence on the company. I’m not too concerned about this because if AGI were developed by such a company, I doubt that small investors would have enough awareness of the project to influence it. The company might not publicize the project, or might not be honest about it. Investors might not believe accurate reports if they got them, since the reports won’t sound much different from projects that have gone nowhere. It seems very rare for small investors to understand any new software project well enough to distinguish between an AGI that goes foom and one that merely makes some people rich.

David Pearce expects the singularity to come from biological enhancements, because computers don’t have human qualia. He expects it would be intractable for computers to analyze qualia. It’s unclear to me whether this is supposed to limit AGI power because it would be hard for AGI to predict human actions well enough, or because the lack of qualia would prevent an AGI from caring about its goals.

Itamar Arel believes AGI is likely to be dangerous, and suggests dealing with the danger by limiting the AGI’s resources (without saying how it can be prevented from outsourcing its thought to other systems), and by “educational programs that will help mitigate the inevitable fear humans will have” (if the dangers are real, why is less fear desirable?).

* No, that example isn’t very relevant to AGI. Better examples would be atomic force microscopes, or the stock market (where it can take a generation to get a new test of an important pattern), but it would take lots of effort to convince you of that.