Rule of the Clan

Book review: The Rule of the Clan: What an Ancient Form of Social Organization Reveals About the Future of Individual Freedom by Mark S. Weiner.

This book does a good job of explaining how barbaric practices such as feuds and honor killings are integral parts of clan-based systems of dispute resolution, and can’t safely be suppressed without first developing something like the modern rule of law to remove the motives that perpetuate them.

He has a coherent theory of why societies with no effective courts and police need to have kin-based groups be accountable for the actions of their members, which precludes some of the individual rights that we take for granted.

He does a poor job of explaining how this is relevant to modern government. He writes as if anyone who wants governments to exert less power wants to weaken the rule of law and the ability of governments to stop violent disputes. (His comments about modern government are separate enough to not detract much from the rest of the book).

He implies that modern rule of law and rule by clans are the only stable possibilities. He convinced me that it would be hard to create good alternatives to those two options, but not that alternatives are impossible.

To better understand how modern individualism replaced clan-based society, read Fukuyama’s The Origins of Political Order together with this book.

Ambronite

Yet another soylent competitor has appeared: Ambronite.

It’s higher in both quality and price than Soylent or MealSquares. It has more B12 than MealSquares even though it’s vegan.

It’s low enough in saturated fat that I probably want to add an additional source of saturated fat to my diet, but that’s a nice problem to have – I’d want to add chocolate anyway. My biggest reservation is the high level of polyunsaturated fat – if I could get a version without the walnuts I’d probably be satisfied there.

Most ingredients look like what our ancestors evolved to eat, but the first two ingredients listed are oats and rice protein.

Drugs without Patents

When considering proposals to weaken patent monopolies, drug development seems like the main type of innovation that would be hurt.

Most of drug development cost seems to be verification of safety and effectiveness, which doesn’t look like the kind of novelty-creation patents are designed to help, but that doesn’t mean it’s easy to implement an alternative.

It turns out we have an example of a company making monopoly-style profits from an unpatented drug (Questcor Pharmaceuticals, Acthar).

Questcor bought Acthar for $100,000, suggesting the seller expected it would hardly make any money. Sometime later Acthar was designated an orphan drug, and Questcor raised the price from $1,650 to $28,000 per vial without attracting competitors. It now earns profits of roughly $300 million per year from the drug.
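As a rough sanity check on these figures, here is a hypothetical back-of-the-envelope sketch. It treats annual profit as roughly equal to revenue, which overstates the implied sales volume somewhat:

```python
old_price = 1_650        # dollars per vial, before the increase
new_price = 28_000       # dollars per vial, after the increase
annual_profit = 300e6    # rough annual profit, in dollars

# Price multiple from the increase.
markup = new_price / old_price

# Hypothetical simplification: treat profit as roughly equal to revenue,
# so this is a lower-bound-ish guess at vials sold per year.
vials_per_year = annual_profit / new_price

print(f"price multiple: {markup:.1f}x")
print(f"implied vials/year: {vials_per_year:,.0f}")
```

A roughly 17x price increase with no competitive response, at a volume of around ten thousand vials a year, is what motivates the inference in the next paragraph.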

So something must be restraining competition, probably something connected to the orphan drug laws. This suggests that if patent protections in general were weakened, only small changes to existing rules would be needed to maintain the existing incentives for drug development.

Our Mathematical Universe

Book review: Our Mathematical Universe: My Quest for the Ultimate Nature of Reality, by Max Tegmark.

His most important claim is the radical Platonist view that all well-defined mathematical structures exist, therefore most physics is the study of which of those we inhabit. His arguments are more tempting than any others I’ve seen for this view, but I’m left with plenty of doubt.

He points to ways that we can imagine this hypothesis being testable, such as via the fine-tuning of fundamental constants. But he doesn’t provide a good reason to think that those tests will distinguish his hypothesis from other popular approaches, as it’s easy to imagine that we’ll never find situations where they make different predictions.

The most valuable parts of the book involve the claim that the multiverse is spatially infinite. He mostly talks as if that’s likely to be true, but his explanations caused me to lower my probability estimate for that claim.

He gets that infinity by claiming that inflation continues in places for infinite time, and then claiming there are reference frames for which that infinite time is located in a spatial rather than a time direction. I have a vague intuition why that second step might be right (but I’m fairly sure he left something important out of the explanation).

For the infinite time part, I’m stuck with relying on argument from authority, without much evidence that the relevant authorities have much confidence in the claim.

Toward the end of the book he mentions reasons to doubt infinities in physics theories – it’s easy to find examples where we model substances such as air as infinitely divisible, when we know that at some levels of detail atomic theory is more accurate. The eternal inflation theory depends on an infinitely expandable space which we can easily imagine is only an approximation. Plus, when physicists explicitly ask whether the universe will last forever, they don’t seem very confident. I’m also tempted to say that the measure problem (i.e. the absence of a way to say some events are more likely than others if they all happen an infinite number of times) is a reason to doubt infinities, but I don’t have much confidence that reality obeys my desire for it to be comprehensible.

I’m disappointed by his claim that we can get good evidence that we’re not Boltzmann brains. He wants us to test our memories, because if I am a Boltzmann brain I’ll probably have a bunch of absurd memories. But suppose I remember having done that test in the past few minutes. The Boltzmann brain hypothesis suggests it’s much more likely for me to have randomly acquired the memory of having passed the test than to have actually done the test. Maybe there’s a way to turn Tegmark’s argument into something rigorous, but it isn’t obvious.

He gives a surprising argument that the differences between the Everett and Copenhagen interpretations of quantum mechanics don’t matter much, because unrelated reasons involving multiverses lead us to expect results comparable to the Everett interpretation even if the Copenhagen interpretation is correct.

It’s a bit hard to figure out what the book’s target audience is – he hides the few equations he uses in footnotes to make it look easy for laymen to follow, but he also discusses hard concepts such as universes with more than one time dimension with little attempt to prepare laymen for them.

The first few chapters are intended for readers with little knowledge of physics. One theme is a historical trend which he mostly describes as expanding our estimate of how big reality is. But the evidence he provides only tells us that the lower bounds that people give keep increasing. Looking at the upper bound (typically infinity) makes that trend look less interesting.

The book has many interesting digressions such as a description of how to build Douglas Adams’ infinite improbability drive.

Nutritional Meals

I’ve been thinking more about convenient, healthy alternatives to Soylent or MealSquares that are closer to the kind of food we’ve evolved to eat.

Here’s some food that exceeds the recommended daily intake of most vitamins and minerals with only about 1300 calories (leaving room for less healthy snacks):

  • 4 bags of Brad’s Raw Chips, Indian
  • 1.5 bags of Brad’s Raw Chips, Sweet Pepper
  • 6 crackers, Lydia’s Green Crackers (vitamin E)
  • 1 oz Atlantic oysters (B12, zinc) (one 3 oz tin every 3 days)
  • 1 brazil nut (selenium)

Caveats: I’m unsure how accurately I estimated the nutrition in the processed foods (I made guesses based on the list of ingredients).
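A sketch of how such a tally works, with hypothetical %RDI and calorie numbers standing in for my guesses (the real estimates came from ingredient lists, and I'm only showing two nutrients):

```python
# Hypothetical per-item estimates: calories in kcal, nutrients in %RDI.
# These numbers are illustrative placeholders, not measured values.
foods = {
    "Brad's Raw Chips, Indian (4 bags)":         {"calories": 640, "zinc": 40, "selenium": 10},
    "Brad's Raw Chips, Sweet Pepper (1.5 bags)": {"calories": 240, "zinc": 15, "selenium": 5},
    "Lydia's Green Crackers (6)":                {"calories": 300, "zinc": 10, "selenium": 5},
    "Atlantic oysters (1 oz)":                   {"calories": 30,  "zinc": 60, "selenium": 20},
    "brazil nut (1)":                            {"calories": 33,  "zinc": 2,  "selenium": 175},
}

# Sum each nutrient across all foods.
totals = {}
for nutrients in foods.values():
    for name, amount in nutrients.items():
        totals[name] = totals.get(name, 0) + amount

for name, total in totals.items():
    unit = " kcal" if name == "calories" else "% RDI"
    print(f"{name}: {total}{unit}")
```

The point of the exercise is just to confirm that every nutrient column crosses 100% before the calorie column crosses the daily budget.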

This diet has little vitamin D (which I expect to get from supplements and sun).

It’s slightly low in calcium, sodium, B12, and saturated fat. I consider it important to get more B12 from other animal sources (sardines, salmon, or pastured eggs). I’m not concerned about the calcium or sodium, because this diet would provide more than hunter-gatherers got and because I don’t have much trouble getting more from other food. And it’s hard not to get more saturated fat from other foods I like (e.g. chocolate).

I don’t know whether it has enough iodine, so when I’m not having much fish it’s probably good to add a little seaweed (I’m careful to avoid the common kinds that have added oil that’s been subjected to questionable processing).

It has just barely 100% of vitamin E, B3, and B5 (in practice I get more of those from eggs and sweet potatoes).

It’s possibly too high in omega-3 (10+ grams?) from flax seeds in the Raw Chips (my estimate here is more uncertain than with the other nutrients).

The only convenient way to get oysters that keep well and don’t need preparation is cans of smoked oysters, and smoking seems to be an unhealthy way to process food.

Note that I chose this list without trying to make it affordable, and it ended up costing about $50 per day. I don’t plan to spend that much unless I become too busy to cook cheaper foods such as sweet potatoes, mushrooms, bean sprouts, fish, and eggs.

In practice, I’ve been relying more on Questbars for convenient food, but I’m trying to cut down on those as I eat more Brad’s Raw Chips.

The Great Degeneration

Book review: The Great Degeneration: How Institutions Decay and Economies Die, by Niall Ferguson.

Read (or skim) Reinhart and Rogoff’s book This Time is Different instead. The Great Degeneration contains little value beyond a summary of that.

The only other part that comes close to analyzing US decay is a World Bank report on governance quality from 1996 to 2011, which shows the US declining from 2000 to 2009. He makes some half-hearted attempts to argue for a longer trend, using anecdotes that don’t really say much.

Large parts of the book are just standard ideological fluff.

Our Final Invention

Book review: Our Final Invention: Artificial Intelligence and the End of the Human Era by James Barrat.

This book describes the risks that artificial general intelligence will cause human extinction, presenting the ideas propounded by Eliezer Yudkowsky in a slightly more organized but less rigorous style than Eliezer has.

Barrat is insufficiently curious about why many people who claim to be AI experts disagree, so he’ll do little to change the minds of people who already have opinions on the subject.

He dismisses critics as unable or unwilling to think clearly about the arguments. My experience suggests that while any one critic has usually neglected some argument, that’s often because they have thoughtfully rejected some other step in Eliezer’s reasoning and concluded that the neglected step wouldn’t change their conclusions.

The weakest claim in the book is that an AGI might become superintelligent in hours. A large fraction of people who have worked on AGI (e.g. Eric Baum’s What is Thought?) dismiss this as too improbable to be worth much attention, and Barrat doesn’t offer them any reason to reconsider. The rapid takeoff scenarios influence how plausible it is that the first AGI will take over the world. Barrat seems only interested in talking to readers who can be convinced we’re almost certainly doomed if we don’t build the first AGI right. Why not also pay some attention to the more complex situation where an AGI takes years to become superhuman? Should people who think there’s a 1% chance of the first AGI conquering the world worry about that risk?

Some people don’t approve of trying to build an immutable utility function into an AGI, often pointing to changes in human goals without clearly analyzing whether those are subgoals that are being altered to achieve a stable supergoal/utility function. Barrat mentions one such person, but does little to analyze this disagreement.

Would an AGI that has been designed without careful attention to safety blindly follow a narrow interpretation of its programmed goal(s), or would it (after achieving superintelligence) figure out and follow the intentions of its authors? People seem to jump to whatever conclusion supports their attitude toward AGI risk without much analysis of why others disagree, and Barrat follows that pattern.

I can imagine either possibility. If the easiest way to encode a goal system in an AGI is something like “output chess moves which according to the rules of chess will result in checkmate”, then a narrow, literal interpretation seems likely (turning the planet into computronium might help satisfy that goal).

An apparently harder approach would have the AGI consult a human arbiter to figure out whether it wins the chess game – “human arbiter” isn’t easy to encode in typical software. But AGI wouldn’t be typical software. It’s not obviously wrong to believe that software smart enough to take over the world would be smart enough to handle hard concepts like that. I’d like to see someone pin down people who think this is the obvious result and get them to explain how they imagine the AGI handling the goal before it reaches human-level intelligence.

He mentions some past events that might provide analogies for how AGI will interact with us, but I’m disappointed by how little thought he puts into this.

His examples of contact between technologically advanced beings and less advanced ones all refer to Europeans contacting Native Americans. I’d like to have seen a wider variety of analogies, e.g.:

  • Japan’s contact with the west after centuries of isolation
  • the interaction between neanderthals and humans
  • the contact that resulted in mitochondria becoming part of our cells

He quotes Vinge saying an AGI ‘would not be humankind’s “tool” – any more than humans are the tools of rabbits or robins or chimpanzees.’ I’d say that humans are sometimes the tools of human DNA, which raises more complex questions of how well the DNA’s interests are served.

The book contains many questionable digressions which seem to be designed to entertain.

He claims Google must have an AGI project in spite of denials by Google’s Peter Norvig (this was before it bought DeepMind). But the evidence he uses to back up this claim is that Google thinks something like AGI would be desirable. The obvious conclusion would be that Google did not then think it had the skill to usefully work on AGI, which would be a sensible position given the history of AGI.

He thinks there’s something paradoxical about Eliezer Yudkowsky wanting to keep some information about himself private while putting lots of personal information on the web. The specific examples Barrat gives strongly suggest that Eliezer doesn’t value the standard notion of privacy, but wants to limit people’s ability to distract him. Barrat also says Eliezer “gave up reading for fun several years ago”, which will surprise those who see him frequently mention works of fiction in his Author’s Notes.

All this makes me wonder who the book’s target audience is. It seems to be someone less sophisticated than a person who could write an AGI.

The Intense World Theory

A somewhat new hypothesis:

The Intense World Theory states that autism is the consequence of a supercharged brain that makes the world painfully intense and that the symptoms are largely because autistics are forced to develop strategies to actively avoid the intensity and pain.

Here’s a more extensive explanation.

This hypothesis connects many of the sensory peculiarities of autism with the attentional and social ones. Those had seemed like puzzling correlations to me until now.

However, it still leaves me wondering why the variation in sensory sensitivities seems much larger with autism. The researchers suggest an explanation involving increased plasticity, but I don’t see a strong connection between the Intense World hypothesis and that.

One implication (from this page):

According to the intense world perspective, however, warmth isn’t incompatible with autism. What looks like antisocial behavior results from being too affected by others’ emotions—the opposite of indifference.

Indeed, research on typical children and adults finds that too much distress can dampen ordinary empathy as well. When someone else’s pain becomes too unbearable to witness, even typical people withdraw and try to soothe themselves first rather than helping—exactly like autistic people. It’s just that autistic people become distressed more easily, and so their reactions appear atypical.

Self Comes to Mind

Book review: Self Comes to Mind: Constructing the Conscious Brain by Antonio R. Damasio.

This book describes many aspects of human minds in ways that aren’t wrong, but the parts that seem novel don’t have important implications.

He devotes a sizable part of the book to describing how memory works, but I don’t understand memory any better than I did before.

His perspective often seems slightly confusing or wrong. The clearest example I noticed was his belief (in the context of pre-historic humans) that “it is inconceivable that concern [as expressed in special treatment of the dead] or interpretation could arise in the absence of a robust self”. There may be good reasons for considering it improbable that humans developed burial rituals before developing Damasio’s notion of self, but anyone who is familiar with Julian Jaynes (as Damasio is) ought to be able to imagine that (and stranger ideas).

He pays a lot of attention to the location in the brain of various mental processes (e.g. his somewhat surprising claim that the brainstem plays an important role in consciousness), but rarely suggests how we could draw any inferences from that about how normal minds behave.