Science and Technology

Book review: The Motivation Hacker, by Nick Winter.

This is a productivity book that might improve some people’s motivation.

It provides an entertaining summary (with clear examples) of how to use tools such as precommitment to accomplish an absurd number of goals.

But it mostly fails at explaining how to feel enthusiastic about doing so.

The section on Goal Picking Exercises exemplifies the problems I have with the book. The most realistic-sounding exercise had me rank a bunch of goals by how much each goal excites me, times the probability of success, divided by the time required. I found that the variations in the last two terms overwhelmed the excitement term, leaving me with the advice that I should focus on the least exciting goals. (Modest changes to the arbitrary excitement scale might change that conclusion.)
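The exercise’s ranking formula is easy to sketch numerically. The goals and scores below are hypothetical, chosen only to illustrate how the probability and time terms can swamp the excitement term:

```python
# Hypothetical goals scored as in the exercise:
#   score = excitement * P(success) / time_required (in months)
goals = {
    "organize office": (2, 0.9, 1),   # dull, near-certain, quick
    "learn to juggle": (3, 0.8, 2),
    "write a novel":   (8, 0.5, 24),  # exciting, uncertain, slow
}

def score(excitement, p_success, months):
    return excitement * p_success / months

ranked = sorted(goals, key=lambda g: score(*goals[g]), reverse=True)
for g in ranked:
    print(g, round(score(*goals[g]), 2))
```

With these made-up numbers the least exciting goal ranks first and the most exciting goal ranks last, which is the pattern complained about above.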

Which leaves me wondering whether I should focus on goals that I’m likely to achieve soon but which I have trouble caring about, or whether I should focus on longer-term goals such as mind uploading (where I might spend years on subgoals which turn out to be mistaken).

The author doesn’t seem to have gotten enough out of his experience to motivate me to imitate the way he picks goals.

I tried O2Amp glasses to correct for my colorblindness. They’re very effective at enabling me to notice some shades of red that I’ve found hard to see. In particular, two species of wildflowers (Indian Paintbrush and Cardinal Larkspur) look bright orange through the glasses, whereas without the glasses my vision usually fills in their color by guessing it’s similar to the surrounding colors unless I look very closely.

But this comes at the cost of having green look much duller. The net effect causes vegetation to be less scenic.

The glasses are supposed to have some benefits for observing emotions via better recognition of blood concentration and oxygenation near the skin. But this effect seems too small to help me.

O2Amp is a small step toward enhanced sensory processing that is likely to become valuable someday, but for now it seems mainly valuable for a few special medical uses.

Book review: Error and the Growth of Experimental Knowledge by Deborah Mayo.

This book provides a fairly thoughtful theory of how scientists work, drawing on Popper and Kuhn while improving on them. It also tries to describe a quasi-frequentist philosophy (called Error Statistics, abbreviated as ES) which poses a more serious challenge to the Bayesian Way than I’d seen before.

Mayo’s attacks on Bayesians are focused more on subjective Bayesians than objective Bayesians, and they show some real problems with the subjectivists’ willingness to treat arbitrary priors as valid. The criticisms that apply to objective Bayesians (such as E.T. Jaynes) helped me understand why frequentism is taken seriously, but didn’t convince me to change my view that the Bayesian interpretation is more rigorous than the alternatives.

Mayo shows that much of the disagreement stems from differing goals. ES is designed for scientists whose main job is generating better evidence via new experiments. ES uses statistics for generating severe tests of hypotheses. Bayesians take evidence as a given and don’t think experiments deserve special status within probability theory.

The most important difference between these two philosophies is how they treat experiments with “stopping rules” (e.g. tossing a coin until it produces a pre-specified pattern instead of doing a pre-specified number of tosses). Each philosophy tells us to analyze the results in ways that seem bizarre to people who only understand the other philosophy. This subject is sufficiently confusing that I’ll write a separate post about it after reading other discussions of it.
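The disagreement can be made concrete with the textbook coin example (my illustration, using standard numbers, not an example from the book). The likelihood of an exact observed sequence depends only on the head and tail counts, so a Bayesian update is the same under either design, whereas the frequentist p-value depends on which stopping rule the experimenter had in mind:

```python
from math import comb

# Observed data: 3 heads and 9 tails in 12 tosses.
# Likelihood of the exact observed sequence is p^h * (1-p)^t,
# regardless of whether the design was "toss 12 times" or
# "toss until the 3rd head" -- so Bayesian posteriors agree.
def seq_likelihood(p, heads, tails):
    return p**heads * (1 - p)**tails

h, t = 3, 9
lr = seq_likelihood(0.5, h, t) / seq_likelihood(0.3, h, t)  # same under both designs

# The frequentist p-value for testing p = 0.5 depends on the design:
# fixed-n design: P(at most 3 heads in 12 tosses)
p_fixed = sum(comb(12, k) * 0.5**12 for k in range(h + 1))
# stop-at-3rd-head design: P(needing at least 12 tosses)
#   = P(fewer than 3 heads in the first 11 tosses)
p_stop = sum(comb(11, k) * 0.5**11 for k in range(h))
print(round(p_fixed, 3), round(p_stop, 3))  # 0.073 vs. 0.033
```

The same data thus falls on opposite sides of a 0.05 cutoff depending on the experimenter’s intentions, which is exactly the sort of result each camp finds bizarre about the other.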

She constructs a superficially serious disagreement where Bayesians say that evidence increases the probability of a hypothesis while ES says the evidence provides no support for the (Gellerized) hypothesis. Objective Bayesians seem to handle this via priors which reflect the use of old evidence. Marcus Hutter has a description of a general solution in his paper On Universal Prediction and Bayesian Confirmation, but I’m concerned that Bayesians may be more prone to mistakes in implementing such an approach than people who use ES.

Mayo occasionally dismisses the Bayesian Way as wrong due to what look to me like differing uses of concepts such as evidence. The Bayesian notion of very weak evidence seems wrong only given her assumption that the scientific concept of evidence is the “right” one. This kind of confusion makes me wish Bayesians had invented a different word for the non-prior information that gets fed into Bayes’ Theorem.

One interesting and apparently valid criticism Mayo makes is that Bayesians treat the evidence that they feed into Bayes’ Theorem as if it had a probability of one, contrary to the usual Bayesian mantra that all data have a probability and the use of zero or one as a probability is suspect. This is clearly just an approximation for ease of use. Does it cause problems in practice? I haven’t seen a good answer to this.
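One standard way to quantify that approximation (my illustration, not an argument from the book) is Jeffrey conditioning, which updates on evidence that is itself held with probability less than one. The numbers below are arbitrary, chosen only to show how much the certain-evidence shortcut overshoots:

```python
# Update P(H) on evidence E, first treating E as certain, then
# treating E as only probably observed (Jeffrey conditioning).
def strict_update(p_h, p_e_given_h, p_e_given_not_h):
    # ordinary Bayes' Theorem, treating E as having probability 1
    num = p_e_given_h * p_h
    return num / (num + p_e_given_not_h * (1 - p_h))

def jeffrey_update(p_h, p_e_given_h, p_e_given_not_h, q):
    # q = new probability that E actually occurred
    p_h_given_e = strict_update(p_h, p_e_given_h, p_e_given_not_h)
    p_h_given_not_e = strict_update(p_h, 1 - p_e_given_h, 1 - p_e_given_not_h)
    return q * p_h_given_e + (1 - q) * p_h_given_not_e

p = 0.5
print(strict_update(p, 0.8, 0.2))         # 0.8
print(jeffrey_update(p, 0.8, 0.2, 0.95))  # 0.77 -- slightly less confident
```

When the reliability q is near one the two answers are close, which suggests the approximation is harmless exactly when the evidence really is near-certain; whether that holds in messy practice is the open question above.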

Mayo claims that ES can apportion blame for an anomalous test result (does it disprove the hypothesis? or did an instrument malfunction?) without dealing with prior probabilities. For example, in the classic 1919 eclipse test of relativity, supporters of Newton’s theory agreed with supporters of relativity about which data to accept and which to reject, whereas Bayesians would have disagreed about the probabilities to assign to the evidence. If I understand her correctly, this also means that if the data had shown light being deflected at a 90 degree angle to what both theories predict, ES scientists wouldn’t look any harder for instrument malfunctions.

Mayo complains that when different experimenters reach different conclusions (due to differing experimental results) “Lindley says all the information resides in an agent’s posterior probability”. This may be true in the unrealistic case where each one perfectly incorporates all relevant evidence into their priors. But a much better Bayesian way to handle differing experimental results is to find all the information created by experiments in the likelihood ratios that they produce.
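The likelihood-ratio bookkeeping is simple to sketch (hypothetical numbers, my illustration). Each experiment’s information is carried by its likelihood ratio, independent experiments multiply, and two agents with different priors can agree on the combined evidential import even while disagreeing on posteriors:

```python
# Each experiment reports a likelihood ratio P(data|H1) / P(data|H2).
# Assuming independent experiments, the ratios multiply.
def posterior_odds(prior_odds, likelihood_ratios):
    odds = prior_odds
    for lr in likelihood_ratios:
        odds *= lr
    return odds

experiments = [3.0, 0.5, 4.0]  # hypothetical ratios from 3 labs
# combined evidence = 3.0 * 0.5 * 4.0 = 6.0, shared by everyone

print(posterior_odds(1.0, experiments))   # agent with even prior odds
print(posterior_odds(0.25, experiments))  # agent with skeptical prior
```

The shared factor of 6.0 is the experiments’ contribution; only the starting odds differ between agents.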

Many of the disagreements could be resolved by observing which approach to statistics produced better results. The best Mayo can do seems to be when she mentions an obscure claim by Peirce that Bayesian methods had a consistently poor track record in (19th century?) archaeology. I’m disappointed that I haven’t seen a good comparison of more recent uses of the competing approaches.

Book review: The Willpower Instinct: How Self-Control Works, Why It Matters, and What You Can Do To Get More of It, by Kelly McGonigal.

This book starts out seeming to belabor ideas that seem obvious to me, but before too long it offers counterintuitive approaches that I ought to try.

The approach that I find hardest to reconcile with my intuition is that self-forgiveness for giving in to temptations helps increase willpower, while feeling guilt or shame about having failed reduces it. What seems like an incentive to avoid temptation is thus likely to reduce our ability to resist the temptation.

Another important but counterintuitive claim is that trying to suppress thoughts about a temptation (e.g. candy) makes it harder to resist the temptation. Whereas accepting that part of my mind wants candy (while remembering that I ought to follow a rule of eating less candy) makes it easier for me to resist the candy.

A careless author could have failed to convince me this is plausible. But McGonigal points out the similarities to trying to follow an instruction to not think of white bears – how could I suppress thoughts of white bears if some part of my mind didn’t activate a concept of white bears to monitor my compliance with the instruction? Can I think of candy without attracting the attention of the candy-liking parts of my mind?

As a result of reading the book, I have started paying attention to whether the pleasure I feel when playing computer games lives up to the anticipation I feel when I’m tempted to start one. I haven’t been surprised to observe that I sometimes feel no pleasure after starting the game. But it now seems easier to remember those times of pleasureless playing, and I expect that is weakening my anticipation of rewards.

The recent Quantified Self conference was my first QS event, and was one of the best conferences I’ve attended.

I had been hesitant to attend QS events because they seem to attract large crowds, where I usually find it harder to be social. But this conference was arranged so that there was no real center where crowds gathered, so people spread out into smaller groups where I found it easier to join a conversation.

Kevin Kelly called this “The Measured Century”. People still underestimate how much improved measurement contributed to the industrial revolution. If we’re seeing a much larger improvement in measurement, people will likely underestimate the importance of that for quite a while.

The conference had many more ideas than I had time to hear, and I still need to evaluate many of the ideas I did hear. Here are a few:

I finally got around to looking at DIYgenomics, and have signed up for their empathy study (not too impressive so far) and their microbiome study (probiotics) which is waiting for more people before starting.

LUMOback looks like it will be an easy way to improve my posture. The initial version will require a device I don’t have, but it sounds like they’ll have an Android version sometime next year.

The urine pH testing that Steve Fowkes talked about sounds worth trying.

Book review: The Righteous Mind: Why Good People Are Divided by Politics and Religion, by Jonathan Haidt.

This book carefully describes the evolutionary origins of human moralizing, explains why tribal attitudes toward morality have both good and bad effects, and shows how people who want to avoid moral hostility can do so.

Parts of the book are arranged to describe the author’s transition away from standard delusions: about morality being the result of the narratives we use to justify it, and about why other people held alien-sounding ideologies. His description of how his study of psychology led him to overcome those delusions makes it hard for those who agree with him to feel very superior to those who disagree.

He hints at personal benefits from abandoning partisanship (“It felt good to be released from partisan anger.”), so he doesn’t rely on altruistic motives for people to accept his political advice.

One part of the book that surprised me was the comparison between human morality and human taste buds. Some ideologies are influenced a good deal by all 6 types of human moral intuitions. But the ideology that pervades most of academia respects only 3 types (care, liberty, and fairness). That creates a difficult communication gap between academics and cultures that employ other intuitions, such as sanctity, in their moral systems, much as people who only experience sweet and salty foods would have trouble imagining a desire for sourness in some foods.

He sometimes gives the impression of being more of a moral relativist than I’d like, but a careful reading of the book shows that there are a fair number of contexts in which he believes some moral tastes produce better results than others.

His advice could be interpreted as encouraging us to replace our existing notions of “the enemy” with Manichaeans. Would his advice polarize societies into Manichaeans and non-Manichaeans? Maybe, but at least the non-Manichaeans would have a decent understanding of why Manichaeans disagreed with them.

The book also includes arguments that group selection played an important role in human evolution, and that an increase in cooperation (group-mindedness, somewhat like the cooperation among bees) had to evolve before language could become valuable enough to evolve. This is an interesting but speculative alternative to the common belief that language was the key development that differentiated humans from other apes.

Book review: The Intelligence Paradox: Why the Intelligent Choice Isn’t Always the Smart One, by Satoshi Kanazawa.

This book is entertaining and occasionally thought-provoking, but not very well thought out.

The main idea is that intelligence (what IQ tests measure) is an adaptation for evolutionarily novel situations, and shouldn’t be positively correlated with cognitive abilities that are specialized for evolutionarily familiar problems. He defines “smart” so that it’s very different from intelligence. His notion of smart includes a good deal of common sense that is unconnected with IQ.

He only provides one example of an evolutionarily familiar skill which I assumed would be correlated with IQ but which isn’t: finding your way in situations such as woods where there’s some risk of getting lost.

He does make and test many odd predictions about high IQ people being more likely to engage in evolutionarily novel behavior, such as high IQ people going to bed later than low IQ people. But I’m a bit concerned at the large number of factors he controls for before showing associations (e.g. 19 factors for alcohol use). How hard would it be to try many combinations and only report results when he got conclusions that fit his prediction? On the other hand, he can’t be trying too hard to reject all evidence that conflicts with his predictions, since he occasionally reports evidence that conflicts with his predictions (e.g. tobacco use).

He reports that fertility is heritable, and finds that puzzling. He gives a kin selection based argument saying that someone with many siblings ought to put more effort into their siblings’ reproductive success and less into personally reproducing. But I see no puzzle – I expect people to have varying intuitions about whether the current abundance of food will last, and pursue different strategies, some of which will be better if food remains abundant, and others better if overpopulation produces a famine.

He’s eager to sound controversial, and his chapter titles will certainly offend some people. Sometimes those are backed up by genuinely unpopular claims, sometimes the substance is less interesting. E.g. the chapter title “Why Homosexuals Are More Intelligent than Heterosexuals” says there’s probably no connection between intelligence and homosexual desires, but there’s a connection between intelligence and how willing people are to act on those desires (yawn).

Here is some evidence against his main hypothesis.

Book review: The Beginning of Infinity by David Deutsch.

This is an ambitious book centered around the nature of explanation, why it has been an important part of science (misunderstood by many who think of science as merely prediction), and why it is important for the future of the universe.

He provides good insights on the jump during the Enlightenment to thinking in universals (e.g. laws of nature that apply to a potentially infinite scope). But he overstates some of its implications. He seems confident that greater-than-human intelligences will view his concept of “universal explainers” as the category that identifies which beings have the rights of people. I find this about as convincing as attempts to find a specific time when a fetus acquires the rights of personhood. I can imagine AIs deciding that humans fail often enough at universalizing their thought to count as less than persons, or deciding that monkeys are on a trajectory toward the same kind of universality.

He neglects to mention some interesting evidence of the spread of universal thinking – James Flynn’s explanation of the Flynn Effect documents that low IQ cultures don’t use the abstract thought that we sometimes take for granted, and describes IQ increases as an escape from concrete thinking.

Deutsch has a number of interesting complaints about people who attempt science but are confused about the philosophy of science, such as people who imagine that measuring heritability of a trait tells us something important without further inquiry – he notes that being enslaved was heritable in 1860, but that was useless for telling us how to change slavery.

He has interesting explanations for why anthropic arguments, the simulation argument, and the doomsday argument are weaker in a spatially infinite universe. But I was disappointed that he didn’t provide good references for his claim that the universe is infinite – a claim which I gather is controversial and hasn’t gotten as much attention as it deserves.

He sometimes gets carried away with his ambition and seems to forget his rule that explanations should be hard to vary in order to make it hard to fool ourselves.

He focuses on the beauty of flowers in an attempt to convince us that beauty is partially objective. But he doesn’t describe this objective beauty in a way that would make it hard to alter to fit whatever evidence he wants it to fit. I see an obvious alternative explanation for humans finding flowers beautiful – they indicate where fruit will be.

He argues that creativity evolved to help people find better ways of faithfully transmitting knowledge (understanding someone can require creative interpretation of the knowledge that they are imperfectly expressing). That might be true, but I can easily create other explanations that fit the evidence he’s trying to explain, such as that creativity enabled people to make better choices about when to seek a new home.

He imagines that he has a simple way to demonstrate that hunter-gatherer societies could not have lived in a golden age (the lack of growth of their knowledge):

Since static societies cannot exist without effectively extinguishing the growth of knowledge, they cannot allow their members much opportunity to pursue happiness.

But that requires implausible assumptions such as that happiness depends more on the pursuit of knowledge than availability of sex. And it’s not clear that hunter-gatherer societies were stable – they may have been just a few mistakes away from extinction, and accumulating knowledge faster than any previous species had. (I think Deutsch lives in a better society than hunter-gatherers, but it would take a complex argument to show that the average person today does).

But I generally enjoyed his arguments even when I thought they were wrong.

See also the review in the New York Times.

Book review: Inside Jokes – Using Humor to Reverse-Engineer the Mind, by Matthew M. Hurley, Daniel C. Dennett and Reginald B. Adams, Jr.

This book has the best explanation I’ve seen so far of why we experience humor. The simplistic summary is that it is a reward for detecting certain kinds of false assumptions. And after it initially evolved it has been adapted to additional purposes (signaling one’s wit), and exploited by professional comedians in the way that emotions which reward reproductive functions are exploited by pornography.

Some of the details of which false beliefs qualify as a source of humor and how diagnosing them to be false qualifies as a source of humor seem arbitrary enough that the theory falls well short of the kind of insight that tempts me to say “that’s obvious, why didn’t I think of that?”. And a few details seem suspicious – the claims that people are averse to being tickled and that one sensation tickling creates is that of being attacked don’t seem consistent with my experience.

They provide some clues about the precursors of humor in other species (including laughter, which apparently originated independently from humor as a “false alarm” signal), and give some hints about why the greater complexity of the human mind triggered a more complex version of humor than the poorly understood versions that probably exist in some other species.

The book has some entertaining sections, but the parts that dissect individual jokes are rather tedious. Also, don’t expect this book to be of much help at generating new and better humor – it does a good job of clarifying how to ruin a joke, but it also explains why we should expect creating good jokes to be hard.

Book review: Thinking, Fast and Slow, by Daniel Kahneman.

This book is an excellent introduction to the heuristics and biases literature, but only small parts of it will seem new to those who are familiar with the subject.

While the book mostly focuses on conditions where slow, logical thinking can do better than fast, intuitive thinking, I find it impressive that he was careful to consider the views of those who advocate intuitive thinking, and that he collaborated with a leading advocate of intuition to resolve many of their apparent disagreements (mainly by clarifying when each kind of thinking is likely to work well).

His style shows that he has applied some of the lessons of the research in his field to his own writing, such as by giving clear examples. (“Subjects’ unwillingness to deduce the particular from the general was matched only by their willingness to infer the general from the particular”).

He sounds mildly overconfident (and believes mild overconfidence can be ok), but occasionally provides examples of his own irrationality.

He has good advice for investors (e.g. reduce loss aversion via “broad framing” – think of a single loss as part of a large class of results that are on average profitable), and appropriate disdain for investment advisers. But he goes overboard when he treats the stock market as unpredictable. The stock market has some real regularities that could be exploited. Most investors fail to find them because they see many more regularities than are real, are overconfident about their ability to distinguish the real ones, and because it’s hard to distinguish valuable feedback (which often takes many years to get) from misleading feedback.

I wish I could find an equally good book about overusing logical analysis when I want the speed of intuition (e.g. “analysis paralysis”).