Archives

All posts by Peter

Book review: Wired for War: The Robotics Revolution and Conflict in the 21st Century, by P. W. Singer.

This book covers a wide range of topics related to robotics and war. The author has put a good deal of thought into which topics we ought to pay attention to, but provides few answers about how to avoid the problems he identifies. The style is entertaining. That doesn’t necessarily interfere with the substance, but I suspect the style led the author to be a bit more superficial than he ought to be.

I’m disappointed by his three-paragraph treatment of EMP risks. He understands that EMPs could cause major problems, but he apparently failed to find any of the ideas people have proposed for mitigating the risk.

With some lesser-known risks, the attention he provides may help reduce the danger. For instance, he identifies overconfidence as an important cause of war, and points out that the hype often created by designers of futuristic devices such as robots can cause leaders to overestimate their military value. This ought to be repeated widely enough that leaders become aware of the danger.

He expresses some interesting concerns about how unmanned vehicles blur the lines between soldiers in battle and innocent civilians. Is a civilian technician who is actively working on an autonomous vehicle that is about to engage in hostile action against an enemy an ‘illegal combatant’? Is a pilot who walks to work in Nevada to fly a drone that will drop bombs in Afghanistan a legitimate military target?

Book review: Leadership and Self-Deception: Getting out of the Box, by the Arbinger Institute.

Although it is marketed mainly at corporate executives, this book’s advice is important for most interactions between people. Executives have more to gain from it, but I suspect they’re somewhat less willing to believe it.

I had already learned a lot about self-deception before reading this, but this book clarifies how to recognize and correct common instances in which I’m tempted to deceive myself. More importantly, it provides a way to explain self-deception to a number of people. I had previously despaired of explaining my understanding of self-deception to people who hadn’t already sought out the ideas I’d found. Now I can point people to this book. But I still can’t summarize it in a way that would change many people’s minds.

It’s written mostly as a novel, which makes it very readable without sacrificing much substance.

Some of the book’s descriptions don’t sound completely right to me. They describe people as acting “inside the box” or “outside the box” with respect to another person (not the same as the standard meaning of “thinking outside the box”) as if people normally did one or the other, whereas I think I often act somewhere in between those two modes. Also, the term “self-betrayal”, which I’d describe as acting selfishly and rationalizing the act as selfless, should not be portrayed as if the selfishness automatically causes self-deception. If people felt a little freer to admit that they act selfishly, they’d be less tempted to deceive themselves about their motives.

The book seems a bit too rosy about the benefits of following its advice. For instance, the book leaves the reader to imagine that Semmelweis benefited from admitting that he had been killing patients. Other accounts of Semmelweis suggest that he suffered, and that the doctors who remained in denial prospered. Maybe he would have done much better if he had understood this book and been able to adopt its style. But it’s important to remember that self-deception isn’t an accident. It happens because it has sometimes worked.

Some quotes from Bacteria ‘R’ Us:

the vast majority — estimated by many scientists at 90 percent — of the cells in what you think of as your body are actually bacteria

researchers describe bacteria that communicate in sophisticated ways, take concerted action, influence human physiology, alter human thinking and work together to bioengineer the environment. These findings may foreshadow new medical procedures that encourage bacterial participation in human health.

Many researchers are coming to view such diseases as manifestations of imbalance in the ecology of the microbes inhabiting the human body. If further evidence bears this out, medicine is about to undergo a profound paradigm shift, and medical treatment could regularly involve kindness to microbes.

bacteria “have to have a reason to hurt you.” Surgery is just such a reason.

bacteria that have antibiotic-resistance genes advertise the fact, attracting other bacteria shopping for those genes; the latter then emit pheromones to signal their willingness to close the deal. These phenomena, Herbert Levine’s group argues, reveal a capacity for language long considered unique to humans.

Prizes

Aubrey de Grey has a good interview in Wired. I want to object to one claim:

You want prizes to be ways to attract people who get scared when you talk about science for more than ten seconds. So the language has to be very glitzy and superficial and populist. Whereas, a foundation that’s trying to get money to put toward research, you want to look really knowledgeable and responsible and low-key.

A ten-second soundbite may be very important for a prize to acquire a widespread reputation, but there’s more to attracting large donors than that.

Aubrey later says a major hurdle to getting large donations for his research is

3) You’ve got to believe the organization you’re thinking of giving the money to actually has the ability to execute [a promising plan]

Anyone familiar with the difficulty of funding technology startups can see that even people who enjoy talking about science usually fail to predict how well an organization will implement a plan. This is exactly why wise people who understand Aubrey’s vision will mostly prefer to donate to prizes rather than to his research. The knowledge required to predict whether the Methuselah Foundation will reward progress at slowing senescence is much less than the knowledge required to evaluate a research project. The prize should at least partly transfer the responsibility for spending the money wisely to the researchers who are most informed about their own projects.

Switch

Book review: Switch: How to Change Things When Change Is Hard, by Chip and Dan Heath.

This book uses an understanding of the limits to human rationality to explain how it’s sometimes possible to make valuable behavioral changes, mostly in large institutions, with relatively little effort.

The book presents many anecdotes about people making valuable changes, often demonstrating unusually creative thought. The theories about why the changes worked are not very original, but are presented better than in most other books.

Some of the successes are sufficiently impressive that I wonder whether they cherry-picked too much and made it look too easy. One interesting example that is a partial exception to this pattern is a comparison of two hospitals that tried to implement the same change, with one succeeding and the other failing. Even with a good understanding of the book’s ideas, few people looking at the differences between the hospitals would notice the importance of whether small teams met for afternoon rounds at patients’ bedsides or in a lounge where other doctors overheard the discussions.

They aren’t very thoughtful about whether the goals are wise. This mostly doesn’t matter, although it is strange to read on page 55 about a company that succeeded by focusing on short-term benefits to the exclusion of long-term benefits, and then on page 83 about a plan to get businesses to adopt a longer-term focus.

Despite strong opposition, a little progress is being made at informing consumers about medical quality and prices.

Healthcare Blue Book has some info about normal prices for standard procedures.

Healthgrades has some information about which hospitals produce the best outcomes (although more of the site seems devoted to patient ratings of doctors, which probably don’t make much distinction between rudeness and killing the patient).

Insurers are trying to create rating systems, but reports are vague about what they’re rating.

One objection to ratings is that

such measures can be wrong more than 25 percent of the time

A 25 percent error rate sounds like a valuable improvement over the near-blind guesses that consumers currently make. Does anyone think that information such as years of experience, university attended, or a talent for reassuring rhetoric produces an error rate as low as 25 percent? Do medical malpractice suits catch the majority of poor doctors without targeting many good ones? (There are some complications due to some insurers wanting to combine quality-of-outcome ratings with cost ratings – those ought to be available separately.) Are there better ways of evaluating which doctors produce healthy results that haven’t been publicized?
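
To make that concrete, here is a back-of-the-envelope sketch with invented numbers (the 20 percent base rate of poor doctors, and the assumption that the rating misclassifies any doctor with the same 25 percent probability, are mine, not anything from the reports):

    # Rough sketch with invented numbers: how much could a rating that is
    # wrong 25% of the time reduce a consumer's chance of picking a poor doctor?

    def p_poor_given_rated_good(base_rate_poor, error_rate):
        """P(doctor is poor | rating says 'good'), assuming the rating
        misclassifies any doctor with probability error_rate."""
        mislabeled_poor = base_rate_poor * error_rate              # poor, but rated good
        correctly_good = (1 - base_rate_poor) * (1 - error_rate)   # good, and rated good
        return mislabeled_poor / (mislabeled_poor + correctly_good)

    base_rate = 0.20  # hypothetical: 20% of doctors are poor
    print(base_rate)                                            # choosing blindly: 0.2
    print(round(p_poor_given_rated_good(base_rate, 0.25), 3))   # with the rating: 0.077

Even with those made-up numbers, following the rating cuts the chance of ending up with a poor doctor from 20 percent to under 8 percent, so the 25 percent figure needs to be compared against consumers’ actual alternatives, not against perfection.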

More likely, doctors want us to believe that we should just trust them rather than try to evaluate their quality. I might consider that if I could see that the profession was aggressively expelling those who make simple, deadly mistakes such as failing to wash their hands between patients.

Choke

Book review: Choke: What the Secrets of the Brain Reveal About Getting It Right When You Have To, by Sian Beilock.

This book provides some clues about why pressure causes some people to perform less well than they otherwise would, and gives simple (but not always easy) ways to reduce that effect. There’s a good deal of overlap between this book’s advice and other self-improvement advice. The book modestly enhances how I think about the techniques and how motivated I am to use them.

The main surprise about the causes is that people with large working memories are more likely to choke because they’re more likely to over-analyze a problem, presumably because they’re better at analyzing problems. They’re also less creative. There are also interesting comments about the role of small working memories in ADHD.

The book includes some interesting comments on how SAT tests provide misleading evidence of sexual differences in ability, and how social influences can affect those differences (for example, having a more feminine name makes a girl less likely to learn math).

The book’s style is unusually pleasant.

Brainiac Dating recently added two search features which create the potential for it to be one of my favorite dating sites: one that matches people based on books listed in their profiles, and one that matches people based on overlap of all the words in their profiles.
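
Brainiac Dating doesn’t say how it scores word overlap. As a rough illustration of the idea, here is a minimal sketch that compares two profiles by Jaccard similarity over their word sets; the profiles and the scoring rule are my own assumptions, not the site’s actual algorithm:

    # Illustrative word-overlap matcher (not Brainiac Dating's actual algorithm).
    # Scores a pair of profiles by Jaccard similarity over their word sets.

    import re

    def words(profile_text):
        """Lowercase the profile and split it into a set of words."""
        return set(re.findall(r"[a-z']+", profile_text.lower()))

    def overlap_score(profile_a, profile_b):
        """Shared words divided by total distinct words (0.0 to 1.0)."""
        a, b = words(profile_a), words(profile_b)
        return len(a & b) / len(a | b) if (a | b) else 0.0

    # Toy profiles, invented for illustration.
    alice = "I read Godel Escher Bach and books about economics and evolution."
    bob = "Favorite books: Godel Escher Bach, plus anything on economics."
    print(round(overlap_score(alice, bob), 2))  # 0.36

A real matcher would presumably weight rare words more heavily than common ones, since two profiles that share book titles say more than two that share the word “the”, but the basic idea is just set overlap.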

Unfortunately, before adding those features the site was sufficiently uninspired that few people joined, so it’s hard to verify that the features are working as advertised. Please spread the word that they’ve become worth trying.

Ken Hayworth has created an interesting prize for Brain Preservation Technology, designed to improve techniques relevant to cryonics and mind uploading, but also intended to serve goals that don’t require preserving individual identity (such as better understanding of generic brains).

Many of the prize criteria are well thought out, especially the ones concerning quality of preservation. But there are a few criteria for which it’s hard to predict how the judges would evaluate a proposed technique, and those will significantly impair the effectiveness of the prize.

The requirement that it have the potential to be performed for less than $20,000 requires a number of subjective judgments, such as the cost of training the necessary personnel (which will be affected by the quality of the trainers and trainees).

The requirement that it “be absolutely safe for the personnel involved” would seem to be prohibitive if I try to interpret it literally. A somewhat clearer approach would be to require that it be at least as safe as some commonly performed procedure. But the effort required to compare risks will be far from trivial.

The requirement that we have reason to expect the preserved brains to remain stable for 100 years depends on some assumptions that aren’t well explained, such as why a shorter time period wouldn’t be enough (which depends on the specific goals of preservation and on predictions about how fast technology progresses), and what we should look at to estimate the durability – I suspect the obstacles to long-term stability are different for different techniques.

(I noticed this prize in connection with the ASIM 2010 conference, although I didn’t get much out of the part of the conference that I was able to attend).

Book review: Drive: The Surprising Truth About What Motivates Us, by Daniel H. Pink.

This book explores some of the complexities of what motivates humans. It attacks a stereotype that says only financial rewards matter, but exaggerates the extent to which people actually adopt that fallacy. His style is similar to Malcolm Gladwell’s, but with more substance.

The book’s advice is likely to cause some improvement in how businesses are run and in how people choose careers. But I wonder how many bosses will ignore it because their desire to exert control over people outweighs their desire to create successful companies.

I’m not satisfied with the way he and others classify motivations as intrinsic and extrinsic. While feelings of flow may be almost entirely internally generated, other motivations that he classifies as intrinsic seem to involve an important component of feeling that others are rewarding you with higher status/reputation.

Shirking may have been an important problem a century ago for which financial rewards were an appropriate solution, but the nature of work has changed so that it’s much less common for workers to want to put less effort into a job. The author implies that this means standard financial rewards have become fairly unimportant factors in determining productivity. I think he underestimates the role they play in determining how goals are prioritized.

He believes the change in work that reduced the importance of financial incentives was the replacement of rule-following routine work with work that requires creativity. I suggest that another factor was that in 1900, work often required muscle-power that consumed almost as much energy as a worker could afford to feed himself.

He states his claims vaguely enough that they could be interpreted as implying that broad categories of financial incentives (including stock options and equity) work poorly. I checked one of the references that sounded like it might address that (“When performance-related pay backfires”), and found it only dealt with payments for completing specific tasks.

His complaints about excessive focus on quarterly earnings probably have some value, but it’s important to remember that it’s easy to err in the other direction as well (the dot-com bubble seemed to coincide with an unusual amount of effort at focusing on earnings 5 to 10 years away).

I’m disappointed that he advises not to encourage workers to compete against each other without offering evidence about its effects.

One interesting story is the bonus system at Kimley-Horn and Associates, where any employee can award another employee $50 for doing something exceptional. I’d be interested in more tests of this – is there something special about Kimley-Horn that prevents abuse, or would it work in most companies?