Life, the Universe, and Everything

Doctors are more willing to prescribe Viagra than cognitive enhancement drugs.

Why?

The report wonders whether it’s due to conservative tendencies among doctors. But Viagra and Modafinil both became available in the U.S. in 1998, so conservatism alone doesn’t explain why doctors have been slower to accept Modafinil than Viagra. Combined with more patients asking for Viagra, though, it might be plausible.

Concern over side effects might explain why doctors are less comfortable with Ritalin, but not why three different cognitive enhancing drugs all produced similar comfort levels – about half that of Viagra. And I see no signs that Modafinil is much riskier than Viagra.

Could it be concern that Viagra has an equalizing effect (making people more normal), whereas cognitive enhancers make people who can afford them smarter than the less fortunate? Partly – doctors were more willing to prescribe cognitive enhancers for older patients than younger ones. But the cross-drug comparisons were done for a case where “the patient was a 40-year-old reporting symptoms consistent with the label indications for the respective drug”. I’m pretty sure the label indications describe a patient who is functioning well below normal.

The obvious conclusion is that part of what’s happening is that doctors believe sex produces larger benefits than cognitive enhancement. If we ignore potentially important externalities such as sexually transmitted diseases versus improved science/technology (would doctors admit to doing that?), I could make a decent case for sex being more valuable. There’s no shortage of evidence that sex makes people happy, whereas there seems to be little or no correlation between cognitive ability and happiness.

(HT YourBrainonDrugs.net).

Book review: Counterclockwise: Mindful Health and the Power of Possibility, by Ellen J. Langer.

This book presents ideas about how attitudes and beliefs can alter our health and physical abilities.

The book’s name comes from a 1979 study that the author performed that made nursing home residents act and look younger by putting them in an environment that reminded them of earlier days and by treating them as capable of doing more than most expected they could do.

One odd comment she makes is that there were no known measures of aging other than chronological age at the time of the 1979 study. She goes on to imply that little has changed since then – but it took me little effort to find info about a 1991 book, Biomarkers, which made a serious attempt at filling this void.

She disputes claims such as those popularized by Atul Gawande that teaching doctors to act more like machines (following checklists) will improve medical practice. She’s concerned that reducing the diversity of medical opinions will reduce our ability to benefit from getting a second opinion that could detect a mistake in the original diagnosis, and cites evidence that North Carolina residents have an unusually high tendency to seek second opinions, and also have signs of better health. But this only tells me that with little use of checklists, getting a second opinion is valuable. That doesn’t say much about whether adopting a culture of using checklists is better than adopting a culture of seeking second opinions. The North Carolina evidence doesn’t suggest a large enough health benefit to provide much competition with the evidence for checklists.

One surprising report is that cultures with positive views of aging seem to produce older people who have better memory than other cultures. It’s not clear what the causal mechanism is, but with the evidence coming from groups as different as mainland Chinese and deaf Americans, it seems likely that the beliefs cause the better memory rather than the better memory causing the beliefs.

Two interesting quotes from the book:

certainty is a cruel mindset

to tell us we’re “terminal” may be a self-fulfilling prophecy. There are no records of how often doctors have been correct or not after making this prediction.

Discussions asking whether “Snowball Earth” triggered animal evolution (see the bottom half of that page) suggest increasing evidence that the Snowball Earth hypothesis may explain an important part of why spacefaring civilizations seem rare.

photosynthetic organisms are limited by nutrients, most often nitrogen or phosphorous

the glaciations led to high phosphorous concentrations, which led to high productivity, which led to high oxygen in the oceans and atmosphere, which allowed for animal evolution to be triggered and thus the rise of the metazoans.

This seems quite speculative, but if true it might mean that our planet needed a snowball earth effect for complex life to evolve, but also needed that snowball earth period to be followed by hundreds of millions of years without another snowball earth period that would wipe out complex life. It’s easy to imagine that the conditions needed to produce one snowball earth effect make it very unusual for the planet to escape repeated snowball earth events for as long as it did, thus explaining more of the Fermi paradox than seemed previously possible.

Some quotes from Bacteria ‘R’ Us:

the vast majority — estimated by many scientists at 90 percent — of the cells in what you think of as your body are actually bacteria

researchers describe bacteria that communicate in sophisticated ways, take concerted action, influence human physiology, alter human thinking and work together to bioengineer the environment. These findings may foreshadow new medical procedures that encourage bacterial participation in human health.

Many researchers are coming to view such diseases as manifestations of imbalance in the ecology of the microbes inhabiting the human body. If further evidence bears this out, medicine is about to undergo a profound paradigm shift, and medical treatment could regularly involve kindness to microbes.

bacteria “have to have a reason to hurt you.” Surgery is just such a reason.

bacteria that have antibiotic-resistance genes advertise the fact, attracting other bacteria shopping for those genes; the latter then emit pheromones to signal their willingness to close the deal. These phenomena, Herbert Levine’s group argues, reveal a capacity for language long considered unique to humans.

Despite strong opposition, a little progress is being made at informing consumers about medical quality and prices.

Healthcare Blue Book has some info about normal prices for standard procedures.

Healthgrades has some information about which hospitals produce the best outcomes (although more of the site seems devoted to patient ratings of doctors, which probably don’t make much distinction between rudeness and killing the patient).

Insurers are trying to create rating systems, but reports are vague about what they’re rating.

One objection to ratings is that

such measures can be wrong more than 25 percent of the time

A 25 percent error rate sounds like a valuable improvement over the near-blind guesses that consumers currently make. Does anyone think that info such as years of experience, university attended, or ability to produce reassuring rhetoric yields an error rate as low as 25 percent? Do medical malpractice suits catch the majority of poor doctors without targeting many good ones? (There are some complications due to some insurers wanting to combine quality-of-outcome ratings with cost ratings – those ought to be available separately). Are there better ways of evaluating which doctors produce healthy results that haven’t been publicized?

More likely, doctors want us to believe that we should just trust them rather than try to evaluate their quality. I might consider that if I could see that the profession was aggressively expelling those who make simple, deadly mistakes such as failing to wash their hands between touching patients.

Some comments on last weekend’s Foresight Conference:

At lunch on Sunday I was in a group dominated by a discussion between Robin Hanson and Eliezer Yudkowsky over the relative plausibility of new intelligences having a variety of different goal systems versus a single goal system (as in a society of uploads versus Friendly AI). Some of the debate focused on how unified existing minds are, with Eliezer claiming that dogs mostly don’t have conflicting desires in different parts of their minds, and Robin and others claiming such conflicts are common (e.g. when deciding whether to eat food the dog has been told not to eat).

One test Eliezer suggested for the power of systems with a unified goal system is that if Robin were right, bacteria would have outcompeted humans. That got me wondering whether there’s an appropriate criterion by which humans can be said to have outcompeted bacteria. The most obvious criterion on which humans and bacteria are trying to compete is how many copies of their DNA exist. Using biomass as a proxy, bacteria are winning by several orders of magnitude. Another possible criterion is impact on large-scale features of Earth. Humans have not yet done anything that seems as big as the catastrophic changes to the atmosphere (“the oxygen crisis”) produced by bacteria. Am I overlooking other appropriate criteria?
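The biomass comparison above can be made concrete with rough arithmetic. The figures below are order-of-magnitude estimates from the scientific literature, not from the post itself, and are only meant to illustrate the gap:

```python
import math

# Rough comparison of DNA-copy "success" using biomass as a proxy.
# Both figures are order-of-magnitude literature estimates, not exact values.
bacterial_biomass_gt_c = 400   # gigatonnes of carbon in bacteria (rough estimate)
human_biomass_gt_c = 0.06      # gigatonnes of carbon in humans (rough estimate)

ratio = bacterial_biomass_gt_c / human_biomass_gt_c
print(round(math.log10(ratio)))  # roughly 4 orders of magnitude in bacteria's favor
```

Even with generous uncertainty in either estimate, the conclusion that bacteria are winning by several orders of magnitude on this criterion is robust.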

Kartik Gada described two humanitarian innovation prizes that bear some resemblance to a valuable approach to helping the world’s poorest billion people, but will be hard to turn into something with a reasonable chance of success. The Water Liberation Prize would be pretty hard to judge. Suppose I submit a water filter that I claim qualifies for the prize. How will the judges test the drinkability of the water and the reusability of the filter under common third world conditions (which I suspect vary a lot and which probably won’t be adequately duplicated where the judges live)? Will they ship sample devices to a number of third world locations and ask whether they produce water that tastes good, or will they do rigorous tests of water safety? With a hoped-for prize of $50,000, I doubt they can afford very good tests. The Personal Manufacturing Prizes seem somewhat more carefully thought out, but need some revision. The “three different materials” criterion is not enough to rule out overly specialized devices without some clear guidelines about which differences are important and which are trivial. Setting specific award dates appears to assume an implausible ability to predict how soon such a device will become feasible. The possibility that some parts of the device are patented is tricky to handle, as it isn’t cheap to verify the absence of crippling patents.

There was a debate on futarchy between Robin Hanson and Mencius Moldbug. Moldbug’s argument seems to boil down to the absence of a guarantee that futarchy will avoid problems related to manipulation/conflicts of interest. It’s unclear whether he thinks his preferred form of government would guarantee any solution to those problems, and he rejects empirical tests that might compare the extent of those problems under the alternative systems. Still, Moldbug concedes enough that it should be possible to incorporate most of the value of futarchy within his preferred form of government without rejecting his views. He wants to limit trading to the equivalent of the government’s stockholders. Accepting that limitation isn’t likely to impair the markets much, and may make futarchy more palatable to people who share Moldbug’s superstitions about markets.

Book review: Outliers: The Story of Success by Malcolm Gladwell.

Gladwell has taken what would be a few ordinary blog posts and added enough eloquent fluff to them to make them into a book. There is probably a good deal of truth to his conclusions, but the evidence is much weaker than he wants you to think.

For his claim that 10,000 hours of practice are needed to become an expert, he doesn’t discuss the possibility that the causality often runs the opposite way: having the talent to become an expert creates a desire to practice a lot. He gives at least one example where the person seemed to lack expertise before getting the 10,000 hours of practice, but it’s not hard to imagine a variety of immaturity-related reasons why that might happen without the amount of practice causing the expertise.

I’m confused by his claims about how much practice he thinks the Beatles had before becoming successful. He points to somewhere between 1,200 and 1,800 hours of practice they had by late 1962 (which is about when Wikipedia indicates they became successful in the UK). Gladwell seems to say they weren’t successful until they came to the US in February 1964. He implies that they had 10,000 hours of practice by then, but I don’t see how he could claim they had much more than 3,000 hours of practice by then. So calling the 10,000 hour estimate a rule appears to involve a good deal of exaggeration.
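The arithmetic above can be checked with a back-of-the-envelope sketch. The starting figure comes from the review itself; the hours-per-day pace is my own deliberately generous assumption, not Gladwell’s:

```python
# Rough check of the Beatles practice-hours claim.
# ~1,500 hours is the midpoint of the 1,200-1,800 range cited for late 1962;
# the 8 hours/day pace is an illustrative, deliberately generous assumption.
hours_by_late_1962 = 1500
months_until_feb_1964 = 15   # late 1962 -> February 1964
hours_per_day = 8            # assumed; real schedules were surely lower on average
days_per_month = 30

additional = months_until_feb_1964 * days_per_month * hours_per_day
total = hours_by_late_1962 + additional
print(total)  # 5100 -- well short of 10,000 even at this implausible pace
```

Even granting practice every single day at a pace few bands sustain, the total falls far short of 10,000 hours by February 1964.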

How can a hospital-like business operating outside of existing territorial jurisdictions avoid harassment by governments whose medical lobbies want to spread FUD?

Given that these businesses will initially have no track record to point to and less protection than existing medical tourism providers from whatever government provides a flag of convenience to the business, merely providing comparable quality medical care won’t be enough for such businesses to thrive. So I’m proposing practices which could enable those businesses to argue that current U.S. hospitals are more dangerous. I’m not suggesting this just for marketing purposes – I want safe hospitals to be available, and regulatory costs in the U.S. make it easier to start an innovative hospital offshore than in the U.S. (especially for types of innovation that don’t respect doctors’ prestige).

It has been known since 1847 that doctors kill patients by failing to wash their hands often enough. Yet this threat is still common. An offshore hospital could offer patients documentation showing when medical personnel who touch the patient washed their hands (e.g. by providing the patient with video recordings of the procedures sufficient for the patient to verify cleanliness), with a double-your-money-back guarantee. There are many other less common errors that patients could use such videos to check for.

The book Counting Sheep argues that hospitals often impair patients’ health by disturbing their sleep. Paying patients if night-time noise or light levels exceed some pre-specified limits should reduce this problem.

Next, I want the hospital’s fee structure to give it increased incentives to avoid failure. For procedures with objectively measurable results, I want the hospital to charge the patient only if those results are achieved, and to pay the patient some pre-specified amount if results leave the patient measurably worse off. (For hard-to-measure results such as change in pain, this approach won’t work).
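The contingent fee structure described above can be sketched as a simple settlement rule. The function name, thresholds, and dollar amounts here are all hypothetical, just to make the incentive structure concrete:

```python
def settle_fee(base_fee, penalty, outcome_score, success_threshold, harm_threshold):
    """Hypothetical result-contingent billing rule.

    Returns the net amount the patient owes; a negative value means the
    hospital pays the patient. Both thresholds are assumed to be objectively
    measurable and agreed on before the procedure.
    """
    if outcome_score >= success_threshold:
        return base_fee      # result achieved: patient pays in full
    if outcome_score <= harm_threshold:
        return -penalty      # patient measurably worse off: hospital pays out
    return 0                 # inconclusive result: no charge either way

# Example: a $20,000 procedure with a $10,000 pre-specified penalty.
print(settle_fee(20000, 10000, outcome_score=0.9,
                 success_threshold=0.8, harm_threshold=0.2))  # 20000
print(settle_fee(20000, 10000, outcome_score=0.1,
                 success_threshold=0.8, harm_threshold=0.2))  # -10000
```

Note how the middle band (no charge, no payout) keeps the hospital from being punished for ambiguous outcomes while still making failure costly.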

The article You Get What You Pay For: Result-Based Compensation for Health Care has more extensive discussion of incentives and of strategies that hospitals might use to reduce the rate at which they harm patients.