Science and Technology

ADHD and Autism

My understanding of Asperger’s/autism (AS) and ADHD suggests to me that they can’t coexist in one personality. Yet I keep coming across reports of people having both, and I’ve been trying to determine whether those reports result from mistaken diagnoses or whether I’m missing something. I haven’t found any insightful discussion that addresses this directly. Some research I’ve done on ADHD recently has clarified my ideas on the subject.

Both produce social problems due to unusual ways in which attention works, and both involve unusually focused attention, which produces a good deal of overlap in symptoms. But there are many other features on which the two seem to be opposites.

ADHD – shifts attention easily and quickly in response to new stimuli
AS – slow to shift attention in response to stimuli

ADHD – seeks adrenaline rush from stimulating/risky situations
AS – avoids being overwhelmed by stimulating/risky situations

ADHD – often pays attention to multiple tasks at once
AS – finds multitasking unusually hard

From answers.com:

ADHD – makes inappropriate comments due to being impulsive but realizes afterward it was inappropriate

AS – makes inappropriate comments due to not knowing better and not understanding social conventions

ADHD – forgets details of daily routines

AS – follows daily routines rigidly

From another source:

Children with ADHD frequently break rules they understand, but defy and dislike. Children with Asperger’s Syndrome like rules, and break the ones they don’t understand. They are ever alert to injustice and unfairness and, unfortunately, these are invariably understood from their own nonnegotiable perspective. Children with ADHD are often oppositional in the service of seeking attention. Children with Asperger’s disorder are oppositional in the service of avoiding something that makes them anxious.

With all these traits, there are wide variations in the degree to which anyone has them. But most of what I know suggests that people on one side of the AS/ADHD spectrum with regard to one of these traits are almost always on the same side with respect to the others, or else too close to the middle to classify.

Since many of these traits are poorly observed by those who diagnose them (you don’t observe people’s daily routines in a doctor’s office), it’s easy to imagine that widespread mistakes in diagnoses create a false impression that AS and ADHD coexist. Does anyone know of a good analysis that disagrees with my conclusion?

[Update 2010-11-15: I’ve gotten some feedback from people with some ADHD traits who don’t clearly fit the pattern I’ve described. Maybe my analysis only works for one subtype of ADHD, or maybe things are too complex for any existing categories to work as well as I’d like.]

The Chimera Hypothesis: Homosexuality and Plural Pregnancy makes the surprising claim that:

at least 50-70% of healthy adults are chimeric to some extent.

The hypothesis can explain some homosexual and transgender tendencies, and suggests some reproductive advantages that would offset the accompanying reproductive disadvantages:

Therefore, women prone to having more than one egg fertilized, but whose pregnancies resulted in only one live birth, would have the optimum level of fertility. A side effect of this could be an increased incidence of chimerism in human children … even a relatively small increase in female fertility, which is really the limiting factor in human population growth, could outweigh the disadvantage of less fertility in a small number of male infants.

One observation that this hypothesis doesn’t explain is why there are many more homosexuals than transgenders. So my guess is that it explains only a modest fraction of the homosexuality that we observe, but might explain the observed frequency of transgenders fairly well.

Rob Freitas has a good report analyzing how to use molecular nanotechnology to return atmospheric CO2 to pre-industrial levels by about 2060 or 2070.

My only complaint is that his attempt to estimate the equivalent of Moore’s Law for photovoltaics looks too optimistic, as it puts too much weight on the 2006-2008 trend, which was influenced by an abnormal rise in energy prices. If the y-axis on that graph were logarithmic instead of linear, it would be easier to visualize the lower long-term trend.
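To illustrate the log-axis point, here’s a minimal sketch in Python (with made-up numbers, not data from Freitas’s report) comparing the same series on a linear and a logarithmic y-axis. On the log axis a steady exponential trend plots as a straight line, so a short-lived jump like the 2006-2008 one shows up as a deviation from the trend rather than dominating the picture.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical numbers: a steady ~7%/year improvement, plus an outsized
# jump in the last three years (a stand-in for the 2006-2008 anomaly).
years = np.arange(1990, 2009)
capacity = 100.0 * 1.07 ** (years - 1990)
capacity[-3:] *= [1.2, 1.5, 1.9]

fig, (ax_lin, ax_log) = plt.subplots(1, 2, figsize=(10, 4))
for ax, scale in ((ax_lin, "linear"), (ax_log, "log")):
    ax.plot(years, capacity, marker="o")
    ax.set_yscale(scale)  # the only difference between the two panels
    ax.set_title(f"{scale} y-axis")
    ax.set_xlabel("year")
    ax.set_ylabel("capacity (arbitrary units)")
fig.tight_layout()
plt.show()
```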

(HT Brian Wang).

Impro

Book review: Impro: Improvisation and the Theatre, by Keith Johnstone.

This book describes aspects of the human mind and social interactions that actors often need to analyze more explicitly than other people do, because actors need to be aware of the differences between the various roles/personalities that they play, whereas unconscious understanding is adequate for people who only interact as a single personality.

The best chapter is about status, and emphasizes the important role that status games play in most social situations and how hard it is to be aware of one’s status-related behavior.

One disturbing claim he makes is that “acquaintances become friends when they agree to play status games together”. I’m very tempted to deny that I do that (as he predicts most people will deny acting). But I know there’s more happening in social interactions than I’m aware of, so I’m hesitant to dismiss his claim.

The chapter on spontaneity contains what appear to be important insights about the role self-censorship plays in spontaneity and creativity. But I find it hard enough to change my behavior in response to those insights that I can’t be confident he’s correct.

He has the insight that “personality” functions as a public-relations department for the mind. Personality doesn’t seem like quite the right word here, but this is remarkably similar to an idea that Geoffrey Miller later developed from evolutionary theory in his excellent book The Mating Mind.

The chapter on masks and trance is strange and hard to evaluate.

Some comments on last weekend’s Foresight Conference:

At lunch on Sunday I was in a group dominated by a discussion between Robin Hanson and Eliezer Yudkowsky over the relative plausibility of new intelligences having a variety of different goal systems versus a single goal system (as in a society of uploads versus Friendly AI). Some of the debate focused on how unified existing minds are, with Eliezer claiming that dogs mostly don’t have conflicting desires in different parts of their minds, and Robin and others claiming such conflicts are common (e.g. when deciding whether to eat food the dog has been told not to eat).

One test Eliezer suggested for the power of systems with a unified goal system is that if Robin were right, bacteria would have outcompeted humans. That got me wondering whether there’s an appropriate criterion by which humans can be said to have outcompeted bacteria. The most obvious criterion on which humans and bacteria are trying to compete is how many copies of their DNA exist. Using biomass as a proxy, bacteria are winning by several orders of magnitude. Another possible criterion is impact on large-scale features of Earth. Humans have not yet done anything that seems as big as the catastrophic changes to the atmosphere (“the oxygen crisis”) produced by bacteria. Am I overlooking other appropriate criteria?

Kartik Gada described two humanitarian innovation prizes that bear some resemblance to a valuable approach to helping the world’s poorest billion people, but that will be hard to turn into something with a reasonable chance of success.

The Water Liberation Prize would be pretty hard to judge. Suppose I submit a water filter that I claim qualifies for the prize. How will the judges test the drinkability of the water and the reusability of the filter under common third world conditions (which I suspect vary a lot and which probably won’t be adequately duplicated where the judges live)? Will they ship sample devices to a number of third world locations and ask whether the devices produce water that tastes good, or will they do rigorous tests of water safety? With a hoped-for prize of $50,000, I doubt they can afford very good tests.

The Personal Manufacturing Prizes seem somewhat more carefully thought out, but need some revision. The “three different materials” criterion is not enough to rule out overly specialized devices without some clear guidelines about which differences are important and which are trivial. Setting specific award dates appears to assume an implausible ability to predict how soon such a device will become feasible. The possibility that some parts of the device are patented is tricky to handle, as it isn’t cheap to verify the absence of crippling patents.

There was a debate on futarchy between Robin Hanson and Mencius Moldbug. Moldbug’s argument seems to boil down to the absence of a guarantee that futarchy will avoid problems related to manipulation/conflicts of interest. It’s unclear whether he thinks his preferred form of government would guarantee any solution to those problems, and he rejects empirical tests that might compare the extent of those problems under the alternative systems. Still, Moldbug concedes enough that it should be possible to incorporate most of the value of futarchy within his preferred form of government without rejecting his views. He wants to limit trading to the equivalent of the government’s stockholders. Accepting that limitation isn’t likely to impair the markets much, and may make futarchy more palatable to people who share Moldbug’s superstitions about markets.

Book review: Moral Machines: Teaching Robots Right from Wrong by Wendell Wallach and Colin Allen.

This book combines the ideas of leading commentators on ethics, methods of implementing AI, and the risks of AI, into a set of ideas on how machines ought to achieve ethical behavior.

The book mostly provides an accurate survey of what those commentators agree and disagree about. But there’s enough disagreement that we need some insights into which views are correct (especially about theories of ethics) in order to produce useful advice to AI designers, and the authors don’t have those kinds of insights.

The book focuses mainly on the near-term risks of software that is much less intelligent than humans, and is complacent about the risks of superhuman AI.

The implications of superhuman AIs for theories of ethics ought to illuminate flaws in those theories that aren’t obvious when considering purely human-level intelligence. For example, the authors mention an argument that any AI would value humans for their diversity of ideas, which would help AIs to search the space of possible ideas. This seems to have serious problems, such as: what stops an AI from fiddling with human minds to increase their diversity? Yet the authors are too focused on human-like minds to imagine an intelligence that would do that.

Their discussion of the advocates of friendly AI seems a bit confused. The authors wonder whether those advocates are trying to quell apprehension about AI risks, when I’ve observed pretty consistent efforts by those advocates to create apprehension among AI researchers.

Book review: What Intelligence Tests Miss – The Psychology of Rational Thought by Keith E. Stanovich.

Stanovich presents extensive evidence that rationality is very different from what IQ tests measure, and the two are only weakly related. He describes good reasons why society would be better if people became more rational.

He is too optimistic that becoming more rational will help most people who accomplish it. Overconfidence provides widespread benefits to people who use it in job interviews, political discussions, etc.

He gives some advice on how to be more rational, such as thinking of the opposite of each new hypothesis you are about to start believing. But will training yourself to do that on test problems cause you to do it when it matters? I don’t see signs that Stanovich practiced it much while writing the book. The most important implication he wants us to draw from the book is that we should develop and use Rationality Quotient (RQ) tests for at least as many purposes as IQ tests are used. But he doesn’t mention any doubts that I’d expect him to have if he thought about how rewarding high RQ scores might affect the validity of those scores.

He reports that high IQ people can avoid some framing effects and overconfidence, but do so only when told to do so. Also, the sunk cost bias test looks easy to learn how to score well on, even when it’s hard to practice the right behavior – the Bruine de Bruin, Parker and Fischhoff paper that Stanovich implies is the best attempt so far to produce an RQ test lists a sample question for the sunk costs bias that involves abandoning food when you’re too full at a restaurant. It’s obvious what answer produces a higher RQ score, but that doesn’t say much about how I’d behave when the food is in front of me.

He sometimes writes as if rationality were as close to being a single mental ability as IQ is, but at other times he implies it isn’t. I needed to read the Bruine de Bruin, Parker and Fischhoff paper to get real evidence. Their path independence component looks unrelated to the others. The remaining components have enough correlation with each other that there may be connections between them, but those correlations are lower than the correlations between the overall rationality score and IQ tests. So it’s far from clear whether a single RQ score is better than using the components as independent tests.

Given the importance he attaches to testing for and rewarding rationality, it’s disappointing that he devotes so little attention to how to do that.

He has some good explanations of why evolution would have produced minds with the irrational features we observe. He’s much less impressive when he describes how we should classify various biases.

I was occasionally annoyed that he treats disrespect for scientific authority as if it were equivalent to irrationality. The evidence for Bigfoot or extraterrestrial visitors may be too flimsy to belong in scientific papers, but when he says there’s “not a shred of evidence” for them, he’s either using a meaning of “evidence” that’s inappropriate when discussing the rationality of people who may be sensibly lazy about gathering relevant data, or he’s simply wrong.

Book review: Create Your Own Economy: The Path to Prosperity in a Disordered World by Tyler Cowen.

This somewhat misleadingly titled book is mainly about the benefits of neurodiversity, how changing technology is changing our styles of thought, and how we ought to improve those styles.

His perspective on these subjects usually reflects a unique way of ordering his thoughts about the world. Few things he says seem particularly profound, but he persistently provides new ways to frame our understanding of the human mind that will sometimes yield better insights than conventional ways of looking at these subjects. Even if you think you know a good deal about autism, he’ll illuminate some problems with your stereotypes of autistics.

Even though it is marketed as an economics book, it has only about one page on financial matters. That page, however, is an eloquent summary of two factors that are important causes of our recent problems.

He’s an extreme example of an infovore who processes more information than most people can imagine. E.g. “Usually a blog will fail if the blogger doesn’t post … at least every weekday.” His idea of failure must be quite different from mine, as I more often stop reading a blog because it has too many posts than because it goes a few weeks without a post.

One interesting tidbit hints that healthcare costs might be high because telling patients their treatment was expensive may enhance the placebo effect, much like charging more for a given bottle of wine makes it taste better.

The book’s footnotes aren’t as specific as I would like, and sometimes leave me wondering whether he’s engaging in wild speculation or reporting careful research. His conjecture that “self-aware autistics are especially likely to be cosmopolitans in their thinking” sounds like something that results partly from the selection biases that come from knowing more autistics who like economics than autistics who hate economics. I wish he’d indicated whether he found a way to avoid that bias.

This review by Cosma Shalizi of James Flynn’s book What Is Intelligence? provides some interesting criticisms of Flynn (while agreeing with much of what Flynn says).

Shalizi’s most important argument is that Flynn and others who attach a good deal of importance to g haven’t made much of an argument that it measures a single phenomenon.

After a century of IQ testing, there is still no theory which says which questions belong on an intelligence test, just correlational analyses and tradition.

Flynn and others have good arguments that whatever g measures is important. But Shalizi leaves me with the impression that the only way to decide whether it’s a single phenomenon is to compare its usefulness to models which describe multiple flavors of intelligence. So far those attempts that I’ve looked at seem underwhelming. Maybe that means trying to break down intelligence into components which deserve separate measures isn’t fruitful, but it might also mean that the people who might do a good job of it have been scared away by the political controversies over IQ.

HT Kenny Easwaran.