Science and Technology

Book review: Beyond AI: Creating the Conscience of the Machine by J. Storrs Hall
The first two thirds of this book survey current knowledge of AI and make some guesses about when and how it will take off. This part is more eloquent than most books on similar subjects, and its somewhat unconventional perspective makes it worth reading if you are reading several books on the subject. But ease of reading is the only criterion by which this section stands out from competing books.
The last five chapters are surprisingly good, and should shame most professional philosophers, whose writings are a waste of time by comparison.
His chapter on consciousness, qualia, and related issues is more concise and persuasive than anything else I’ve read on these subjects. It’s unlikely to change the opinions of people who have already thought about these subjects, but it’s an excellent place for people who are unfamiliar with them to start.
His discussion of ethics in terms of game theory and evolutionary pressures is an excellent way to frame ethical questions.
My biggest disappointment was that he starts to recognize a possibly important risk of AI when he says “disparities among the abilities of AIs … could negate the evolutionary pressure to reciprocal altruism”, but then seems to dismiss that thoughtlessly (“The notion of one single AI taking off and obtaining hegemony over the whole world by its own efforts is ludicrous”).
He probably has semi-plausible grounds for dismissing some of the scenarios of this nature that have been proposed (e.g. the speed at which some people imagine an AI would take off is improbable). But if AIs with sufficiently general purpose intelligence enhance their intelligence at disparate rates for long enough, the results would render most of the book’s discussion of ethics irrelevant. The time it took humans to accumulate knowledge didn’t give Neanderthals much opportunity to adapt. Would the result have been different if Neanderthals had learned to trade with humans? The answer is not obvious, and probably depends on Neanderthal learning abilities in ways that I don’t know how to analyze.
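To make that worry concrete, here is a toy repeated-game sketch (my own illustration, not Hall’s; all payoffs and growth rates are arbitrary assumptions). Reciprocity stays stable only while the one-shot gain from defecting is below the discounted value of continued cooperation, and if the defection gain scales with the capability ratio between two agents, steady disparate growth eventually breaks cooperation:

```python
# Toy model (my own, not from the book): cooperation via reciprocity holds
# while the temptation to defect is smaller than the discounted value of
# continued cooperation. Here the stronger agent's temptation payoff grows
# with the capability ratio between the two agents.

def cooperation_stable(ratio, reward=3.0, punishment=1.0,
                       base_temptation=5.0, discount=0.9):
    """True while the stronger agent still prefers cooperating."""
    temptation = base_temptation * ratio   # defection gain grows with disparity
    future_value = discount / (1 - discount) * (reward - punishment)
    return temptation - reward <= future_value

ratio, periods = 1.0, 0
while cooperation_stable(ratio):
    ratio *= 1.2   # the leading AI improves 20% faster per period
    periods += 1
print(f"cooperation breaks down after {periods} periods "
      f"(capability ratio {ratio:.1f})")
```

With these particular numbers, cooperation collapses once the leading agent is about four times as capable; the point is only that any steady relative growth eventually crosses such a threshold.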
Also, his arguments for optimism aren’t quite as strong as he thinks. His point that career criminals are generally of low intelligence is reassuring if the number of criminals is all that matters. But the harm done by one relatively smart criminal can be very large (e.g. Mao), so the number of criminals isn’t all that matters.
Here’s a nice quote from Mencken, part of which the book cites:

Moral certainty is always a sign of cultural inferiority. The more uncivilized the man, the surer he is that he knows precisely what is right and what is wrong. All human progress, even in morals, has been the work of men who have doubted the current moral values, not of men who have whooped them up and tried to enforce them. The truly civilized man is always skeptical and tolerant, in this field as in all others. His culture is based on ‘I am not too sure.’

Another interesting tidbit is the anecdote that H.G. Wells predicted in 1907 that flying machines would be built. In spite of knowing a lot about attempts to build them, he wasn’t aware that the Wright brothers had succeeded in 1903.
If an AI started running in 2003 that has accumulated the knowledge of a 4-year old human and has the ability to continue learning at human or faster speeds, would we have noticed? Or would the reports we see about it sound too much like the reports of failed AIs for us to pay attention?

Book review: How to Survive a Robot Uprising: Tips on Defending Yourself Against the Coming Rebellion by Daniel H. Wilson
This book combines good analyses of recent robotics research with an understanding of movie scenarios about robot intentions (“how could millions of dollars of special effects lead us astray?”) to produce advice of unknown value about how humans might deal with any malicious robots of the next decade or two.
It focuses mainly on what an ordinary individual or small group can do to save themselves or postpone their demise, and says little about whether a major uprising can be prevented.
The book’s style is somewhat like the Daily Show’s, mixing a good deal of accurate reporting with occasional bits of obvious satire (“Robots have no emotions. Sensing your fear could make a robot jealous”), but it doesn’t quite attain the Daily Show’s entertainment value.
Its analyses of the weaknesses of current robot sensors and intelligence should make it required reading for any science fiction author or movie producer who wants to appear realistic (I haven’t been paying enough attention to those fields recently to know whether such people still exist). But it needs a bit of common sense to be used properly. It’s all too easy to imagine a gullible movie producer following its advice to have humans build a time machine and escape to the Cretaceous without pondering whether the robots will use similar time machines to follow them.

Book review: How Is Quantum Field Theory Possible? by Sunny Y. Auyang
This book contains some good ideas, but large parts of it are too hard for me to get anything out of, partly because it assumes the reader knows a good deal about quantum mechanics, and partly because of a style that requires rereading most passages multiple times to decipher even the parts which don’t depend on that knowledge.
I was impressed by her explanation of how we should understand the uncertainty of position and momentum measurements. She says the quantum entities have genuine deterministic properties, but we shouldn’t try to think of position and momentum as properties of any persistent entities. They are properties associated with specific measurements. The properties of persistent entities such as atoms are mostly stranger than what we can measure, and measurements only give us indirect evidence of those properties.
Her descriptions of coordinate systems used in quantum physics seem inconsistent with the impressions I got from Smolin’s Trouble with Physics. Smolin implies (but doesn’t clearly state) that quantum theory retains Newtonian background dependent coordinates. Auyang’s descriptions of quantum coordinate systems seem very different. It’s clear that I’ve only scratched the surface of what’s needed to understand these issues.

One way to find evidence concerning whether a politicized theory is being exaggerated or being stated overconfidently is to look at how experts from a very different worldview thought about the theory. I had been under the impression that theories about global warming were recent enough that it was hard to find people who studied it without being subject to biases connected with recent fads in environmental politics.
I now see that Arrhenius predicted in 1896 that human activity would cause global warming, and estimated a sensitivity of world temperature to CO2 levels that differs from current estimates by about a factor of 2. The uncertainty in current estimates is large enough that they disagree with Arrhenius by a surprisingly small amount. This increases my confidence in that part of global warming theory.
Arrhenius disagreed with modern theorists about how fast CO2 levels would rise (he thought it would take 3000 years for them to rise 50% or to double, depending on whether you believe Nature or Wikipedia), and about whether warming is good. That slightly weakens my confidence in forecasts of CO2 levels and of harm from warming (although as a Swede, Arrhenius might have overweighted the benefits of warming in arctic regions).
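For concreteness, here’s the comparison as a back-of-envelope calculation (my own sketch, using the standard approximation that warming scales with the logarithm of CO2 concentration; the two sensitivity values are round numbers near Arrhenius’s 1896 estimate and the modern central estimate):

```python
import math

# Back-of-envelope comparison using the standard approximation that
# warming scales with the logarithm of CO2 concentration:
#   delta_T = S * log2(C / C0), where S is the sensitivity per doubling.
# Both sensitivities are round numbers, not precise historical figures.

def warming(sensitivity_per_doubling, co2_ratio):
    return sensitivity_per_doubling * math.log2(co2_ratio)

arrhenius_S = 5.5  # deg C per doubling, near Arrhenius's 1896 estimate
modern_S = 3.0     # deg C per doubling, near the modern central estimate

for ratio in (1.5, 2.0):
    print(f"CO2 x{ratio}: Arrhenius {warming(arrhenius_S, ratio):.1f} C, "
          f"modern {warming(modern_S, ratio):.1f} C")
```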

Nick Bostrom has a good paper on Astronomical Waste: The Opportunity Cost of Delayed Technological Development, which argues that under most reasonable ethical systems that aren’t completely selfish or very parochial, our philanthropic activities ought to be devoted primarily toward preventing disasters that would cause the extinction of intelligent life.
Some people who haven’t thought about the Fermi Paradox carefully may overestimate the probability that most of the universe is already occupied by intelligent life. Very high estimates for that probability would invalidate Bostrom’s conclusion, but I haven’t found any plausible arguments that would justify that high a probability.
I don’t want to completely dismiss Malthusian objections that life in the distant future will be barely worth living, but the risk of a Malthusian future would need to be well above 50 percent to substantially alter the optimal focus of philanthropy, and the strongest Malthusian arguments that I can imagine leave much more uncertainty than that. (If I thought I could alter the probability of a Malthusian future, maybe I should devote effort to that. But I don’t currently know where to start).
Thus the conclusion seems like it ought to be too obvious to need repeating, but it’s far enough from our normal experiences that most of us tend to pay inadequate attention to it. So I’m mentioning it in order to remind people (including myself) of the need to devote more of our time to thinking about risks such as those associated with AI or asteroid impacts.
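A crude expected-value sketch may help show why Bostrom’s comparison comes out so lopsided (the magnitudes below are placeholders I picked for illustration, not Bostrom’s figures): even a tiny reduction in extinction risk swamps the gain from developing a century sooner.

```python
# Crude expected-value comparison in the spirit of Bostrom's argument.
# Both magnitudes are placeholders chosen for illustration, not his figures.

potential_lives = 1e35   # hypothetical total lives a colonized future supports
future_duration = 1e10   # hypothetical lifespan of that future, in years

# Developing a century sooner adds roughly one extra century of that future:
speedup_gain = potential_lives * (100 / future_duration)

# Shaving one millionth off the probability of extinction gains, in expectation:
risk_reduction_gain = 1e-6 * potential_lives

print(f"century of speedup:  {speedup_gain:.1e} expected lives")
print(f"1e-6 risk reduction: {risk_reduction_gain:.1e} expected lives")
# With numbers anywhere near these, risk reduction dominates the speedup,
# and keeps dominating under wide variation in the placeholders.
```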

Book review: Why Not?: How to Use Everyday Ingenuity to Solve Problems Big And Small by Barry Nalebuff and Ian Ayres.
This is a very entertaining and somewhat thought-provoking book. I’m uncertain whether it had much effect on my creativity. It certainly demonstrates the authors’ creativity, and gives some insights into how their creative thought processes work. But it’s probably more valuable as a collection of interesting ideas than it is as a recipe for creativity.
While they focus more on presenting interesting ideas than on evaluating how well they would work, they do a decent job of anticipating problems and understanding the relevant incentives.
Possibly the most important idea is mandating anonymity of political campaign contributions (see also the book Voting with Dollars) as an alternative way of ensuring that it’s hard for contributions to influence politicians’ votes, with plausible suggestions about how to ensure that it’s hard for donors to evade the anonymity rule.
Their examples often leave me wondering why the ideas they describe are so little known (e.g. the anonymity requirement has been tried in 10 states for judicial elections – why hasn’t that been reported widely?).
Another interesting idea comes from tests of black boxes in cars (similar to those in planes), which caused drivers to drive much more safely (20 to 66 percent declines in accident rates – “Fear of getting caught may be a more powerful motivator than fear of getting killed”).
I am disappointed that it doesn’t have an index.

One obstacle to replacing proprietary peer-reviewed journals with open alternatives is the difficulty of getting good peer review.
The approach of having authors pay publishers to arrange the peer review will probably have some success, but appears to be a recipe for a migration to open publishing that is much slower than optimal, due to the incentives it provides for authors to stick with proprietary journals.
More radical alternatives usually raise doubts about whether their quality will rival traditional peer review, due to lack of incentives for someone to ensure that the peer review is done by disinterested peers.
My idea is to have a system where anyone can review papers that have been registered within the system. The reviews would be made public, without identifying the reviewer.
The system would reward reviewers with a reputation. A reviewer’s score would increase if a paper they reviewed positively is widely cited, or if a paper they reviewed negatively is retracted (the latter by a larger amount, to offset the lower frequency of that outcome).
It ought to be possible to convince universities to give this score some weight in tenure decisions, and if so that would ensure an abundant supply of reviewers who are at least as objective as under the current system.
The simplest implementation of this would impair the anonymity of reviewers by enabling people to connect changes in scores with the timing of citations and retractions. That could probably be dealt with by adding a random delay before a score is recalculated.
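Here is a minimal sketch of the scoring mechanism I have in mind (the credit weights and the delay window are placeholder choices, not a worked-out design):

```python
import random

# Minimal sketch of the proposed reviewer-reputation system.
# The weights and delay range are placeholders, not a worked-out design.

CITATION_CREDIT = 1.0     # positive review, paper later widely cited
RETRACTION_CREDIT = 10.0  # negative review, paper later retracted
                          # (larger credit, since retractions are rarer)

class ReviewRegistry:
    def __init__(self):
        self.reviews = []   # (reviewer_id, paper_id, is_positive)
        self.scores = {}    # reviewer_id -> published reputation score
        self.pending = []   # (release_time, reviewer_id, delta)

    def submit_review(self, reviewer_id, paper_id, is_positive):
        # The review text would be public; reviewer_id stays internal.
        self.reviews.append((reviewer_id, paper_id, is_positive))

    def record_outcome(self, paper_id, now, widely_cited=False, retracted=False):
        for reviewer_id, pid, is_positive in self.reviews:
            if pid != paper_id:
                continue
            if widely_cited and is_positive:
                delta = CITATION_CREDIT
            elif retracted and not is_positive:
                delta = RETRACTION_CREDIT
            else:
                continue
            # Random delay before the public score changes, so outsiders
            # can't link a score change to a specific citation or retraction.
            release = now + random.uniform(30, 180)  # days
            self.pending.append((release, reviewer_id, delta))

    def publish_scores(self, now):
        still_pending = []
        for release, reviewer_id, delta in self.pending:
            if release <= now:
                self.scores[reviewer_id] = self.scores.get(reviewer_id, 0.0) + delta
            else:
                still_pending.append((release, reviewer_id, delta))
        self.pending = still_pending
```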

Molecular nanotechnology is likely to be heavily regulated when it first reaches the stage where it can make a wide variety of products without requiring unusual expertise and laboratories. The main justification for the regulation will be the risk of dangerous products (e.g. weapons). That justification will provide a cover for people who get money from existing manufacturing techniques to use the regulation to prevent typical manufacturing from becoming as cheap as software.
One way to minimize the harm of this special-interest regulation would be to create an industry now that will have incentives to lobby in favor of making most benefits of cheap manufacturing available to the public. I have in mind a variation on a company like Kinko’s that uses ideas from the book Fab and the rapid prototyping industry to provide general-purpose 3-D copying and printing services in stores that could be as widespread as photocopying/printing stores. It would then be a modest, natural, and not overly scary step for these stores to start using molecular assemblers to perform services similar to what they’re already doing.
The custom fabrication services of TAP Plastics sound like they might be a small step in this direction.
One example of a potentially lucrative service that such a store could provide in the not-too-distant future would be cheap custom-fit footwear. Trying to fit a nonstandard foot into one of the small number of standard shoes/boots that a store stocks can be time-consuming and doesn’t always produce satisfying results. Why not replace that process with one that does a 3-D scan of each foot and prints out footwear that fits that specific shape (or at least a liner that customizes the inside of a standard shoe/boot)? Once that process is done for a large volume of footwear, the costs should drop below those of existing footwear, due to reduced inventory costs and reduced time for salespeople to search the inventory multiple times per customer.
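As a rough illustration of where the savings might come from, here’s a hypothetical per-pair cost comparison (every number below is invented purely for illustration):

```python
# Hypothetical per-pair cost comparison; every number below is invented
# purely to illustrate where scan-and-print savings might come from.

conventional = {
    "manufacture": 20.0,
    "inventory_carrying": 8.0,  # stocking many sizes and widths per style
    "salesperson_time": 6.0,    # several trips to the stockroom per customer
}

scan_and_print = {
    "manufacture": 24.0,        # assume printing costs more per unit at first
    "foot_scan": 2.0,
    "inventory_carrying": 1.0,  # raw material only, no finished stock
    "salesperson_time": 2.0,    # one scan, no fitting iterations
}

for name, costs in (("conventional", conventional),
                    ("scan-and-print", scan_and_print)):
    print(f"{name}: ${sum(costs.values()):.2f} per pair")
```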

Book review: The Trouble With Physics: The Rise of String Theory, the Fall of a Science, and What Comes Next by Lee Smolin
This book makes a plausible argument that string theorists are following a fad that has little scientific promise. But much of the book leaves me with the impression that the disputes he’s describing can only be fully understood by people who devote years to studying the math, and that the book has necessarily simplified things for laymen in ways that leave out many important insights.
The argument I found most impressive was his claim that background independence is important enough to relativity that any theory which unites relativity with quantum mechanics will need to preserve it (a property which string theorists don’t pursue). Still, this seems to be little more than an intuition, and until someone creates the revolutionary theory that unites relativity and quantum mechanics, there ought to be plenty of doubt about which approach is best.
His sociological analysis of the problems with physics is less impressive. His endorsement of Feyerabend’s belief that there’s no such thing as a scientific method seems implausible (although it seems plausible for some stages of scientific thought, such as decisions about what questions to ask; maybe I ought to read Feyerabend’s writings on this subject).
I’m unimpressed by his lengthy gripes about the large fraction of funding that goes to routine science rather than revolutionary science. He implies this is making revolutionary science harder than it used to be, but I still see signs that a revolutionary scientist today would follow a path similar to Einstein’s and encounter no greater obstacles.
He wonders why those who fund scientific research don’t fund some research the way the best venture capitalists do – taking risks of 90% of their choices failing in order to get a few really big successes. He seems to think risk aversion is the main reason. What I see missing from his analysis is the absence of large rewards to the funder who picks the next Einstein. I think that to get VC-like attitudes in funding agencies, we would need systems where part of the money and prestige of a Nobel prize went to a few people who made the key decisions to fund the prize-winning research. I expect it would be hard to alter existing institutions to replace committee-based funding decisions with the kind of individual authority needed for these incentives to work.
His proposal to avoid having one unproven paradigm such as string theory dominate the funding in its area by limiting the funding to any one research program to one third of the total seems naive. The most direct effects of such a rule would be that researchers get around the rule by redefining the relevant categories (e.g. claiming that string theory research is diverse enough to qualify as several independent programs, or altering whatever category is used to define the total funding).
He wants academics who have authority to influence hiring decisions to have the kind of training in avoiding prejudice and promoting diversity that their commercial equivalents get. I suspect he is way too optimistic about what that training accomplishes – my impression is that it’s designed mostly to minimize the risk of lawsuits, and does more to hide biases than it does to prevent them.

I had thought that Rothemund’s DNA origami was enough to make this an unusually good year for advances in molecular nanotechnology, but now there are more advances that look like they may be comparably important.
Ned Seeman’s lab has inserted robotic arms into specific locations in DNA arrays (more here) which ought to be able to become independently controllable (they haven’t yet demonstrated independently controlled arms, but they appear to have done the hardest steps toward that result).
Erik Winfree’s lab has built logic gates out of DNA.
Brian Wang has more info about both reports.
And finally, a recent article in Nature alerted me to a not-so-new discovery of a DNA variant called xDNA, containing an extra benzene ring in one base of each base pair. This provides slightly different shapes that could be added to DNA-based machines, with most of the advantages that DNA has (but presumably not low costs of synthesis).