Molecular Assemblers (Advanced Nanotech)

Paul W.K. Rothemund’s cover article on DNA origami in the March 16 issue of Nature appears to represent an order-of-magnitude increase in the complexity of objects that can self-assemble to roughly atomic precision. (Whether it’s really atomic precision depends partly on what you’re using it for: every atom sits in a predictable bond connecting it to its neighbors, but there’s enough flexibility in the system that the distances between distant atoms generally aren’t what would be considered atomically precise.)
It was interesting watching the delayed reaction in the stock price of Nanoscience Technologies Inc. (symbol NANS), which holds possibly relevant patents. Even though I’m a NANS stockholder, have been following the work in the field carefully, and was masochistic enough to read important parts of the relevant patents produced by Ned Seeman several years ago, I have little confidence in my ability to determine whether the Seeman patents cover Rothemund’s design. (If the patents were worded as broadly as many aggressive patents are these days, the answer would probably be yes, but they’re worded fairly responsibly to cover Seeman’s inventions fairly specifically. It’s clear that Seeman’s inventions at least had an important influence on Rothemund’s design.)
It’s pretty rare for a stock price to take days to start reacting to news, but this was an unusual case. Someone reading the Nature article would think the probability that the technique was covered by patents owned by a publicly traded company was too small to justify a nontrivial search. Hardly anyone was following the company (which I think is a one-person company). I put in bids on the 20th and 21st for some of the stock at prices that were cautious enough not to signal that I was reacting to potentially important news, and picked up a modest number of shares from people who seemed not to know the news or thought it irrelevant. Then late on the 21st some heavy buying started. Now it looks like there’s massive uncertainty about what the news means.

Book Review: The Singularity Is Near: When Humans Transcend Biology by Ray Kurzweil
Kurzweil does a good job of arguing that extrapolating trends such as Moore’s Law works better than most alternative forecasting methods, and he does a good job of describing the implications of those trends. But he is a bit long-winded, and tries to hedge his methodology by pointing to specific research results which he seems to think buttress his conclusions. He convinces me neither that he is good at distinguishing hype from value when analyzing current projects, nor that doing so would help with the longer-term forecasting that constitutes the important aspect of the book.
Given the title, I was slightly surprised that he predicts that AIs will become powerful slightly more gradually than I recall him suggesting previously (which is a good deal more gradual than most Singularitarians). He offsets this by predicting more dramatic changes in the 22nd century than I imagined could be extrapolated from existing trends.
His discussion of the practical importance of reversible computing is clearer than anything else I’ve read on this subject.
When he gets specific, large parts of what he says seem almost right, but there are quite a few details that are misleading enough that I want to quibble with them.
For instance (on page 244, talking about the world circa 2030): “The bulk of the additional energy needed is likely to come from new nanoscale solar, wind, and geothermal technologies.” Yet he says little to justify this, and most of what I know suggests that wind and geothermal have little hope of satisfying more than 1 or 2 percent of new energy demand.
His reference on page 55 to “the devastating effect that illegal file sharing has had on the music-recording industry” seems to say something undesirable about his perspective.
His comments on economists’ thoughts about deflation are confused and irrelevant.
On page 92 he says “Is the problem that we are not running the evolutionary algorithms long enough? … This won’t work, however, because conventional genetic algorithms reach an asymptote in their level of performance, so running them for a longer period of time won’t help.” If “conventional” excludes genetic programming, then maybe his claim is plausible. But genetic programming originator John Koza claims his results keep improving when he uses more computing power.
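To make the distinction concrete, here is a minimal sketch of a “conventional” genetic algorithm, using my own toy example rather than anything from the book: a fixed-length bit-string population evolved by selection, crossover, and mutation. On a toy problem like this, the best fitness typically climbs quickly and then flattens once the population converges, which is the kind of asymptote Kurzweil describes; Koza’s genetic programming evolves variable-size program trees instead, which may be part of why he reports continued gains from more computing power.
    # Minimal "conventional" genetic algorithm on a toy bit-string problem
    # (an illustrative sketch; the names and parameters are my own choices).
    import random

    GENES = 64              # length of each bit-string individual
    POP_SIZE = 50
    MUTATION_RATE = 1.0 / GENES
    GENERATIONS = 200

    def fitness(ind):
        # Toy objective: count the 1-bits (the classic "one-max" problem).
        return sum(ind)

    def tournament(pop, k=3):
        # Select the fittest of k randomly chosen individuals.
        return max(random.sample(pop, k), key=fitness)

    def crossover(a, b):
        point = random.randint(1, GENES - 1)
        return a[:point] + b[point:]

    def mutate(ind):
        return [bit ^ 1 if random.random() < MUTATION_RATE else bit for bit in ind]

    pop = [[random.randint(0, 1) for _ in range(GENES)] for _ in range(POP_SIZE)]
    for gen in range(GENERATIONS):
        pop = [mutate(crossover(tournament(pop), tournament(pop)))
               for _ in range(POP_SIZE)]
        if gen % 20 == 0:
            # The best score rises fast early on, then plateaus near the optimum.
            print(gen, max(fitness(ind) for ind in pop))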
His description of nanotech progress seems naive. (page 228): “Drexler’s dissertation … laid out the foundation and provided the road map still being followed today.” (page 234): “each aspect of Drexler’s conceptual designs has been validated”. I’ve been following this area pretty carefully, and I’m aware of some computer simulations which do a tiny fraction of what is needed, but if any lab research is being done that could be considered to follow Drexler’s road map, it’s a well-kept secret. Kurzweil then offsets his lack of documentation for those claims by going overboard in documenting his accurate claim that “no serious flaw in Drexler’s nanoassembler concept has been described”.
Kurzweil argues that self-replicating nanobots will sometimes be desirable. I find this poorly thought out. His reasons for wanting them could be satisfied by nanobots that replicate under the control of a responsible AI.
I’m bothered by his complacent attitude toward the risks of AI. He sometimes hints that he is concerned, but his suggestions for dealing with the risks don’t indicate that he has given much thought to the subject. He has a footnote that mentions Yudkowsky’s Guidelines on Friendly AI. The context could lead readers to think they are comparable to the Foresight Guidelines on Molecular Nanotechnology. Alas, Yudkowsky’s guidelines depend on concepts which are hard enough to understand that few researchers are likely to comprehend them, and the few who have tried disagree about their importance.
Kurzweil’s thoughts on the risks that the simulation we may live in will be turned off are somewhat interesting, but less thoughtful than Robin Hanson’s essay on How To Live In A Simulation.
A couple of nice quotes from the book:
(page 210): “It’s mostly in your genes” is only true if you take the usual passive attitude toward health and aging.
(page 301): Sex has largely been separated from its biological function. … So why don’t we provide the same for … another activity that also provides both social intimacy and sensual pleasure – namely, eating?

Book Review: Nanofuture: What’s Next For Nanotechnology by J. Storrs Hall
This book provides some rather well-informed insights into what molecular engineering will be able to do in a few decades. It isn’t as thoughtful as Drexler’s Engines of Creation, but it has many ideas that seem new to this reader, who has been reading similar essays for many years, such as a solar energy collector that looks and feels like grass.
The book is somewhat eccentric in its choice of what to emphasize, devoting three pages to the history of the steam engine, but describing the efficiency of nanotech batteries in a footnote that is a bit too cryptic to be convincing.
The chapter on economics is better than I expected, but I’m still not satisfied. The prediction that interest rates will be much higher sounds correct for the period in which we transition to widespread use of general purpose assemblers, since investing capital in producing more machines will be very productive. But once the technology is widespread and mature, the value of additional manufacturing will decline rapidly to the point where it ceases to put upward pressure on interest rates.
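As a purely illustrative piece of arithmetic (my assumptions, not Hall’s), the transition-period logic looks roughly like this:
    # Illustrative only: assume an assembler can build a copy of itself in a week.
    copy_time_days = 7
    doublings_per_year = 365 / copy_time_days
    print(f"Physical capital doubles roughly {doublings_per_year:.0f} times per year")
    # While that return on capital is available, competition for funds should bid
    # market interest rates far above today's levels.  Once assemblers are
    # ubiquitous, an extra one is worth little more than its feedstock, the return
    # on adding capacity collapses, and the upward pressure on rates goes away.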
The chapter on AI is disappointing, implying that the main risks of AI are to the human ego. For some better clues about the risks of AI, see Yudkowsky’s essay on Creating Friendly AI.

Book Review: FAB: The Coming Revolution on Your Desktop–From Personal Computers to Personal Fabrication by Neil Gershenfeld
This book brings welcome attention to the neglected field of personal, general-purpose manufacturing. Gershenfeld argues that the technology is at roughly the stage computing was at when minicomputers were the leading edge, that it is good enough to tell us something about how full-fledged assemblers as envisioned by Drexler will be used, and that the main obstacle to people using it to build what they want is ignorance of what can be accomplished.
The book presents interesting examples of people building things that most would assume were beyond their ability. But he does not do a good job of explaining what can and can’t be accomplished. Too much of the book sounds like a fund-raising appeal for a charity, describing a needy person who was helped rather than focusing on the technology or design process. He is rather thoughtless about choosing what technical details to provide, giving examples of assembly language (something widely known, and hard enough to use that most of his target users will be deterred from making designs which need it), but when he describes novel ideas such as “printing” a kit that can be assembled into a house he is too cryptic for me to guess whether that method would improve on standard methods.
I’ve tried thinking of things I might want to build, and I’m usually no closer to guessing whether it’s feasible than before I read the book. For example, it would be nice if I could make a prototype of a seastead several feet in diameter, but none of the examples the book gives appear to involve methods which could make sturdy cylinders or hemispheres that large.
The index leaves much to be desired – minicomputers are indexed under computers, and open source is indexed under software, when I expected to find them under m and o.
And despite the lip service he pays to open source software, the CAM software he wrote comes with a vague license that doesn’t meet the standard definition of open source.

(Catching up on month-old news…)

The most important technical news from the Foresight Conference on Advanced Nanotechnology was the presentation by Christian Schafmeister, who is working on building molecules with a wide variety of shapes out of bis-amino acids. He is able to build protein-like molecules that are rigid, and whose shape is easy to predict from the sequence. If there are no hidden catches, this may be an innovation as valuable (for the purposes of creating new objects to atomic precision) as solving the protein folding problem. The biggest drawback that he mentioned was the time it takes to synthesize a medium-sized molecule (up to a week), but he says that could be automated.

I’m unsure whether there was anything important in the other talks about nanotech research. Ned Seeman mentioned something about a ribosome-like device – I suppose that might be something important and new that he has done, but he didn’t say enough about it for me to tell.

Rob Freitas made some vaguely impressive claims about the feasibility of building a diamondoid assembler using the tools available today, but he went through some critical issues, such as error rates in placing individual atoms where we want them, too quickly for me to evaluate the plausibility of his answers. I’ll try to read the papers he has on his web site real soon now to see if he presents those arguments more convincingly there.
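For a rough sense of why the error-rate question mattered to me, here is a back-of-the-envelope calculation with made-up numbers (mine, not Freitas’s): if each mechanosynthetic placement fails with probability p, the chance of building an N-atom part with no misplaced atoms is (1 - p)^N, which collapses quickly unless p is extremely small or errors can be detected and repaired.
    # Illustrative yields for error-free parts, assuming independent placements.
    for error_rate in (1e-6, 1e-9, 1e-12):
        for atoms in (10**6, 10**9):
            yield_ok = (1 - error_rate) ** atoms
            print(f"p={error_rate:.0e}, N={atoms:.0e}: error-free yield ~ {yield_ok:.3f}")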

The best sound bite from the Foresight Conference on Advanced Nanotechnology was Chris Phoenix’s description of how mature versions of nanotechnology will deal with most forms of pollution:

No Atom Left Behind

He was responsible enough to point out one form of pollution that can’t be solved that way: waste heat. A scarcity of heat pollution credits is likely to be an important feature of the nanotech economy.