Molecular Assemblers (Advanced Nanotech)

Two months ago I attended Eric Drexler’s launch of MSEP.one. It’s open source software, written by people with professional game design experience, intended to catalyze better designs for atomically precise manufacturing (or generative nanotechnology, as he now calls it).

Drexler wants to draw more attention to the benefits of nanotech, which involve large enough exponents that our intuition boggles at handling them. That includes permanent health (Drexler’s new framing of life extension and cures for aging).

He hopes that a decentralized network of users will create a rich library of open-source components that might be used to build a nanotech factory. With enough effort, it could then become possible to design a complete enough factory that critics would have to shift from their current practice of claiming nanotech is impossible, to arguing with expert chemists over how well it would work.


In 1986, Drexler predicted (in Engines of Creation) that we’d have molecular assemblers in 30 years. Roughly speaking, they would act as fast, atomically precise 3-D printers. That was the standard meaning of nanotech for the next decade, until more mainstream authorities co-opted the term.

What went wrong with that forecast?

In my review of Where Is My Flying Car? I wrote:

Josh describes the mainstream reaction to nanotech fairly well, but that’s not the whole story. Why didn’t the military fund nanotech? Nanotech would likely exist today if we had credible fears of Al Qaeda researching it in 2001.

I recently changed my mind about that last sentence, partly because of what I recently read about the Manhattan Project, and partly due to the world’s response to COVID.


Book review: Where Is My Flying Car? A Memoir of Future Past, by J. Storrs Hall (aka Josh).

If you only read the first 3 chapters, you might imagine that this is the history of just one industry (or the mysterious lack of an industry).

But this book attributes the absence of that industry to a broad set of problems that are keeping us poor. He looks at the post-1970 slowdown in innovation that Cowen describes in The Great Stagnation[1]. The two books agree on many symptoms, but describe the causes differently: where Cowen says we ate the low-hanging fruit, Josh says it’s due to someone “spraying paraquat on the low-hanging fruit”.

The book is full of mostly good insights. It significantly changed my opinion of the Great Stagnation.

The book jumps back and forth between polemics about the Great Strangulation (with a bit too much outrage porn), and nerdy descriptions of engineering and piloting problems. I found those large shifts in tone to be somewhat disorienting – it’s like the author can’t decide whether he’s an autistic youth who is eagerly describing his latest obsession, or an angry old man complaining about how the world is going to hell (I’ve met the author at Foresight conferences, and got similar but milder impressions there).

Josh’s main explanation for the Great Strangulation is the rise of Green fundamentalism[2], but he also describes other cultural/political factors that seem related. Before looking at those, I’ll look in some depth at three industries that exemplify the Great Strangulation.


Book review: Radical Abundance: How a Revolution in Nanotechnology Will Change Civilization, by K. Eric Drexler.

Radical Abundance is more cautious than his prior books, and targeted at a very nontechnical audience. It accurately describes many likely ways in which technology will create orders of magnitude more material wealth.

Much of it repackages old ideas, and it focuses too much on the history of nanotechnology.

He defines the subject of the book to be atomically precise manufacturing (APM), and doesn’t consider nanobots to have much relevance to it.

One new idea that I liked is that rare elements will become unimportant to manufacturing. In particular, solar energy collectors can be made entirely out of relatively common elements (unlike current photovoltaics). Alas, he doesn’t provide enough detail for me to figure out how confident I should be about that.

He predicts that progress toward APM will accelerate someday, but doesn’t provide convincing arguments. I don’t recall him pointing out the likelihood that investment in APM companies will increase dramatically when VCs realize that a few years of effort will produce commercial products.

He doesn’t do a good job of documenting his claims about how far APM has advanced. I’m pretty sure that the million-atom DNA scaffolds he mentions have as much programmable complexity as he hints, but if I relied only on this book to analyze that, I’d suspect that those structures were simpler and filled with redundancy.

He wants us to believe that APM will largely eliminate pollution, and that waste heat will “have little adverse impact”. I’m disappointed that he doesn’t quantify the global impact of increasing waste heat. Why does he seem to disagree with Rob Freitas about this?

Rob Freitas has a good report analyzing how to use molecular nanotechnology to return atmospheric CO2 levels to pre-industrial levels by about 2060 or 2070.

My only complaint is that his attempt to estimate the equivalent of Moore’s Law for photovoltaics looks too optimistic, as it puts too much weight on the 2006-2008 trend, which was influenced by an abnormal rise in energy prices. If the y-axis on that graph were logarithmic instead of linear, it would be easier to visualize the lower long-term trend.
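
To illustrate the point about axis choice (with made-up numbers, not the data from Freitas’ graph): steady exponential growth appears as a straight line on a semi-log plot, so a short-lived 2006-2008-style bump stands out as a deviation from the long-term trend, whereas on a linear axis the recent bump dominates the picture and looks like the trend.

```python
# Illustrative sketch only: synthetic photovoltaic-capacity numbers, not Freitas' data.
import numpy as np
import matplotlib.pyplot as plt

years = np.arange(1990, 2009)
trend = 100 * 1.25 ** (years - 1990)        # assumed steady 25%/year long-term growth
bump = np.where(years >= 2006, 1.6, 1.0)    # assumed abnormal 2006-2008 jump
capacity = trend * bump

fig, (ax_lin, ax_log) = plt.subplots(1, 2, figsize=(10, 4))
ax_lin.plot(years, capacity)
ax_lin.set_title("Linear y-axis: recent bump dominates")
ax_log.semilogy(years, capacity)
ax_log.set_title("Log y-axis: long-term trend is a straight line")
for ax in (ax_lin, ax_log):
    ax.set_xlabel("year")
    ax.set_ylabel("capacity (arbitrary units)")
plt.tight_layout()
plt.show()
```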

(HT Brian Wang).

The Global Catastrophic Risks conference last Friday was a mix of good and bad talks.
By far the most provocative was Josh’s talk about “the Weather Machine”. This would consist of small (under 1 cm) balloons made of material a few atoms thick (i.e. requiring nanotechnology that won’t be available for a couple of decades), filled with hydrogen, each with a mirror in its equatorial plane. They would have enough communications and orientation control to be individually pointed wherever the entity in charge of them wants. They would float 20 miles above the earth’s surface and form a nearly continuous layer surrounding the planet.
This machine would have a few orders of magnitude more power over atmospheric temperatures than would be needed to compensate for the warming caused by greenhouse gases this century, although it would be only a partial solution to the waste heat farther in the future that Freitas worries about in his discussion of the global hypsithermal limit.
The military implications make me hope it won’t be possible to make it as powerful as Josh claims. If 10 percent of the mirrors target one location, it would be difficult for anyone in the target area to survive. I suspect defensive mirrors would be of some use, but there would still be serious heating of the atmosphere near the mirrors. Josh claims that it could be designed with a dead-man switch that would cause a snowball-earth effect if the entity in charge were destroyed, but it’s not obvious why the balloons couldn’t be destroyed in that scenario. Later in the weekend, Chris Hibbert raised concerns about how secure it would be against unauthorized people hacking into it, and I wasn’t reassured by Josh’s answer.
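
A rough back-of-envelope check on that claim (my numbers, not from the talk, aside from the 10 percent figure): the solar constant and Earth’s radius are standard values, the 100 km × 100 km target area is my arbitrary choice, and the geometry of which mirrors can actually see the target is ignored, so treat the result as an upper bound.

```python
# Back-of-envelope sketch; only the 10% figure comes from the talk, the rest are my assumptions.
import math

SOLAR_CONSTANT = 1361.0   # W/m^2 at the top of the atmosphere
EARTH_RADIUS = 6.371e6    # m

# Total sunlight intercepted by Earth's cross-section, which the balloon layer roughly shares.
intercepted = SOLAR_CONSTANT * math.pi * EARTH_RADIUS ** 2   # ~1.7e17 W

# Suppose 10% of the mirrors redirect their share toward one target.
# This ignores geometry (mirrors over the horizon or on the night side
# can't reach the target), so it overstates the achievable power.
redirected = 0.10 * intercepted

# Assume a 100 km x 100 km target area (my choice, roughly a large metropolitan region).
target_area = 100e3 * 100e3   # m^2

flux = redirected / target_area
print(f"redirected power: {redirected:.2e} W")
print(f"flux on target:   {flux:.2e} W/m^2 (~{flux / SOLAR_CONSTANT:.0f}x normal sunlight)")
```

Even if geometric constraints cut this by an order of magnitude or two, the target area would still receive many times normal sunlight, which is consistent with Josh’s claim.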

James Hughes gave a talk advocating world government. I was disappointed with his inability to imagine that such a government would result in power becoming too centralized. Nick Bostrom’s discussions of this subject are much more thoughtful.

Alan Goldstein gave a talk about the A-Prize and about defining a concept called the carbon barrier to distinguish biological from non-biological life. Josh pointed out that, as stated, all life fits Goldstein’s definition of biological (since any information can be encoded in DNA). Goldstein modified his definition to avoid that, and then other people mentioned reports such as this which imply that humans don’t fall within Goldstein’s definition of biological, due to inheritance of information through means other than DNA. Goldstein seemed unable to understand that objection.

Book review: Global Catastrophic Risks by Nick Bostrom and Milan Cirkovic.
This is a relatively comprehensive collection of thoughtful essays about the risks of a major catastrophe (mainly those that would kill a billion or more people).
Probably the most important chapter is the one on risks associated with AI, since few people attempting to create an AI seem to understand the possibilities it describes. It makes some implausible claims about the speed with which an AI could take over the world, but the argument they are used to support only requires that a first-mover advantage be important, and that is only weakly dependent on assumptions about the speed with which AI will improve.
The risk of a large fraction of humanity being killed by a super-volcano is apparently higher than the risk from asteroids, but volcanoes have more of a limit on their maximum size, so they appear to pose less risk of human extinction.
The risks of asteroids and comets can’t be handled as well as I thought by early detection, because some dark comets can’t be detected with current technology until it’s way too late. It seems we ought to start thinking about better detection systems, which would probably require large improvements in the cost-effectiveness of space-based telescopes or other sensors.
Many of the volcano and asteroid deaths would be due to crop failures from cold weather. Since mid-ocean temperatures are more stable than land temperatures, ocean-based aquaculture would help mitigate this risk.
The climate change chapter seems much more objective and credible than what I’ve previously read on the subject, but is technical enough that it won’t be widely read, and it won’t satisfy anyone who is looking for arguments to justify their favorite policy. The best part is a list of possible instabilities which appear unlikely but which aren’t understood well enough to evaluate with any confidence.
The chapter on plagues mentions one surprising risk – better sanitation made polio more dangerous by altering the age at which it infected people. If I’d written the chapter, I’d have mentioned Ewald’s analysis of how human behavior influences the evolution of strains which are more or less virulent.
There’s good news about nuclear proliferation which has been under-reported – a fair number of countries have abandoned nuclear weapons programs, and a few have given up nuclear weapons. So if there’s any trend, it’s toward fewer countries trying to build them, and a stable number of countries possessing them. The bad news is we don’t know whether nanotechnology will change that by drastically reducing the effort needed to build them.
The chapter on totalitarianism discusses some uncomfortable tradeoffs between the benefits of some sort of world government and the harm that such government might cause. One interesting claim:

totalitarian regimes are less likely to foresee disasters, but are in some ways better-equipped to deal with disasters that they take seriously.

Molecular nanotechnology is likely to be heavily regulated when it first reaches the stage where it can make a wide variety of products without requiring unusual expertise and laboratories. The main justification for the regulation will be the risk of dangerous products (e.g. weapons). That justification will provide a cover for people who get money from existing manufacturing techniques to use the regulation to prevent typical manufacturing from becoming as cheap as software.
One way to minimize the harm from such special-interest regulation would be to create an industry now that will have incentives to lobby in favor of making most benefits of cheap manufacturing available to the public. I have in mind a variation on a company like Kinko’s that uses ideas from the book Fab and the rapid prototyping industry to provide general-purpose 3-D copying and printing services in stores that could be as widespread as photocopying/printing stores. It would then be a modest, natural, and not overly scary step for these stores to start using molecular assemblers to perform services similar to what they’re already doing.
The custom fabrication services of TAP Plastics sound like they might be a small step in this direction.
One example of a potentially lucrative service that such a store could provide in the not-too-distant future would be cheap custom-fit footwear. Trying to fit a nonstandard foot into one of a small number of standard shoes/boots that a store stocks can be time-consuming and doesn’t always produce satisfying results. Why not replace that process with one that does a 3-D scan of each foot and prints out footwear that fits that specific shape (or at least a liner that customizes the inside of a standard shoe/boot)? Once that process is done for a large volume of footwear, the costs should drop below those of existing footwear, due to reduced inventory costs and reduced time for salespeople to search the inventory multiple times per customer.

I had thought that Rothemund’s DNA origami was enough to make this an unusually good year for advances in molecular nanotechnology, but now there are more advances that look like they may be as important.
Ned Seeman’s lab has inserted robotic arms into specific locations in DNA arrays (more here) which look like they ought to be able to become independently controllable (they haven’t yet produced independently controlled arms, but it looks like they’ve done the hardest steps to get to that result).
Erik Winfree’s lab has built logic gates out of DNA.
Brian Wang has more info about both reports.
And finally, a recent article in Nature alerted me to a not-so-new discovery of a DNA variant called xDNA, containing an extra benzene ring in one base of each base pair. This provides slightly different shapes that could be added to DNA-based machines, with most of the advantages that DNA has (but presumably not low costs of synthesis).

I went to an interesting talk Wednesday by the CTO of D-Wave. He indicated that their quantum computing hardware is working well enough that their biggest problems are understanding how to use it and explaining that to potential customers.
This implies that they are much further along than sources unconnected with D-Wave suggest is plausible. D-Wave is being sufficiently secretive that I can’t put too much confidence in what they imply, but the degree of secrecy doesn’t seem unusual, and I don’t see any major reasons to doubt them other than the fact that they’re way ahead of what I gather many experts in the field think is possible. Steve Jurvetson’s investment in D-Wave several years ago is grounds for taking them fairly seriously.
The implications if this is real are concentrated in a few special applications (quantum computing sounds even more special-purpose than I had previously realized), but for molecular modelling (and fields that depend on it such as drug discovery) it means some really important changes. Modelling that previously required enormous amounts of CPU power and expertise to produce imperfect approximations will apparently now require little more than the time and expertise needed to program a quantum computer (plus whatever exorbitant fees D-Wave charges).
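
For a sense of what programming a D-Wave-style machine involves: their hardware is an annealer, so the programmer’s main job is to cast a problem as minimizing a quadratic function of binary variables (a QUBO, or equivalently an Ising model). The sketch below is only a toy illustration of that problem format with a made-up 3-variable instance, solved by brute force; it doesn’t use any D-Wave software, and mapping a real molecular-modelling problem onto this form would be most of the work.

```python
# Toy illustration of the QUBO format that annealing hardware targets.
# The coefficients in Q are made up for illustration.
from itertools import product

# QUBO: minimize the sum over (i, j) of Q[(i, j)] * x_i * x_j, with each x_i in {0, 1}.
Q = {
    (0, 0): -1.0, (1, 1): -1.0, (2, 2): -1.0,   # linear terms (diagonal entries)
    (0, 1): 2.0, (1, 2): 2.0,                   # pairwise couplings
}

def energy(x):
    """Objective value of a candidate assignment x (a tuple of 0s and 1s)."""
    return sum(coeff * x[i] * x[j] for (i, j), coeff in Q.items())

# Brute force over all 2^3 assignments; an annealer searches this space in hardware.
best = min(product((0, 1), repeat=3), key=energy)
print("lowest-energy assignment:", best, "energy:", energy(best))
```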