I recently stumbled upon a really nasty threat in emerging science called “malicious ecophagy” that probably should have gone into my earlier post called Steamrollers, except that this threat doesn’t have the slow-moving inevitability of those I identified before. Rather, ecophagy (the consumption of the ecosphere) would most likely happen suddenly. The threat stems from the race in nanotechnology to create an assembler, a nanobot able to take apart material at the molecular level and reassemble it. Think of the replicator technology contemplated in Star Trek fiction for a possible application.
The promise of such technology, which is partly the impetus for developing it, is the hope that, using nanotechnology, we would be able, for instance, to create corn from lawn clippings or clean up a toxic dump merely by rearranging the molecules. It could potentially be the end of want. An array of nanomedicine applications is also contemplated. The potential danger, however, is that if we manage to create an assembler, and if we can’t turn off the molecular transformation, the assembler could then go on to recreate itself ad infinitum until a swarm of biovorous nanobots has literally consumed the totality of biomass and reduced it to dust or some sort of gray goo. It has suitably been termed the “gray goo problem.” Science fiction has already suggested the problem, though of extraterrestrial origin, of an all-consuming biomass in the movie The Blob.
This passage from K. Eric Drexler’s Engines of Creation describes the issue further:
Though masses of uncontrolled replicators need not be gray or gooey, the term “gray goo” emphasizes that replicators able to obliterate life might be less inspiring than a single species of crabgrass. They might be superior in an evolutionary sense, but this need not make them valuable.
The gray goo threat makes one thing perfectly clear: we cannot afford certain kinds of accidents with [self-]replicating assemblers. Gray goo would surely be a depressing ending to our human adventure on Earth, far worse than mere fire or ice, and one that could stem from a simple laboratory accident.
This spells out the stakes fairly succinctly. Yet in their hubris, scientists appear confident that they can avoid the problem, and research continues apace because no regulatory agency exists to oversee and halt the development of potentially dangerous technologies. Indeed, weaponization of nanotechnology is virtually assured. It reminds me that at the dawn of the atomic age, the creators of the first atomic bomb considered the possibility that detonating a device might accidentally ignite the atmosphere. The danger was calculated to be sufficiently low, though, that the gamble appeared to be worth it. (We’re certainly comfortable with that particular doomsday scenario in hindsight.)
Everyone to whom I’ve described the gray goo problem has responded fairly simply that, well, we just shouldn’t go there then. We don’t want an “oops” we can’t recover from. That’s also the argument made by Bill Joy in his lengthy article in Wired titled “Why the Future Doesn’t Need Us.” His preferred term is “relinquishment,” and he includes genetic engineering and robotics in a triumvirate of “GNR” (genetics-nanotechnology-robotics) technologies that we should give up before we outwit ourselves and alter something irrevocably. Joy’s credentials and scientific acumen are far better than anything I can bring to bear on the issue, and I rather trust his conclusions (and recommend reading the article). However, despite a few good examples of historical relinquishment, I have my doubts that we can muster the necessary humility and restraint to avoid delving ever deeper into the Pandora’s box of science and technology. Like the so-called shot heard around the world, that “oops” muttered in a lab somewhere could be a signal event.