How to usher humanity into an era of transhumanist bliss: first, end scarcity. Second, eradicate death. Third, eliminate the bungled mechanisms that introduce imperfections into the human body. The vehicle for accomplishing all three? Molecular nanotechnology—in essence, the reduction of all material things to the status of software.
To reduce the splendid complexity of our world to a list of instructions, a mere recipe, would involve harnessing the most basic components of life. Start with Earth’s supply of atoms. Evolution, the laws of physics, and a big dose of chance have arranged those atoms into the objects and life-forms around us. If we could map the position and type of every atom in an object and also place atoms in specific positions, then in principle we could reproduce with absolute fidelity any material thing from its constituent parts. At a stroke, any material or artifact—a Stradivarius or a steak—could be available in abundance. We could build replacement body parts with capabilities that would hugely exceed their natural analogues. The economy, the environment, even what it means to be human, would be utterly transformed.
This vision holds wide currency among those anticipating a singularity, in which the creation of hyperintelligent, self-replicating machines triggers runaway technological advancement and economic growth, transforming human beings into cyborgs that are superhuman and maybe even immortal. Some of these futurists are convinced that this renaissance is just a few decades away. But in academia and industry, nanotechnologists are working on a very different set of technologies. Many of these projects will almost certainly prove to be useful, lucrative, or even transformative, but none of them are likely to bring about the transhumanist rapture foreseen by singularitarians. Not in the next century, anyway.
It’s not that the singularity vision is completely unrecognizable in today’s work. It’s just that the gulf between the two is a bit like the gap between traveling by horse and buggy and by interplanetary transport. The birth of nanotechnology is popularly taken to be 1989, when IBM Fellow Don Eigler used a scanning tunneling microscope to create the company’s logo out of xenon atoms. Since then a whole field has emerged, based mainly on custom-engineered molecules that have gone into such consumer items as wrinkle-free clothes, more-effective sunscreens, and sturdier sports rackets.
However, it is a very long way indeed from a top-notch tennis racket to smart nanoscale robots capable of swarming in our bodies like infinitesimal guardian angels, recognizing and fixing damaged cells or DNA, and detecting, chasing, and destroying harmful viruses and bacteria. Yet the transhumanists underestimate the magnitude of that leap. They look beyond the manipulation of an atom or molecule with a scanning tunneling microscope and see swarms of manipulators that are themselves nanoscale. Under software control, these “nanofactories” would be able to arrange atoms in any pattern consistent with the laws of physics.
Rather than simply copying existing materials, the transhumanists dream of integrating into those materials almost unlimited functionality: state-of-the-art sensing and information processing could be built into the very fabric of our existence, accompanied by motors with astounding power density. Singularitarians anticipate that Moore’s Law will run on indefinitely, giving us the immense computing power in tiny packages needed to control these nanofactories. These minuscule robots, or nanobots, need not be confined to protecting our bodies, either: if they can fix and purify, why not extend and enhance? Neural nanobots could allow a direct interface between our biological wetware and powerful computers with vast databases.
Maybe we could leave our bodies entirely. Only the need to preserve the contents of our memories and consciousness, our mental identities, ties us to them. Perhaps those nanobots will even be able to swim through our brains to read and upload our thoughts and memories, indeed entire personalities, to a powerful computer.
This expansive view of molecular nanotechnology owes as much to K. Eric Drexler as to anyone else. An MIT graduate and student of Marvin Minsky [see table, “Who’s Who in the Singularity,” in this issue], Drexler laid out his vision in the 1992 book Nanosystems (John Wiley & Sons). Those ideas have been picked up and expanded by other futurists over the past 16 years.
In his book, Drexler envisaged nanostructures built from the strongest and stiffest materials available, using the rational design principles of mechanical engineering. The fundamental building blocks of this paradigm are tiny, rigid cogs and gears, analogous to the plastic pieces of a Lego set. The gears would distribute power from nanoscale electric motors and be small enough to assist in the task of attaching molecules to one another. They would also process information. Drexler drew inspiration from a previous generation of computing devices, which used levers and gears rather than transistors, for his vision of ultrasmall mechanical computers.
Assuming that an object’s structure could easily be reduced to its molecular blueprint, the first order of business is figuring out how to translate macroscale manufacturing methods into nanoscale manipulations. For example, let’s say you wanted a new pancreas. Your first major challenge stems from the fact that a single human cell is composed of about 10¹⁴ atoms, and the pancreas you want has at least 80 billion cells, probably more. We could use a scanning tunneling microscope to position individual atoms with some precision, but to make a macroscopic object with it would take a very long time.
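To get a feel for just how long, here is a rough back-of-the-envelope sketch; the one-atom-per-second placement rate is an illustrative assumption (and a generous one for a serial instrument), not a measured figure for any real microscope.

```python
# Rough back-of-the-envelope estimate of atom-by-atom assembly time.
# The one-atom-per-second placement rate is an illustrative assumption,
# not a measured figure for any real instrument.

ATOMS_PER_CELL = 1e14        # roughly 10^14 atoms in a single human cell
CELLS_PER_PANCREAS = 8e10    # "at least 80 billion cells"
PLACEMENT_RATE = 1.0         # atoms placed per second (assumed)

total_atoms = ATOMS_PER_CELL * CELLS_PER_PANCREAS            # ~8e24 atoms
years = total_atoms / PLACEMENT_RATE / (60 * 60 * 24 * 365)

print(f"Atoms to place: {total_atoms:.1e}")
print(f"Time at one atom per second: {years:.1e} years")
# ~2.5e17 years, many millions of times the age of the universe.
```

Even with heroic improvements in placement speed, a purely serial approach is hopeless, which is what motivates the idea discussed next.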
The theoretical solution, initially, was an idea known as exponential manufacturing. In its simplest form, this refers to a hypothetical nanoscale “assembler” that could construct objects on its own scale. For instance, it could make another assembler, and each assembler could go on to make more assemblers, resulting in a suite of assemblers that would combine forces to make a macroscopic object.
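On paper, the appeal is easy to see: a population of self-copying assemblers doubles every generation, so reaching astronomical numbers takes only a few dozen rounds. The sketch below is a deliberately idealized doubling model; the target of one assembler per million product atoms is a hypothetical figure chosen purely to give the arithmetic something to aim at.

```python
import math

# Idealized doubling model of "exponential manufacturing":
# one assembler builds a copy of itself, then both copy themselves, and so on.
# The target of one assembler per million product atoms is hypothetical.

PRODUCT_ATOMS = 8e24                      # atoms in the pancreas example above
TARGET_ASSEMBLERS = PRODUCT_ATOMS / 1e6   # assumed: one assembler per million atoms

generations = math.ceil(math.log2(TARGET_ASSEMBLERS))
print(f"Doubling rounds needed: {generations}")          # 63

population = 2 ** generations
print(f"Assemblers after {generations} rounds: {population:.3e}")
```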
Setting aside the enormous challenges of creating and coordinating these nanoassemblers, some theorists have worried about a doomsday scenario known as the “gray goo” problem. Runaway replicators could voraciously consume resources to produce ever more stuff, a futuristic take on the old story of the sorcerer’s apprentice. Not to worry, say Drexler and colleagues. In the latest vision of the nanofactory, the reproducing replicators give way to Henry Ford–style mass production, with endlessly repeated elementary operations on countless tiny production lines.
It’s a seductive idea, seemingly validated by the workings of the cells of our own bodies. We’re full of sophisticated nanoassemblers: delve into the inner workings of a typical cell and you’ll find molecular motors that convert chemical energy into mechanical energy and membranes with active ion channels that sort molecules—two key tasks needed for basic nanoscale assembly. ATP synthase, for example, is an intricate cluster of proteins that manufactures adenosine triphosphate, the molecule that fuels the contraction of muscle cells and countless other cellular processes. Cell biology also exhibits software-controlled manufacturing, in the form of protein synthesis. The process starts with the ribosome, a remarkable molecular machine that can read information from a strand of messenger RNA and convert the code into a sequence of amino acids. The amino-acid sequence in turn defines the three-dimensional structure of a protein and its function. The ribosome fulfills the functions expected of an artificial assembler—proof that complex nanoassembly is possible.
If biology can produce a sophisticated nanotechnology based on soft materials like proteins and lipids, singularitarian thinking goes, then how much more powerful our synthetic nanotechnology would be if we could use strong, stiff materials, like diamond. And if biology can produce working motors and assemblers through the blind trial and error of Darwinian evolution, how much more powerful the devices could be if they were rationally designed using all the insights we’ve learned from macroscopic engineering.
But that reasoning fails to take into account the physical environment in which cell biology takes place, which has nothing in common with the macroscopic world of bridges, engines, and transmissions. In the domain of the cell, water behaves like thick molasses, not the free-flowing liquid that we are familiar with. This is a world dominated by the fluctuations of constant Brownian motion, in which components are ceaselessly bombarded by fast-moving water molecules and flex and stretch randomly. The van der Waals force, which attracts molecules to one another, dominates, causing things in close proximity to stick together. Clingiest of all are protein molecules, whose stickiness underlies a number of undesirable phenomena, such as the rejection of medical implants. What’s to protect a nanobot assailed by particles glomming onto its surface and clogging up its gears?
The watery nanoscale environment of cell biology seems so hostile to engineering that it is hard to believe biology works at all. But it does—and very well at that. The lack of rigidity, excessive stickiness, and constant random motion may seem like huge obstacles to be worked around, but biology is aided by its own design principles, which have evolved over billions of years to exploit those characteristics. That brutal combination of strong surface forces and random Brownian motion in fact propels the self-assembly of sophisticated structures, such as intricately folded protein molecules. The cellular environment that at first seems annoying—filled with squishy objects and the chaotic banging around of particles—is essential to the operation of molecular motors, where a change in a protein molecule’s shape provides the power stroke that converts chemical energy to mechanical energy.
In the end, rather than ratifying the “hard” nanomachine paradigm, cellular biology casts doubt on it. But even if that mechanical-engineering approach were to work in the body, there are several issues that, in my view, have been seriously underestimated by its proponents.
First, those building blocks—the cogs and gears made famous in countless simulations supporting the case for the singularity—have some questionable chemical properties. They are essentially molecular clusters with odd and special shapes, but it’s far from clear that they represent stable arrangements of atoms that won’t rearrange themselves spontaneously. These crystal lattices were designed using molecular modeling software, which works on the principle that if valences are satisfied and bonds aren’t too distorted from their normal values, then the structures formed will be chemically stable. But this is a problematic assumption.
A regular crystal lattice is a 3-D arrangement of atoms or molecules with well-defined angles between the bonds that hold them together. To build a crystal lattice in a nonnatural shape—say, with a curved surface rather than with the flat faces characteristic of crystals—the natural distances and angles between atoms need to be distorted, severely straining those bonds. Modeling software might tell you that the bonds will hold. However, real-world chemistry has a way of confounding computer models. For example, if you try to make very small, spherical diamond crystals, a layer or two of carbon atoms at the surface will spontaneously rearrange themselves into a new form—not of diamond, but of graphite.
A second problem has to do with the power of surface forces and the high surface area anticipated for these nanobots. Researchers attempting to shrink existing microelectromechanical systems to the nanoscale have already discovered that the combination of friction and persistent sticking can be devastating. Nanobots are expected to operate at very high power densities, so even rather low values of friction may vaporize or burn up the minuscule machines. At the very least, this friction and sticking will play havoc with the machines’ chemical stability.
Then there’s the prospect of irreversible damage if reactive substances—such as water or oxygen—get caught up in a nanobot’s exposed surfaces, upsetting their carefully arranged chemistry. To avoid those molecules, nanodevices will have to be fabricated in a fully controlled environment. No one yet knows how a medical nanobot would be protected once it is released into the warm, crowded turbulence of the body, perhaps the most heterogeneous environment imaginable.
Finally, there’s the question of how an intricate arrangement of cogs and gears that depends on precision and rigidity to work will respond to thermal noise and Brownian bombardment at room temperature. The turbulence that nanobots will be subjected to will far exceed that inflicted on macroscopically engineered structures, and even the most rigid materials, like diamond, will bend and wobble in response. It would be like making a clock and its gears out of rubber, then watching it tumble around in a clothes dryer and wondering why it doesn’t keep time. The bottom line is that we have no idea whether complex and rigid mechanical systems—even ones made from diamond—can survive in the nanoworld.
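For a sense of scale, compare the energy of a single thermal kick at room temperature with the energies that hold nanostructures together; the bond-energy figures below are standard textbook order-of-magnitude values, used only for rough illustration.

```python
# Order-of-magnitude comparison of thermal energy with bonding energies.
# The bond-energy figures are standard textbook values, used only for scale.

K_BOLTZMANN = 1.381e-23   # J/K
EV = 1.602e-19            # joules per electronvolt
T_ROOM = 300              # kelvin, roughly room temperature

thermal_kick_ev = K_BOLTZMANN * T_ROOM / EV
print(f"kT at 300 K: {thermal_kick_ev:.3f} eV")   # ~0.026 eV

# Typical energy scales, order of magnitude:
#   van der Waals contact: ~0.01-0.1 eV -> comparable to kT, so weakly bound
#                          parts stick, unstick, and jostle incessantly
#   C-C covalent bond:     ~3.6 eV      -> individual bonds survive the battering,
#                          but slender components still flex, because stiffness
#                          rather than bond strength sets how far they bend
```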
Put all these complications together and what they suggest, to me, is that the range of environments in which rigid nanomachines could operate, if they operate at all, would be quite limited. If, for example, such devices can function only at low temperatures and in a vacuum, their impact and economic importance would be virtually nil.
In 15 years of intense nanotechnology research, we have not even come close to experiencing the exponentially accelerating technological progress toward the goals set out by singularitarians. Impressive advances are emerging from the labs of real-world nanotechnologists, but these have little to do with the Drexlerian vision, which seems to be accumulating obstacles faster than it can overcome them. Given these facts, I can’t take seriously the predictions that life-altering molecular nanotechnology will arrive within 15 or 20 years and hasten the arrival of a technological singularity before 2050.
Rather than try to defy or resist nature, I say we need to work with it. DNA itself can be used as a construction material. We can exploit its astounding properties of self-assembly to make programmed structures that execute new and beneficial functions [see sidebar, “The Real Nanobot”]. Chemists have already made nanoscale molecular shuttles and motors inspired directly by biology, with exciting applications in drug delivery and tissue engineering.
We will reap major medical advances by radically reengineering existing microorganisms, especially in nanodevices that perform integrated diagnosis and treatment of some disorders. But the timescales to reach the clinic are going to be long, and the goal of cell-by-cell repair lies far beyond our still-incomplete grasp of biological complexity.
Much the same can be said about the singularitarian computers needed to generate a complete reading of a mental state, and about brain implants that would seamlessly integrate our thought processes with a computer network. True, brain-interface systems have already been built. A state-of-the-art system can read about 128 neurons. So: 128 down, 20 billion or so to go.
Nonetheless, I’m an optimist. I think that in the near future we’ll successfully apply nanotechnology to the most pressing social challenges, such as energy and the environment. For example, new polymer- and nanoparticle-based photovoltaics may soon lead to dramatic improvements in the price and production of solar cells.
What, then, of software-controlled matter? Complete control will remain an unattainable goal for generations to come. But some combination of self-assembly and directed assembly could very well lead to precisely built nanostructures that would manipulate the way light, matter, and electrons interact—an application of nanotechnology that’s already leading to exciting new discoveries. We’ve barely scratched the surface of what we’ll eventually be able to do with these custom-built nanostructures. It is altogether possible that we will finally harness the unfamiliar quantum effects of the nanoscale to implement true quantum computing and information processing. Here, I suspect, is the true killer application for the idea of software-controlled matter: devices that integrate electronics and optics, fully exploiting their quantum character in truly novel ways—a far cry from the minuscule diamond engines foreseen by the transhumanists.
We shouldn’t abandon all of the more radical goals of nanotechnology, because they may ultimately be achieved by routes quite different from (and longer than) those foreseen by the proponents of molecular nanotechnology. Perhaps we should thank Drexler for alerting us to the general possibilities of nanotechnology, while recognizing that the trajectories of new technologies rarely run smoothly along the paths foreseen by their pioneers.