Computing, cellular automata and self-assembly

There’s a clear connection between the phenomenon of self-assembly, by which objects at the nanoscale arrange themselves into complex shapes by virtue of programmed patterns of stickiness, and information. The precisely determined three-dimensional shape of a protein is entirely specified by the one-dimensional sequence of amino acids along the chain, and the information that specifies this sequence (and thus the shape of the protein) is stored as a sequence of bases on a piece of DNA. Talk of information leads naturally to thoughts of computing, so it’s worth asking whether there is any general relationship between computing processes, thought of at their most abstract, and self-assembly.

The person who has, perhaps, done the most to establish this connection is Erik Winfree, at Caltech. Winfree’s colleague, Paul Rothemund, made headlines earlier this year by making a nanoscale smiley face, but I suspect that less well publicised work the pair of them did a couple of years ago will prove just as significant in the long run. In this work, they executed a physical realisation of a cellular automaton whose elements were tiles of DNA with particular patches of programmed stickiness. The work was reported in PLoS Biology here; see also this commentary by Chengde Mao. A simple one-dimensional cellular automaton consists of a row of cells, each of which can take one of two values. The automaton evolves in discrete steps, with a rule that determines the value of a cell on the next step by reference to the values of the adjacent cells on the previous step (for an introduction to elementary cellular automata, see here). One interesting thing about cellular automata is that very simple rules can generate complex and interesting patterns. Many of these can be seen in Stephen Wolfram’s book, A New Kind of Science (available online here). It’s worth noting that some of the grander claims in this book are controversial, as is the respective allocation of credit between Wolfram and the rest of the world, but it remains an excellent overview of the richness of the subject.
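To make this concrete, here is a minimal Python sketch of an elementary cellular automaton (my own illustration, nothing from the paper itself). Rule 90, in which each cell’s next value is simply the XOR of its two neighbours, generates the same Sierpinski triangle pattern that the DNA tiles grow:

```python
def step(cells, rule=90):
    """Advance an elementary cellular automaton by one step.
    Each cell's next value depends only on itself and its two neighbours;
    the 8-bit rule number encodes the outcome for each neighbourhood."""
    n = len(cells)
    return [(rule >> ((cells[(i - 1) % n] << 2)
                      | (cells[i] << 1)
                      | cells[(i + 1) % n])) & 1
            for i in range(n)]

# Rule 90 (each cell becomes the XOR of its two neighbours) grows a
# Sierpinski triangle from a single seed cell, the pattern seen in the
# DNA-tile experiment.
cells = [0] * 31
cells[15] = 1
for _ in range(16):
    print(''.join('#' if c else '.' for c in cells))
    cells = step(cells)
```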

I can see at least two aspects of this work that are significant. The first follows from the fact that a cellular automaton represents a type of computer. It can be shown that some types of cellular automaton are, in fact, equivalent to universal Turing machines, able in principle to carry out any possible computation. Of course, this feature may well be entirely useless in practice. A more recent paper by this group (abstract here, subscription required for full paper) succeeds in using DNA tiles to carry out some elementary calculations, but highlights the difficulties caused by the significant error rate in the elementary operations. Secondly, this offers, in principle, a very effective way of designing and executing very complicated and rich structures that combine designed order with, in some cases, aperiodicity. In the physical realisation here, the starting conditions are specified by the sequence of a “seed” strand of DNA, while the rule is embodied in the design of the sticky patches on the tiles, itself specified by the sequence of the DNA from which they are made. Simple modifications of the seed strand sequence and the rule implicit in the tile design could result in a wide and rich design space of resulting “algorithmic crystals”.
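To see how computation and error propagation are intertwined in such a crystal, here is a toy model (a caricature of the assembly logic only, with none of the real tile chemistry in it): each new tile can only bind where its sticky ends match a pair of tiles in the previous layer, so the one tile that fits carries the XOR of their values, and any mismatched tile that locks in anyway corrupts everything that grows from it – exactly the propagation errors marked in the figure below.

```python
import random

def assemble(seed, n_layers, error_rate=0.01):
    """Toy algorithmic crystal growth: each layer is built from tiles whose
    sticky ends match a pair of neighbours in the layer below, so the only
    tile that fits carries the XOR of their values."""
    layers = [list(seed)]
    for _ in range(n_layers):
        below = layers[-1]
        layer = []
        for i in range(1, len(below)):
            bit = below[i - 1] ^ below[i]     # the tile that binds correctly
            if random.random() < error_rate:  # a mismatched tile locks in...
                bit ^= 1                      # ...and the error propagates
            layer.append(bit)
        layers.append(layer)
    return layers

for layer in assemble([0] * 15 + [1] + [0] * 15, 15):
    print(''.join('#' if b else '.' for b in layer))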

A physical realisation of a cellular automaton executed using self-assembling DNA tiles. Red crosses indicate propagation errors, which initiate or terminate the characteristic Sierpinski triangle patterns. From Rothemund et al., PLoS Biology 2, 2041 (2004); copyright the authors, reproduced under a Creative Commons Attribution License.

On my nanotechnology bookshelf

Following my rather negative review of a recent book on nanotechnology, a commenter asked me for some more positive recommendations of nanotechnology books worth reading. So here’s a list of nanotechnology books old and new, with brief comments. The only criterion for inclusion on this list is that I have a copy of the book in question; I know that there are a few obvious gaps. I’ll list them in the order in which they were published:

Engines of Creation, by K. Eric Drexler (1986). The original book which launched the idea of nanotechnology into popular consciousness, and still very much worth reading. Given the controversy that Drexler has attracted in recent years, it’s easy to forget that he’s a great writer, with a very fertile imagination. What Drexler brought to the idea of nanotechnology – then dominated, on the one hand, by precision mechanical engineering (the world from which the word nanotechnology, coined by Taniguchi, originally came), and, on the other, by the microelectronics industry – was an appreciation of the importance of cell biology as an exemplar of nanoscale machines and devices and of ultra-precise nanoscale chemical operations.

Nanosystems: Molecular Machinery, Manufacturing, and Computation, by K. Eric Drexler (1992). This is Drexler’s technical book, outlining his particular vision of nanotechnology – “the principles of mechanical engineering applied to chemistry” – in detail. Very much in the category of books that are often cited, but seldom read – I have, though, read it in some detail. The proponents of the Drexler vision are in the habit of dismissing any objection with the words “it’s all been worked out in ‘Nanosystems'”. This is often not actually true; despite the deliberately dry and textbook-like tone, and the many quite complex calculations (which are largely based on science that was certainly sound at the time of writing, though there are a few heroic assumptions that need to be made), many of the central designs are left as outlines, with much detail left to be filled in. My ultimate conclusion is that this approach to nanotechnology will turn out to have been a blind alley, though in the process of thinking through the advantages and disadvantages of the mechanical approach we will have learned a lot about how radical nanotechnology will need to be done.

Molecular Devices and Machines: A Journey into the Nanoworld, by Vincenzo Balzani, Alberto Credi and Margherita Venturi (2003). The most recent addition to my bookshelf; I’ve not finished reading it yet, but it’s good so far. This is a technical (and expensive) book, giving an overview of the approach to radical nanotechnology through supramolecular chemistry. This is perhaps the part of academic nanoscience that is closest to the Drexler vision, in that the explicit goal is to make molecular scale machines and devices, though the methods and philosophy are rather different from the mechanical approach. A must, if you’re fascinated by cis-trans isomerisation in azobenzene and intermolecular motions in rotaxanes (and if you’re not, you probably should be).

Bionanotechnology: Lessons from Nature, by David Goodsell (2004). I’m a great admirer of the work of David Goodsell as a writer and illustrator of modern cell biology, and this is a really good overview of the biology that provides both inspiration and raw materials for nanobiotechnology.

Soft Machines: Nanotechnology and Life, by Richard Jones (2004). Obviously I can’t comment on this, apart from to say that three years on I wouldn’t have written it substantially differently.

Nanotechnology and Homeland Security: New Weapons for New Wars, by Daniel and Mark Ratner (2004). I still resent the money I spent on this cynically titled and empty book.

Nanoscale Science and Technology, eds Rob Kelsall, Ian Hamley and Mark Geoghegan (2005). A textbook at the advanced undergraduate/postgraduate level, giving a very broad overview of modern nanoscience. I’m not really an objective commentator, as I co-wrote two of the chapters (on bionanotechnology and macromolecules at interfaces), but I like the way this book combines the hard (semiconductor nanotechnology and nanomagnetism) and the soft (self-assembly and bionano).

Nanofuture: What’s Next For Nanotechnology, by J. Storrs Hall (2005). Best thought of as an update of Engines of Creation, this is an attractive and well-written presentation of the Drexler vision of nanotechnology. I entirely disagree with the premise, of course.

Nano-Hype: The Truth Behind the Nanotechnology Buzz, by David Berube (2006). A book, not about the science, but about nanotechnology as a social and political phenomenon. I reviewed it in detail here. I’ve been referring to it quite a lot recently, and am increasingly appreciating the dry humour hidden within its rather complete historical chronicle.

The Dance of Molecules: How Nanotechnology is Changing Our Lives, by Ted Sargent (2006). Reviewed by me here; it’s probably fairly clear that I didn’t like it much.

The Nanotech Pioneers : Where Are They Taking Us?, by Steve Edwards (2006). In contrast to the previous one, I did like this book, which I can recommend as a good, insightful and fairly nanohype-free introduction to the area. I’ve written a full review of this, which will appear in “Physics World” next month (and here also, copyright permitting).

The road to nanomedicine may not always be quick or easy

Of the six volunteers who became seriously ill during a drug trial last week, four, mercifully, seem to be beginning to recover, while two are still critical, according to the most recent BBC news story. It’s still too early to be sure what went so tragically wrong; there are informative articles, with expert comment, on the websites of both New Scientist and Nature. What we should learn from this is that even as medicine gets more sophisticated and molecularly specific, many things can go wrong in the introduction of new therapies. The time it takes new treatments to get regulatory approval can be frustratingly, agonisingly long, but we need to be very careful about the calls we sometimes hear to speed these processes up. The delays are not just gratuitous red tape.

The drug behind this news story was developed by a small German company, TeGenero Immunotherapeutics. It’s a monoclonal antibody, code-named TGN1412: a protein molecule which binds specifically to a receptor molecule on T-cells, a type of white blood cell which is central to the body’s immune response. The binding site, a glycoprotein (a protein with a carbohydrate segment attached) known as CD28, provides the signal to activate the T-cells. What’s special about TGN1412 is that the action of this drug alone is sufficient to activate the T-cells; normally, simultaneous binding to two different receptors is required. It’s as if TGN1412 overrides the safety catch, allowing the T-cells to be activated by a single trigger. It’s these activated T-cells that then carry out the therapeutic purpose, killing cancer cells, for example.

Few people have connected these events with bionanotechnology (an exception is the science journalist Niels Boeing in this piece on the German Technology Review blog). There are now a number of monoclonal antibody based drugs in clinical use, and they are not normally considered to be the product of nanomedicine. But they do illustrate some of the strategies that underlie developments in nanomedicine – they are exquisitely targeted to particular cells, they exploit the chemical communication strategies that cells use, and they increasingly co-opt biology’s own mechanisms for clinical purposes. Biology is so complex that it’s always going to spring surprises, and the worry must be that as our interventions in complex biological systems become more targeted, so the potential for unpleasant surprises may increase. Whenever one hears blithe assurances that nanotechnology will soon cure cancer or arrest ageing if only those bureaucratic regulators would allow it, one needs to think of those two men struggling for their lives in a North London hospital. There may be good reasons why the pace of innovation in medicine can sometimes be slow.

How much should we worry about bionanotechnology?

We should be very worried indeed about bionanotechnology, according to Alan Goldstein, a biomaterials scientist from Alfred University, who has written a long article called I Nanobot on this theme in the online magazine Salon.com. According to this article, we are stumbling into creating a new form of life, which is, naturally, out of our control. “And Prometheus has returned. His new screen name is nanobiotechnology.” I think that some very serious ethical issues will be raised by bionanotechnology and synthetic biology as they develop. But this article is not a good start to the discussion; when you cut through Goldstein’s overwrought and overheated writing, quite a lot of what he says is just wrong.

Goldstein makes a few interesting and worthwhile points. Life isn’t just about information; you have to have metabolism too. A virus isn’t truly alive, because it consists only of information – it has to borrow a metabolism from the host it parasitises in order to reproduce. And our familiarity with one form of life – our form, based on DNA for information storage, proteins for metabolic function, and RNA to intercede between information and metabolism – means that we’re too unimaginative about conceiving entirely alien types of life. But the examples he gives of potentially novel, man-made forms of life reveal some very deep misconceptions about how life itself, at its most abstract, works.

I don’t think Goldstein really understands the distinction between equilibrium self-assembly, by which lipid molecules form vesicles, for example, and the fundamentally out-of-equilibrium character of the self-organisation characteristic of living things. I am literally not the same person I was when I was twenty; living organisms are constantly turning over the molecules they are made from; the patterns persist, but the molecules that make up the pattern are constantly changing. So his notion – that if we make an anti-cancer drug delivery device with an antibody that targets a certain molecule on a cell surface, the device will stay stuck there through the lifetime of the organism, and, if it finds its way to a germ cell, will be passed down from generation to generation like a retrovirus – is completely implausible. The molecule it’s stuck to will soon be turned over, and the device itself will be similarly transient. It’s because the device lacks a way to store the information that would be needed to continually regenerate itself that it can’t be considered in any sensible way living.

If rogue, powered vesicles lodging in our sperm and egg cells aren’t scary enough, Goldstein next invokes the possibility of meddling with the spark of life itself – electricity: “But the moment we close that nano-switch and allow electron current to flow between living and nonliving matter, we open the nano-door to new forms of living chemistry — shattering the ‘carbon barrier.’ This is, without doubt, the most momentous scientific development since the invention of nuclear weapons.” This sounds serious, but it seems to be founded on a misconception of how biology uses electricity. Our cells burn sugar, Goldstein says, which “yields high-energy electrons that are the anima of the living state.” Again, this is highly misleading. The energy currency of biology isn’t electricity, it’s chemistry – specifically, the energy-carrying molecule ATP. And when electrical signals are transmitted, through our nerves, or to make our heart work, it isn’t electrons that are moving, it’s ions.

Goldstein makes a big deal out of the idea of a Biomolecule-to-Material interface between a nanofabricated pacemaker and the biological pacemaker cells of the heart: “A nanofabricated pacemaker with a true BTM interface will feed electrons from an implanted nanoscale device directly into electron-conducting biomolecules that are naturally embedded in the membrane of the pacemaker cells. There will be no noise across this type of interface. Electrons will only flow if the living and nonliving materials are hard-wired together. In this sense, the system can be said to have functional self-awareness: Each side of the BTM interface has an operational knowledge of the other.” This sounds like a profound and disturbing blurring of the line between the artificial and the biological. The only trouble is, it’s based on a simple error. Pacemaker cells don’t have electron-conducting biomolecules embedded in their membranes; the membrane potentials are set up and relaxed by the flow of ions through ion channels. There can be no direct interface of the kind that Goldstein describes.

Of course, we can and do make artificial interfaces between organisms and artefacts – the artificial pacemakers that Goldstein mentions are one example, and cochlear implants are another. The increasing use of this kind of interface between artefacts and human beings does already raise ethical and philosophical issues, but discussion of these isn’t helped by this kind of mysticism built on misconception.

In an attempt to find an abstract definition of life, Goldstein revives a hoary old error about the relationship between the second law of thermodynamics and life: “The second law of thermodynamics tells us that all natural systems move spontaneously toward maximum entropy. By literally assembling itself from thin air, biological life appears to be the lone exception to this law.” As I spent several lectures explaining to my first-year physics students last semester, what the second law of thermodynamics actually says is that isolated systems tend to maximum entropy. Systems that can exchange energy with their surroundings are bound only by the weaker constraint that, as they change, the total entropy of the universe must not decrease. If a lake freezes, the entropy of the water decreases, but as the ice forms it expels heat, which raises the entropy of its surroundings by at least as much as the water’s entropy decreases. Biology is no different, trading local decreases of entropy for global increases. Goldstein does at least concede this point, noting that “geodes are not alive”, but he then goes on to say that “nanomachines could even be designed to use self-assembly to replicate”. This statement, at least, is half-true; self-assembly is one of the most important design principles used by biology, and it’s increasingly being exploited in nanotechnology too. But self-assembly is not, in itself, biology – it’s a tool used by biology. A system that is organised purely by equilibrium self-assembly is moving towards thermodynamic equilibrium, and things that are at equilibrium are dead.
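To put numbers on the freezing lake (a standard textbook calculation: the latent heat of fusion of water is about 334 J per gram, the freezing point is 273 K, and I’ve picked 263 K for the cold air purely for illustration):

```latex
% Entropy bookkeeping for freezing 1 g of water at T_m = 273 K,
% with the surroundings at T_surr = 263 K (illustrative value)
\begin{align*}
\Delta S_{\mathrm{water}} &= -\frac{L}{T_m}
  = -\frac{334\,\mathrm{J}}{273\,\mathrm{K}} \approx -1.22\,\mathrm{J\,K^{-1}} \\
\Delta S_{\mathrm{surroundings}} &= +\frac{L}{T_{\mathrm{surr}}}
  = +\frac{334\,\mathrm{J}}{263\,\mathrm{K}} \approx +1.27\,\mathrm{J\,K^{-1}} \\
\Delta S_{\mathrm{total}} &\approx +0.05\,\mathrm{J\,K^{-1}} > 0
\end{align*}
```

The water’s entropy falls, but the total entropy of the universe still rises; no law is broken, and exactly the same bookkeeping applies to a growing organism.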

The problem at the heart of this article is that in insisting that life is not about DNA, but metabolism, Goldstein has thrown the baby out with the bathwater. Life isn’t just about information, but it needs information in order to be able to replicate, and, most centrally, it needs some way of storing information in order to evolve. It’s true that this information could be carried in vehicles other than DNA, and it need not necessarily be encoded by a sequence of monomers in a macromolecule. I believe that it might in principle be possible in the future to build an artificial system that does fulfil some general definition of life. I agree that this would constitute a dramatic scientific development with far-reaching implications that should be discussed well in advance. But I don’t think it’s doing anyone a service to overstate the significance of the developments in nanobiotechnology that we are seeing at the moment, and I think that scientists commenting on these issues do have an obligation to maintain standards of scientific accuracy.

Computing with molecules

It’s easy to forget that, looking at biology as a whole, computing and information processing are more often done by individual molecules than by brains and nervous systems. After all, most organisms don’t have a nervous system at all, yet they still manage to sense their environment and respond to what they discover. And a multicellular organism is itself a colony of many differentiated cells, all of which need to communicate and cooperate in order for the organism to function at all. In these processes, signals are communicated not by electrical pulses, but by the physical movement of molecules, and logic is performed not by circuits of transistors, but by enzymes. Modern systems biology is just starting to unravel the operation of these complex and effective chemical computers, but we’re very far from being able to build anything like them with our currently available nanotechnology.

A news story on the New Scientist website (seen via Martyn Amos’s blog) reports an interesting step along the way, with an experimental demonstration of an enzyme-based system that chemically implements simple logic operations like a half-adder and a half-subtractor. The report, from Itamar Willner’s group at the Hebrew University of Jerusalem, is published in Angewandte Chemie International Edition (abstract here, subscription required for full paper). No one is going to be doing complicated sums with these devices for a while; the inputs are provided by supplying certain chemical species (glucose and hydrogen peroxide, in this case), and the answers are provided by the appearance or non-appearance of reaction products. But where this system could come in useful is in providing a nanoscale system like a drug delivery device with some rudimentary mechanisms for sensing the environment and acting on the information, maybe by swimming towards the source of some chemicals or releasing its contents when it has detected some combination of chemicals around it.
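The logic itself is the familiar digital kind; what’s new is the substrate. As a purely illustrative sketch (the mapping of molecules to bits here is my own shorthand, not the paper’s actual scheme), a chemical half-adder treats the presence of each input species as a bit, and the two distinguishable reaction products play the roles of the sum and carry outputs:

```python
# Toy truth-table view of a chemical half-adder: inputs are the presence
# or absence of two species (glucose and hydrogen peroxide in the Willner
# experiment); "sum" and "carry" stand in for the two distinguishable
# reaction products. Purely illustrative, not the real chemistry.
def half_adder(glucose, peroxide):
    a, b = int(glucose), int(peroxide)
    sum_bit = a ^ b    # product formed when exactly one input is present
    carry_bit = a & b  # product formed only when both inputs are present
    return sum_bit, carry_bit

for glucose in (False, True):
    for peroxide in (False, True):
        print(glucose, peroxide, '->', half_adder(glucose, peroxide))
```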

This is still not quite a fully synthetic analogue of a cellular information processing system; it uses enzymes of biological origin, and it doesn’t use the ubiquitous chemical trick of allostery, in which the binding of one molecule to an enzyme changes the way it processes another molecule, effectively allowing a single molecule to act as a logic gate. But it suggests many fascinating possibilities for the future.

Death, life, and amyloids

If you take a solution of a protein – an enzyme, say – and heat it up, it unfolds. The beautifully specific three-dimensional structure that underlies the workings of the enzyme or molecular machine melts away, leaving the protein in an open, flexible state. What happens next depends on how concentrated the protein solution is. Remarkably, if the solution is dilute enough that different protein molecules don’t significantly interact, they’ll refold back into their biologically active state. This discovery of reversible refolding won Christian Anfinsen the 1972 Nobel Prize for chemistry; it was these experiments that established that the three-dimensional structure of proteins in their functional form is wholly specified by their one-dimensional sequence of amino acids, via the remarkable, and still not wholly understood, example of self-assembly that is protein folding. But if the proteins are in a more concentrated solution – at the concentration of proteins in egg white or milk whey, for example – then as they cool they don’t fold properly. Instead they interact to make a sticky mess, apparently without biological functionality – you can’t hatch a chick out of a boiled egg.

But over the last fifteen years, it’s become clear that misfolded proteins are of huge biological and medical significance. Previously, the state that many proteins misfold into was believed to be an uninteresting, unstructured mish-mash. But now it’s known that, on the contrary, misfolded proteins often form a generic, highly ordered structure called an amyloid fibril. These are tough, stiff fibres, each about 10 nm wide and up to a few microns in length, in which the protein molecules are stacked together, linked by multiple hydrogen bonds, in extended, crystal-like structures called beta-sheets. The medical significance of these amyloid fibrils is huge; it’s these misfolded proteins that are associated with a number of serious and incurable diseases, like Alzheimer’s, type II diabetes and Creutzfeldt-Jakob disease. The physical significance is that there’s an increasingly influential school of thought (led by Chris Dobson of Cambridge) that the amyloid state is actually the most stable state of virtually all proteins. If you take this view to the limit, it implies that all organisms would eventually and inevitably succumb to amyloid diseases if they lived long enough.

This sinister side of amyloid fibrils hasn’t stopped people looking for some positive uses for them. Some researchers, like Harvard’s Susan Lindquist, have thought about using them as templates to make nanowires, though in my view they have several disadvantages compared to other potential biological templates like DNA. But biology is full of surprises, and the discovery by a Swedish group a few years ago that a misfolded version of the milk protein alpha-lactalbumin has a potent anti-cancer effect (full article available, without subscription, here) is certainly one of these. They speculate that this conversion takes place inside the stomach of new-born babies, helping protect them against cancer, and these molecules have already undergone successful clinical trials for treatment of skin papillomas. My children are still young enough for me to remember well the consistency of posset (as we in England delicately call regurgitated baby milk), so the idea of this as a clinically proven defence against cancer is rather odd.

But even stranger than this is a story in this week’s Economist, implicating amyloids in the ultimate origin of life itself. This reports from a meeting held at the Royal Society last week about the origin of life, and discusses a theory by the Cardiff biologist Trevor Dale. He takes inspiration from Cairns-Smith, the originator of a brilliant but so far unverified theory of the origin of life which suggests that life began by the templated polymerisation of macromolecules on the surfaces of clay platelets. Dale takes this idea, but suggests that the original macromolecule was RNA, and the surface, rather than being a clay platelet, was a protein amyloid fibril. This then naturally gives rise to the idea of co-evolution of nucleic acids and proteins, rather than requiring, as more popular theories do, a separate, later stage in which an RNA-only form of life recruits proteins. The theory is described in a pre-publication article in the Journal of Theoretical Biology (abstract only without a subscription). I’m not sure I’m entirely convinced, but who can say what other surprises the amyloid state of proteins may yet spring.

Throbbing gels

This month’s edition of Nano Letters includes a paper from our Sheffield soft nanotechnology group (Jon Howse did most of the work, assisted by chemists Colin Crook and Paul Topham and beam line scientists Anthony Gleeson and Wim Bras, with me and Tony Ryan providing inspiration and/or interference) demonstrating the direct conversion of chemical energy to mechanical energy at the single-molecule level. This is a development of the line of work I described here. Our idea is to combine a macromolecule which changes size in response to a change in the acidity of its surroundings with a chemical reaction which spontaneously leads to an oscillation in the acidity, to get a cyclic change in size of the polymer molecule. The work is summarised in a piece on nanotechweb.org.
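For a cartoon of the principle (emphatically not our actual system: the experiments used a real pH-oscillating reaction, whereas here a sine wave simply stands in for the oscillator, and every number is invented), couple an oscillating pH to the degree of ionisation of a weak polyacid and watch the coil cyclically swell and collapse:

```python
import math

def coil_size(pH, pKa=5.0, collapsed=1.0, swollen=2.0):
    """Toy response of a weak polyacid coil: its size interpolates between
    collapsed and swollen states with the degree of ionisation, which
    follows the Henderson-Hasselbalch relation."""
    ionised_fraction = 1.0 / (1.0 + 10 ** (pKa - pH))
    return collapsed + (swollen - collapsed) * ionised_fraction

for t in range(0, 61, 5):                                # time in minutes
    pH = 5.0 + 1.5 * math.sin(2 * math.pi * t / 30.0)    # stand-in oscillator
    print(f"t={t:2d} min  pH={pH:4.2f}  relative coil size={coil_size(pH):4.2f}")
```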

Comic book synthetic biology

This week’s edition of Nature magazine has a feature on synthetic biology, an approach to making sophisticated nanomachines by taking bacteria and reprogramming them to achieve the functions you want. I wrote about this a few months ago, here. The feature includes a couple of in-depth reviews of the field and a discussion of potential ethical issues. The editor’s summary is here, with links to the full articles for those with subscriptions to Nature.

There’s also a news item on iGEM 2005 – an international, intercollegiate competition in which student teams from 17 universities, including Cambridge, MIT and ETH Zurich, competed to build a functioning device by re-engineering bacteria. This sounds like a very effective way of energising a new field.

Always ready to innovate with new kinds of scientific communication, Nature also commissioned a comic strip on the subject, Adventures in Synthetic Biology. This is well worth a look.

Self-assembly vs self-organisation – can you tell the difference?

Self-assembly and self-organisation are important concepts in both nanotechnology and biology, but the distinction between them isn’t readily apparent, and this can cause considerable confusion, particularly when the other self-word – self-replication – is thrown into the mix.

People use different definitions, but it seems to me that it makes a lot of sense to reserve the term self-assembly for equilibrium situations. As described in my book Soft Machines, the combination of programmed patterns of stickiness in nanoscale objects and constant Brownian motion means that on the nanoscale complex three-dimensional structures can assemble themselves from their component parts with no external intervention, driven purely by the tendency of systems to minimise their free energy in accordance with the second law of thermodynamics.

We can then reserve self-organisation as a term for those types of pattern-forming system which are driven by a constant input of energy. A simple prototype from physics is the array of well-defined convection cells you get if you heat a fluid from below, while in chemistry there are the beautiful patterns you get from systems that combine some rather special non-linear chemical kinetics with slow diffusion – the Belousov-Zhabotinsky reaction being the most famous example. A great place to read about such systems is Philip Ball’s book The Self-made Tapestry: Pattern Formation in Nature (though Ball doesn’t in fact make the distinction I’m trying to set up here).
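Reaction-diffusion patterns of this kind are easy to play with numerically. Here is a minimal sketch using the Gray-Scott model – a standard toy system of the same family as the Belousov-Zhabotinsky reaction, not that reaction itself – in which two concentration fields, fed and drained by an external supply, spontaneously organise into spots:

```python
import numpy as np

def laplacian(Z):
    """Discrete Laplacian on a grid with periodic boundaries."""
    return (np.roll(Z, 1, axis=0) + np.roll(Z, -1, axis=0) +
            np.roll(Z, 1, axis=1) + np.roll(Z, -1, axis=1) - 4 * Z)

# Gray-Scott parameters; this standard choice produces spot patterns.
Du, Dv, F, k = 0.16, 0.08, 0.035, 0.065

n = 128
U = np.ones((n, n))
V = np.zeros((n, n))
U[n//2-5:n//2+5, n//2-5:n//2+5] = 0.50   # perturb a small central patch
V[n//2-5:n//2+5, n//2-5:n//2+5] = 0.25
V += 0.01 * np.random.rand(n, n)

for _ in range(10000):
    uvv = U * V * V
    # the feed term F*(1-U) is the constant input of matter and energy
    # that keeps the system away from equilibrium -- remove it and the
    # patterns die, just as organisms do without food
    U += Du * laplacian(U) - uvv + F * (1 - U)
    V += Dv * laplacian(V) + uvv - (F + k) * V

# crude ASCII rendering of the self-organised spot pattern
for row in V[::8]:
    print(''.join('#' if v > 0.2 else '.' for v in row[::4]))
```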

Self-assembly is pretty well understood, and it’s clear that at small length scales it is important in biology. Protein folding, for example, is a very sophisticated self-assembly process, and viable viruses can be made in the test-tube simply by mixing up the component proteins and nucleic acid. Self-organisation is much less well understood; it isn’t entirely clear that there are universal principles that underlie the many different examples observed, and the relevance of the idea in biology is still under debate. There’s a very nice concrete example of the difference between the two ideas reported in a recent issue of Physical Review Letters (abstract here, full PDF preprint here). These authors consider a structural feature of living cells – the pattern of embedded proteins in the cell membrane – and ask, with the help of mathematical models, whether this pattern is likely to arise from equilibrium self-assembly or non-equilibrium self-organisation. The conclusion is that both processes can lead to patterns such as the ones observed, but that self-assembly leads to smaller-scale patterns which take longer to develop.

One thing one can say with certainty – living organisms can’t arise wholly from self-assembly, because we know that in the absence of a continuous supply of energy they die. In summary, viruses self-assemble, but elephants (perhaps) self-organise.

Nanotechnology gets complex

The theme of my book Soft Machines is that the nanomachines of biology operate under quite different design principles from those we are familiar with at the macroscale. These design principles exploit the different physics of the nanoworld, rather than trying to engineer around it. The combination of Brownian motion – the relentless shaking and jostling that’s ubiquitous in the nanoworld, at least at normal temperatures – and strong surface forces is exploited in the principle of self-assembly. Brownian motion and the floppiness of small-scale structures are exploited in the principle of molecular shape change, which provides the way our muscles work. We are well on our way to exploiting both these principles in synthetic nanotechnology. But there’s another design principle that’s extensively used in Nature which nanotechnologists have not yet exploited at all. This is the idea of chemical computing – processing information by using individual molecules as logic gates, and transmitting messages through space using the random motion of messenger molecules, driven to diffuse by Brownian motion. These are the mechanisms that allow bacteria to swim towards food and away from toxins, but they also underlie the intricate way in which cells in higher organisms like mammals interact and differentiate.
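Bacterial chemotaxis shows how simple the molecular “program” can be. E. coli runs in a straight line and occasionally tumbles into a random new direction; the chemical logic just lowers the tumbling rate while conditions are improving, which is enough to turn an unbiased random walk into a drift up the food gradient. A one-dimensional caricature (all numbers invented for illustration):

```python
import random

def chemotax(steps=2000, source=100.0):
    """Toy run-and-tumble walker in a 1-D attractant gradient."""
    x, direction, last_c = 0.0, 1, float('-inf')
    for _ in range(steps):
        c = -abs(x - source)          # toy concentration, peaked at the source
        p_tumble = 0.1 if c > last_c else 0.5  # tumble less when life improves
        if random.random() < p_tumble:
            direction = random.choice((-1, 1))  # tumble: new random direction
        x += direction
        last_c = c
    return x

# average final position of 20 walkers: drifts towards the source at x = 100
print(sum(chemotax() for _ in range(20)) / 20)
```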

One argument that holders of a mechanical conception of radical nanotechnology sometimes use against trying to copy these control mechanisms is that they are simply too complicated to deal with. But there’s an important distinction to make here. These control systems and signalling networks aren’t just complicated – they’re complex. Recent theory of the statistical mechanics of this sort of multiply connected, evolving network is beginning to yield fascinating insights (see, for example, Albert-László Barabási’s website). It seems likely that these biological signalling and control networks have some generic features in common with other complex networks, such as the internet, and even, perhaps, free market economies. Rather than being the hopelessly complicated result of billions of years of aimless evolutionary accretion, we should perhaps think of these networks as being optimally designed for robustness in the noisy and unpredictable nanoscale environment.

It seems to me that if we are going to have nanoscale systems of any kind of complexity, we are going to have to embrace these principles. Maintaining rigid, central control of large-scale systems always seems superficially attractive, but such control systems are often brittle, and fail to adapt to unpredictability, change and noise. The ubiquity of noise in the nanoscale world offers a strong argument for using complex, evolved control systems. But we still lack some essential tools for doing this. In particular, biological signalling relies on allostery. This principle underlies the operation of the basic logic gates in chemical computing; the idea is that when a messenger molecule binds to a protein, it subtly changes the shape of the protein and affects its ability to carry out a chemical operation. Currently, synthetic analogues of this crucial function are very thin on the ground (see this abstract for something that seems to be going the right way). It would be good to see more effort put into this difficult, but exciting, direction.
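In computational terms, an allosteric enzyme can behave like an AND gate: product only appears when the substrate is present and the effector has bound and switched the protein into its active shape. A toy model (rate constants invented; the Michaelis-Menten form is the standard textbook idealisation, not a description of any particular enzyme, and real allosteric effectors can equally well inhibit):

```python
def reaction_rate(substrate, effector_bound, vmax=1.0, km=0.5):
    """Toy allosteric logic gate: the enzyme only turns over its substrate
    when the effector has bound and shifted it into the active shape."""
    if not effector_bound:
        return 0.0  # inactive conformation: the gate is closed
    return vmax * substrate / (km + substrate)  # Michaelis-Menten kinetics

print(reaction_rate(1.0, effector_bound=False))  # 0.0   -> no output
print(reaction_rate(1.0, effector_bound=True))   # ~0.67 -> gate open
```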