Biomimetic nanotechnology with synthetic macromolecules

This is a draft of a piece I’ve been invited to write for the special edition of Journal of Polymer Science: Polymer Physics Edition that is associated with the March meeting of the American Physical Society. The editors invited views from a few people about where they saw the future of polymer science. Here’s my contribution, with themes that will be familiar to readers of Soft Machines. Since the intended audience consists of active researchers in polymer science, the piece has more unexplained technical language than I usually use here.

In the first half of the twentieth century, polymer science and biochemistry developed together. With synthetic polymer chemistry in its infancy, most laboratory examples of macromolecules were of natural origin, and the conceptual foundations of polymer science, such as Staudinger’s macromolecular hypothesis, were as important for biology as for chemistry. Techniques for the physical characterisation of macromolecules, like Svedberg’s ultracentrifuge, were applied as much to biological macromolecules as synthetic ones. But with the tremendous development of the field of structural biology that x-ray protein crystallography made possible, the preoccupations of polymer science increasingly diverged from those of what was now being termed molecular biology. The issues that are so central to protein structure – secondary and tertiary structural motifs, ligand-receptor interactions and allostery – had no real analogue in synthetic polymer science. Meanwhile, the issues that exercised polymer scientists – crystallisation, melt dynamics and rheology – had little relevance to biology. Of course there were exceptions, but conceptually and culturally the two disciplines had become worlds apart.

I believe that the next fifty years need to see much more interaction between polymer science and cell biology. In polymer science, we’ve seen the focus shift away from the properties of bulk materials to the search for new functionality by design at the molecular level. In cell biology, the new methods of single molecule biophysics permit us to study the behaviour of biological macromolecules in their natural habitat, rather than in a protein crystal, allowing us to see how these molecular machines actually work. Meanwhile synthetic polymer chemistry has started to give us access to control over molecular architecture. This is not yet at the precision that we obtain from biology, but we are already seeing the exploitation of non-trivial macromolecular architectures to achieve control over structure and function. The next stage is surely to take the insights from single molecule biophysics about how biological molecular machines work and design synthetic molecules to perform similar tasks.

We could call this field biomimetic nanotechnology. Biomimetics, of course, is a well-known field in materials science; what we are talking about here is biomimetics at the level of single molecules, at the level of cell biology. Can we make synthetic analogues of molecular motors and other energy conversion devices? Can we learn from membrane biophysics to make selective pumps and valves, which would allow the easy and energy-efficient separation and sorting of molecules? Will it be possible to create any synthetic analogue of the systems of molecular sensing, communication and computation that systems biology is just starting to unravel? It’s surely only by achieving this degree of nanoscale control that the promise of molecular medicine could be fulfilled, to give just one example of a potential application.

What are the areas of polymer science that need to be advanced to enable these developments? Obviously, in polymer chemistry, synthesis with precise architectural control is key, and achieving this goal in water-soluble systems will be important if this technology is to find wide use, particularly in medical applications. Polymer physicists are still much less comfortable dealing with systems involving water and charges than with polymer solutions in simple non-polar solvents, and we’ll need more work to ensure that we have a good understanding of the physical environment in which our devices will be operating.

The importance of self-assembly as a central theme will continue to grow. This way of creating intricate nanostructures by programmed interactions in macromolecules is familiar to polymer science; the richness of the morphologies that can be obtained in block copolymer systems is well known. But in comparison with the sophistication of biological self-assembly, synthetic self-assembly still operates at a very crude level. One new element that we should import from biology is the exploitation of secondary structure and its coupling to nanoscale morphology. Another important idea is to exploit the single chain folding of a sequenced copolymer in an analogue of protein folding. This, of course, would require considerable precision in synthesis, but theoretical developments are also necessary. We have learnt from the theory of protein folding that only a small fraction of possible sequences are foldable, so we will need to learn how to design foldable sequences.
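
To get a rough feel for how sparse foldable sequences are, it is instructive to enumerate a toy model exhaustively. The sketch below is my own illustrative example, not taken from any particular paper: it folds every two-letter "HP" sequence of a very short chain on a square lattice, scoring minus one per H-H contact, and counts how many sequences have a unique lowest-energy conformation, a minimal stand-in for "foldability".

```python
# A toy, illustrative calculation (not from the literature): exhaustively
# fold every two-letter "HP" sequence of a short chain on a square lattice
# and count how many sequences possess a unique lowest-energy conformation.
from itertools import product

N = 8  # chain length; kept very small so exhaustive enumeration is cheap

def walks(path):
    """Self-avoiding walks of N monomers, with the first step fixed along +x
    and the first turn fixed to +y, to factor out lattice symmetries."""
    if len(path) == N:
        yield tuple(path)
        return
    turned = any(y != 0 for _, y in path)
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nxt = (path[-1][0] + dx, path[-1][1] + dy)
        if nxt in path or (not turned and dy == -1):
            continue
        yield from walks(path + [nxt])

# For each conformation, list the non-bonded nearest-neighbour pairs
conformations = list(walks([(0, 0), (1, 0)]))
contact_maps = [[(i, j) for i in range(N) for j in range(i + 2, N)
                 if abs(c[i][0] - c[j][0]) + abs(c[i][1] - c[j][1]) == 1]
                for c in conformations]

foldable = 0
for seq in product("HP", repeat=N):              # all 2^N sequences
    energies = [-sum(seq[i] == "H" and seq[j] == "H" for i, j in m)
                for m in contact_maps]           # -1 per H-H contact
    ground = min(energies)
    if ground < 0 and energies.count(ground) == 1:
        foldable += 1                            # unique ground state

print(f"{foldable} of {2 ** N} sequences fold to a unique ground state")
```

Even in a caricature like this, only a minority of sequences turn out to have a non-degenerate ground state, which is the toy version of the design problem described above.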

Another important principle will be exploiting molecular shape change. In biology, this principle underlies the operation of most of its sophisticated nanoscale machines, including molecular motors, ion channel proteins and signalling molecules. In polymer physics the phenomenon of the coil-globule transition in response to changing solvent conditions is well known and has its macroscopic counterpart in thermoresponsive gels. To be widely useful, we need to engineer responsive systems with much more specific triggers and with a more highly amplified response. One promising way of doing this uses the coupling between transitions in secondary structure and global conformation; however, we’re still a long way from the remarkable lever arms of biological motor proteins, in which rather subtle changes at a binding site produce a large overall mechanical response.

Some of the most powerful ideas from biology still remain essentially unexploited. An obvious one is, of course, evolution. At the molecular level, evolution offers a spectacularly powerful way of searching multidimensional parameter spaces to find efficient design solutions. It’s arguable that, given the combinatorial complexity that arises with even modest degrees of architectural control and our unfamiliarity with the design rules that are appropriate for the nanoscale environment, significant progress will positively require some kind of evolutionary approach, whether that is executed in computer simulation or with real molecules.
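
As a concrete, and purely schematic, illustration of what an evolutionary search might look like in simulation, here is a minimal mutate-and-select loop over two-letter sequences. Every parameter is an arbitrary placeholder, and the toy fitness function stands in for what would really be a folding simulation or an actual measurement:

```python
# A minimal sketch of an evolutionary search over copolymer sequences.
# All parameters are hypothetical placeholders chosen for illustration.
import random

ALPHABET = "HP"
LENGTH, POP, GENERATIONS, MUTATION = 20, 50, 200, 0.05

def fitness(seq):
    # Toy stand-in for a folding score: rewards H followed by P.
    # A real application would score a simulated fold or a measurement.
    return sum(1 for i in range(len(seq) - 1)
               if seq[i] == "H" and seq[i + 1] == "P")

def mutate(seq):
    """Flip each monomer to a random letter with probability MUTATION."""
    return "".join(random.choice(ALPHABET) if random.random() < MUTATION else c
                   for c in seq)

# Random initial population, then repeated truncation selection + mutation
pop = ["".join(random.choice(ALPHABET) for _ in range(LENGTH))
       for _ in range(POP)]
for gen in range(GENERATIONS):
    pop.sort(key=fitness, reverse=True)
    survivors = pop[:POP // 2]
    pop = survivors + [mutate(random.choice(survivors))
                       for _ in range(POP - len(survivors))]

best = max(pop, key=fitness)
print(best, fitness(best))
```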

Perhaps the most fundamental difference between the operating environments of biology and polymer science is the question of thermodynamic equilibrium. Polymer scientists are used to systems at, or perturbed slightly away from, equilibrium, while biological systems are driven far from equilibrium by a continuous energy input. How can we incorporate this most basic feature of life into our synthetic devices? What will be our synthetic analogue of life’s universal energy currency, adenosine triphosphate?

Ultimately, what we are talking about here is the reverse engineering of biology. It’s obvious that the gulf between the crudities of synthetic polymer science and the intricacies of cell biology is currently immense (certainly quite big enough to mean that the undoubted ethical issues that would arise if we could make any kind of reasonable facsimile of life are still very distant). Nonetheless, even rudimentary devices inspired by cell biology would be of huge practical benefit. Potentially even more significant a benefit than this, though, would be the deep understanding of the workings of biology that would arise from trying to copy it.

Making molecules work

The operation of most living organisms, from bacteria like E. coli to multi-cellular organisms like ourselves, depends on molecular motors. These are protein-based machines which convert chemical energy to mechanical energy; the work our muscles do depends on many billions of these nanoscale machines all operating together, while individual motors propel bacteria or move materials around inside our cells. Molecular motors work in a very different way to the motors we are familiar with on the macroscopic scale, as has been revealed by some stunning experiments combining structural biology with single molecule biophysics. A good place to start getting a feel for how they work is with these movies of biological motors from Ronald Vale at UCSF.

The motors we use at the macroscopic scale to convert chemical energy to mechanical energy are heat engines, like petrol engines and steam turbines. The fuel is first burnt to convert chemical energy to heat energy, and this heat energy is then converted to useful work. Heat engines rely on the fact that you can maintain part of the engine at a higher temperature than the general environment. For example, in a petrol engine you burn the fuel in a cylinder, and then you extract work by allowing the hot gases to expand against a piston. If you made a nanoscale petrol engine, it wouldn’t work, because the heat would diffuse out of the cylinder walls, cooling the gas down before it had a chance to expand. This is because the time taken for a hot body to cool down to ambient temperature depends on the square of its size. At the nanoscale, you can’t maintain significant temperature gradients for any useful length of time, so nanoscale motors have to work at constant temperature. The way biological molecular motors do this is by exploiting molecular shape change – the power stroke is provided by a molecule changing shape in response to the binding and unbinding of the fuel molecules and their products.
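
A one-line estimate makes the scaling argument concrete. Taking the diffusive cooling time as roughly the square of the size divided by the thermal diffusivity, with order-of-magnitude numbers of my own choosing:

```python
# Back-of-envelope sketch of why nanoscale heat engines fail: diffusive
# cooling time scales as (size)^2 / thermal diffusivity. The numbers are
# rough, illustrative values, not measurements.
alpha = 2e-5   # thermal diffusivity of a gas, m^2/s (order of magnitude)

for label, L in [("10 cm cylinder", 0.1), ("100 nm 'cylinder'", 100e-9)]:
    tau = L**2 / alpha   # characteristic time for heat to diffuse away
    print(f"{label}: ~{tau:.1e} s to equilibrate with surroundings")

# ~500 s at 10 cm, but ~5e-10 s at 100 nm: far shorter than any power
# stroke, so a nanoscale engine cannot sustain a temperature difference.
```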

In our research at Sheffield we’ve been trying to learn from nature to make crude synthetic molecular motors that operate in the same way, by using molecular shape changes. The molecule we use is a polymer with weakly acidic or basic groups along the backbone. For a polyacid, for example, in acidic conditions the molecule is uncharged and hydrophobic; it takes up a collapsed, compact shape. But when the acid is neutralised, the molecule ionises and becomes much more hydrophilic, substantially expanding in size. So, in principle we could use the expansion of a single molecule to do work.
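
The switch can be sketched with the Henderson-Hasselbalch relation. The snippet below is a simplification (it assumes a single effective pKa, whereas in a real polyelectrolyte the effective pKa shifts as the chain charges up, and the pKa value itself is only an assumed, roughly poly(acrylic acid)-like number), but it shows how a modest pH change flips the molecule between its two states:

```python
# Illustrative sketch: degree of ionisation of a weak polyacid versus pH
# from the Henderson-Hasselbalch relation. The pKa is an assumed value.
pKa = 4.7  # assumed effective pKa, roughly that of poly(acrylic acid)

def ionised_fraction(pH):
    """Fraction of acid groups carrying a charge at a given pH."""
    return 1.0 / (1.0 + 10 ** (pKa - pH))

for pH in range(2, 9):
    f = ionised_fraction(pH)
    state = ("compact" if f < 0.1 else
             "expanded" if f > 0.9 else "partly swollen")
    print(f"pH {pH}: {f:5.1%} ionised -> {state}")
```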

How can we clock the motor, so that rather than just expanding a single time, our molecule will repeatedly cycle between the expanded and the compact shape? In biology, this happens because the reaction of the fuel molecule is actually catalysed by the motor molecule. Our chemistry isn’t good enough to do this yet, so we use a much cruder approach.

We use a class of chemical reactions in which the chemical conditions spontaneously oscillate, despite the fact that the reactants are added completely steadily. The most famous of these reactions is the Belousov-Zhabotinsky reaction (see here for an explanation and a video of the experiment). With the help of Steve Scott from the University of Leeds, we’ve developed an oscillating reaction in which the acidity spontaneously oscillates over a range that is sufficient to trigger a shape change in our polyacid molecules.
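
For readers who want to see oscillation emerge from steady feeding, the two-variable "Oregonator" caricature of a BZ-type oscillator can be integrated in a few lines. This is textbook chemistry rather than our actual pH oscillator, and the parameter values are the standard illustrative ones:

```python
# Sketch of how sustained chemical oscillations arise: the two-variable
# Oregonator model of a BZ-type reaction, with standard textbook
# parameters (not those of our actual pH-oscillating system).
import numpy as np
from scipy.integrate import solve_ivp

eps, q, f = 0.04, 8e-4, 1.0  # timescale ratio, switching and stoichiometry

def oregonator(t, y):
    x, z = y  # x ~ activator concentration, z ~ oxidised catalyst
    dx = (x * (1 - x) - f * z * (x - q) / (x + q)) / eps
    dz = x - z
    return [dx, dz]

sol = solve_ivp(oregonator, (0, 60), [0.1, 0.1], method="LSODA",
                dense_output=True)
t = np.linspace(0, 60, 600)
x, z = sol.sol(t)
print("activator max/min over the run:", x.max(), x.min())

# The large, sustained relaxation oscillations in x are the analogue of
# the pH swings that drive the polymer between compact and expanded states.
```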

You can see a progress report on our efforts in a paper in Faraday Discussions 128; the abstract is here and you can download the full paper as a PDF here (this is available under the author rights policy of the Royal Society of Chemistry, who own the copyright). We’ve been able to demonstrate the molecular shape change in response to the oscillating chemical reaction at both the macroscopic and the single chain level in a self-assembled structure. What we’ve not yet been able to do is directly measure the force generated by a single molecule; in principle we should be able to do this with an atomic force microscope whose tip is connected to a single molecule, the other end of which is grafted to a firm surface, but this has proved rather difficult to do in practice. This is high on our list of priorities for the future, together with some ideas about how we can use this motor to do interesting things, like propel a nanoscale object or pump chemicals across a membrane.

This work is a joint effort of my group in the physics department and Tony Ryan’s group in chemistry. In physics, Mark Geoghegan, Andy Parnell, Jon Howse, Simon Martin and Lorena Ruiz-Perez have all been involved in various aspects of the project, while the chemistry has been driven by Colin Crook and Paul Topham.

Nanotechnology – with nature or against it?

I’ve been covering two big debates about nanotechnology here. On the one hand, there’s the question of the relative merits of Drexler’s essentially mechanical vision of nanotechnology and the more biologically inspired soft and biomimetic approaches. On the other, we see the efforts of campaigning groups like ETC to paint nanotechnology as the next step after genetic modification in humanity’s efforts to degrade and control the natural world. Although these debates at first sight look very different, they both revolve around issues of control and our proper relationship with the natural world.

These issues are identified and situated in a deep historical context in a very perceptive article by Bernadette Bensaude-Vincent, of the Philosophy Department in the Université Paris X. The article, Two Cultures of Nanotechnology?, is in HYLE – the International Journal for Philosophy of Chemistry, Vol. 10, No. 2 (2004).

The whole article is well worth reading, but this extract gets to the heart of the matter:

“There is nothing new in the current artificialization of nature. Already in antiquity, there were two different and occasionally conflicting views of technology. On the one hand, the arts or technai were considered as working against nature, as contrary to nature. This meaning of the term para-physin provided the ground for repeated condemnations of mechanics and alchemy. On the other hand, the arts – especially agriculture, cooking, and medicine – were considered as assisting or even improving on nature by employing the dynameis or powers of nature. In the former perspective, the artisan, like Plato’s demiurgos, builds up a world by imposing his own rules and rationality on a passive matter. Technology is a matter of control. In the latter perspective the artisan is more like the ship-pilot at sea. He conducts or guides forces and processes supplied by nature, thus revealing the powers inherent in matter. Undoubtedly the mechanicist [i.e. Drexlerian] model of nanotechnology belongs to the demiurgic tradition. It is a technology fascinated by the control and the overtaking of nature.”

Bensaude-Vincent argues that soft and biomimetic approaches to nanotechnology fall more naturally into that second culture, conducting or guiding forces and processes supplied by nature, thus revealing the powers inherent in matter.

Nanobiotechnology and the communications industry

One of the UK’s two flagship nanotechnology centres, the Interdisciplinary Research Collaboration in Bionanotechnology at Oxford University, was having its mid-term review yesterday; I was there in my role as a member of the external steering committee. One thing I learnt that had previously passed me by was that one of the largest industrial collaborations they have is not, as one might think, with a pharmaceutical or biomedical company, but with the Japanese telecoms company NTT.

The linkup was announced last October; the $2 million project concentrates on the study of membrane protein function. Why would they be interested in this? Membrane proteins provide the mechanisms by which living cells sense their surroundings and communicate with the outside world. As the leader of the NTT side of the project, Dr Keiichi Torimitsu, is quoted as saying, “We are especially interested in this field because of the possibility of future applications in the area of human-electronic interfaces.”

Intelligent yoghurt by 2025

Yesterday’s edition of the Observer contained the bizarre claim that we’ll soon be able to enhance the intelligence of bacteria by using molecular electronics. This came in an interview with Ian Pearson, who is always described as the resident futurologist of the British telecoms company BT. The claim is so odd that I wondered whether it was a misunderstanding on the part of the journalist, but it seems clear enough in this direct quote from Pearson:

“Whether we should be allowed to modify bacteria to assemble electronic circuitry and make themselves smart is already being researched.

‘We can already use DNA, for example, to make electronic circuits so it’s possible to think of a smart yoghurt some time after 2020 or 2025, where the yoghurt has got a whole stack of electronics in every single bacterium. You could have a conversation with your strawberry yogurt before you eat it.’”

This is the kind of thing that puts satirists out of business.

The Rat-on-a-chip

I’ve written a number of times about the way in which the debate about the impacts of nanotechnology has been hijacked by the single issue of nanoparticle toxicity, to the detriment of more serious and interesting longer term issues, both positive and negative. The flippant title of this post on the subject – Bad News for Lab Rats – conceals the fact that, while I don’t oppose animal experiments in principle, I’m actually a little uncomfortable about the idea that large numbers of animals should be sacrificed in badly thought out and possibly unnecessary toxicology experiments. So I was very encouraged to read this news feature in Nature (free summary, subscription required for full article) about progress in using microfluidic devices containing cell cultures for toxicological and drug testing. The article features work from Michael Shuler’s group at Cornell, and a company founded by Shuler’s colleague Gregory Baxter, Hurel Corp.

Cancer and nanotechnology

There’s a good review in Nature Reviews Cancer (with free access) about the ways in which nanotechnology could help the fight against cancer – Cancer Nanotechnology: Opportunities and Challenges. The article, by Ohio State University’s Mauro Ferrari, concentrates on two themes – how nanotechnologies can help diagnose and monitor cancer, and how they could lead to more effective targeting and delivery of anti-cancer agents to tumours.

The extent to which we urgently need better ways of wrapping up therapeutic molecules and getting them safely to their targets is highlighted by a striking figure that the article quotes – if you inject monoclonal antibodies and monitor how many of these molecules get to a target within an organ, the fraction is less than 0.01%. The rest are wasted, which is bad news if these molecules are expensive and difficult to make, and even worse news if, like many anti-cancer drugs, they are highly toxic. How can we make sure that every one of these drug molecules gets to where it is needed? One answer is to stuff them into a nanovector, a nanoscale particle that protects the enclosed drug molecules and delivers them to where they are needed. The simplest example of this approach uses a liposome – a bag made from a lipid bilayer. Liposome-encapsulated anti-cancer drugs are now used clinically in the treatment of Kaposi’s sarcoma and breast and ovarian cancers. But lots of work remains to make nanovectors that are more robust, more resistant to non-specific protein adsorption, and above all more specifically targeted to the cells they need to reach. Such specific targeting could be achieved by coating the nanovectors with antibodies with specific molecular recognition properties for groups on the surface of the cancer cells. The article cites one cautionary tale illustrating that this is all more complicated than it looks – a recent simulation suggests that targeting a drug precisely to a tumour can actually make things worse, by causing the tumour to break up. It may be necessary not just to target the drug carriers to a tumour, but to make sure that the spatial distribution of the drug through the tumour is right.

The future will probably see complex nanovectors engineered to perform multiple functions, protecting the drugs, getting them through all the barriers and pitfalls that lie between the point at which the drug is administered and the part of the body where it is needed, and releasing them at their target. The recently FDA-approved breast cancer drug Abraxane is a step in the right direction; one can think of it as a nanovector that combines two functions. The core of the nanovector consists of a nanoparticulate form of the drug itself; dispersing it so finely dispenses with the need for toxic solvents. And bound to the drug nanoparticle are protein molecules which help the nanoparticles get across the cells that line blood vessels. It’s clear that as more and more functions are designed into nanovectors, there’s a huge amount of scope for increases in drug effectiveness, increases that could amount to orders of magnitude.

New book on Nanoscale Science and Technology

Nanoscale Science and Technology is a new, graduate level interdisciplinary textbook which has just been published by Wiley. It’s based on the Masters Course in Nanoscale Science and Technology that we run jointly between the Universities of Leeds and Sheffield.

The book covers most aspects of modern nanoscale science and technology. It ranges from “hard” nanotechnologies, like the semiconductor nanotechnologies that underlie applications like quantum dot lasers, and applications of nanomagnetism like giant magnetoresistance read-heads, via semiconducting polymers and molecular electronics, through to “soft” nanotechnologies such as self-assembling systems and bio-nanotechnology. I co-wrote a couple of chapters, but the heaviest work was done by my colleagues Mark Geoghegan, at Sheffield, and Ian Hamley and Rob Kelsall at Leeds, who, as editors, have done a great job of knitting together the contributions of a number of authors with different backgrounds to make a coherent whole.

Directly reading DNA

As the success of the Human Genome Project has made clear, DNA stores information at very high density – 15 atoms per bit of stored information. But, while biology has evolved some very sophisticated and compact ways of reading that information, we’re stuck with some clunky and expensive methods of sequencing DNA. Of course, driven by the Human Genome Project, the techniques have improved hugely, but it still costs about ten million dollars to sequence a mammal-sized genome (according to this recent press release from the National Institutes of Health). This needs to get much cheaper, not only to unlock the potential of personalised genomic medicine, but also if we are going to use DNA or analogous molecules as stores of information for more general purposes. One thousand dollars a genome is a sum that is often mentioned as a target.
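
The raw numbers behind these claims are easy to check with back-of-envelope arithmetic, using standard approximate constants:

```python
# Back-of-envelope arithmetic on DNA as a storage medium, using standard
# approximate constants.
AVOGADRO = 6.022e23     # base pairs per mole
MW_BASE_PAIR = 650.0    # g/mol, average for double-stranded DNA
BITS_PER_BP = 2         # four possible bases = 2 bits per base pair

bits_per_gram = BITS_PER_BP * AVOGADRO / MW_BASE_PAIR
print(f"~{bits_per_gram:.1e} bits per gram of DNA")          # ~1.9e21
print(f"~{bits_per_gram / 8 / 1e18:.0f} exabytes per gram")  # ~230

# And the sequencing-cost figures from the text, per base:
genome_bp = 3e9  # a mammal-sized genome
for label, dollars in [("ten million dollar genome", 1e7),
                       ("thousand dollar target", 1e3)]:
    print(f"{label}: {dollars / genome_bp * 100:.2g} cents per base")
```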

Clearly, it would be great if we could simply manipulate a single DNA molecule and directly read out its sequence. One of the most promising approaches to doing this envisages threading the molecule through a nanoscale hole and measuring some property which changes according to which base is blocking the pore. A recent experiment shows that it is possible, in principle, to do this. The experiment is reported by Ashkenasy, Sanchez-Quesada and M. Reza Ghadiri, from Scripps, and Bayley, from Oxford, in a recent edition of Angewandte Chemie (Angew. Chem. Int. Ed. 44, p1401 (2005)) – the full paper can be downloaded as a PDF here. In this case the pore is formed by a natural pore-forming protein in a lipid membrane, and what is measured is the ion current across the membrane.
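
To make the read-out idea concrete, here is a deliberately simplified sketch of the signal-processing step. All the current values and noise levels are invented for illustration: each base is assumed to block the pore current by a characteristic amount, and a noisy trace is decoded by matching each dwell level to the nearest calibrated value.

```python
# Sketch of nanopore read-out (all numbers invented for illustration):
# each base blocks the ion current by a different amount, so a noisy
# trace of current levels can be decoded back into a base sequence.
import random

# Hypothetical calibrated blockade currents, in picoamps, one per base
LEVELS = {"A": 85.0, "C": 70.0, "G": 60.0, "T": 92.0}

def simulate_trace(seq, noise_pA=3.0):
    """Fake a measured current level for each base as it sits in the pore."""
    return [random.gauss(LEVELS[b], noise_pA) for b in seq]

def decode(trace):
    """Assign each measured level to the base with the nearest current."""
    return "".join(min(LEVELS, key=lambda b: abs(LEVELS[b] - i))
                   for i in trace)

seq = "GATTACAGGTC"
called = decode(simulate_trace(seq))
errors = sum(a != b for a, b in zip(seq, called))
print(seq, called, f"{errors} miscalls", sep="\n")
```

The real experimental difficulties, controlling the translocation speed so that each base dwells in the pore long enough to be measured, and distinguishing bases whose blockade currents are similar, are of course precisely what make the demonstration described above non-trivial.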

This approach isn’t new; it originated with David Deamer at Santa Cruz and Dan Branton at Harvard (Branton’s website in particular is an excellent resource). A number of groups around the world are trying to do something similar; several variations are possible, such as using an artificially engineered nanopore instead of a membrane protein, or using a probe other than the ion current. It feels to me like this ought to work, and this latest demonstration is an important step along the path.

Artificial life and biomimetic nanotechnology

Last week’s New Scientist contained an article on the prospects for creating a crude version of artificial life (teaser here), based mainly on the proposals of Steen Rasmussen’s Protocell project at Los Alamos. Creating a self-replicating system with a metabolism, capable of interacting with its environment and evolving, would be a big step towards a truly radical nanotechnology, as well as giving us a lot of insight into how our form of life might have begun.

More details of Rasmussen’s scheme are given here, and some detailed background information can be found in this review in Science (subscription required), which discusses a number of approaches being taken around the world (see also this site, with links to research around the world, also run by Rasmussen). Minimal life probably needs some way of enclosing the organism from the environment, and Rasmussen proposes the most obvious route of using self-assembled lipid micelles as his “protocells”. The twist is that the lipids are generated by light activation of an oil-soluble precursor, which effectively constitutes part of the organism’s food supply. Genetic information is carried in a peptide nucleic acid (PNA), which reproduces itself in the presence of short precursor PNA molecules, which also need to be supplied externally. The claim is that “this is the first explicit proposal that integrates genetics, metabolism, and containment in one chemical system”.

It’s important to realise that this, currently, is just that – a proposal. The project is just getting going, as is a closely related European Union-funded project, PACE (for programmable artificial cell evolution). But it’s a sign that momentum is gathering behind the notion that the best way to implement radical nanotechnology is to try and emulate the design philosophies that cell biology uses.

If this excites you enough that you want to invest your own money in it, the associated company Protolife is looking for first round investment funding. Meanwhile, a cheaper way to keep up with developments might be to follow this new blog on complexity, nanotechnology and bio-computing from Exeter University based computer scientist Martyn Amos.