The UK Government’s research programme into the potential risks of nanoparticles

As trailed in my last post, the UK government has published the first report (Characterising the risks posed by engineered nanoparticles: a first UK Government research report – a 55 page PDF) from its programme of research into the potential health and environmental risks of engineered, free nanoparticles. Or rather, it’s published a document that reveals that there isn’t really a programme of research at all, in the sense of an earmarked block of funds and a set of research goals and priorities. Instead, the report describes an ad-hoc assortment of bits and pieces of research funded by all kinds of different routes. The Royal Society’s response is sceptical, stressing that the report “reveals that no money has been specifically set aside for important research into, for example, how nanoparticles – ultra small pieces of material – might penetrate the skin.”

It’s clear, then, that if there is a nanotoxicity bandwagon developing (as identified by TNTlog), the UK government is being pretty half-hearted about jumping on it. I don’t think this is an entirely bad thing. Rather than joining some auction to declare what arbitrary percentage of their nanotechnology spend goes on toxicology, it makes sense to take a cold look at what research needs to be done (taking a realistic, hype-free view of how much of this stuff there really is in the work-place and the market), and what research is already going on. No-one gains by duplicating research, and identifying the gaps and the real needs is a good place to start.

What the government should understand, though, is that when it does identify knowledge gaps, it has to be proactive in filling them. Money has to be ear-marked, and if necessary capacity has to be built. One can’t rely on the scientific market, as it were, by expecting research proposals in the required areas to come forward spontaneously. Toxicology, occupational health and environmental science are crucially important, but they are often not exciting science as that would be defined by a Research Council peer review panel.

David Tabor 1913-2005

I was sorry to learn that David Tabor, Emeritus Professor of Physics at Cambridge University, died on Saturday at the age of 92. Tabor was a brilliant and insightful experimental physicist whose name is perhaps not very widely known outside the scientific community. This is a pity, because he has a substantial claim to be considered one of the founding fathers of nanoscience.

Tabor began his research career in Australia, working for the Council for Scientific and Industrial Research on lubricants and bearings. After moving to Cambridge University in 1946, he essentially created our modern understanding of the nanoscale origins of friction. His classic monograph on friction, written with F.P. Bowden in 1950, The Friction and Lubrication of Solids, is still in print and still very much worth reading. Tabor’s work on friction led him to recognise the importance of understanding the nature and structure of surfaces at the atomic level, and his group in the Cavendish Laboratory made major contributions to the development of surface science. Perhaps the highlight of his work on fundamental surface physics was his development of an apparatus to measure the van der Waals force between atomically smooth mica surfaces. This Surface Forces Apparatus, developed in the late 1960s and early 1970s in collaboration with his students Winterton and Israelachvili, was a technical tour de force, able to control and measure the separation between two surfaces with Angstrom resolution.

Tabor retired in 1981, but he was frequently to be found in the Cavendish Laboratory throughout the next 20 years. I joined the Cavendish as a lecturer in his old group in 1989, and thus I was lucky enough to be able to spend a great deal of time talking to him in that period. He was a great man to discuss physics with; despite his eminence and many honours he was modest and unassuming, yet with a tremendous insight into the way matter behaves at the nanoscale. Indeed, the recent surge of experimental studies of friction made possible by new tools like the atomic force microscope has only served to remind people how accurate Tabor’s intuition was.

Self-assembly vs self-organisation – can you tell the difference?

Self-assembly and self-organisation are important concepts in both nanotechnology and biology, but the distinction between them isn’t readily apparent, and this can cause considerable confusion, particularly when the other self-word – self-replication – is thrown into the mix.

People use different definitions, but it seems to me that it makes lots of sense to reserve the term self-assembly for equilibrium situations. As described in my book Soft Machines, the combination of programmed patterns of stickiness in nanoscale objects and constant Brownian motion means that on the nanoscale complex 3-dimensional structures can assemble themselves from their component parts with no external intervention, purely driven by the tendency of systems to minimise their free energy in accordance with the second law of thermodynamics.
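To make this concrete, here is a minimal sketch (not from the book – the function name and all parameter values are my own illustrative choices) of how programmed stickiness plus Brownian motion gives you equilibrium self-assembly: a toy lattice gas in which each sticky contact lowers the energy by ε, and random thermal hops are accepted with the Metropolis rule, so the system drifts towards low free energy and the particles cluster without any external direction.

```python
import math
import random

def metropolis_assembly(n=12, particles=40, eps=1.0, kT=0.3,
                        steps=30000, seed=1):
    """Toy 2D lattice-gas model of self-assembly.

    Particles on an n x n periodic grid gain energy -eps for each
    occupied nearest-neighbour site ("programmed stickiness").  Random
    hops (Brownian motion) are accepted with the Metropolis rule, so
    the system equilibrates towards configurations of lower free energy.
    Returns (initial energy, final energy).
    """
    rng = random.Random(seed)
    all_sites = [(i, j) for i in range(n) for j in range(n)]
    occupied = set(rng.sample(all_sites, particles))

    def neighbours(pos):
        i, j = pos
        return [((i + 1) % n, j), ((i - 1) % n, j),
                (i, (j + 1) % n), (i, (j - 1) % n)]

    def bonds(pos, occ):
        return sum(1 for p in neighbours(pos) if p in occ)

    def total_energy(occ):
        # each bond is shared by two particles, hence the factor 1/2
        return -eps * 0.5 * sum(bonds(p, occ) for p in occ)

    e_initial = total_energy(occupied)
    particle_list = list(occupied)
    for _ in range(steps):
        idx = rng.randrange(len(particle_list))
        a = particle_list[idx]
        b = rng.choice(neighbours(a))        # attempted Brownian hop
        if b in occupied:
            continue
        rest = occupied - {a}
        dE = -eps * (bonds(b, rest) - bonds(a, rest))
        if dE <= 0 or rng.random() < math.exp(-dE / kT):
            occupied.remove(a)
            occupied.add(b)
            particle_list[idx] = b
    return e_initial, total_energy(occupied)
```

Run it and the randomly scattered particles condense into compact clusters: the energy falls, with no one "in charge" of the assembly – exactly the second-law-driven behaviour described above, albeit in cartoon form.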

We can then reserve self-organisation as a term for those types of pattern forming system which are driven by a constant input of energy. A simple prototype from physics is the pattern of well-defined convection cells you get if you heat a fluid from below, while in chemistry there are the beautiful patterns you get from systems that combine some rather special non-linear chemical kinetics with slow diffusion – the Belousov-Zhabotinsky reaction being the most famous example. A great place to read about such systems is the book by Philip Ball – The self-made tapestry – pattern formation in nature (though Ball doesn’t in fact make the distinction I’m trying to set up here).
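A standard idealisation of this kind of chemistry is the Brusselator, a textbook two-species reaction-diffusion model inspired by BZ-type kinetics. The sketch below (my own illustration, with parameter values chosen simply to sit in the Turing-unstable regime, not taken from any real chemistry) shows the essential point: the uniform state is held away from equilibrium by a constant feed, and when the inhibitor diffuses much faster than the activator, tiny random perturbations grow into a stationary pattern with a preferred wavelength.

```python
import random

def brusselator_1d(n=128, dx=0.5, dt=0.01, steps=4000,
                   A=2.0, B=4.0, Du=1.0, Dv=8.0, seed=0):
    """1D Brusselator reaction-diffusion model (explicit Euler, periodic BCs).

    du/dt = A - (B+1)u + u^2 v + Du * laplacian(u)
    dv/dt = B u - u^2 v      + Dv * laplacian(v)

    The constant feed A keeps the system out of equilibrium; with the
    inhibitor diffusing faster (Dv >> Du) the homogeneous steady state
    (u = A, v = B/A) is Turing-unstable and small random perturbations
    self-organise into a stationary spatial pattern.
    """
    rng = random.Random(seed)
    u = [A + 0.01 * (rng.random() - 0.5) for _ in range(n)]
    v = [B / A + 0.01 * (rng.random() - 0.5) for _ in range(n)]
    for _ in range(steps):
        lap_u = [(u[(i - 1) % n] - 2 * u[i] + u[(i + 1) % n]) / dx ** 2
                 for i in range(n)]
        lap_v = [(v[(i - 1) % n] - 2 * v[i] + v[(i + 1) % n]) / dx ** 2
                 for i in range(n)]
        u = [u[i] + dt * (A - (B + 1) * u[i] + u[i] ** 2 * v[i] + Du * lap_u[i])
             for i in range(n)]
        v = [v[i] + dt * (B * u[i] - u[i] ** 2 * v[i] + Dv * lap_v[i])
             for i in range(n)]
    return u, v
```

The contrast with the self-assembly sketch is the point: switch off the feed and the pattern dies, whereas an equilibrium self-assembled structure persists without any energy input.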

Self-assembly is pretty well understood, and it’s clear that at small length scales it is important in biology. Protein folding, for example, is a very sophisticated self-assembly process, and viable viruses can be made in the test-tube simply by mixing up the component proteins and nucleic acid. Self-organisation is much less well understood; it isn’t entirely clear that there are universal principles that underlie the many different examples observed, and the relevance of the idea in biology is still under debate. There’s a very nice concrete example of the difference between the two ideas reported in a recent issue of Physical Review Letters (abstract here, full PDF preprint here). These authors consider a structural feature of living cells – the pattern of embedded proteins in the cell membrane – and ask, with the help of mathematical models, whether this pattern is likely to arise from equilibrium self-assembly or non-equilibrium self-organisation. The conclusion is that both processes can lead to patterns such as the ones observed, but that self-assembly leads to smaller scale patterns which take longer to develop.

One thing one can say with certainty – living organisms can’t arise wholly from self-assembly, because we know that in the absence of a continuous supply of energy they die. In summary, viruses self-assemble, but elephants (perhaps) self-organise.

Who’s in charge?

I spent Saturday afternoon in the Natural History Museum in London, not looking at the dinosaurs, but taking part in an event organised by the good people at Demos (not forgetting their colleagues at Lancaster) – nanoscientists-meet-nanopublics.

The format was a very gently moderated group discussion between nanoscientists of various ages (I think, alas, I was the oldest) and a group of members of the public who have been involved in a series of focus group discussions about nanotechnology. I’d summarise the demographic of my group as being “North London soccer mums” (with deep apologies to any of you who might read this!) – and I think it’s fair to say that the overall feeling towards nanotechnology was pretty negative. This was based on two things – an unease about untested nanoparticles in cosmetics, and a deeper unhappiness about the whole idea of human enhancement, particularly in a military context. I think we had a fairly productive discussion about both aspects.

One of the interesting things that came out in the discussion was this worry about “who is in charge”. I think it’s a natural human assumption to think that there is someone or some organisation that has the power to initiate change or to prevent it, if it is judged undesirable. But that’s not how science works in a liberal, globalised, market-driven system. I think this realisation that there really isn’t anyone in charge – not just in nanotechnology or any other part of science, but in all sorts of aspects of modern life – is what so many people find so frightening about the world we live in. But is there any alternative?

(Second) Best of Small Tech

Small Times, the US-based trade magazine for micro- and nanotechnology, announced its annual Best of Small Tech awards yesterday. I was delighted to find that I was a runner-up in the Advocate category. Since the winner in this category was Fred Kavli, whose Kavli Foundation has endowed a number of chairs and institutions in nanoscience, and has established a $1 million biennial prize for nanoscience, I can’t feel too hard done by for missing the top slot.

I was pleased to see a few other British names in there, too. Kevin Matthews, CEO of the nanomaterials company Oxonica, won the business leader award, and Peter Dobson, an Oxford University professor who originally spun out Oxonica, won the innovator award. David Fyfe, CEO of the Cambridge University spin-out Cambridge Display Technology, was a runner-up in the business leader category.

Understanding structure formation in thin polymer films

This month’s issue of Nature Materials has a paper from my group which provides new insight into the way structure can emerge in ultrathin polymer films by self-assembly. It’s very easy to make a very uniform polymer film with a thickness somewhere between 5 and 500 nanometers; in a process called “spin-casting” you just flood a smooth, flat substrate with a solution of the polymer in an organic solvent like toluene, and then you spin the substrate round at a couple of thousand RPM. The excess solution flies off, leaving a thin layer from which the solvent quickly evaporates. This process is used all the time in laboratories and in industry; in the semiconductor industry it’s the way in which photoresist layers are laid down. If you use not a single polymer but a mixture of two, then as the solvent is removed the two polymers will phase separate, like oil and water. What’s interesting is that sometimes they will break up into little blobs in the plane of the film, but other times they will split into two distinct layers, each of which might only be a few tens of nanometers thick. The latter situation, sometimes called “self-stratification”, is potentially very useful. It’s an advantage for solar cells made from semiconducting polymers to have two layers like this, and Henning Sirringhaus, from Cambridge (whose company, Plastic Logic, is actively commercialising polymer electronics), has shown that you can make a polymer field effect transistor in which the gate dielectric layer spontaneously self-stratifies during spin-coating.

The paper (which can be downloaded as a PDF here) describes the experiments that Sasha Heriot, who is a postdoc in my group, did to try and disentangle what goes on in this complex situation. Our apparatus (which was built by my former graduate student, James Sharp, now a lecturer at Nottingham University) consists of a spin-coating machine in which a laser shines on the film as it spins; we detect both the light that is directly reflected and the pattern of light that is scattered out of the direct beam. The reflected light tells us how thick the film is at any point during the 5 seconds which the whole process takes, while the scattered light tells us about the lateral structure of the film. What we find is that after the spin-coating process starts, the film first stratifies vertically. As the solvent is removed, the interface separating the two layers becomes wavy, and this wave grows until these two layers break up, leaving the pattern of droplets that’s seen in the final film. We don’t exactly know why the interface between the two self-stratified films becomes unstable, but we suspect it’s connected to how volatile the solvent is. When we do understand this mechanism properly, we should be able to design conditions for the spin-coating to get the final structure we want.
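The principle behind the thickness measurement is just textbook thin-film interference: as the film drains and thins, light reflected from its top and bottom surfaces goes in and out of phase, so the reflected intensity oscillates, with one full fringe for every λ/2n of thickness lost. The sketch below is my own illustration of that optics, not the analysis from the paper – the refractive indices (a generic polymer on a silicon-like substrate) and the laser wavelength are assumed values for the sake of the example.

```python
import cmath

def reflectance(d_nm, n_film=1.5, n_sub=3.9, wavelength_nm=633.0):
    """Normal-incidence reflectance of a transparent film on a substrate.

    Coherently sums the reflections from the air/film and film/substrate
    interfaces (the Airy formula).  As the film thins during spin-coating,
    R oscillates with one full fringe per wavelength/(2*n_film) of
    thickness change, which is how fringe-counting yields the thickness.
    """
    r01 = (1.0 - n_film) / (1.0 + n_film)      # air/film Fresnel coefficient
    r12 = (n_film - n_sub) / (n_film + n_sub)  # film/substrate coefficient
    phase = cmath.exp(2j * cmath.pi * 2 * n_film * d_nm / wavelength_nm)
    r = (r01 + r12 * phase) / (1 + r01 * r12 * phase)
    return abs(r) ** 2

def fringe_maxima(d_max_nm=800.0, step=0.5):
    """Film thicknesses at which the reflected intensity peaks."""
    ds = [i * step for i in range(int(d_max_nm / step))]
    rs = [reflectance(d) for d in ds]
    return [ds[i] for i in range(1, len(ds) - 1)
            if rs[i] > rs[i - 1] and rs[i] > rs[i + 1]]
```

With these assumed values the fringes sit exactly λ/2n ≈ 211 nm apart in thickness, so counting intensity oscillations during the few seconds of spin-coating tracks the film as it thins; the laterally scattered light, which this sketch doesn’t model, is what reports on the in-plane structure.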

The relevance of this is that this kind of solvent-based coating process is cheap and scalable to very large areas. The aim is to control the nanostructure of thin films of functional materials like semiconducting polymers simply by adjusting the processing conditions. We want to get the system to make itself as far as possible, rather than having to do lots of separate fabrication steps. If we can do this reliably, then this will get us closer to commercial processes for making, for example, very cheap solar cells using simple printing technology, or simple combinations of sensors and logic circuits by ink-jet printing.

Soft Machines: The Foresight Verdict

I was pleasantly surprised, on picking up a copy of the Foresight Nanotech Institute’s quarterly newsletter, Foresight Nanotech Update (not yet on the web, but it will presumably appear here in due course), to see a two-page, detailed review of my book Soft Machines. It’s actually a pretty positive review – “Soft Machines is an informative and readable exploration of the nanoworld” is a line I can imagine a publicist being pleased to fillet. Perhaps not surprisingly the reviewer doesn’t completely accept my arguments about the feasibility or otherwise of the Drexlerian program, saying “the arguments that Jones produces seem largely sound as far as they go, but not thorough enough to be conclusive”. Actually that’s a conclusion that I’m very comfortable with. We’ll see what things look like over the next couple of years, after some more real debate and some more supporting science.

At the Foresight Vision Weekend

I’m in California, where the Foresight Institute’s Vision Weekend has just finished. I gave a talk, outlining my thoughts about where the soft approach to nanotechnology might lead in the longer term. This was received well enough, though I’m sure without convincing the whole audience. This weekend is supposed to be off the record, so I’ll not give a blow-by-blow account. But one curious thing, which is in principle already a matter of public record, is worth mentioning. If you had looked at the program on the web last week you would have seen that a debate between me and Ralph Merkle about the viability of soft vs hard approaches to radical nanotechnology was scheduled. This debate disappeared from the final version of the program and never happened, for reasons that weren’t explained to me. Maybe this was just a result of the difficulty of trying to fit in a lot of speakers and events. Nonetheless it seems a pity that a community that often complains about the lack of detailed technical discussion of the proposals in Nanosystems didn’t get the chance to hear just such a debate.

Blog meets podcast

Soft Machines got a namecheck on the Berkeley Groks science radio show this week (you can download the MP3 here).

I don’t know whether to be more impressed that Soft Machines is so assiduously read by student broadcasters in search of material, or that one of the postdocs in my department is so addicted to obscure science podcasts that he noticed it and told me about it (thanks, Ashley). I’d like to say that they featured an in-depth discussion of some of the most serious issues this blog discusses, but instead, I’m afraid, it was the postscript to this item that caught their eye.

Other good science podcasts that Ashley recommends include the Science show from the Australian Broadcasting Corporation, here, and Nature Magazine’s podcast, here.

Framing nanotech: products, process, or program?

If you are a regulator or policy maker considering the possible impacts of nanotechnology, should you consider it solely in terms of the products it produces, should you think of it as a distinct process for making things, or should you ask about the more general socio-economic program of which it is part? This question is suggested by Sheila Jasanoff’s excellent new book, Designs on Nature. This book, recommended on Soft Machines the other day by James Wilsdon (see also James’s review of the book for the Financial Times), is a highly perceptive comparative study of the different ways in which the politics of biotechnology and genetic modification played out in the USA, the UK and Germany. Jasanoff finds one origin of the differences between the experience in the three countries in the different ways in which the technology was framed. In the USA, the emphasis was on asking whether the products of biotechnology were safe. In the UK, the issue was framed more broadly; the question was whether the process of genetic modification was in itself a cause for concern. In Germany, meanwhile, discussion of biotechnology could never escape the shadow of the complicity of German biomedical science with the National Socialist program, and the horrors that emerged from a state dedicated to the proposition that all men are not created equal. In this context, it was tempting to see biotechnology as part of a program in which science and a controlling, ordering state came together to subjugate both citizens and nature.

Since policy-makers, academics and activists are all looking at the unfolding debate around nanotechnology through the lens of the earlier GM debates, it’s worth asking how far this analysis can be applied to nanotechnology. The product-centred view is clearly in the ascendancy in the USA, where the debate is centred almost exclusively on the issue of the possible toxicity of nanoparticles. But the process-centred view is not really managing to establish itself anywhere. The problem is, of course, that nanotechnology does not present a distinct process in the way that genetic modification does. This is despite the early rhetoric of the National Nanotechnology Initiative – the slogan “building the world atom-by-atom” does suggest that nanotechnology offers a fundamentally different way of doing things, but the reality, of course, is that today’s nanotechnology products are made by engineering processes which are only incremental developments of ones that have gone before. It remains to be seen whether a radically different nanotechnology will emerge which will make this framing more relevant.

Should we, then, worry about nanotechnology as part of a broader, socio-economic program? This is clearly the central position of anti-nanotechnology campaigning groups like the ETC group. They may find the nano-toxicity issue to be a convenient stick to beat governments and nano-industry with, but their main argument is not with the technology in itself, but with the broader issues of globalization and liberal economics. Of course, many of those most strongly in favour of nanotechnology have their own program, too – the idea of transhumanism, with its high profile adherents such as Ray Kurzweil. It’s possible that opposition to nanotechnology will increasingly come to be framed in terms of opposition to the transhumanist program, along the lines of Bill McKibben’s book Enough.