On Nanohype

David Berube’s new book on nanotechnology, Nanohype, is reviewed in this week’s Nature (subscription required). The review is, in truth, not very favourable, but I’m not going to comment on that until my own copy of Nanohype makes it from the Amazon warehouse across the Atlantic. As is often the case, though, the major message of the review is that this is not the book that the reviewer would have written, which in this case is rather interesting, as the reviewer was Harry Collins, one of the foremost exponents of the discipline of the sociology of science.

Collins’s research method is in-depth studies of scientific communities, in which he attempts to uncover the often tacit shared values that underlie the scientific enterprise. As such, he is rather sceptical about the value of written material: “science is an oral culture. Although science’s spokespersons rattle on endlessly about peer review, the vast majority of published papers, peer reviewed or not, are largely ignored by scientists in the field. The problem that would face an alien from another planet who wanted to make a digest of terrestrial science from the literature alone would be about as bad as that facing a lay person who tries to understand it by reading everything on the Internet.”

Here’s why nanotechnology is interesting – as a scientific culture it barely exists yet. In contrast to the fields that Collins has studied – most recently, the search for gravitational waves – the idea of nanotechnology as a field has been imposed on the scientific community from outside, by the forces which I imagine Berube’s book documents, rather than emerging from within it. So the shared community values that Collins’s work aims to uncover have not yet even been agreed upon.

For those readers who are sceptical about the very idea of the sociology of science, the BBC is currently broadcasting a pair of very interesting documentaries about how science works, called Under Laboratory Conditions; the first one, broadcast last Wednesday on the BBC’s digital channel BBC4, rang very true to me (and I say this not just because I made a brief appearance in the programme myself).

Writing the history of the nanobot

The nanobot – the tiny submarine gliding through the bloodstream curing all our ills – is one of the most powerful images underlying the public perception of nanotechnology. In the newspapers, it seems compulsory to illustrate any article about any sort of nanotechnology with a fanciful picture of a nanobot and a Fantastic Voyage reference. Yet, to say that nanoscientists are ambivalent about these images is putting it mildly. Amongst the more sober nanobusiness and nanoscience types, the word nanobot is shorthand for everything they despise about the science fiction visions that nanotechnology has attracted. For my own part, I’ve argued that the popular notions of the nanobot are an embodiment of the fallacy that advanced nanotechnology will look like conventional engineering shrunk in size. And even followers of Drexler, in an attempt to head off fears of the grey goo dystopia of out-of-control self-replicating nanobots, have taken to downplaying their importance and arguing that their brand of advanced nanotechnology will take the form of innocent desktop devices looking rather like domestic bread-making machines.

The power of the nanobot image in the history of nanotechnology is emphasized by a recent article by a social scientist from the University of Nottingham, Brigitte Nerlich. This article, From Nautilus to Nanobo(a)ts: The Visual Construction of Nanoscience, traces the evolution of the nanobot image from its antecedents in science fiction, going back to Jules Verne, through Fantastic Voyage, right through to those stupid nanobot images that irk scientists so much. Nerlich argues that “popular culture and imagination do not simply follow and reflect science. Rather, they are a critical part of the process of developing science and technology; they can inspire or, indeed, discourage researchers to turn what is thinkable into new technologies and they can frame the ways in which the ‘public’ reacts to scientific innovations.”

Attempts to write the nanobot out of the history of nanotechnology thus seem doomed, so we had better try and rehabilitate the concept. If we accept that the shrunken submarine image is hopelessly misleading, how can we replace it by something more realistic?

Let’s prevail

Martyn Amos draws our attention to the collection of dangerous ideas on The Edge – the website of every popular science writer’s favourite literary agent, John Brockman. He asked a collection of writer-scientists to nominate their dangerous idea for 2006, and the result has something for everyone. Like Martyn, I very much like Lynn Margulis’s comments about the bacterial origins of our sensory perceptions. I’d want to go further, with the statement that human brains have more in common with colonies of social bacteria than with microprocessors.

Devotees of the nanobot have Ray Kurzweil arguing that radical life extension and expansion, enabled by radical nanotechnology, is as inevitable as it is desirable. The apparent problems of overpopulation will be overcome because “molecular nanoassembly devices will be able to manufacture a wide range of products, just about everything we need, with inexpensive tabletop devices.” Readers of Soft Machines will already know why I think Drexlerian nanotechnology isn’t going to lead us to this particular cornucopia. To my mind, though, the biggest danger of radical life extension isn’t overpopulation; it’s stagnation and boredom. Every generation has needed its angry young men and women, its punk rockers, to spark its creativity, and even as I grow older the thought of the world being run by a gerontocracy doesn’t cheer me up.

So I’m with Joel Garreau, in hoping that despite environmental challenges and the frightening speed of technological change, we’ll see “the ragged human convoy of divergent perceptions, piqued honor, posturing, insecurity and humor once again wending its way to glory”. In the nice phrase Garreau used in his book Radical Evolution – let’s prevail.

At the year’s turning

All the best to Soft Machines readers for the New Year, and warm congratulations to my collaborator and friend Tony Ryan, Professor of Chemistry at Sheffield University, who was awarded an OBE for services to science in the Queen’s New Year’s Honours. For those readers unfamiliar with the intricacies of the British honours system, that means he’s been made an Officer of the Order of the British Empire. Yes, I know, Britain hasn’t got an empire any more, but the whole point of chivalry is to be archaic…

Eat up your buckyballs (for your liver’s sake)

The discovery by Eva Oberdorster that the molecule C60, or buckminsterfullerene, caused brain damage in largemouth bass received huge publicity when it was first reported (see here for a relatively level-headed account). This work has now become one of the main underpinning texts of the belief that there is something uniquely dangerous about nanomaterials. It’s interesting, though perhaps not surprising, that a recent article in the American Chemical Society journal Nano Letters, which reaches an exactly opposite conclusion (abstract, subscription required for full article), has received no publicity at all.

In this work, from Fathi Moussa’s group in the Department of Pharmacy in Université Paris XI, it is shown that not only did C60 not have a toxic effect on the rats and mice it was tested on; it also protected rats’ livers from the toxic effects of carbon tetrachloride, an effect ascribed to C60’s powerful anti-oxidant properties. The paper is not reticent in its criticism of the earlier work; it ascribes the apparent toxic effects previously observed to the fact that the C60 was prepared in an organic solvent, THF, which was not completely removed when a water-suspension of C60 was prepared. In short, it was the toxic effects of THF that were affecting the unfortunate fish, not those of C60. The tone of these comments is surprisingly caustic for a peer reviewed paper, and it finishes with a note of magnificent Gallic sarcasm. Referring to reports that naturally occurring fullerenes (presumably from the soot from forest fires) have been discovered in fossil dinosaur eggs, the authors ask “we feel that it cannot be said that the C60 discovered in dinosaur eggs was the origin of the mass extinction of these animals, or was it?”

I should stress that I’m not advocating that Soft Machines readers should immediately consume a large quantity of C60 and then start abusing solvents, nor should we now assume that fullerenes are entirely safe and without potential environmental problems. But there are a couple of lessons we should draw from this. The first is that toxicology is not necessarily easy to get right. The second, and perhaps the more important, is that learning about science from press releases is very misleading. What appear to be the big breakthroughs at the time get lots of coverage, but the follow-up work, which can modify or even completely contradict the initial big story, barely gets noticed.

Six challenges for molecular nanotechnology

To the outsider, the debate about whether Drexler’s vision of radical nanotechnology – molecular manufacturing or molecular nanotechnology (MNT) – is feasible or not can look a bit sterile. Many in the anti- camp take the view that the Drexler proposals are so obviously flawed that it’s not really worth spending any time making serious arguments against them, while on the pro- side the reply to any criticism is often “it’s all been worked out in Nanosystems, in which no errors have been found”. I think the recent debate we had at Nottingham did begin to move onto real issues. I can’t help feeling, though, that the time has come to move on from debating positions.

With this in mind, here are six areas in which I think the proposals of molecular nanotechnology are vulnerable. Trying to be constructive, I’ve tried, as far as possible, to formulate the issues as concrete research questions that could begin to be addressed now. Ideally, we would be seeing experimental work – this field has been dominated by simulation for too long. But theory and simulation do have their place; one has to recognise the limitations of the simulation methods being used and to validate the simulations against reality whenever possible. A couple of recent developments from the pro-MNT camp are encouraging – the Drexler/Allis paper (PDF) used state-of-the-art quantum chemistry methods to design a “tool-tip” for mechanosynthesis, while the Nanorex program should make it much more convenient to do large scale molecular dynamics simulations of complex machine systems. What’s needed now is a systematic and scientific use of these and other methods, moderated by frequent reality checks, to answer some well-posed questions. Here are my suggestions for some of those questions.

1. Stability of nanoclusters and surface reconstruction.
The Problem. The “machine parts” of molecular nanotechnology – the cogs and gears so familiar from MNT illustrations – are essentially molecular clusters with odd and special shapes. They have been designed using molecular modelling software, which works on the principle that if valencies are satisfied and bonds aren’t distorted too much from their normal values then the structures formed will be chemically stable. But this is an assumption, and two features of MNT machine parts make it questionable. These structures are typically envisaged as having substantially strained bonds. And, almost by definition, they have a lot of surface. We know from extensive experimental work in surface science that the stable structure of clean surfaces is very rarely what you would predict on the basis of simple molecular modelling – they “reconstruct”. One highly relevant finding is that the stable forms of some small diamond clusters actually have surfaces coated with graphite-like carbon (see here, for example). There are two linked questions here. We need to know the stable structure at equilibrium – that is, the structure with the overall lowest free energy. It may be possible to make structures that are metastable – that is, structures that are not at equilibrium, but which have a low enough probability of transforming to the stable state that they are usable for practical purposes. To assess whether these structures will be useful or not, we need to be able to estimate two things: the energy barrier that has to be surmounted, and how much energy is available in the system to push it over that barrier. The second of these factors is closely related to challenge 3.
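To put some flesh on the kinetic stability question, here’s a back-of-envelope Arrhenius estimate of how long a metastable structure survives as a function of barrier height. The attempt frequency of 10^13 Hz is an assumed, typical bond-vibration figure, not a property of any particular MNT structure:

```python
import math

# Rough Arrhenius estimate of the lifetime of a metastable structure.
# Illustrative numbers only: an attempt frequency of ~1e13 Hz (a typical
# bond-vibration frequency) and a range of assumed barrier heights in eV.
K_B = 8.617e-5  # Boltzmann constant, eV/K
T = 300.0       # room temperature, K

def lifetime_seconds(barrier_ev, attempt_hz=1e13):
    """Mean time before a thermally activated transformation occurs."""
    rate = attempt_hz * math.exp(-barrier_ev / (K_B * T))
    return 1.0 / rate

for barrier in (0.5, 1.0, 1.5, 2.0):
    print(f"barrier {barrier:.1f} eV -> lifetime ~ {lifetime_seconds(barrier):.1e} s")
```

The exponential is the whole story here: a half-electron-volt barrier is crossed in microseconds at room temperature, while a couple of electron-volts is effectively stable forever. Establishing where real machine-part structures sit in this range is precisely what the quantum chemistry calculations are needed for.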

Research needed. Firstly, we need proper calculations, using quantum chemistry techniques (e.g. density functional theory) of the chemical stability of some target machine parts. Subsequently it would be worth doing molecular dynamics calculations with potentials that allow chemical reactions to probe the kinetic stability of metastable structures.

2. Thermal noise, Brownian motion and tolerance.
The Problem. The mechanical engineering paradigm that underlies MNT depends on close dimensional tolerances. But at the nanoscale, at room temperature, Brownian motion and thermal noise mean that parts are constantly flexing and fluctuating in size, making the effective “thermal tolerance” much worse than the mechanical tolerances that we rely on in macroscopic engineering. Clearly one answer is to use very stiff materials like diamond, but even diamond may not be stiff enough. The Nanorex simulations show this “wobbliness” very clearly. It should be remembered that in these simulations, the software nails down the structures at fixed points, but in reality the supports and mountings for the moving parts will all be just as wobbly. Will it be possible to engineer complex mechanisms in the face of this lack of dimensional tolerance?

Research needed. Drexler’s “Nanosystems” correctly lays out the framework for calculating the effects of thermal noise, but the only application of these calculations to an engineering design is a calculation of positional uncertainty at the tip of a molecular positioner. This shows that the positional uncertainty can be made less than an atomic diameter – this is clearly a necessary condition for such devices to work, but it’s not obvious that it is a sufficient one. What is needed to clarify this issue are molecular dynamics simulations, carried out at finite temperature, of machines of some degree of complexity, in which both the mechanism itself and its mounting are subject to thermal noise.
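The zeroth-order version of such an estimate comes straight from the equipartition theorem: a part held by an effective spring of stiffness k has a mean-square thermal displacement of kT/k. A quick sketch, with stiffness values that are illustrative assumptions rather than numbers from any published design:

```python
import math

# Equipartition estimate of RMS thermal displacement of an elastically
# mounted part: <x^2> = k_B * T / k, where k is the effective stiffness.
K_B = 1.381e-23  # Boltzmann constant, J/K
T = 300.0        # room temperature, K

def rms_displacement_nm(stiffness_n_per_m):
    """RMS thermal displacement, in nanometres, for a given stiffness."""
    return math.sqrt(K_B * T / stiffness_n_per_m) * 1e9

# A carbon-carbon bond is ~650 N/m; a long, slender support is far softer.
for k in (650.0, 10.0, 1.0, 0.1):
    print(f"stiffness {k:7.1f} N/m -> x_rms ~ {rms_displacement_nm(k):.4f} nm")
```

A stiff covalent bond wobbles by only a few thousandths of a nanometre, but a slender support with an effective stiffness of 0.1 N/m wobbles by a couple of ångströms – the size of an atom – which is why the mounting, and not just the mechanism, matters.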

3. Friction and energy dissipation.
The Problem. As mechanisms get smaller, the relative amount of interfacial area becomes much larger and surface forces become stronger. As people attempt to shrink micro-electromechanical systems (MEMS) towards the nanoscale, the combination of friction and irreversible sticking (called “stiction” in the field) causes many devices to fail. It’s an article of faith of MNT supporters that these problems won’t be met in MNT systems, because of the atomic perfection of the surfaces and the rigorous exclusion of foreign molecular species from the inner workings of MNT devices (the “eutactic environment” – but see challenge 5 below). It’s certainly true that the friction of clean diamond surfaces is likely to be very low by macroscopic standards (the special frictional properties of diamond were already understood by David Tabor), particularly if the two sliding surfaces aren’t crystallographically related. However, in cases where direct comparisons can be made between the estimates of sliding friction in Nanosystems and the results of molecular dynamics simulations (e.g. Harrison et al., Physical Review B 46, 9700 (1992)), the Nanosystems estimates turn out to be much too low. MNT systems will have very large internal areas and are envisaged as operating at very high power densities; thus even rather low values of friction may in practice compromise the operation of the devices by generating high levels of local heating, which in turn will make any chemical stability issues (see challenge 1) much more serious.

Given that the machine parts of MNT are envisaged as being so small, and the contacting area of these parts is so large with respect to their volumes, it’s perhaps questionable how useful friction is as a concept at all. What we are talking about is the leakage of energy from the driving modes of the machines into the random, higher frequency vibrational modes that constitute heat. This mode coupling will always occur whenever the chemical bonds are stretched beyond the range over which they are well approximated by a harmonic potential (i.e. they obey Hooke’s law). At least one of the Nanorex simulations shows this leakage of energy into vibrational modes rather clearly.
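To see how quickly the harmonic approximation breaks down, one can compare a Morse potential – a standard textbook model of a covalent bond – with the harmonic potential that matches its curvature at the bottom of the well. The parameters below are roughly carbon–carbon-like, but purely illustrative:

```python
import math

# How quickly a chemical bond departs from Hooke's law: a Morse potential
# versus its harmonic approximation. Illustrative, roughly C-C-like
# parameters: well depth D ~ 3.6 eV, with a chosen so that the stiffness
# k = 2*D*a^2 comes out at a few hundred N/m.
D = 3.6   # well depth, eV
a = 19.0  # inverse range parameter, 1/nm

def morse_energy(x):
    """Morse bond energy at a stretch x (nm) from equilibrium."""
    return D * (1.0 - math.exp(-a * x)) ** 2

def harmonic_energy(x):
    """Harmonic approximation with the same curvature at the well bottom."""
    return D * (a * x) ** 2

for x in (0.002, 0.005, 0.01, 0.02):  # stretches of 2 to 20 picometres
    err = 1.0 - morse_energy(x) / harmonic_energy(x)
    print(f"stretch {x*1000:4.0f} pm: harmonic approximation off by {err:.0%}")
```

Even a stretch of a few picometres produces errors of several per cent, and at tens of picometres the harmonic picture has broken down badly – which is to say that any vigorously driven mechanism is well into the regime where energy leaks from the driving modes into heat.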

Research needed. The field of nanoscale friction has moved forward greatly in the last ten years (a good accessible review by Jacqueline Krim can be found here), and an immediate priority should be to explore the implications for MNT of this new body of experimental and simulation work. Further insight into the scale of the problem, and any design constraints it would lead to, can then be obtained by quantitative molecular dynamics simulations of simple, driven nano-mechanical systems.

4. Design for a motor.
The Problem. It’s obvious, on the one hand, that MNT needs some kind of power source to work. On the other hand, MNT supporters often point to the very high power densities that it will be possible to achieve in MNT systems. The basis of their confidence is a design for an electrostatic motor in Drexler’s “Nanosystems”, together with some estimates of its performance. The design is very ingenious in concept – it essentially works on the principle of a Van de Graaff generator run backwards. The problem is that only the broad outline of the design is given in Nanosystems, and when one thinks through in detail how it might be built, more and more difficulties emerge. The design relies on the induction of charge by making successive electrical contact between materials of different work functions. The materials to be used need to be specified, and the chemical stability of the resulting structures needs to be tested as in challenge 1. This is a potentially tricky problem, as the use of any kind of metal is likely to raise serious surface stability issues. The design also specifies that electrical contact is made by electron tunnelling rather than direct physical contact. This is probably essential in order to avoid immediate failure due to the adhesion of contacting surfaces (this would certainly happen with a metallic contact), but in turn, because of the exponential dependence of the tunnelling current on separation, it calls for exquisite precision in positioning, which brings us back to the problems of tolerance in the face of thermal noise discussed in challenge 2.
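The tunnelling issue can be made quantitative with the standard expression for a vacuum tunnelling gap, I ∝ exp(−2κd), with κ = √(2mφ)/ħ. Assuming a typical metal-like work function of 5 eV (an illustrative number, not one taken from Nanosystems):

```python
import math

# Sensitivity of a tunnelling contact to gap distance: I ~ exp(-2*kappa*d),
# with kappa = sqrt(2*m*phi)/hbar. The 5 eV work function is an assumed,
# typical value; the point is the exponential, not the exact numbers.
HBAR = 1.055e-34   # J s
M_E = 9.109e-31    # electron mass, kg
EV = 1.602e-19     # J per eV

phi = 5.0 * EV
kappa = math.sqrt(2.0 * M_E * phi) / HBAR   # decay constant, ~1.1e10 per metre

def current_ratio(delta_d_m):
    """Factor by which the current changes if the gap widens by delta_d."""
    return math.exp(-2.0 * kappa * delta_d_m)

# A flutter of one angstrom (1e-10 m) in the gap:
print(f"current drops by a factor of ~{1.0 / current_ratio(1e-10):.0f}")
```

An ångström of gap flutter – exactly the sort of displacement thermal noise produces in a less-than-perfectly-stiff mounting – changes the contact current by an order of magnitude.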

Research needed. The electrostatic motor design needs to be worked up to atomistic level of detail and tested.

5. The eutactic environment and the feed-through problem.
The Problem. It is envisaged that the operations of MNT will take place in a completely controlled environment sealed from the outside world – the so-called “eutactic” environment. There are good reasons for this: the presence of uncontrolled, foreign chemical species will almost certainly lead to molecular adsorption on any exposed surfaces, followed by uncontrolled mechanochemistry leading to irreversible chemical damage to the mechanisms. MNT will need an extreme ultra-high vacuum to work. (It’s worth noting, though, that even in the absence of the random collisions of gas molecules, Brownian motion – in the sense of thermal noise – is still present at finite temperatures). But, to be useful, MNT devices will need to interact with the outside world. A medical MNT device will need to exist in bodily fluids – among the most heterogeneous media it’s possible to imagine – and an MNT manufacturing device will need to take in raw materials from the environment and deliver the product. In pretty much any application of MNT, molecules will need to be exchanged with the surroundings. As anyone who’s tried to do an experiment in a vacuum system knows, it’s the interfaces between the vacuum system and the outside world – the feed-throughs – that cause all the problems. Nanosystems includes a design for a “molecular mill” to admit selected molecules into the eutactic environment, but again it is at the level of a rough sketch. The main argument for the feasibility of such selective pumps and valves is the existence of membrane pumps in biology. But I would argue that these devices are typical examples of “soft machines” that only work because they are flexible. Moreover, though a calcium pump is fairly effective at discriminating between calcium ions and sodium ions, its operation is statistical – its selectivity doesn’t need to be anything like 100%. To maintain a eutactic environment, common small molecules like water and oxygen will need to be excluded with very high efficiency.
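A toy piece of bookkeeping, with entirely made-up numbers, shows why “very high” efficiency is a severe demand. Suppose that for every feedstock molecule the valve admits, it is also struck by a thousand ambient molecules, each rejected with some probability:

```python
# How good does a molecular valve have to be? A toy bookkeeping model:
# for each feedstock molecule admitted, assume the valve is also struck
# by `hits` ambient molecules (water, oxygen...), each turned away with
# probability `selectivity`. All the numbers here are illustrative.

def contaminants_per_feedstock(selectivity, hits=1000):
    """Expected contaminant molecules admitted per feedstock molecule."""
    return hits * (1.0 - selectivity)

for s in (0.99, 0.9999, 0.999999):
    n = contaminants_per_feedstock(s)
    print(f"selectivity {s}: ~{n:g} contaminants per feedstock molecule")
```

A biological pump can shrug off a selectivity of 99%, because a statistical level of contamination is tolerable in a soft, wet machine; a eutactic environment, in which a single adsorbed water molecule can trigger uncontrolled mechanochemistry, cannot.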

Research needed. Molecular level design of (for example) a selective valve or pump based on rigid materials that admits a chosen molecule while excluding (say) oxygen and water with 100% efficiency.

6. Implementation path.
The Problem. The all-important practical question is, of course, how we get from our technological capabilities today to the capabilities needed to implement MNT. Here there is a difference of opinion within the pro-MNT camp, with two quite different approaches being proposed. Robert Freitas believes that the best approach is to develop the current techniques of direct molecular manipulation using scanning probe microscopes to the point at which one is able to achieve a true mechanosynthetic step. This is interesting science in its own right, but some idea of the formidable difficulties involved can be found by reading Philip Moriarty’s critique of a specific proposal by Robert Freitas, and the subsequent correspondence with Chris Phoenix. Drexler himself prefers the idea of developing a biomimetic soft nanotechnology very much along the lines of what I describe in Soft Machines, and then making a transition from such a soft, wet system to a diamond-based “hard” nanotechnology. This involves a transition between two completely incompatible environments and two incompatible design philosophies, and I simply don’t see how it could happen; without a concrete proposal it’s difficult to judge its feasibility either way.

Research needed. Engage with scanning probe microscopists to overcome the formidable experimental problems in the way of direct mechanosynthesis. Develop a concrete proposal for how one might make the transition between a functional, biomimetic “soft nanotechnology” system and hard MNT.

Nanotubes: not as perfect as one might like

Carbon nanotubes are often imagined to be structures of great perfection and regularity, but the reality is that, like virtually all materials we encounter, they will have defects – places where there’s a mistake in the crystal structure, like a missing atom or a wrongly connected bond. Defects are tremendously important in materials science, because they’re what stop materials from being anything like as strong as you would estimate they ought to be from a simple calculation. A recent paper in Nature Materials (abstract here, subscription required for full paper) provides what is, I think, the first accurate measurement of defect densities in single walled carbon nanotubes. For typical nanotubes, produced by chemical vapour deposition, one finds one defect every four microns of nanotube length.

It’s these atomic-level flaws that will, in practice, limit both the electronic and the mechanical properties of carbon nanotubes. The study, by Philip Collins and coworkers at UC Irvine, uses a new technique for decorating the defects electrochemically. It’s not able to distinguish between different types of defect, which could include a substitutional dopant, a broken bond passivated by a further chemical group, or a mechanical strain or kink, as well as what is perhaps the theoretically best studied nanotube defect – the Stone–Wales defect. The latter occurs when, in a group of four carbon hexagons, one bond is rotated through 90°, converting the four hexagons into two pentagons and two heptagons.

The figure of one defect per 4 microns of tube is, in one way, rather impressive – it translates into there being only about one defect for every million or so atoms, a remarkable degree of perfection for a chemically grown material, even if it falls well short of device-grade silicon, which is pretty much the most perfect crystalline material available. But, on the other hand, given the essentially one-dimensional nature of a nanotube, it’s pretty significant, since a single defect in a length of nanotube being used in an electronic device would dramatically change its characteristics. And the presence of all these weak spots is likely to mean that it’s going to be difficult to make a macroscale nanotube cable whose strength approaches the theoretical estimates people have been making, for example in connection with the proposed space elevator.
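For the record, here’s the arithmetic for turning a linear defect density into a per-atom figure. The 2 nm diameter is an assumption (CVD-grown single-walled tubes are typically 1–3 nm across), so the answer is only good to a factor of a few:

```python
import math

# Converting "one defect per 4 microns of tube" into a per-atom figure.
# Assumptions: a 2 nm diameter single-walled tube, and graphene's areal
# density of ~38 carbon atoms per square nanometre (which follows from
# its 2.46 angstrom lattice constant).
ATOMS_PER_NM2 = 38.2
diameter_nm = 2.0
defect_spacing_nm = 4000.0   # 4 microns between defects

# Atoms in the cylindrical wall between one defect and the next:
atoms_per_defect = math.pi * diameter_nm * defect_spacing_nm * ATOMS_PER_NM2
print(f"~{atoms_per_defect:.1e} atoms per defect")
```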

The Wild, Wild East

There’s a developing conventional wisdom about the way science and technology in general, and nanotechnology in particular, is developing in Asia. This comes in two parts: firstly, it’s noted that the Asian countries – particularly China – are set to overtake the west in science and technology, and then it’s suggested that what will help these countries gain their new supremacy is the fact that there, technology will be developed without moral scruples, in contrast to the self-inflicted handicaps that Western countries are suffering. These handicaps, conventional wisdom further asserts, take the form, in the United States, of opposition from the religious right to the entire secular, scientific worldview, while in Europe anti-growth, left-wing environmentalists are the major culprits. The idea of a lawless, wild east, where technological stuff just gets done without agonising about social and environmental consequences, is becoming a bit of a bogeyman for western politicians, as nicely pointed out in this recent Demos pamphlet.

Clearly the rapid development of nanotechnology, together with biotechnology and other branches of advanced applied science, in China, Korea, Taiwan and Singapore, is a significant and important story that’s likely to profoundly change the shape of the world political economy over the next twenty years. But I can’t entirely buy in to the current mood of panic about this. Firstly, idealistic though I may be, I don’t believe that the development of science and technology is a zero-sum game. The opposite, in fact – the benefits of technological advances can spread from their place of invention very rapidly round the world. Given on the one hand, the very urgent environmental and developmental problems that need to be solved in the world now, and on the other, the entirely legitimate aspirations of the citizens of less developed countries to the lifestyles we enjoy in the west, the rapid development of science and technology in countries like China should be welcomed. Secondly, it just doesn’t seem plausible that science and technology really will develop in these countries without being constrained by societal values. I know much less than I would like about the cultures, beliefs and values of these different countries, but I’m sure that societal and ethical issues will be hugely important in steering the development of technology there, even though some of those issues may be different from the ones that are important in the west. And just as the west consists of many different countries with societal values that differ from each other in important ways, the idea of a monolithic set of “Asian values” must be at best a gross oversimplification.

An interesting little vignette that illuminates some of these issues is provided by the recent saga about the troubles of the Korean stem cell pioneer Hwang Woo-Suk. Heavily criticised in the west for the ethical lapse of using donated eggs from his own graduate students, he has been strongly defended in his native Korea. At first sight, this seems to exactly support the conventional wisdom – in this view the Koreans have gained a world-leading position in stem-cell research by simply pressing ahead while western countries – particularly, in this case, the USA – have hesitated due to moral and religious qualms. But, as discussed in this very interesting article in the Economist, the reality is probably rather more complex and nuanced.

Drexler vision endorsed by Princeton physicists (or their publicists, at least)

A recent press release, describing a paper by Princeton theoretical physicists Rechtsman, Stillinger, and Torquato, begins with the stirring words “It has been 20 years since the futurist Eric Drexler daringly predicted a new world where miniaturized robots would build things one molecule at a time. The world of nanotechnology that Drexler envisioned is beginning to come to pass….” The mention of Drexler has ensured that the release got a mention on the Foresight Institute’s blog, Nanodot, but Christine Peterson disarmingly appeals for help in understanding what on earth the release is talking about. Fair enough, in my view; whatever one thinks of the Drexler reference, this is one of the worst-written press releases I’ve seen for some time.

A look at the original paper, in Physical Review Letters (abstract here, preprint here, subscription required for published paper), gives us more of a clue. The backstory here is the fact that collections of spherical particles in the size range of tens to hundreds of nanometers can (if they’re all the same size) spontaneously self-assemble to form ordered arrays, often called “colloidal crystals”. The gem-stone opal is a natural example of this phenomenon; it’s formed from naturally occurring silica nanoparticles, and its iridescent colours are a result of light diffraction from the crystals. It is these striking optical properties that have raised research interest in synthetic analogues; for some sets of parameters it’s predicted that these materials might have an “optical bandgap” – a range of wavelengths of light that can’t get through the crystal in any direction. This would be useful, for example, in making highly efficient solid state lasers. The problem is that most systems of simple spheres form close-packed crystal structures – of the kind you get when stacking oranges. But it would be useful if one could make colloidal crystals with different structures, such as the diamond structure, which have more interesting potential optical properties. In principle one might be able to do this by tinkering with the interaction potentials between the particles. Close packed structures occur because the particles simply attract each other more and more until they touch, at which point they resist further compression. What this paper shows is that you can design potentials to produce the crystal structure you want – perhaps you need the particles to attract each other up to a certain distance, then softly repel until they get a bit closer, and then start to attract again until they touch. This is an elegant piece of statistical mechanics. Of course, having designed the potential theoretically you still need to design a system that in practice has these properties. One can imagine how to do this in principle, perhaps by having colloids that combine a tunable surface charge with a soft polymer coating, but such a demonstration needs a lot of further experimental work.
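To make the idea concrete, here’s a toy example of such a designed potential – a conventional attractive well plus a soft repulsive shoulder that penalizes the second-neighbour distance of a close-packed lattice. The functional form and parameters are my own illustration, not one of the optimized potentials from the paper:

```python
import math

# A toy "designed" pair potential: a standard Lennard-Jones attractive
# well plus a soft Gaussian shoulder at intermediate range. Illustrative
# only; reduced units (contact diameter and well depth both equal to 1).

def lj(r, epsilon=1.0, sigma=1.0):
    """Ordinary Lennard-Jones potential: attraction plus hard contact."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 * sr6 - sr6)

def designed(r):
    """LJ well plus a soft bump near r = 1.6, roughly the second-neighbour
    distance of a close-packed lattice of particles touching at r ~ 1.12."""
    shoulder = 0.8 * math.exp(-((r - 1.6) / 0.2) ** 2)
    return lj(r) + shoulder

for r in (1.12, 1.4, 1.6, 2.0):
    print(f"r = {r:4.2f}: V = {designed(r):+.3f}")
```

Particles still stick at contact (the deep well near r ≈ 1.1), but the positive shoulder near r = 1.6 makes close-packed arrangements energetically costly, nudging the system towards more open structures.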

Is this really “turning a central concept of nanotechnology on its head”? Of course not. It’s a nice step forward in theoretical methods, but it’s absolutely in the mainstream of a well established research direction for obtaining interesting ordered structures by colloidal self-assembly. And as for the next sentence – “If the theory bears out – and it is in its infancy – it could have radical implications not just for industries like telecommunications and computers but also for our understanding of the nature of life” – I can only hope the authors are cringing as much as they should be at what their publicists have put out for them.

Updated with link to preprint Tuesday 20.50.

The UK Government’s research programme into the potential risks of nanoparticles

As trailed in my last post, the UK government has published the first report (Characterising the risks posed by engineered nanoparticles: a first UK Government research report – a 55 page PDF) from its programme of research into the potential health and environmental risks of engineered, free nanoparticles. Or rather, it’s published a document that reveals that there isn’t really a programme of research at all, in the sense of an earmarked block of funds and a set of research goals and priorities. Instead, the report describes an ad-hoc assortment of bits and pieces of research funded by all kinds of different routes. The Royal Society’s response is sceptical, stressing that the report “reveals that no money has been specifically set aside for important research into, for example, how nanoparticles (ultra-small pieces of material) might penetrate the skin.”

It’s clear, then, that if there is a nanotoxicity bandwagon developing (as identified by TNTlog), the UK government is being pretty half-hearted about jumping on it. I don’t think this is an entirely bad thing. Rather than joining some auction to declare what arbitrary percentage of their nanotechnology spend goes on toxicology, it makes sense to take a cold look at what research needs to be done (taking a realistic, hype-free view of how much of this stuff there really is in the workplace and on the market), and what research is already going on. No-one gains by duplicating research, and identifying the gaps and the real needs is a good place to start.

What the government should understand, though, is that when it does identify knowledge gaps, it has to take the lead in filling them. Money has to be earmarked, and if necessary capacity has to be built. One can’t rely on the scientific market, as it were, to bring forward research proposals in the required areas spontaneously. Toxicology, occupational health and environmental science are crucially important, but they are often not exciting science as a Research Council peer review panel would define it.