Metamodern

Eric Drexler, the author of Nanosystems and Engines of Creation, launches his own blog today – Metamodern. The topics he’s covered so far include DNA nanotechnology and nanoplasmonics; these, to my mind, are a couple of the most exciting areas of modern nanoscience.

In the various debates about nanotechnology that have taken place over the years, not least on this blog, one sometimes has the sense that some of the people who presume to speak on behalf of Drexler and his ideas aren’t necessarily doing him any favours. So I’m looking forward to reading what Drexler is thinking now, directly from the source.

A shadow biosphere?

Where are we most likely to find truly alien life? The obvious (though difficult) place to look is on another planet or moon, whether that’s under the icy crust of Europa, near the poles of Mars, or, perhaps, on one of the planets we’re starting to discover orbiting distant stars. Alternatively, we might be able to make alien life for ourselves, through the emerging discipline of bottom-up synthetic biology. But what if alien life is to be found right under our noses, right here on earth, forming a kind of shadow biosphere? This provocative and fascinating hypothesis has been suggested by philosopher Carol Cleland and biologist Shelley Copley, both from the University of Colorado, Boulder, in their article “The possibility of alternative microbial life on Earth” (PDF, International Journal of Astrobiology 4, pp. 165-173, 2005).

The obvious objection to this suggestion is that if such alien life existed, we’d have noticed it by now. But, if it did exist, how would we know? We’d be hard pressed to find it simply by looking under a microscope – alien microbial life, if its basic units were structured on the micro- or nano-scale, would be impossible to distinguish by appearance alone from the many forms of normal microbial life, or for that matter from all sorts of structures formed by inorganic processes. One of the surprises of modern biology is the huge number of new kinds of microbes that are discovered when, instead of relying on culturing microbes to identify them, one directly amplifies and sequences their nucleic acids. But suppose there exists a class of life-forms whose biochemistry fundamentally differs from the system based on nucleic acids and proteins that all “normal” life depends on – life-forms whose genetic information is coded in a fundamentally different way. There’s a strong argument that early in the ancestry of our current form of biology, before the evolution of the current DNA-based genetic code, a simpler form of life must have existed. So if descendants of this earlier form of life still exist on the earth, or if life on earth emerged more than once and some of the alternative versions still exist, detection methods that assume that life must involve nucleic acids will not help us at all. Just as, until the development of the polymerase chain reaction as a tool for detecting unculturable microbes, we were able to detect only a tiny fraction of the microbes that surround us, it’s all too plausible that if alien life did exist around us we would not currently be able to detect it.

To find such alien life would be the scientific discovery of the century. We’d like to be able to make statements about life in general – how it is to be defined, what the laws are, not of biology but of all possible biologies, and, perhaps, how one might design and build new types of life. But we find this difficult to do at the moment, as we only know about one type of life, and it’s hard to generalise from a single example. Even if it didn’t succeed, the effort of seriously looking for alien life on earth would be hugely rewarding, forcing us to broaden our notions of the various, very different, manifestations that life might take.

Deja vu all over again?

Today the UK’s Royal Commission on Environmental Pollution released a new report on the potential risks of new nanomaterials and the implications of this for regulation and the governance of innovation. The report – Novel Materials in the Environment: The case of nanotechnology – is well-written and thoughtful, and will undoubtedly have considerable impact. Nonetheless, four years after the Royal Society report on nanotechnology, and nearly two years after the Council for Science and Technology’s critical verdict on the government’s response to that report, some of the messages are depressingly familiar. There are real uncertainties about the potential impact of nanoparticles on human health and the environment; to reduce these uncertainties some targeted research is required; and this research isn’t going to appear by itself, so some co-ordinated programmes are needed. So what’s new this time around?

Andrew Maynard picks out some key messages. The Commission is very insistent on the need to move beyond considering nanomaterials as a single class; attempts to regulate solely on the basis of size are misguided and instead one needs to ask what the materials do and how they behave. In terms of the regulatory framework, the Commission was surprisingly (to some observers, I suspect) sanguine about the suitability and adaptability of the EU’s regulatory framework for chemicals, REACH, which, it believes, can readily be modified to meet the special challenges of nanomaterials, as long as the research needed to fill the knowledge gaps gets done.

Where the report does depart from some previous reports is in a rather subtle and wide-ranging discussion of the conceptual basis of regulation for fast-moving new technologies. It identifies three contrasting positions, none of which it finds satisfactory. The “pro-innovation” position calls for regulators to step back and let the technology develop unhindered, pausing only when positive evidence of harm emerges. “Risk-based” approaches allow for controls to be imposed, but only when clear scientific grounds for concern can be stated, and with a balance between the cost of regulating and the probability and severity of the danger. The “precautionary” approach puts the burden of proof on the promoters of new technology to show that it is, beyond any reasonable doubt, safe, before it is permitted. The long history of unanticipated consequences of new technology warns us against the first stance, while the second position assumes that the state of knowledge is sufficient to do these risk/benefit analyses with confidence, which isn’t likely to be the case for most fast-moving new technologies. But the precautionary approach falls down, too, if, as the Commission accepts, the new technologies have the potential to yield significant benefits that would be lost if they were to be rejected on the grounds of inevitably incomplete information. To resolve this dilemma, the Commission proposes an adaptive system of regulation whose aim, above all, is to avoid technological inflexibility. The key, in their view, is to innovate in a way that doesn’t lead society down paths that are difficult to reverse out of, if new information should arise about unanticipated threats to health or the environment.

The report has generated a substantial degree of interest in the press, and, needless to say, the coverage doesn’t generally reflect these subtle discussions. At one end, the coverage is relatively sober, for example Action urged over nanomaterials, from the BBC, and Tight regulation urged on nanotechnology, from the Financial Times. In the Daily Mail, on the other hand, we have Tiny but toxic: Nanoparticles with asbestos-like properties found in everyday goods. Notwithstanding Tim Harper’s suggestion that some will welcome this sort of coverage if it injects some urgency into the government’s response, this is not a good place for nanotechnology to be finding itself.

Nanocosmetics in the news

Uncertainties surrounding the use of nanoparticles in cosmetics made the news in the UK yesterday; this followed a press release from the consumer group Which? – Beauty must face up to nano. This is related to a forthcoming report in their magazine, in which a variety of cosmetic companies were asked about their use of nanotechnologies (I was one of the experts consulted for commentary on the results of these inquiries).

The two issues that concern Which? are some continuing uncertainties about nanoparticle safety and the fact that it hasn’t generally been made clear to consumers that nanoparticles are being used. Their head of policy, Sue Davies, emphasizes that their position isn’t blanket opposition: “We’re not saying the use of nanotechnology in cosmetics is a bad thing, far from it. Many of its applications could lead to exciting and revolutionary developments in a wide range of products, but until all the necessary safety tests are carried out, the simple fact is we just don’t know enough.” Of 67 companies approached for information about their use of nanotechnologies, only 8 replied with useful information, prompting Sue to comment: “It was concerning that so few companies came forward to be involved in our report and we are grateful for those that were responsible enough to do so. The cosmetics industry needs to stop burying its head in the sand and come clean about how it is using nanotechnology.”

On the other hand, the companies that did supply information include many of the biggest names – L’Oreal, Unilever, Nivea, Avon, Boots, Body Shop, Korres and Green People – all of whom use nanoparticulate titanium dioxide (and, in some cases, nanoparticulate zinc oxide). This makes clear just how widespread the use of these materials is (and goes some way to explaining where the estimated 130 tonnes of nanoscale titanium dioxide consumed annually in the UK is going).

The story was covered surprisingly widely by the media (considering that yesterday was not exactly a slow news day). Many reports focus on the lack of consumer information, including the BBC, which reports that “consumers cannot tell which products use nanomaterials as many fail to mention it”, and the Guardian, which highlights the poor response rate. The story is also covered in the Daily Telegraph, while the Daily Mail, predictably, takes a less nuanced view. Under the headline The beauty creams with nanoparticles that could poison your body, the Mail explains that “the size of the particles may allow them to permeate protective barriers in the body, such as those surrounding the brain or a developing baby in the womb.”

What are the issues here? There is, if I can put it this way, a cosmetic problem, in that there are some products on the market making claims that seem at best unwise – I’m thinking here of the claimed use of fullerenes as antioxidants in face creams. It may well be that these ingredients are present in such small quantities that there is no possibility of danger, but given the uncertainties surrounding fullerene toxicology, putting products like this on the market doesn’t seem very smart, and is likely to cause reputational damage to the whole industry. There is a lot more data about nanoscale titanium dioxide, and the evidence that these particular nanoparticles aren’t able to penetrate healthy skin looks reasonably convincing. They deliver an unquestionable consumer benefit, in terms of screening out harmful UV rays, and the alternatives – organic small-molecule sunscreens – are far from being above suspicion. But, as pointed out by the EU’s Scientific Committee on Consumer Products, there does remain uncertainty about the effect of titanium dioxide nanoparticles on damaged and sun-burned skin. Another issue, recently highlighted by Andrew Maynard, is the degree to which the action of light on TiO2 nanoparticles generates reactive and potentially damaging free radicals. This photocatalytic activity can be suppressed by choosing the right crystalline structure (the rutile form of titanium dioxide should be used, rather than anatase), by introducing dopants, and by coating the surface of the nanoparticles. The research cited by Maynard makes it clear that not all sunscreens use grades of titanium dioxide that completely suppress photocatalytic activity.

This poses a problem. Consumers don’t at present have ready access to information as to whether nanoscale titanium dioxide is used at all, let alone whether the nanoparticles in question are in the rutile or anatase form. Here, surely, is a case where, if the companies following best practice provided more information, they might avoid having their reputation damaged by less careful operators.

Books that inspired me

I’ve just done a brief interview with a journalist for the BBC’s Focus magazine, about the three popular science books on nanotechnology that have most inspired me. I’ve already written about my nanotechnology bookshelf, but this time, when I came to choose my three favourite books to talk about, it turned out that they weren’t directly about nanotechnology at all. So here’s my alternative list of three non-nanotechnology books that I think all nanotechnologists could benefit from reading.

The New Science of Strong Materials by J.E. Gordon. To say that this is the best book ever written about materials science might not sound like especially high praise, but I was hugely inspired by this book when I read it as a teenager, and every time I re-read it I find in it another insight. It was first published in 1968, long before anyone was talking about nanotechnology, but it beautifully lays out the principles by which one might design materials from first principles, relating macroscopic properties to the ways in which their atoms and molecules are arranged – principles which even now are not as well known as they should be to people who write about nanotechnology. It’s a forward-looking book, but it’s also full of incidental detail about the history of technology and the science that has underlain the skills of craftsmen using materials through the ages. It also looks to the natural world, discussing what makes materials of biological origin, like wood, so good.

The Self-Made Tapestry by Philip Ball. Part of the appeal of this book is the beauty of the pictures, depicting the familiar natural patterns of clouds and sand-dunes, as well as the intricate nanoscale structure of self-assembled block copolymer phases and the shells of diatoms. But alongside the illustrations there is an accurate and clear account of the principles of self-assembly and self-organisation that cause these intricate patterns to emerge, not through the execution of any centralised plan, but as a result of simple rules governing the interactions of the components of these systems.

Out of Control by Kevin Kelly. This is also about emergence, but it casts its net much more widely, to consider swarm behaviour in insects, economics and industrial ecologies, and flocks of insect-like robots. The common theme is the idea that one can gain power by relinquishing control, harnessing the power of adaptation and evolution in complex systems in which non-trivial behaviour arises from the collective actions of many interacting objects or agents. The style is evangelical, perhaps to the extent of overselling some of these ideas, and some may, like me, not be wholly comfortable with the libertarian outlook that underlies the extension of these ideas into political directions, but I still find it hugely provocative and exciting.

In Richmond, VA

I’m making a brief visit to Virginia to talk to high school students and others about my book, Soft Machines. It’s in connection with a visiting author program for the Chesterfield County school system, initiated by Prof Krishan Aggarwal, from Virginia State University; each year high school students in the County schools get to read a science book in class and the author comes to discuss it with them. So far I’ve talked to students in Monacan High School and L.C. Bird High School, as well as spending an afternoon with the staff of Richmond’s MathScience Innovation Centre and local science teachers, who have been developing sets of lesson materials about nanotechnology for high school students, and have clearly been thinking hard about how to convey some of the developing concepts of nanotechnology to their students. I’m just about to go back to L.C. Bird High School for a public lecture and panel discussion. I’ve been hugely impressed so far by the thought that’s gone into the questions being put to me; it’s been a pleasure to interact with such an engaged group of students. My thanks to Krishan and to Dr Jeremy Lloyd, from the Chesterfield County schools, for setting this up and looking after me.

What’s meant by “food nanotechnology”?

A couple of weeks ago I took part in a dialogue meeting in Brussels organised by the CIAA, the Confederation of the Food and Drink Industries of the EU, about nanotechnology in food. The meeting involved representatives from big food companies, from the European Commission and agencies like the European Food Safety Authority, together with consumer groups like BEUC, and the campaigning group Friends of the Earth Europe. The latter group recently released a report on food nanotechnology – Out of the laboratory and on to our plates: Nanotechnology in food and agriculture; according to the press release, this “reveals that despite concerns about the toxicity risks of nanomaterials, consumers are unknowingly ingesting them because regulators are struggling to keep pace with their rapidly expanding use.” The position of the CIAA is essentially that nanotechnology is an interesting technology still at the research stage, rather than one that has yet made it into products. One can get a good idea of the research agenda of the European food industry from the European Technology Platform Food for Life. As the only academic present, I tried in my contribution to clarify a little the different things people mean by “food nanotechnology”. Here, more or less, is what I said.

What makes the subject of nanotechnology particularly confusing and contentious is the ambiguity of the definition of nanotechnology when applied to food systems. Most people’s definitions are something along the lines of “the purposeful creation of structures with length scales of 100 nm or less to achieve new effects by virtue of those length-scales”. But when one attempts to apply this definition in practice, one runs into difficulties, particularly for food. It’s this ambiguity that lies behind the difference of opinion we’ve heard about already today over how widespread the use of nanotechnology in foods is. On the one hand, Friends of the Earth says it knows of 104 nanofood products already on the market (and some analysts suggest the number may be more than 600). On the other hand, the CIAA maintains that, while active research in the area is going on, no actual nanofood products are yet on the market. In fact, both parties are, in their different ways, right; the problem is the ambiguity of the definition.

The issue is that food is naturally nano-structured, so that too wide a definition ends up encompassing much of modern food science, and indeed, if you stretch it further, some aspects of traditional food processing. Consider the case of “nano-ice cream”: the FoE report states that “Nestlé and Unilever are reported to be developing a nano-emulsion based ice cream with a lower fat content that retains a fatty texture and flavour”. Without knowing the details of this research, what one can be sure of is that it will involve essentially conventional food processing technology to control fat globule structure and size on the nanoscale. If the processing technology is conventional (and the economics of the food industry dictates that it must be), what makes this nanotechnology, if anything does, is the fact that analytical tools are available to observe the nanoscale structural changes that lead to the desirable properties. What makes this nanotechnology, then, is simply knowledge. In the light of the new knowledge that new techniques give us, we could even argue that some traditional processes, which it now turns out involve manipulating structure on the nanoscale to achieve desirable effects, would constitute nanotechnology if it were defined this widely. For example, traditional whey cheeses like ricotta are made by creating the conditions for the whey proteins to aggregate into protein nanoparticles. These subsequently aggregate to form the particulate gels that give the cheese its desirable texture.

It should be clear, then, that there isn’t a single thing one can call “nanotechnology” – there are many different technologies, producing many different kinds of nano-materials. These different types of nanomaterials have quite different risk profiles. Consider cadmium selenide quantum dots, titanium dioxide nanoparticles, sheets of exfoliated clay, fullerenes like C60, casein micelles, phospholipid nanosomes – the risks and uncertainties of each of these examples of nanomaterials are quite different and it’s likely to be very misleading to generalise from any one of these to a wider class of nanomaterials.

To begin to make sense of the different types of nanomaterial that might be present in food, there is one very useful distinction. This is between engineered nanoparticles and self-assembled nanostructures. Engineered nanoparticles are held together by covalent bonds, and are thus persistent and generally rather robust, though they may have important surface properties, such as catalytic activity, and they may be prone to aggregation. Examples of engineered nanoparticles include titanium dioxide nanoparticles and fullerenes.

In self-assembled nanostructures, by contrast, molecules are held together by weak forces, such as hydrogen bonds and the hydrophobic interaction. The weakness of these forces renders the structures mutable and transient; examples include soap micelles, protein aggregates (for example the casein micelles formed in milk), liposomes and nanosomes, and the microcapsules and nanocapsules made from biopolymers such as starch.

So what kind of food nanotechnology can we expect? Here are some potentially important areas:

• Food science at the nanoscale. This is about using a combination of fairly conventional food processing techniques supported by the use of nanoscale analytical techniques to achieve desirable properties. A major driver here will be the use of sophisticated food structuring to achieve palatable products with low fat contents.
• Encapsulating ingredients and additives. The encapsulation of flavours and aromas at the microscale, to protect delicate molecules and enable their triggered or otherwise controlled release, is already widespread, and it is possible that decreasing the lengthscale of these systems to the nanoscale might be advantageous in some cases. We are also likely to see a range of “nutraceutical” molecules come into more general use.
• Water-dispersible preparations of fat-soluble ingredients. Many food ingredients are fat-soluble; as a way of incorporating these in food and drink without fat, manufacturers have developed stable colloidal dispersions of these materials in water, with particle sizes in the range of hundreds of nanometres. For example, lycopene, which is familiar as the molecule that makes tomatoes red and which is believed to offer substantial health benefits, is marketed in this form by the German company BASF.

What matters in this discussion is clarity – definitions are important. We’ve seen discrepancies between estimates of how widespread food nanotechnology is in the marketplace now, and these discrepancies lead to unnecessary misunderstanding and distrust. Clarity about what we are talking about, and a recognition of the diversity of technologies involved, can help remove this misunderstanding and give us a sound basis for the sort of dialogue we’re participating in today.

From micro to nano for medical applications

I spent yesterday at a meeting at the Institute of Mechanical Engineers, Nanotechnology in Medicine and Biotechnology, which raised the question of what is the right size for new interventions in medicine. There’s an argument that, since the basic operations of cell biology take place on the nano-scale, that’s fundamentally the right scale for intervening in biology. On the other hand, given that many current medical interventions are very macroscopic, operating on the micro-scale may already offer compelling advantages.

A talk from Glasgow University’s Jon Cooper gave some nice examples illustrating this. His title was Integrating nanosensors with lab-on-a-chip for biological sensing in health technologies, and he began with some true nanotechnology. This involved a combination of fluid-handling systems for very small volumes with nanostructured surfaces, with the aim of detecting single biomolecules. This depends on a remarkable effect known as surface-enhanced Raman scattering. Raman scattering is a type of spectroscopy that can detect chemical groups, but normally with rather low sensitivity. But if one illuminates a metal surface with very sharp asperities, the light field very close to the surface is hugely magnified, increasing sensitivity by a factor of ten million or so. Systems based on this effect, using silver nanoparticles coated so that pathogens like anthrax will stick to them, are already in commercial use. But Cooper’s group uses, not free nanoparticles, but very precisely structured nanosurfaces. Using electron beam lithography, his group creates silver split-ring resonators – horseshoe shapes about 160 nm across. With a very small gap one can get field enhancements of a factor of one hundred billion, and it’s this that brings single-molecule detection into prospect.
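To put those enhancement figures in context, a commonly used rule of thumb for surface-enhanced Raman scattering is that the measured signal gain scales roughly as the fourth power of the local field enhancement. The short sketch below is a minimal back-of-the-envelope calculation, not anything from Cooper’s talk: it assumes (my assumption) that the ten-million and hundred-billion factors quoted above refer to overall signal enhancement, and simply inverts the fourth-power approximation to show the local field enhancements that would imply.

# Back-of-the-envelope SERS estimate: the signal enhancement factor (EF) is
# often approximated as the fourth power of the local field enhancement g.
# The 1e7 and 1e11 figures are assumed here to be signal enhancements;
# the resulting g values are illustrative only.

def field_enhancement_from_ef(ef):
    """Invert the EF ~ g**4 approximation to get the local field enhancement g."""
    return ef ** 0.25

for ef in (1e7, 1e11):
    g = field_enhancement_from_ef(ef)
    print(f"signal enhancement {ef:.0e} -> implied local field enhancement ~ {g:.0f}x")

# A signal gain of 1e7 corresponds to a local field enhanced roughly 56-fold,
# and 1e11 to roughly 560-fold, concentrated in the tiny gap of the resonator.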

On a larger scale, Cooper described systems to probe the response of single cells – his example involved using a single heart cell (a cardiomyocyte) to screen responses to potential heart drugs. This involved a picolitre-scale microchamber adjacent to an array of micron-sized thermocouples, which allow one to monitor the metabolism of the cell as it responds to a drug candidate. His final example was on the millimetre scale, though its sensors incorporated nanotechnology at some level. This was a wireless device incorporating an electrochemical blood sensor – the idea was that one would swallow it to screen for early signs of bowel cancer. Here’s an example where, obviously, smaller would be better, but how small does one need to go?

Nanoparticles down the drain

With significant amounts of nanomaterials now entering markets, it’s clearly worth worrying about what’s going to happen to these materials after disposal – is there any danger of them entering the environment and causing damage to ecosystems? These are the concerns of the discipline of nano-ecotoxicology; on the evidence of the conference I was at yesterday, on the Environmental effects of nanoparticles, at Birmingham, this is an expanding field.

From the range of talks and posters, there seems to be a heavy focus (at least in Europe) on those few nanomaterials which really are entering the marketplace in quantity – titanium dioxide, of sunscreen fame, and nano-silver, with some work on fullerenes. One talk, by Andrew Johnson, of the UK’s Centre for Ecology and Hydrology at Wallingford, showed nicely what the outline of a comprehensive analysis of the environmental fate of nanoparticles might look like. His estimate is that 130 tonnes of nano-titanium dioxide a year is used in sunscreens in the UK – where does this stuff ultimately go? Down the drain and into the sewers, of course, so it’s worth worrying about what happens to it then.

At the sewage plant, solids are separated from the treated water, and the first thing to ask is where the titanium dioxide nanoparticles go. The evidence seems to be that a large majority end up in the sludge. Some 57% of this treated sludge is spread on farmland as fertilizer, while 21% is incinerated and 17% goes to landfill. There’s work to be done, then, in determining what happens to the nanoparticles – do they retain their nanoparticulate identity, or do they aggregate into larger clusters? One needs then to ask whether those that survive are likely to cause damage to soil microorganisms or earthworms. Johnson presented some reassuring evidence about earthworms, but there’s clearly more work to be done here.

On the basis of a series of heroic assumptions, Johnson made some estimates of how many nanoparticles might end up in the river. Taking a worst-case scenario, with a drought and heatwave in the southeast of England (they do happen – I’m old enough to remember), he came up with an estimate of 8 micrograms/litre in the Thames, which is still more than an order of magnitude less than the concentration that has been shown to start to affect, for example, rainbow trout. This is reassuring, but, as one questioner pointed out, one still might worry about the nanoparticles accumulating in sediments to the detriment of filter feeders.
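To make the arithmetic behind this sort of estimate concrete, here is a minimal sketch of the mass-balance reasoning. The 130 tonnes per year figure and the sludge disposal split (57% to farmland, 21% incinerated, 17% to landfill) are as reported above; the fraction of particles passing through to treated effluent and the drought-condition river flow are hypothetical placeholders of my own, not Johnson’s actual assumptions, so the numbers that come out are illustrative only.

# Illustrative mass balance for nano-TiO2 from UK sunscreens (not Johnson's model).
annual_use_tonnes = 130.0          # estimated UK use in sunscreens per year (from the talk)

# Partitioning at the sewage works: assume (hypothetically) 95% is captured in sludge,
# consistent with "a large majority" ending up there.
fraction_to_sludge = 0.95          # placeholder assumption
sludge_tonnes = annual_use_tonnes * fraction_to_sludge
effluent_tonnes = annual_use_tonnes - sludge_tonnes

# Fate of the treated sludge, using the percentages quoted above.
print(f"to farmland:  {sludge_tonnes * 0.57:5.1f} t/yr")
print(f"incinerated:  {sludge_tonnes * 0.21:5.1f} t/yr")
print(f"to landfill:  {sludge_tonnes * 0.17:5.1f} t/yr")

# Worst-case river concentration: all effluent-borne particles discharged into a
# river running at a (hypothetical) drought flow.
drought_flow_m3_per_s = 25.0       # placeholder low summer flow, not a measured value
litres_per_year = drought_flow_m3_per_s * 1000 * 3600 * 24 * 365
micrograms_per_year = effluent_tonnes * 1e12   # 1 tonne = 1e12 micrograms
concentration_ug_per_l = micrograms_per_year / litres_per_year
print(f"worst-case concentration ~ {concentration_ug_per_l:.1f} micrograms/litre")

# With these placeholder numbers the answer comes out at roughly 8 micrograms/litre,
# the same ballpark as the figure quoted above, but a real estimate depends on many
# more factors (particle aggregation, removal efficiency, actual river flows).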

Responsible nanotechnology – from discourse to practice

Like many academics, I’ve come back from my summer holiday only to leave immediately for a flurry of conferences. This year has been particularly busy. Last week saw me give a talk at a conference on phase separation in Cambridge, this week I’ve been in and out of a conference at Sheffield on thin polymer films, and next week I’m giving talks successively at one conference honouring Dame Julia Higgins and another on the environmental effects of nanoparticles. Yesterday, though, I found myself not amongst scientists, but in the Manchester Business School for a conference on Nanotechnology, Society and Policy.

There were some interesting and provocative talks looking at the empirical evidence for the development, or otherwise, of regional clusters with particular strengths in nanotechnology; under discussion was the issue of whether new industries based on nanotechnologies would inevitably be attracted to existing technological clusters like Silicon Valley and the Boston area, or whether the diverse nature of the technologies grouped under this banner would diffuse this clustering effect.

In the governance section, the University of Twente’s Arie Rip, one of the doyens of European science studies, spoke on the title “Discourse and practice of responsible nanotechnology development”. I must admit that I’d had a preconception that this would be a talk critical of the way so many people had adopted the rhetoric of “responsible development” simply as a way of promoting the subject and deflecting criticism. However, Rip’s message was actually rather more optimistic than this. His view was that, however much such talk begins as rhetoric, it does translate into real practice; the interactions we’re seeing between technology and society – in the form of public dialogue, discussions between companies and campaigning groups, and the development of codes of practice – really are creating “soft structures” and “soft law” that are beginning to have a real, and beneficial, effect on the way these technologies are being introduced.