Soft soaping hard matter

Self-assembly is an elegant and scalable way of making complex nanoscale structures. But it only works for soft matter – the archetypal self-assembling systems are bars of soap and pots of hair gel; they’re soft because the energies that cause their components to stick together are comparable with the energies of thermal agitation. Is there a way of overcoming this limitation, and using self-assembly to make complex nanoscale structures from hard materials, like ceramics and (inorganic) semiconductors? There is – one can use the soft structure to template the synthesis of the harder material, so that the hard material takes on the intricate architecture of the soft, self-assembled template one starts with. It’s possible to use this templating technique to make glass-like materials, using so-called sol-gel chemistry. But up to now it’s not been possible to make templated, nanostructured elemental semiconductors like silicon or germanium. Two papers in this week’s Nature (Editor’s summary, with links to the full articles and commentary, for which a subscription is required) report the achievement of this goal for the case of germanium.

To do this, the first requirement is a chemistry for synthesising germanium that works in solution at moderate temperatures. No such chemistry exists that uses water, so another solvent system is needed, together with a compatible surfactant that self-assembles in this solvent. The two papers overcome these barriers in ways that differ in detail, but are similar in principle. Sarah Tolbert’s group, from UCLA, used ethylenediamine as the solvent and the cationic surfactant CTEAB (a molecule very similar to one found in some mild domestic disinfectants) to form the self-assembled nanostructures, which in their case took the form of hexagonally packed rods. Mercouri Kanatzidis’s group at Michigan State used formamide as the solvent and a somewhat different cationic surfactant (EMBHEAB). Both groups used variants of the so-called Zintl salts, in which germanium is combined with a reactive metal like potassium or magnesium.

In both cases the germanium is disordered on the atomic scale, but has good long-range order on larger length-scales, reflecting the relative perfection of the original self-assembled soapy structure. The UCLA group managed to remove the surfactant, leaving nicely hydrogen-terminated germanium. The Michigan State group were unable to get rid of their surfactant, but on the positive side the structure they formed was the very beautiful and potentially useful gyroid phase, a high-symmetry structure (see the picture) in which both the material and the pores are continuous. Immediate uses of these structures follow from the fact that the optoelectronic properties of the material are strongly affected by its nanostructured form, and can be further changed by the adsorption of matter on the semiconductor’s surfaces, offering potential sensor applications.

The gyroid phase, a cubic bicontinuous structure formed by some self-assembling surfactant systems. This structure has now been formed from elemental germanium using a templating process.

Debating Radical Nanotechnology

Philip Moriarty reports that the Nottingham Nanotechnology Debate can now be viewed on streaming video here. The debate, held last summer, featured two proponents of Drexler’s vision of molecular nanotechnology, Josh Storrs Hall and David Forrest, discussing the feasibility of these visions with a couple of more sceptical observers, myself and Saul Tendler, a bionanotechnologist from Nottingham University. The audience included many distinguished nanoscientists, and even with the video available, it’s worth reading the transcript of the debate, which can be downloaded from The Nottingham Nanoscience Group’s webpages, if only to identify the authors of the many perceptive questions.

The aftermath of the debate included these additional points from David Forrest, which attracted some discussion on Soft Machines here. For my part, I organised my thoughts on the problems which I think the MNT program needs to address and overcome in this post: Six Challenges for Molecular Nanotechnology.

So where does the debate go now? I can’t conceal my disappointment that the MNT community has reacted with complete indifference to this set of challenges, which I set out in as constructive and concrete a way as possible. Nanodot, the blog of the Foresight Nanotech Institute, simply ignored it. The most vocal proponents of the MNT position are now to be found at the Centre for Responsible Nanotechnology, but rational discussion of MNT in that forum is hampered by the fact that its proprietors simply refuse to engage in debate with informed critics such as myself and Philip Moriarty, preferring instead to assert, in the absence of any evidence, that the MNT revolution comes ever nearer. The usual outcome of a refusal to engage with people outside one’s own circle of believers is, of course, complete marginalisation. I regret this situation, because even though I think many of the ideas underlying MNT are flawed, Drexler’s writings have been very valuable in highlighting the potential of radical nanotechnology, and the process of thinking through what might work and what won’t is likely to be a very productive way of establishing research directions.

Public engagement in theory and practice

Tuesday saw me both practising and preaching public engagement – I talked to about a hundred 15-year-olds about nanotechnology in Sheffield in the morning, and then scooted to London to make a guest appearance in front of the Science and Society Strategy Panel of the Biotechnology and Biological Sciences Research Council, the body which doles out research funding on behalf of the UK government for those bits of biological science which are not directly clinically relevant. This panel represents the BBSRC’s first attempt to incorporate public engagement in its strategy setting; given the recent history of biotechnology in the UK it’s not surprising that many of the panel are veterans (from both sides) of the GM wars.

This is a relatively new committee, and they are still working out how their deliberations might actually have tangible impacts. The meeting had a wide-ranging discussion about the practicalities and realities of public engagement; one piece of work that they have recently commissioned, on public attitudes to ageing research, will interest transhumanists and life extension enthusiasts. Details will have to wait until the report, and the committee’s response to it, have been made public.

Printing devices

I spent a couple of days earlier this week at a conference in Manchester called “Printing of Functional Materials”. The premise of this meeting was the growing realisation that printing technologies, both traditional ones like silk-screen and gravure, and modern ones like ink-jet, offer scalable, cheap and flexible ways of precisely depositing small quantities of materials on surfaces. Traditional inks are just vehicles for pigments to create static images, but there’s no reason why you can’t use printing to deposit materials that are conductors or semiconductors of electricity, that are electroluminescent, or that have some biological functionality. Indeed, as one of the organisers of the conference has shown, one can even use ink-jet printing to deposit living human cells, with potential applications in tissue engineering.

The degree of commercial interest in these technologies was indicated by the fact that, unusually for an academic conference, more than a third of the attendees were from the commercial sector. Many of these were from the cluster of small and medium-sized companies developing ink-jet technologies around Cambridge, but European and American concerns were well represented too. My impression is that the sector closest to maturity in this area is electrically functional devices, where there’s a great deal of momentum to drive down the cost of RFID and to develop cheap, flexible displays. But there are still many materials issues to solve. It’s not easy to get a complex fluid to flow in the right way to form the tiny, well-defined droplets needed for good ink-jet printing, and formulating the ink in a way that makes it dry to give the best properties is awkward too. Silver inks illustrate the problems – commercial inks for writing conducting lines sometimes use silver nanoparticles. Making the silver particles very small helps them coalesce into a continuous silver layer; the melting point of materials is lowered when they are in nanoparticulate form, so they sinter at lower temperatures. But then you have to work hard to stop the particles aggregating in the ink (it’s particularly undesirable, of course, if they aggregate in the ink-jet nozzle and block it up). To stabilise them, you need to coat them with surfactants or polymer molecules. But this organic coating then needs to be driven off by a heating step to get good conduction, and this compromises your ability to print on paper and plastics, which can’t take much heating. It seems to me that this technology has a huge amount of promise, but there’s a lot of materials science and colloid science to be done before it can fulfil its potential.
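To put a rough formula behind that melting point depression (this is a standard textbook estimate, not something taken from the conference talks), the Gibbs–Thomson relation for a spherical particle of radius r gives

\[
T_m(r) \;\approx\; T_m^{\mathrm{bulk}}\left(1 - \frac{2\sigma_{sl}}{\rho_s\,\Delta H_f\,r}\right)
\]

where T_m^bulk is the bulk melting temperature, σ_sl the solid–liquid interfacial energy, ΔH_f the specific latent heat of fusion and ρ_s the density of the solid. The 1/r dependence is the important point: the effect is negligible for micron-sized particles but becomes significant for particles a few nanometres across, which is what makes low-temperature sintering of silver nanoparticle inks feasible.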

Ken Donaldson on nanoparticle toxicology

I’ve been running in and out of a three-day course on nanotechnology intended for chemists working in the chemical industry (Nanotechnology for Chemists), organised by me and my colleagues at Sheffield on behalf of the Royal Society of Chemistry. Yesterday I swapped from being a lecturer to being a pupil, to hear a lecture about nanoparticle toxicity given by Ken Donaldson of the University of Edinburgh, the UK’s leading toxicologist specialising in the effects of environmental nanoparticles. This is a brief summary of his lecture as I understood it (all misunderstandings and misapprehensions are my fault, of course).

His lecture began with the disclaimer that most nanotechnology won’t pose a health risk at all; what’s at issue is the single class of free (i.e. not incorporated in a matrix, as in a nanocomposite material), manufactured, insoluble nanoparticles. Of the potential portals of entry – the lungs, the gut and the skin – he felt that the lungs presented the main danger, so the chief potential hazard, both for industrial workers and for users, is nanoparticles in the air.

It’s been known for a long time that particles cause lung disease; he gave a number of examples (illustrated by gruesome photographs), including coal miner’s lung, silicosis and cancer from quartz particles, and the diseases caused by asbestos. The latter include mesothelioma, a particularly nasty cancer seen only in people exposed to asbestos, characterised by a long latency period and a uniformly fatal outcome. So it’s clear that particles do accumulate in the lungs.

In terms of what we know about the effect of nanoparticle exposures, there are four distinct domains. What we know most about are the nanoparticles derived from combustion. We also know a fair amount about bulk manufactured particles, like titanium dioxide, which have been around a long time and which typically contain significant fractions of nanosized particles. Of course, the effects of nanoparticles used in medical contexts have been well studied. The final area is the least studied – the effect of engineered free nanoparticles.

So what can we learn from environmental nanoparticles? The origin of these particles is overwhelmingly combustion; in the UK only 13% of exposure comes from non-combustion sources, mostly the processes of natural atmospheric chemistry. The most important class of nanoparticles by far is that derived from traffic exhaust, which accounts for 60% of exposure. These particles have a basic size of tens of nanometers, though with time they clump into micron-sized aggregates, which are very easily respirable.

These particles have no problem getting deep within the lungs. Of the 40 nm particles, perhaps 30% reach the very delicate tissues in the periphery of the lung, where they deposit very efficiently (smaller particles are actually less effective at getting to the lung, as they tend to be taken up in the nose). The structures they interact with deep in the lung – the bronchial epithelial cells – are very small and fragile, and the distances separating the airways from the blood are very small. Here the particles cause inflammation, which is essentially a defence reaction. We’re familiar with inflammation of the skin, marked by swelling – fluid bathes the region and white blood cells engulf damaged tissue and microbes, leading to pain, heat, redness and loss of function. In the lung, of course, one can’t see the inflammation, and there are no pain receptors, so it can be less obvious, though the swelling can easily cut off air flow, leading to very disabling and life-threatening conditions.

It’s believed that there is a generic mechanism for lung inflammation by combustion-derived nanoparticles, despite the wide variety of chemistries found in these particles. All have in common the production of free radicals, which leads to oxidative stress, which in turn leads to inflammation. Different types of nanoparticle cause oxidative stress through different mechanisms: metal nanoparticles – as found in welding fumes – act through one mechanism, surface-borne organics (as found in soot) through another, and materials like carbon black, titanium dioxide and polystyrene latex, which are not very intrinsically toxic, operate through some generic surface mechanism. Clearly it is the surface area that is important, so nanoparticles cause more inflammation than the same mass of fine respirable particles, in the 2–3 micron range, composed of the same material. In passing one can note that diesel fumes are particularly harmful, dealing a triple blow through their combination of surface area, metals and organics. These pathways to oxidative stress are now very well understood, so this is a well-founded paradigm.
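To put a rough number on the surface-area argument (my own back-of-the-envelope estimate, not a figure from the lecture): for spherical particles of radius r and density ρ, the surface area per unit mass is

\[
\frac{S}{m} \;=\; \frac{4\pi r^2}{\tfrac{4}{3}\pi r^3 \rho} \;=\; \frac{3}{\rho\, r},
\]

so at a fixed inhaled mass the exposed surface scales as 1/r. A 40 nm particle therefore presents roughly sixty times as much surface area per gram as a 2.5 micron particle of the same material – one reason why mass-based exposure limits can badly underestimate the dose that actually matters.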

Inflammation due to the oxidative stress caused by nanoparticles from pollution then leads to a number of different diseases, including cardiovascular disease, asthma, scarring, cancer and chronic obstructive pulmonary disease. Their involvement in cardiovascular disease is perhaps unexpected, and to understand it we need to understand where the nanoparticles go. We have some rather hypothetical toxicokinetics based on a few experiments using radioactive, insoluble tracer particles. A few studies suggest that, having entered the nose or lung, particles can go directly to the brain. The route from the lung to the blood is well understood, and once in the blood there are many possible ultimate destinations. It’s doubtful that nanoparticles could enter the blood directly from the gut or skin. A mechanism for the involvement of nanoparticles in cardiovascular disease is suggested by studies in which healthy Swedish student volunteers rode a bike in an atmosphere of diesel fumes (at levels comparable to highly polluted city streets). This led to measurable vascular dysfunction throughout the whole body, and a reduction in the ability to dissolve blood clots (similar effects are observed in smokers, who self-administer nanoparticles). This suggests that pollution nanoparticles could cause cardiovascular disease either through lung inflammation or through the direct effect of blood-borne particles, leading to the worsening of coronary artery disease or increased blood clotting.

A study using radioactive carbon has suggested that nanoparticles can enter the brain directly from the nose, via the olfactory bulb – this is the route into the central nervous system used by the polio virus, and it doesn’t require crossing the blood-brain barrier. Studies of brain tissue from people living in highly polluted cities like Mexico City have shown pathological changes similar to those seen in victims of Parkinson’s and Alzheimer’s disease, occurring as a result of the effect of pollution-derived nanoparticles.

The potential comparison between carbon nanotubes and asbestos is worth considering. Very large exposures to asbestos in the past have caused many cases of fatal lung disease. The characteristics of asbestos fibres which cause this disease – and these characteristics are physical, not chemical – are that they are thin, persistent in the body, and long. Carbon nanotubes certainly meet the first two requirements, but it is not obvious that they meet the third. Asbestos fibres need to be about 20 microns long to show toxic effects; if they are milled to shorter lengths the toxicity goes away. Carbon nanotubes of this length tend to curl up and clump. On the other hand, rat experiments on the effect of nanotubes on the lungs show distinctive fibrosing lesions. Donaldson has just written an extensive review article about nanotube toxicity which will be published soon.

From the regulatory point of view there are some difficulties, as regulations usually specify exposure limits in terms of mass concentration, while clearly it is surface area that is important. In the USA, NIOSH is thinking of reducing the limit for ultrafine TiO2 by a factor of 5. Fibres, though, are regulated by number density. The difficulties for carbon nanotubes are that they are probably too small to see by standard microscopy, and that they curl up, so although they should be classified as fibres by WHO definitions, they probably aren’t going to be detected. In terms of workplace protection, local exhaust ventilation is much the best option, with almost all masks being fairly useless. This applies, for example, to the masks used by some cyclists in polluted cities. Those cyclists can, however, take comfort from the fact that their exposure to nanoparticles is significantly smaller than that of the people inside the vehicles who are causing the pollution.

My conclusion, then, is that if you are worried about inhaling free nanoparticles (and you should be) you should stop travelling by car.

Regulatory concerns about nanotechnology and food

The UK Government’s Food Standards Agency has issued a draft report about the use of nanotechnology in food and the regulatory implications this might have. The report can be downloaded here; the draft report is now open for public consultation and comments are invited by July 14th.

Observers could be forgiven some slight bemusement when it comes to the potential applications of nanotechnology to food, in that, depending entirely on one’s definition of nanotechnology, these could encompass either almost everything or almost nothing. As the FSA says on its website: “In its widest sense, nanotechnology and nanomaterials are a natural part of food processing and conventional foods, as the characteristic properties of many foods rely upon nanometre sized components (e.g. nanoemulsions and foams).” To give just one example, the major protein component of milk – casein – is naturally present in the form of clusters of molecules tens of nanometers in size, so most of the processes of the dairy industry involve the manipulation of naturally occurring nanoparticles. On the other hand, in terms of the narrow focus on engineered nanoparticles that has developed at the applications end of nanotechnology, the current impact on food is rather small. In fact, the FSA states categorically in the report: “The Agency is not aware of any examples of manufactured nanoparticles or other nanomaterials being used in food currently sold in the UK.”

In terms of the narrow focus on engineered nanoparticles, it is clear that there is indeed a regulatory gap at the moment. The FSA states that, if a food ingredient were to be used in a new, nanoscale form, currently there would be no need to pass any new regulatory hurdles. However, the FSA believes that a more general protection would step in as a backstop – “in such cases, the general safety articles of the EU Food Law Regulation (178/2002) would apply, which require that food placed on the market is not unsafe.” So, how likely is it that this situation, and subsequent problems, might arise? One needs first to look at those permitted food additives that are essentially insoluble in oil or water. These include (in the EU) some inorganic materials that have been used in nanoparticulate form in non-food contexts, including titanium dioxide, silicon dioxide, some clay-based materials, and the metals aluminium, silver and gold. Insoluble organic materials include cellulose, in both powdered and microcrystalline forms. The latter is an interesting case, because it provides a precedent for regulations that do specify size limits – the FSA report states that “The only examples in the food additives area that specifically limits the presence of small particles is the specification for microcrystalline cellulose, where the presence of small particles (< 5 microns) is limited because of uncertainties over their safety.” The FSA seems fairly confident that, if necessary, similar amendments could quickly be made for other materials. But there remains the problem that currently there isn’t, as far as I can see, a fail-safe method by which the FSA could be alerted to the use of such nanomaterials and any problems they might cause. On the other hand, it’s not obvious to me why one might want to use these sorts of materials in a nanoparticulate form in food. Titanium dioxide, for example, is used essentially as a white pigment, so there wouldn’t be any point using it in a transparent, nanoscale form.

Synthetic biology – the debate heats up

Will it be possible to radically remodel living organisms so that they make products that we want? This is the ambition of one variant of synthetic biology; the idea is to take a simple bacterium, remove all unnecessary functions, and then patch in the genetic code for the functions we want. It’s clear that this project is likely to lead to serious ethical issues, and the debate about these issues is beginning in earnest today. At a conference being held in Berkeley today, Synthetic Biology 2.0, the synthetic biology research community is holding discussions on biosecurity & risk, public understanding & perception, ownership, sharing & innovation, and community organization, with the aim of developing a framework for the self-regulation of the field. Meanwhile, a coalition of environmental NGOs, including Greenpeace, Genewatch, Friends of the Earth and ETC, has issued a press release calling on the scientists to abandon this attempt at self-regulation.

Some of the issues to be discussed by the scientists can be seen on this wiki. One very prominent issue is the possibility that malevolent groups could create pathogenic organisms using synthetic DNA, and there is a lot of emphasis on what safeguards can be put in place by the companies that supply synthetic DNA with a specified sequence. This is a very important problem – the idea that it is now possible to create from scratch pathogens like the virus behind the 1918 Spanish flu pandemic frightens many people, me included. But it’s not going to be the only issue to arise, and I think it is very legitimate to wonder whether community self-regulation is sufficient to police such a potentially powerful technology. The fact that much of the work is going on in commercial organisations is a cause for concern. One of the main players in this game is Synthetic Genomics, Inc., which was set up by Craig Venter, who already has some form in the matter of not being bound by the consensus of the scientific community.

In terms of the rhetoric surrounding the field, I’d also suggest that the tone adopted in articles like this one in this week’s New Scientist, Redesigning life: Meet the biohackers (preview; subscription required for full article), is unhelpful and unwise, to say the least.

Nanoscale ball bearings or grit in the works?

It’s all too tempting to imagine that our macroscopic intuitions can be transferred to the nanoscale world, but these analogies can be dangerous and misleading. For an example, take the case of buckyball bearings. It seems obvious that the almost perfectly spherical C60 molecule, buckminsterfullerene, would be an ideal ball bearing on the nanoscale. This intuition underlies, for example, the design of the “nanocar”, from James Tour’s group at Rice, that recently made headlines. But a recent experimental study of nanoscale friction by Jackie Krim, from North Carolina State University, shows that this intuition may be flawed.

The study, reported in last week’s Physical Review Letters (abstract here, subscription required for the full article), directly measured the friction experienced by a thin film sliding on a surface coated with a layer of buckminsterfullerene molecules. Krim was able to compare the friction observed when the balls were free to rotate directly with that observed when the balls were fixed. Surprisingly, the friction was higher for the rotating layers – here the ball-bearing analogy is seductive, but wrong.

In Seville

I’ve been in Seville for a day or so, swapping the Derbyshire drizzle for the Andalusian sun. I was one of the speakers in a meeting about Technology and Society, held in the beautiful surroundings of the Hospital de los Venerables. The meeting was organised by the Spanish writer and broadcaster Eduardo Punset, who also interviewed me for the science programme he presents on Spanish TV.

As well as my talk and the TV interview, I also took part in a panel discussion with Alun Anderson, the former editor-in-chief of New Scientist. This took the form of a conversation between him and me, with an audience listening in. I hope they enjoyed it; I certainly did. As one would imagine, Anderson is formidably well-informed about huge swathes of modern science, and very well-connected with the most prominent scientists and writers. Among the topics we discussed were the future of energy generation and transmission, prospects for space elevators and electronic newspapers, Craig Venter’s minimal genome project, and whether we believed the premise of Ray Kurzweil’s most recent book, ‘The Singularity is Near’. Alun announced he would soon be appearing on a platform with a live hologram of Ray Kurzweil, or thereabouts. However, he did stress that this was simply because the corporeal Kurzweil couldn’t get to the venue in person, not because he has prematurely uploaded.