Nanotechnology and visions of the future (part 2)

This is the second part of an article I was asked to write to explain nanotechnology and the debates surrounding it to a non-scientific audience with interests in social and policy issues. This article was published in the Summer 2007 issue of the journal Soundings. The first installment can be read here.

Ideologies

There are many debates about nanotechnology: what it is, what it will make possible, and what its dangers might be. On one level these may seem to be very technical in nature. A question about whether a Drexler-style assembler is technically feasible can rapidly descend into details of surface chemistry, while issues about the possible toxicity of carbon nanotubes turn on the procedures for reliable toxicological screening. But it’s at least arguable that the focus on the technical obscures the real causes of the argument, which are actually clashes of ideology. What are the ideological divisions that underlie debates about nanotechnology?

Transhumanism
Underlying the most radical visions of nanotechnology is an equally radical ideology – transhumanism. The basis of this movement is a teleological view of human progress which sees technology as the vehicle, not just for improving the lot of humanity, but for transcending those limitations that non-transhumanists would consider an inevitable part of the human condition. The most pressing of these limitations is, of course, death, so transhumanists look forward to nanotechnology providing a permanent solution to this problem. In the first instance, this would be effected by nanomedicine, which they anticipate will make it possible to repair damage cell by cell. Beyond this, some transhumanists believe that computers of such power will become available that they will constitute true artificial intelligence. At this point, they imagine a merging of human and machine intelligence, in a way that would effectively constitute the evolution of a new and improved version of humankind.

The notion that the pace of technological change is continually accelerating is an article of faith amongst transhumanists. From this follows the idea that the accelerating rate of change will bring us to a point beyond which the future is literally inconceivable. This point they refer to as “the singularity”, and discussions of this hypothetical event take on a highly eschatological tone. This is captured in science fiction writer Cory Doctorow’s dismissive but apt phrase for the singularity: “the rapture of the nerds”.

This worldview carries with it the implication that an accelerating pace of innovation is not just a historical fact, but also a moral imperative. This is because it is through technology that humanity will achieve its destiny, which is nothing less than to transcend its own current physical and mental limitations. The achievement of radical nanotechnology is central to this project, and for this reason transhumanists tend to share a strong conviction not only that radical nanotechnology along Drexlerian lines is possible, but also that its development is morally necessary.

Transhumanism can be considered to be the extreme limit of views that combine strong technological determinism with a highly progressive view of the development of humanity. It is a worldwide movement, but it’s probably fair to say that its natural home is California, its main constituency is amongst those involved in information technology, and it is associated predominantly, if not exclusively, with a strongly libertarian streak of politics, though paradoxically not dissimilar views seem to be attractive to a certain class of former Marxists.

Given that transhumanism as an ideology does not seem to have a great deal of mass appeal, it’s tempting to underplay its importance. This may be a mistake; amongst its adherents are a number of figures with very high media profiles, particularly in the United States, and transhumanist ideas have entered mass culture through science fiction, films and video games. Certainly some conservative and religious figures have felt threatened enough to express some alarm, notably Francis Fukuyama, who has described transhumanism as “the world’s most dangerous idea”.

Global capitalism and the changing innovation landscape
If it is the radical futurism of the transhumanists that has put nanotechnology into popular culture, it is the prospect of money that has excited business and government. Nanotechnology is seen by many worldwide as the major driver of economic growth over the next twenty years, filling the role that information technology has filled over the last twenty years. Breathless projections of huge new markets are commonplace, with the prediction by the US National Nanotechnology Initiative of a trillion-dollar market for nanotechnology products by 2015 being the most notorious. It is this kind of market projection that underlies a worldwide spending boom on nanotechnology research, encompassing not only the established science and technology powerhouses like the USA, Germany and Japan, but also fast-developing countries like China and India.

The emergence of nanotechnology has corresponded with some other interesting changes in the commercial landscape in technologically intensive sectors of the economy. The types of incremental nanotechnology that have been successfully commercialised so far have involved nanoparticles, such as the ones used in sunscreens, or coatings, of the kind used in stain-resistant fabrics. This sort of innovation is the province of the speciality chemicals sector, and one cynical view of the prominence of the nanotechnology label amongst new and old companies is that it has allowed companies in this rather unfashionable sector of the market to rebrand themselves as being part of the newest new thing, with correspondingly higher stock market valuations and easier access to capital. On the other hand, this does perhaps signal a more general change in the way science-driven innovations reach the market.

Many of the large industrial conglomerates that were such a prominent part of the industrial landscape in Western countries up to the 1980s have been broken up or drastically shrunk. Arguably, the monopoly rents that sustained these combines were what made possible the very large and productive corporate laboratories that were the source of much innovation at that time. That landscape has been replaced by a much more fluid scene in which many functions of companies, including research and innovation, have been outsourced. Here one finds nanotechnology companies like Oxonica, which are essentially holding companies for intellectual property, with functions that in the past would have been regarded as of core importance, such as manufacturing and marketing, outsourced to contractors, often located in different countries.

Even the remaining large companies have embraced the concept of “open innovation”, in which research and development is regarded as a commodity to be purchased on the open market (and, indeed, outsourced to low cost countries) rather than a core function of the corporation. It is in this light that one should understand the new prominence of intellectual property as something fungible and readily monetised. Universities and other public research institutes, strongly encouraged to seek new sources of funding other than direct government support, have made increasing efforts to spin-out new companies based on intellectual property developed by academic researchers.

In the light of all this, it’s easy to see nanotechnology as one aspect of a more general shift to what the social scientist Michael Gibbons has called Mode 2 knowledge production[4]. In this view, traditional academic values are being eclipsed by a move to more explicitly goal-oriented and highly interdisciplinary research, in which research priorities are set not by the values of the traditional disciplines, but by perceived market needs and opportunities. It is clear that this transition has been underway for some time in the life sciences, and in this view the emergence of nanotechnology can be seen as a spread of these values to the physical sciences.

Environmentalist opposition
In the UK at least, the opposition to nanotechnology has been spearheaded by two unlikely bedfellows. The issue was first propelled into the news by the intervention of Prince Charles, who raised the subject in newspaper articles in 2003 and 2004. These articles directly echoed concerns raised by the small campaigning group ETC[5]. ETC cast nanotechnology as a direct successor to genetic modification; to summarise this framing, whereas in GM scientists had directly intervened in the code of life, in nanotechnology they meddle with the very atomic structure of matter itself. ETC’s background included a strong record of campaigning on behalf of third world farmers against agricultural biotechnology, so in their view nanotechnology, with its spectre of the possible patenting of new arrangements of atoms and the potential replacement of commodities such as copper and cotton by nanoengineered substitutes controlled by multinationals, was to be opposed as an intrinsic part of the agenda of globalisation. Complementing this rather abstract critique was a much more concrete concern that nanoscale materials might be more toxic than their conventional counterparts, and that current regulatory regimes for the control of environmental exposure to chemicals might not adequately recognise these new dangers.

The latter concern has gained a considerable degree of traction, largely because there has been a very widespread consensus that the issue has some substance. At the time of the Prince’s intervention in the debate (and quite possibly because of it) the UK government commissioned a high-level independent report on the issue from the Royal Society and the Royal Academy of Engineering. This report recommended a program of research and regulatory action on the subject of possible nanoparticle toxicity[6]. Public debate about the risks of nanotechnology has largely focused on this issue, fuelled by a government response to the Royal Society report that has been widely considered quite inadequate. Nonetheless, one may regret that the debate has become so narrowly focused on this rather technical issue of risk, to the exclusion of wider questions about the potential impacts of nanotechnology on society.

To return to the more fundamental worldviews underlying this critique of nanotechnology, whether they be the rather romantic, ruralist conservatism of the Prince of Wales, or the anti-globalism of ETC, the common feature is a general scepticism about the benefits of scientific and technological “progress”. An extremely eloquent exposition of one version of this point of view is to be found in a book by US journalist Bill McKibben[7]. The title of McKibben’s book – “Enough” – is a succinct summary of its argument; surely we now have enough technology for our needs, and new technology is likely only to lead to further spiritual malaise, through excessive consumerism, or in the case of new and very powerful technologies like genetic modification and nanotechnology, to new and terrifying existential dangers.

Bright greens
Despite the worries about the toxicology of nanoscale particles, and the involvement of groups like ETC, it is notable that all-out opposition to nanotechnology has not yet fully crystallised. In particular, groups such as Greenpeace have not yet articulated a position of unequivocal opposition. This reflects the fact that nanotechnology really does seem to have the potential to provide answers to some pressing environmental problems. For example, there are real hopes that it will lead to new types of solar cells that can be produced cheaply in very large areas. Applications of nanotechnology to problems of water purification and desalination have obvious potential impacts in the developing world. Of course, these kinds of problems have major political and social dimensions, and technical fixes by themselves will not be sufficient. However, the prospects that nanotechnology may be able to make a significant contribution to sustainable development have proved convincing enough to keep mainstream environmental movements at least neutral on the issue.

While some mainstream environmentalists may still remain equivocal in their view of nanotechnology, another group seems to be embracing new technologies with some enthusiasm as providing new ways of maintaining high standards of living in a fully sustainable way. Such “bright greens” dismiss the rejection of industrialised economies and the yearning to return to a rural lifestyle implicit in the “deep green” worldview, and look to the use of new technology, together with imaginative design and planning, to create sustainable urban societies[8]. In this point of view, nanotechnology may help, not just by enabling large scale solar power, but by facilitating an intrinsically less wasteful industrial ecology.

Conclusion

If there is (or indeed ever was) an “independent republic of science”, disinterestedly pursuing knowledge for its own sake, nanotechnology is not part of it. Nanotechnology, in all its flavours and varieties, is unashamedly “goal-oriented research”. This immediately raises the question: whose goals? It is this question that underlies recent calls for a greater degree of democratic involvement in setting scientific priorities[9]. It is important that these debates don’t simply concentrate on technical issues. Nanotechnology provides a fascinating and evolving example of the complexity of the interaction between science, technology and wider currents in society. Nanotechnology, along with other new and emerging technologies, will have a huge impact on the way society develops over the next twenty to fifty years. Recognising the importance of this impact does not by any means imply that one must take a technologically deterministic view of the future, though. Technology co-evolves with society, and the direction it takes is not necessarily pre-determined. Underlying the directions in which it is steered is a set of competing visions about the direction society should take. These ideologies, which are often left implicit and unexamined, need to be made explicit if a meaningful discussion of the implications of the technology is to take place.

[4] Gibbons, M., et al. (1994) The New Production of Knowledge. London: Sage.
[5] David Berube (in his book Nano-hype, Prometheus, NY 2006) explicitly links the two interventions, and identifies Zac Goldsmith, millionaire organic farmer and editor of “The Ecologist” magazine, as the man who introduced Prince Charles to nanotechnology and the ETC critique. This could be significant, in view of Goldsmith’s current prominence in Conservative Party politics.
[6] Nanoscience and nanotechnologies: opportunities and uncertainties, Royal Society and Royal Academy of Engineering (2004), available from http://www.nanotec.org.uk/finalReport.htm
[7] Enough: staying human in an engineered age, Bill McKibben, Henry Holt, NY (2003)
[8] For a recent manifesto, see Worldchanging: a user’s guide for the 21st century, Alex Steffen (ed.), Harry N. Abrams, NY (2006)
[9] See for example See-through Science: why public engagement needs to move upstream, Rebecca Willis and James Wilsdon, Demos (2004)

The limits of public engagement

Over on Nanodot, Christine Peterson picks up on some comments I made about public engagement in the Foreword to the final report of the Nanotechnology Engagement Group – Democratic technologies?. Having enumerated some of the problems and difficulties of seeking public engagement about nanotechnology, I finished with the positive words: “I believe that the activities outlined in this report are just the start of a very positive movement that seeks to answer a compelling question: how can we ensure that the scientific enterprise is directed in pursuit of societal goals that command broad democratic support?”

“That last question is a tough one,” Christine writes. She raises two interesting questions on the back of this. “Public research funds should go toward goals supported by the public, and our representative governmental systems are supposed to ensure that. Do they?” The record is mixed, of course, but I’m not convinced that science and conventional politics interact terribly well. The paradox of science is that its long term impacts may be very large, but in the short term there are always more urgent matters to deal with, and it is these issues, healthcare or economics, for example, that will decide elections. The elected politicians nominally in charge of public science budgets typically have many other responsibilities too, and their attention is often diverted by more immediate problems.

She goes on to ask “How about private research funds: can they pursue goals not supported by the majority? We don’t want a system where the public votes on how private science dollars are spent, do we?” In a way she starts to answer her own question – “Unless they are violating a specific law, presumably” – and indeed there are some goals of science that in most countries are outlawed, regardless of who is funding the work, most notably human reproductive cloning. But there are some interesting discussions to be had about less extreme cases. One of the major sources of private science dollars is the charitable foundations, such as the UK’s Wellcome Trust, which has £13 billion to play with, and the Bill and Melinda Gates Foundation, with its $33 billion. One could certainly imagine in principle a situation in which a foundation pursued a goal with only minority support, but in practice the big foundations seem to be commendably sensitive to public concerns, more so in many ways than government agencies.

Much applied science is done by public companies, and there it is the shareholders who have an obvious interest and responsibility. It’s interesting, for example, that in the UK one of the major driving forces behind the development of a “Responsible NanoCode” for business is a major asset manager, which manages investments in public companies on behalf of institutions such as pension funds and insurance companies. There is considerably less clarity, of course, in the case of companies owned by venture capital and private equity, and these could be involved in research that may well turn out to be very controversial (one thinks, for example, of Synthetic Genomics, the company associated with Craig Venter which aims to commercialise synthetic biology). Irrespective of their ownership structure, the mechanisms of the market mean that companies can’t afford to ignore public opinion. There’s a tension, of course, between the idea that the market provides a sensitive mechanism by which the wants and needs of the public are met by private enterprise, and the view that companies have become adept at creating new consumer wants and desires, sometimes against the better interests of both the consumers themselves and wider society. The Nanodialogues project reports a very interesting public engagement exercise with a multinational consumer products company that explores this tension.

What isn’t in doubt is that global science and technology can seem a complex, unpredictable and perhaps uncontrollable force. The science fiction writer William Gibson puts this well in a recent interview: “I think what scares people most about new technologies — it’s actually what scares me most — is that they’re never legislated into being. Congress doesn’t vote on the cellular telephony initiative and create a cellphone system across the United States and the world. It just happens and capital flows around and it changes things at the most intimate levels of our lives, but we never decided to do it. Somewhere now there’s a team of people working on something that’s going to profoundly impact your life in the next 10 years and change everything. You don’t know what it is and they don’t know how it’s going to change your life because usually these things don’t go as predicted.”

Save the planet by insulating your house

A surprisingly large fraction of the energy used in developed countries goes to heating and lighting buildings – in the European Union, buildings account for 40% of energy use. This is an obvious place to look for savings if one is trying to reduce energy consumption without compromising economic activity. A few weeks ago, I reported on a talk by Colin Humphreys explaining how much energy could be saved by replacing conventional lighting with light-emitting diodes. A recent report commissioned by the UK Government’s Department for Environment, Food and Rural Affairs, Environmentally beneficial nanotechnology – Barriers and Opportunities (PDF file), ranks building insulation as one of the areas in which nanotechnology could make a substantial and immediate contribution to saving energy.

The problem doesn’t arise so much from new buildings; current building regulations in the UK and the EU are quite strict, and the technologies for making very heat-efficient buildings are fairly well understood, even if they aren’t always used to the full. It is the existing building stock that is the problem. My own house illustrates this very well; its 3-foot-thick solid limestone walls look as handsome and sturdy as when they were built 150 years ago, but the absence of a cavity makes them very poor insulators. To bring them up to modern insulating standards I’d need to dry-line the walls with plasterboard backed by a foam-filled cavity, at a thickness that would lose a significant amount of the interior volume of the rooms. Is there some magic nanotechnology-enabled solution that would allow us to retrofit proper insulation to the existing housing stock in an acceptable way?

The claims made by manufacturers of various products in this area are not always crystal clear, so it’s worth reminding ourselves of the basic physics. Heat is transferred by convection, conduction and radiation. Stopping convection is essentially a matter of controlling the draughts. The amount of heat transmitted by conduction is proportional to the temperature difference and to a material constant called the thermal conductivity, and inversely proportional to the thickness of the material. For solids like brick, concrete and glass, thermal conductivities are around 0.6 – 0.8 W/m.K. As everyone knows, still air is a very good thermal insulator, with a thermal conductivity of 0.024 W/m.K, and the goal of traditional insulation materials, from sheep’s wool to plastic foam, is to trap air to exploit its insulating properties. Standard building insulation materials, like polyurethane foam, are actually pretty good. A typical commercial product has a thermal conductivity of 0.021 W/m.K; it manages to do a bit better than pure air because the holes in the foam are actually filled with a gas that is heavier than air.
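To make this concrete, here is a minimal back-of-envelope sketch (in Python, using the illustrative conductivity values quoted above; real walls are layered composites, so treat the numbers as rough estimates only) of the steady-state conduction formula and of the insulation thickness needed to reach a given U-value:

```python
# Steady-state conduction through a flat layer: Q = k * A * dT / d,
# and the layer thickness needed for a target U-value, U = k / d.
# Figures below are the illustrative values quoted in the text.

def heat_flow(k, area, delta_t, thickness):
    """Conducted power in watts."""
    return k * area * delta_t / thickness

def thickness_for_u_value(k, target_u):
    """Layer thickness in metres giving a U-value of target_u (W/m^2.K)."""
    return k / target_u

K_STONE = 0.7      # W/m.K, typical of brick, stone or concrete
K_PU_FOAM = 0.021  # W/m.K, typical polyurethane foam

# Heat lost through 1 m^2 of a 0.9 m (3 ft) solid stone wall,
# with a 15 degree indoor-outdoor temperature difference:
print(heat_flow(K_STONE, area=1.0, delta_t=15.0, thickness=0.9))   # ~12 W

# Foam thickness needed for a U-value of 0.35 W/m^2.K:
print(thickness_for_u_value(K_PU_FOAM, 0.35))   # ~0.06 m, i.e. about 60 mm
```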

The best known thermal insulators are the fascinating materials known as aerogels. These are incredibly diffuse foams – their densities can be as low as 2 mg/cm3, not much more than air – that resemble nothing so much as solidified smoke. One makes an aerogel by preparing a cross-linked gel (typically from water-soluble polymers of silica) and then drying it above the critical point of the solvent, preserving the structure of the gel, in which the strands are essentially single molecules. An aerogel can have a thermal conductivity of around 0.008 W/m.K. This is substantially less than the conductivity of the air it traps, essentially because the nanoscale strands of material disrupt the transport of the gas molecules.

Aerogels have been known for a long time, mostly as a laboratory curiosity, with some applications in space where their outstanding properties have justified their very high expense. But it seems that there have been some significant process improvements that have brought the price down to a point where one could envisage using them in the building trade. One of the companies active in this area is the US-based Aspen Aerogels, which markets sheets of aerogel made, for strength, in a fabric matrix. These have a thermal conductivity in the range 0.012 – 0.015 W/m.K. This represents a worthwhile improvement on the standard PU foams. However, one shouldn’t overstate its impact; it means that, to achieve a given level of thermal insulation, one needs an insulating sheet a bit more than half the thickness of a standard material.

Another product, from a company called Industrial Nanotech Inc, looks more radical in its impact. This is essentially an insulating paint; the makers claim that three coats of this material – Nansulate – will provide significant insulation. If true, this would be very important, as it would easily and cheaply solve the problem of retrofitting insulation to the existing housing stock. So, is the claim plausible?

The company’s website gives little in the way of detail, either of the composition of the product or, in quantitative terms, of its effectiveness as an insulator. The active ingredient is referred to as “hydro-NM-Oxide”, a term not well known in science. However, a recent patent filed by the inventor gives us some clues. US patent 7,144,522 discloses an insulating coating consisting of aerogel particles in a paint matrix. This has a thermal conductivity of 0.104 W/m.K. This is probably pretty good for a paint, but it is quite a lot worse than typical insulating foams. What makes matters much worse, of course, is that as a paint it will be applied as a very thin film (the recommended procedure is to use three coats, giving a dry thickness of 7.5 mils, a little less than 0.2 millimetres). Since one needs a thickness of at least 70 millimetres of polyurethane foam to achieve an acceptable level of thermal insulation (a U-value of 0.35 W/m2.K), it’s difficult to see how a layer that is both 350 times thinner than this, and has a significantly higher thermal conductivity, could make a significant contribution to the thermal insulation of a building.
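Using the same U = k/d estimate as in the earlier sketch, and the figures quoted above, a rough comparison of the three options makes the point (again a back-of-envelope sketch, ignoring the wall itself and surface resistances):

```python
# Rough U-value (W/m^2.K) of the insulating layer alone, U = k / d,
# using the thermal conductivities and thicknesses quoted in the text.
layers = {
    "PU foam, 70 mm":         (0.021, 0.070),
    "aerogel blanket, 40 mm": (0.0135, 0.040),   # mid-range of 0.012-0.015 W/m.K
    "aerogel paint, 0.2 mm":  (0.104, 0.0002),
}
for name, (k, d) in layers.items():
    print(f"{name}: U ~ {k / d:.2f} W/m^2.K")

# PU foam, 70 mm:         U ~ 0.30
# aerogel blanket, 40 mm: U ~ 0.34  (comparable insulation at roughly half the thickness)
# aerogel paint, 0.2 mm:  U ~ 520   (as a thin film, a negligible thermal resistance)
```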

More on synthetic biology and nanotechnology

There’s a lot of recent commentary about synthetic biology on Homunculus, the consistently interesting blog of the science writer Philip Ball. There’s much more detail there about the story of the first bacterial genome transplant that I referred to in my last post; his commentary on the story was published last week as a Nature News and Views article (subscription required).

Philip Ball was a participant in a recent symposium organised by the Kavli Foundation, “The merging of bio and nano: towards cyborg cells”. The participants produced an interesting statement: A vision for the convergence of synthetic biology and nanotechnology. The signatories include some very eminent figures from both synthetic biology and bionanotechnology, including Cees Dekker, Angela Belcher, Steven Chu and John Glass. Although the statement is bullish on the potential of synthetic biology for addressing problems such as renewable energy and medicine, it is considerably more nuanced than the sorts of statements reported in the recent New York Times article.

The case for a linkage between synthetic biology and bionanotechnology is well made at the outset: “Since the nanoscale is also the natural scale on which living cells organize matter, we are now seeing a convergence in which molecular biology offers inspiration and components to nanotechnology, while nanotechnology has provided new tools and techniques for probing the fundamental processes of cell biology. Synthetic biology looks sure to profit from this trend.” The writers divide the enabling technologies for synthetic biology into hardware and software. For this perspective on synthetic biology, which concentrates on the idea of reprogramming existing cells with synthetic genomes, the crucial hardware is the capability for cheap, accurate DNA synthesis, about which they write: “The ability to sequence and manufacture DNA is growing exponentially, with costs dropping by a factor of two every two years. The construction of arbitrary genetic sequences comparable to the genome size of simple organisms is now possible.” This, of course, also has implications for the use of DNA as a building block for designed nanostructures and devices (see here for an example).

The authors are much more cautious on the software side. “Less clear are the design rules for this remarkable new technology—the software. We have decoded the letters in which life’s instructions are written, and we now understand many of the words – the genes. But we have come to realize that the language is highly complex and context-dependent: meaning comes not from linear strings of words but from networks of interconnections, with its own entwined grammar. For this reason, the ability to write new stories is currently beyond our ability – although we are starting to master simple couplets. Understanding the relative merits of rational design and evolutionary trial-and-error in this endeavor is a major challenge that will take years if not decades.”

The new new thing

It’s fairly clear that nanotechnology is no longer the new new thing. A recent story in Business Week – Nanotech Disappoints in Europe – is not atypical. It takes its lead from the recent difficulties of the UK nanotech company Oxonica, which it describes as emblematic of the nanotechnology sector as a whole: “a story of early promise, huge hype, and dashed hopes.” Meanwhile, in the slightly neophilic world of the think-tanks, one detects the onset of a certain boredom with the subject. For example, Jack Stilgoe writes on the Demos blog “We have had huge fun running around in the nanoworld for the last three years. But there is a sense that, as the term ‘nanotechnology’ becomes less and less useful for describing the diversity of science that is being done, interesting challenges lie elsewhere… But where?”

Where indeed? A strong candidate for the next new new thing is surely synthetic biology. (This will not, of course, be new to regular Soft Machines readers, who will have read about it here two years ago.) An article in the New York Times at the weekend gives a good summary of some of the claims. The trigger for synthetic biology’s recent prominence in the news is probably the announcement from the Craig Venter Institute of the first bacterial genome transplant. This refers to an advance paper in Science (abstract, subscription required for full article) by John Glass and coworkers. There are some interesting observations on this in a commentary (subscription required) in Science. It’s clear that much remains to be clarified about this experiment: “But the advance remains somewhat mysterious. Glass says he doesn’t fully understand why the genome transplant succeeded, and it’s not clear how applicable their technique will be to other microbes.” The commentary from other scientists is interesting: “Microbial geneticist Antoine Danchin of the Pasteur Institute in Paris calls the experiment ‘an exceptional technical feat.’ Yet, he laments, ‘many controls are missing.’ And that has prevented Glass’s team, as well as independent scientists, from truly understanding how the introduced DNA takes over the host cell.”

The technical challenges of this new field haven’t prevented activists from drawing attention to its potential downsides. Those veterans of anti-nanotechnology campaigning, the ETC group, have issued a report on synthetic biology, Extreme Genetic Engineering, noting that “Today, scientists aren’t just mapping genomes and manipulating genes, they’re building life from scratch – and they’re doing it in the absence of societal debate and regulatory oversight”. Meanwhile, the Royal Society has issued a call for views on the subject.

Looking again at the NY Times article, one can perhaps detect some interesting parallels with the way the earlier nanotechnology debate unfolded. We see, for example, some fairly unrealistic expectations being raised: ““Grow a house” is on the to-do list of the M.I.T. Synthetic Biology Working Group, presumably meaning that an acorn might be reprogrammed to generate walls, oak floors and a roof instead of the usual trunk and branches. “Take over Mars. And then Venus. And then Earth” —the last items on this modest agenda.” And just as the radical predictions of nanotechnology were underpinned by what were in my view inappropriate analogies with mechanical engineering, much of the talk in synthetic biology is underpinned by explicit, but as yet unproven, parallels between cell biology and computer science: “Most people in synthetic biology are engineers who have invaded genetics. They have brought with them a vocabulary derived from circuit design and software development that they seek to impose on the softer substance of biology. They talk of modules — meaning networks of genes assembled to perform some standard function — and of “booting up” a cell with new DNA-based instructions, much the way someone gets a computer going.”

It will be interesting to see how the field of synthetic biology develops, and whether it does a better job of steering between overpromised benefits and overdramatised fears than nanotechnology arguably did. Meanwhile, nanotechnology won’t be going away. Even the sceptical Business Week article concluded that better times lay ahead as the focus in commercialising nanotechnology moved from simple applications of nanoparticles to more sophisticated applications of nanoscale devices: “Potentially even more important is the upcoming shift from nanotech materials to applications—especially in health care and pharmaceuticals. These are fields where Europe is historically strong and already has sophisticated business networks.”

The Kroto Research Institute

You wait for years for an interdisciplinary nanoscience and nanotechnology centre to be opened somewhere in the English Midlands or south Yorkshire, and then two come along at once. Having spent yesterday 40 miles south of Sheffield, in Nottingham, at the official opening of the Nottingham nanoscience and nanotechnology centre, today I’m back home in Sheffield for the official opening of the Kroto Research Institute and Centre for Nanoscale Science and Technology. As yesterday, the man doing the opening was the Nobel Laureate Sir Harry Kroto (Harry is an alumnus of Sheffield).

Actually, the Kroto centre covers a little more than just nanotechnology. It houses the UK’s national facility for fabricating nanostructures from III-V semiconductors, a well-equipped microscopy facility, which will soon commission an aberration-corrected high resolution electron microscope capable of chemical analysis at the single-atom level, and a tissue engineering centre which spans the range from surface analysis to putting cultured skin onto patients. But there’s also a centre for computational biology, one for environmental engineering, and one for virtual reality.

Having talked about the Nottingham centre, it’s worth mentioning the ways in which our two operations complement each other. Nottingham has what’s probably the best department of pharmacy in the country; they have long operated at the nanoscale, and have been leaders in applying surface science and scanning probe techniques to look at systems of biological and biomedical interest. Moreover, when they talk about nanomedicine, they have the strong links with the pharmaceutical industry that are needed to turn ideas into therapies. They’ve been successful in collaborating with the Department of Physics, whose interest in applying physical techniques to biological systems goes back to the discovery there of magnetic resonance imaging. Like Sheffield, they have real strength in semiconductor nanotechnology, and they also have the UK’s leaders in single molecule manipulation using scanning probe techniques.

There are already some major collaborations between Nottingham and Sheffield. These include the Nanorobotics project, which aims to combine nanoscale actuator technology with live electron microscopy observation, each at a resolution of down to 0.1nm. The Snomipede project, also including Glasgow and Manchester, aims to combine near-field scanning probe microscopy as a way of patterning molecules with massive parallelisation of the kind familiar from the IBM millipede technology. There is undoubtedly room for more collaboration between the two universities in this area. One should probably never regret all those failed research proposals one has put in, but back in 2000 we did put together a joint bid, together with Leeds, to host one of the two Interdisciplinary Research Collaborations in Nanotechnology that were being funded then. The money went to Oxford and Cambridge, and I don’t want to cast aspersions on the good work that’s come out of both places, but I’m sure we would have done a good job.

The Nottingham nanotechnology and nanoscience centre

Today saw the official opening of the Nottingham nanotechnology and nanoscience centre, which brings together some existing strong research areas across the University. I’ve made the short journey down the motorway from Sheffield to listen to a very high quality program of talks, with Sir Harry Kroto, co-discoverer of buckminsterfullerene, topping the bill. Also speaking were Don Eigler, from IBM (the originator of perhaps the most iconic image in all nanotechnology, the IBM logo made from individual atoms), Colin Humphreys, from the University of Cambridge, and Sir Fraser Stoddart, from UCLA.

There were some common themes in the first two talks (common, also, with Wade Adams’s talk in Norway described below). Both talked about the great problems of the world, and looked to nanotechnology to solve them. For Colin Humphreys, the solutions to the problems of sustainable energy and clean water are to be found in the material gallium nitride, or more precisely in the compounds of aluminium, indium and gallium nitride which allow one to make, not just blue light emitting diodes, but LEDs that can emit light of any wavelength between the infra-red and the deep ultra-violet. Gallium nitride based blue LEDs were invented as recently as 1996 by Shuji Nakamura, but this is already a $4 billion market, and everyone will be familiar with torches and bicycle lights using them.

How can this help the problem of access to clean drinking water? We should remind ourselves that 10% of world child mortality is directly related to poor water quality, and that half the hospital beds in the world are occupied by people with water-related diseases. One solution would be to use deep ultraviolet light to sterilise contaminated water. Deep UV works well for sterilisation because biological organisms never developed a tolerance to these wavelengths, which don’t penetrate the atmosphere. UV at a wavelength of 270 nm does the job well, but existing lamps are not practical because they need high voltages, are inefficient, and in some cases use mercury. AlGaN LEDs work well, and in principle they could be powered by solar cells at 4 V, which might allow every household to sterilise its water supply easily and cheaply. The problem is that the efficiency is still too low to treat flowing water. At blue wavelengths (400 nm) the efficiency is very good, at 70%, but it drops precipitously at shorter wavelengths, and this is not yet understood theoretically.

The contribution of solid state lighting to the energy crisis arises from the efficiency of LEDs compared to tungsten light bulbs. People often underestimate the amount of energy used in lighting domestic and commercial buildings. Globally, it accounts for 1,900 megatonnes of CO2; this is 70% of the total emissions from cars, and three times the amount due to aviation. In the UK, it amounts to 20% of electricity generated, and in Thailand, for example, it is even more, at 40%. But tungsten light bulbs, which account for 79% of sales, have an efficiency of only 5%. There is much talk now of banning tungsten light bulbs, but the replacement, fluorescent lights, is not perfect either. Compact fluorescents have an efficiency of 15%, which is an improvement, but what is less well appreciated is that each bulb contains 4 mg of mercury. This would lead to tonnes of mercury ending up in landfills if tungsten bulbs were replaced by compact fluorescents.

Could solid-state lighting do the job? Currently what you can buy are blue LEDs (made from InGaN) which excite a yellow phosphor. The colour balance of these leaves something to be desired, and soon we will see blue or UV LEDs exciting red/green/blue phosphors, which will have a much better colour balance (one could also use a combination of red, green and blue LEDs, but currently green efficiencies are too low). The best efficiency in a commercial white LED is 30% (from Seoul Semiconductor), but the best in the lab (Nichia) is currently 50%. The target is an efficiency of 50-80% at high drive currents, which would make them more efficient than the current most efficient light source, sodium lamps, whose familiar orange glow converts electricity at 45% efficiency. This target would make them 10 times more efficient than filament bulbs and 3 times more efficient than compact fluorescents, with no mercury. In the US, replacing 50% of filament bulbs would save 41 GW; in the UK, 100% replacement would save 8 GW of power station capacity. The problem at the moment is cost, but the rapidity of progress in this area means that Humphreys is confident that within a few years costs will fall dramatically.
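To see where factors like “10 times more efficient” come from, here is a minimal sketch of the underlying arithmetic, using the efficiency percentages quoted in the talk (the 10 GW lighting load is purely illustrative):

```python
# Back-of-envelope: electrical power needed to produce the same light output
# at different luminous efficiencies. Efficiencies are those quoted in the talk;
# the installed lighting load is an arbitrary illustrative figure.

def power_for_same_light(p_old_gw, eff_old, eff_new):
    """Power (GW) drawn by a replacement technology delivering equal light output."""
    return p_old_gw * eff_old / eff_new

EFF_FILAMENT = 0.05    # ~5%, tungsten filament bulbs
EFF_CFL = 0.15         # ~15%, compact fluorescents
EFF_LED_TARGET = 0.50  # lower end of the 50-80% target for white LEDs

p_filament = 10.0      # GW of filament lighting load (illustrative)
print(power_for_same_light(p_filament, EFF_FILAMENT, EFF_LED_TARGET))  # 1.0 GW: a 10x reduction
print(power_for_same_light(p_filament, EFF_FILAMENT, EFF_CFL))         # ~3.3 GW: a 3x reduction
```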

Don Eigler also talked about societal challenges, but with a somewhat different emphasis. His talk was entitled “Nanotechnology: the challenge of a new frontier”. The questions he asked were “What challenges do we face as a society in dealing with this new frontier of nanotechnology, and how should we as a society make decisions about a new technology like nanotechnology?”

There are three types of nanotechnology, he said: evolutionary nanotechnology (historically larger technologies that have been shrunk to nanoscale dimensions), revolutionary nanotechnology (entirely new nanometer-scale technologies) and natural nanotechnology (cell biology, offering inspiration for our own technologies). Evolutionary nanotechnologies include semiconductors and nanoparticles in cosmetics. Revolutionary nanotechnologies include carbon nanotubes, for potential new logic structures that might supplant silicon, and the IBM millipede data storage system. Natural nanotechnologies include bacterial flagellar motors.

Nanohysteria comes in different varieties too. Type 1 nanohysteria is the greed-driven “irrational exuberance” based on the idea that nanotechnology will change everything very soon, as touted by investment tipsters and consultants who want to take people’s money off them. What’s wrong with this is the absence of critical thought. Type 2 nanohysteria is the opposite – fear-driven irrational paranoia, exemplified by the grey goo scenario of out-of-control self-replicating molecular assemblers or nanobots. What’s wrong with this is, again, the absence of critical thought. Prediction is difficult, but Eigler thinks that self-replicating nanobots are not going to happen any time soon, if ever.

What else do people fear about nanotechnology? Eigler recently met a young person with strong views: that nanotech is scary, that it will harm the biosphere, that it will create new weapons, that it is being driven by greedy individuals and corporations – in summary, that it is not just wrong, it is evil. Where did these ideas come from? If you look on the web, you see talk of superweapons made from molecular assemblers. What you don’t find on the web are statements like “My grandmother is still alive today because nanotechnology saved her life”. Why is this? Nanotechnology has not yet provided a tangible benefit to grandmothers!

Some candidates include gold nanoshell cancer therapy, as developed by Naomi Halas at Rice. This particular therapy may not work in humans, but something similar will. Another example is the work of Sam Stupp at Northwestern, making nanofibers that cause neural progenitor cells to turn into new neurons, not scar tissue, holding out the hope of regenerative medicine to repair spinal cord damage.

As an example of how easy it is to draw the wrong conclusions, Eigler described the smallest logic circuit yet made – 12 nm by 17 nm – which he built from carbon monoxide molecules. But carbon monoxide is a deadly poison – shouldn’t we worry about this? Let’s do the sum: 18 CO molecules are needed for one transistor, and I breathe 2 billion trillion molecules a day, so every day I breathe enough to make 160 million computers.

What could the green side of nanotechnology be? We could have better materials that are lighter, stronger and more easily recyclable, and this will reduce energy consumption. Perhaps we can use nanotechnology to reduce the consumption of natural resources and to help recycling. We can’t yet prove that these benefits will follow, but Eigler believes they are likely.

There is a real risk from nanotechnology if it is used without evaluating the consequences. The widespread introduction of nanoparticulates into the environment would be an example of this. So how do we know if something is safe? We need to think it through, but we can’t guarantee that anything is absolutely safe. The principles should be that we eliminate fantasies, understand the different motivations that people have, and honestly assess risk and benefit. We need informed discussion that is critical, creative, inclusive and respectful. We need to speak with knowledge and respect, and listen with zeal. Scientists have not always been good at this and we need to get much better. Our best weapons are our traditions of rigorous honesty and our tolerance for diverse beliefs.

A new strategy for UK Nanotechnology

It was announced this morning that the Engineering and Physical Sciences Research Council, the lead government agency for funding nanotechnology in the UK, has appointed a new Senior Strategic Advisor for Nanotechnology. This forms part of a new strategy, published (in a distinctly low key way) earlier this year. The strategy announces some relatively modest increases in funding from the current level, which amounts to around £92 million per year, much of which will be focused on some large-scale “Grand Challenge” projects addressing areas of major societal need.

An editorial (subscription required) in February’s issue of Nature Nanotechnology lays out the challenges that will face the new appointee. By a number of measures, the UK is underperforming in nanotechnology relative to its position in world science as a whole. Given the relatively small sums on offer, focusing on areas of existing UK strength – both academically and in existing industry – is going to be essential, and it’s clear that the pharmaceutical and health-care sectors are strong candidates. Nature Nanotechnology’s advice is clear: “Indeed, getting the biomedical community— including companies — to buy into a national strategy for nanotechnology and health care should be a top priority for the nano champion.”

Optimism and pessimism in Norway

I’m in Bergen, Norway, at a conference, Nanomat 2007, run by the Norwegian Research Council. The opening pair of talks – from Wade Adams, of Rice University, and Jürgen Altmann, from Bochum – presented an interesting contrast of nano-optimism and nano-pessimism. Here are my notes on the two talks, hopefully more or less reflecting what was said without too much editorial alteration.

The first talk was from Wade Adams, the director of Rice University’s Richard E. Smalley Institute, with the late Richard Smalley’s message “Nanotechnology and Energy: Be a scientist and save the world”. Adams gave the historical background to Smalley’s interest in energy, which began with a talk from a Texan oilman explaining how rapidly oil and gas were likely to run out. Thinking positively, if one has cheap, clean energy, most of the problems of the world – lack of clean water, food supply, the environment, even poverty and war – are soluble. This was the motivation for Smalley’s focus on clean energy as the top priority for a technological solution. It’s interesting that climate change and greenhouse gases were not a primary motivation for him; on the other hand he was strongly influenced by Hubbert (see http://www.princeton.edu/hubbert) and his theory of peak oil. Of course, the peak oil theory is controversial (see a recent article in Nature – That’s oil, folks (subscription needed) – for an overview of the arguments), but whether oil production has already peaked, as the doomsters suggest, or the peak is postponed to 2030, it’s a problem we will face at some time or other. On the pessimistic side, Adams cited another writer – Matt Simmons – who maintains that oil production in Saudi Arabia – usually considered the reserve of last resort – has already peaked.

Meanwhile on the demand side, we are looking at increasing pressure. Currently 2 billion people have no electricity, 2 billion people rely on biomass for heating and cooking, the world’s population is still increasing and large countries such as India and China are industrialising fast. One should also remember that oil has more valuable uses than simply to be burnt – it’s the vital feedstock for plastics and all kinds of other petrochemicals.

Summarising the figures, the world (in 2003) consumed energy at a rate of 14 terawatts, the majority in the form of oil. By 2050, we’ll need between 30 and 60 terawatts. This can only happen if there is a dramatic change – for example renewable energy stepping up to deliver serious (i.e. measured in terawatts) amounts of power. How can this happen?

The first place to look is probably efficiencies. In the United States, about 60% of energy is currently simply wasted, so simple measures such as using low energy light bulbs and having more fuel-efficient cars can take us a long way.

On the supply side, we need to be hard-headed about evaluating the claims of various technologies in the light of the quantities needed. Wind is probably good for a couple of terawatts at most, and capacity constraints limit the contribution nuclear can make. To get 10 terawatts from nuclear by 2050 we would need roughly 10,000 new plants – that’s one built every two days for the next 40 years, which in view of the recent record of nuclear build seems implausible. The reactors would in any case need to be breeders to avoid the consequent uranium shortage. The current emphasis on the hydrogen economy is a red herring, since hydrogen is not a primary fuel.
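For readers who want to check the scale of that claim, here is the back-of-envelope arithmetic (a sketch only; the one-gigawatt-per-plant figure is an assumption, roughly the output of one large reactor):

```python
# Rough scale of the nuclear build-out implied by a 10 TW target by ~2050.
target_tw = 10.0
gw_per_plant = 1.0          # assumed capacity of one large reactor, in GW
plants_needed = target_tw * 1000 / gw_per_plant

years_available = 40
days_per_plant = years_available * 365 / plants_needed

print(plants_needed)   # 10000.0 plants
print(days_per_plant)  # ~1.5 days per plant, i.e. roughly a new plant every couple of days
```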

The only remaining solution is solar power. 165,000 TW hits the earth in sunlight. The problem is that the sunlight doesn’t arrive in the right places. Smalley’s solution was a new energy grid system, in which energy is transmitted through wires rather than in tankers. To realise this you need better electrical conductors (either carbon nanotubes or superconductors), and electrical energy storage devices. Of course, Rice University is keen on the nanotube solution. The need is to synthesise large amounts of carbon nanotubes which are all of the same structure, the structure that has metallic properties rather than semiconducting ones. Rice had been awarded $16 million from NASA to develop the scale-up of their process for growing metallic nanotubes by seeded growth, but this grant was cancelled amidst the recent redirection of NASA’s priorities.

Ultimately, Adams was optimistic. In his view, technology will find a solution and it’s more important now to do the politics, get the infrastructure right, and above all to enthuse young people with a sense of mission to become scientists and save the world. His slides can be downloaded here (8.4 MB PDF file).

The second, much more pessimistic, talk was from Jürgen Altmann, a disarmament specialist from Ruhr-Universität Bochum. His title was “Nanotechnology and (International) Society: how to handle the new powerful technologies?” Altmann is a physicist by original training, and is the author of a book, Military nanotechnology: new technology and arms control.

Altmann outlined the ultimate goal of nanotechnology as the full control of the 3-D position of each atom – the role model is the living cell, but the goal goes much further, beyond systems optimised for aqueous environments to ones that work in vacuum, at high pressure, in space and so on, limited only by the laws of nature. Altmann alluded to the controversy surrounding Drexler’s vision of nanotechnology, but insisted that no peer-reviewed publication had succeeded in refuting it.

He mentioned the extrapolations of Moore’s law due to Kurzweil, with the prediction that we will have a computer with a human being’s processing power by 2035. He discussed new nanomaterials, such as ultra-strong carbon nanotubes making the space elevator conceivable, before turning to the Drexler vision of mechanosynthesis, leading to a universal molecular assembler, and discussing consequences like space colonies and brain downloading, before highlighting the contrasting utopian and dystopian visions of the outcome – on the one hand, infinitely long life, wealth without work and a clean environment; on the other hand, the consumption of all organic life by proliferating nanorobots (grey goo).

He connected these visions to transhumanism – the idea that we could and should accelerate human evolution by design – and to the perhaps better accepted notion of converging technologies – NanoBioInfoCogno – which has taken on somewhat different connotations on either side of the Atlantic (Altmann was on the working group which produced the EU document on converging technologies). He foresaw the benefits arising on a 20-year timescale, notably direct broad-band interfaces between brain and machines.

What, then, of the risks? There is the much discussed issue of nanoparticle toxicity. How might nanotechnology affect developing countries – will the advertised benefits really arise? We have seen a mapping of nanotechnology benefits onto the Millennium Development Goals by the Meridian Institute. But this has been criticised, for example by N. Invernizzi (Nanotechnology Law and Business Journal 2, 101-110 (2005)): high productivity will mean less demand for labour, there might be a tendency to neglect non-technological solutions, and there might be a lack of qualified personnel. He asked what will happen if India and China succeed with nano – will that simply increase internal rich-poor divisions within those countries? The overall conclusion is that socio-economic factors are just as important as technology.

With respect to military nanotechnology, there are many potential applications, including smaller and faster electronics and sensors, lighter and faster armour and armoured vehicles, and miniature satellites, including offensive ones. Many robots will be developed, including nano-robots and biotechnical hybrids – electrode-controlled rats and insects. Medical nanobiotechnology will have military applications – capsules for controlled release of biological and chemical agents, mechanisms for targeting agents to specific organs, but also perhaps to specific gene patterns or proteins, allowing chemical or biological warfare to be targeted against specific populations.

Military R&D in nanotechnology is mostly done in the USA, where it accounts for between a quarter and a third of federal nanotechnology funding. At the moment, the USA spends 4-10 times as much as the rest of the world, but perhaps we can shortly expect other countries with the necessary capacity, like China and Russia, to begin to catch up.

The problem of military nanotechnology from an arms control point of view is that limitation and verification is very difficult – much more difficult than the control of nuclear technology. Nano is cheap and widespread, much more like biotechnology, with many non-military uses. Small countries and non-state actors can use high technology. To control this will need very intrusive inspection and monitoring – anytime, anyplace. Is this compatible with military interest in secrecy and the fear of industrial espionage?

So, Altmann asks, is the current international system up to this threat? Probably not, he concludes, so we have two alternatives: increasing military and terrorist threats and marked instability, or the organisation of global security in another way, involving some kind of democratic superstate in which existing states voluntarily accept reduced sovereignty in return for greater security.