A new strategy for UK Nanotechnology

It was announced this morning that the Engineering and Physical Sciences Research Council, the lead government agency for funding nanotechnology in the UK, has appointed a new Senior Strategic Advisor for Nanotechnology. This forms part of a new strategy, published (in a distinctly low-key way) earlier this year. The strategy announces some relatively modest increases over the current funding level of around £92 million per year; much of the money will be focused on large-scale “Grand Challenge” projects addressing areas of major societal need.

An editorial (subscription required) in February’s issue of Nature Nanotechnology lays out the challenges that will face the new appointee. By a number of measures, the UK is underperforming in nanotechnology relative to its position in world science as a whole. Given the relatively small sums on offer, focusing on areas of existing UK strength – both academically and in existing industry – is going to be essential, and it’s clear that the pharmaceutical and health-care sectors are strong candidates. Nature Nanotechnology’s advice is clear: “Indeed, getting the biomedical community – including companies – to buy into a national strategy for nanotechnology and health care should be a top priority for the nano champion.”

Optimism and pessimism in Norway

I’m in Bergen, Norway, at a conference, Nanomat 2007, run by the Norwegian Research Council. The opening pair of talks – from Wade Adams, of Rice University, and Jürgen Altmann, from Bochum – presented an interesting contrast of nano-optimism and nano-pessimism. Here are my notes on the two talks, which I hope reflect what was said without too much editorial alteration.

The first talk was from Wade Adams, the director of Rice University’s Richard E. Smalley Institute, with the late Richard Smalley’s message “Nanotechnology and Energy: Be a scientist and save the world”. Adams gave the historical background to Smalley’s interest in energy, which began with a talk from a Texan oilman explaining how rapidly oil and gas were likely to run out. Thinking positively, if one has cheap, clean energy, most of the problems of the world – lack of clean water, food supply, the environment, even poverty and war – are soluble. This was the motivation for Smalley’s focus on clean energy as the top priority for a technological solution. It’s interesting that climate change and greenhouse gases were not a primary motivation for him; on the other hand he was strongly influenced by Hubbert (see http://www.princeton.edu/hubbert) and his theory of peak oil. Of course, the peak oil theory is controversial (see a recent article in Nature – That’s oil, folks, subscription needed – for an overview of the arguments), but whether oil production has already peaked, as the doomsters suggest, or the peak is postponed to 2030, it’s a problem we will face at some time or other. On the pessimistic side, Adams cited another writer, Matt Simmons, who maintains that oil production in Saudi Arabia – usually considered the reserve of last resort – has already peaked.

Meanwhile, on the demand side, the pressure is increasing. Currently 2 billion people have no electricity, 2 billion people rely on biomass for heating and cooking, the world’s population is still increasing, and large countries such as India and China are industrialising fast. One should also remember that oil has more valuable uses than simply being burnt – it’s the vital feedstock for plastics and all kinds of other petrochemicals.

Summarising the figures, the world (in 2003) consumed energy at a rate of 14 terawatts, the majority in the form of oil. By 2050, we’ll need between 30 and 60 terawatts. Meeting that demand can only happen if there is a dramatic change – for example, renewable energy stepping up to deliver serious (i.e. measured in terawatts) amounts of power. How can this happen?
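As a quick sanity check on those demand numbers (my own back-of-envelope arithmetic, not a figure from the talk), going from 14 terawatts in 2003 to 30–60 terawatts by 2050 implies an average growth in demand of between roughly 1.6% and 3.1% per year:

    # Implied average annual growth in world energy demand, 2003-2050
    # (illustrative back-of-envelope check, not a figure from the talk).
    start_tw, start_year = 14, 2003
    for target_tw in (30, 60):
        rate = (target_tw / start_tw) ** (1 / (2050 - start_year)) - 1
        print(f"{target_tw} TW by 2050 implies ~{rate:.1%} growth per year")
    # -> 30 TW implies ~1.6% per year; 60 TW implies ~3.1% per year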

The first place to look is probably efficiencies. In the United States, about 60% of energy is currently simply wasted, so simple measures such as using low energy light bulbs and having more fuel-efficient cars can take us a long way.

On the supply side, we need to be hard-headed about evaluating the claims of various technologies in the light of the quantities needed. Wind is probably good for a couple of terawatts at most, and capacity constraints limit the contribution nuclear can make. To get 10 terawatts from nuclear by 2050 we would need roughly 10,000 new plants – that’s one built every two days for the next 40 years, which in view of the recent record of nuclear build seems implausible. The reactors would in any case need to be breeders to avoid the consequent uranium shortage. The current emphasis on the hydrogen economy is a red herring, as hydrogen is not a primary fuel but an energy carrier.
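The nuclear arithmetic above is easy to check; here is a sketch, assuming (as such estimates conventionally do) a nominal one gigawatt of output per plant:

    # Back-of-envelope check on the nuclear build rate quoted above,
    # assuming a nominal 1 GW output per plant.
    target_watts = 10e12       # 10 terawatts of new nuclear capacity
    watts_per_plant = 1e9      # 1 gigawatt per plant (assumed)
    plants = target_watts / watts_per_plant
    days = 40 * 365            # "the next 40 years"
    print(f"{plants:.0f} plants, one every {days / plants:.1f} days")
    # -> 10000 plants, one every ~1.5 days: "one every two days" is,
    #    if anything, on the generous side.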

The only remaining solution is solar power: sunlight hits the earth at a rate of 165,000 TW. The problem is that it doesn’t arrive in the right places. Smalley’s solution was a new energy grid system, in which energy is transmitted through wires rather than in tankers. To realise this you need better electrical conductors (either carbon nanotubes or superconductors) and electrical energy storage devices. Rice University is, of course, keen on the nanotube solution. The need is to synthesise large amounts of carbon nanotubes of a single structure – the structure with metallic, rather than semiconducting, properties. Rice had been awarded $16 million from NASA to scale up its process for growing metallic nanotubes by seeded growth, but this grant was cancelled amidst the recent redirection of NASA’s priorities.
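Incidentally, the headline solar figure quoted above is easy to reproduce to order of magnitude; a sketch using the standard solar constant and the Earth’s cross-sectional area (the exact number quoted depends on how much reflected sunlight is netted off):

    import math

    # Order-of-magnitude check on the solar input: intercepted power equals
    # the solar constant times the Earth's cross-sectional (disc) area.
    solar_constant = 1361.0    # W/m^2 at the top of the atmosphere
    earth_radius = 6.371e6     # metres
    intercepted_watts = solar_constant * math.pi * earth_radius ** 2
    print(f"~{intercepted_watts / 1e12:,.0f} TW")
    # -> ~174,000 TW at the top of the atmosphere; netting off some of the
    #    roughly 30% that is reflected brings you towards the 165,000 TW quoted.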

Ultimately, Adams was optimistic. In his view, technology will find a solution and it’s more important now to do the politics, get the infrastructure right, and above all to enthuse young people with a sense of mission to become scientists and save the world. His slides can be downloaded here (8.4 MB PDF file).

The second, much more pessimistic, talk was from Jürgen Altmann, a disarmament specialist from Ruhr-Universität Bochum. His title was “Nanotechnology and (International) Society: how to handle the new powerful technologies?” Altmann is a physicist by original training, and is the author of a book, Military nanotechnology: new technology and arms control.

Altmann outlined the ultimate goal of nanotechnology as the full control of the 3-d position of each atom – the role model is the living cell, but the goal goes well beyond this, extending past systems optimised for aqueous environments to ones that work in vacuum, at high pressure, in space and so on, limited only by the laws of nature. Altmann alluded to the controversy surrounding Drexler’s vision of nanotechnology, but insisted that no peer-reviewed publication had succeeded in refuting it.

He mentioned the extrapolations of Moore’s law due to Kurzweil, with the prediction that we will have a computer with a human being’s processing power by 2035. He discussed new nanomaterials, such as ultra-strong carbon nanotubes making the space elevator conceivable, before turning to the Drexler vision of mechanosynthesis, leading to a universal molecular assembler, and to consequences like space colonies and brain downloading. He then highlighted the contrasting utopian and dystopian visions of the outcome: on the one hand, infinitely long life, wealth without work and a clean environment; on the other, the consumption of all organic life by proliferating nanorobots (grey goo).
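The Moore’s-law extrapolation mentioned above is just compound doubling; a toy version (the numbers below are illustrative assumptions of mine, not Kurzweil’s actual inputs) shows how a date in the mid-2030s can fall out:

    import math

    # Toy Moore's-law extrapolation (illustrative assumptions, not
    # Kurzweil's actual model or numbers).
    brain_ops = 1e16           # assumed operations/second of a human brain
    machine_ops_2007 = 1e12    # assumed operations/second of a 2007 machine
    doubling_time = 2.0        # assumed doubling time in years
    years_to_parity = doubling_time * math.log2(brain_ops / machine_ops_2007)
    print(f"parity around {2007 + years_to_parity:.0f}")   # -> around 2034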

He connected these visions to transhumanism – the idea that we could and should accelerate human evolution by design – and to the perhaps better-accepted notion of converging technologies – NanoBioInfoCogno – which has taken on somewhat different connotations on either side of the Atlantic (Altmann was on the working group which produced the EU document on converging technologies). He foresaw the benefits arising on a 20-year timescale, notably direct broad-band interfaces between brains and machines.

What, then, of the risks? There is the much-discussed issue of nanoparticle toxicity. How might nanotechnology affect developing countries – will the advertised benefits really arise? We have seen a mapping of nanotechnology benefits onto the Millennium Development Goals carried out by the Meridian Institute. But this has been criticised, for example by N. Invernizzi (Nanotechnology Law and Business Journal 2, 101-110 (2005)): high productivity will mean less demand for labour, there might be a tendency to neglect non-technological solutions, and there might be a lack of qualified personnel. He asked, if India and China succeed with nano, will that simply increase internal rich-poor divisions within those countries? The overall conclusion is that socio-economic factors are just as important as the technology.

With respect to military nanotechnology, there are many potential applications, including smaller and faster electronics and sensors; lighter armour and faster armoured vehicles; and miniature satellites, including offensive ones. Many robots will be developed, including nano-robots and biotechnical hybrids – electrode-controlled rats and insects. Medical nanobiotechnology will have military applications: capsules for the controlled release of biological and chemical agents, and mechanisms for targeting agents to specific organs – but perhaps also to specific gene patterns or proteins, allowing chemical or biological warfare to be targeted against specific populations.

Military R&D for nano is mostly done in the USA, where it accounts for between a quarter and a third of federal nanotechnology funding. At the moment the USA spends 4-10 times as much as the rest of the world, but we can perhaps shortly expect other countries with the necessary capacity, like China and Russia, to begin to catch up.

The problem with military nanotechnology, from an arms control point of view, is that limitation and verification are very difficult – much more difficult than the control of nuclear technology. Nano is cheap and widespread, much more like biotechnology, with many non-military uses. Small countries and non-state actors can use high technology. Controlling this would need very intrusive inspection and monitoring – anytime, anyplace. Is this compatible with military interest in secrecy and the fear of industrial espionage?

So, Altmann asks, is the current international system up to this threat? Probably not, he concludes, so we face two alternatives: increasing military and terrorist threats and marked instability, or the organisation of global security in another way, involving some kind of democratic superstate, in which existing states voluntarily accept reduced sovereignty in return for greater security.

Coherent “atoms” in (fairly) warm solids

In 2001, Eric Cornell, Wolfgang Ketterle and Carl Wieman won the Nobel Prize for Physics for demonstrating the phenomenon of Bose-Einstein condensation in a system of trapped ultra-cold atoms. Bose-Einstein condensation is a remarkable quantum phenomenon in which a system of particles all occupy the same quantum state. In this condition they are identical and indistinguishable – in effect the individual atoms have lost their identities and coalesced into a single coherent quantum blob. Now researchers have demonstrated the same phenomenon in a different type of particle – polaritons, confined in a semiconductor nanostructure – at a temperature of 4.2 K. This is not exactly ambient, but it is much more convenient than the 20 nanokelvin needed for the atom experiments.

The experiments, reported in this article in Science (abstract; subscription required for the full article), were done by grad students Ryan Balili and Vincent Hartwell in David Snoke’s group at the University of Pittsburgh, in collaboration with Loren Pfeiffer and Kenneth West of Bell Labs. The basic structure consisted of a semiconductor quantum well sandwiched between a pair of reflectors, each made up of alternating dielectric layers, rather like the one shown in the picture in this earlier post. If a laser is shone into the structure, pairs of electrons and holes are generated; these oppositely charged pairs are bound together by the electrostatic interaction and behave like particles, called excitons. Meanwhile, light bounces back and forth between the two mirrors, forming standing wave modes. Energy passes back and forth between these standing-wave photons and the excitons, and the combination forms a quasi-particle called a polariton.

How on earth can one compare an entity that is composed of a complicated set of interactions between light and matter with something simple and elementary like an atom? The answer is rather interesting, and relies on a principle of solid state physics that is fundamental to the subject, but little known outside the field. Simple theory tells us how to understand systems composed of entities that don’t interact with each other very much; the first theory of electrons in solids one gets taught simply assumes that the electrons don’t interact with each other at all, which on the face of it is absurd, because they are charged objects which strongly repel each other. It turns out that you can often lump the basic entity together with all its associated interactions into a “quasi-particle”, which behaves just like a simple quantum mechanical particle. The particle is characterised by an “effective mass” which, in the case of these polaritons, is very much smaller than that of a real atom. It is this very small mass which allows them to form a Bose-Einstein condensate at (relatively) high temperatures.
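To see why a small mass matters so much, it helps to recall the textbook result for the condensation temperature of an ideal three-dimensional Bose gas of particle mass m and number density n:

    T_c = \frac{2\pi\hbar^2}{m k_B} \left( \frac{n}{\zeta(3/2)} \right)^{2/3}

The microcavity polaritons are effectively two-dimensional and interacting, so this formula does not apply literally, but the inverse dependence on mass carries over: with an effective mass many orders of magnitude smaller than that of an atom, the transition temperature at comparable densities is pushed up from the nanokelvin range towards ordinary cryogenic temperatures.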

This is another great example of how being able to make precisely specified semiconductor nanostructures allows one to tune the interaction between light and matter to produce remarkable new effects. What use could this have in the future? Peter Littlewood, from the Cavendish Laboratory in Cambridge, writes in a commentary in Science (subscription required):

“These objects are, on the one hand, a new kind of low-threshold laser, but the fact that they consist of coherent quantum objects (unlike a regular laser) puts them potentially in the class of quantum devices. A rash speculation is that a small polariton condensate could become the basis for an elementary quantum computer, but the easy coupling to light might simplify the wiring issues that many quantum information technologies find challenging.”

Pierre-Gilles de Gennes 1932-2007

I was sorry to hear that Pierre-Gilles de Gennes, the great French theoretical physicist, died a week ago last Friday, following a long struggle with cancer. De Gennes, who won the Nobel Prize for Physics in 1991, created much of our modern understanding of liquid crystals, colloids and polymers, essentially founding the field of soft condensed matter by recognising the common features of these soft systems characterised by interaction energies comparable to thermal energies and dominated by Brownian motion.

This obituary in Le Monde has a good account of his life and work. My first introduction to his work came at the very beginning of my PhD. When I asked my supervisor what I should do to begin my studies, he told me to go to the bookshop, buy a copy of de Gennes’s book Scaling Concepts in Polymer Physics, and come back when I had read it. I did this, and very good advice it turned out to be; it’s a book I still refer to. Soon afterwards I had the chance of meeting the man himself, when he listened with absolute attention and politeness to what this insignificant graduate student had to say.

De Gennes was an erudite, deeply cultured and utterly charming man. One of his passions outside physics was art, and he used art history to illustrate how he saw the role of the theoretical physicist evolving in a time when computer simulations are becoming ever more powerful. Just as the invention of photography meant that artists no longer felt the obligation to strive for simple verisimilitude, and could seek to capture the essence of their subject in increasingly impressionistic and abstract ways, so the fact that systems of great complexity could now be simulated on a computer left theorists with the job of sketching a description of these systems in a way that puts insight and transparency ahead of perfect accuracy. As the attention of physicists turns more and more towards complex and difficult systems (including living things, the most difficult systems of all) this insistence on cutting through the thicket of detail to focus on the essentials becomes ever more important.

In praise of Vaclav Smil

In my efforts to educate myself about how new technologies might impact on our economy and society, the author from whom I’ve learnt the most is unquestionably Vaclav Smil. Smil is a Professor in the Department of Environment and Geography at the University of Manitoba, but his writings cover the whole sweep of the interaction of technology and society. What I appreciate about his books is their emphasis on rigorous quantification, their long historical perspective and global span (Smil is an expert on China, among many other things), and their grounding in the things that matter – how we get the food we eat and the energy that underlies our lifestyles.

My introduction to Smil’s work came when I needed a rapid introduction to energy economics. His 2003 book Energy at the Crossroads: global perspectives and uncertainties does this job in an admirably clear-headed and realistic way. It takes a particularly sobering view of the poor record of past energy forecasting, and of the evolving linkages between economic growth and output, on the one hand, and energy inputs, on the other. Enriching the Earth: Fritz Haber, Carl Bosch, and the Transformation of World Food Production takes a historical view of the linkage between energy and food. Few people nowadays stop to think about the importance of artificial nitrogen fixation, powered by fossil fuels, in feeding the world. Yet it is clear that without artificial fertilizers more than half of the current population of the earth would not be alive today; we are, in effect, surviving by eating oil. This theme is developed in Feeding the World: A Challenge for the Twenty-First Century, which asks the fundamental question: just how many people could the world feed? After a period of plentiful and cheap food, at least in the West, we’ve forgotten some of the more apocalyptic visions of mass famine. Yet the world food supply equation is probably more fragile than we’d like to think, and this is likely to get worse, as climate change, water shortages and environmental degradation put pressure on yields, and the rise of biofuels increases the demand for non-food uses of crops.

Many of these themes are brought together, with many other trends, in two of Smil’s most recent books, Creating the Twentieth Century: Technical Innovations of 1867-1914 and Their Lasting Impact and Transforming the Twentieth Century: Technical Innovations and Their Consequences. Taken together, these two volumes offer the best overview I know of how the world we live in now has developed. At one level, this is simply a narrative history of modern technology, albeit one that takes a holistic view of the way in which many different inventions come together to make important innovations possible. In this sense, it’s a story of accelerating change, in which one technological development facilitates another. But Smil is explicitly dismissive of those who are too quick to plot exponential curves and extrapolate from them. The title of the first volume makes it clear that, in Smil’s view, the true technological revolution took place in the last part of the 19th century, and what we have seen since then is largely the unfolding of the developments initiated in that great saltation. And he is by no means certain that the rapid change will continue, noting the degree to which it has been built on a massive, and probably unsustainable, growth in energy consumption. His agnostic outlook is summed up in the last chapter, where he asks:

“have the last six generations of great technical innovations and transformations merely been the beginning of a new extended era of unprecedented accomplishments and spreading and sustained affluence – or have they been a historically ephemeral aberration that does not have any realistic chance of continuing along the same, or a similar trajectory, for much longer?”

Ideologies and nanotechnology

There are many debates about nanotechnology: what it is, what it will make possible, and what its dangers might be. On one level these may seem very technical in nature. So a question about whether a Drexler-style assembler is technically feasible can rapidly descend into the details of surface chemistry, while issues about the possible toxicity of carbon nanotubes turn on the procedures for reliable toxicological screening. But it’s at least arguable that the focus on the technical obscures the real causes of the arguments, which are actually based on clashes of ideology. We supposedly live in a non-ideological age, so what are the ideological divisions that underlie debates about nanotechnology? I suggest, for a start, these four ideological positions, each of which implies a very different attitude towards nanotechnology.

  • Transhuman. Transhumanists look forward to a time in which technology allows humanity to transcend its current physical and mental limits. Radical nanotechnologies are essential to the fulfilment of this vision, so the attitude of transhumanists to nanotechnology in its most radical, Drexlerian form is that it is not only inevitable but morally mandated.
  • Transglobal. Those who accept the current neo-liberal, globalising consensus look to new technologies as a driver for further economic growth. Nanotechnology is expected to lead to changes which may be disruptive to individual business sectors, but which probably won’t fundamentally change global socio-economic systems.
  • Deep Green. To radical environmentalists, our current urban, industrial economic system is unsustainable. Technologies are regarded as being in large measure responsible for the difficulties we now find ourselves in, and a return to more rural, post-industrial, locally based economies is regarded as not only desirable but inevitable. Nanotechnology, like most new technologies, is viewed with deep distrust, as very likely to lead to undesirable and possibly unintended consequences.
  • Bright Green. Another strand of environmentalists shares with the Deep Greens the conviction that the current socio-economic system is unsustainable, but is confident that new technology and imaginative design will make possible a sustainable urban culture with a high standard of living. These people look with enthusiasm to nanotechnology for new sustainable energy systems and decentralised, low-waste manufacturing processes.

When one sees a debate about nanotechnology start to get heated, it’s perhaps worth asking what the ideological positions of the debaters are, and whether an apparently technical argument is actually a proxy for an ideological one.

Everyware

This week’s Economist has a very interesting survey of the future of wireless technology, which assesses progress towards ubiquitous computing and “the internet of things” – the idea that in the near future pretty well every artefact will carry its own computing power, able to sense its environment and communicate wirelessly with other artefacts and computer systems. The introductory article and the (rather useful) list of sources and links (including the book by Adam Greenfield – Everyware: The Dawning Age of Ubiquitous Computing – whose title I’ve appropriated for this post) are freely available; for the other seven articles you need a subscription (or you could just buy a copy from the news-stand).

Evolutionary nanotechnology is likely to contribute to these developments in at least two ways; by making possible a wide range of sensors able to detect, for example, very small concentrations of specific chemicals in the environment, and, through technologies like plastic electronics, by making possible the mass-production of rudimentary computing devices at tiny cost. Even with current technology, these developments are sure to raise privacy and security issues, but equally may make possible unimagined benefits in areas such as health and energy efficiency. The Economist’s survey finishes on an uncharacteristically humble note: “There is no saying how it will be used, other than it will surprise us.”

Nanotechnology: Some questions for social scientists

In 2003 I was one of the coauthors of a report – ‘The Social and Economic Challenges of Nanotechnology’ (PDF) – commissioned by the UK’s Economic and Social Research Council – this is the body which distributes government research funding to social scientists. Last year the ESRC commissioned me and my coauthors, Stephen Wood and Alison Geldart, to write a follow-up report summarising the way the debate about nanotechnology had evolved over the intervening years. The follow up report is now available from the ESRC web-site – Nanotechnology: from the science to the social (2 MB PDF) – and for those with a shorter attention span a short briefing (765 kB PDF) is also available.

One of our aims was to identify some questions that we thought were worthy of further study by social scientists. Here are some of them:

The development of nanotechnology

Nanotechnology has some unique features as a case study for the social science of science, as it appears to have arisen not just as a natural development from existing disciplines, but at least partly as a result of external factors. This poses a number of interesting questions:
1) Is nanotechnology developing into a distinct field – that is, are there social and institutional pressures causing scientists in well-established disciplines such as chemistry and physics to assume a new disciplinary identity?
2) Is the nucleation of the field of nanotechnology, if this is indeed taking place, an integral part of the transformation of science from Mode 1 (discipline-based and academically driven) to Mode 2 (problem-oriented and transdisciplinary), and is nanotechnology being developed as a field precisely by those scientists who embrace Mode 2 values?
3) Are the grand visions associated with radical views of nanotechnology influential in shaping the development of science and technology, despite the rejection by many scientists of the assumptions on which they are based?

Nanotechnology, industry and the economy

Nanotechnology poses important questions in relation to technological innovation and its relationship to wealth creation. Governments and agencies worldwide are providing substantial financial support for nanotechnology on the basis of tacit or explicit assumptions that this support will yield substantial economic dividends. These assumptions need critical examination; some questions that arise include the following:
1) Is there, or will there ever be, a nanotechnology industry?
2) Will there be “nanotech” clusters comparable to “biotech” and information technology clusters?
3) Will these be geographical clusters, or could there be virtual clusters?
4) Will there be clusters associated with discrete sub-areas of nanotechnology, such as (for example) bionanotechnology for diagnostics?
5) As governments look to nanotechnology as a driver of innovation and economic growth, tacit or explicit models of the innovation process are being invoked to help frame policy. Are these models of innovation applicable to nanotechnology (or indeed any other new technology)?

Nanotechnology and internationalisation

Government support for nanotechnology has come from non-western countries and the EU as well as the USA, making it a unique and important case study in the further internationalisation of science and innovation. Questions that arise from this include:
1) What is the scope for government policy to influence innovations in the nanotechnology area, both between and within organisations, in an increasingly global economy?
2) Is there an emerging international division of labour in the development of nanotechnology?
3) Can nanotechnology make significant contributions to the development of less-developed countries? Contrasts between China and India, which receive most of the attention, and countries such as Brazil, where nanotechnology has been given a significant role in national plans but which receive less attention, may be instructive.
4) Is there any truth in the caricature of the ‘Wild East’, i.e. a place unconstrained by ethics or intellectual property, competing unfairly with western countries?
5) As nanotechnology may be the first science in modern times in which substantial and original developments take place in non-western cultures, can it offer any insights about cultural relativism in science?

Technology development and society

The portrayal of nanotechnology in popular culture is strongly influenced by social movements outside the scientific mainstream. The significance of this unusual feature should be examined:
1) Some futurists argue that nanotechnology itself is accelerating the rate of technological change and hence social change – does this stand up to scrutiny?
2) How does nanotechnology fit into broader social movements about technology development, and do such movements depend on grand visions (positive or negative)?
3) Is there any significance to those movements, such as transhumanism, which are associated with the promotion of more futuristic visions of nanotechnology? What role do these movements have in shaping broader societal debates, such as the nascent debate about human enhancement?

Public engagement

The widespread consensus about the desirability of public engagement in connection with nanotechnology should receive some critical scrutiny:
1) The public engagement activities and the methods used could be evaluated, including a cross-country comparison of the various experiments in public engagement and the role of the dissemination of scientific knowledge within this process.
2) While accepting the force of the critique of the deficit model of public understanding, one needs to understand the origins of the public’s understanding of nanotechnology – in particular the relative influence of the various interest groups, whose visions of nanotechnology may be very different, as well as of popular media, serious journalism, science fiction and computer games.

An uncertain business

Last November, the Royal Society hosted an event at which companies were asked the question “How can business respond to the technical, social and commercial uncertainties of nanotechnology?” I was one of only a couple of academics at the event, which attracted representatives of 17 companies, many of them very large household names not previously associated with nanotechnology. The event took place somewhat under the radar, and was conducted under the Chatham House Rule, allowing the participants to speak freely without what they said being attributed to them. However, some information about the day has now been released in the form of this short workshop report (PDF).

The joint sponsors of the day were the Royal Society, the Nanotechnology Industries Association, and Insight Investment. The Royal Society’s interest is obvious, in view of its long-standing involvement in considering the broader implications of nanotechnology, and it’s no surprise that the NIA, a newly established trade association, would want to be involved. The participation of Insight Investment is perhaps more surprising and interesting; this is a fund manager with around £100 billion in investments. This means that it holds, on behalf of clients including large institutions and pension funds, substantial equity stakes in many of the companies that took part. It thus has a direct financial interest in whether the companies in question are in a position to exploit the business opportunities that arise from nanotechnology, and can deal sensibly with any uncertainties that might follow.

The position paper written to inform the discussion – An uncertain business (PDF) – is now also available. This divides the uncertainties that might be associated with nanotechnology into three categories. Technical uncertainties include the well-known issues about the possible toxicity of nanoscale materials, while social uncertainties involve the different ways in which people might react to new products involving nanotechnology. But many of the participants were exercised by possible commercial uncertainties – that is to say, issues such as the potential risk to brand value that bad publicity might bring, together with risks to the cost of capital and insurance that might arise from adverse opinion in the financial and insurance markets.