Public Engagement and Nanotechnology – the UK experience

What do the public think about nanotechnology? This is a question that has worried scientists and policy makers ever since the subject came to prominence. In the UK, as in other countries, we’ve seen a number of attempts to engage with the public around the subject. This article, written for an edited book about public engagement with science more generally in the UK, attempts to summarise the UK’s experience in this area.

From public understanding to public engagement

Nanotechnology emerged as a focus of public interest and concern in the UK in 2003, prompted, not least, by a high-profile intervention on the subject from the Prince of Wales. This was an interesting time in the development of thinking about public engagement with science. A consensus about the philosophy underlying the public understanding of science movement, dating back to the Bodmer report (PDF) in 1985, had begun to unravel. This was prompted, on the one hand, by sustained and influential critique of some of the assumptions underlying PUS from social scientists, particularly from the Lancaster school associated with Brian Wynne. On the other hand, the acrimony surrounding the public debates about agricultural biotechnology and the government’s handling of the bovine spongiform encephalopathy outbreak led many to diagnose a crisis of trust between the public and the world of science and technology.

In response to these difficulties, a rather different view of the way scientists and the public should interact gained currency. According to the critique of Wynne and colleagues, the idea of “Public Understanding of Science” was founded on a “deficit model”, which assumed that the key problem in the relationship between the public and science was an ignorance on the part of the public both of the basic scientific facts and of the fundamental process of science, and that, if these deficits in knowledge were corrected, the deficit in trust would disappear. To Wynne, this was both patronising, in that it disregarded the many forms of expertise possessed by non-scientists, and highly misleading, in that it neglected the possibility that public concerns about new technologies might revolve around perceptions of the weaknesses of the human institutions that proposed to implement them, and not around technical matters at all.

The proposed remedy for the failings of the deficit model was to move away from an emphasis on promoting the public understanding of science to a more reflexive approach to engaging with the public, with an effort to achieve a real dialogue between the public and the scientific community. Coupled with this was a sense that the place to begin this dialogue was upstream in the innovation process, while there was still scope to steer its direction in ways which had broad public support. These ideas were succinctly summarised in a widely-read pamphlet from the think-tank Demos, “See-through science – why public engagement needs to move upstream”.

Enter nanotechnology

In response to the growing media profile of nanotechnology, in 2003 the government commissioned the Royal Society and the Royal Academy of Engineering to carry out a wide-ranging study on nanotechnology and the health and safety, environmental, ethical and social issues that might stem from it. The working group included, in addition to distinguished scientists, a philosopher, a social scientist and a representative of an environmental NGO. The process of producing the report itself involved public engagement, with two in-depth workshops exploring the potential hopes and concerns that members of the public might have about nanotechnology.

The report – “Nanoscience and nanotechnologies: opportunities and uncertainties” – was published in 2004, and amongst its recommendations was a whole-hearted endorsement of the upstream public engagement approach: “a constructive and proactive debate about the future of nanotechnologies should be undertaken now – at a stage when it can inform key decisions about their development and before deeply entrenched or polarised positions appear.”

Following this recommendation, a number of public engagement activities around nanotechnology have taken place in the UK. Two notable examples were Nanojury UK, a citizens’ jury which took place in Halifax in the summer of 2005, and Nanodialogues, a more substantial project which linked four separate engagement exercises carried out in 2006 and 2007.

Nanojury UK was sponsored jointly by the Cambridge University Nanoscience Centre and Greenpeace UK, with the Guardian as a media partner, and Newcastle University’s Policy, Ethics and Life Sciences Research Centre running the sessions. It was carried out in Halifax over eight evening sessions, with six witnesses drawn from academic science, industry and campaigning groups, considering a wide variety of potential applications of nanotechnology. Nanodialogues took a more focused approach; each of its four exercises, which were described as “experiments”, considered a single aspect or application area of nanotechnology. These included a very concrete example of a proposed use for nanotechnology – a scheme to use nanoparticles to remediate polluted groundwater – and the application of nanoscience in the context of a large corporation.

The Nanotechnology Engagement Group provided a wider forum to consider the lessons to be learnt from these and other public engagement exercises both in the UK and abroad; it reported in the summer of 2007 (the report is available here). This revealed a rather consistent message from public engagement. Broadly speaking, there was considerable excitement from the public about possible beneficial outcomes from nanotechnology, particularly in potential applications such as renewable energy and medicine. The more general value of such technologies in promoting jobs and economic growth was also recognised.

There were concerns, too. The questions that have been raised about potential safety and toxicity issues associated with some nanoparticles caused disquiet, and there were more general anxieties (probably not wholly specific to nanotechnology) about who controls and regulates new technology.

Reviewing a number of public engagement activities related to nanotechnology also highlighted some practical and conceptual difficulties. There was sometimes a lack of clarity about the purpose and role of public engagement; this leaves space for the cynical view that such exercises are intended, not to have a real influence on genuinely open decisions, but simply to add a gloss of legitimacy to decisions that have already been made. Related to this is the fact that bodies that commission public engagement may lack the institutional capacity and structures needed to benefit from it.

There are also some practical problems associated with the very idea of moving engagement “upstream” – the further the science is from potential applications, the more difficult it can be to communicate what can be complex issues, whose impact and implications may be subject to considerable disagreement amongst the experts themselves.

Connecting public engagement to policy

The big question to be asked about any public engagement exercise is “what difference has it made?” – has there been any impact on policy? For this to take place there needs to be careful choice of the subject for the public engagement, as well as commitment and capacity on the part of the sponsoring body or agency to use the results in a constructive way. A recent example from the Engineering and Physical Sciences Research Council offers an illuminating case study. Here, a public dialogue on the potential applications of nanotechnology to medicine and healthcare was explicitly coupled to a decision about where to target a research funding initiative, providing valuable insights that had a significant impact on the decision.

The background to this is the development of a new approach to science funding at EPSRC. This is to fund “Grand Challenge” projects, which are large-scale, goal-oriented interdisciplinary activities in areas of societal need. As part of the “Nanoscience – engineering through to application” cross-council priority area, it was decided to launch a Grand Challenge in the area of applications of nanotechnology to healthcare and medicine. This is potentially a very wide area, so it was felt necessary to narrow the scope of the programme somewhat. The definition of the scope was carried out with the advice of a “Strategic Advisory Team” – an advisory committee of about a dozen experts on nanotechnology, drawn from academia and industry, and including international representation. Inputs to the decision were sought through a wider consultation with academics and potential research “users”, defined here as clinicians and representatives of the pharmaceutical and healthcare industries. This consultation included a “Town Meeting” open to the research and user communities.

This represents a fairly standard approach to soliciting expert opinion for a decision about science funding priorities. In the light of the experience of public engagement in the context of nanotechnology, it was natural to ask whether one should seek public views as well. EPSRC’s Societal Issues Panel – a committee providing high-level advice on the societal and ethical context for the research EPSRC supports – enthusiastically endorsed the proposal that a public engagement exercise on nanotechnology for medicine and healthcare should be commissioned as an explicit part of the consultation leading up to the decision on the call’s scope.

A public dialogue on nanotechnology for healthcare was accordingly carried out during the Spring of 2008 by BMRB, led by Darren Bhattachary. This took the form of a pair of reconvened workshops in each of four locations – London, Sheffield, Glasgow and Swansea. Each workshop involved 22 lay participants, with care taken to ensure a demographic balance. The workshops were informed by written materials, approved by an expert Steering Committee; there was expert participation in each workshop from both scientists and social scientists. Personnel from the Research Council also attended; this was felt by many participants to be very valuable as a signal of the seriousness with which the organisation took the exercise.

The dialogues produced a number of rich insights that proved very useful in defining the scope of the final call (its report can be found here). In general, there was very strong support for medicine and healthcare as a priority area for the application of nanotechnology, and explicit rejection of an unduly precautionary approach. On the other hand, there were concerns about who benefits from the expenditure of public funds on science, and about issues of risk and the governance of technology. One overarching theme that emerged was a strong preference for new technologies that were felt to empower people to take control of their own health and lives.

One advantage of connecting a public dialogue with a concrete issue of funding priorities is that some very specific potential applications of nanotechnology could be discussed. As a result of the consultation with academics, clinicians and industry representatives, six topics had been identified for consideration. In each case, people at the workshops could identify both positive and negative aspects, but overall some clear preferences emerged. The use of nanotechnology to permit the early diagnosis of disease received strong support, as it was felt that this would provide information that would enable people to make changes to the way they live. The promise of nanotechnology to help treat serious diseases with fewer side effects, by more effective targeting of drugs, was also received with enthusiasm. On the other hand, the idea of devices that combine the ability to diagnose a condition with the means to treat it, by releasing therapeutic agents, caused some disquiet as being potentially disempowering. Other potential applications of nanotechnology which were less highly prioritised were its use to control pathogens, for example through nanostructured surfaces with intrinsic anti-microbial or anti-viral properties, nanostructured materials to help facilitate regenerative medicine, and the use of nanotechnology to help develop new drugs.

It was always anticipated that the results of this public dialogue would be used in two ways. Their most obvious role was as an input to the final decision on the scope of the Grand Challenge call, together with the outcomes of the consultations with the expert communities. It was the nanotechnology Strategic Advisory Team that made the final recommendation about the call’s scope, and in the event their recommendation was that the call should be in the two areas most favoured in the public dialogue – nanotechnology for early diagnosis and nanotechnology for drug delivery. In addition to this immediate impact, there is an expectation that the projects that are funded through the Grand Challenge should be carried out in a way that reflects these findings.

Public engagement in an evolving science policy landscape

The current interest in public engagement takes place at a time when the science policy landscape is undergoing larger changes, both in the UK and elsewhere in the world. We are seeing considerable pressure from governments for publicly funded science to deliver clearer economic and societal benefits. There is a growing emphasis on goal-oriented, intrinsically interdisciplinary science, with an agenda set by a societal and economic context rather than by an academic discipline – “mode II knowledge production”, in the phrase of Gibbons and his co-workers in their book The New Production of Knowledge: The Dynamics of Science and Research in Contemporary Societies. The “linear model” of innovation – in which pure, academic science, unconstrained by any issues of societal or economic context, is held to lead inexorably through applied science and technological development to new products and services and thus increased prosperity – is widely recognised to be simplistic at best, neglecting the many feedbacks and hybridisations at every stage of this process.

These newer conceptions of “technoscience” or “mode II science” lead to problems of their own. If the agenda of science is to be set by the demands of societal needs, it is important to ask who defines those needs. While it is easy to identify the location of expertise for narrowly constrained areas of science defined by well-established disciplinary boundaries, it is much less easy to see who has the expertise to define the technically possible in strongly multidisciplinary projects. And as the societal and economic context of research becomes more important in making decisions about science priorities, one could ask who it is who will subject the social theories of scientists to critical scrutiny. These are all issues which public engagement could be valuable in resolving.

The enthusiasm for involving the public more closely in decisions about science policy may not be universally shared, however. In some parts of the academic community, it may be perceived as an assault on academic autonomy. Indeed, in the current climate, with demands for science to have greater and more immediate economic impact, an insistence on more public involvement might be taken as part of a two-pronged assault on pure science values. There are some who consider public engagement more generally to be incompatible with the principles of representative democracy – in this view the Science Minister is responsible for the science budget and answers to Parliament, not to a small group of people in a citizens’ jury. Representatives of the traditional media might not always be sympathetic, either, as they might perceive it as their role to be the gatekeepers between the experts and the public. It is also clear that public engagement, done properly, is expensive and time-consuming.

Many of the scientists who have been involved with public engagement, however, have reported that the experience is very positive. In addition to being reminded of the generally high standing of scientists and the scientific enterprise in our society, they are prompted to re-examine unspoken assumptions and clarify their aims and objectives. There are strong arguments that public deliberation and interaction can lead to more robust science policy, particularly in areas that are intrinsically interdisciplinary and explicitly coupled to meeting societal goals. What will be interesting to consider as more experience is gained is whether embedding public engagement more closely in the scientific process actually helps to produce better science.

A synthetic, DNA-based molecular motor

The molecule DNA has emerged as the building block of choice for making precise, self-assembled nanoscale structures (in the laboratory, at least) – the specificity of the base-pair interaction makes it possible to design DNA sequences which will spontaneously form rather intricate structures. The field was founded by NYU’s Nadrian Seeman; I’ve written here before about DNA nanostructures from Erik Winfree and Paul Rothemund at Caltech, and Andrew Turberfield at Oxford. Now from Turberfield’s group comes a paper showing that DNA has the potential not just to make static structures, but to make functioning machines.

The paper, Coordinated Chemomechanical Cycles: A Mechanism for Autonomous Molecular Motion (abstract, subscription required for full article), by Simon Green, Jonathan Bath and Andrew Turberfield, was published in Physical Review Letters a couple of weeks ago (see also this Physical Review Focus article). The aim of the research was to design a synthetic analogue of the molecular motors that are so important in biology – these convert chemical energy (in biology, typically from a fuel like the energy-carrying molecule ATP) into mechanical energy. One important class of biological motors consists of something like a molecular walker which moves along a track – for example, the motor molecule myosin walks along an actin track to make our muscles contract, while kinesin walks along the microtubule network inside a cell to deliver molecules to where they are needed (to see how this works, take a look at this video from Ron Vale at UCSF). What Turberfield’s group has demonstrated is a synthetic DNA-based motor that walks along a DNA track when fed with a chemical fuel.

The way molecular motors work is very different to any motor we know about in our macroscopic world. They’re the archetypal “soft machines”, whose operation depends on the constant Brownian motion of the wet nanoscale world. The animation below shows a schematic of the motor cycle of the DNA motor. At rest, the motor is stuck down by both feet onto the track, which is also made of DNA. The first step is that a fuel molecule displaces one foot from the track; the foot part of the motor then catalyses the combination of this fuel molecule with another fuel molecule from the solution, releasing some chemical energy in the process. The foot is then free to bind back to the track again. The key point is that all these binding and unbinding events, together with the flexing of the components of the motor that allow it to pick up and put down its feet on the track, are driven by the random buffetings of Brownian motion. What makes it work as a motor is the fact that there’s an asymmetry in which foot is more likely to be displaced from the track; when the foot sticks back, each of the two possible positions is equally probable. This means that although each step in the motor is probabilistic, not deterministic, there’s a net movement, on average, in one direction. It’s the input of the chemical energy of the fuel that breaks the symmetry between forward and backward motion, making this motor a physical realisation of a “Brownian ratchet”.
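
To see how a biased random walk like this produces net motion, here is a minimal Monte Carlo sketch: unbinding is biased towards one foot, rebinding is completely symmetric, and a steady drift still emerges. The detachment bias and the number of cycles are illustrative assumptions of mine, not the kinetics reported in the paper.

```python
# Minimal Brownian-ratchet toy model: asymmetric detachment, symmetric rebinding.
# All numbers are illustrative assumptions, not parameters from the paper.
import random

def simulate_walker(n_cycles=10000, p_back_detach=0.8, seed=1):
    """Return the walker's net displacement (in track sites) after n_cycles.

    Each fuel-driven cycle: one foot is displaced from the track -- with
    probability p_back_detach it is the trailing foot, otherwise the leading
    foot. The freed foot then rebinds one site ahead of or behind the anchored
    foot with equal probability (the unbiased, Brownian part of the cycle).
    """
    rng = random.Random(seed)
    position = 0  # net displacement along the track, in sites
    for _ in range(n_cycles):
        trailing_foot_detaches = rng.random() < p_back_detach
        rebinds_forward = rng.random() < 0.5  # rebinding itself is unbiased
        if trailing_foot_detaches and rebinds_forward:
            position += 1   # trailing foot steps over: net forward motion
        elif not trailing_foot_detaches and not rebinds_forward:
            position -= 1   # leading foot steps back: net backward motion
        # otherwise the foot rebinds where it started: no net movement
    return position

if __name__ == "__main__":
    n_cycles = 10000
    final = simulate_walker(n_cycles)
    # Expected drift per cycle is 0.5 * (2 * 0.8 - 1) = 0.3 sites, so roughly
    # 3000 sites forward, even though every individual step is probabilistic.
    print(f"Net displacement after {n_cycles} cycles: {final} sites")
```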

In this paper the authors don’t directly show the motor in action – rather, they demonstrate experimentally the presence of the various bound and unbound states. But this does allow them to make a good estimate of the forces that the motor can be expected to exert – a few picoNewtons, very much in the ball-park of the forces exerted by biological motors.
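
As a rough indication of where a figure of a few piconewtons comes from, one can divide the free energy released per fuel reaction by the length of a step. The numbers in this little sketch are order-of-magnitude assumptions of mine for illustration, not the values measured in the paper.

```python
# Order-of-magnitude estimate of the force such a motor could exert, taking
# F ~ (free energy released per step) / (step length). Both numbers below are
# illustrative assumptions, not values from the paper.
kT = 4.1                          # thermal energy at room temperature, in pN*nm
free_energy_per_step = 10 * kT    # assume ~10 kT released per fuel reaction
step_length_nm = 10.0             # assume a step of order 10 nm along the track

max_force_pN = free_energy_per_step / step_length_nm
print(f"Force estimate: ~{max_force_pN:.0f} pN")   # a few piconewtons
```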

Schematic showing the operation of the DNA motor. Animation by Jonathan Bath.

Top US energy role for leading nanoscientist

It’s being reported that US President-Elect Obama will name the physicist Steven Chu as his Energy Secretary. Chu won the Nobel prize in 1997 (with Bill Phillips and Claude Cohen-Tannoudji) for his work on cooling and trapping atoms with laser light. One of the spin-offs from his discovery was the development of the “optical tweezers” technique, by which micron-size particles can be held and manipulated by a highly focused laser beam. Chu himself used this technique to manipulate individual DNA molecules, directly verifying the reptation theory of motion of long, entangled molecules. The technique has since become one of the mainstays of single molecule biophysics, used by a number of groups to characterise the properties of biological molecular motors.

Chu is currently director of the Lawrence Berkeley National Laboratory, where one of his major initiatives has been to launch Helios, a project to develop economic methods for harnessing solar energy on a large scale. One can get some idea of what Chu’s priorities are from looking at recent talks he has given, for example this one: The energy problem and how we might solve it (PDF). This concludes with these words: “We believe that aggressive support of energy science and technology, coupled with incentives that accelerate the concurrent development and deployment of innovative solutions, can transform the entire landscape of energy demand and supply … What the world does in the coming decade will have enormous consequences that will last for centuries; it is imperative that we begin without further delay.”

Talking nanotechnology on the street

The BBC’s Radio 4 has been running a series of short programs – Street Science – featuring scientists being sent out onto the streets to engage random members of the public about controversial bits of science. The latest program dealt with nanotechnology, with my friend and colleague Tony Ryan getting a good hearing in the centre of Sheffield. The programme (RealPlayer file) is well worth a listen, as he talks about applications in medicine and novel photovoltaics, how 2-in-1 shampoo works, Fantastic Voyage, Prince Charles and grey goo, the potential dangers of carbon nanotubes, and why nanosilver-based odour resistant socks may not be a good idea.

Books that inspired me

I’ve just done a brief interview with a journalist for the BBC’s Focus magazine, about the three popular science books on nanotechnology that have most inspired me. I’ve already written about my nanotechnology bookshelf, but this time, when I came to choose my three favourite books to talk about, it turned out that they weren’t directly about nanotechnology at all. So here’s my alternative list of three non-nanotechnology books that I think all nanotechnologists could benefit from reading.

The New Science of Strong Materials by J.E. Gordon. To say that this is the best book ever written about materials science might not sound like particularly high praise, but I was hugely inspired by this book when I read it as a teenager, and every time I re-read it I find in it another insight. It was first published in 1968, long before anyone was talking about nanotechnology, but it beautifully lays out the principles by which one might design materials from first principles, relating macroscopic properties to the ways in which their atoms and molecules are arranged, principles which even now are not always as well known as they should be to people who write about nanotechnology. It’s a forward-looking book, but it’s also full of incidental detail about the history of technology and the science that has underlain the skills of craftsmen using materials through the ages. It also looks to the natural world, discussing what makes materials of biological origin, like wood, so good.

The Self-Made Tapestry by Philip Ball. Part of the appeal of this is the beauty of the pictures, depicting the familiar natural patterns of clouds and sand-dunes, as well as the intricate nanoscale structure of self-assembled block copolymer phases and the shells of diatoms. But alongside the illustrations there is an accurate and clear account of the principles of self-assembly and self-organisation that cause these intricate patterns to emerge, not through the execution of any centralised plan, but as a result of the application of simple rules describing the interactions of the components of these systems.

Out of Control by Kevin Kelly. This is also about emergence, but it casts its net much more widely, to consider swarm behaviour in insects, economics and industrial ecologies, and flocks of insect-like robots. The common theme is the idea that one can gain power by relinquishing control, harnessing the power of adaptation and evolution in complex systems in which non-trivial behaviour arises from the collective actions of many interacting objects or agents. The style is evangelical, perhaps to the extent of overselling some of these ideas, and some may, like me, not be wholly comfortable with the libertarian outlook that underlies the extension of these ideas into political directions, but I still find it hugely provocative and exciting.

What’s meant by “food nanotechnology”?

A couple of weeks ago I took part in a dialogue meeting in Brussels organised by the CIAA, the Confederation of the Food and Drink Industries of the EU, about nanotechnology in food. The meeting involved representatives from big food companies, from the European Commission and agencies like the European Food Safety Authority, together with consumer groups like BEUC, and the campaigning group Friends of the Earth Europe. The latter group recently released a report on food nanotechnology – Out of the laboratory and on to our plates: Nanotechnology in food and agriculture; according to the press release, this “reveals that despite concerns about the toxicity risks of nanomaterials, consumers are unknowingly ingesting them because regulators are struggling to keep pace with their rapidly expanding use.” The position of the CIAA is essentially that nanotechnology is an interesting technology that is currently at the research stage, rather than one that has yet made it into products. One can get a good idea of the research agenda of the European food industry from the European Technology Platform Food for Life. As the only academic present, I tried in my contribution to clarify a little the different things people mean by “food nanotechnology”. Here, more or less, is what I said.

What makes the subject of nanotechnology particularly confusing and contentious is the ambiguity of the definition of nanotechnology when applied to food systems. Most people’s definitions are something along the lines of “the purposeful creation of structures with length scales of 100 nm or less to achieve new effects by virtue of those length-scales”. But when one attempts to apply this definition in practice one runs into difficulties, particularly for food. It’s this ambiguity that lies behind the difference of opinion we’ve heard about today about how widespread the use of nanotechnology in foods already is. On the one hand, Friends of the Earth says they know of 104 nanofood products on the market already (and some analysts suggest the number may be more than 600). On the other hand, the CIAA (the Confederation of Food and Drink Industries of the EU) maintains that, while active research in the area is going on, no actual nanofood products are yet on the market. In fact, both parties are, in their different ways, right; the problem is the ambiguity of definition.

The issue is that food is naturally nano-structured, so that too wide a definition ends up encompassing much of modern food science, and indeed, if you stretch it further, some aspects of traditional food processing. Consider the case of “nano-ice cream”: the FoE report states that “Nestlé and Unilever are reported to be developing a nano-emulsion based ice cream with a lower fat content that retains a fatty texture and flavour”. Without knowing the details of this research, what one can be sure of is that it will involve essentially conventional food processing technology in order to control fat globule structure and size on the nanoscale. If the processing technology is conventional (and the economics of the food industry dictates that it must be), what makes this nanotechnology, if anything does, is the fact that analytical tools are available to observe the nanoscale structural changes that lead to the desirable properties. What makes this nanotechnology, then, is simply knowledge. In the light of the new knowledge that new techniques give us, we could even argue that some traditional processes, which it now turns out involve manipulation of the structure on the nanoscale to achieve some desirable effects, would constitute nanotechnology if it were defined this widely. For example, traditional whey cheeses like ricotta are made by creating the conditions for the whey proteins to aggregate into protein nanoparticles. These subsequently aggregate to form the particulate gels that give the cheese its desirable texture.

It should be clear, then, that there isn’t a single thing one can call “nanotechnology” – there are many different technologies, producing many different kinds of nano-materials. These different types of nanomaterials have quite different risk profiles. Consider cadmium selenide quantum dots, titanium dioxide nanoparticles, sheets of exfoliated clay, fullerenes like C60, casein micelles, phospholipid nanosomes – the risks and uncertainties of each of these examples of nanomaterials are quite different and it’s likely to be very misleading to generalise from any one of these to a wider class of nanomaterials.

To begin to make sense of the different types of nanomaterial that might be present in food, there is one very useful distinction. This is between engineered nanoparticles and self-assembled nanostructures. Engineered nanoparticles are covalently bonded, and thus are persistent and generally rather robust, though they may have important surface properties such as catalytic activity, and they may be prone to aggregate. Examples of engineered nanoparticles include titanium dioxide nanoparticles and fullerenes.

In self-assembled nanostructures, though, molecules are held together by weak forces, such as hydrogen bonds and the hydrophobic interaction. The weakness of these forces renders them mutable and transient; examples include soap micelles, protein aggregates (for example the casein micelles formed in milk), liposomes and nanosomes and the microcapsules and nanocapsules made from biopolymers such as starch.

So what kind of food nanotechnology can we expect? Here are some potentially important areas:

• Food science at the nanoscale. This is about using a combination of fairly conventional food processing techniques supported by the use of nanoscale analytical techniques to achieve desirable properties. A major driver here will be the use of sophisticated food structuring to achieve palatable products with low fat contents.
• Encapsulating ingredients and additives. The encapsulation of flavours and aromas at the microscale to protect delicate molecules and enable their triggered or otherwise controlled release is already widespread, and it is possible that decreasing the lengthscale of these systems to the nanoscale might be advantageous in some cases. We are also likely to see a range of “nutriceutical” molecules come into more general use.
• Water dispersible preparations of fat-soluble ingredients. Many food ingredients are fat-soluble; as a way of incorporating these in food and drink without fat, manufacturers have developed stable colloidal dispersions of these materials in water, with particle sizes in the range of hundreds of nanometers. For example, the substance lycopene, which is familiar as the molecule that makes tomatoes red and which is believed to offer substantial health benefits, is marketed in this form by the German company BASF.

What is important in this discussion is clarity – definitions are important. We’ve seen discrepancies between estimates of how widespread food nanotechnology is in the marketplace now, and these discrepancies lead to unnecessary misunderstanding and distrust. Clarity about what we are talking about, and a recognition of the diversity of technologies we are talking about, can help remove this misunderstanding and give us a sound basis for the sort of dialogue we’re participating in today.

Nanoparticles down the drain

With significant amounts of nanomaterials now entering markets, it’s clearly worth worrying about what’s going to happen to these materials after disposal – is there any danger of them entering the environment and causing damage to ecosystems? These are the concerns of the discipline of nano-ecotoxicology; on the evidence of the conference I was at yesterday, on the Environmental effects of nanoparticles, at Birmingham, this is an expanding field.

From the range of talks and posters, there seems to be a heavy focus (at least in Europe) on those few nanomaterials which really are entering the marketplace in quantity – titanium dioxide, of sunscreen fame, and nano-silver, with some work on fullerenes. One talk, by Andrew Johnson, of the UK’s Centre for Ecology and Hydrology at Wallingford, showed nicely what the outline of a comprehensive analysis of the environmental fate of nanoparticles might look like. His estimate is that 130 tonnes of nano-titanium dioxide a year is used in sunscreens in the UK – where does this stuff ultimately go? Down the drain and into the sewers, of course, so it’s worth worrying about what happens to it then.

At the sewage plant, solids are separated from the treated water, and the first thing to ask is where the titanium dioxide nanoparticles go. The evidence seems to be that a large majority end up in the sludge. Some 57% of this treated sludge is spread on farmland as fertilizer, while 21% is incinerated and 17% goes to landfill. There’s work to be done, then, in determining what happens to the nanoparticles – do they retain their nanoparticulate identity, or do they aggregate into larger clusters? One needs then to ask whether those that survive are likely to cause damage to soil microorganisms or earthworms. Johnson presented some reassuring evidence about earthworms, but there’s clearly more work to be done here.

Making a series of heroic assumptions, Johnson made some estimates of how many nanoparticles might end up in the river. Taking a worst case scenario, with a drought and heatwave in the southeast of England (they do happen, I’m old enough to remember), he came up with an estimate of 8 micrograms/litre in the Thames, which is still more than an order of magnitude less than the concentration that has been shown to start to affect, for example, rainbow trout. This is reassuring, but, as one questioner pointed out, one still might worry about the nanoparticles accumulating in sediments to the detriment of filter feeders.
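
To see how estimates like these hang together, here is a sketch of the underlying mass-balance arithmetic. The 130 tonnes per year input and the sludge split are the figures quoted above; the fraction escaping in the effluent and the drought-level river flow are hypothetical placeholders of mine, not the numbers Johnson actually used.

```python
# Sketch of the mass-balance arithmetic behind estimates of this kind.
# Input tonnage and sludge split are the figures quoted above; the effluent
# fraction and low-flow discharge are hypothetical placeholders for illustration.
tio2_input_tonnes_per_year = 130.0        # nano-TiO2 entering UK sewers via sunscreen

fraction_to_sludge = 0.95                 # assumption: "a large majority" ends up in sludge
sludge = tio2_input_tonnes_per_year * fraction_to_sludge
to_farmland    = 0.57 * sludge            # spread on farmland as fertiliser
to_incinerator = 0.21 * sludge
to_landfill    = 0.17 * sludge

# Worst-case river concentration: whatever escapes with the treated effluent,
# diluted into a drought-level flow (placeholder numbers, for illustration only).
effluent_tonnes_per_year = tio2_input_tonnes_per_year - sludge
effluent_micrograms = effluent_tonnes_per_year * 1e12        # tonnes -> micrograms
low_flow_litres_per_year = 20.0 * 86400 * 365 * 1000         # assume ~20 m^3/s drought flow

concentration_ug_per_litre = effluent_micrograms / low_flow_litres_per_year
print(f"Sludge spread on farmland: {to_farmland:.0f} t/yr")
print(f"Illustrative worst-case river concentration: {concentration_ug_per_litre:.1f} ug/l")
```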

Our faith in technology

The following essay is the pre-edited version of a piece of mine that will be published in a forthcoming book “Human Futures: Art in an Age of Uncertainty”, edited by Andy Miah and published by FACT (Foundation for Art and Creative Technology) & Liverpool University Press.

The days when our society was bound together by a single shared faith seem long gone. But at some level, most of us share a faith in technology, a faith that next year we’ll be able to buy a faster computer, a digital camera with more megapixels, or an MP3 player that holds more songs, and it will cost us less. For some, this is part of a broader faith in the power of science and technology both to deliver a better life and to give a coherent way of thinking about the world. Others might have a more nuanced view, seeing the results of techno-science as very much a mixed blessing, and accepting the gadgets, while rejecting the scientific worldview. For better or worse, though, we’re in the state we’re in now because of technology, and indeed we existentially depend on it. But it’s equally clear that the technology we have can’t be sustained. Whatever happens, this tension must be resolved; whether we believe in progress or not, things can’t go on as they are.

There’s a new set of emerging technologies to bring these arguments into focus. Nanotechnology manipulates matter at the level of atoms and molecules, and promises a new level of control over the material world[i]. Biology has already moved on from being an essentially descriptive and explanatory activity; it’s now taking on the character of a project to intervene in and reshape the living world. Up to now, the achievements of biotechnology have come from fairly modest modifications to biological systems, but a new discipline of synthetic biology is currently emerging, with the much more ambitious goal of a wholesale reengineering of living systems for human purposes, and possibly creating entirely novel living systems. In large organisms like humans, we’re starting to appreciate the complexities of communications within and between the cells that together make up the organism; it’s this understanding of the rich social lives of cells that will make possible the development of stem cell therapies and tissue engineering. Information technology both enables and is enabled by these advances; it’s computing power that underlay the decoding of the human genome and which drives the development of sciences like bioinformatics, which are giving us the tools to understand the informational basis of life. The other side of the coin is that it is developments in nanotechnology that drive the relentless increase in computing power that is obvious to every consumer; in the near future similar advances will contribute to the growing importance of the computer as an invisible component of the fabric of life – ubiquitous computing. Perhaps most significantly of all for our conceptions of what it means to be human, cognitive science expands our understanding of how the brain works as an organ of information processing, prompting dreams both of a reductionist understanding of consciousness and of the possibility of augmenting the functionality of the brain.

What will all these bewildering developments mean for the way the human experience evolves over the coming decades? Let’s get some perspective by reminding ourselves of technology’s role in getting us to where we are now.

No-one can doubt that our lives now are hugely different to the lives of our forebears two hundred years ago, and that this dramatic transformation has come about largely through new technologies. The world of material things – food, buildings, clothes, tools – has been transformed by new materials and processes, with mass production bringing complex artefacts within reach of everyone. Information and communications have been transformed; first telephones removed the need for physical presence for two-way communication, then computers and the internet have come together to give unprecedented ways of storing, accessing and processing a vast universe of information. Now all these technologies have converged and become ubiquitous through mobile telephony and wireless networking. Meanwhile life expectancy has doubled, through a combination of material sufficiency, the development of scientific medicine, and the implementation of public health measures. We’ve started to assert a new control over human biology – we already take for granted control over our reproduction through the contraceptive pill and assisted fertility, and we are beginning to anticipate a future in which we’ll have access to bodily repairs and spare parts, through the promise of tissue engineering and stem cell therapy.

It’s easy to be dazzled by all that technology has achieved, but it’s important to remember that these developments have all been underpinned by a single factor – the availability of easily accessible, concentrated forms of energy. None of this would have happened if we had not been able to fuel our civilisation by extracting black stuff from the ground and burning it. In 1800, the total energy consumption in the UK amounted to about 20 GJ per person per year. By 1900 this figure had increased by more than a factor of five, and today we use 175 GJ. Since this is predominantly in the form of fossil fuels, one graphic way of restating this figure is that it amounts to the equivalent of more than 4 tonnes of oil per person per year[ii].

It’s obvious to everyone that they use fossil fuel energy when they put petrol in their car, or turn the house heating on. But it’s important to appreciate how much energy is embodied in the material things around us, in our built environment and the artefacts we use. It takes a tonne and a quarter of oil to make ten tonnes of cement, and eight and a quarter tonnes of oil to make ten tonnes of steel. For a really energy-hungry material like aluminium, it takes nearly four tonnes of oil to produce a single tonne. And if we build with oil, and make things out of oil, in effect we eat oil too, thanks to our reliance on intensive agriculture with its high energy inputs. To grow ten tonnes of wheat (roughly the output of a hectare, in the most favourable circumstances) takes 200 kg of artificial fertiliser, which itself embodies 130 kg of oil, as well as another 200 kg of oil in other energy inputs.
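
To put these figures on a common footing, the same arithmetic can be expressed as embodied energy per tonne of material, assuming the usual conversion of roughly 42 GJ per tonne of oil equivalent (the conversion factor is my assumption; the oil quantities are those quoted above).

```python
# The embodied-energy figures above, re-expressed per tonne of material.
# The ~42 GJ per tonne of oil equivalent conversion is an assumed standard
# factor; the oil quantities themselves are those quoted in the text.
GJ_PER_TONNE_OIL = 42.0

materials = {
    # material: tonnes of oil per tonne of product
    "cement":    1.25 / 10,             # 1.25 t oil per 10 t of cement
    "steel":     8.25 / 10,             # 8.25 t oil per 10 t of steel
    "aluminium": 4.0,                   # ~4 t oil per tonne
    "wheat":     (0.130 + 0.200) / 10,  # 130 kg oil in fertiliser + 200 kg other inputs, per 10 t of grain
}

for material, oil_per_tonne in materials.items():
    print(f"{material:>9}: ~{oil_per_tonne * GJ_PER_TONNE_OIL:.0f} GJ embodied per tonne")

# And the per-capita figure: 175 GJ a year is equivalent to roughly
print(f"UK per-capita use: 175 GJ/yr = {175 / GJ_PER_TONNE_OIL:.1f} tonnes of oil equivalent per year")
```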

Some people have the conceit that we’ve moved beyond a dirty old economy of power stations and steel works to a new, weightless economy based on processing information. Nothing could be further from the truth; in addition to our continuing dependence on material things, with their substantial embodiment of energy, information and communications technology itself needs a surprisingly large energy input. The ICT industry in the UK is actually responsible for a comparable share of carbon dioxide generation to aviation. The energy consumption of that giant of the modern information economy, Google, is a closely guarded secret; what is clear, though, is that the choice of location of its data centres is driven by the need to be close to reliable, cheap power, like hydroelectric power plants or nuclear power stations, in much the same way that aluminium smelters are sited.

Perhaps the most complex and interesting relationship is that between energy use and measures of health and physical well-being, like infant mortality and life expectancy. It’s clear, both from the record of history and the correlation of these figures with energy use for less well developed countries at the moment, that there’s a strong correlation between per capita energy use and life expectancy, at the lower end of the range. It seems that increasing per capita energy use up to 60 or 70 GJ per year brings substantial benefits, presumably by ensuring that people are reasonably well nourished, and allowing basic public health measures like access to clean water and having a working sewerage system. Further improvements result from increasing energy consumption above this, presumably by enabling increasingly comprehensive medical services, but beyond a per capita consumption around 110 GJ a year there is very little correlation between energy use and life expectancy. The lesson of this is that, while it is clear that material insufficiency is bad for one’s health, sometimes excess can have its own problems.

This emphasis on our dependence on fossil fuel energy should make it clear that, whatever the prospects for exciting new developments in the future, there is a certain fragility to our situation. The large-scale use of fossil fuels has come at a price – in man-made climate change – whose full dimensions we don’t yet know, and we are once again seeing problems of pressures on resources like food and fuel. Food shortages and bad harvests remind us that technology hasn’t allowed us to transcend nature – we’re still dependent on the rains arriving at the right time in the right quantity. We’ve influenced the climate, on which we depend, but in ways that are uncontrolled and unpredicted. The lessons of history teach us that a societal collapse is a real possibility, and one of the consequences of this would be an abrupt end to the hopes of further technological progress[iii].

We can hope that these emerging technologies themselves can help avert this kind of disastrous outcome. The only renewable energy source that realistically has the capacity to underpin a large-scale, industrial society is solar energy, but current technologies for harvesting this are too expensive and cannot be produced on anything like the scales needed to make a serious dent in the world’s energy needs. There is a real possibility that nanotechnology will change this situation, making possible the use of solar energy on very large scales. Other developments – for example, in batteries and fuel cells – would then allow us to store and distribute this energy, while we could anticipate a further continuation of the trends that allow us to do more with less, reducing the energy input required to achieve a given level of prosperity.

Computers will probably go on getting faster, with the current exponential growth of computing power (Moore’s law) continuing for perhaps ten more years. After that, we’re relying on new developments in nanotechnology to allow us to keep that trajectory going. Less obvious, but in some ways more interesting, will be the ways computing power becomes seamlessly integrated into the material fabric of life. One of the areas this will impact is medicine; developments in sensors should mean that we diagnose diseases earlier and can personalise treatments to the particularities of an individual’s biology. Therapies, too, will become more effective and less prone to side-effects, thanks to nanoscale delivery devices for targeting drugs and the development of engineered replacement tissues and organs.

So perhaps our optimistic goal for the next fifty years should be that these emerging technologies contribute to making a prosperous global society on a sustainable basis. A steady world population should universally enjoy long and pain-free lives at a decent standard of living, this being underpinned by sustainable technologies, in particular renewable energy from the sun, and supported by a ubiquitous (but largely invisible) infrastructure of ambient computing, distributed sensing, and responsive materials.

For some, this level of ambition for technology isn’t enough. Instead they seek transcendence through technology and, through human enhancement, our transfiguration to qualitatively different and superior types of beings. It’s the technological trends we’ve discussed already that are invoked to support this view, but with a particularly superlative vision of the potential of technology[iv]. For example, there’s an extrapolation from the existing developments of nanotechnology, via Drexler’s conception of atom-by-atom nanomanufacturing[v], to a world of superabundance, in which any material object is available at no cost. From modern medicine, and the future promise of nanomedicine, there’s the promise of superlongevity – the idea that a “cure” for the “disease” of ageing is imminent, and the serious suggestion that people alive today might live for a thousand years[vi]. From some combination of the development of ever-faster computers and the possibility of the augmentation of human mental capabilities by implants, comes the idea that we will shortly create a greater than human intelligence, either as a purely artificial intelligence in a computer, or through a radical enhancement of a human mind. This superintelligence is anticipated to be the greatest superlative technology of all, as by applying its own intelligence to itself it will be able rapidly and recursively to improve all these technologies, including its own intelligence. This will lead to a moment of ineffably rapid technological and societal change called, by its devotees, the Singularity[vii].

The technical bases for these superlative predictions are strongly contested by researchers in the relevant fields[viii]. This doesn’t seem to have a great deal of impact on the vehemence with which such views are held by those (largely online) communities of transhumanists and singularitarians for whom these shared beliefs define a shared identity. The essentially eschatological character of singularitarian beliefs is obvious – it’s this that is well captured in the dismissive epithet “the rapture of the nerds”. While some proponents of these views have an aggressively rational, atheist outlook, others are explicit in highlighting a spiritual dimension to their belief, in a cosmological outlook that seems to owe something, whether consciously or unconsciously, to the Catholic mystic Teilhard de Chardin[ix]. Belief in the singularity, then, as well as being a symptom of a particular moment of rapid technological change, should perhaps be placed in that tradition of millennial, utopian thinking that’s been a recurring feature in Western thought for many centuries.

For me, the main sin of singularitarianism is one shared much more widely – that is the idea of technological determinism. This is the idea that technology has an autonomous, predictable momentum of its own, largely beyond social and political influence, and that societal and economic changes are governed by these technological developments. It’s the everyday observation of the rapidity of technological change that gives this view such force; what keeps new, faster computers appearing in the shops on schedule is Moore’s law. This is the observation, made in 1965 by Gordon Moore, a co-founder of the microprocessor company Intel, that computer power is growing exponentially, with the number of transistors on a single chip roughly doubling every two years. To futurists like Kurzweil, Moore’s law is simply one example of a more general rule of exponential technological growth. But simply to give Moore’s observation the name “law” is to mistake its character in fundamental ways. It isn’t a law; it is a self-fulfilling prophecy, a way of coordinating and orchestrating the deliberate and planned action of the many independent actors in the semiconductor industry and in commercial and academic research and development, in the pursuit of a common goal of continuous incremental improvement in their products. Moore’s law is not a law describing the way technology develops as some kind of independent force; it is a tool for coordinating and planning human action.
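
For what it’s worth, the arithmetic of the observation itself takes only a few lines to state; the baseline transistor count below is an illustrative assumption, used only to show how steep the exponential is.

```python
# The arithmetic of the observation: doubling every two years means
# N(t) = N0 * 2**(t / 2). The baseline count is an illustrative assumption.
def transistors(years_elapsed, n0=2300, doubling_period_years=2.0):
    """Projected transistor count per chip after a given number of years."""
    return n0 * 2 ** (years_elapsed / doubling_period_years)

for years in (0, 10, 20, 40):
    print(f"after {years:>2} years: ~{transistors(years):,.0f} transistors per chip")
```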

We need to be very aware that technology need not advance at all; it depends on a set of stable societal and economic arrangements that aren’t by any means guaranteed. If there’s a collapse of society due to resource shortage or runaway climate change that will bring an abrupt end to Moore’s law and to all kinds of other progress. But a more optimistic view is to assert that we aren’t slaves to technology as an external, autonomous force; instead, technology is a product of society and our aspiration should be that it is directed by society to promote widely shared goals.

i For an overview, see “Soft Machines: nanotechnology and life”, Richard A.L. Jones, Oxford University Press (2004).

ii An excellent overview of the role of energy in modern society can be found in “Energy in Nature and Society”, Vaclav Smil, MIT Press, Cambridge MA, 2008, on which the subsequent discussion extensively draws.

iii This point is eloquently made by Jared Diamond in “Collapse: how societies choose to fail or succeed”, Viking (2005).

iv This characterisation of the “Superlative technology discourse” owes much to Dale Carrico.

v K.E. Drexler, “Engines of Creation: the coming era of nanotechnology” (Anchor, 1987) and K.E. Drexler, “Nanosystems: molecular machinery, manufacturing and computation” (Wiley, 1992).

vi Aubrey de Grey and Michael Rae, “Ending Ageing: the rejuvenation strategies that could reverse human ageing in our lifetime” (St Martin’s Press, 2007)

vii Ray Kurzweil, “The Singularity is Near: when humans transcend biology” (Penguin, 2006)

viii See, for example, the essays in a special issue of IEEE Spectrum: “The Singularity: a special report”, June 2008, including my own piece “Rupturing the Nanotech Rapture”. For a critique of proposals for radical life extension, see “Science fact and the SENS agenda”, Warner et al, EMBO reports 6, 11, 1006-1008 (2005) (subscription required).

ix For an example, consider this quotation from Ray Kurzweil’s “The Singularity is Near”: “Evolution moves towards greater complexity, greater elegance, greater knowledge, greater intelligence, greater beauty, greater creativity and greater levels of subtle attributes such as love. In every monotheistic tradition God is likewise described as all of these qualities, only without any limitation: infinite knowledge, infinite intelligence, infinite beauty, infinite creativity and infinite love, and so on. Of course, even the accelerating growth of evolution never achieves an infinite level, but as it explodes exponentially it certainly moves rapidly in that direction. So evolution moves inexorably toward this conception of God, although never quite reaching this ideal. We can regard, therefore, the freeing of our thinking from the severe limitations of its biological form to be an essentially spiritual undertaking”.

“Plastics are precious – they’re buried sunshine”

A disappearing dress from the Wonderland project. Photo by Alex McGuire at the London College of Fashion.

I’m fascinated by the subtle science of polymers, and it’s a cause of regret to me that the most common manifestations of synthetic polymers are in the world of cheap, disposable plastics. The cheapness and ubiquity of plastics, and the problems caused when they’re carelessly thrown away, blind us to the utility and versatility of these marvellously mutable materials. But there’s something temporary about their cheapness; it’s a consequence of the fact that they’re made from oil, and as oil becomes scarcer and more expensive we’ll need to appreciate the intrinsic value of these materials much more.

These thoughts are highlighted by a remarkable project put together by the artist and fashion designer Helen Storey and my Sheffield friend and colleague, chemist Tony Ryan. At the centre of the project is an exhibition of exquisitely beautiful dresses, designed by Helen and made from fabrics handmade by textile designer Trish Belford. The essence of fashion is transience, and these dresses literally don’t last long; the textiles they are made from are water soluble and are dissolved during the exhibition in tanks of water. The process of dissolution has a beauty of its own, captured in this film by Pinny Grylls.

Another film, by the fashion photographer Nick Wright, reminds us of the basic principles underlying the thermodynamics of polymer dissolution. The exhibition will be moving to the Ormeau Baths Gallery in Belfast in October, and you will be able to read more about it in that month’s edition of Vogue.

Discussion meeting on soft nanotechnology

A forthcoming conference in London will be discussing the “soft” approach to nanotechnology. The meeting – Faraday Discussion 143: Soft Nanotechnology – is organised by the UK’s Royal Society of Chemistry, and follows a rather unusual format. Selected participants in the meeting submit a full research paper, which is peer reviewed and circulated, before the meeting, to all the attendees. The meeting itself concentrates on a detailed discussion of the papers, rather than a simple presentation of the results.

The organisers describe the scope of the meeting in these terms: “Soft nanotechnology aims to build on our knowledge of biological systems, which are the ultimate example of ‘soft machines’, by:

  • Understanding, predicting and utilising the rules of self-assembly from the molecular to the micron-scale
  • Learning how to deal with the supply of energy into dynamically self-assembling systems
  • Implementing self-assembly and ‘wet chemistry’ into electronic devices, actuators, fluidics, and other ‘soft machines’.”

An impressive list of invited international speakers includes Takuzo Aida, from the University of Tokyo, Chris Dobson, from the University of Cambridge, Ben Feringa, from the University of Groningen, Olli Ikkala, from Helsinki University of Technology, Chengde Mao, from Purdue University, Stefan Matile, from the University of Geneva, and Klaus J Schulten, from the University of Illinois. The conference will be wrapped up by Harvard’s George Whitesides, and I’m hugely honoured to have been asked to give the opening talk.

The meeting is not until this time next year, in London, but if you want to present a paper you need to get an abstract in by 11 July. Faraday Discussions in the past have featured lively discussions, to say the least; it’s a format that’s tailor-made for allowing controversies to be aired and strong positions to be taken.