Our faith in technology

The following essay is the pre-edited version of a piece of mine that will be published in a forthcoming book “Human Futures: Art in an Age of Uncertainty”, edited by Andy Miah and published by FACT (Foundation for Art and Creative Technology) & Liverpool University Press.

The days when our society was bound together by a single shared faith seem long gone. But at some level, most of us share a faith in technology, a faith that next year we’ll be able to buy a faster computer, a digital camera with more megapixels, or an MP3 player that holds more songs, and it will cost us less. For some, this is part of a broader faith in the power of science and technology both to deliver a better life and to give a coherent way of thinking about the world. Others might have a more nuanced view, seeing the results of techno-science as very much a mixed blessing, accepting the gadgets while rejecting the scientific worldview. For better or worse, though, we’re in the state we’re in now because of technology, and indeed we existentially depend on it. But it’s equally clear that the technology we have can’t be sustained. Whatever happens, this tension must be resolved; whether we believe in progress or not, things can’t go on as they are.

A new set of emerging technologies is bringing these arguments into focus. Nanotechnology manipulates matter at the level of atoms and molecules, and promises a new level of control over the material world[i]. Biology has already moved on from being an essentially descriptive and explanatory activity, and is now taking on the character of a project to intervene in and reshape the living world. Up to now, the achievements of biotechnology have come from fairly modest modifications to biological systems, but a new discipline of synthetic biology is currently emerging, with the much more ambitious goal of wholesale reengineering of living systems for human purposes, and possibly of creating entirely novel living systems. In large organisms like humans, we’re starting to appreciate the complexity of communication within and between the cells that together make up the organism; it’s this understanding of the rich social lives of cells that will make possible the development of stem cell therapies and tissue engineering. Information technology both enables and is enabled by these advances; it was computing power that underlay the decoding of the human genome, and computing power that drives the development of sciences like bioinformatics, which are giving us the tools to understand the informational basis of life. The other side of the coin is that it is developments in nanotechnology that drive the relentless increase in computing power obvious to every consumer; in the near future similar advances will contribute to the growing importance of the computer as an invisible component of the fabric of life – ubiquitous computing. Perhaps most significant of all for our conceptions of what it means to be human, cognitive science is expanding our understanding of how the brain works as an organ of information processing, prompting dreams both of a reductionist understanding of consciousness and of the possibility of augmenting the functionality of the brain.

What will all these bewildering developments mean for the way the human experience evolves over the coming decades? Let’s get some perspective by reminding ourselves of technology’s role in getting us to where we are now.

No-one can doubt that our lives now are hugely different to the lives of our forbears two hundred years ago, and that this dramatic transformation has come about largely through new technologies. The world of material things – food, buildings, clothes, tools – has been transformed by new materials and processes, with mass production bringing complex artefacts within reach of everyone. Information and communications have been transformed; first telephones removed the need for physical presence for two-way communication, then computers and the internet have come together to give unprecedented ways of storing, accessing and processing a vast universe of information. Now all these technologies have converged and become ubiquitous through mobile telephony and wireless networking. Meanwhile life expectancy has doubled, through a combination of material sufficiency, the development of scientific medicine, and the implementation of public health measures. We’ve started to assert a new control over human biology – we already take for granted control over our reproduction through the contraceptive pill and assisted fertility, and we are beginning to anticipate a future in which we’ll have access to bodily repairs and spare parts, through the promise of tissue engineering and stem cell therapy.

It’s easy to be dazzled by all that technology has achieved, but it’s important to remember that these developments have all been underpinned by a single factor – the availability of easily accessible, concentrated forms of energy. None of this would have happened if we had not been able to fuel our civilisation by extracting black stuff from the ground and burning it. In 1800, the total energy consumption in the UK amounted to about 20 GJ per person per year. By 1900 this figure had increased by more than a factor of five, and today we use 175 GJ. Since this is predominantly in the form of fossil fuels, one graphic way of restating this figure is that it amounts to the equivalent of more than 4 tonnes of oil per person per year[ii].

It’s obvious to everyone that they use fossil fuel energy when they put petrol in their car, or turn the house heating on. But it’s important to appreciate how much energy is embodied in the material things around us, in our built environment and the artefacts we use. It takes a tonne and a quarter of oil to make ten tonnes of cement, and eight and a quarter tonnes of oil to make ten tonnes of steel. For a really energy hungry material like aluminium, it takes nearly four tonnes of oil to produce a single tonne. And if we build with oil, and make things out of oil, in effect we eat oil too, thanks to our reliance on intensive agriculture with its high energy inputs. To grow ten tonnes of wheat (roughly the output of a hectare, in the most favourable circumstances) takes 200 kg of artificial fertiliser, which itself embodies 130 kg of oil, as well as the input of another 200 kg of oil in other energy inputs.
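To make the embodied-energy figures above concrete, here is a minimal worked example in Python. It simply restates the numbers quoted in this paragraph, together with the standard approximation that a tonne of oil equivalent corresponds to roughly 42 GJ, so the derived per-tonne figures are illustrative rather than independent data.

```python
# Embodied oil per tonne of product, using the figures quoted in the text.
# The 42 GJ per tonne of oil equivalent is a standard approximation.
GJ_PER_TONNE_OIL = 42.0

# (tonnes of oil, tonnes of product), as quoted above
embodied = {
    "cement":    (1.25, 10.0),
    "steel":     (8.25, 10.0),
    "aluminium": (4.0,  1.0),
}

for material, (oil, product) in embodied.items():
    oil_per_tonne = oil / product
    print(f"{material:10s}: {oil_per_tonne:.2f} t oil per tonne "
          f"(~{oil_per_tonne * GJ_PER_TONNE_OIL:.0f} GJ/t embodied energy)")

# Wheat: 10 t of grain needs 200 kg of fertiliser (itself embodying 130 kg of
# oil) plus another 200 kg of oil in other energy inputs.
oil_per_tonne_wheat = (0.130 + 0.200) / 10.0
print(f"wheat     : {oil_per_tonne_wheat * 1000:.0f} kg oil per tonne of grain")
```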

Some people have the conceit that we’ve moved beyond a dirty old economy of power stations and steel works to a new, weightless economy based on processing information. Nothing could be further from the truth; in addition to our continuing dependence on material things, with their substantial embodiment of energy, information and communications technology itself needs a surprisingly large energy input. The ICT industry in the UK is actually responsible for a comparable share of carbon dioxide generation to aviation. The energy consumption of that giant of the modern information economy, Google, is a closely guarded secret; what is clear, though, is that the choice of location of its data centres is driven by the need to be close to reliable, cheap power, like hydroelectric power plants or nuclear power stations, in much the same way that aluminium smelters are sited.

Perhaps the most complex and interesting relationship is that between energy use and measures of health and physical well-being, like infant mortality and life expectancy. It’s clear, both from the historical record and from comparing these measures with energy use across less developed countries today, that per capita energy use and life expectancy are strongly correlated at the lower end of the range. It seems that increasing per capita energy use up to 60 or 70 GJ per year brings substantial benefits, presumably by ensuring that people are reasonably well nourished, and by allowing basic public health measures like access to clean water and a working sewerage system. Further improvements result from increasing energy consumption above this, presumably by enabling increasingly comprehensive medical services, but beyond a per capita consumption of around 110 GJ a year there is very little correlation between energy use and life expectancy. The lesson of this is that, while material insufficiency is clearly bad for one’s health, excess can bring problems of its own.

This emphasis on our dependence on fossil fuel energy should make it clear that, whatever the prospects for exciting new developments in the future, there is a certain fragility to our situation. The large-scale use of fossil fuels has come at a price – in man-made climate change – whose full dimensions we don’t yet know, and we are once again seeing pressures on resources like food and fuel. Food shortages and bad harvests remind us that technology hasn’t allowed us to transcend nature – we’re still dependent on the rains arriving at the right time and in the right quantity. We’ve influenced the climate on which we depend, but in ways that are uncontrolled and unpredicted. The lessons of history teach us that a societal collapse is a real possibility, and one of the consequences of this would be an abrupt end to the hopes of further technological progress[iii].

We can hope that these emerging technologies themselves can help avert this kind of disastrous outcome. The only renewable energy source that realistically has the capacity to underpin a large-scale, industrial society is solar energy, but current technologies for harvesting this are too expensive and cannot be produced on anything like the scales needed to make a serious dent in the world’s energy needs. There is a real possibility that nanotechnology will change this situation, making possible the use of solar energy on very large scales. Other developments – for example, in batteries and fuel cells – would then allow us to store and distribute this energy, while we could anticipate a further continuation of the trends that allow us to do more with less, reducing the energy input required to achieve a given level of prosperity.

Computers will probably go on getting faster, with the current exponential growth of computing power (Moore’s law) continuing for perhaps ten more years. After that, we’re relying on new developments in nanotechnology to allow us to keep that trajectory going. Less obvious, but in some ways more interesting, will be the ways computing power becomes seamlessly integrated into the material fabric of life. One of the areas this will impact is medicine; developments in sensors should mean that we diagnose diseases earlier and can personalise treatments to the particularities of an individual’s biology. Therapies, too, will become more effective and less prone to side-effects, thanks to nanoscale delivery devices for targeting drugs and the development of engineered replacement tissues and organs.

So perhaps our optimistic goal for the next fifty years should be that these emerging technologies contribute to making a prosperous global society on a sustainable basis. A steady world population should universally enjoy long and pain-free lives at a decent standard of living, this being underpinned by sustainable technologies, in particular renewable energy from the sun, and supported by a ubiquitous (but largely invisible) infrastructure of ambient computing, distributed sensing, and responsive materials.

For some, this level of ambition for technology isn’t enough. Instead they seek transcendence through technology and, through human enhancement, our transfiguration to qualitatively different and superior types of beings. It’s the technological trends we’ve discussed already that are invoked to support this view, but with a particularly superlative vision of the potential of technology[iv]. For example, there’s an extrapolation from the existing developments of nanotechnology, via Drexler’s conception of atom-by-atom nanomanufacturing[v], to a world of superabundance, in which any material object is available at no cost. From modern medicine, and the future promise of nanomedicine, there’s the promise of superlongevity – the idea that a “cure” for the “disease” of ageing is imminent, and the serious suggestion that people alive today might live for a thousand years[vi]. From some combination of the development of ever-faster computers and the possibility of the augmentation of human mental capabilities by implants, comes the idea that we will shortly create a greater than human intelligence, either as a purely artificial intelligence in a computer, or through a radical enhancement of a human mind. This superintelligence is anticipated to be the greatest superlative technology of all, as by applying its own intelligence to itself it will be able rapidly and recursively to improve all these technologies, including its own intelligence. This will lead to a moment of ineffably rapid technological and societal change called, by its devotees, the Singularity[vii].

The technical bases for these superlative predictions are strongly contested by researchers in the relevant fields[viii]. This doesn’t seem to have a great deal of impact on the vehemence with which such views are held by those (largely online) communities of transhumanists and singularitarians for whom these shared beliefs define a shared identity. The essentially eschatological character of singularitarian beliefs is obvious – it’s this that is well captured in the dismissive epithet “the rapture of the nerds”. While some proponents of these views have an aggressively rational, atheist outlook, others are explicit in highlighting a spiritual dimension to their beliefs, in a cosmological outlook that seems to owe something, whether consciously or unconsciously, to the Catholic mystic Teilhard de Chardin[ix]. Belief in the singularity, then, as well as being a symptom of a particular moment of rapid technological change, should perhaps be placed in that tradition of millennial, utopian thinking that has been a recurring feature of Western thought for many centuries.

For me, the main sin of singularitarianism is one shared much more widely: the idea of technological determinism. This is the idea that technology has an autonomous, predictable momentum of its own, largely beyond social and political influence, and that societal and economic changes are governed by these technological developments. It’s the everyday observation of the rapidity of technological change that gives this view such force; what keeps new, faster computers appearing in the shops on schedule is Moore’s law. This is the observation, made in 1965 by Gordon Moore, a co-founder of the microprocessor company Intel, that computing power grows exponentially, with the number of transistors on a single chip roughly doubling every two years. To futurists like Kurzweil, Moore’s law is simply one example of a more general rule of exponential technological growth. But simply to give Moore’s observation the name “law” is to mistake its character in fundamental ways. It isn’t a law; it is a self-fulfilling prophecy, a way of coordinating and orchestrating the deliberate and planned action of the many independent actors in the semiconductor industry and in commercial and academic research and development, in the pursuit of a common goal of continuous incremental improvement in their products. Moore’s law is not a law describing the way technology develops as some kind of independent force; it is a tool for coordinating and planning human action.
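To see why the observation reads like a law of nature, here is a minimal sketch of the exponential it describes. The two-year doubling time is the figure quoted above; the 1971 starting point of roughly 2,300 transistors (the first commercial microprocessor) is a round illustrative baseline, not a fitted dataset.

```python
# Illustrative projection of transistor counts under a constant two-year
# doubling time. The 1971 baseline (~2,300 transistors) is a round number
# used purely for illustration.
BASE_YEAR, BASE_COUNT = 1971, 2_300
DOUBLING_PERIOD_YEARS = 2

def projected_transistors(year: int) -> float:
    """Transistor count implied by a constant two-year doubling time."""
    return BASE_COUNT * 2 ** ((year - BASE_YEAR) / DOUBLING_PERIOD_YEARS)

for year in (1971, 1985, 2000, 2008):
    print(f"{year}: ~{projected_transistors(year):,.0f} transistors per chip")
```

The point of the argument above, of course, is that nothing in nature enforces this curve; it is held on track only by the coordinated investment of the industry that set it as a target.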

We need to be very aware that technology need not advance at all; it depends on a set of stable societal and economic arrangements that aren’t by any means guaranteed. If there’s a collapse of society due to resource shortage or runaway climate change, that will bring an abrupt end to Moore’s law and to all kinds of other progress. But a more optimistic view is to assert that we aren’t slaves to technology as an external, autonomous force; instead, technology is a product of society, and our aspiration should be that it is directed by society to promote widely shared goals.

i For an overview, see “Soft Machines: nanotechnology and life”, Richard A.L. Jones, Oxford University Press (2004).

ii An excellent overview of the role of energy in modern society can be found in “Energy in Nature and Society”, Vaclav Smil, MIT Press, Cambridge MA, 2008, on which the subsequent discussion extensively draws.

iii This point is eloquently made by Jared Diamond in “Collapse: how societies choose to fail or succeed”, Viking (2005).

iv This characterisation of the “Superlative technology discourse” owes much to Dale Carrico.

v K.E. Drexler, “Engines of Creation: the coming era of nanotechnology” (Anchor, 1987) and K.E. Drexler, “Nanosystems: molecular machinery, manufacturing and computation” (Wiley, 1992).

vi Aubrey de Grey and Michael Rae, “Ending Aging: the rejuvenation breakthroughs that could reverse human aging in our lifetime” (St Martin’s Press, 2007)

vii Ray Kurzweil, “The Singularity is Near: when humans transcend biology” (Penguin, 2006)

viii See, for example, the essays in a special issue of IEEE Spectrum, “The Singularity: a special report”, June 2008, including my own piece “Rupturing the Nanotech Rapture”. For a critique of proposals for radical life extension, see “Science fact and the SENS agenda”, Warner et al., EMBO Reports 6 (11), 1006-1008 (2005) (subscription required).

ix For an example, consider this quotation from Ray Kurzweil’s “The Singularity is Near”: “Evolution moves towards greater complexity, greater elegance, greater knowledge, greater intelligence, greater beauty, greater creativity and greater levels of subtle attributes such as love. In every monotheistic tradition God is likewise described as all of these qualities, only without any limitation: infinite knowledge, infinite intelligence, infinite beauty, infinite creativity and infinite love, and so on. Of course, even the accelerating growth of evolution never achieves an infinite level, but as it explodes exponentially it certainly moves rapidly in that direction. So evolution moves inexorably toward this conception of God, although never quite reaching this ideal. We can regard, therefore, the freeing of our thinking from the severe limitations of its biological form to be an essentially spiritual undertaking”.

Can nanotechnology really be green?

This essay was first published in Nature Nanotechnology, February 2007, Volume 2, No 2, pp 71-72 (doi:10.1038/nnano.2007.12).

In discussions of the possibility of a public backlash against nanotechnology, the comparison that is always made is with the European reaction against agricultural biotechnology. “Will nanotechnology turn out to be the new GM?” is an omnipresent question; for nanotechnology proponents a nagging worry, and for opponents a source of hope. Yet, up to now, there’s one important difference: the major campaigning groups, most notably Greenpeace, have so far resisted taking an unequivocal stance against nanotechnology. The reason for this isn’t a sudden outbreak of harmony between environmental groups and the multinationals that are most likely to bring nanotechnology to market in a big way. Instead, it’s a measure of the force of the argument that nanotechnology may lead to new opportunities for sustainable development. Even the most vocal outright opponent of nanotechnology – the small Canada-based group ETC – has recently conceded that nanotechnology might have a role to play in the developing world. Is nanotechnology really going to be the first new technology that big business and the environmental movement can unite behind, or is this the most successful example yet of a greenwash from global techno-science?

The selling points of nanotechnology for the sustainability movement are easily stated. In the lead are the prospects of nano-enabled clean energy and clean water, with some vaguer and more general notions of nanotechnology facilitating cleaner and more sustainable modes of production sitting in the background. On the first issue, many people have argued – perhaps most persuasively the late Richard Smalley – that nanotechnology of a fairly incremental kind has the potential to make a disruptive change to our energy economy. For example, we’re currently seeing rapid growth in solar energy. But the contribution that conventional solar cells can make to our total energy economy is currently limited, not by the total energy supplied by the sun, but by our ability to scale up production of photovoltaics to the massive areas that would be needed to make a real impact. A number of new and promising nano-enabled photovoltaic technologies are positioning themselves to contribute, not by competing with existing solar cells on conversion efficiency, but by their potential for being cheap to produce in very large areas. Meanwhile, as the availability of clean, affordable water becomes more of a problem in many parts of the world, nanotechnology also holds promise. Better control of the nanoscale structure of separation membranes, and surface treatments to prevent fouling, all have the potential to increase the effectiveness and lower the price of water purification.

How can we distinguish between the promises that come so easily in grant applications and press releases, and the true potential that these technologies might have for sustainable development? We need to consider both technical possibilities and the socio-economic realities.

Academic scientists often underestimate the formidable technical obstacles standing in the way of scaling up promising laboratory innovations. In the case of alternative, nano-enabled photovoltaics, difficulties with lifetime and stability are still problematic, while many processing issues remain to be ironed out before large-scale production can take place. But one reason for optimism is simply the wide variety of possible approaches being tried: polymer-based photovoltaics, in which optimal control of self-assembled nanoscale structure could lead to efficient solar cells being printed over very large areas; photochemical cells using dye-sensitised nanoparticles (Grätzel cells) and other hybrid designs involving semiconductor nanoparticles; and III-V semiconductor heterojunction cells in combination with large-area solar concentrators. Surely, one might hope, at least one of these approaches will bear fruit.

The socio-economic realities may prove to be more intractable, at least in some cases. The think-tank Demos, together with the charity Practical Action, recently organised a public engagement event about the possible applications of nanotechnology to clean water in Zimbabwe, which emphasised how remote some of these discussions are from the real problems of poor communities. In the words of Demos’s Jack Stilgoe, “The gulf between Western technoscience and applications for poor communities is far wider than I’d imagined. Ask people what they want from new technologies and they talk about the rope and washer pump, which would stop things (like snakes) falling into their wells.” It’s clear that for nanotechnology to have a real impact in the developing world, a good understanding of local contexts will be vital.

Perhaps, in addition to these promises of direct solutions to sustainability problems, there are some deeper currents here. Given the emphasis that many writers have placed on the importance of learning from nature in nanotechnology, it’s perhaps not surprising that the idea of nanotechnology as something derived from natural sources, and thus intrinsically benign, is cropping up as an important framing device. Referring to the water-repellency of nanostructured surfaces as the “lotus leaf effect” is perhaps the most effective example, both lending itself to comforting imagery and connecting with the long-established symbolism of the lotus leaf as intrinsically, and naturally, spotless and stain-free.

Whatever these deeper cultural contexts, nanotechnology certainly finds itself in the frontline of another important shift, this time in science funding policies. In many countries, the UK included, we’re seeing a shift in emphasis in the aims of publicly funded science, away from narrowly discipline-based objectives, and towards goals defined through societal needs, and in particular towards mitigating global problems such as climate change. As an intrinsically multidisciplinary, and naturally goal-oriented, enterprise, nanotechnology fits very naturally into this new framework and applications of nanotechnology addressing sustainability issues will certainly see increasing emphasis.

Sceptics may see this as just another example of a misguided search for technical fixes for problems that are ultimately socio-political in origin. It may be true that in the past such an approach has simply led to further problems, but nonetheless I strongly believe that we currently have no choice but to continue to look to technological progress to help ameliorate our most pressing difficulties. The “deep green” school may argue that our problems would be cured by abandoning our technological civilisation and returning to simpler ways, but this view utterly fails to recognise the degree to which supporting the earth’s current and projected population levels depends on advanced technology, and in particular on intensive energy use. We are existentially dependent on technology, but we know that the technology we have is not sustainable. Green nanotechnology, then, is not just a convenient slogan but an inescapable necessity.

What the public think about nanomedicine

A major new initiative on the use of nanotechnology in medicine and healthcare has recently been launched by the UK government’s research councils; around £30 million (US$60 million) is expected to be available for large scale “Grand Challenge” style projects. The closing date for the first call has just gone by, so we will see in a few months how the research community has responded to this opportunity. What’s worth commenting on now, though, is the extent to which public engagement has been integrated into the process by which the call has been defined.

As the number of potential applications of nanotechnology to healthcare is very large, and the funds available relatively limited, there was a need to focus the call on just one or two areas; in the end the call is for applications of nanotechnology in healthcare diagnostics and the targeted delivery of therapeutic agents. As part of the program of consultations with researchers, clinicians and industry people that informed the decision to focus the call in this way, a formal public engagement exercise was commissioned to get an understanding of the hopes and fears the public have about the potential use of nanotechnology in medicine and healthcare. The full report on this public dialogue has just been published by EPSRC, and this is well worth reading.

I’ll be writing in more detail later both about the specific findings of the dialogue, and about the way the results of this public dialogue were incorporated in the decision-making process. Here, I’ll just draw out three points from the report:

  • As has been found by other public engagement exercises, there is a great deal of public enthusiasm for the potential uses of nanotechnology in healthcare, and a sense that this is an application that needs to be prioritised over some others.
  • People value potential technologies that empower them to have more control over their own health and their own lives, while potential technologies that reduce their sense of control are viewed with more caution.
  • People have concerns about who benefits from new technologies – while people generally see nothing intrinsically wrong with business driving nanotechnology, there’s a concern that public investment in science should deliver appropriate public value in return.
The mis-measure of uncertainty

    A couple of pieces in the Financial Times today and yesterday offer some food for thought about the problems of commercialising scientific research. Yesterday’s piece – Drug research needs serendipity (free registration may be required) – concentrates on the pharmaceutical sector, but its observations are more widely applicable. Musing on the current problems of big pharma, with their dwindling pipelines of new drugs, David Shaywitz and Nassim Taleb (author of The Black Swan), identify the problem as a failure to deal with uncertainty; “academic researchers underestimated the fragility of their scientific knowledge while pharmaceuticals executives overestimated their ability to domesticate scientific research.”

They identify two types of uncertainty. First, there’s the negative uncertainty of all the things that can go wrong as one tries to move from medical research to treatments. Underlying this is the simple fact that we know much less about human biology, in all its complexity, than one might think from all the positive headlines and press releases. It’s in response to this negative uncertainty that managers have attempted to impose more structure and focus to make the outcome of research more predictable. But why is this generally in vain? “Answer: spreadsheets are easy; science is hard.” According to Shaywitz and Taleb, this approach isn’t just doomed to fail on its own terms, it’s positively counterproductive. This is because it doesn’t leave any room for another type of uncertainty: the positive uncertainty of unexpected discoveries and happy accidents.

    Their solution is to embrace the trend we’re already seeing, for big Pharma to outsource more and more of its functions, lowering the barriers to entry and leaving room for “a lean, agile organisation able to capture, consider and rapidly develop the best scientific ideas in a wide range of disease areas and aggressively guide these towards the clinic.”

    But how are things for the small and agile companies that are going to be driving innovation in this new environment? Not great, says Jonathan Guthrie in today’s FT, but nonetheless “There is hope yet for science park toilers”. The article considers, from a UK perspective, the problems small technology companies are having raising money from venture capitalists. It starts from the position that the problem isn’t shortage of money but shortage of good ideas; perhaps not the end of the age of innovation, but a temporary lull after the excitement of personal computers, the internet and mobile phones. And, for the part of the problem that lies with venture capitalists, misreading this cycle has contributed to their difficulties. In the wake of the technology bubble, venture capital returns aren’t a good advertisement for would-be investors at the moment – “funds set up after 1996 have typically lost 1.4 per cent a year over five years and 1.8 per cent over 10 years, says the British Private Equity and Venture Capital Association.” All is not lost, Guthrie thinks – as the memory of the dotbomb debacles fade the spectacular returns enjoyed by the most successful technology start-ups will attract money back into the sector. Where will the advances take place? Not in nanotechnology, at least in the form of the nanomaterials sector as it has been understood up to now: “materials scientists have engineered a UK nanotechnology sector so tiny it is virtually invisible.” Instead Guthrie points to renewable energy and power saving systems.

Nanotubes for flexible electronics

The glamorous applications for carbon nanotubes in electronics focus on the use of individual nanotubes for nanoscale devices – for example, the single-nanotube integrated circuit reported by IBM a couple of years ago. But more immediate applications may come from using thin layers of nanotubes on flexible substrates as conductors or semiconductors – these could be used for thin-film transistor arrays in applications like electronic paper. A couple of recent papers report progress in this direction.

From the group of John Rogers, at the University of Illinois, comes a Nature paper reporting nanotube-based integrated circuits on flexible substrates. The paper (Editor’s summary in Nature; subscription required for the full article), whose first author is Qing Cao, describes the manufacture of an array of 100 transistors on a 50 µm plastic substrate. The transistors aren’t that small – their dimensions are in the micron range – so this is the sort of electronics that would be used to drive a display rather than serve as a CPU or memory. But the performance of the transistors looks like it could be competitive with rival technologies for flexible displays, such as semiconducting polymers.

The difficulty with using carbon nanotubes for electronics in this way is that the usual syntheses produce a mixture of different types of nanotube, some conducting and some semiconducting. Since about a third of the nanotubes have metallic conductivity, a simple mat of nanotubes won’t behave like a semiconductor, because the metallic nanotubes will provide a short-circuit. Rogers’s group get round this problem in an effective, if not terribly elegant, way. They cut the film with grooves, and for an appropriate combination of groove width and nanotube length they reduce the probability of finding a continuous metallic path between the electrodes to a very low level.
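The groove argument is essentially one about percolation: metallic tubes only short out the device if they form an unbroken chain from source to drain within a single strip. Below is a minimal Monte Carlo sketch of that idea, treating nanotubes as random sticks and asking how often the metallic third of them span the channel on their own. All the dimensions, densities and trial counts here are illustrative choices of mine, not parameters taken from the Nature paper.

```python
import math
import random

# Monte Carlo sketch: nanotubes as random sticks in one strip of the film.
# Roughly a third are metallic; the device is shorted only if the metallic
# sticks alone connect the source (x <= 0) to the drain (x >= channel).
# All parameter values are illustrative placeholders.

def segments_cross(a, b, c, d):
    """True if segment a-b crosses segment c-d (general position)."""
    def orient(p, q, r):
        return (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])
    return (orient(a, b, c) * orient(a, b, d) < 0 and
            orient(c, d, a) * orient(c, d, b) < 0)

def metallic_short_probability(channel=10.0, strip_width=2.0, tube_len=2.0,
                               density=6.0, metallic_fraction=1/3, trials=50):
    """Fraction of random networks in which metallic tubes span the channel."""
    shorts = 0
    for _ in range(trials):
        n_tubes = int(density * channel * strip_width)
        metallic = []
        for _ in range(n_tubes):
            if random.random() >= metallic_fraction:
                continue                      # semiconducting tube: no short risk
            x = random.uniform(0, channel)
            y = random.uniform(0, strip_width)
            theta = random.uniform(0, math.pi)
            dx = 0.5 * tube_len * math.cos(theta)
            dy = 0.5 * tube_len * math.sin(theta)
            metallic.append(((x - dx, y - dy), (x + dx, y + dy)))

        # Union-find over metallic tubes plus two virtual nodes:
        # index n = source electrode, index n + 1 = drain electrode.
        n = len(metallic)
        parent = list(range(n + 2))

        def find(i):
            while parent[i] != i:
                parent[i] = parent[parent[i]]
                i = parent[i]
            return i

        def union(i, j):
            parent[find(i)] = find(j)

        for i, (p, q) in enumerate(metallic):
            if min(p[0], q[0]) <= 0.0:
                union(i, n)                   # touches the source
            if max(p[0], q[0]) >= channel:
                union(i, n + 1)               # touches the drain
            for j in range(i):
                if segments_cross(p, q, *metallic[j]):
                    union(i, j)

        shorts += (find(n) == find(n + 1))
    return shorts / trials

# Narrower strips leave fewer parallel routes for a purely metallic path:
for width in (8.0, 4.0, 2.0, 1.0):
    p = metallic_short_probability(strip_width=width)
    print(f"strip width {width:>4}: P(metallic short-circuit) ~ {p:.2f}")
```

With the stick density chosen here the full network (metallic plus semiconducting tubes) is comfortably above the percolation threshold while the metallic subset is only marginally so, which is what makes the strip width matter; real devices involve further complications (contact resistance, tube bundling) that this toy model ignores.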

    Another paper, published earlier this month in Science, offers what is potentially a much neater solution to this problem. The paper, “Self-Sorted, Aligned Nanotube Networks for Thin-Film Transistors” (abstract, subscription required for full article), has as its first author Melburne LeMieux, a postdoc in the group of Zhenan Bao at Stanford. They make their nanotube networks by spin-coating from solution. Spin-coating is a simple and very widely used technique for making thin films, which involves depositing a solution on a substrate spinning at a few thousand revolutions per minute. Most of the solution is flung off by the spinning disk, leaving a very thin uniform film, from which the solvent evaporates to leave the network of nanotubes. This simple procedure produces two very useful side-effects. Firstly, the flow in the solvent film has the effect of aligning the nanotubes, with obvious potential benefits for their electronic properties. Even more strikingly, the spin-coating process seems to provide an easy solution to the problem of sorting the metallic and semiconducting nanotubes. It seems that one can prepare the surface so that it is selectively sticky for one or other types of nanotubes; a surface presenting a monolayer of phenyl groups preferentially attracts the metallic nanotubes, while an amine coated surface yields nanotube networks with very good semiconducting behaviour, from which high performance transistors can be made.

“Plastics are precious – they’re buried sunshine”

[Image: a disappearing dress from the Wonderland project, at the London College of Fashion. Photo by Alex McGuire.]

    I’m fascinated by the subtle science of polymers, and it’s a cause of regret to me that the most common manifestations of synthetic polymers are in the world of cheap, disposable plastics. The cheapness and ubiquity of plastics, and the problems caused when they’re carelessly thrown away, blind us to the utility and versatility of these marvellously mutable materials. But there’s something temporary about their cheapness; it’s a consequence of the fact that they’re made from oil, and as oil becomes scarcer and more expensive we’ll need to appreciate the intrinsic value of these materials much more.

    These thoughts are highlighted by a remarkable project put together by the artist and fashion designer Helen Storey and my Sheffield friend and colleague, chemist Tony Ryan. At the centre of the project is an exhibition of exquisitely beautiful dresses, designed by Helen and made from fabrics handmade by textile designer Trish Belford. The essence of fashion is transience, and these dresses literally don’t last long; the textiles they are made from are water soluble and are dissolved during the exhibition in tanks of water. The process of dissolution has a beauty of its own, captured in this film by Pinny Grylls.

    Another film, by the fashion photographer Nick Wright, reminds us of the basic principles underlying the thermodynamics of polymer dissolution. The exhibition will be moving to the Ormeau Baths Gallery in Belfast in October, and you will be able to read more about it in that month’s edition of Vogue.

The biofuels bust

    The news that the UK is to slow the adoption of biofuels, and that the European Parliament has called for a reduction in the EU’s targets for biofuel adoption, is a good point to mark one of the most rapid turnarounds we’ve seen in science policy. Only two years ago, biofuels were seen by many as a benign way for developed countries to increase their energy security and reduce their greenhouse gas emissions without threatening their citizens’ driving habits. Now, we’re seeing the biofuel boom being blamed for soaring food prices, and the environmental benefits are increasingly in doubt. It’s rare to see the rationale for a proposed technological fix for a major societal problem fall apart quite so quickly, and there must surely be some lessons here for other areas of science and policy.

The UK’s volte-face was prompted by a government-commissioned report led by the environmental scientist Ed Gallagher. The Gallagher Review is quite an impressive document, given the rapidity with which it was put together. This issue is in many ways typical of a kind of problem we are increasingly seeing, in which difficult and uncertain science comes together with equally uncertain economics, through the unpredictability of human and institutional responses, in a rapidly changing environment.

    The first issue is whether, looking at the whole process of growing crops for biofuels, including the energy inputs for agriculture and for the conversion process, one actually ends up with a lower output of greenhouse gases than one would using petrol or diesel. Even this most basic question is more difficult than it might seem, as illustrated by the way the report firmly but politely disagrees with a Nobel Laureate in atmospheric chemistry, Paul Crutzen, who last year argued that, if emissions of nitrogen oxides during agriculture were properly accounted for, biofuels actually produce more greenhouse gases than the fossil fuels they replace. Nonetheless, the report finds a wide range of achievable greenhouse gas savings; corn bioethanol, for example, at its best produces a saving of about 35%, but at its worst it actually produces a net increase in greenhouse gases of nearly 30%. Other types of biofuel are better; both Brazilian ethanol from sugar cane and biodiesel from palm oil can achieve savings of between 60% and 70%. But, and this is a big but, these figures assume these crops are grown on existing agricultural land. If new land needs to be taken into cultivation, there’s typically a large release of carbon. Taking into account the carbon cost of changing land use means that there’s a considerable pay-back time before any greenhouse gas savings arise at all. In the worst cases, this can amount to hundreds of years.
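The payback argument at the end of that paragraph is just a ratio: a one-off release of carbon when the land is converted, set against an annual saving from displacing fossil fuel. Here is a minimal sketch, with deliberately invented round numbers rather than figures from the Gallagher Review:

```python
# Minimal sketch of the land-use-change "carbon payback" argument.
# All numbers below are purely illustrative placeholders, not figures
# from the Gallagher Review.

def payback_years(land_use_co2_per_ha: float,
                  annual_saving_co2_per_ha: float) -> float:
    """Years of biofuel production needed before the one-off carbon release
    from converting the land is paid back by the annual GHG savings."""
    return land_use_co2_per_ha / annual_saving_co2_per_ha

# e.g. clearing carbon-rich land (hypothetical 300 t CO2/ha released)
# for a crop saving a hypothetical 3 t CO2/ha each year:
print(payback_years(300.0, 3.0), "years")   # -> 100.0 years
```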

    This raises the linked questions – how much land is available for growing biofuels, and how much can we expect that the competition from biofuel uses of food crops will lead to further increases in food prices? There seems to be a huge amount of uncertainty surrounding these issues. Certainly the situation will be eased if new technologies arise for the production of cellulosic ethanol, but these aren’t necessarily a panacea, particularly if they involve changes in land-use. The degree to which recent food price increases can be directly attributed to the growth in biofuels is controversial, but no-one can doubt that, in a world with historically low stocks of staple foodstuffs, any increase in demand will result in higher prices than would otherwise have occurred. The price of food is already indirectly coupled to the price of oil because modern intensive agriculture demands high energy inputs, but the extensive use of biofuels makes that coupling direct.

It’s easy to be wise in hindsight, but one might wonder how much of this could have been predicted. I wrote about biofuels here two years ago, and re-reading that entry – Driving on sunshine – it seems that some of the drawbacks were easier to anticipate than others. What’s sobering about the whole episode, though, is that it shows how complicated things can get when science, politics and economics become closely coupled in situations needing urgent action in the face of major uncertainties.

Nanotechnology and the singularitarians

    A belief in the power and imminence of the Drexlerian vision of radical nanotechnology is part of the belief-package of adherents of the view that an acceleration of technology, linked particularly with the development of a recursively self-improving, super-human, artificial intelligence, will shortly lead to a moment of ineffably rapid technological and societal change – the Singularity. So it’s not surprising that my article in the IEEE Spectrum special issue on the Singularity – “Rupturing the Nanotech Rapture” – has generated some reaction amongst the singularitarians. The longest response has come from Michael Anissimov, whose blog Accelerating Future offers an articulate statement of the singularitarian case. Here are my thoughts on some of the issues he raises.

One feature of his response is his dissociation from some of the stronger claims of his fellow singularitarians. For example, he responds to the suggestion that MNT (molecular nanotechnology) will allow any material or artefact – “a Stradivarius or a steak” – to be made in abundance, by suggesting that no-one thinks this anymore, and that this is a “red herring” that has arisen from “journalists inaccurately summarizing the ideas of scientists”. On the contrary, this claim has been at the heart of the rhetoric surrounding MNT from the earliest writings of Drexler, who wrote in “Engines of Creation”: “Because assemblers will let us place atoms in almost any reasonable arrangement, they will let us build almost anything that the laws of nature allow to exist.” Elsewhere, Anissimov distances himself from Kurzweil, whom he includes in a group of futurists who “justifiably attract ridicule”.

This raises the question of who speaks for the singularitarians. As an author writing here for a publication with a fairly large circulation, it seems to me obvious that the authors whose arguments I need to address first are those whose books themselves command the largest circulation, because that’s where the readers are mostly going to have got their ideas about the singularity from. So, the first thing I did when I got this assignment was to read Kurzweil’s bestseller “The Singularity is Near”; after all, it’s Kurzweil who is able to command articles in major newspapers and is about to release a film. More specific to MNT, Drexler’s “Engines of Creation” obviously has to be a major point of reference, together with more recent books like Josh Hall’s “Nanofuture”. For the technical details of MNT, Drexler’s “Nanosystems” is the key text. It may well be that Michael and his associates have more sophisticated ideas about MNT and the singularity, but while these ideas remain confined to discussions on singularitarian blogs and email lists, they aren’t realistically going to attract the attention that people like Kurzweil do, and it’s appropriate that the publicly prominent faces of singularitarianism should attract the efforts of those arguing against the notion.

A second theme of Michael’s response is the contention that the research that will lead to MNT is happening anyway. It’s certainly true that there are many exciting developments going on in nanotechnology laboratories around the world. What’s at issue, though, is what direction these developments are taking us in. Given the tendencies of singularitarians towards technological determinism, it’s natural for them to assume that all these exciting developments are milestones on the way to a nano-assembler, and that progress towards the singularity can be measured by the weight of press releases flowing from the press offices of universities and research labs around the world. The crucial point, though, is that there’s no force driving technology towards MNT. Yes, technology is moving forward, but the road it’s taking is not the one anticipated by MNT proponents. It’s not clear to me that Michael has understood my central argument. It’s true that biology offers an existence proof for advanced nanotechnological devices of one kind or another – as Michael says, “Obviously, a huge number of biological entities, from molecule-sized to cell-sized, regularly traverse the body and perform a wide variety of essential functions, so we know such a thing is possible in principle.” But this doesn’t allow us to conclude that nanorobots built on the mechanical engineering principles of MNT will be possible, because the biological machines work on entirely different principles. The difficulties I outline for MNT that arise as a result of the different physics of the nanoscale are not difficulties for biological nanotechnology, because its very different operating principles exploit this different physics rather than trying to engineer round it.

What’s measured by all these press releases, then, is progress towards a whole variety of technological goals, many of them very different from the goals envisaged for MNT, and whose feasibility we simply cannot yet judge. I’ve given my arguments as to why MNT actually looks less likely now than it did ten years ago, and Michael isn’t able to counter these arguments other than by saying that “Of course, all of these challenges were taken into account in the first serious study of the feasibility of nanoscale robotic systems, titled Nanosystems…. We’ll need to build nanomachines using nanomechanical principles, not naive reapplications of macroscale engineering principles.” But Nanosystems is all about applying macroscale engineering principles – right at the outset it states that “molecular manufacturing applies the principles of mechanical engineering to chemistry.” Instead of work directed towards MNT, we’re now seeing other goals being pursued – goals like quantum computing, DNA-based nanomachines, a path from plastic electronics to ultracheap computing and molecular electronics, breakthroughs in nanomedicine, and optical metamaterials. Far from being incremental updates, many of these research directions hadn’t even been conceived when Drexler wrote “Engines of Creation”, and, unlike the mechanical engineering paradigm, these all really do exploit the different and unfamiliar physics of the nanoscale. All of these are being actively researched now, but not all of them will pan out, and other, entirely unforeseen, technologies will be discovered and will get people excited anew.

Ultimately, Michael’s arguments boil down to a concatenation of ever-hopeful “ifs” and “ands”. In answer to my suggestion that MNT-like processes might only be got to work at low temperatures and in ultra-high vacuum, Michael says “If the machines used to maintain high vacuum and extreme refrigeration could be manufactured for the cost of raw materials, and energy can be obtained in great abundance from nano-manufactured, durable, self-cleaning solar panels, I am skeptical that this would be as substantial of a barrier as it is to similar high-requirement processes today.” I think there’s a misunderstanding of economics here. Things can only be manufactured for the cost of their raw materials if the capital cost of the manufacturing machinery is very small. But this capital cost itself mostly reflects the amortisation of the research and development costs of developing the necessary plant and equipment. What we’ve learnt from the semiconductor industry is that, as technology progresses, these capital costs become larger and larger, and more and more dominant in the economics of the industry. It’s difficult to see what can reverse this trend without invoking a deus ex machina; ultimately, it’s just such an invocation that arguments for the singularity seem in the end to reduce to.
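The point about capital costs can be made with a one-line sum: the cost of a manufactured unit only approaches the cost of its raw materials if the amortised capital (plant plus R&D) per unit is negligible. A minimal sketch, with purely hypothetical numbers:

```python
# Minimal sketch of the unit-cost argument: products only approach their
# raw-materials cost if the capital (R&D plus plant) per unit is negligible.
# All figures are hypothetical placeholders.

def unit_cost(materials: float, capital: float, units_over_lifetime: float) -> float:
    """Cost per unit = raw materials + amortised capital per unit."""
    return materials + capital / units_over_lifetime

# A hypothetical plant costing 3 billion, amortised over 100 million units,
# adds 30 per unit however cheap the raw materials are:
print(unit_cost(materials=2.0, capital=3e9, units_over_lifetime=1e8))  # -> 32.0
```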

Discussion meeting on soft nanotechnology

    A forthcoming conference in London will be discussing the “soft” approach to nanotechnology. The meeting – Faraday Discussion 143 – Soft Nanotechnology – is organised by the UK’s Royal Society of Chemistry, and follows a rather unusual format. Selected participants in the meeting submit a full research paper, which is peer reviewed and circulated, before the meeting, to all the attendees. The meeting itself concentrates on a detailed discussion of the papers, rather than a simple presentation of the results.

    The organisers describe the scope of the meeting in these terms: “Soft nanotechnology aims to build on our knowledge of biological systems, which are the ultimate example of ‘soft machines’, by:

  • Understanding, predicting and utilising the rules of self-assembly from the molecular to the micron-scale
  • Learning how to deal with the supply of energy into dynamically self-assembling systems
  • Implementing self-assembly and ‘wet chemistry’ into electronic devices, actuators, fluidics, and other ‘soft machines’.”

An impressive list of invited international speakers includes Takuzo Aida, from the University of Tokyo, Chris Dobson, from the University of Cambridge, Ben Feringa, from the University of Groningen, Olli Ikkala, from Helsinki University of Technology, Chengde Mao, from Purdue University, Stefan Matile, from the University of Geneva, and Klaus J Schulten, from the University of Illinois. The conference will be wrapped up by Harvard’s George Whitesides, and I’m hugely honoured to have been asked to give the opening talk.

The meeting is not until this time next year, in London, but if you want to present a paper you need to get an abstract in by 11 July. Faraday Discussions in the past have featured lively discussions, to say the least; it’s a format that’s tailor-made for allowing controversies to be aired and strong positions to be taken.

Right and wrong lessons from biology

    The most compelling argument for the possibility of a radical nanotechnology, with functional devices and machines operating at the nano-level, is the existence of cell biology. But one can take different lessons from this. Drexler argued that we should expect to be able to do much better than cell biology if we applied the lessons of macroscale engineering, using mechanical engineering paradigms and hard materials. My argument, though, is that this fails to take into account the different physics of the nanoscale, and that evolution has optimised biology’s “soft machines” for this environment. This essay, first published in the journal Nature Nanotechnology (subscription required, vol 1, pp 85 – 86 (2006)), reflects on this issue.

    Nanotechnology hasn’t yet acquired a strong disciplinary identity, and as a result it is claimed by many classical disciplines. “Nanotechnology is just chemistry”, one sometimes hears, while physicists like to think that only they have the tools to understand the strange and counterintuitive behaviour of matter at the nanoscale. But biologists have perhaps the most reason to be smug – in the words of MIT’s Tom Knight “biology is the nanotechnology that works”.

    The sophisticated and intricate machinery of cell biology certainly gives us a compelling existence proof that complex machines on the nanoscale are possible. But, having accepted that biology proves that one form of nanotechnology is possible, what further lessons should be learned? There are two extreme positions, and presumably a truth that lies somewhere in between.

    The engineers’ view, if I can put it that way, is that nature shows what can be achieved with random design methods and a palette of unsuitable materials allocated by the accidents of history. If you take this point of view, it seems obvious that it should be fairly straightforward to make nanoscale machines whose performance vastly exceeds that of biology, by making rational choices of materials, rather than making do with what the accidents of evolution have provided, and by using the design principles we’ve learnt in macroscopic engineering.

The opposite view stresses that evolution is an extremely effective way of searching parameter space, and that, in consequence, we should assume that biological design solutions are likely to be close to optimal for the environment for which they’ve evolved. Where these design solutions seem odd from our point of view, their unfamiliarity is to be ascribed to the different ways in which physics works at the nanoscale. At its most extreme, this view regards biological nanotechnology not just as the existence proof for nanotechnology, but as an upper limit on its capabilities.

So what, then, are the right lessons for nanotechnology to learn from biology? The design principles that biology uses most effectively are those that exploit the special features of physics at the nanoscale in an environment of liquid water. These include some highly effective uses of self-assembly, using the hydrophobic interaction, and the principle of macromolecular shape change that underlies allostery, used both for mechanical transduction and for sensing and computing. Self-assembly, of course, is well known both in the laboratory and in industrial processes like soap-making, but synthetic examples remain very crude compared to the intricacy of protein folding. For industrial applications, biological nanotechnology offers inspiration in the area of green chemistry – promising environmentally benign processing routes to make complex, nanostructured materials based on water as a solvent and using low operating temperatures. The use of templating strategies and precursor routes widens the scope of these approaches to include final products which are insoluble in water.

    But even the most enthusiastic proponents of the biological approach to nanotechnology must concede that there are branches of nanoscale engineering that biology does not seem to exploit very fully. There are few examples of the use of coherent electron transport over distances greater than a few nanometers. Some transmembrane processes, particularly those involved in photosynthesis, do exploit electron transfer down finely engineered cascades of molecules. But until the recent discovery of electron conduction in bacterial pili, longer ranged electrical effects in biology seem to be dominated by ionic rather than electronic transport. Speculations that coherent quantum states in microtubules underlie consciousness are not mainstream, to say the least, so a physicist who insists on the central role of quantum effects in nanotechnology finds biology somewhat barren.

    It’s clear that there is more than one way to apply the lessons of biology to nanotechnology. The most direct route is that of bionanotechnology, in which the components of living systems are removed from their biological context and put to work in hybrid environments. Many examples of this approach (which NYU’s Ned Seeman has memorably called biokleptic nanotechnology) are now in the literature, using biological nanodevices such as molecular motors or photosynthetic complexes. In truth, the newly emerging field of synthetic biology, in which functionality is added back in a modular way to a stripped down host organism, is applying this philosophy at the level of systems rather than devices.

    This kind of synthetic biology is informed by what’s essentially an engineering sensibility – it is sufficient to get the system to work in a predictable and controllable way. Some physicists, though, might want to go further, taking inspiration from Richard Feynman’s slogan “What I cannot create I do not understand”. Will it be possible to have a biomimetic nanotechnology, in which the design philosophy of cell biology is applied to the creation of entirely synthetic components? Such an approach will be formidably difficult, requiring substantial advances both in the synthetic chemistry needed to create macromolecules with precisely specified architectures, and in the theory that will allow one to design molecular architectures that will yield the structure and function one needs. But it may have advantages, particularly in broadening the range of environmental conditions in which nanosystems can operate.

    The right lessons for nanotechnology to learn from biology might not always be the obvious ones, but there’s no doubting their importance. Can the traffic ever go the other way – will there be lessons for biology to learn from nanotechnology? It seems inevitable that the enterprise of doing engineering with nanoscale biological components must lead to a deeper understanding of molecular biophysics. I wonder, though, whether there might not be some deeper consequences. What separates the two extreme positions on the relevance of biology to nanotechnology is a difference in opinion on the issue of the degree to which our biology is optimal, and whether there could be other, fundamentally different kinds of biology, possibly optimised for a different set of environmental parameters. It may well be a vain expectation to imagine that a wholly synthetic nanotechnology could ever match the performance of cell biology, but even considering the possibility represents a valuable broadening of our horizons.