Feynman, Drexler, and the National Nanotechnology Initiative

It’s fifty years since Richard Feynman delivered his famous lecture “There’s Plenty of Room at the Bottom”, and the anniversary has prompted a number of articles reflecting on its significance. The lecture has achieved mythic importance in discussions of nanotechnology; to many, it is nothing less than the foundation of the field. This myth has been critically examined by Chris Toumey (see this earlier post), who finds that the significance of the lecture was attached retrospectively, rather than being apparent as serious efforts in nanotechnology got underway.

There’s another narrative, though, that is popular with followers of Eric Drexler. According to this story, Feynman laid out in his lecture a coherent vision of a radical new technology; Drexler popularised this vision and gave it the name “nanotechnology”. Inspired by Drexler’s vision, the US government launched the National Nanotechnology Initiative, which was then hijacked by chemists and materials scientists whose work had nothing to do with the radical vision. In this way, funding obtained on the basis of the expansive promises of “molecular manufacturing” – the Feynman vision as popularised by Drexler – has been used to research useful but essentially mundane products like stain-resistant trousers and germicidal washing machines. To add insult to injury, the materials scientists who had so successfully hijacked the funds then went on to belittle and ridicule Drexler and his theories. A recent article in the Wall Street Journal by Adam Keiper – “Feynman and the Futurists” – is written from this standpoint, and Drexler himself has expressed satisfaction with it on his own blog. I think this account is misleading at almost every point; the reality is both more complex and more interesting.

To begin with, Feynman’s lecture didn’t present a coherent vision at all; instead, it was an imaginative but disparate set of ideas linked only by the theme of control on a small scale. I discussed this in my article in the December issue of Nature Nanotechnology – Feynman’s unfinished business (subscription required) – and for more details see this series of earlier posts on Soft Machines (Re-reading Feynman Part 1, Part 2, Part 3).

Of the ideas dealt with in “Plenty of Room”, some have already come to pass and have indeed proved economically and societally transformative. These include the idea of writing on very small scales, which underlies modern IT, and the idea of making layered materials with layer thicknesses controlled on the atomic scale, which was realised in techniques like molecular beam epitaxy and chemical vapour deposition, whose results you see every time you use a white light-emitting diode or a solid-state laser of the kind your DVD player contains. I think there were two ideas in the lecture that did contribute to the vision popularised by Drexler – the idea of “a billion tiny factories, models of each other, which are manufacturing simultaneously, drilling holes, stamping parts, and so on”, and, linked to this, the idea of doing chemical synthesis by physical processes. The latter idea has been realised at the proof-of-principle level, in the form of chemical reactions carried out with a scanning tunnelling microscope; there’s been a lot of work in this direction since Don Eigler’s demonstration of STM control of single atoms, no doubt some of it funded by the much-maligned NNI, but I think it’s fair to say that so far this approach has turned out to be more technically difficult and less useful (on foreseeable timescales) than people anticipated.

Strangely, the second part of the fable – Drexler popularising the Feynman vision – actually, I think, underestimates the originality of Drexler’s own contribution. The arguments that Drexler made in support of his radical vision of nanotechnology drew extensively on biology, an area that Feynman had touched on only very superficially. What’s striking, if one re-reads Drexler’s original PNAS article and indeed Engines of Creation, is how biologically inspired the vision is – the models he looks to are the protein and nucleic acid based machines of cell biology, like the ribosome. In Drexler’s writing now (see, for example, this recent entry on his blog), this biological inspiration is very much to the fore; he’s looking to the DNA-based nanotechnology of Ned Seeman, Paul Rothemund and others as the exemplar of the way forward to fully functional, atomic-scale machines and devices. This work builds on the self-assembly paradigm that has been such a big part of academic work in nanotechnology around the world.

There’s an important missing link between the biological inspiration of ribosomes and molecular motors and the vision of “tiny factories” – the scaled-down mechanical engineering familiar from the simulations of atom-based cogs and gears produced by Drexler and his followers. What wasn’t fully recognised until after Drexler’s original work was that the fundamental operating principles of biological machines are quite different from the rules that govern macroscopic machines, simply because the way physics works in water at the nanoscale is quite different from the way it works in our familiar macroworld. I’ve argued at length on this blog, in my book “Soft Machines”, and elsewhere (see, for example, “Right and Wrong Lessons from Biology”) that the lessons one should draw from biological machines are therefore rather different from the ones Drexler originally drew.

There is one final point worth making. From the perspective of Washington-based writers like Keiper, one can understand the focus on the interactions between academic scientists and business people in the USA, Drexler and his followers, and the machinations of the US Congress. But, from the point of view of the wider world, this is a rather parochial perspective. I’d estimate that somewhere between a quarter and a third of the world’s nanotechnology research is being done in the USA. Perhaps for the first time in recent history, a major new technology is largely being developed outside the USA – in Europe to some extent, but with an unprecedented leading role being taken in places like China, Korea and Japan. In these places the “nanotech schism” that seems so important in the USA simply isn’t relevant; people are just pressing on to where the technology leads them.

Happy New Year

Here are a couple of nice nano-images for the New Year. The first depicts a nanoscale metal-oxide donut, whose synthesis is reported in a paper (abstract; subscription required for full article) in this week’s Science. The paper, whose first author is Haralampos Miras, comes from the group of Lee Cronin at the University of Glasgow. The object is made by templated self-assembly of molybdenum oxide units; the interesting feature is that the cluster which templates the ring – the “hole” around which the donut grows – appears as a precursor during the process, and is then ejected once the ring is complete.

A molybdenum oxide nanowheel templated on a transient cluster. From Miras et al, Science 327 p 72 (2010).

The second image depicts the stages in reconstructing a high-resolution electron micrograph of a self-assembled tetrahedron made from DNA. In an earlier blog post I described how Russell Goodman, a grad student in the group of Andrew Turberfield at Oxford, was able to make rigid tetrahedra of DNA less than 10 nm in size. Now, in collaboration with Takayuki Kato and Keiichi Namba’s group at Osaka University, they have been able to obtain remarkable electron micrographs of these structures. The work was published last summer in an article in Nano Letters (subscription required). The figure shows, from left to right, the predicted structure, a raw micrograph obtained by cryo-TEM (transmission electron microscopy on frozen, vitrified samples), a micrograph processed to enhance its contrast, and two three-dimensional image reconstructions obtained from a large number of such images. The sharpest image, on the right, is at 12 Å resolution; it is believed that this is the smallest object, natural or artificial, to have been imaged by cryo-TEM at this resolution, which is good enough to distinguish between the major and minor grooves of the DNA helices that form the struts of the tetrahedron.

Cryo-TEM reconstruction of DNA tetrahedron, from Kato et al., Nano Letters, 9, p2747 (2009)

A happy New Year to all readers.

Why and how should governments fund basic research?

Yesterday I took part in a Policy Lab at the Royal Society, on the theme The public nature of science – Why and how should governments fund basic research? I responded to a presentation by Professor Helga Nowotny, the Vice-President of the European Research Council, saying something like the following:

My apologies to Helga, but my comments are going to be rather UK-centric, though I hope they illustrate some of the wider points she’s made.

This is a febrile time in British science policy.

We have an obsession amongst both the research councils and the HE funding bodies with the idea of impact – how can we define and measure the impact that research has on wider society? While these bodies are at pains to define impact widely, involving better policy outcomes, improvements in quality of life and broader culture, there is much suspicion that all that really counts is economic impact.

We have had a number of years in which the case that science produces direct and measurable effects on economic growth and jobs has been made very strongly, and has been rewarded by sustained increases in public science spending. There is a sense that these arguments are no longer as convincing as they were a few years ago, at least for the people in Treasury who are going to be making the crucial spending decisions at a time of fiscal stringency. As Helga argues, the relationship between economic growth in the short term, at a country level, and spending on scientific R&D is shaky, at best.

And in response to these developments, we have a deep unhappiness amongst the scientific community at what’s perceived as a shift from pure, curiosity-driven, blue-skies research towards applied research and development.

What should our response to this be?

One response is to up the pressure on scientists to deliver economic benefits. This, to some extent, is what’s happening in the UK. One problem with this approach is that it probably overstates the importance of basic science in the innovation system. Scientists aren’t the only innovators – innovation takes place in industry and in the public sector, and it can involve customers and users too. Maybe our innovation system does need fixing, but it’s not obvious that what needs most attention is what scientists do. But certainly, we should look at ways to open up the laboratory, as Helga puts it, and at the broader institutional and educational preconditions that allow science-based innovation to flourish.

Another response is to argue that the products of free scientific inquiry have intrinsic societal worth, and should be supported “as an ornament to civilisation”. Science is like the opera: something we support because we are civilised. One trouble with this argument is that it involves a certain degree of personal taste – I dislike opera greatly, and who’s to say that others won’t feel the same about astronomy? A more serious problem is that we don’t actually support the arts that much, in financial terms, in comparison to the science budget. On this argument we’d be employing a lot fewer scientists than we do now (and probably paying them less).

A third response is to emphasise science’s role in solving the problems of society, while stressing the long-term nature of this project. The idea is to direct science towards broad societal goals. Of course, as soon as one says this one has to ask “whose goals?” – which is why public engagement, and indeed politics in the most general sense, becomes important. In Helga’s words, we need to “recontextualise” science for current times. It’s important to stress that, in this kind of “Grand Challenge” driven science, one should specify a problem, not a solution. It is important, as well, to think clearly about different timescales – to put in place possibilities for the long term as well as responding to the short-term imperative.

For example, the problem of moving to low-carbon energy sources is at the top of everyone’s list of grand challenges. We’re seeing some consensus (albeit not a very enthusiastic one) around the immediate need to build new nuclear power stations, to implement carbon capture and storage and to expand wind power, and research is certainly needed to support this – for example, to reduce the high cost and energy overheads of carbon capture and storage. But it’s important to recognise that many of these will be, at best, stop-gap, interim solutions, and to make sure we’re putting in place the research to enable solutions that will be sustainable for the long term. We don’t know, at the moment, what those solutions will be. Perhaps fusion will finally deliver, maybe a new generation of cellulosic biofuels will have a role, perhaps (as my personal view favours) large-scale cheap photovoltaics will be the answer. It’s important to keep the possibilities open.

So this kind of societally directed, “Grand Challenge” inspired research isn’t necessarily short-term, applied research, and although the practicalities of production and scale-up need to be integrated at an early stage, it’s not necessarily driven by industry. It needs to preserve a diversity of approaches, to be robust in the face of our inevitable uncertainty.

One of Helga’s contributions to the understanding of modern techno-science has been the idea of “mode 2 knowledge production”, which she defined in an influential book with Michael Gibbons and others. In this new kind of science, problems are defined from the outset in the context of potential application; they are solved by bringing together transient, transdisciplinary networks; and their outcomes are judged by different criteria of quality than pure disciplinary research, including judgements of their likely economic viability or social acceptability.

This idea has been controversial. I think many people accept that it represents the direction of travel of recent science; what’s at issue is whether it is a good thing. Helga and her colleagues have been at pains to stress that their work is purely descriptive, and implies no judgement of the desirability of these changes. But many of my colleagues in academic science think they are very definitely undesirable (see my earlier post Mode 2 and its discontents). One interesting point, though, is that in arguing against more directed ways of managing science, many people point to the very valuable discoveries that have been made serendipitously in the course of undirected, investigator-driven research. Examples are manifold – from lasers to giant magnetoresistance, to restrict the list to physics. It’s worth noting, though, that while this is often made as an argument against so-called “instrumental” science, it actually appeals to instrumental values. If you make this argument, you are already conceding that the purpose of science is to yield progress towards economic or political goals; you are simply arguing about the best way to organise science to achieve those goals.

Not that we should think any of this is new. In the manifestos for modern science written by Francis Bacon – manifestos that were so important in defining the mission of this society at its foundation three hundred and fifty years ago – the goal of science is defined as “an improvement in man’s estate and an enlargement of his power over nature”. This was a very clear contextualisation of science for the seventeenth century; perhaps our recontextualisation of science for the twenty-first century won’t prove so very different.

Mode 2 and its discontents

This essay was first published in the August 2008 issue of Nature Nanotechnology – Nature Nanotechnology 3 448 (2008) (subscription required for full online text).

A watertight definition of nanotechnology remains elusive, at least if we approach the problem on a scientific or technical basis. Perhaps this means we are looking in the wrong place, and we should instead seek a definition that’s essentially sociological. Here’s one candidate: “Nanotechnology is the application of mode 2 values to the physical sciences”. The jargon here is a reference to the influential 1994 book, “The New Production of Knowledge”, by Michael Gibbons and coworkers. Their argument was that the traditional way in which knowledge is generated – in a disciplinary framework, with a research agenda set from within that discipline – was being increasingly supplemented by a new type of knowledge production which they called “mode 2”. In mode 2 knowledge production, they argue, problems are defined from the outset in the context of potential application; they are solved by bringing together transient, transdisciplinary networks; and their outcomes are judged by different criteria of quality than pure disciplinary research, including judgements of their likely economic viability or social acceptability. It’s easy to argue that the way nanotechnology research is developing in countries across the world, in contrast to the traditional disciplines from which it has emerged, like solid-state physics and physical chemistry, very much fits this pattern.

Gibbons and his coauthors always argued that what they were doing was simply observing, in a neutral way, how things were moving. But it’s a short journey from “is” to “ought”, and many observers have seized on these observations as a prescription for how science should change. Governments seeking to extract demonstrable and rapid economic returns from their taxpayers’ investments in publicly funded science, and companies and entrepreneurs seeking more opportunities to make money from scientific discoveries, look to this model as a blueprint for reorganising science the better to deliver what they want. On the other hand, those arguing for a different relationship between science and society, with more democratic accountability and greater social reflexivity on the part of scientists, see these trends as an opportunity to erase the distinction between “pure” and “applied” science and to press all scientists to take much more explicit account of the social context in which they operate.

It’s not surprising that some scientists, for whom the traditional values of science as a source of disinterested and objective knowledge are precious, regard these arguments as assaults by the barbarians at the gates of science. Philip Moriarty made the opposing case very eloquently in an earlier article in Nature Nanotechnology (Reclaiming academia from post-academia; abstract, subscription required for full article), arguing for the importance of “non-instrumental science” in the “post-academic world”. Here he follows John Ziman in contrasting instrumental science, directed to economic or political goals, with non-instrumental science, which, it is argued, has wider benefits to society in creating critical scenarios, promoting rational attitudes and developing a cadre of independent and enlightened experts.

What is striking, though, is that many of the key arguments made against instrumental science actually appeal to instrumental values. Thus, it is possible to make impressive lists of the discoveries made by undirected, investigator-driven science that have led to major social and economic impacts – from the laser to the phenomenon of giant magnetoresistance that’s been so important in developing hard disk technology. This argument, then, is not actually an argument against the ends of instrumental science – it implicitly accepts that the development of economically or socially important products is an important outcome of science. Instead, it’s an argument about the best means by which instrumental science should be organised. The argument is not that science shouldn’t seek to produce direct impacts on society; it is that this goal can sometimes be more effectively, albeit unpredictably, reached by supporting gifted scientists as they follow their own instincts, rather than by attempting to manage science from the top down. Likewise, arguments for the superiority of the “open-source” model of science, in which information is freely exchanged, over proprietary models of knowledge production, in which intellectual property is closely guarded and its originators receive the maximum direct financial reward, don’t fundamentally question the proposition that science should lead to societal benefits; they simply question whether the proprietary model is the best way of delivering them. This argument was perhaps most pointed at the time of the sequencing of the human genome, when a public-sector, open-source project was in direct competition with Craig Venter’s private-sector venture. Defenders of the public project, notably Sir John Sulston, were eloquent and idealistic in its defence, but what their argument rested on was the conviction that the societal benefits of this research would be maximised if its results remained in the public domain.

The key argument of the mode 2 proponents is that science needs to be recontextualised – placed into a closer relationship with society. But, how this is done is essentially a political question. For those who believe that market mechanisms offer the most reliable way by which societal needs are prioritised and met, the development of more effective paths from science to money-making products and services will be a necessary and sufficient condition for making science more closely aligned with society. But those who are less convinced by such market fundamentalism will seek other mechanisms, such as public engagement and direct government action, to create this alignment.

Perhaps the most telling criticism of the “mode 2” concept is the suggestion that, rather than being a recent development, science carried out in the context of application has actually been the rule for most of its history. Certainly, in the nineteenth century there was a close, two-way interplay between science and application, well seen, for example, in the relationship between the developing science of thermodynamics and the introduction of new and refined heat engines. In this view, the idea of science as autonomous and discipline-based is an anomaly of the peculiar conditions of the second half of the twentieth century, driven by the expansion of higher education, the technology race of the cold war and the reflected glory of science’s contribution to allied victory in the second world war.

At each point in history, then, the relationship between science and society ends up being renegotiated according to the perceived demands of the time. What is clear, though, is that right from the beginning of modern science there has been this tension about its purpose. After all, for one of the founders of the ideology of the modern scientific project, Francis Bacon, its aims were “an improvement in man’s estate and an enlargement of his power over nature”. These are goals that, I think, many nanotechnologists would still sign up to.

Five years on from the Royal Society report

Five years ago this summer, the UK’s Royal Society and the Royal Academy of Engineering issued a report on Nanoscience and nanotechnologies: opportunities and uncertainties. The report had been commissioned by the Government, and has been widely praised and widely cited. Five years on, it’s worth asking what difference it has made, and what is left to be done. The Responsible Nanoforum has gathered a fascinating collection of answers to these questions – A beacon or just a landmark?. Reactions come from scientists, people in industry and representatives of NGOs, and the collection is introduced by the Woodrow Wilson Center’s Andrew Maynard. His piece is also to be found on his blog. Here’s what I wrote.

The Royal Society/Royal Academy of Engineering report was important in a number of respects. It signalled a new openness from the science community, a new willingness by scientists and engineers to engage more widely with society. This was reflected in the composition of the group itself, with its inclusion of representatives from philosophy, social science and NGOs alongside distinguished scientists, as well as in its recommendations. It accepted the growing argument that the place for public engagement was “upstream” – ahead of any major impacts on society; in the words of the report, “a constructive and proactive debate about the future of nanotechnologies should be undertaken now – at a stage when it can inform key decisions about their development and before deeply entrenched or polarised positions appear.” Among its specific recommendations, its highlighting of potential issues of the toxicity and environmental impact of some classes of free, engineered nanoparticles has shaped much of the debate around nanotechnologies in the subsequent five years.

The impact in the UK has been substantial. We have seen a serious effort to engage the public in a genuinely open way; the recent EPSRC public dialogue on nanotechnology in healthcare demonstrates that these ideas have gone beyond public relations and begun to make a real difference to the direction of science funding. The research that the report called for on nanoparticle toxicity and eco-toxicity has been slower to get going. The opportunity to make a relatively small, focused investment in this area, as recommended by the report, was not taken, and this is to be regretted. Despite the slow start caused by this failure to act decisively, however, there is now in the UK a useful portfolio of research in toxicology and ecotoxicology.

One of the consequences of the late start in dealing with nanoparticle toxicity has been that this issue has dominated the public dialogue about nanotechnology, crowding out discussion of the potentially far-reaching consequences of these technologies in the longer term. We now need to learn the lessons of the Royal Society report and apply them to the new generations of nanotechnology now being developed in laboratories around the world, as well as to other, potentially transformative, technologies. Synthetic biology, which has strong overlaps with bionanotechnology, is now receiving similar scrutiny, and we can expect the debates surrounding subjects such as neurotechnology, pervasive information technology and geoengineering to grow in intensity. These discussions may be fraught and controversial, but the example of the Royal Society nanotechnology report, as a model for how to set the scene for a constructive debate about controversial science issues, will prove enduring.

Environmentally beneficial nanotechnology

Today I’ve been at Parliament in London, at an event sponsored by the Parliamentary Office of Science and Technology to launch the second phase of the Environmental Nanoscience Initiative. This is a joint UK-USA research programme led by the UK’s Natural Environment Research Council and the USA’s Environmental Protection Agency, and it is a very welcome initiative to give more focus to existing efforts to quantify possible detrimental effects of engineered nanoparticles on the environment. It’s important to put more effort into filling gaps in our knowledge about what happens to nanoparticles when they enter the environment and begin entering ecosystems, but it’s equally important not to forget that a major motivation for doing research in nanotechnology in the first place is its potential to ameliorate the very serious environmental problems the world now faces. So I was very pleased to be asked to give a talk at the event highlighting some of the positive ways that nanotechnology could benefit the environment. Here are some of the key points I tried to make.

Firstly, we should ask why we need new technology at all. There is a view (eloquently expressed, for example, in Bill McKibben’s book “Enough”) that our lives in the West are comfortable enough, that the technology we have now is enough to satisfy our needs without any more gadgets, and that the new technologies coming along – such as biotechnology, nanotechnology, robotics and neuro-technology – are so powerful and have such potential to cause harm that we should consciously relinquish them.

This argument is seductive to some, but it’s profoundly wrong. Currently the world supports more than six billion people; by the middle of the century that number may be starting to plateau, perhaps at between 8 and 10 billion. It is technology that allows the planet to support these numbers; to give just one instance, our food supplies depend on the Haber-Bosch process, which uses fossil fuel energy to fix nitrogen for use in artificial fertilisers. It’s estimated that without Haber-Bosch nitrogen, more than half the world’s population would starve, even if everyone adopted a minimal, vegetarian diet. So we are existentially dependent on technology – but the technology we depend on isn’t sustainable. To escape from this bind, we must develop new, and more sustainable, technologies.

Energy is at the heart of all these issues; the availability of cheap and concentrated energy is what underlies our prosperity, and as the world’s population grows and becomes more prosperous, demand for energy will grow. It is important to appreciate the scale of these needs, which are measured in tens of terawatts (remember that a terawatt is a thousand gigawatts, a gigawatt being the scale of a large coal-fired or nuclear power station). Currently the sources of this energy are dominated by fossil fuels, and it is the relentless growth of fossil fuel use since the late 18th century that has directly led to the rise in atmospheric carbon dioxide concentrations. This rise, together with that of other greenhouse gases, is leading to climate change, which in turn will lead directly to other problems, such as pressure on clean water supplies and growing insecurity of food supplies. It is this background that sets the agenda for the new technologies we need.

At the moment we don’t know for certain which of the many new technologies being developed to address these problems will work, either technically or socio-economically, so we need to pursue many different avenues, rather than imagining that some single solution will deliver us. Nanotechnology is at the heart of many of these potential solutions, in the broad areas of sustainable energy production, storage and distribution, in energy conservation, clean water, and environmental remediation. Let me focus on a couple of examples.

It’s well known that the energy we use is a small fraction of the total amount of energy arriving on the earth from the sun; in principle, solar energy could provide for all our energy needs. The problems are ones of cost and scale. Even in cloudy Britain, if we could cover every roof with solar cells we’d end up with a significant fraction of the 42.5 GW which represents the average rate of electricity use in the UK. We don’t do this, firstly because it would be too expensive, and secondly because the total world output of solar cells, at about 2 GW a year, is a couple of orders of magnitude too small. A variety of nanotechnology enabled potential solutions exist; for example plastic solar cells offer the possibility of using ultra-cheap, large area processing technologies to make solar cells on a very large scale. This is the area supported by EPSRC’s first nanotechnology grand challenge.
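To give a feel for the numbers behind this, here is a rough back-of-envelope sketch in Python. Only the 42.5 GW demand figure and the roughly 2 GW per year of world solar cell production come from the paragraph above; the number of dwellings, usable roof area, average insolation, module efficiency and capacity factor are illustrative assumptions of my own, not figures from the original piece.

```python
# Back-of-envelope estimate of UK rooftop solar potential.
# Only the 42.5 GW demand figure and the ~2 GW/year of world cell production
# come from the post above; every other number is an illustrative assumption.

dwellings = 25e6              # assumed number of UK dwellings with usable roofs
roof_area_m2 = 20.0           # assumed usable, reasonably oriented roof area per dwelling
mean_insolation_w_m2 = 110.0  # assumed UK annual-average irradiance on a roof (W/m^2)
module_efficiency = 0.15      # assumed solar cell conversion efficiency

average_output_gw = dwellings * roof_area_m2 * mean_insolation_w_m2 * module_efficiency / 1e9
uk_average_demand_gw = 42.5   # average rate of UK electricity use quoted in the post

print(f"Estimated rooftop output: {average_output_gw:.1f} GW, "
      f"i.e. {100 * average_output_gw / uk_average_demand_gw:.0f}% of {uk_average_demand_gw} GW")

# Scale of manufacturing: ~2 GW of (peak) cell capacity made worldwide per year,
# times an assumed UK capacity factor of ~0.1, is ~0.2 GW of average output -
# roughly two hundred times less than demand, which is one way of seeing where
# the "couple of orders of magnitude" in the post comes from.
```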

It’s important to recognise, though, that all these technologies still have major technical barriers to overcome; they are not going to come to market tomorrow. In the meantime, the continued large-scale use of fossil fuels looks inevitable, so the need to mitigate their impact by carbon capture and storage is becoming increasingly compelling to politicians and policy-makers. This technology is do-able today, but the costs are frightening: carbon capture and storage increases the price of coal-derived electricity by between 43% and 91%, and this is a pure overhead. Nanotechnologies, in the form of new membranes and sorbents, could reduce it. Another contribution would be finding a use for carbon dioxide, perhaps using photocatalytic reduction to convert water and CO2 into hydrocarbons and methanol, which could be used as transport fuels or chemical feedstocks. Carbon capture and utilisation is the general area of the third nanotechnology grand challenge, whose call for proposals is open now.

How can we make sure that our proposed innovations are responsible? The “precautionary principle” is often invoked in discussions of nanotechnology, but there are aspects of this notion which make me very uncomfortable. Certainly, we can all agree that we don’t want to implement “solutions” that bring their own, worse, problems. The potential impacts of any new technology are necessarily uncertain. But on the other hand, we know that there are near-certain negative consequences of failing to act. Not to actively seek new technologies is itself a decision that has impacts and consequences of its own, and in the situation we are now in those consequences are likely to be very bad ones.

Responsible innovation, then, means that we must speed up research to fill the knowledge gaps and reduce uncertainty; this is the role of the Environmental Nanoscience Initiative. We need to direct our search for new technologies towards areas of societal need, where public support is assured by a broad consensus about the desirability of the goals. This means increasing our efforts in the area of public engagement, and ensuring a direct connection between that public engagement and decisions about research priorities. We need to recognise that there will always be uncertainty about the actual impacts of new technologies, but we should do our best to choose directions that we won’t regret, even if things don’t turn out the way we first imagined.

To sum up, nanotechnologies, responsibly implemented, are part of the solution for our environmental difficulties.

Moving on

For the last two years, I’ve been the Senior Strategic Advisor for Nanotechnology for the UK’s Engineering and Physical Sciences Research Council (EPSRC), the government agency that has the lead responsibility for funding nanotechnology in the UK. I’m now stepping down from this position to take up a new, full-time role at the University of Sheffield; EPSRC is currently in the process of appointing my successor.

In these two years, a substantial part of a new strategy for nanotechnology in the UK has been implemented. We’ve seen new Grand Challenge programmes targeting nanotechnology for harvesting solar energy and nanotechnology for medicine and healthcare, and a third programme, looking for new ways of using nanotechnology to capture and utilise carbon dioxide, is shortly to be launched. At the more speculative end of nanotechnology, the “Software Control of Matter” programme received supplementary funding. Some excellent individual scientists have been supported through personal fellowships, and, looking to the future, the three new Doctoral Training Centres in nanotechnology will produce, over the next five years, up to 150 additional PhDs, over and above EPSRC’s existing substantial support for graduate students. After a slow response to the 2004 Royal Society report on nanotechnology, I think we now find ourselves in a somewhat more defensible position with respect to funding of nanotoxicology and ecotoxicology studies, with some useful projects in these areas being funded by the Medical Research Council and the Natural Environment Research Council respectively, and a joint programme with the USA’s Environmental Protection Agency about to be launched. With the public engagement exercise that was run in conjunction with the Grand Challenge on nanotechnology in medicine and healthcare, I think EPSRC has gone substantially further than any other funding agency in opening up decision-making about nanotechnology funding. I’ve found this experience fascinating and rewarding; my colleagues in the EPSRC nanotechnology team, led by John Wand, have been a pleasure to work with. I’ve also had a huge amount of encouragement and support from many scientists across the UK academic community.

In the process, I’ve learned a great deal; nanotechnology, of course, takes in physics, chemistry and biology, as well as elements of engineering and medicine. I’ve also come into contact with philosophers and sociologists, as well as artists and designers, from all of whom I’ve gained new insights. This education will stand me in good stead in my new role at Sheffield – as Pro-Vice-Chancellor for Research and Innovation I’ll be responsible for the health of research right across the University.

How to engineer a system that fights back

Last week saw the release of a report on synthetic biology from the UK’s Royal Academy of Engineering. The headline call, as reflected in the coverage in the Financial Times, is for the government to develop a strategy for synthetic biology so that the country doesn’t “lose out in the next industrial revolution”. The report certainly plays up the likelihood of high-impact applications in the short term – within five to ten years, we’re told, we’ll see synbio-based biofuels, “artificial leaf” technology to fix atmospheric carbon dioxide, industrial-scale production of materials like spider silk, and, in medicine, the realisation of personalised drugs. An intimation that progress towards these goals may not be entirely smooth can be found in this news piece from a couple of months ago – A synthetic-biology reality check – which described the abrupt winding-up earlier this year of one of the most prominent synbio start-ups, Codon Devices, founded by some of the leading US players in the field.

There are a number of competing visions of what synthetic biology might be; this report concentrates on just one of them. This is the idea of identifying a set of modular components – biochemical analogues of simple electronic components – with the aim of creating a set of standard parts from which desired outcomes can be engineered. This way of thinking relies on a series of analogies and metaphors, relating the functions of cell biology to constructs of human-created engineering. Some of these analogies have a sound empirical (and mathematical) basis, like the biomolecular realisation of logic gates and positive and negative feedback.
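To make that mathematical basis a little more concrete, here is a minimal sketch of the canonical synthetic biology circuit, a genetic toggle switch built from two mutually repressing genes. This is not taken from the report; the Hill-function model, the parameter values and the simple Euler integration are generic textbook choices rather than anything specific to the systems the report discusses.

```python
# Minimal sketch of why gene circuits can behave like logic and memory elements:
# two mutually repressing genes (a genetic "toggle switch"), modelled with Hill
# functions. Parameter values and the Euler time step are arbitrary choices.

def toggle_switch(u0, v0, alpha=10.0, n=2.0, dt=0.01, steps=5000):
    """Integrate du/dt = alpha/(1 + v^n) - u, dv/dt = alpha/(1 + u^n) - v."""
    u, v = u0, v0
    for _ in range(steps):
        du = alpha / (1.0 + v**n) - u   # production of u, repressed by v, plus decay
        dv = alpha / (1.0 + u**n) - v   # production of v, repressed by u, plus decay
        u, v = u + dt * du, v + dt * dv
    return u, v

# Whichever protein starts ahead ends up dominant: the circuit settles into one
# of two stable states, a one-bit memory built from biochemical feedback.
print(toggle_switch(u0=1.0, v0=0.0))   # ends with u high, v low
print(toggle_switch(u0=0.0, v0=1.0))   # ends with v high, u low
```

The point of the sketch is simply that mutual repression plus nonlinearity gives two stable states – the biochemical analogue of a flip-flop – which is the kind of well-characterised behaviour the “standard parts” programme hopes to compose.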

There is one metaphor used a lot in the report which seems to me potentially problematic – the idea of a chassis. What’s meant by this is a cell – for example, a bacterium like E. coli – into which the artificial genetic components are introduced in order to produce the desired products. This conjures up an image like the box into which one slots the circuit boards to make a piece of electronic equipment – something that supplies power and interconnections, but which doesn’t have any real intrinsic functionality of its own. It seems to me difficult to argue that any organism is ever going to provide such a neutral, predictable substrate for human engineering – these are complex systems with their own agenda. To quote from the report on a Royal Society Discussion Meeting about synthetic biology, held last summer: “Perhaps one of the more significant challenges for synthetic biology is that living systems actively oppose engineering. They are robust and have evolved to be self-sustaining, responding to perturbations through adaptation, mutation, reproduction and self-repair. This presents a strong challenge to efforts to ‘redesign’ existing life.”

Are electric cars the solution?

We’re seeing enthusiasm everywhere for electric cars, with government subsidies being directed at both buyers and manufacturers. The attractions seem obvious – clean, emission-free transport, seemingly resolving effortlessly the conflict between people’s desire for personal mobility and our need to move to a lower-carbon energy economy. Widespread use of electric cars, though, simply moves the energy problem out of sight – from the petrol station and exhaust pipe to the power station. A remarkably clear opinion piece in today’s Financial Times, by Richard Pike of the UK’s Royal Society of Chemistry, poses the problem in numbers.

The first question to ask is how the energy efficiency of electric cars compares with that of cars powered by internal combustion engines. Electric motors are much more efficient than internal combustion engines, but a fair comparison has to take into account the losses incurred in generating and transmitting the electricity. Pike cites figures showing that the comparison is actually surprisingly close. Petrol engines, on average, have an overall efficiency of 32%, whereas the more efficient diesel engine converts 45% of the energy in the fuel into useful output. Conversion efficiencies in power stations, on the other hand, come in at a bit more than 40%; add to this the transmission loss in getting from the power station to the plug, and a further loss from the charging/discharging cycle in the batteries, and you end up with an overall efficiency of about 31%. So, on pure efficiency grounds, electric cars do worse than either petrol or diesel vehicles. One further factor needs to be taken into account, though – the amount of carbon dioxide emitted per joule of energy supplied by different fuels. Clearly, if all our electricity were generated by nuclear power or by solar photovoltaics, the advantages of electric cars would be compelling; if it all came from coal-fired power stations the situation would be substantially worse. With the current mix of energy sources in the UK, Pike estimates a small advantage for electric cars, with an overall potential reduction in emissions of about one seventh. I don’t know the corresponding figures for other countries; presumably, given France’s high proportion of nuclear power, the advantage of electric cars there would be much greater, while in the USA, given the importance of coal, things may be somewhat worse.
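Pike’s chain of losses is easy to reproduce. In the sketch below, only the power-station efficiency (“a bit more than 40%”) and the petrol and diesel figures come from the article as quoted above; the grid transmission and battery round-trip efficiencies are illustrative assumptions of my own, chosen as typical values that bring the product out near the 31% quoted.

```python
# Reproducing the chain of losses behind the ~31% figure for electric cars.
# The power-station efficiency and the petrol/diesel figures are those quoted
# from Pike's piece; the transmission and battery round-trip efficiencies are
# illustrative assumptions chosen as typical values.

power_station = 0.41       # fuel-to-electricity efficiency ("a bit more than 40%")
grid_transmission = 0.93   # assumed transmission/distribution efficiency
battery_round_trip = 0.81  # assumed charge/discharge efficiency of the battery

electric_overall = power_station * grid_transmission * battery_round_trip
petrol, diesel = 0.32, 0.45

print(f"Electric (fuel to wheels): {electric_overall:.0%}")  # ~31%
print(f"Petrol: {petrol:.0%}, Diesel: {diesel:.0%}")

# Efficiency is only half the comparison: each route must also be weighted by the
# CO2 emitted per joule of primary energy. With the UK's current generating mix,
# Pike's estimate is a net emissions saving of about one seventh for electric cars.
```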

Pike’s conclusion is that the emphasis on electric cars is misplaced, and that the subsidy money would be better spent on R&D into renewable energy and carbon capture. The counter-argument would be that a push for electric cars now won’t make a serious difference to patterns of energy use for ten or twenty years, given the inertia attached to the current installed base of conventional cars and the plant to manufacture them, but that it is necessary to begin the process of change. In the meantime, one should be pursuing low-carbon routes to electricity generation, whether nuclear, renewable, or coal with carbon capture. It would be comforting to think that this is what will happen, but we shall see.

Brain interfacing with Kurzweil

The ongoing discussion of Ray Kurzweil’s much-publicised plans for a Singularity University prompted me to take another look at his book “The Singularity is Near”. It also prompted me to look up the full context of the somewhat derogatory quote from Douglas Hofstadter that the Guardian used and that I reproduced in my earlier post. It can be found in this interview: “it’s a very bizarre mixture of ideas that are solid and good with ideas that are crazy. It’s as if you took a lot of very good food and some dog excrement and blended it all up so that you can’t possibly figure out what’s good or bad. It’s an intimate mixture of rubbish and good ideas, and it’s very hard to disentangle the two, because these are smart people; they’re not stupid.” Looking again at the book, it’s clear this is right on the mark. One difficulty is that Kurzweil makes many references to current developments in science and technology, and most readers are going to take it on trust that his account of these developments is accurate. All too often, though, what one finds is that there’s a huge gulf between the conclusions Kurzweil draws from these papers and what they actually say – it’s the process I described in my article The Economy of Promises taken to extremes, “a transformation of vague possible future impacts into near-certain outcomes”. Here’s a fairly randomly chosen, but important, example.

In this prediction, we’re in the year 2030 (p313 in my edition). “Nanobot technology will provide fully immersive, totally convincing virtual reality”. What is the basis for this prediction? “We already have the technology for electronic devices to communicate with neurons in both directions, yet requiring no direct physical contact with the neurons. For example, scientists at the Max Planck Institute have developed “neuron transistors” that can detect the firing of a nearby neuron, or alternatively can cause a nearby neuron to fire or suppress it from firing. This amounts to two-way communication between neurons and the electronic-based neuron transistors. As mentioned above, quantum dots have also shown the ability to provide non-invasive communication between neurons and electronics.” The statements are supported by footnotes, with impressive looking references to the scientific literature. The only problem is, that if one goes to the trouble of looking up the references, one finds that they don’t say what he says they do.

The reference to “scientists at the MPI” refers to Peter Fromherz, who has been extremely active in developing ways of interfacing nerve cells with electronic devices – field-effect transistors, to be precise. I discussed this research in an earlier post – Brain chips – and the paper cited by Kurzweil is Weis and Fromherz, Phys. Rev. E 55, 877 (1997) (abstract). Fromherz’s work does indeed demonstrate two-way communication between neurons and transistors. However, it emphatically does not do this in a way that needs no physical contact with the neurons – the neurons need to be in direct contact with the gate of the FET, and this is achieved by culturing neurons in situ. This restricts the method to specially grown, two-dimensional arrays of neurons, not real brains. The method hasn’t been demonstrated to work in vivo, and it’s actually rather difficult to see how this could be done. As Fromherz himself says, “Of course, visionary dreams of bioelectronic neurocomputers and microelectronic neuroprostheses are unavoidable and exciting. However, they should not obscure the numerous practical problems.”

What of the quantum dots, that “have also shown the ability to provide non-invasive communication between neurons and electronics”? The paper referred to here is Winter et al., Recognition Molecule Directed Interfacing Between Semiconductor Quantum Dots and Nerve Cells, Advanced Materials 13, 1673 (2001).