Computing with molecules

This is a pre-edited version of an essay that was first published in the April 2009 issue of Nature Nanotechnology – Nature Nanotechnology 4, 207 (2009) (subscription required for full online text).

The association of nanotechnology with electronics and computers is a long and deep one, so it’s not surprising that a central part of the vision of nanotechnology has been the idea of computers whose basic elements are individual molecules. The individual transistors of conventional integrated circuits are at the nanoscale already, of course, but they’re made top-down by carving them out from layer-cakes of semiconductors, metals and insulators – what if one could make the transistors by joining together individual molecules? This idea – of molecular electronics – is an old one, which actually predates the widespread use of the term nanotechnology. As described in an excellent history of the field by Hyungsub Choi and Cyrus Mody (The Long History of Molecular Electronics, PDF) its origin can be securely dated at least as early as 1973; since then it has had a colourful history of big promises, together with waves of enthusiasm and disillusionment.

Molecular electronics, though, is not the only way of using molecules to compute, as biology shows us. In an influential 1995 review, Protein molecules as computational elements in living cells (PDF), Dennis Bray pointed out that the fundamental purpose of many proteins in cells seems to be more to process information than to effect chemical transformations or make materials. Mechanisms such as allostery permit individual protein molecules to behave as logic gates; one or more regulatory molecules bind to the protein, and thereby turn on or off its ability to catalyse a reaction. If the product of that reaction itself regulates the activity of another protein, one can think of the result as an operation which converts an input signal conveyed by one molecule into an output conveyed by another; by linking many such reactions into a network, one builds a chemical “circuit” that can carry out computational tasks of greater or lesser complexity. The classical example of such a network is the one underlying the ability of bacteria to swim towards food or away from toxins. In bacterial chemotaxis, information from sensors about many different chemical species in the environment is integrated to produce the signals that control a bacterium’s motors, resulting in apparently purposeful behaviour.
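
The mapping from regulated proteins to logic gates can be made concrete with a deliberately crude sketch in Python. Everything below is invented for illustration – the “proteins”, the way they are wired together and the Boolean on/off values all stand in for what are really graded concentrations and reaction rates – but it shows the sense in which the output of one molecular “gate” can act as the input to the next.

```python
# Toy sketch only: protein names and wiring are invented, and Boolean on/off
# values stand in for what are really graded molecular concentrations.

def protein_A(activator_bound, inhibitor_bound):
    # An allosteric protein behaving as an AND-NOT gate: catalytically active
    # only if its activator is bound and its inhibitor is not.
    return activator_bound and not inhibitor_bound

def protein_B(signal_1, signal_2):
    # A second protein behaving as an OR gate on two regulatory inputs.
    return signal_1 or signal_2

def motor_control(food_detected, toxin_detected):
    # Chain the gates: the product of one reaction serves as the input to the
    # next, in the same spirit (though none of the detail) of chemotaxis.
    product_a = protein_A(food_detected, toxin_detected)
    product_b = protein_B(product_a, False)
    return "run" if product_b else "tumble"

print(motor_control(food_detected=True, toxin_detected=False))  # run
print(motor_control(food_detected=True, toxin_detected=True))   # tumble
```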

The broader notion that much cellular activity can be thought of in terms of the processing of information by the complex networks involved in gene regulation and cell signalling has had a far-reaching impact in biology. The unravelling of these networks is the major concern of systems biology, while synthetic biology seeks to re-engineer them to make desired products. The analogies between electronics and systems thinking and biological systems are made very explicit in much writing about synthetic biology, with its discussion of molecular network diagrams, engineered gene circuits and interchangeable modules.

And yet, this alternative view of molecular computing has yet to make much impact in nanotechnology. Molecular logic gates have been demonstrated in a number of organic compounds, for example by the Belfast-based chemist Prasanna de Silva; here ingenious molecular design can allow several input signals, represented by the presence or absence of different ions or other species, to be logically combined to produce outputs represented by optical fluorescence signals at different wavelengths. In one approach, a fluorescent group is attached by a spacer unit to one or more receptor groups; in the absence of a bound species at the receptors, electron transfer from the receptor group to the fluorophore suppresses its fluorescence. Other approaches employ molecular shuttles – rotaxanes – in which physically linked but mobile molecular components move to different positions in response to changes in their chemical environment. These molecular engineering approaches are leading to sensors of increasing sophistication. But because the output is in the form of fluorescence, rather than a molecule, it is not possible to link many such logic gates into a network.

At the moment, it seems the most likely avenue for developing complex networks for information processing based on synthetic components will use nucleic acids, particularly DNA. Like other branches of the field of DNA nanotechnology, progress here is being driven by the growing ease and cheapness with which it is possible to synthesise specified sequences of DNA, together with the relative tractability of design and modelling of molecular interactions based on the base pair interaction. One demonstration from Erik Winfree’s group at Caltech uses this base pair interaction to design logic gates based on DNA molecules. These accept inputs in the form of short RNA strands, and output DNA strands according to the logical operations OR, AND or NOT. The output strands can themselves be used as inputs for further logical operations, and it is this that would make it possible in principle to develop complex information processing networks.
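
It is worth spelling out why chemical outputs matter so much: because each gate releases an output strand, the output of one gate can be wired into the next, and larger networks can in principle be built up. The sketch below is purely illustrative – the strand names and the little network are invented, and a real strand-displacement circuit runs concurrently in a single well-mixed solution rather than in the sequential order written here – but it captures the composability that a fluorescence readout lacks.

```python
# Illustrative sketch of composable molecular logic. Strand names and wiring
# are invented; a real circuit evaluates concurrently in one well-mixed pot.

def or_gate(pool, in1, in2, out):
    if in1 in pool or in2 in pool:
        pool.add(out)

def and_gate(pool, in1, in2, out):
    if in1 in pool and in2 in pool:
        pool.add(out)

def not_gate(pool, inp, out):
    if inp not in pool:
        pool.add(out)

# Two short "RNA input" strands present in the initial mixture
pool = {"input-x", "input-y"}

or_gate(pool, "input-x", "input-y", out="strand-1")    # first layer of gates
not_gate(pool, "input-z", out="strand-2")
and_gate(pool, "strand-1", "strand-2", out="output")   # second layer reuses outputs

print("output" in pool)  # True: the cascade has computed (x OR y) AND (NOT z)
```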

What should we think about using molecular computing for? The molecular electronics approach has a very definite target: to complement or replace conventional CMOS-based electronics, to ensure the continuation of Moore’s law beyond the point when physical limitations prevent any further miniaturisation of silicon-based devices. The inclusion of molecular electronics in the latest International Technology Roadmap for Semiconductors indicates the seriousness of this challenge, and molecular electronics and other related approaches, such as graphene-based electronics, will undoubtedly continue to be enthusiastically pursued. But these are probably not appropriate goals for molecular computing with chemical inputs and outputs. Instead, the uses of these technologies are likely to be driven by their most compelling unique selling point – the ability to interface directly with the biochemical processes of the cell. It’s been suggested that such molecular logic could be used to control the actions of a sophisticated drug delivery device, for example. An even more powerful possibility is suggested by another paper (abstract, subscription required for full paper) from Christina Smolke (now at Stanford). In this work an RNA construct controls the in vivo expression of a particular gene in response to this kind of molecular logic. This suggests the creation of what could be called molecular cyborgs – the result of a direct merging of synthetic molecular logic with the cell’s own control systems.

Society for the study of nanoscience and emerging technologies

Last week I spent a couple of days in Darmstadt, at the second meeting of the Society for the Study of Nanoscience and Emerging Technologies (S.NET). This is a relatively informal group of scholars in the field of Science and Technology Studies from Europe, the USA and some other countries like Brazil and India, coming together from disciplines like philosophy, political science, law, innovation studies and sociology.

Arie Rip (president of the society, and to many the doyen of European science and technology studies) kicked things off with the assertion that nanotechnology is, above all, a socio-political project, and the warning that this object of study was in the process of disappearing (a theme that recurred throughout the conference). Not to be worried by this prospect, Arie observed that their society could keep its acronym and rename itself the Society for the Study of Newly Emerging Technologies.

The first plenary lecture was from the French philosopher Bernard Stiegler, on Knowledge, Industry and Distrust at the Time of Hyperminiaturisation. I have to say I found this hard going; the presentation was dense with technical terms and delivered by reading a prepared text. But I’m wiser about it now than I was, thanks to a very clear and patient explanation over dinner that evening from Colin Milburn, who filled us in on the necessary background about Derrida’s interpretation of Plato’s pharmakon and Simondon’s notion of disindividuation.

One highlight for me was a talk by Michael Bennett about changes in the intellectual property regime in the USA during the 1980’s and 1990’s. He made a really convincing case that the growth of nanotechnology went in parallel with a series of legal and administrative changes that amounted to a substantial intensification of the intellectual property regime in the USA. While some people think that developments in law struggle to keep up with science and technology, he argued instead that law bookends the development of technoscience, both shaping the emergence of the science and dominating the way it is applied. This growing influence, though, doesn’t help innovation. Recent trends, such as the tendency of research universities to patent early with very wide claims, and to seek exclusive licenses, aren’t helpful; we’re seeing the creation of “patent thickets”, such as the one that surrounds carbon nanotubes, which substantially add to the cost and increase uncertainty for those trying to commercialise technologies in this area. And there is evidence of an “anti-commons” effect, where other scientists are inhibited from working on systems when patents have been issued.

A round-table discussion on the influence of Feynman’s lecture “Plenty of Room at the Bottom” on the emergence of nanotechnology as a field produced some surprises too. I’m already familiar with Chris Toumey’s careful demonstration that Plenty of Room’s status as the foundation of nanotechnology was largely granted retrospectively (see, for example, his article Apostolic Succession, PDF); Cyrus Mody’s account of the influence it had on the then emerging field of microelectronics adds some shade to this picture. Colin Milburn made some comments that put Feynman’s lecture into the cultural context of its time, particularly in the debt it owed to science fiction stories like Robert Heinlein’s “Waldo”. And, to my great surprise, he reminded us just how weird the milieu of post-war Pasadena was: the very odd figure of Jack Parsons helping to create the Jet Propulsion Laboratory while at the same time conducting a programme of magic inspired by Aleister Crowley and involving a young L. Ron Hubbard. At this point I felt I’d stumbled out of an interesting discussion of a by-way of the history of science into the plot of an unfinished Thomas Pynchon novel.

The philosopher Andrew Light talked about how deep disagreements and culture wars arise, and the distinction between intrinsic and extrinsic objections to new technologies. This was an interesting analysis, though I didn’t entirely agree with his prescriptions, and a number of other participants showed some unease at the idea that the role of philosophers is to create a positive environment for innovation. My own talk was a bit of a retrospective, with the title “What has nanotechnology taught us about contemporary technoscience?” The organisers will be trying to persuade me to write this up for the proceedings volume, so I’ll say no more about this for the moment.

On pure science, applied science, and technology

It’s conventional wisdom that science is very different from technology, and that it makes sense to distinguish between pure science and applied science. Largely as a result of thinking about nanotechnology (as I discussed a few years ago here and here), I’m no longer so confident that there’s such a clean break between science and technology, or, for that matter, between pure and applied science.

Historians of science tell us that the origin of the distinction goes back to the ancient Greeks, who distinguished between episteme, which is probably best translated as natural philosophy, and techne, translated as craft. Our word technology derives from techne, but careful scholars remind us that technology actually refers to writing about craft, rather than doing the craft itself. They would prefer to call the actual business of making machines and gadgets technique (in the same way as the Germans call it Technik), rather than technology. Of course, for a long time nobody wrote about technique at all, so there was in this literal sense no technology. Craft skills were regarded as secrets, to be handed down in person from master to apprentice, and their practitioners were from a lower social class than the literate philosophers considering more weighty questions about the nature of reality.

The sixteenth century saw some light being thrown on the mysteries of technique with books (often beautifully illustrated) being published about topics like machines and metal mining. But one could argue that the biggest change came with the development of what was then called experimental philosophy, which we see now as the beginnings of modern science. The experimental philosophers certainly had to engage with craftsmen and instrument makers to do their experiments, but what was perhaps more important was the need to commit the experimental details to writing, so that their counterparts and correspondents elsewhere in the country or elsewhere in Europe could reliably replicate the experiments. Complex pieces of scientific apparatus, like Robert Boyle’s air pump, were certainly among the most advanced (and expensive) pieces of technology of the day. And, conversely, it’s no accident that James Watt, who more than anyone else made the industrial revolution possible with his improved steam engine, learned his engineering as an instrument maker at the University of Glasgow.

But surely there’s a difference between making a piece of experimental apparatus to help unravel the ultimate nature of reality, and making an engine to pump a mine out? In this view, the aim of science is to understand the fundamental nature of reality, while technology seeks merely to alter the world in some way, with its success being judged simply by whether it does its intended job. In actuality, the aspect of science as natural philosophy, with its claims to deep understanding of reality, has always coexisted with a much more instrumental type of science whose success is judged by the power over nature it gives us (Peter Dear’s book The Intelligibility of Nature is a fascinating reflection on the history of this dual character of science). Even the keenest defenders of science’s claim to make reliable truth-claims about the ultimate nature of reality often resort to entirely instrumental arguments – “if you’re so sceptical about science”, they’ll ask a relativist or social constructionist, “why do you fly in airplanes or use antibiotics?”

It’s certainly true that different branches of science are, to different degrees, applicable to practical problems. But which science is an applied science and which is a pure science depends as much on what problems society, at a particular time and in a particular place, needs solving, as on the character of the science itself. In the sixteenth and seventeenth centuries astronomy was a strategic subject of huge importance to the growing naval powers of the time, and was one of the first recipients of large scale state funding. The late nineteenth and early twentieth centuries were the heyday of chemistry, with new discoveries in explosives, dyes and fertilizers making fortunes and transforming the world only a few years after they were made in the laboratory. A contrarian might even be tempted to say “a pure science is an applied science that has outlived its usefulness”.

Another way of seeing the problems of a supposed divide between pure science, applied science and technology is to ask what it is that scientists actually do in their working lives. A scientist building a detector for CERN or writing an image analysis program for some radio astronomy data may be doing the purest of pure science in terms of their goals – understanding particle physics or the distant universe – but what they’re actually doing day to day will look very similar indeed to their applied scientist counterparts designing medical imaging hardware or software for interpreting CCTV footage for the police. Of course, this is the origin of the argument that we should support pure science for the spin-offs it produces (such as the World Wide Web, as the particle physicists continually remind us). A counter-argument would say, why not simply get these scientists to work on medical imaging (say) in the first place, rather than trying to look for practical applications for the technologies they develop in support of their “pure” science? Possible answers to this might point to the fact that the brightest people are motivated to solve deep problems in a way that might not apply to more immediately practical issues, or that our economic system doesn’t provide reliable returns for the most advanced technology developed on a speculative basis.

If it was ever possible to think that pure science could exist as a separate province from the grubby world of application, like Hesse’s “The Glass Bead Game”, that illusion was shattered in the second world war. The purest of physicists delivered radar and the fission bomb, and in the cold war that we emerged into it seemed that the final destiny of the world was going to be decided by the atomic physicists. In the west, the implications of this for science policy were set out by Vannevar Bush. Bush, an engineer and perhaps the pre-eminent science administrator of the war, set out the framework for government funding of science in the USA in his report “Science: the endless frontier”.

Bush’s report emphasised, not “pure” research, but “basic” research. The distinction between basic research and applied research was not to be understood in terms of whether it was useful or not, but in terms of the motivations of the people doing it. “Basic research is performed without thought of practical ends” – but those practical ends do, nonetheless, follow (albeit unpredictably), and it’s the job of applied research to fill in the gaps. It had in the past been possible for a country to make technological progress without generating its own basic science (as the USA did in the 19th century) but, Bush asserted, the modern situation was different, and “A nation which depends upon others for its new basic scientific knowledge will be slow in its industrial progress and weak in its competitive position in world trade”.

Bush thus left us with three ideas that form the core of the postwar consensus on science policy. The first was that basic research should be carried out in isolation from thoughts of potential use – that it should result from “the free play of free intellects, working on subjects of their own choice, in the manner dictated by their curiosity for exploration of the unknown”. The second was that, even though the scientists who produced this basic knowledge weren’t motivated by practical applications, these applications would follow, by a process in which potential applications were picked out and developed by applied scientists, and then converted into new products and processes by engineers and technologists. This one-way flow of ideas from science into application is what innovation theorists call the linear model of innovation. Bush’s third assertion was that a country that invested in basic science would recoup that investment through capturing the rewards from new technologies.

All three of these assertions have subsequently been extensively criticised, though the basic picture has a persistent hold on our thinking about science. Perhaps the most influential critique, from the science policy point of view, came in a book by Donald Stokes called Pasteur’s quadrant. Stokes argued from history that the separation of basic research from thoughts of potential use often didn’t happen; his key example was Louis Pasteur, who created a new field of microbiology in his quest to understand the spoilage of milk and the fermentation of wine. Rather than thinking about a linear continuum between pure and applied research, he thought in terms of two dimensions – the degree to which research was motivated by a quest for fundamental understanding, and the degree to which it was motivated by applications. Some research was driven solely by the quest for understanding, typified by Bohr, while an engineer like Edison typified a search for practical results untainted by any deeper curiosity. But, the example of Pasteur showed us that the two motivations could coexist. He suggested that research in this “Pasteur’s quadrant” – use-inspired basic research – should be a priority for public support.

Where are we now? The idea of Pasteur’s quadrant underlies the idea of “Grand Challenges”, inspired by societal goals, as an organising principle for publicly supported science. From innovation theory and science and technology studies come new terms and concepts, like technoscience and Mode 2 knowledge production. One might imagine that nobody believes in the linear model anymore; it’s widely accepted that technology drives science as often as science drives technology. As David Willetts, the UK’s Science Minister, put it in a speech in July this year, “A very important stimulus for scientific advance is, quite simply, technology. We talk of scientific discovery enabling technical advance, but the process is much more inter-dependent than that.” But the linear model is still deeply ingrained in the way policy makers talk – in phrases like “technology readiness levels” and “pull-through to application”. From a more fundamental point of view, though, there is still a real difference between finding evidence to support a hypothesis and demonstrating that a gadget works. Intervening in nature is a different goal to understanding nature, even though the processes by which we achieve these goals are very much mixed up.

Energy, carbon, money – floating rates of exchange

When one starts reading about the future of the world’s energy economy, one needs to get used to making conversions amongst a zoo of energy units – exajoules, millions of tons of oil equivalent, quadrillions of British thermal units and the rest. But these conversions are trivial in comparison to a couple of other rates of exchange – the relationship between energy and carbon emissions (using this term as a shorthand for the effect of energy use on the global climate), and the conversion between energy and money.

On the face of it, it’s easy to see the link between emissions and energy. You burn a tonne of coal, you get 29 GJ of energy out and you emit 2.6 tonnes of carbon dioxide. But, if we step back to the level of a national or global economy, the emissions per unit of energy used depend on the form in which the energy is used (directly burning natural gas vs using electricity, for example) and, for the case of electricity, on the mix of generation being used. And if we want an accurate picture of the impact of our energy use on climate change, we need to look at more than just carbon dioxide emissions. CO2 is not the only greenhouse gas; methane, for example, despite being emitted in much smaller quantities than CO2, is still a significant contributor to climate change as it is a considerably more potent greenhouse gas than CO2. So if you’re considering the total contribution to global warming of electricity derived from a gas power station you need to account, not just for the CO2 produced by direct burning, but also for the effect of any methane emitted from leaks in the pipes on the way to the power station. Likewise, the effect on climate of the high altitude emissions from aircraft is substantially greater than that from the carbon dioxide alone, for example due to the production of high altitude ozone from NOx emissions. All of these factors can be wrapped up by expressing the effect of emissions on the climate through a measure of “mass of carbon dioxide equivalent”. It’s important to take these additional factors into account, or you end up significantly underestimating the climate impact of much energy use, but this accounting embodies more theory and more assumptions.
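
To give a feel for what this kind of accounting involves, here is a back-of-envelope sketch. The coal figures are the ones quoted above; the gas combustion factor, the energy content of methane, the global warming potential and the leakage rate are round, illustrative values rather than authoritative emission factors.

```python
# Back-of-envelope CO2-equivalent accounting. The coal figures come from the
# text above; the remaining numbers are round, illustrative values only.

GWP_METHANE = 25.0   # kg CO2e per kg CH4 over 100 years (a commonly quoted figure)

def coal_co2e_per_gj():
    # 1 tonne of coal ~ 29 GJ of energy and ~2.6 tonnes of CO2
    return 2600.0 / 29.0                      # ~90 kg CO2 per GJ

def gas_co2e_per_gj(leak_fraction=0.02):
    direct = 56.0                             # ~56 kg CO2 per GJ of natural gas burned
    energy_per_kg_ch4 = 0.05                  # ~0.05 GJ per kg of methane burned
    # Methane leaked upstream, per GJ actually delivered, weighted by its GWP
    leaked_kg = leak_fraction / (1.0 - leak_fraction) / energy_per_kg_ch4
    return direct + leaked_kg * GWP_METHANE

print(f"coal:             {coal_co2e_per_gj():.0f} kg CO2e per GJ")
print(f"gas (2% leakage): {gas_co2e_per_gj(0.02):.0f} kg CO2e per GJ")
```

Even on these crude numbers the leakage term is visible: a few per cent of methane lost on the way to the power station adds noticeably to the footprint of gas-fired electricity, although it still comes out well below coal.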

For a highly accessible and readable account of the complexities of assigning carbon footprints to all sorts of goods and activities, I recommend Mike Berners-Lee’s new book How Bad Are Bananas?: The carbon footprint of everything. This has some interesting conclusions – his insistence on full accounting leads to surprisingly high carbon footprints for rice and cheese, for example (as the title hints, he recommends you eat more bananas). But carbon accounting is in its infancy; what’s arguably most important now is money.

At first sight, the conversion between energy and money is completely straightforward; we have well-functioning markets for common energy carriers like oil and gas, and everyone’s electricity bill makes it clear how much we’re paying individually. The problem is that it isn’t enough to know what the cost of energy is now; if you’re deciding whether to build a nuclear power station or to install photovoltaic panels on your roof, to make a rational economic decision you need to know what the price of energy is going to be over a twenty to thirty year timescale, at least (the oldest running nuclear power reactor in the UK was opened in 1968).
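
To see why the forecast matters so much, consider a toy investment appraisal. Every number below is made up for the purpose of illustration; the only point is how strongly the answer depends on the energy price trajectory you choose to believe.

```python
# Toy investment appraisal: how the value of a hypothetical generator depends
# on the assumed future energy price. All numbers are invented for illustration.

def npv(capex, annual_output_mwh, lifetime_years,
        price_year0, price_growth, discount_rate):
    value = -capex
    for year in range(1, lifetime_years + 1):
        price = price_year0 * (1.0 + price_growth) ** year
        value += annual_output_mwh * price / (1.0 + discount_rate) ** year
    return value

# The same hypothetical plant under three different price forecasts
for growth in (0.00, 0.03, 0.06):
    result = npv(capex=2_000_000, annual_output_mwh=4_000, lifetime_years=30,
                 price_year0=50.0, price_growth=growth, discount_rate=0.05)
    print(f"assumed price growth {growth:.0%}: NPV = £{result:,.0f}")
```

On these invented numbers the apparent value of the same plant varies several-fold depending purely on which price path is assumed, which is the investor’s problem in miniature.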

The record of forecasting energy prices and demand is frankly dismal. Vaclav Smil devotes a whole chapter of his book Energy at the Crossroads: Global Perspectives and Uncertainties to this problem – the chapter is called, simply, “Against Forecasting”. Here are a few graphs of my own to make the point – these are taken from the US Energy Information Administration‘s predictions of future oil prices.

In 2000 the USA’s Energy Information Administration produced this forecast for oil prices (from the International Energy Outlook 2000):

Historical oil prices up to 2000 in 2008 US dollars, with high, low and reference predictions made by the EIA in 2000

After a decade of relatively stable oil prices (solid black line), the EIA has relatively tight bounds between its high (blue line), low (red line) and reference (green line) predictions. Let’s see how this compared with what happened as the decade unfolded:

High, low and reference predictions for oil prices made by the EIA in 2000, compared with the actual outcome from 2000-2010

The EIA, having been mugged by reality in its 2000 forecasts, seems to have learnt from its experience, if the range of the predictions made in 2010 is anything to go by:

Successive predictions for future oil prices made by the USA's EIA in 2000 and 2010, compared to the actual outcome up to 2010

This forecast may be more prudent than the 2000 forecast, but with a variation of nearly a factor of four between the high and low scenarios, it’s also pretty much completely useless. Conventional wisdom in recent years argues that we should arrange our energy needs through a deregulated market. It’s difficult to see how this can work when the information on the timescale needed to make sensible investment decisions is so poor.

What does it mean to be a responsible nanoscientist?

This is the pre-edited version of an article first published in Nature Nanotechnology 4, 336-336 (June 2009). The published version can be found here (subscription required).

What does it mean to be a responsible nanoscientist? In 2008, the European Commission recommended a code of conduct for responsible nanosciences and nanotechnologies research (PDF). This is one of a growing number of codes of conduct being proposed for nanotechnology. Unlike other codes, such as the Responsible Nanocode, which are focused more on business and commerce, the EU code is aimed squarely at the academic research enterprise. In attempting this, it raises some interesting questions about the degree to which individual scientists are answerable for consequences of their research, even if those consequences were ones which they did not, and possibly could not, foresee.

The general goals of the EU code are commendable – it aims to encourage dialogue between everybody involved in and affected by the research enterprise, from researchers in academia and industry, through policy makers, to NGOs and the general public, and it seeks to make sure that nanotechnology research leads to sustainable economic and social benefits. There’s an important question, though, about how the responsibility for achieving this desirable state of affairs is distributed between the different people and groups involved.

One can, for example, imagine many scientists who might be alarmed at the statement in the code that “researchers and research organisations should remain accountable for the social, environmental and human health impacts that their N&N research may impose on present and future generations.” Many scientists have come to subscribe to the idea of a division of moral labour – they do the basic research, which, in the absence of direct application, remains free of moral implications, and the technologists and industrialists take responsibility for the consequences of applying that science, whether those are positive or negative. One could argue that this division of labour has begun to blur, as the distinction between pure and applied science becomes harder to make. Some scientists themselves are happy to embrace this – after all, they are happy to take credit for the positive impact of past scientific advances, and to cite the potential big impacts that might hypothetically flow from their results.

Nonetheless, it is going to be difficult to convince many that the concept of accountability is fair or meaningful when applied to the downstream implications of scientific research, when those implications are likely to be very difficult to predict at an early stage. The scientists who make an original discovery may well not have a great deal of influence on the way it is commercialised. If there are adverse environmental or health impacts of some discovery of nanoscience, the primary responsibility must surely lie with those directly responsible for creating the conditions in which people or ecosystems were exposed to the hazard, rather than with the original discoverers. Perhaps it would be more helpful to think about the responsibilities of researchers in terms of a moral obligation to be reflective about possible consequences, to consider different viewpoints, and to warn about possible concerns.

A consideration of the potential consequences of one’s research is one possible approach to proceeding in an ethical way. The uncertainty that necessarily surrounds any predictions about the way research may end up being applied at a future date, and the lack of agency and influence on those applications that researchers often feel, can limit the usefulness of this approach. Another recently issued code – the UK government’s Universal Ethical Code for Scientists (PDF) – takes a different starting point, with one general principle – “ensure that your work is lawful and justified” – and one injunction to “minimise and justify any adverse effect your work may have on people, animals and the natural environment”.

A reference to what is lawful has the benefit of clarity, and it provides a connection through the traditional mechanisms of democratic accountability with some expression of the will of society at large. But the law is always likely to be slow to catch up with new possibilities suggested by new technology, and many would strongly disagree with the principle that what is legal is necessarily ethical. As far as the test of what is “justified” is concerned, one has to ask, who is to judge this?

One controversial research area that probably would pass the test that research should be “lawful and justified” is the application of nanotechnology to defence. Developing a new nanotechnology-based weapons system would clearly contravene the EU code’s injunction to researchers that they “should not harm or create a biological, physical or moral threat to people”. Researchers working in a government research organisation with this aim might find reassurance for any moral qualms in the thought that it was the job of the normal processes of democratic oversight to ensure that their work did pass the tests of lawfulness and justifiability. But this won’t satisfy those people who are sceptical about the ability of institutions – whether they are in government or in the private sector – to manage the inevitably uncertain consequences of new technology.

The question we return to, then, is this: how is responsibility divided between the individuals who do science, and the organisations, institutions and social structures in which science is done? There’s a danger that codes of ethics focus too much on the individual scientist, at a time when many scientists feel rather powerless, with research priorities increasingly being set from outside, and with the development and application of their research out of their hands. In this environment, too much emphasis on individual accountability could prove alienating, and could divert us from efforts to make the institutions in which science and technology are developed more responsible. Scientists shouldn’t underestimate their collective importance and influence, even if individually they feel rather impotent. Part of the responsibility of a scientist should be to reflect on how one would justify one’s work, and how people with different points of view might react to it; such scientists will be in a good position to have a positive influence on the institutions they interact with – funding agencies, for example. But we still need to think more generally about how to make responsible institutions for developing science and technology, as well as responsible nanoscientists.

David Willetts on Science and Society

The UK’s Minister for Science and Higher Education, David Willetts, made his first official speech about science at the RI on 9 July 2010. What everyone is desperate to know is how big a cut the science budget will take. Willetts can’t answer this yet, but the background position isn’t good. We know that the budget of his department – Business, Innovation and Skills – will be cut by somewhere between 25% and 33%. Science accounts for about 15% of this budget, with Universities accounting for another 29% (not counting the cost of student loans and grants, which accounts for a further 27%). So, there’s not going to be a lot of room to protect spending on science and on research in Universities.

Having said that, this is a very interesting speech, in that Willetts takes some very clear positions on a number of issues related to science and innovation and their relationship to society, some of which are rather different from the views held in government before. I met Willetts earlier in the year, and he said a couple of things then that struck me. He said that there was nothing in science policy that couldn’t be illuminated by looking at history. He mentioned in particular “The Shock of the Old”, by David Edgerton (which I’ve previously discussed here), and I noticed that at the RS meeting after the election he referred very approvingly to David Landes’s book “The Wealth and Poverty of Nations”. More personally, he referred with pride to his own family origins as Birmingham craftsmen, and he clearly knows the story of the Lunar Society well. His own academic background is as a social scientist, so it is to be expected that he’d have some well-developed views about science and society. Here’s how I gloss the relevant parts of his speech.

More broadly, as society becomes more diverse and cultural traditions increasingly fractured, I see the scientific way of thinking – empiricism – becoming more and more important for binding us together. Increasingly, we have to abide by John Rawls’s standard for public reason – justifying a particular position by arguments that people from different moral or political backgrounds can accept. And coalition, I believe, is good for government and for science, given the premium now attached to reason and evidence.

The American political philosopher John Rawls was very concerned about how, in a pluralistic society, one could agree on a common set of moral norms. He rejected the idea that you could construct morality on entirely scientific grounds, as consequentialist ethical systems like utilitarianism try to, instead looking for a principles based morality; but he recognised that this was problematic in a society where Catholics, Methodists, Atheists and Muslims all had their different sets of principles. Hence the idea of trying to find moral principles that everyone in society can agree on, even though the grounds on which they approve of these principles may differ from group to group. In a coalition uniting parties including people as different as Evan Harris and Philippa Stroud one can see why Willetts might want to call in Rawls for help.

The connection to science is an interesting one, that draws on a particular reading of the development of the empirical tradition. According, for example, to Schaffer and Shapin (in their book “Leviathan and the Air Pump”) one of the main aims of the Royal Society in its early days was to develop a way of talking about philosophy – based on experiment and empiricism, rather than doctrine – that didn’t evoke the clashing religious ideologies that had been the cause of the bloody religious wars of the seventeenth century. According to this view (championed by Robert Boyle), in experimental philosophy one should refrain entirely from talking about contentious issues like religion, restricting oneself entirely to discussion of what one measures in experiments that are open to be observed and reproduced by anyone.

You might say that science is doing so well in the public sphere that the greatest risks it faces are complacency and arrogance. Crude reductionism puts people off.

I wonder if he’s thinking of the current breed of scientific atheists like Richard Dawkins?

Scientists can morph from admired public luminaries into public enemies, as debates over nuclear power and GM made clear. And yet I remain optimistic here too. The UK Research Councils had the foresight to hold a public dialogue about ramifications of synthetic biology ahead of Craig Venter developing the first cell controlled by synthetic DNA. This dialogue showed that there is conditional public support for synthetic biology. There is great enthusiasm for the possibilities associated with this field, but also fears about controlling it and the potential for misuse; there are concerns about impacts on health and the environment. We would do well to remember this comment from a participant: “Why do they want to do it? … Is it because they will be the first person to do it? Is it because they just can’t wait? What are they going to gain from it? … [T]he fact that you can take something that’s natural and produce fuel, great – but what is the bad side of it? What else is it going to do?” Synthetic biology must not go the way of GM. It must retain public trust. That means understanding that fellow citizens have their worries and concerns which cannot just be dismissed.

This is a significant passage which seems to accept two important features of some current thinking about public engagement with science. Firstly, that it should be “upstream” – addressing areas of science, like synthetic biology, for which concrete applications have yet to emerge, and indeed in advance of significant scientific breakthroughs like Venter’s “synthetic cell”. Secondly, it accepts that the engagement should be two-way, that the concerns of the public may well be legitimate and should be taken seriously, and that these concerns go beyond simple calculations of risk.

The other significant aspect of Willetts’s speech was a wholesale rejection of the “linear model” of science and innovation, but this needs another post to discuss in detail.

Whose goals should direct goal-directed research?

I’ve taken part in panel discussions at two events with a strong Science and Technology Studies flavour in the last couple of months. “Democratising Futures” was a meeting under the auspices of the Centre for Research in Arts, Social Sciences and Humanities, at Cambridge on 27 May 2010. The Science and Democracy Network’s meeting was held in association with the Royal Society at the Kavli Centre on the 29 June 2010. What follows is a composite of the sorts of things I said at the two meetings.

“There is no alternative” is a phrase with a particular resonance in British politics, but it also expresses a way of thinking about the progress of science and technology. To many people, science and technology represent an autonomous force, driven forward by its own internal logic. In this view, the progress of science and technology cannot be effectively steered, much less restrained. I think this view is both wrong and pernicious.

The reality is that there are very many places in which decisions and choices are made about the directions of science and technology. These include the implicit decisions made by the (international) scientific community, as a result of which the fashionable and timely topics of the day acquire momentum; the much more explicit choices made by funding agencies about which areas to give funding priority to; and the preferences expressed by a variety of actors in the private sector, whether those are the beliefs that inform investment decisions by venture capitalists or the strategic decisions made by multinational companies. It’s obvious that these decisions are not always informed by perfect information and rationality – they will blend informed but necessarily fallible judgements about how the future might unfold with sectional interests, and will be underpinned by ideology.

To take an example which I don’t think is untypical, in the funding body I know best, the UK’s Engineering and Physical Sciences Research Council (EPSRC), priorities are set by a mixture of top-down and bottom-up pressures. The bottom-up aspect comes from the proposals the council receives, from individual scientists, to pursue those lines of research that they think are interesting. From the top, though, comes increasing pressure from government to prioritise research in line with their broad strategies.

In setting a strategic framework, EPSRC distinguishes between the technical opportunities that the current state of science offers, and the demands of the “users” of research in industry and elsewhere. Advice on the former typically comes from practising scientists, who alone have the expertise to know what is possible. This advice won’t be completely objective, of course – it will be subject to the whims of academic fashion and a certain incumbency bias in favour of established, well-developed fields. The industrial scientists who provide advice will of course have a direct interest in science that benefits their own industries and their own companies. Policy demands supporting science that can be translated into the marketplace, but this needs to be balanced against a reluctance to subsidise the private sector directly. Even accepting the desirability of supporting science that can be taken to market quickly, there is an incumbency bias here too: given that this advice necessarily comes from people representing established concerns, who is going to promote the truly disruptive industries?

So, given these routes by which scientists and industry representatives have explicit mechanisms for influencing the agenda and priorities for publicly funded science, the big outstanding question is how the rest of the population can have some influence. Of course, research councils are aware of the broader societal contexts that surround the research they fund, and the scientists and industry people providing advice will be asked to incorporate these broader issues in their thinking. The danger is that these people are not well equipped to make such judgements. In a phrase of Arie Rip, it’s likely that they will be using “folk social science” – a set of preconceptions and prejudices, unsupported by evidence, about what the wider population thinks about science and technology (one very common example of this in the UK is the proposition that one can gauge probable public reactions to science by reading the Daily Mail).

It might be argued that the proper way for wider societal and ethical issues to be incorporated in scientific priority setting is through the usual apparatus of representative democracy – in the UK system, through Ministers who are responsible to Parliament. This fails in practice for both institutional and practical reasons. There is a formal principle in the UK known as the Haldane principle (like much else in the UK this is probably an invented tradition), which states that science should be governed at one remove from government, with decisions being left to scientists. The funding bodies – the research councils – are not direct subsidiaries of their parent government department, but are free-standing agencies. This doesn’t stop them from being given a strong strategic steer, both through informal and formal routes, but they generally resist taking direct orders from the Minister. But there are more general reasons why science resists democratic oversight through traditional mechanisms – it is at once too big and too little an issue. The long timescales of science and the convoluted routes by which it impacts on everyday life; the poor understanding of science on the part of elected politicians; the lack of immediate feedback from the electorate in the politicians’ postbags – all these factors contribute to science not having a high political profile, despite the deep and fundamental impacts it has on the way people live.

Here, then, is the potential role of public engagement – it should form a key input into identifying which potential goals of science and technology might have broad societal support. It was in recognition of these sorts of issues that EPSRC introduced a Societal Issues Panel into its advisory structure – this is a high-level strategic advice panel on a par with the Technical Opportunities Panel and the User Panel.

Another development in the way people are thinking about scientific priority setting makes these issues even more pointed – this is the growing popularity across the world of the idea of the “Grand Challenge” as a way of organising science. Here, we have an explicit link being made between scientific priorities and societal goals – which leads directly to the question “whose goals?”

Grand Challenges provide a way of contextualising research that goes beyond a rather sterile dichotomy between “applied” and “blue sky” research – it supports work that has some goal in mind, but a goal that is more distant than the typical object of applied research, and is often on a larger scale. The “challenge” or context is typically based on some larger societal goal, rather than on a question arising from a scientific discipline. This might be a global problem, such as the need to develop a low carbon energy infrastructure or ensure food security for a growing population, or something that is more local to a particular country or group of countries, such as the problems of ageing populations in the UK and other developed countries. The definition in terms of a societal goal necessarily implies that the work needs to be cross-disciplinary in character, and there is growing recognition in principle of the importance of social sciences.

An example of the way in which public engagement could help steer such a grand challenge programme was given by the EPSRC’s recent Grand Challenge in Nanotechnology for Medicine and Healthcare. Here, a public engagement exercise was designed with the explicit intention of using what emerged as an input, together with expert advice from academic scientists, clinicians and industry representatives, into a decision about how to shape the priorities of the programme.

I’ve written in more detail about this process elsewhere. Here, it’s worth stressing what made this programme particularly suitable for this approach. The proposed research was framed explicitly as a search for technological responses to societal issues, so it was easy to argue that public attitudes and priorities were an important factor to consider. The area is also strongly interdisciplinary; this makes the traditional approaches of relying solely on expert advice less effective. Very few, if any, individual scientists have expertise that crosses the range of disciplines that is necessary to operate in the field of nanomedicine, so technical advice needs to integrate the contributions of people expert in areas as different as colloid chemistry and neuroscience, for example.

The outcome of the public engagement provided rich insights that in some cases surprised the expert advisors. These insights included both specific commentaries on the proposed areas of research that were being considered (such as the use of nanotechnology-enabled surfaces to control pathogens) and a more general filter – the idea that a key issue in deciding people’s response to a proposed technology was the degree to which it gave control and empowerment to the individual, or took it away. Of course, people were concerned about issues of risk and regulation, but the form of the engagement was such that much broader questions than the simple question “is it safe?” were discussed.

I believe that this public engagement was very successful, because it concerned a rather concrete and tightly defined technology area, it was explicitly linked to a pending funding decision, and there was complete clarity about how it would contribute, together with more conventional consultations, to that decision – that is, what kind of applications of nanotechnology to medicine and healthcare a forthcoming funding call would prioritise. Of course, there are still many open questions about using public engagement more widely in this sort of priority setting.

The first issue is the question of scope – at what level does one ask the question? For example, in the area of energy research, one could ask: should we have a programme of energy research, and if so how big? Or, taking the answer to that question as given, one could ask whether research in biofuels should form a part of the energy programme. Or one could ask what kind of biofuel we should prioritise. My experience from a variety of public engagement exercises in the area of nanotechnology is that the more specific the question, the easier it is for people to engage with the process. But the criticism of focusing public engagement down in this way is that one can be accused, by focusing on the details, of taking the answers to the big questions as read.

But the big questions are fundamentally questions of politics in its proper sense. They are questions about what sort of world we want to live in and what kinds of lives we want to lead. The inescapable conclusion, for me, is that the explicit linkage of science and this kind of politics – the politics of big questions about society’s future – is both inevitable and desirable.

Many scientists will instinctively recoil from this enmeshing of science and politics. I think this is a mistake. It is less controversial to say we need more science in politics – since so many of the big issues we face have a scientific dimension, most people agree that decisions on these issues need to be informed by science. But we also need more explicit recognition of the political dimensions of science – because the science we do has such potential to shape the way our society will change, we need positive visions of those changes to steer the way science develops. So, we need more science in politics, and more politics in science. And, when it comes to it, we probably need more politics in politics too.

In addition to these more fundamental questions, there are some very practical linked issues related to the scale of the engagement exercises one does, their methodological robustness, and their cost. Social scientists can contribute a great deal to understanding how to make them as reliable as possible, but I believe that a certain pragmatism is called for when one considers their inevitable methodological shortcomings – they need to be seen as one input into a decision making process that already falls short of perfection. This is inevitable; it is expensive in money and time to do these exercises properly. The UK research councils seem to have settled down to an informal understanding that they will do one or two of these exercises a year, on the topics that seem to be the most potentially controversial. Following the nanomedicine dialogue, there have been recently completed exercises on synthetic biology and geo-engineering. But we will see how strong the will is to continue in this way in an environment with much less money around.

In addition to practical difficulties, there are people who oppose in principle any use of public engagement in setting scientific priorities. One can identify perhaps three classes of objections. The first will come from those scientists who oppose any infringement of the sovereignty of the “independent republic of science”. The second can be heard from some politicians, who regard the use of direct public engagement as an infringement of the principles of representative democracy. The third will come from free market purists, who will insist that the market provides the route by which informal, non-scientific knowledge is incorporated in decisions about how technology is developed. I don’t think any of these objections is tenable, but that’s the subject for a much longer discussion.

Is debt putting British science at risk?

This was my opening statement at a debate at the Cheltenham Science Festival. This piece also appears as a guest blog on the Times’s Science blog “Eureka Zone”; see also Mark Henderson’s commentary on the debate as a whole.

The question we are posed is “Is debt putting British science at risk?” The answer to this question is certainly yes – we are all aware of the need to arrest the growth in the nation’s debt, and the science budget looks very vulnerable. There is a moral case against excessive debt – it is those in the next generations, our children, who will be paying higher taxes to service this debt. But we can leave a positive inheritance for future generations as well. The legacy we leave them comes from the science we do now. It’s this science that will underpin their future prosperity. We also know that future generations will have to face some big problems – problems that may be so big that they even threaten their way of life. How will we adapt to the climate change we know is coming? How will we get the energy we need to run our energy-dependent society without further adding to that climate change, when the cheap oil we’ve relied on may be a distant memory? How will we feed a growing population? How will we make sure that we can keep our aging population well? These are the problems that we have left future generations to deal with, so we owe it to them to do the science that will provide the solutions.

It’s worth reminding ourselves about the legacy we inherited – what’s happened as a result of the science done in the 1970’s, 80’s and 90’s. I’m going to give just two examples. The first is in the area of health. Many people know the story of how monoclonal antibodies were invented by Cesar Milstein in the Cambridge MRC lab in 1975, a discovery for which he won the Nobel prize in 1984. Further developments took place, notably the method of “humanising” mouse antibodies invented by Greg Winter, also at the MRC lab. This is now the basis of a $32 billion market; one third of all new pharmaceutical treatments are based on this technology, including new treatments for breast cancer, arthritis, asthma and leukemia. And, contrary to the stereotype that the UK is good at science but bad at making money from it, this technology is now licensed to 50 companies, earning £300 million in royalties for the MRC. The two main spin-out companies were sold for a total of £932 million, one to AstraZeneca and the other to GlaxoSmithKline, and these large companies are continuing to generate value for the UK from them. So this is a very clear example of a single invention that led to a new industry.

Often the situation is much more complicated than this; rather than a single invention one has a whole series of linked breakthroughs in science, technology and business. Like many other people, I’m delighted with my new smartphone; this is a symbol of a vast new sector of the economy based on information and communication technology. Many people know that the web as we now know it was made possible by the work of Sir Timothy Berners-Lee, a spin-off from the high energy physics effort at CERN; perhaps fewer know about the way the hardware of the web depends on optical fibre, in which so much work was done at Southampton. The basics of how to run a wireless network were developed by the company Racal, the spin-out from which, Vodafone, became a global giant in its own right. The display on my smartphone uses liquid crystals, invented at Hull, while newer e-book readers are starting to use e-ink displays reliant on the technology of Plastic Logic, a spin-out based on the plastic electronics work done in the Cavendish Lab in Cambridge in the 1990’s. So there’s a whole web of invention – an international effort, certainly, but one in which the UK has made a disproportionately large contribution, with economic value generated in all kinds of ways. It’s having a strong science base that allows one to benefit from this kind of web of innovation.

The case for science is made in the excellent Royal Society report “The Scientific Century – securing our future prosperity”. It had input from two former science ministers, Lord Sainsbury (Labour) and Lord Waldegrave (Conservative), from outstanding science leaders like Sir Paul Nurse and Mark Walport, and from a few rank-and-file scientists like myself, and it was put together by the Royal Society’s Science Policy team. I think it’s thoughtful, evidence-based and compelling.

I’d like to highlight three reasons why we should keep our science base strong.

Firstly, it will underpin our future prosperity. The transformation of science into products through spin-out companies is important, but the role of science in underpinning the economy goes much deeper than this. It is the trained people who come out of the science enterprise, and its connections with existing industry, that underpin the so-called “absorptive capacity” of the economy – the ability of an economy to make the most of the opportunities that science and technology will bring.

Secondly, it will give us the tools to solve the big problems we know we are going to face. Tough times are coming – the Government’s Chief Scientific Adviser, Sir John Beddington, talks of the “perfect storm” we face when continuing population pressure, climate change and the end of cheap energy all come together from 2020 onwards. It is science that will give us the tools to get through this time and prosper. We don’t know in advance what will work, so we need to support many different approaches. In my own area of nanotechnology, I’m particularly excited by the prospects for new kinds of solar cells that will be much cheaper, and made on a much larger scale, than current types, allowing solar energy to make a real contribution to our energy needs. And some of my colleagues are developing new ways of delivering drugs across the blood-brain barrier, to help us deal with intractable neurodegenerative diseases like Alzheimer’s that are exacting such high and growing human and economic costs from our ageing society. But these are just two of many promising lines of attack on our growing problems, and it is vital to maintain science in its diversity. To cut back on science now, in the face of these coming threats, would amount to unilateral disarmament.

Thirdly, we should support science in the UK because we are very good at it. The “Scientific Century” report quotes the figures: with 1% of the world’s population and 3% of the world’s spending on science, we produce 7.9% of the world’s scientific papers. The impact of those papers is reflected in the fact that they attract 11.8% of the citations made by other scientific papers; of the most highly cited papers – the ones with the biggest impact – the UK produces 14.4%. Arguably, we produce more top quality science for less money than anyone else. And despite myths to the contrary, we are effective at translating science into economic benefit – our universities are more focused on exploiting what they do than ever before, and as good at this as any in the world. Our success in science is a source of advantage in a very competitive world, and a cause of envy in other countries, which are investing significantly to try to match our performance.

So if debt is the problem we leave to future generations, science is the legacy we leave them; we owe it to them not to damage our science success story now.

Digital vitalism

The DNA that Venter’s team inserted into a bacterium, in his recently reported breakthrough in synthetic biology, was entirely synthetic – “Our cell has been totally derived from four bottles of chemicals”, he is quoted as saying. It is this aspect that underlies the comment from Arthur Caplan that I quoted in my last post, that “Venter’s achievement would seem to extinguish the argument that life requires a special force or power to exist. This makes it one of the most important scientific achievements in the history of mankind.” Well, this is one view. But the idea that some special quality separates matter of biological origin from synthetic chemicals – chemical vitalism – is more usually assumed to have been killed by Wöhler’s synthesis of urea in 1828.

But while Venter is putting a stake through the heart of the long-dead doctrine of chemical vitalism, I wonder whether he has allowed another kind of vitalism to slip in through the back door, as it were. The idea that his cells are entirely synthetic depends on a particular view of the flow of information: the sequence of the new genome is stored on a computer; that information is given physical realisation through the synthesis of the information-carrying molecule, DNA; and it is this information, when inserted into the lifeless husk – the shell of a bacterium whose own DNA has been removed – that sparks the cell into life, re-animating it under the control of the new DNA. In the language Venter and others often use, the cell is “booted up”, as a dead computer with a corrupted operating system is restored to life with a new system disk. This idea that the spark of life is imparted by the information in the DNA seems perilously close to another kind of vitalism – let’s call it “digital vitalism”.

But does DNA control the cell, or does the cell control the DNA? Certainly, until Venter’s DNA molecule is introduced into its bacterial host, it is simply a lifeless polymer. It’s the machinery of the cell that reads the DNA and synthesises the protein molecules whose sequences are encoded within it. In many cases, it’s the regulatory apparatus of the cell that controls when that reading and synthesis is done – an enzyme is a tool, so there’s no point making it unless it is needed. Here the DNA seems less like a controller directing the operation of the cell, and more like a resource for the cell to draw on when necessary. And, it seems, bacteria endlessly swap bits of DNA with each other, allowing the fast spread of particularly useful tools, like resistance to antibiotics. This isn’t to deny that DNA is absolutely central to life of all sorts – without it the cell can’t renew itself, much less reproduce – but perhaps the relationship between the DNA and the rest of the cell is less asymmetric and more entangled than this talk of control implies.

How much do we need to worry about a few arguable metaphors? Here, more than usual, because it is these ideas of complete control, and of the reduction of biology to the digital domain, that invest the visions of synthetic biology with such power.

Speculative bioethics as an engine of hype

Looking back on the reporting of the paper from Craig Venter’s team describing the successful insertion of a synthetic genome into a bacterium, one thing strikes me – the commentators who talked up the potential and significance of the experiment the most were not the scientists, but the bioethicists.

As one might expect, the Daily Mail took a hysterical view – “one mistake in a lab could lead to millions being wiped out by a plague” sums up its tone. But it was able to back up its piece with expert opinion from Julian Savulescu, of the Oxford Uehiro Centre for Practical Ethics, who says of Venter: “he is not merely copying life artificially or modifying it by genetic engineering. He is going towards the role of God: Creating artificial life that could never have existed.” Even in the sober pages of the Financial Times, we have Arthur Caplan, bioethics professor at the University of Pennsylvania, saying “Venter’s achievement would seem to extinguish the argument that life requires a special force or power to exist. This makes it one of the most important scientific achievements in the history of mankind.” There is a very marked contrast with the generally much more sceptical comments from scientists, for example those quoted in a NY Times article. The Nobel laureate David Baltimore, for example, says “To my mind Craig has somewhat overplayed the importance of this… He has not created life, only mimicked it”.

One might almost suspect that there is a symbiosis going on here between scientists anxious to maximise the significance of their work and bioethicists in search of an issue to raise their own profile. After all, if a piece of science is worth worrying about, it must be important. It’s not that I think these developments lack potentially important societal and ethical implications – but it seems to me that they would be better considered from a rather more critical standpoint.