Good capitalism, bad capitalism and turning science into economic benefit

Why isn’t the UK more successful at converting its excellent science into wealth-creating businesses? This is a perennial question – and one that’s driven all sorts of initiatives to get universities to handle their intellectual property better, to develop closer partnerships with the private sector and to create more spinout companies. Perhaps UK universities shied away from such activities thirty years ago, but that’s not the case now. In my own university, Sheffield, we have some very successful and high-profile activities in partnership with companies, such as our Advanced Manufacturing Research Centre with Boeing, shortly to be expanded as part of an Advanced Manufacturing Institute with heavy involvement from Rolls-Royce and other companies. Like many universities, we have some interesting spinouts of our own. And yet, while the UK produces many small high-tech companies, we just don’t seem to be able to grow those companies to a scale where they’d make a serious difference to jobs and economic growth. To take just one example, the Royal Society’s Scientific Century report highlighted Plastic Logic, a company making flexible displays for applications like e-book readers, based on great research by Richard Friend and Henning Sirringhaus at Cambridge University. It’s a great success story for Cambridge, but the picture for the UK economy is less positive. The company’s head office is in California, its first factory was in Leipzig, and its major manufacturing facility will be in Russia – the last of these not unrelated to the $150 million that the Russian agency Rusnano invested in the company earlier this year.

This seems to reflect a general problem – why aren’t UK-based investors more willing to put money into small technology-based companies to allow them to grow? Again, this is something people have talked about for a long time, and there’ve been a number of more or less (usually less) successful government interventions to address the issue. The latest of these was announced in the Chancellor of the Exchequer George Osborne’s speech to the Conservative party conference – “credit easing” to “help solve that age old problem in Britain: not enough long term investment in small business and enterprise.”

But it’s not as if there isn’t any money in the UK to be invested – so the question to ask isn’t why money isn’t invested in high-tech businesses; it is why money is invested in other places instead. The answer must be simple – those other opportunities offer higher returns, at lower risk, on shorter timescales. The problem is that many of these opportunities don’t support productive entrepreneurship, which brings new products and services to people who need them and generates new jobs. Instead, to use a distinction introduced by the economist William Baumol (see, for example, his article Entrepreneurship: Productive, Unproductive, and Destructive, PDF), they support unproductive entrepreneurship, which exploits suboptimal reward structures in an economy to make profits without generating real value. Examples of this kind of activity might include restructuring companies to maximise tax evasion, speculating in financial and property markets when the downside risk is shouldered by the government, exploiting privatisations and public/private partnerships that have been structured to the disadvantage of the tax-payer, and generating capital gains which result from changes in planning and tax law.

Most criticism of this kind of bad capitalism focuses on issues of fairness and equity, and on the damage done to the democratic process by the associated lobbying and influence-peddling. But it causes deeper problems than this – money and effort used to support unproductive entrepreneurship are unavailable to support genuine innovation, to create new products and services that people and society want and need. In short, bad capitalism crowds out good capitalism, and innovation suffers.

Some questions for British research policy

This piece is based on a summing-up I did at a meeting in London this March: A New Mandate? Research Policy in the 21st Century.

There seem to be two lurking worries that concern people in science policy in the UK at the moment. The first is the worry that, having built a case for state support of science on the promise of innovation and economic growth, that innovation and economic growth may not be delivered. The second is that the scientific enterprise doesn’t have a sufficiently broad base of popular support. In short, are we suffering from an innovation deficit, and does our research effort have a democratic deficit?

An innovation deficit

The letter accompanying the funding settlement from BIS to the Research Councils called for “even more impact” – the impact agenda in research councils and funding agencies is being pressed with a real sense of urgency, even though the argument behind it is by no means settled.

To many scientists the economic case for supporting science may seem self-evident, but the solid evidence in support of it is surprisingly slippery. There is certainly a feeling in some quarters – and not just from the Guardian’s Simon Jenkins – that the economic impact of science has been oversold. The Royal Society’s “The Scientific Century” report was a serious attempt to assemble the evidence. What strikes me, though, is that it doesn’t make a great deal of sense to try to answer the primary question – to what extent should the state support science – without considering the much broader question of how our political and economic system is set up to support innovation.

And it is in relation to innovation that there are some more general worries, both at a global level and in our own national circumstances:

  • Is the rate of innovation actually slowing – leaving aside the special case of information technology, have the easiest gains from new technology already been made? I discussed this in an earlier post Accelerating Change or Innovation Stagnation?
  • Is our UK innovation system broken? In the UK postwar settlement, universities were only one of a number of kinds of places where research – especially more applied research – was carried out. Major conglomerates like ICI and GEC had large corporate laboratories, there were major government laboratories associated with organisations like the Atomic Energy Authority, and the military supported laboratories like RSRE Malvern which combined quite basic research with more strategic research and development. In the post-Thatcher climate of privatisation, deregulation and the drive to “unlock shareholder value” most of these alternative research organisations have disappeared.
  • In their place, we see a new emphasis on the development of protectable intellectual property in Universities with a view to creating venture-capital backed spin-out companies. This gives rise to two questions – how effective is this as a mechanism for technology transfer, and does the new emphasis on protectable IP have any deleterious effects on innovation itself? Certainly, the experience of nano- and bio- technology does point to potential problems of patent thickets and an “anti-commons” effect in academia, where pre-existing IP positions inhibit other scientists from working in particular areas. It’s these worries, among other factors, that have driven a move to a more open-source approach, now spreading from IT to new areas like synthetic biology.
  • For the UK, the pharmaceutical industry has been particularly important, as an industry of genuinely international stature which has been politically very important in making the case for state-supported science (and influencing the shape of that support). So the fact that this industry is having innovation difficulties of its own – the closure of the Pfizer R&D site at Sandwich being a very visible signal of this – is worrying.
  • We’re seeing the introduction of a new kind of institution into the innovation landscape – the Technology and Innovation Centres. There’s still uncertainty about their role and some governance issues are still unclear, but what’s most significant is that there is a widely perceived gap that they are intended to fill.
A democratic deficit

The idea that we’re in the midst of a popular crisis of trust in science is deeply embedded. I’m not convinced that the crisis of trust is with science itself, rather than with the use of science in politics and commerce – something slightly different – but nonetheless this idea has been a driving force for much of the new enthusiasm for public engagement and dialogue, and for taking that public engagement upstream. While some people (including me) would want to see this as part of a broader effort to steer technology towards widely shared societal goals, for many it is still seen as being about gaining acceptance for new technologies.

On the face of it, these two worries – of an innovation deficit and of a democratic deficit – look to be in opposition. The idea of an innovation deficit suggests that our problem is that technology isn’t moving fast enough, and that we have to work to remove obstacles in the way of innovation, while the negative perception of public engagement holds that its job is to put those obstacles back in the way. At times like the present, this perception is a real danger.

    But actually they’re quite closely connected. Underneath these dilemmas are two worries – a loss of confidence in the self-organising capability of the scientific enterprise, and a sense that something’s missing in our innovation system.

    Research councils – “from funder to sponsor”

It’s these worries that underlie current moves in the UK research councils, perhaps most explicitly defined by EPSRC, in its aim of moving “from funder to sponsor” – i.e. moving from responding to the agenda of the scientific community towards commissioning research in support of national needs.

    The issues then are, how is national need defined, and how is the process of defining that national need given legitimacy?

    This is a big problem in our current system, where our political fashion is explicitly not to define such a need in anything other than rather general and vacuous terms (like saying we need to have a “knowledge economy”). To pose the question in its most pointed form, does it make sense to have a science policy if you don’t have an industrial policy?

    This situation puts research councils in a very difficult position. If governments are not prepared to develop such an industrial policy, how can the research councils do this – how can they do it practically, and how can their decisions acquire legitimacy?

These legitimacy problems arise from three directions:
    1. with the scientific community
    2. with the government
    3. with the population at large.

The scientific community will see a potential clash with the Haldane principle (invented tradition though David Edgerton says this is), which could be interpreted as saying that the scientific community is the primary source of the research agenda, as an embodiment of the principle of the autonomy of the scientific enterprise.

    With the government, a research council like EPSRC is in a very difficult position. They have to deliver the science in support of a national policy which does not, in fact, exist, but they will be judged by very instrumental measures of wealth creation.

    Can “challenge-led” research help?

    “Societal challenges” offer a new synthesis that can be considered a response to this. I find this attractive as a way of getting beyond a sterile dichotomy between applied and basic research, but the definitions of what might be meant by a societal challenge are contested, value-laden and full of interpretive flexibility.

    Societal challenges do have an advantage, in having a certain security in the face of political uncertainty and lack of direction, and a certain independence from political whims. Who can really disagree with the idea that sustainable energy will be a big deal on rather long timescales, for example?

But there are problems – can governments genuinely take a long enough view? How can we avoid fads and the herd mentality? How can we be prepared for the inevitable unanticipated changes in direction in world events? How can we move from generalities to the particularities of real technologies?

What is the place of public engagement? On the one hand, what better way of getting a direct view about what national need should be than consulting the public directly? Public engagement then presents itself as a partial solution to the problem of legitimacy, though not one that will necessarily make the research councils’ relationship with government any easier.

There is one other set of institutions that, strangely, don’t get mentioned very often: the Universities. What’s their role? Can they be more than just a loose coalition of individual researchers responding to the incentives and demands of the research councils and other funders? Universities have their own considerable intellectual resources across the disciplines, and they have their own long history and independence, so one might hope that Universities themselves could be another focus for reasserting the public value of research. For a civic university like my own, Sheffield, surely the University should serve as a focus for the aspirations of the community it serves.

    Science and politics

    There is another driving force for public engagement; the sense that representative government is failing to provide a space for discussing big issues about our future choices and how people want to live their lives. Science and technology have to be a part of this discussion, and this is why discussions about science and technology must have a political dimension. There are those who assert the opposite – that science doesn’t have or shouldn’t have a political dimension, and that technology is autonomous, out of control, and can’t be directed. But these assertions are themselves profoundly political statements.

    Why has the UK given up on nanotechnology?

In a recent roundup of nanotechnology activity across the world, the consultancy Cientifica puts the UK’s activity pretty much at the bottom of the class. Is this a fair reflection of the actual situation? Comparing R&D numbers across countries is always difficult, because of the different institutional arrangements and different ways spending is categorised; but, broadly, this feels about right. Currently, the UK has no on-going nanotechnology program. Activity continues in projects that are already established, but the current plans for government science spending in the period 2011–2015, as laid out in the various research council documents, reveal no future role for nanotechnology. The previous cross-council program “Nanoscience through Engineering to Application” has been dropped; all the cross-council programmes now directly reflect societal themes such as “ageing population, environmental change, global security, energy, food security and the digital economy”. The delivery plan for the Engineering and Physical Sciences Research Council, previously the lead council for nanotechnology, does not even mention the word, while the latest strategy document for the Technology Strategy Board, responsible for nearer-market R&D support, notes in a footnote that nanotechnology is “now embedded in all themes where there are such opportunities”.

    So, why has the UK given up on nanotechnology? I suggest four reasons.

1. The previous government’s flagship nanotechnology program – the network of Micro- and Nano- Technology centres (the MNT program) – is perceived as having failed. This program was launched in 2003, with initial funding of £90 million, a figure which was subsequently intended to rise to £200 million. But last July the new science minister, David Willetts, giving evidence to the House of Commons Science and Technology Select Committee, picked on nanotechnology as an area in which funding had been spread too thinly, and suggested that the number of nanotechnology centres was likely to be substantially pruned. To my knowledge, none of these centres has received further funding. In designing the next phase of the government’s translational research centres – a new network of Technology and Innovation Centres, loosely modelled on the German Fraunhofer centres – it seems that the MNT program has been regarded as a cautionary tale of how not to do things, rather than an example to build on, and nanotechnology in itself will play little part in these new centres (though, of course, it may well be an enabling technology for things like regenerative medicine).

    2. There has been no significant support for nanotechnology from the kinds of companies and industries that government listens to. This is partly because the UK is now weak in those industrial sectors that would be expected to be most interested in nanotechnology, such as the chemicals industry and the electronics industry. Large national champions in these sectors with the power to influence government, in the way that now-defunct conglomerates like ICI and GEC did in the past, are particularly lacking. Companies selling directly to consumers, in the food and personal care sectors, have been cautious about being too closely involved in nanotechnology for fear of a consumer backlash. The pharmaceutical industry, which is still strong in the UK, has other serious problems to deal with, so nanotechnology has been, for them, a second order issue. And the performance of small, start-up companies based on nanotechnology, such as Oxonica, has been disappointing. The effect of this was brought home to me in March 2010, when I met the then Science Minister, Lord Drayson, to discuss on behalf of the Royal Society the shortcomings of the latest UK Nanotechnology Strategy. To paraphrase his response, he said he knew the strategy was poor, but that was the fault of the nanotechnology community, which had not been able to get its act together to convince the government it really was important. He contrasted this with the space industry, which had been able to make what to him was a very convincing case for its importance.

    3. The constant criticism that the government was receiving about its slow response to issues of the safety and environmental impact of nanotechnology was, I am sure, a source of irritation. The reasons for this slow response were structural, related to the erosion of support for strategic science within government (as opposed to the kind of investigator led science funded by the research councils – see this blogpost on the subject from Jack Stilgoe), but in this environment civil servants might be forgiven for thinking that this issue had more downside than upside.

4. Within the scientific community, there were few for whom the idea of nanotechnology commanded their primary loyalty. After the financial crisis, when it was clear that big public spending cuts were likely and there were fears of very substantial cuts in science budgets, it was natural for scientists either to lobby on behalf of their primary disciplines or to emphasise the direct application of their work to existing industries with strong connections to government, like the pharmaceutical and aerospace industries. In this climate, the more diffuse idea of nanotechnology slipped through the gaps.

Does it matter that, in the UK, nanotechnology is no longer a significant element of science and innovation policy? On one level, one could argue that it doesn’t. Just because nanotechnology isn’t an important category by which science is classified, it doesn’t mean that the science that would formerly have been so classified doesn’t get done. We will still see excellent work being supported in areas like semiconductor nanotechnology for optoelectronics, plastic electronics, nano-enabled drug delivery and DNA nanotech, to give just a few examples. But opportunities to promote interdisciplinary science will be missed, and I think this really does matter. In straitened times, there’s a dangerous tendency for research organisations to retreat to core business, to single disciplines, and we’re starting to see this happening now to some extent. Interdisciplinary, goal-oriented science is still being supported through the societal themes, like the programs in energy and ageing, and it’s going to be increasingly important that these themes do indeed succeed in mobilising the best scientists from different areas to work together.

But I worry that it very much does matter that the UK’s efforts at translating nanotechnology research into new products and new businesses have not been more successful. This is part of a larger problem. The UK has, for the last thirty years, not only not had an industrial policy to speak of; it has had a policy of not having an industrial policy. But the last three years have revealed the shortcomings of this, as we realise that we can no longer rely on a combination of North Sea oil and the ephemeral virtual profits of the financial services industry to keep the country afloat.

    On Impact

This somewhat policy-heavy piece is an updated version of a talk I gave at a higher education policy conference last September – my apologies to blog readers not directly concerned with science and University funding in the UK, who may find it less enthralling.

    What is this thing called “impact”, which has such a grip on Universities and funding agencies in the UK at the moment? Of course, it isn’t a thing at all; it’s a word that’s been adopted to stand for a number of overlapping, but still distinct, imperatives that are being felt by different public agencies concerned with different aspects of funding research in higher education in the UK, and which, in turn, different constituencies within UK higher education are attempting to steer.

The most immediate sources of talk about “impact” are the Higher Education Funding Council for England (HEFCE) and the different research councils, who operate jointly in this area under the umbrella of Research Councils UK (RCUK). These two manifestations of the impact agenda are, in fact, rather different and separate issues. HEFCE wishes to measure the impact of past research, as part of its overall program to assess the past research performance of Universities – the Research Excellence Framework – which will subsequently inform future allocations of funding to the Universities. RCUK, on the other hand, wishes to ensure that the research it funds is carried out in a way that maximises the chance that it has impact. Both HEFCE and RCUK want the idea of impact to have a greater influence on funding decisions. But where HEFCE’s version of impact is backward-looking and concerned with measurement, RCUK’s interest is forward-looking and concerned with changing behaviours.

It is important to understand the wider context which has driven this concern with impact. The immediate pressure has come from the funding councils’ perception of a growing need to convince the Treasury that public spending on research brings a proportionate return to the UK as a whole. During the process of settling the science budget last autumn, in a very tight public spending round, this argument was dominant within government. And, to the extent that the budget settlement was not as bad as many had feared, perhaps the idea of impact did gain some traction. Certainly, last December’s letter (PDF here) announcing the science settlement called for “even more impact” – saying “Research Councils and Funding Councils will be able to focus their contribution on promoting impact through excellent research, supporting the growth agenda. They will provide strong incentives and rewards for universities to improve further their relationships with business and deliver even more impact in relation to the economy and society.”

But this focus on impact is only one manifestation of a much wider discussion about the value of research to society at large, and about how the values that underlie publicly funded research should be aligned with widely shared societal values. The broader question is how we organise publicly funded research to realise its public value. For leaders and managers of HE institutions engaged in publicly funded research, this leads to fundamental questions about the missions and visions of their institutions and how these are communicated to their members.

What do we actually mean by “impact”? This, of course, is a highly contested question – there is a growing perception that the degree of impact a particular discipline has on the wider world is directly connected to its value in the eyes of funding agencies, so it’s not surprising that disciplines will wish to influence the definition of impact to maximise their own contributions. Clearly science, engineering, medicine, social sciences, arts and the humanities will come at the problem with different emphases. The funding agencies will reflect a compromise position back to the academic communities they serve, while tailoring the message a different way in their interactions with their political masters.

HEFCE must, necessarily, take a broad view of impacts, as it serves the whole academic community. Engineers may emphasise the direct economic benefits that come from their research, social scientists the information that underpins good public policy, and scholars in the humanities more intangible cultural benefits. The task that HEFCE has set itself is devising a framework to measure and compare these incommensurable qualities. The methodology is starting to become clear. A pilot exercise tested a trial methodology in a number of different Universities, in a handful of rather different subjects. The methodology combines the use of quantitative indicators, where appropriate, with narrative case studies, in which the external impact of research carried out by groups of researchers over some past period is described. The results of the pilot highlighted some predictable difficulties, and suggested some mitigating strategies. The timescales on which impact appears vary greatly from subject to subject, and even within subjects. For much research, impacts are captured outside higher education, whether through the transfer of people from HE into industry or public service, or through the picking up of research ideas that are effectively in the public domain. As a result, the originators of research may well not be in a position to know about the impacts of their research.

The research councils have the apparent advantage that they can tailor the idea of impact more closely to their own constituencies. For the Medical Research Council (MRC), for example, it’s clear that improved health and well-being will be the primary category of their impact (though even here there may be many different routes to achieving those broad goals). The Engineering and Physical Sciences Research Council (EPSRC) will tend to emphasise economic impacts through spin-outs and partnerships with existing industry. Many researchers will be concerned that the growing emphasis on impact will lead inexorably to a move from pure, curiosity-driven research to more applied research. The counter-argument from the research councils will be to emphasise that this is not what they want; instead they seek a more conscious consideration of why the impact of the research they sponsor matters. This emphasises the forward-looking nature of the impact agenda as understood by RCUK – the sections in research council grant applications about “pathways to impact” don’t ask researchers to predict the future; rather, they seek to change the behaviour of researchers.

It’s clear that defining and assessing impact isn’t easy; the Science Minister, David Willetts, had earlier made his reservations about this clear. In a speech in July last year he announced a delay in the Research Excellence Framework, saying “The surprising paths which serendipity takes us down is a major reason why we need to think harder about impact. There is no perfect way to assess impact, even looking backwards at what has happened. I appreciate why scientists are wary, which is why I’m announcing today a one-year delay to the implementation of the Research Excellence Framework, to figure out whether there is a method of assessing impact which is sound and which is acceptable to the academic community. This longer timescale will enable HEFCE, its devolved counterparts, and ministers to make full use of the pilot impact assessment exercise which concludes in the Autumn, and then to consider whether it can be refined.”

At the moment, though, the views of the Treasury are as important as the views of the Minister. It’s difficult to avoid the suspicion that, for all the subtlety with which RCUK and HEFCE have defined the many dimensions of impact, the Treasury is interested in only one type of impact – money. This sounds more straightforward, but it’s still not easy – we need a robust evidence base for the assertion that spending on research yields commensurate, tangible economic returns.

It isn’t just in the UK that these arguments are being carried on. In the USA, for example, the large injection of funding into science as part of the economic stimulus package has prompted the “Star Metrics” programme. In the UK, the Royal Society released in March last year an extensive study – “The Scientific Century” – which marshalled the evidence for the returns on investment in publicly funded R&D (concentrating on science, medicine and engineering).

Even in this restricted domain, the complications of the routes by which public investment in research produces returns become apparent. There was, for many years, a clear consensus in western countries about the way in which the value of publicly funded science emerges. This consensus originates in an enormously influential document written by the US science administrator Vannevar Bush in 1945 – “Science: the Endless Frontier”. This is the document that led to the foundation of the USA’s National Science Foundation. It encapsulated what has become known as the “linear model of innovation” – the idea that pure science, curiosity-driven and carried out without any consideration of its end-uses, would be converted into national prosperity through a linear process of applied science and technological development. The impact agenda, as conceived by the research councils, directly contradicts this world-view – and since the linear model is deeply ingrained in many parts of the scientific community, this accounts for the deep-seated unease that the RCUK view of impact gives rise to in those quarters. Besides, if innovation really were that linear, surely the measurement of past impacts would be straightforward?

    However, the linear model is now very much out of fashion – it is considered by many to be neither an accurate picture of how research has worked in the past, nor a desirable prescription for how research ought to work in the future. To return to our current Science Minister, it is clear that he doesn’t believe it at all. In his July speech, he said: “The previous government appeared to think of innovation as if it were a sausage machine. You’re supposed to put money into university-based scientific research, which leads to patents and then spinout companies that secure venture capital backing….The world does not work like this as often as you might think…. There are many other ways of harvesting benefits from research. But the benefits are real”.

One of the most influential critiques of the linear model came in a book by Donald Stokes called Pasteur’s Quadrant. This argued that the separation of basic research from considerations of potential applications, made explicit in Bush’s picture, didn’t always correspond to the reality of how research has been done. There have certainly been scientists who have carried out fundamental investigations without any thought of potential use – Niels Bohr is the example Stokes used. And, as Bush argued, sometimes very practical applications do in fact emerge from such work. There have been technologists who have focused solely on the need to get their inventions to work and to market, without a great deal of curiosity about the fundamental underpinnings of those technologies – Thomas Edison being a classic example. But a scientist like Louis Pasteur carried out fundamental research – in his case, laying many of the foundations of modern microbiology – while at the same time being motivated by the very practical considerations of how wine ferments and milk sours.

On Stokes’s diagram, which has two axes defined by the degree to which considerations of use and fundamental interest motivate research, we have three quadrants typified by the approaches of Bohr, Edison and Pasteur. What occupies the fourth quadrant, where the work is characterised by being neither fundamentally interesting nor practically useful? In the past this undesirable quadrant hasn’t had a name, but I propose to call it “Cable’s quadrant”, after the UK’s Secretary of State for Business, Innovation and Skills, who said in a speech on 8 September last year that “there is no justification for taxpayers’ money being used to support research which is neither commercially useful nor theoretically outstanding.” Of course, no-one sets out to carry out research of this kind; the question is how to minimise the chance of research turning out this way without discouraging high-risk research that, if it did succeed, would be truly transformative.

There remains an unanswered question in Stokes’s formulation – who decides what is practically useful? Is this simply a matter of what has commercial applications? In the context of UK publicly funded research, this must be related to the broader question of who we, in Universities, work for. Universities are independent and autonomous institutions, so while they must respond to the immediate demands of their funders, they must always be mindful of their enduring sense of mission. How can we resolve this tension? One idea that might be helpful is the notion of “public value”, as applied to science policy in a pamphlet from Demos – “The public value of science”. But it should be clear that the drive for research councils, in particular, to move beyond criteria for “good science” that are entirely defined by scientists, on the basis of their own disciplinary norms, towards judging science on the basis of the perceived needs of the nation, will present some severe problems of its own, which I will perhaps discuss in a later post.

    What would a truly synthetic biology look like?

    This is the pre-edited version of an article first published in Physics World in July 2010. The published version can be found here (subscription required). Some of the ideas here were developed in a little more technical detail in an article published in the journal Faraday Discussions, Challenges in Soft Nanotechnology (subscription required). This can be found in a preprint version here. See also my earlier piece Will nanotechnology lead to a truly synthetic biology?.

On the corner of Richard Feynman’s blackboard, at his death, was the sentence “What I cannot create, I do not understand”. This slogan has been taken as the inspiration for the emerging field of synthetic biology. Biologists are now unravelling the intricate and complex mechanisms that underlie life, even in its simplest forms. But can we truly be said to understand biology until it proves possible to create a synthetic life-form?

    Craig Venter’s well-publicised program to replace the DNA in a simple microorganism with a new, synthetic genome has been widely reported as the moment when humans have created a new, synthetic living organism. This achievement was certainly a technical tour-de-force, but many would argue that just replacing the genome of an existing organism isn’t the same as creating a complete organism from the bottom up. Making a truly synthetic biology, in which all the components and mechanisms are designed and made without the use of existing biological materials or parts, is a much more distant and challenging prospect. But it is this, hugely more ambitious, act of creation that would fulfil Feynman’s criterion for truly understanding even the simplest forms of life.

What we have learnt from biology is how similar all life is – when we study biology, we are studying the many diverse branches from a single trunk: huge and baroque variety on one hand, but all variants on a single basic theme based on DNA, RNA and proteins. We’d like to find some general rules, not just about the one particular biology we know about, but about all possible biologies. It is this more general understanding that will help us with one of science’s deepest questions – was the origin of life on earth a random and improbable event, or should we expect to find life all over the universe, perhaps on many of the exo-planets we’re now discovering? Exo-biology has a practical difficulty, though – even if we can detect the signatures of alien life-forms, distance will make it difficult to study them in detail. So what better way of understanding alien life than trying to build it ourselves?

But we can’t start building life without having an understanding of what life is. The history of attempts to provide a succinct, water-tight definition of life is very long and rather inconclusive. There are some recurring themes, though. Many definitions focus on life’s ability to self-replicate and evolve, and on the ability of living organisms to maintain themselves by transforming external matter and free energy into their own components. The idea of living things as autonomous agents – able to sense their environment and choose between actions on the basis of this information – is appealing. But while people may agree on the ingredients of a definition, putting these together to make one which is neither too exclusive nor too inclusive is difficult. (I very much like the discussion of this issue in Pier Luigi Luisi’s excellent book The Emergence of Life.)

    An experimental approach to the problem might change the question – instead of asking “what life is” we could ask “what life does”. Rather than asking for a waterproof definition of life itself, we can make progress by asking what sort of things living things do, and then consider how we might execute these functions experimentally. Here we’re thinking explicitly of biology as a series of engineering problems. Given the scale of the basic unit of biology – the cell – what we’re considering is essentially a form of nanotechnology.

But not all nanotechnologies are the same; we’re asking how to make functional machines and devices in an environment dominated by the presence of water, the effects of Brownian motion, and some subtle but important interactions between surfaces. This nanoscale physics – very different from the rules that govern macroscopic engineering – gives rise to some new design principles, much exploited in biological systems. These principles include the idea of self-assembly – molecules that put themselves together under the influence of Brownian motion and surface forces, constructing complex structures whose design is entirely encoded within the molecules themselves. This is one example of the mutability that is so characteristic of soft and biological matter: because the structures are held together by weak interactions, subtle changes in external conditions can shift the balance between those interactions, reorganising molecules and assemblies of molecules in response to their environment.
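To get a feel for why weak interactions make this mutability possible, it helps to compare the relevant energy scales. The figures below are rough, order-of-magnitude textbook values, quoted only to fix ideas:

```latex
\begin{align*}
k_{B}T &= (1.38\times10^{-23}\,\mathrm{J\,K^{-1}})(300\,\mathrm{K})
         \approx 4\times10^{-21}\,\mathrm{J} \approx 2.5\,\mathrm{kJ\,mol^{-1}} \\
E_{\text{hydrogen bond}} &\sim 10\text{--}20\,\mathrm{kJ\,mol^{-1}} \approx 4\text{--}8\;k_{B}T \\
E_{\text{covalent C--C bond}} &\sim 350\,\mathrm{kJ\,mol^{-1}} \approx 140\;k_{B}T
\end{align*}
```

A structure held together by bonds worth a few k_BT is continually made and unmade by Brownian motion, so it can find its equilibrium design and reorganise when conditions change; a covalently bonded structure, at around 140 k_BT per bond, cannot.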

It’s quite difficult to imagine a living organism that doesn’t have some kind of closed compartment to separate the organism from its environment. Cells have membranes and walls of greater or lesser complexity, but at their simplest these are bags made from a double layer of phospholipid molecules, arranged so their hydrophobic tails are sandwiched between two layers of hydrophilic head groups. The synthetic analogues of these membranes are called liposomes; they are easily made and commonly used in cosmetics and drug delivery systems. Polymer chemists make analogues of phospholipids – amphiphilic block copolymers – which form bags called polymersomes; these, in some respects, offer much more flexibility of design, often being more robust and allowing precise control of wall thickness. From such synthetic bags, it is a short step to encapsulating systems of chemicals and biochemicals to mimic some kind of metabolism, and in some cases even some level of self-replication. What is more difficult is controlling the traffic in and out of the compartment; ideally this would require pores which only allow certain types of molecules in and out, or which can be opened and closed by certain triggers.

It is this sensitivity to the environment that proves more complex to mimic synthetically. It’s still not generally appreciated how much information-processing power is possessed by even the most apparently simple single-celled organisms. This is because biological computing is carried out not by electrons within transistors, but by molecules acting on other molecules. (Dennis Bray’s book Wetware is well worth reading on this subject.) The key elements of this chemical logic are enzymes that perform logical operations, responding to the presence or absence of input molecules by synthesising, or not synthesising, output molecules.

    Efforts to make synthetic analogues of this molecular logic are only at the earliest stages. What is needed is a molecule that changes shape in the presence of an input molecule, and for this shape change to turn on or off some catalytic activity. In biology, it is proteins that carry out this function; the only synthetic analogues made so far are built from DNA (see my earlier essay Molecular Computing for more details and references).

    Given molecular logic elements whose outputs are other molecules, one can start to build networks linking many logic gates. In biology these networks integrate information about the cell’s environment and make decisions about different courses of action the cell can take – to swim towards food, or away from danger, for example.
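To make the idea of networks of molecular logic concrete, here is a minimal sketch in Python. It abstracts away all the real biochemistry – an “enzyme” here is simply an AND gate that synthesises its output molecule when all of its input molecules are present – and every name in it (Enzyme, run_network, the molecule labels) is an illustrative invention, not part of any real modelling library.

```python
# Toy model of chemical logic: an "enzyme" fires (synthesises its output
# molecule) only when all of its required input molecules are present.
from dataclasses import dataclass

@dataclass
class Enzyme:
    name: str
    inputs: frozenset   # molecules that must all be present (AND logic)
    output: str         # molecule synthesised when the gate fires

def run_network(enzymes, molecules, max_rounds=10):
    """Iterate until no new molecules appear: the output of one gate
    can serve as the input to another, giving a simple network."""
    present = set(molecules)
    for _ in range(max_rounds):
        new = {e.output for e in enzymes if e.inputs <= present} - present
        if not new:
            break
        present |= new
    return present

# A two-layer network: two "sensor" enzymes each detect a nutrient and
# release a signal molecule; an "integrator" enzyme fires only when
# both signals are present, releasing a messenger that triggers swimming.
network = [
    Enzyme("sensorA", frozenset({"sugar"}), "signal1"),
    Enzyme("sensorB", frozenset({"aspartate"}), "signal2"),
    Enzyme("integrator", frozenset({"signal1", "signal2"}), "swim_messenger"),
]

print(run_network(network, {"sugar", "aspartate"}))
# -> includes 'swim_messenger'; remove 'aspartate' and it does not fire
```

Real cellular signalling is analogue, noisy and concentration-dependent, so this is a cartoon; the point is only that when the output of one gate is a molecule that can act as the input to another, logic becomes composable, and decision-making networks of the kind described above can be built up.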

In order for a bacterium-sized object to be able to move – to swim through a fluid or crawl along a surface – it needs to solve some very interesting physics problems. For such a small object, it’s the viscosity of the fluid that dominates resistance to motion, in contrast to the situation at human scales, where it’s the inertia of the fluid that needs to be overcome. In these situations of very low Reynolds number, new swimming strategies need to be found. Bacteria often use the beating motion of tiny threads – flagella or cilia – to push themselves forward. At Sheffield we’ve been exploring another way of making microscopic swimmers – catalysing a chemical reaction on one half of the particle, producing an asymmetric cloud of reaction products that pushes the particle forward by osmotic pressure (more details here). But even though we can make artificial swimmers, we still don’t know how to control and steer them.
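A rough worked example shows just how low “very low Reynolds number” is. Taking typical textbook values for a swimming bacterium – size L ≈ 1 µm, speed v ≈ 30 µm/s, in water (density ρ = 10³ kg/m³, viscosity η = 10⁻³ Pa s); these are illustrative numbers, not measurements of any particular organism:

```latex
\mathrm{Re} = \frac{\rho v L}{\eta}
            \approx \frac{(10^{3}\,\mathrm{kg\,m^{-3}})(3\times10^{-5}\,\mathrm{m\,s^{-1}})(10^{-6}\,\mathrm{m})}{10^{-3}\,\mathrm{Pa\,s}}
            \approx 3\times10^{-5}
```

Compare a human swimmer, with L ≈ 1 m and v ≈ 1 m/s, for whom Re ≈ 10⁶. At Re ≪ 1 inertia is irrelevant – stop swimming and you stop dead – so reciprocal, flapping strokes get you nowhere, and strategies like the rotating helical flagellum are needed.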

By now it should be obvious that the task of creating a truly synthetic biology remains a very distant goal. The more that biologists discover – particularly now they can use the tools of single-molecule biophysics to unravel the mechanisms of the sophisticated molecular machines within even the simplest types of organism – the cruder our efforts to mimic some of the features of cell biology seem. We do have a reasonable understanding of some important principles of nanoscale design – how to design macromolecules to make self-assembled structures resembling cell membranes, for example. But other areas are still wide open, from the fundamental theoretical issues around how to understand small systems driven far from equilibrium, through the intricacies of mechanisms to achieve accurate self-replication, to the challenge of designing chemical computers. On a practical level, to cope with this level of complexity we’re probably going to have to do what Nature does, and use evolutionary design methods. But if the goal is distant, we’ll learn a great deal from trying. Even to speculate about what a truly synthetic life-form might look like is itself helpful in sharpening our notions of what we might consider to be alive. It is this kind of experimental approach that will help us to find the physical principles that underlie biology – not just the biology we know about, but all possible biologies.

    Three things that Synthetic Biology should learn from Nanotechnology

    I’ve been spending the last couple of days at a meeting about synthetic biology – The economic and social life of synthetic biology. This has been a hexalateral meeting involving the national academies of science and engineering of the UK, China and the USA. The last session was a panel discussion, in which I was invited to reflect on the lessons to be learnt for new emerging technologies like synthetic biology from the experience of nanotechnology. This is more or less what I said.

    It’s quite clear from the many outstanding talks we’ve heard over the last couple of days that synthetic biology will be an important part of the future of the applied life sciences. I’ve been invited to reflect on the lessons that synbio and other emerging technologies might learn from the experience of my own field, nanotechnology. Putting aside the rueful reflection that, like synbio now, nanotechnology was the future once, I’d like to draw out three lessons.

    1. Mind that metaphor
    Metaphors in science are powerful and useful things, but they come with two dangers:
    a. it’s possible to forget that they are metaphors, and to think they truly reflect reality,
    b. and even if this is obvious to the scientists using the metaphors, the wider public may not appreciate the distinction.

Synthetic biology has been associated with some very powerful metaphors. There’s the idea of reducing biology to software; people talk about booting up cells with new operating systems. This metaphor underlies ideas like the cell chassis, interchangeable modules and expression operating systems. But it is only a metaphor; biology isn’t really digital, and there is an inescapable physicality to the biological world. The molecules that carry information in biology – RNA and DNA – are physical objects embedded in a Brownian world, and it’s as physical objects that they interact with their environment.

    Similar metaphors have surrounded nanotechnology, in slogans like “controlling the world atom by atom” and “software control of matter”. They were powerful tools in forming the field, but outside the field they’ve caused confusion. Some have believed these ideas are literally becoming true, notably the transhumanists and singularitarians who rather like the idea of a digital transcendence.

    On the opposite side, people concerned about science and technology find plenty to fear in the idea. We’ll see this in synbio if ideas like biohacking get wider currency. Hackers have a certain glamour in technophile circles, but to the rest of the world they write computer viruses and send spam emails. And while the idea of reducing biotech to software engineering is attractive to techie types, don’t forget that the experience of most people of software is that it is buggy, unreliable, annoyingly difficult to use, and obsolete almost from the moment you buy it.

    Finally, investors and venture capitalists believed, on the basis of this metaphor, that they’d get returns from nano start-ups on the same timescales that the lucky ones got from dot-com companies, forgetting that, even though you could design a marvellous nanowidget on a computer, you still had to get a chemical company to make it.

    2. Blowing bubbles in the economy of promises

Emerging areas of technology all inhabit an economy of promises, in which funding for the now needs to be justified by extravagant claims for the future. These claims may be about economic impact – “the trillion dollar market” – or about revolutions in fields such as sustainable energy and medicine. It’s essential to be able to make some argument about why research needs to be funded, and it’s healthy that we make the effort to anticipate the impact of what we do, but there’s an inevitable tendency for those claimed benefits to inflate to bubble proportions.

The mechanisms by which this inflation takes place are well known. People do believe the metaphors; scientists need to get grants; the media demand big, unqualified claims before they will pay attention. Even the process of considering the societal and ethical aspects of research, and of doing public engagement, can have the effect of giving credence to the most speculative possible outcomes.

    There’s a very familiar tension emerging about synthetic biology – is it a completely new thing, or an evolution of something that’s been going on for some time – i.e. industrial biotechnology? This exactly mirrors a tension within nanotechnology – the promise is sold on the grand vision and the big metaphors, but the achievements are largely based on the aspects of the technology with the most continuity with the past.

The trouble with all bubbles, of course, is that reality catches up with unfulfilled promises, and in this environment people are less forgiving of the hard constraints faced by any technology. If you overdo the promise, disillusionment will set in amongst funders, governments, investors and the public. This might discredit even the genuine achievements the technology will make possible. Maybe our constant focus on revolutionary innovation blinds us to the real achievements of incremental innovation – a better drug, a more efficient process for producing a biofuel, a new method of pest control, for example.

    3. It’s not about risk, it’s about trust

    The regulation of new technologies is focused on controlling risks, and it’s important that we try and identify and control those risks as the technology emerges. But there’s a danger in focusing on risk too much. When people talk about emerging technologies, by default it is to risk that conversation turns. But often, it isn’t really risk that is fundamentally worrying people, but trust. In the face of the inevitable uncertainties with new technologies, this makes complete sense. If you can’t be confident in identifying risks in advance, the question you naturally ask is whether the bodies and institutions that are controlling these technologies can be trusted. It must be a priority, then, that we think hard about how to build trust and trustworthy institutions. General principles like transparency and openness will certainly be helpful, but we have to ask whether it is realistic for these principles alone to be maintained in an environment demanding commercial returns from large scale industrial operations.

    Accelerating change or innovation stagnation?

    It’s conventional wisdom that the pace of innovation has never been faster. The signs of this seem to be all around us, as we rush to upgrade our smartphones and adopt yet another social media innovation. And yet, there’s another view emerging too, that all the easy gains of technological innovation have happened already and that we’re entering a period, if not of technological stasis, but of maturity and slow growth. This argument has been made most recently by the economist Tyler Cowen, for example in this recent NY Times article, but it’s prefigured in the work of technology historians David Edgerton and Vaclav Smil. Smil, in particular, points to the period 1870 – 1920 as the time of a great technological saltation, in which inventions such as electricity, telephones, internal combustion engines and the Haber-Bosch process transformed the world. Compared to this, he is rather scornful of the relative impact of our current wave of IT-based innovation. Tyler Cowen puts essentially the same argument in an engagingly personal way, asking whether the changes seen in his grandmother’s lifetime were greater than those he has seen in his own.

    Put in this personal way, I can see the resonance of this argument. My grandmother was born in the first decade of the 20th century in rural North Wales. The world she was born into has quite disappeared – literally, in the case of the hill-farms she used to walk out to as a child, to do a day’s chores in return for as much buttermilk as she could drink. Many of these are now marked only by heaps of stones and nettle patches. In her childhood, medical care consisted of an itinerant doctor coming one week to the neighbouring village and setting up an impromptu surgery in someone’s front room; she vividly recalled all her village’s children being crammed into the back of a pony trap and taken to that room, where they all had their tonsils taken out, while they had the chance. It was a world without cars or lorries, without telephones, without electricity, without television, without antibiotics, without air travel. My grandmother never in her life flew anywhere, but by the time she died in 1994, she’d come to enjoy and depend on all the other things. Compare this with my own life. In my childhood in the 1960s we did without mobile phones, video games and the internet, and I watched a bit less television than my children do, but there’s nowhere near the discontinuity, the great saltation that my grandmother saw.

    How can we square this perspective against the prevailing view that technological innovation is happening at an ever increasing pace? At its limit, this gives us the position of Ray Kurzweil, who identifies exponential or faster growth rates in technology and extrapolates these to predict a technological singularity.

    The key mistake here is to think that “Technology” is a single thing, that by itself can have a rate of change, whether that’s fast or slow. There are many technologies, and at any given time some will be advancing fast, some will be in a state of stasis, and some may even be regressing. It’s very common for technologies to have a period of rapid development, with a roughly constant fractional rate of improvement, until physical or economic constraints cause progress to level off. Moore’s “law”, in the semiconductor industry, is a very famous example of a long period of constant fractional growth, but the increase in efficiency of steam engines in the 19th century followed a similar exponential path, until a point of diminishing returns was inevitably reached.
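That generic pattern – a roughly constant fractional rate of improvement that eventually bends into an S-curve as constraints bite – is easy to see in a toy model. The sketch below, in Python, uses a logistic curve with arbitrary illustrative parameters, not data for any real technology:

```python
# Generic technology-improvement curve: growth at a roughly constant
# fractional rate looks exponential at first, but a hard ceiling
# (physical or economic) bends it into an S-curve (logistic growth).
# All parameter values here are arbitrary and purely illustrative.
import math

def performance(t, ceiling=1000.0, rate=0.5, t_mid=20.0):
    """Logistic curve: approximately exponential for t << t_mid,
    saturating at `ceiling` for t >> t_mid."""
    return ceiling / (1.0 + math.exp(-rate * (t - t_mid)))

for t in range(0, 45, 5):
    print(f"t={t:2d}  performance={performance(t):8.2f}")
# Early intervals show near-constant fractional gains (Moore's-law-like);
# later intervals flatten out as the ceiling is approached.
```

Sampled early enough, such a curve is indistinguishable from pure exponential growth – which is exactly why extrapolations of the Kurzweil kind can look compelling right up until the constraints make themselves felt.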

    To make sense of the current situation, it’s perhaps helpful to think of three separate realms of innovation. We have the realm of information, the material realm, and the realm of biology. In these three different realms, technological innovation is subject to quite different constraints, and has quite different requirements.

It is in the realm of information that innovation is currently taking place very fast. This innovation is, of course, being driven by a single technology from the material realm – the microprocessor. The characteristic feature of innovation in the information realm is that the infrastructure required to enable it is very small: a few bright people in a loft or garage with a great idea genuinely can build a world-changing business in a few years. But the apparent weightlessness of this kind of innovation is underpinned by the massive capital expenditures and the focused, long-term research and development of the global semiconductor industry.

    In the material world, things take longer and cost more. The scale-up of promising ideas from the laboratory needs attention to detail and the continuous, sequential solution of many engineering problems. This is expensive and time-consuming, and demands a degree of institutional scale in the organisations that do it. A few people in a loft might be able to develop a new social media site, but to build a nuclear power station or a solar cell factory needs something a bit bigger. The material world is also subject to some hard constraints, particularly in terms of energy. And the penalties for making mistakes in a chemical plant or a nuclear reactor or a passenger aircraft have consequences of a seriousness rarely seen in the information realm.

    Technological innovation in the biological realm, as demanded by biomedicine and biotechnology, presents a new set of problems. The sheer complexity of biology makes a full mechanistic understanding hard to achieve; there’s more trial and error and less rational design than one would like. And living things and living systems are different and fundamentally more difficult to engineer than the non-living world; they have agency of their own and their own priorities. So they can fight back, whether that’s pathogens evolving responses to new antibiotics or organisms reacting to genetic engineering in ways that thwart the designs of their engineers. Technological innovation in the biological realm carries high costs and very substantial risks of failure, and it’s not obvious that we have the right institutions to handle this. One manifestation of these issues is the slowness of new technologies like stem cells and tissue engineering to deliver, and we’re now seeing the economic and business consequences in an unfolding crisis of innovation in the pharmaceutical sector.

    Can one transfer the advantages of innovation in the information realm to the material and biological realms? Interestingly, that’s exactly the rhetorical claim made by the new disciplines of nanotechnology and synthetic biology. The claim of nanotechnology is that by achieving atom-by-atom control, we can essentially reduce the material world to the digital. Likewise, the power of synthetic biology is claimed to be that it can reduce biotechnology to software engineering. These are powerful and seductive claims, but wishing it so doesn’t make it happen, and so far the rhetoric has not been fully matched by achievement. Instead, we’ve seen some disappointment – some nanotechnology companies have disappointed investors who hadn’t realised that, however clever the nanoscale design, the constraints of the material realm still apply when the product has to be made. A nanoparticle may be designed digitally, but it’s still a speciality chemical company that has to make it.

    Our problem is that we need innovation in all three realms. We can’t escape the fact that we live in the material world (we depend on our access to energy, for example), and fast progress in one realm can’t fully compensate for slower progress in the others. We still need technological innovation in the material and biological realms – we must develop better technologies in areas like energy, because the technologies we have are neither sustainable nor good enough. So even if accelerating change does prove to be a mirage, we still can’t afford innovation stagnation.

    The next twenty-five years

    The Observer ran a feature today collecting predictions for the next twenty-five years from commentators on politics, science, technology and culture. I contributed a short piece on nanotechnology: I’m not expecting a singularity. Here’s what I wrote:

    Twenty years ago Don Eigler, a scientist working for IBM in California, wrote out the logo of his employer in letters made of individual atoms. This feat was a graphic symbol of the potential of the new field of nanotechnology, which promises to rebuild matter atom by atom, molecule by molecule, and to give us unprecedented power over the material world.

    Some, like the futurist Ray Kurzweil, predict that nanotechnology will lead to a revolution, allowing us to make any kind of product virtually for free, giving us computers so powerful that they will surpass human intelligence, and ushering in a new kind of medicine, operating at the sub-cellular level, that will allow us to abolish ageing and death.

    I don’t think Kurzweil’s “technological singularity” – a dream of scientific transcendence which echoes older visions of religious apocalypse – will happen. Some stubborn physics stands between us and “the rapture of the nerds”. But nanotechnology will lead to some genuinely transformative new applications.

    New ways of making solar cells very cheaply on a very large scale offer us the best hope we have for providing low-carbon energy on a big enough scale to satisfy the needs of a growing world population aspiring to the prosperity we’re used to in the developed world. We’ll learn more about intervening in our biology at the sub-cellular level, and this nano-medicine will give us new hope of overcoming really difficult and intractable diseases, like Alzheimer’s, that will increasingly afflict our population as it ages. The information technology that drives your mobile phone or laptop is already operating at the nanoscale. Another twenty five years of development will lead us to a new world of cheap and ubiquitous computing, in which privacy will be a quaint obsession of our grandparents.

    Nanotechnology is a different type of science, respecting none of the conventional boundaries between disciplines, and unashamedly focused on applications rather than fundamental understanding. Given the huge resources being directed towards nanotechnology in China and its neighbours, this may be the first major technology of the modern era that is predominantly developed outside the USA and Europe.

    If the technology we’ve got isn’t sustainable, doesn’t that mean we need better technology?

    Friends of the Earth have published a new report called “Nanotechnology, climate and energy: over-heated promises and hot air?” (here, though the website was down when I last looked). As its title suggests, it expresses scepticism about the idea that nanotechnology can make a significant contribution to making our economy more sustainable. It makes some fair points about the distance between rhetoric and reality in claims that nano-manufacturing can be intrinsically cleaner and more precise than conventional processing (the reality being, of course, that the processes currently used to make nanomaterials are not very different from those used to make existing materials). It also expresses scepticism, which I to some extent share, about ideas such as the hydrogen economy. But I think its position betrays one fundamental and very serious error: the comforting, but quite wrong, belief that there is any possibility of moving our current economy onto a sustainable basis with existing technology in the short term (i.e. in the next ten years).

    Take, for example, solar energy. I’m extremely positive about its long-term prospects. At the moment, the world uses energy at a rate of about 16 terawatts (a TW is one thousand gigawatts; one GW is roughly the output of a medium-sized power station). The total power arriving at the earth from the sun is 162,000 TW – so there is, in principle, an abundance of solar energy. But the world’s installed solar capacity currently delivers a real output of just over 2 GW (nominal installed capacity in 2008 was 13.8 GW, which corresponds to a real output of around 2 GW once the lack of 24-hour sunshine and system losses are accounted for; these numbers come from NREL’s 2008 Solar Technologies Market Report). This is nearly four orders of magnitude less than the energy we need. It’s true that the solar energy industry is growing very fast – at annual rates of 40-50% at the moment. But even if this rate of increase went on for another 10 years, we would only have achieved a solar contribution of around 200 GW by 2020. Meanwhile, on even the most optimistic assumption, the IEA predicts that our total energy needs will have increased by 1,400 GW in this period, so this isn’t enough even to halt the increase in our rate of burning fossil fuels, let alone reverse it. And, without falls in cost from the current value of around $5 per installed watt, by 2020 we’d need to be spending about $2.5 trillion a year to sustain this rate of growth, at which point solar would still only be supplying around 1% of world energy demand.
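    Since the numbers matter here, a quick back-of-envelope script (my own sketch, using only the figures quoted above: the 2 GW real output, the 13.8 GW nominal capacity, the $5 per installed watt and the ten-year horizon) shows how the projection works.

        # Back-of-envelope solar projection, using the figures quoted above.
        real_now_gw = 2.0              # real output today, after system losses
        nominal_per_real = 13.8 / 2.0  # ~7 nominal watts installed per real watt
        cost_per_nominal_w = 5.0       # dollars per installed (nominal) watt

        def project(growth, years=10):
            out = real_now_gw * (1 + growth) ** years
            added = out - real_now_gw * (1 + growth) ** (years - 1)  # final year
            spend = added * 1e9 * nominal_per_real * cost_per_nominal_w
            return out, spend

        for g in (0.4, 0.5, 0.6):
            out, spend = project(g)
            print(f"{g:.0%}/yr: ~{out:.0f} GW real by 2020, "
                  f"final-year spend ~${spend / 1e12:.1f} trillion")
        # 40-50%/yr compounds to ~60-115 GW; sustaining ~60%/yr roughly
        # reproduces the ~200 GW and ~$2.5 trillion-a-year figures above,
        # all of which are dwarfed by ~16,000 GW of current world demand
        # plus 1,400 GW of projected growth.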

    What this tells us is that though our existing technology for harvesting solar energy may be good in many ways – it’s efficient and long-lasting – it’s too expensive, and a step-change is needed in the scale on which it can be produced. That’s why new solar cell technology is needed – and why those candidates which use nanotechnologies to enable large-scale, roll-to-roll processing are potentially attractive. We know that these technologies aren’t yet ready for the mass market – their efficiencies and lifetimes aren’t good enough. Incremental development of conventional silicon solar cells may yet surprise us and bring costs down dramatically, and that would be a very good outcome too. But this is why research is needed. For perspective, look at this helpful graphic to see how the efficiencies of all solar cells have evolved with time. Naturally, the most recently invented technologies – such as the polymer solar cells – have progressed less far than the more mature technologies that are at market.

    A similar story could be told about batteries. It’s clear that the use of renewables on a large scale will need large-scale energy storage to overcome problems of intermittency, and the electrification of transport will need batteries with high specific energy (for a recent review of the requirements for plug-in hybrids see here). Currently available lithium-ion batteries have a specific energy of about half a megajoule per kilogram, roughly a ninetieth of the specific energy of petrol (44 MJ/kg). They’re also too expensive, and their lifetime is too short – they deteriorate at a rate of about 2% a year. Once again, current technology is simply not good enough, and it’s not getting better fast enough; new technology is needed, and this will almost certainly require better control of nanostructure.
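    To make the scale of that gap concrete, here’s one more illustrative calculation (mine, not from the review cited above): the battery mass needed to store the energy in a 50-litre tank of petrol, taking the two specific energies quoted in the paragraph above and an assumed, approximate petrol density.

        # How much lithium-ion battery stores the energy of a tank of petrol?
        petrol_mj_per_kg = 44.0     # specific energy of petrol, as quoted above
        liion_mj_per_kg = 0.5       # current lithium-ion, as quoted above
        petrol_kg_per_litre = 0.75  # approximate density (my assumption)

        tank_mj = 50 * petrol_kg_per_litre * petrol_mj_per_kg
        battery_kg = tank_mj / liion_mj_per_kg
        print(f"{tank_mj:.0f} MJ in a 50-litre tank -> "
              f"{battery_kg / 1000:.1f} tonnes of battery")
        # ~3.3 tonnes. Electric drivetrains are several times more efficient
        # than combustion engines, which softens the comparison in practice,
        # but the ~90x gap in specific energy is why incremental improvement
        # isn't enough and new battery nanostructures are being sought.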

    Could we, alternatively, get by using less energy? Improving energy efficiency is certainly worth doing, and new technology can help here too. But substantial reductions in energy use will be associated with drops in living standards which, in rich countries, will be a hard sell politically. The politics of persuading poorer countries to forgo economic growth will be even trickier, given that, unlike the rich countries, they haven’t accumulated the benefit of centuries of economic growth fuelled by cheap fossil-fuel energy, and they don’t feel responsible for the resulting accumulation of atmospheric carbon dioxide. Above all, we mustn’t underestimate the degree to which not just our comfort but our very existence depends on cheap energy – notably in the high energy inputs needed to feed the world’s population. This is the hard fact we have to face: we are existentially dependent on the fossil-fuel technology we have now, but we know this technology isn’t sustainable and we don’t yet have viable replacements. In these circumstances we simply don’t have a choice but to try to find better, more sustainable energy technologies.

    Yes, of course we have to assess the risks of these new technologies, and of course we need to do the life-cycle analyses. And while Friends of the Earth may say they’re shocked (shocked!) that nanotechnology is being used by the oil industry, this seems to me either a rather disingenuous piece of rhetoric or an expression of supreme naivety about the nature of capitalism. Naturally, the oil industry will be looking at new technology such as nanotechnology to help its business; it has lots of money and some pressing needs. And for all I know, there may be jungle labs in Colombia looking for applications of nanotechnology in the recreational pharmaceuticals sector right now. I can agree with FoE that it was unconvincing to suggest that there was something inherently environmentally benign about nanotechnology, but it’s equally foolish to imply that a technology is intrinsically bad because it can be used in industries you disapprove of. What’s needed instead is a realistic and hard-headed assessment of the shortcomings of current technologies, and an attempt to steer potentially helpful emerging technologies in beneficial directions.

    Feynman, Waldo and the Wickedest Man in the World

    It’s been more than fifty years since Richard Feynman delivered his lecture “There’s Plenty of Room at the Bottom”, regarded by many as the founding vision statement of nanotechnology. That foundational status has been questioned, most notably by Chris Toumey in his article Apostolic Succession (PDF). In another line of attack, Colin Milburn, in his book Nanovision, argues against the idea that nanotechnology’s founding concepts emerged from Feynman’s lecture as original products of his genius; instead, according to Milburn, Feynman articulated and developed a set of ideas that were already current in science fiction. And, as I briefly mentioned in my report from September’s SNET meeting, the intellectual milieu from which these ideas emerged had, according to Milburn, some very weird aspects.

    In his book, Milburn describes some of the science fiction antecedents of the ideas in “Plenty of Room”. Perhaps the most direct link can be traced for Feynman’s notion of remote-controlled robot hands, which make smaller sets of hands, which in turn can be used to make yet smaller ones, and so on. The immediate source of this idea is Robert Heinlein’s 1942 novella “Waldo”, in which the eponymous hero devises just such an arrangement to carry out surgery at the sub-cellular level. There’s no evidence that Feynman had read “Waldo” himself, but Feynman’s friend Al Hibbs certainly had. Hibbs worked at Caltech’s Jet Propulsion Laboratory, and he had been so taken by Heinlein’s idea of robot hands as a tool for space exploration that he wrote up a patent application for it (dated 8 February 1958). Ed Regis, in his book “Nano”, tells the story, and makes the connection to Feynman, quoting Hibbs as follows: “It was in this period, December 1958 to January 1959, that I talked it over with Feynman. Our conversations went beyond my “remote manipulator” into the notion of making things smaller … I suggested a miniature surgeon robot…. He was delighted with the notion.”

    “Waldo” is set in a near future where nuclear-derived energy is abundant, and people and goods fly around in vessels powered by energy beams. The protagonist, Waldo Jones, is a severely disabled mechanical genius (“Fat, ugly and hopelessly crippled”, as it says on the back of my 1970 paperback edition) who lives permanently in an orbiting satellite, sustained by the technologies he’s developed to overcome his bodily weaknesses. The most effective of these technologies are the remote-controlled robot arms, named “waldos” after their inventor. The plot revolves around a mysterious breakdown of the energy transmission system, which Waldo Jones solves, assisted by the sub-cellular surgery he carries out with his miniaturised waldos.

    The novella is dressed up in the apparatus of hard science fiction – long didactic digressions, complete with plausible-sounding technical details and references to the most up-to-date science, creating the impression that its predictions of future technologies are based on science. But, to my surprise, the plot revolves not around science but around magic. The fault in the flying machines is diagnosed by a back-country witch-doctor, and involves a failure of will by the operators (itself a consequence of the amount of energy being beamed about the world). And the fault can be fixed by an act of will, by which energy in a parallel, shadow universe is directed into our own world. Waldo Jones himself learns how to access the energy of this unseen world, and in this way overcomes his disabilities and fulfils his potential as a brain surgeon, dancer and all-round, truly human genius.

    Heinlein’s background as a radio engineer explains where his science came from, but what was the source of this magical thinking? The answer seems to be the strange figure of Jack Parsons. Parsons was a self-taught rocket scientist, one of the founders of the Jet Propulsion Laboratory and a key figure in the early days of the USA’s rocket program (his story is told in George Pendle’s biography “Strange Angel”). But he was also deeply interested in magic, and was a devotee of the English occultist Aleister Crowley. Crowley, aka The Great Beast, was notorious for his transgressive interest in ritual magic – particularly sexual magic – and earned the title “the wickedest man in the world” from the English newspapers between the wars. He had founded a religion of his own, whose organisation, the Ordo Templi Orientis, promulgated his creed, summarised as “Do what thou wilt shall be the whole of the Law”. Parsons was initiated into the Hollywood branch of the OTO in 1941; in 1942, now a leading figure in the OTO, he moved the whole commune into a large house in Pasadena, where they lived according to Crowley’s transgressive law. Also in 1942, Parsons met Robert Heinlein at the Los Angeles Science Fiction Society, and the two men became good friends. “Waldo” was published that year.

    The subsequent history of Jack Parsons was colourful, but deeply unhappy. He became close to another member of the circle of LA science fiction writers, L. Ron Hubbard, who moved into the Pasadena house in 1945 with catastrophic effects for Parsons. In 1952, Parsons died in a mysterious explosives accident in his basement. Hubbard, of course, went on to found a religion of his own, Scientology.

    This is a fascinating story, but I’m not sure what it signifies, if anything. Colin Milburn suggests that “it is tempting to see nanotech’s aura of the magical, the impossible made real, as carried through the Parsons-Heinlein-Hibbs-Feynman genealogy”. Sober scientists working in nanotechnology would argue that their work is as far away from magical thinking as one can get. But amongst the groups on the fringes of the science that cheer nanotechnology on – the singularitarians and transhumanists – I’m not sure that magic is so distant. Universal abundance through nanotechnology, universal wisdom through artificial intelligence, and immortal life through the defeat of ageing – these sound very much like the traditional aims of magic, parallels that Dale Carrico has repeatedly drawn attention to. And in place of Crowley’s Ordo Templi Orientis (and no doubt without some of the OTO’s more colourful practices), the transhumanists have their very own Order of Cosmic Engineers, founded to “engineer ‘magic’ into a universe presently devoid of God(s).”