Fulfilling the promises of emerging biotechnologies

At the end of last year, the Nuffield Council on Bioethics published a report on the ethics of emerging biotechnologies, called Emerging Biotechnologies: technology, choice and the public good. I was on the working party for that report, and this piece reflects a personal view about some of its findings. A shorter version was published in Research Fortnight (subscription required).

In a speech at the Royal Society last November George Osborne said that, as Chancellor of the Exchequer, it is his job “to focus on the economic benefits of scientific excellence”. He then listed eight key technologies that he challenged the scientific community in Britain to lead the world in, and for which he promised continuing financial support. Among these technologies were synthetic biology, regenerative medicine and agri-science, key examples of what a recent report from the Nuffield Council on Bioethics calls emerging biotechnologies. Picking technology winners is clearly high on the UK science policy agenda, and this kind of list will increasingly inform the science funding choices the government and its agencies, like the research councils, make. So the focus of the Nuffield report, on how those choices are made and what kind of ethics should guide them, couldn’t be more timely.

These emerging technologies are not short of promises. According to Osborne, synthetic biology will have an £11 billion market by 2016 producing new medicines, biofuels and food – “they say that synthetic biology will heal us, heat and feed us.”

We sold out our energy future

Everyone should know that the industrial society we live in depends on access to plentiful, convenient, cheap energy – the last two hundred years of rapid economic growth has been underpinned by the large scale use of fossil fuels. And everyone should know that the effect of burning those fossil fuels has been to markedly increase the carbon dioxide content of the atmosphere, resulting in a changing climate, with potentially dangerous but still uncertain consequences. But a transition from fossil fuels to low carbon sources of energy isn’t going to take place quickly; existing low carbon energy sources are expensive and difficult to scale up. So rather than pushing on with the politically difficult, slow and expensive business of deploying current low carbon energy sources, why don’t we wait until technology brings us a new generation of cheaper and more scalable low carbon energy? Presumably, one might think, since we’ve known about these issues for some time, we’ve been spending the last twenty years energetically doing research into new energy technologies?

Alas, no. As my graph shows, the decade from 1980 saw a worldwide decline in the fraction of GDP that major industrial countries devoted to government funded energy research, development, and demonstration, with only Japan sustaining anything like its earlier intensity of energy research into the 1990s. It was only in the second half of the decade after 2000 that we began to see a recovery, though in the UK and the USA a rapid upturn following the 2007 financial crisis has fallen away again. A rapid post-2000 growth of energy RD&D in Korea is an exception to the general picture. There’s a good discussion of the situation in the USA in a paper by Kammen and Nemet – Reversing the incredible shrinking energy R&D budget. But the largest fall by far was in the UK, where the fraction of national resources devoted to energy RD&D fell, at its low point in 2003, to an astonishing 0.2% of its value at the 1981 high point.

Government spending on energy research, development and demonstration. Data: International Energy Agency
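
To make the calculation behind a chart like this concrete, here is a minimal sketch in Python with pandas. The file and column names are invented, not the IEA’s actual data format; the point is simply that RD&D intensity is spending divided by GDP, with each country then indexed to its 1981 value so that declines are comparable across countries.

```python
import pandas as pd

# Hypothetical input files and column names, for illustration only.
spend = pd.read_csv("energy_rdd_spend.csv")  # columns: country, year, rdd_spend (constant prices)
gdp = pd.read_csv("gdp.csv")                 # columns: country, year, gdp (same currency and prices)

df = spend.merge(gdp, on=["country", "year"])
df["intensity"] = df["rdd_spend"] / df["gdp"]  # energy RD&D as a fraction of GDP

# Index each country's intensity to 100 at its 1981 value; on this scale the UK's
# fall to 0.2% of its 1981 level shows up as an index value of about 0.2 in 2003.
base = df[df["year"] == 1981].set_index("country")["intensity"]
df["index_1981"] = 100 * df["intensity"] / df["country"].map(base)

print(df.pivot(index="year", columns="country", values="index_1981").round(1))
```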


Do materials even have genomes?

I’ve long suspected that physical scientists have occasional attacks of biology envy, so I suppose I shouldn’t be surprised that the US government announced last year the “Materials Genome Initiative for Global Competitiveness”. Its aim is to “discover, develop, manufacture, and deploy advanced materials at least twice as fast as possible today, at a fraction of the cost.” There’s a genuine problem here – for people used to the rapid pace of innovation in information technology, the very slow rate at which new materials are taken up in new manufactured products is an affront. The solution proposed here is to use those very advances in information technology to boost the rate of materials innovation, just as (the rhetoric invites us to infer) the rate of progress in biology has been boosted by big data driven projects like the human genome project.

There’s no question that many big problems could be addressed by new materials.

Geek power?

Mark Henderson’s book “The Geek Manifesto” was part of my holiday reading, and there’s a lot to like in it – there’s all too much stupidity in public life, and anything that skewers a few of the more egregious recent examples of this in such a well-written and well-informed way must be welcomed. There is a fundamental lack of seriousness in our public discourse, a lack of respect for evidence, a lack of critical thinking. But to set against many excellent points of detail, the book is built around one big idea, and it’s that idea that I’m less keen on. This is the argument – implicit in the title – that we should try to construct some kind of identity politics based around those of us who self-identify as being interested in and informed about science – the “geeks”. I’m not sure that this is possible, but even if it was, I think it would be bad for science and bad for politics. This isn’t to say that public life wouldn’t be better if more people with a scientific outlook had a higher profile. One very unwelcome feature of public debate is the prevalence of wishful thinking. Comfortable beliefs that fit into people’s broader world-views do need critical examination, and this often needs the insights of science, particularly the discipline that comes from seeing whether the numbers add up. But science isn’t the only source of the insights needed for critical thinking, and scientists can have some surprising blind-spots, not just about the political, social and economic realities of life, but also about technical issues outside their own fields of interest.

But first, who are these geeks who Henderson thinks should organise?

The UK’s thirty year experiment in innovation policy

In 1981 the UK was one of the world’s most research and development intensive economies, with large scale R&D efforts being carried out in government and corporate laboratories in many sectors. Over the thirty years between then and now, this situation has dramatically changed. A graph of the R&D intensity of the national economy, measured as the fraction of GDP spent on research and development, shows a long decline through the 1980’s and 1990’s, with some levelling off from 2000 or so. During this period the R&D intensity of other advanced economies, like Japan, Germany, the USA and France, has increased, while in fast developing countries like South Korea and China the growth in R&D intensity has been dramatic. The changes in the UK were in part driven by deliberate government policy, and in part have been the side-effects of the particular model of capitalism that the UK has adopted. Thirty years on, we should be asking what the effects of this have been on our wider economy, and what we should do about it.

Gross expenditure on research and development as a % of GDP from 1981 to 2010. Data from Eurostat.

The second graph breaks down where R&D takes place. The largest fractional fall has been in research in government establishments, which has dropped by more than 60%. The largest part of this fall took place in the early part of the period, under a series of Conservative governments. This reflects a general drive towards a smaller state, a run-down of defence research, and the privatisation of major, previously research intensive sectors such as energy. However, it is clear that privatisation didn’t lead to a transfer of the associated R&D to the business sector. It is in the business sector that the largest absolute drop in R&D intensity has taken place – from 1.48% of GDP to 1.08%. Cutting government R&D didn’t lead to increases in private sector R&D, contrary to the expectations of free marketeers who think the state “crowds out” private spending. Instead the business climate of the time, with a drive to unlock “shareholder value” in the short term, squeezed out longer term investments in R&D. Some seek to explain this drop in R&D intensity in terms of a change in the sectoral balance of the UK economy, away from manufacturing and towards financial services, and this is clearly part of the picture. However, I wonder whether this should be thought of not so much as an explanation, but more as a symptom. I’ve discussed in an earlier post the suggestion that “bad capitalism” – for example, speculation in financial and property markets, with the downside risk being shouldered by the tax-payer – squeezes out genuine innovation.

UK R&D as % of GDP by sector of performance from 1981 to 2010. Data from Eurostat.

The Labour government that came to power in 1997 did worry about the declining R&D intensity of the UK economy, and, in its Science Investment Framework 2004-2014 (PDF), set about trying to reverse the trend. This long-term policy set a target of reaching an overall R&D intensity of 2.5% of GDP by 2014, including an increase in business sector R&D intensity to 1.7%. The mechanisms put in place to achieve this included a period of real-terms increases in R&D spending by government, some tax incentives for business R&D, and a new agency for nearer term research in collaboration with business, the Technology Strategy Board. In the event, the increases in government spending on R&D did lead to some increase in the UK’s overall research intensity, but the hoped-for increase in business R&D simply did not happen.

This isn’t predominantly a story about academic science, but it provides a context that’s important to appreciate for some current issues in science policy. Over the last thirty years, the research intensity of the UK’s university sector has increased, from 0.32% of GDP to 0.48% of GDP. This reflects, to some extent, real-terms increases in government science budgets, together with the growing success of universities in raising research funds from non UK-government sources. The resulting R&D intensity of the UK HE sector is at the high end of international comparisons (the corresponding figures for Germany, Japan, Korea and the USA are 0.45%, 0.4%, 0.37% and 0.36%). But where the UK is very much an outlier is in the proportion of the country’s research that takes place in universities. This proportion now stands at 26%, which is much higher than in international competitors (again, we can compare with Germany, Japan, Korea and the USA, where the proportions are 17%, 12%, 11% and 13%), and much higher now than it has been historically (in 1981 it was 14%). So one way of interpreting the pressure on universities to demonstrate the “impact” of their research, which is such a prominent part of the discourse in UK science policy at the moment, is as a symptom of the disproportionate importance of university research in the overall national R&D picture. But the high proportion of UK R&D carried out in universities is as much a measure of the weakness of the government and corporate applied and strategic research sectors as of the strength of its HE research enterprise. The worry, of course, has to be that, given the hollowed-out state of the business and government R&D sectors, where in the past the more applied research needed to convert ideas into new products and services was done, universities won’t be able to meet the expectations being placed on them.
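
As a back-of-envelope check on these figures (a small Python sketch using only the numbers quoted above, not Eurostat data), dividing each country’s HE-sector R&D intensity by the share of its national R&D performed in universities gives the implied overall R&D intensity of each economy:

```python
# (HE R&D as % of GDP, universities' share of national R&D) - figures quoted above
figures = {
    "UK":      (0.48, 0.26),
    "Germany": (0.45, 0.17),
    "Japan":   (0.40, 0.12),
    "Korea":   (0.37, 0.11),
    "USA":     (0.36, 0.13),
}

for country, (herd, he_share) in figures.items():
    print(f"{country:8s} implied total R&D intensity ~ {herd / he_share:.1f}% of GDP")

# The UK comes out at roughly 1.8% of GDP, well below the comparators (about 2.6-3.4%):
# a similarly sized university research base sits on top of a much smaller total R&D effort.
```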

To return to the big picture, I’ve seen surprisingly little discussion of the effects on the UK economy of this dramatic and sustained decrease in research intensity. Aside from the obvious fact that we’re four years into an economic slump with no apparent prospect of rapid recovery, we know that the UK’s productivity growth has been unimpressive, and the lack of new, high tech companies that grow fast to a large scale is frequently commented on – where, people ask, is the UK’s Google? We also know that there are urgent unmet needs that only new innovation can fulfil – in healthcare and in clean energy, for example. Surely now is the time to examine the outcomes of the UK’s thirty year experiment in innovation policy.

Finally, I think it’s worth looking at these statistics again, because they contradict the stories we tell about ourselves as a country. We think of our postwar history as characterised by brilliant invention let down by poor exploitation, whereas the truth is that the UK, in the thirty post-war years, had a substantial and successful applied research and development enterprise. We imagine now that we can make our way in the world as a “knowledge economy”, based on innovation and brain-power. I know that innovation isn’t always the same as research and development, but it seems odd that we should think that innovation can be the speciality of a nation which is substantially less intensive in research and development than its competitors. We should worry instead that we’re in danger of condemning ourselves to being a low innovation, low productivity, low growth economy.

When technologies can’t evolve

In what way, and on what basis, should we attempt to steer the development of technology? This is the fundamental question that underlies at least two discussions that I keep coming back to here – how to do industrial policy and how to democratise science. But some would simply deny the premise of these discussions, and argue that technology can’t be steered, and that the market is the only effective way of incorporating public preferences into decisions about technology development. This is a hugely influential point of view which goes with the grain of the currently hegemonic neo-liberal, free market dominated world-view. It originates in the arguments of Friedrich Hayek against the 1940’s vogue for scientific planning, it incorporates Michael Polanyi’s vision of an “independent republic of science”, and it fits the view of technology as an autonomous agent which unfolds with a logic akin to that of Darwinian evolution – what one might call the “Wired” view of the world, eloquently expressed in Kevin Kelly’s recent book “What Technology Wants”. It’s a coherent, even seductive, package of beliefs; although I think it’s fatally flawed, it deserves serious examination.

Hayek’s argument against planning (his 1945 article The Use of Knowledge in Society makes this very clear) rests on two insights. Firstly, he insists that the relevant knowledge that would underpin the rational planning of an economy or a society isn’t limited to scientific knowledge, and must include the tacit, unorganised knowledge of people who aren’t experts in the conventional sense of the word. This kind of knowledge, then, can’t rest solely with experts, but must be dispersed throughout society. Secondly, he claims that the most effective – perhaps the only – way in which this distributed knowledge can be aggregated and used is through the mechanism of the market. If we apply this kind of thinking to the development of technology, we’re led to the idea that technological development would happen in the most effective way if we simply allow many creative entrepreneurs to try different ways of combining different technologies and to develop new ones on the basis of existing scientific knowledge and what developments of that knowledge they are able to make. When the resulting innovations are presented to the market, the ones that survive will, by definition, be the ones that best meet human needs. Stated this way, the connection with Darwinian evolution is obvious.

One objection to this viewpoint is essentially moral in character. The market certainly aggregates the preferences and knowledge of many people, but it necessarily gives more weight to the views of people with more money, and the distribution of money doesn’t necessarily coincide with the distribution of wisdom or virtue. Some free market enthusiasts simply assert the contrary, following Ayn Rand. There are, though, some much less risible moral arguments in favour of free markets which emphasise the positive virtues of pluralism, and even those opponents of libertarianism who point to the naivety of believing that this pluralism can be maintained in the face of highly concentrated economic and political power need to answer important questions about how pluralism can be maintained in any alternative system.

What should be less contentious than these moral arguments is an examination of the recent history of technological innovation. This shows that the technologies that made the modern world – in all their positive and negative aspects – are largely the result of the exercise of state power, rather than of the free enterprise of technological entrepreneurs. New technologies were largely driven by large scale interventions by the Warfare States that dominated the twentieth century. The military-industrial complexes of these states began long before Eisenhower popularised this name, and existed not just in the USA, but in Wilhelmine and Nazi Germany, in the USSR, and in the UK (David Edgerton’s “Warfare State: Britain 1920–1970” gives a compelling reinterpretation of modern British history in these terms). At the beginning of the century, for example, the Haber-Bosch process for fixing nitrogen was rapidly industrialised by the German chemical company BASF. It’s difficult to think of a more world-changing innovation – more than half the world’s population wouldn’t now be here if it hadn’t been for the huge growth in agricultural productivity that artificial fertilisers made possible. However, the importance of this process for producing the raw materials for explosives ensured that the German state took much more than a spectator’s role. Vaclav Smil, in his book Enriching the Earth, quotes an estimate for the development cost of the Haber-Bosch process of US$100 million at 1919 prices (roughly US$1 billion in current money, equating to about $19 billion in terms of its share of the economy at the time), of which about half came from the government. Many more recent examples of state involvement in innovation are cited in Mariana Mazzucato’s pamphlet The Entrepreneurial State. Perhaps one of the most important stories is the role of state spending in creating the modern IT industry; computing, the semiconductor industry and the internet are all largely the outcome of US military spending.

Of course, the historical fact that the transformative, general purpose technologies that were so important in driving economic growth in the twentieth century emerged as a result of state sponsorship doesn’t by itself invalidate the Hayekian thesis that innovation is best left to the free market. To understand the limitations of this picture, we need to return to Hayek’s basic arguments. Under what circumstances does the free market fail to aggregate information in an optimal way? People are not always rational economic actors – they know what they want and need now, but they aren’t always good at anticipating what they might want if things they can’t imagine become available, or what they might need if conditions change rapidly. There’s a natural cognitive bias to give more weight to the present, and less to an unknowable future. Just like natural selection, the optimisation process that the market carries out is necessarily local, not global.

So when does the Hayekian argument for leaving innovation to the market not apply? The free market works well for evolutionary innovation – local optimisation is good at solving present problems with the tools at hand now. But it fails to mobilise resources on a large scale for big problems whose solution will take more than a few years. So, we’d expect market-driven innovation to fail to deliver whenever timescales for development are too long, or the expense of development too great. Because capital markets are now short-term to the point of irrationality (as demonstrated by this study (PDF) from the Bank of England’s Andrew Haldane), the private sector rejects long term investments in infrastructure and R&D, even if the net present value of those investments would be significantly positive. In the energy sector, for example, we saw widespread liberalisation of markets across the world in the 1990s. One predictable consequence of this has been a collapse of private sector R&D in the energy sector (illustrated for the case of the USA by Dan Kammen here – The Incredible Shrinking Energy R&D Budget (PDF)).
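
A stylised example makes the discounting point concrete; this is not Haldane’s calculation, and the cash flows and rates below are invented purely for illustration. A long-horizon project that is clearly worthwhile at a patient discount rate is rejected outright at a myopically elevated one.

```python
def npv(cashflows, rate):
    """Net present value of a list of (year, amount) cash flows."""
    return sum(amount / (1 + rate) ** year for year, amount in cashflows)

# An illustrative R&D-style project: 100 spent now, returns of 20 a year in years 8-25.
project = [(0, -100.0)] + [(year, 20.0) for year in range(8, 26)]

print(f"NPV at a 5% discount rate:  {npv(project, 0.05):+.1f}")   # comes out around +66
print(f"NPV at a 15% discount rate: {npv(project, 0.15):+.1f}")   # comes out around -54
# The same project is worth doing at the lower rate and rejected at the higher one,
# which is how systematically excessive discounting squeezes out long-term R&D.
```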

The contrast is clear if we compare two different cases of innovation – the development of new apps for the iPhone, and the development of innovative new passenger aircraft, like the composite-based Boeing Dreamliner and Airbus A350. The world of app development is one in which tens or hundreds of thousands of people can and do try out all sorts of ideas, a few of which have turned out to fulfil an important and widely appreciated need and have made their developers rich. This is a world that’s well described by the Hayekian picture of experimentation and evolution – the low barriers to entry and the ease of widespread distribution of the products reward experimentation. Making a new airliner, in contrast, involves years of development and outlays of tens of billions of dollars in development cost before any products are sold. Unsurprisingly, the only players are two huge companies – essentially a world duopoly – each of which is in receipt of substantial state aid of one form or another. The lesson is that technological innovation doesn’t just come in one form. Some innovation – with low barriers to entry, often building on existing technological platforms – can be done by individuals or small companies, and can be understood well in terms of the Hayekian picture. But innovation on a larger scale, the more radical innovation that leads to new general purpose technologies, needs either a large company with a protected income stream or outright state action. In the past the companies able to carry out innovation on this scale would typically have been a state sponsored “national champion”, supported perhaps by guaranteed defence contracts, or the beneficiary of a monopoly or cartel, such as the postwar Bell Labs.

If the prevalence of this Hayekian thinking about technological innovation really does mean that we’re less able now to introduce major, world-changing innovations than we were 50 years ago, this would matter a great deal. One way of thinking about this is in evolutionary terms – if technological innovation is only able to proceed incrementally, there’s a risk that we’re less able to adapt to sudden shocks, we’re less able to anticipate the future and we’re at risk of being locked into technological trajectories that we can’t alter later in response to unexpected changes in our environment or unanticipated consequences. I’ve written earlier about the suggestion that, far from seeing universal accelerating change, we’re currently seeing innovation stagnation. The risk is that we’re seeing less in the way of really radical innovation now, at a time when pressing issues like climate change, peak cheap oil and demographic transitions make innovation more necessary than ever. We are seeing a great deal of very rapid innovation in the world of information, but this rapid pace of change in one particular realm has obscured much less rapid growth in the material realm and the biological realm. It’s in these realms that slow timescales and the large scale of the effort needed mean that the market seems unable to deliver the innovation we need.

It’s not going to be possible, nor would it be desirable, for us to return to the political economies of the mid-twentieth century warfare states that delivered the new technologies that underlie our current economies. Whatever other benefits the turn to free markets may have delivered, it seems to have been less effective at providing radical innovation, and with the need for those radical innovations becoming more urgent, some rethinking is now urgently required.

A billion dollar nanotech spinout?

The Oxford University spin-out Oxford Nanopore Technologies created a stir last month by announcing that it would be bringing to market this year systems to read out the sequence of individual DNA molecules by threading them through nanopores. It’s claimed that this will allow a complete human genome to be sequenced in about 15 minutes for a few thousand dollars; the company is also introducing a cheap, disposable sequencer which will sell for less than $900. Speculation has now begun about the future of the company, with valuations of $1–2 billion being discussed if they decide to take the company public in the next 18 months.

It’s taken a while for this idea of sequencing a single DNA molecule by directly reading out its bases to come to fruition. The original idea came from David Deamer and Harvard’s Dan Branton in the mid-1990s; from Hagan Bayley, in Oxford, came the idea of using an engineered derivative of a natural pore-forming protein to form the hole through which the DNA is threaded. I’ve previously reported progress towards this goal here, in 2005, and in more detail here, in 2007. The Oxford Nanopore announcement gives us some clues as to the key developments since then. The working system uses a polymer membrane, rather than a lipid bilayer, to carry the pore array, which undoubtedly makes the system much more robust. The pore is still created from a pore-forming protein, though this has been genetically engineered to give greater discrimination between different combinations of bases as the DNA is threaded through the hole. And, perhaps most importantly, an enzyme is used to grab DNA molecules from solution and feed them through the pore. In practice, the system will be sold as a set of modular units containing the electronics and interface, together with consumables cartridges, presumably including the nanopore arrays and the enzymes. The idea is to take single molecule analysis beyond DNA to include RNA and proteins, as well as various small molecules, with a different cartridge being available for each type of experiment. This will depend on the success of their program to develop a whole family of different pores able to discriminate between different types of molecules.
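
As a toy illustration of the readout principle being described – and emphatically not Oxford Nanopore’s actual signal processing – one can picture each short run of bases sitting in the pore setting a characteristic ionic current, with base calling done by matching the measured current levels back to base combinations. Everything below is invented for the sketch: the three-base sensing window, the calibration table, the current values and the noise level.

```python
import itertools
import random

random.seed(1)
BASES = "ACGT"

# Invented calibration: each 3-base word occupying the pore is assigned its own
# characteristic current level (arbitrary units, deliberately well separated).
levels = {"".join(kmer): 20.0 + 1.0 * i
          for i, kmer in enumerate(itertools.product(BASES, repeat=3))}

def measure(sequence, noise=0.1):
    """Simulate a noisy current trace as the strand ratchets through one base at a time."""
    return [random.gauss(levels[sequence[i:i + 3]], noise)
            for i in range(len(sequence) - 2)]

def decode(trace):
    """Match each current sample to the nearest calibrated level, then stitch the overlapping words."""
    words = [min(levels, key=lambda w: abs(levels[w] - sample)) for sample in trace]
    return words[0] + "".join(w[-1] for w in words[1:])

true_seq = "".join(random.choice(BASES) for _ in range(40))
called = decode(measure(true_seq))
print(true_seq)
print(called)
# With this generously spaced calibration the call matches the true sequence; shrinking
# the separation between levels, or raising the noise, quickly degrades it - which is
# why pores engineered to discriminate better between base combinations matter.
```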

What will the impact of this development be, if everything works as well as is being suggested? (The prudent commentator should stress the if here, as we haven’t yet seen any independent trials of the technology). Much has already been written about the implications of cheap – less than $1000 – sequencing of the human genome, but I can’t help wondering whether this may not actually be the big story here. And in any case, that goal may end up being reached with or without Oxford Nanopore, as this recent Nature News article makes clear. We still don’t know whether the Oxford Nanopore technique will be competitive on accuracy and price with the other contending approaches. I wonder, though, whether we are seeing here something from the classic playbook for a disruptive innovation. The $900 device in particular looks like it’s intended to create new markets for cheap, quick and dirty sequencing, to provide an income stream while the technology is improved further – with better, more selective pores and better membranes (inevitably, perhaps, Branton’s group at Harvard reported using graphene membranes for threading DNA in Nature last year). As computers continue to get faster, cheaper and more powerful, the technology will automatically benefit from these advances too – fragmentary and perhaps imperfect sequence information has much greater value in the context of vast existing sequence libraries and the data processing power to use them. Perhaps applications for this will be found in forensic and environmental science, diagnostics, microbiology and synthetic biology. The emphasis on molecules other than DNA is interesting too; single molecule identification and sequencing of RNA opens up the possibility of rapidly identifying what genes are being transcribed in a cell at a given moment (the so-called “transcriptome”).

The impact on the investment markets for nanotechnology is likely to be substantial. Existing commercialisation efforts around nanotechnology have been disappointing so far, but a company success on the scale now being talked about would undoubtedly attract more money into the area – perhaps it might also persuade some of the companies currently sitting on huge piles of cash that they might usefully invest some of this in a little more research and development. What’s significant about Oxford Nanopore is that it is operating in a sweet spot between the mundane and the far-fetched. It’s not a nanomaterials company, essentially competing in relatively low margin speciality chemicals, nor is it trying to make a nanofactory or nanoscale submarine or one of the other more radical visions of the nanofuturists. Instead, it’s using the lessons of biology – and indeed some of the components of molecular biology – to create a functional device that operates on the true single molecule level to fill real market needs. It also seems to be displaying a commendable determination to capture all the value of its inventions, rather than licensing its IP to other, bigger companies.

Finally, not the least of the impacts of a commercial and technological success on the scale being talked about would be on nanotechnology itself as a discipline. In the last few years the field’s early excitement has been diluted by a sense of unfulfilled promise, especially, perhaps, in the UK; last year I asked “Why has the UK given up on nanotechnology?” Perhaps it will turn out that some of that disillusionment was premature.

Where the randomness comes from

For perhaps 200 years it was possible to believe that physics gave a picture of the world with no place for randomness. Newton’s laws prescribe a picture of nature that is completely deterministic – at any time, the future is completely specified by the present. For anyone attached to the idea that they have some control over their destiny, that the choices they make have any influence on what happens to them, this seems problematic. Yet the idea of strict physical determinism, the idea that free will is an illusion in a world in which the future is completely predestined by the laws of physics, remains strangely persistent, despite the fact that it isn’t (I believe) supported by our current scientific understanding.

The mechanistic picture of a deterministic universe received a blow with the advent of quantum mechanics, which seems to introduce an element of randomness to the picture – in the act of “measurement”, the state function of a quantum system discontinuously changes according to a law which is probabilistic rather than deterministic. And when we look at the nanoscale world, at least at the level of phenomenology, randomness is ever-present, summed up in the phenomenon of Brownian motion, and leading inescapably to the second law of thermodynamics. And, of course, if we are talking about human decisions (should we go outside in the rain, or have another cup of tea?) the physical events in the brain that initiate the process of us opening the door or putting the kettle on again are strongly subject to this randomness; those physical events, molecules diffusing across synapses, receptor molecules changing shape in response to interactions with signalling molecules, shock waves of potential running up membranes as voltage-gated pores in the membrane open and close, all take place in that warm, wet, nanoscale domain in which Brownian motion dominates and the dynamics is described by Langevin equations, complete with their built-in fluctuating forces. Is this randomness real, or just an appearance? Where does it come from?
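
For concreteness, the Langevin description mentioned here is the standard one – Newton’s second law for a particle, supplemented by a frictional drag and a randomly fluctuating force whose strength is tied to temperature and friction by the fluctuation–dissipation relation:

```latex
m\frac{dv}{dt} = -\gamma v + F(x) + \xi(t),
\qquad
\langle \xi(t) \rangle = 0,
\qquad
\langle \xi(t)\,\xi(t') \rangle = 2\gamma k_B T\,\delta(t - t').
```

The delta-correlated term \(\xi(t)\) is the formal expression of the ever-present molecular buffeting that drives Brownian motion.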

I suspect the answer to this question, although well-understood, is not necessarily widely appreciated. It is real randomness – not just the appearance of randomness that follows from the application of deterministic laws in circumstances too complex to model – and its ultimate origin is indeed in the indeterminism of quantum mechanics. To understand how the randomness of the quantum realm gets transmitted into the Brownian world, we need to remember first that the laws of classical, Newtonian physics are deterministic, but only just. If we imagine a set of particles interacting with each other through well-known forces, defined through potentials of the kind you might use in a molecular dynamics simulation, the way in which the system evolves in time is in principle completely determined, but in practice any small perturbation to the deterministic laws (such as a rounding error in a computer simulation) will have an effect which grows with time to widen the range of possible outcomes that the system will explore, a widening that macroscopically we’d interpret as an increase in the entropy of the system.
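
Here is a minimal sketch of that point in Python (with NumPy), using arbitrary units and a made-up five-particle cluster interacting through a Lennard-Jones potential of the sort used in molecular dynamics. Two copies of the system are integrated identically, except that one coordinate in the second copy is nudged at the scale of a rounding error; the separation between the trajectories then grows steadily, roughly exponentially at first.

```python
import numpy as np

def lj_forces(pos, eps=1.0, sigma=1.0):
    """Pairwise Lennard-Jones forces for a small 2D cluster (arbitrary units)."""
    f = np.zeros_like(pos)
    for i in range(len(pos)):
        for j in range(i + 1, len(pos)):
            r = pos[i] - pos[j]
            d2 = r @ r
            inv6 = (sigma**2 / d2) ** 3
            fmag = 24 * eps * (2 * inv6**2 - inv6) / d2   # from dV/dr of 4*eps*[(s/r)^12 - (s/r)^6]
            f[i] += fmag * r
            f[j] -= fmag * r
    return f

def run(pos, vel, dt=1e-3, steps=20000):
    """Velocity-Verlet integration (unit masses); returns the position trajectory."""
    pos, vel = pos.copy(), vel.copy()
    f = lj_forces(pos)
    traj = np.empty((steps,) + pos.shape)
    for k in range(steps):
        vel += 0.5 * dt * f
        pos += dt * vel
        f = lj_forces(pos)
        vel += 0.5 * dt * f
        traj[k] = pos
    return traj

rng = np.random.default_rng(0)
pos0 = np.array([[0.0, 0.0], [1.1, 0.0], [0.55, 0.95], [1.65, 0.95], [-0.55, 0.95]])
vel0 = 0.5 * rng.standard_normal(pos0.shape)

pos1 = pos0.copy()
pos1[0, 0] += 1e-10          # a perturbation at the scale of a rounding error

traj_a = run(pos0, vel0)
traj_b = run(pos1, vel0)

# The distance between the two trajectories grows with time; for a chaotic cluster it
# grows roughly exponentially, so the imperceptible nudge eventually changes the outcome.
sep = np.sqrt(((traj_a - traj_b) ** 2).sum(axis=(1, 2)))
for k in range(0, 20000, 4000):
    print(f"step {k:6d}   separation {sep[k]:.3e}")
```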

To understand where, physically, this perturbation might come from we have to ask where the forces between molecules originate, as they interact and bounce off each other. One ubiquitous force in the nanoscale world is known to chemists as the Van der Waals force. In elementary physics and chemistry, this is explained as a force that arises between two neutral objects when a randomly arising dipole in one object induces an opposite dipole in the other object, and the two dipoles then attract each other. Another, perhaps deeper, way of thinking about this force is due to the physicists Casimir and Lifshitz, who showed that it arises from the way objects modify the quantum fluctuations that are always present in the vacuum – the photons that come in and out of existence even in the emptiest of empty spaces. This way of thinking about the Van der Waals force makes clear that because the force arises from the quantum fluctuations of the vacuum, the force must itself be fluctuating – it has an intrinsic randomness that is sufficient to explain the randomness we observe in the nanoscale world.
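
For reference, the textbook version of the Casimir result makes this concrete: between two parallel, perfectly conducting plates a distance \(a\) apart in vacuum, the modification of the vacuum’s photon modes produces an attractive force per unit area

```latex
\frac{F}{A} = -\frac{\pi^{2}\hbar c}{240\,a^{4}},
```

while for a pair of molecules the same physics gives the familiar dispersion attraction \(V(r) \approx -C_{6}/r^{6}\) (crossing over, as Casimir and Polder showed, to a retarded \(1/r^{7}\) form at large separations).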

So, to return to the question of whether free will is compatible with physical determinism, we can now see that this is not an interesting question, because the rules that govern the operation of the brain are fundamentally not deterministic. The question of how free will might emerge from a non-deterministic, stochastic system isn’t a trivial one either, of course, but at least it starts from the right premise – we can say categorically that strict physical determinism, as applied to the operation of the brain, is false. The brain is not a deterministic system, but one in which randomness is a central and inescapable part of its operation.

One might go on to ask why some people are so keen to hold on to the idea of strict physical determinism, more than a hundred years after the discoveries of quantum mechanics and statistical mechanics that make determinism untenable. This is too big a question for me to even attempt to answer here, but maybe it’s worth pointing out that there seems to be quite a lot of determinism around – in addition to physical determinism, genetic determinism and technological determinism seem to be attractive to many people at the moment. Of course, the rise of the Newtonian mechanistic world-view occurred at a time when a discussion about the relationship between free will and a theological kind of determinism was very current in Christian Europe, and I’m tempted to wonder whether the appeal of these modern determinisms might be part of the lingering legacy of Augustine of Hippo and Calvin to the modern age.

Science in hard times

How should the hard economic times we’re going through affect the amount of money governments spend on scientific and technological research? The answer depends on your starting point – if you think that science is an optional extra that we do if we’re prosperous, then decreasing prosperity must inevitably mean we can afford to do less science. But if you think that our prosperity depends on the science we do, then if growth is starting to stall, that’s a signal telling you to devote more resources to research. This is a huge oversimplification, of course; the link between science and prosperity can never be automatic. How effective that link will be will depend on the type of science and technology you support, and on the nature of the wider economic system that translates innovations into economic growth. It’s worth taking a look at recent economic history to see some of the issues at play.

UK Government spending on research and development compared with real growth in GDP per person.
R&D data (red) from the Royal Society report The Scientific Century, adjusted to constant 2005 £s. GDP per person data (blue) from Measuring Worth. Dotted blue line: projections from the November 2011 forecast of the UK Office for Budget Responsibility (uncorrected for population changes).

The graph shows the real GDP per person in the UK from 1946 up to the present, together with the amount of money, again in real terms, spent by the government on research and development. The GDP graph tells an interesting story in itself, making very clear the discontinuity in economic policy that happened in 1979. In that year Margaret Thatcher’s new Conservative government overthrew a thirty year broad consensus, shared by both parties, on how the economy should be managed. Before 1979, we had a mixed economy, with substantial industrial sectors under state control, highly regulated financial markets, including controls on the flow of capital in and out of the country, and the macro-economy governed by the principles of Keynesian demand management. After 1979, it was not Keynes, but Hayek, who supplied the intellectual underpinning, and we saw progressive privatisation of those parts of the economy under state control, the abolition of controls on capital movements and the deregulation of financial markets. In terms of economic growth, measured in real GDP per person, the period between 1946 and 1979 was remarkable, with a steady increase of 2.26% per year – this is, I think, the longest sustained period of high growth in the modern era. Since 1979, we’ve seen a succession of deep recessions, followed by periods of rapid, and evidently unsustainable, growth driven by asset price bubbles. The peaks of these periods of growth have barely attained the pre-1979 trend line, while in our current economic travails we find ourselves about 9% below trend. Not only does there seem no imminent prospect of the rapid growth we’d need to return to that trend line, but there now seems to be a likelihood of another recession.
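
To give a feel for what “about 9% below trend” means, a quick calculation (using only the growth rate quoted above) converts the gap into years of trend growth foregone:

```python
import math

trend_growth = 0.0226   # pre-1979 trend growth of real GDP per person, from the text
gap = 0.09              # current shortfall relative to the pre-1979 trend line, from the text

# How many years of growth at 2.26% a year does a 9% shortfall correspond to?
years_lost = math.log(1 + gap) / math.log(1 + trend_growth)
print(f"A {gap:.0%} gap below trend is equivalent to about {years_lost:.1f} years of trend growth")
# ... which comes out at roughly four years' worth of growth foregone.
```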

The plot for public R&D spending tells its own story, which also shows a turning point with the Thatcher government. From 1980 until 1998, we see a substantial long-term decline in research spending, not just as a fraction of GDP, but in absolute terms; since 1998 research spending has increased again in real terms, though not substantially faster than the rise in GDP over the same period. Underlying the decline were a number of factors. There was a real squeeze on spending on research in universities, well remembered by those who were working in them at the time. Meanwhile the research spending in those industries that were being privatised – such as telecommunications and energy – was removed from the government spending figures. And the activities of government research laboratories – particularly those associated with defence and the nuclear industry – were significantly wound down. Underlying this winding down of research was both a political motive and an ideological one. Big government spending on high technology was associated with the corporatist politics of the 1960’s, subscribed to by both parties but particularly associated with Labour, and the memorable slogan “The White Heat of Technology”. To its detractors this summoned up associations with projects like the supersonic passenger aircraft Concorde, a technological triumph but a commercial disaster. To the adherents of the Hayekian free market ideology that underpinned the Thatcher government, the state had no business doing any research but the most basic and far from market. On this view, state-supported research was likely to be not only less efficient and less effectively directed than research in the private sector, but by “squeezing out” such private sector research it would actually make the economy less efficient.

The idea that state support of research reduces support of research by the private sector by “squeezing out” remains attractive to free market ideologues, but the empirical evidence points to the opposite conclusion – state spending and private sector spending on research support each other, with increases in state R&D spending leading to increases in R&D by business (see for example Falk M (2006). What drives business research and development intensity across OECD countries? (PDF), Applied Economics 38 p 533). Certainly, in the UK, the near-halving of government R&D spend between 1980 and 1999 did not lead to an increase in R&D by business; instead, this also fell from about 1.4% of GDP to 1.2%. Not only did those companies that had been privatised substantially reduce their R&D spending, but other major players in industrial R&D – such as the chemical company ICI and the electronics company GEC – substantially cut back their activities. At the time many rationalised this as the inevitable result of the UK economy changing its mix of sectors, away from manufacturing towards service sectors such as the financial service industry.

None of this answers the questions: how much should one spend on R&D, and what difference do changes in R&D spend make to economic performance? It is certainly clear that the decline in R&D spending in the UK isn’t correlated with any improvement in its economic performance. International comparisons show that the proportion of GDP spent on R&D in the UK is significantly lower than in most of its major competitors, and within this the proportion of R&D supported by business is itself unusually low. On the other hand, the performance of the UK science base, as measured by academic measures rather than economic ones, is strikingly good. Updating a much-quoted formula, the UK accounts for 3% of the total world R&D spend, it has 4.3% of the world’s researchers, who produce 6.4% of the world’s scientific articles, which attract 10.9% of the world’s citations and produce 13.8% of the world’s top 1% of highly cited papers (these figures come from the analysis in the recent report The International Comparative Performance of the UK Research Base).

This formula is usually quoted to argue for the productivity and effectiveness of the UK research base, and it clearly tells a powerful story about its strength as measured in purely academic terms. But does this mean we get the best out of our research in economic terms? The partial recovery in government R&D spending that we saw from 1998 until last year brought real-terms increases in science budgets (though without significantly increasing the fraction of GDP spent on science). These increases were focused on basic research, whose share of total government science spending doubled between 1986 and 2005. This has allowed us to preserve the strength of our academic research base, but the decline in more applied R&D in both government and industrial laboratories has weakened our capacity to convert this strength into economic growth.

Our national economic experiment in deregulated capitalism ended in failure, as the 2008 banking collapse and subsequent economic slump have made clear. I don’t know how much the systematic running down of our national research and development capability in the 1980’s and 1990’s contributed to this failure, but I suspect that it’s a significant part of the bigger picture of misallocation of resources associated with the booms and the busts, and the associated disappointingly slow growth in economic productivity.

What should we do now? Everyone talks about the need to “rebalance the economy”, and the government has just released an “Innovation and Research Strategy for Growth”, which claims that “The Government is putting innovation and research at the heart of its growth agenda”. The contents of this strategy – in truth largely a compendium of small-scale interventions that have already been announced, which together still don’t fully reverse last year’s cuts in research capital spending – are of a scale that doesn’t begin to meet this challenge. What we should have seen is not just a commitment to maintain the strength of the fundamental science base, important though that is, but a real will to reverse the national decline in applied research.

Why has the UK given up on nanotechnology?

In a recent roundup of nanotechnology activity across the world, the consultancy Cientifica puts the UK’s activity pretty much at the bottom of the class. Is this a fair reflection of the actual situation? Comparing R&D numbers across countries is always difficult, because of the different institutional arrangements and different ways spending is categorised; but, broadly, this feels about right. Currently, the UK has no actual on-going nanotechnology program. Activity continues in projects that are already established, but the current plans for government science spending in the period 2011–2015, as laid out in the various research council documents, reveal no future role for nanotechnology. The previous cross-council program “Nanoscience through Engineering to Application” has been dropped; all the cross-council programmes now directly reflect societal themes such as “ageing population, environmental change, global security, energy, food security and the digital economy”. The delivery plan for the Engineering and Physical Sciences Research Council, previously the lead council for nanotechnology, does not even mention the word, while the latest strategy document for the Technology Strategy Board, responsible for nearer-market R&D support, notes in a footnote that nanotechnology is “now embedded in all themes where there are such opportunities”.

So, why has the UK given up on nanotechnology? I suggest four reasons.

1. The previous government’s flagship nanotechnology program – the network of Micro- and Nano-Technology centres (the MNT program) – is perceived as having failed. This program was launched in 2003, with initial funding of £90 million, a figure which subsequently was intended to rise to £200 million. But last July, the new science minister, David Willetts, giving evidence to the House of Commons Science and Technology Select Committee, picked on nanotechnology as an area in which funding had been spread too thinly, and suggested that the number of nanotechnology centres was likely to be substantially pruned. To my knowledge, none of these centres has received further funding. In designing the next phase of the government’s translational research centres – a new network of Technology and Innovation Centres, loosely modelled on the German Fraunhofer centres – it seems that the MNT program has been regarded as a cautionary tale of how not to do things, rather than an example to build on, and nanotechnology in itself will play little part in these new centres (though, of course, it may well be an enabling technology for things like regenerative medicine).

2. There has been no significant support for nanotechnology from the kinds of companies and industries that government listens to. This is partly because the UK is now weak in those industrial sectors that would be expected to be most interested in nanotechnology, such as the chemicals industry and the electronics industry. Large national champions in these sectors with the power to influence government, in the way that now-defunct conglomerates like ICI and GEC did in the past, are particularly lacking. Companies selling directly to consumers, in the food and personal care sectors, have been cautious about being too closely involved in nanotechnology for fear of a consumer backlash. The pharmaceutical industry, which is still strong in the UK, has other serious problems to deal with, so nanotechnology has been, for them, a second order issue. And the performance of small, start-up companies based on nanotechnology, such as Oxonica, has been disappointing. The effect of this was brought home to me in March 2010, when I met the then Science Minister, Lord Drayson, to discuss on behalf of the Royal Society the shortcomings of the latest UK Nanotechnology Strategy. To paraphrase his response, he said he knew the strategy was poor, but that was the fault of the nanotechnology community, which had not been able to get its act together to convince the government it really was important. He contrasted this with the space industry, which had been able to make what to him was a very convincing case for its importance.

3. The constant criticism that the government was receiving about its slow response to issues of the safety and environmental impact of nanotechnology was, I am sure, a source of irritation. The reasons for this slow response were structural, related to the erosion of support for strategic science within government (as opposed to the kind of investigator led science funded by the research councils – see this blogpost on the subject from Jack Stilgoe), but in this environment civil servants might be forgiven for thinking that this issue had more downside than upside.

4. Within the scientific community, there were few for whom the idea of nanotechnology was their primary loyalty. After the financial crisis, when it was clear that big public spending cuts were likely and there were fears of very substantial cuts in science budgets, it was natural for scientists either to lobby on behalf of their primary disciplines or to emphasise the direct application of their work to existing industries with strong connections to government, like the pharmaceutical and aerospace industries. In this climate, the more diffuse idea of nanotechnology slipped down a gap.

Does it matter that, in the UK, nanotechnology is no longer a significant element of science and innovation policy? On one level, one could argue that it doesn’t. Just because nanotechnology isn’t an important category by which science is classified, it doesn’t mean that the science that would formerly have been so classified doesn’t get done. We will still see excellent work being supported in areas like semiconductor nanotechnology for optoelectronics, plastic electronics, nano-enabled drug delivery and DNA nanotech, to give just a few examples. But there will be opportunities missed to promote interdisciplinary science, and I think this really does matter. In straitened times, there’s a dangerous tendency for research organisations to retreat to core business, to single disciplines, and we’re starting to see this happening now to some extent. Interdisciplinary, goal-oriented science is still being supported through the societal themes, like the programs in energy and ageing, and it’s going to be increasingly important that these themes do indeed succeed in mobilising the best scientists from different areas to work together.

But I worry that it very much does matter that the UK’s efforts at translating nanotechnology research into new products and new businesses have not been more successful. This is part of a larger problem. The UK has, for the last thirty years, not only not had an industrial policy to speak of, it has had a policy of not having an industrial policy. But the last three years have revealed the shortcomings of this approach, as we realise that we can no longer rely on a combination of North Sea oil and the ephemeral virtual profits of the financial services industry to keep the country afloat.