Geek power?

Mark Henderson’s book “The Geek Manifesto” was part of my holiday reading, and there’s a lot to like in it – there’s all too much stupidity in public life, and anything that skewers a few of the more egregious recent examples of this in such a well-written and well-informed way must be welcomed. There is a fundamental lack of seriousness in our public discourse, a lack of respect for evidence, a lack of critical thinking. But to set against many excellent points of detail, the book is built around one big idea, and it’s that idea that I’m less keen on. This is the argument – implicit in the title – that we should try to construct some kind of identity politics based around those of us who self-identify as being interested in and informed about science – the “geeks”. I’m not sure that this is possible, but even if it was, I think it would be bad for science and bad for politics. This isn’t to say that public life wouldn’t be better if more people with a scientific outlook had a higher profile. One very unwelcome feature of public debate is the prevalence of wishful thinking. Comfortable beliefs that fit into people’s broader world-views do need critical examination, and this often needs the insights of science, particularly the discipline that comes from seeing whether the numbers add up. But science isn’t the only source of the insights needed for critical thinking, and scientists can have some surprising blind-spots, not just about the political, social and economic realities of life, but also about technical issues outside their own fields of interest.

But first, who are these geeks who Henderson thinks should organise? Continue reading “Geek power?”

The UK’s thirty year experiment in innovation policy

In 1981 the UK was one of the world’s most research and development intensive economies, with large scale R&D efforts being carried out in government and corporate laboratories in many sectors. Over the thirty years between then and now, this situation has dramatically changed. A graph of the R&D intensity of the national economy, measured as the fraction of GDP spent on research and development, shows a long decline through the 1980’s and 1990’s, with some levelling off from 2000 or so. During this period the R&D intensity of other advanced economies, like Japan, Germany, the USA and France, has increased, while in fast developing countries like South Korea and China the growth in R&D intensity has been dramatic. The changes in the UK were in part driven by deliberate government policy, and in part have been the side-effects of the particular model of capitalism that the UK has adopted. Thirty years on, we should be asking what the effects of this have been on our wider economy, and what we should do about it.

Gross expenditure on research and development as a % of GDP, 1981 to 2010, for a selection of countries. Data from Eurostat.

The second graph breaks down where R&D takes place. The largest fractional fall has been in research in government establishments, which has dropped by more than 60%. The largest part of this fall took place in the early part of the period, under a series of Conservative governments. This reflects a general drive towards a smaller state, a run-down of defence research, and the privatisation of major, previously research-intensive sectors such as energy. However, it is clear that privatisation didn’t lead to a transfer of the associated R&D to the business sector. It is in the business sector that the largest absolute drop in R&D intensity has taken place – from 1.48% of GDP to 1.08%. Cutting government R&D didn’t lead to increases in private sector R&D, contrary to the expectations of free marketeers who think the state “crowds out” private spending. Instead the business climate of the time, with a drive to unlock “shareholder value” in the short term, squeezed out longer-term investments in R&D. Some seek to explain this drop in R&D intensity in terms of a change in the sectoral balance of the UK economy, away from manufacturing and towards financial services, and this is clearly part of the picture. However, I wonder whether this should be thought of not so much as an explanation, but more as a symptom. I’ve discussed in an earlier post the suggestion that “bad capitalism” – for example, speculation in financial and property markets, with the downside risk being shouldered by the tax-payer – squeezes out genuine innovation.

UK R&D as % of GDP by sector of performance, 1981 to 2010. Data from Eurostat.

The Labour government that came to power in 1997 did worry about the declining R&D intensity of the UK economy, and, in its Science Investment Framework 2004-2014 (PDF), set about trying to reverse the trend. This long-term policy set a target of reaching an overall R&D intensity of 2.5% of GDP by 2014, with R&D intensity in the business sector rising to 1.7%. The mechanisms put in place to achieve this included a period of real-terms increases in R&D spending by government, some tax incentives for business R&D, and a new agency for nearer-term research in collaboration with business, the Technology Strategy Board. In the event, the increases in government spending on R&D did lead to some increase in the UK’s overall research intensity, but the hoped-for increase in business R&D simply did not happen.

This isn’t predominantly a story about academic science, but it provides a context that’s important to appreciate for some current issues in science policy. Over the last thirty years, the research intensity of the UK’s university sector has increased, from 0.32% of GDP to 0.48% of GDP. This reflects, to some extent, real-terms increases in government science budgets, together with the growing success of universities in raising research funds from non-UK-government sources. The resulting R&D intensity of the UK HE sector is at the high end of international comparisons (the corresponding figures for Germany, Japan, Korea and the USA are 0.45%, 0.4%, 0.37% and 0.36%). But where the UK is very much an outlier is in the proportion of the country’s research that takes place in universities. This proportion now stands at 26%, which is much higher than in international competitors (again, we can compare with Germany, Japan, Korea and the USA, where the proportions are 17%, 12%, 11% and 13%), and much higher now than it has been historically (in 1981 it was 14%). So one way of interpreting the pressure on universities to demonstrate the “impact” of their research, which is such a prominent part of the discourse in UK science policy at the moment, is as a symptom of the disproportionate importance of university research in the overall national R&D picture. But the high proportion of UK R&D carried out in universities is as much a measure of the weakness of the government and corporate applied and strategic research sectors as of the strength of the UK’s HE research enterprise. The worry, of course, has to be that, given the hollowed-out state of the business and government R&D sectors, where in the past the more applied research needed to convert ideas into new products and services was done, universities won’t be able to meet the expectations being placed on them.

To return to the big picture, I’ve seen surprisingly little discussion of the effects on the UK economy of this dramatic and sustained decrease in research intensity. Aside from the obvious fact that we’re four years into an economic slump with no apparent prospect of rapid recovery, we know that the UK’s productivity growth has been unimpressive, and the lack of new, high-tech companies that grow fast to a large scale is frequently commented on – where, people ask, is the UK’s Google? We also know that there are urgent unmet needs that only innovation can fulfil – in healthcare and clean energy, for example. Surely now is the time to examine the outcomes of the UK’s thirty year experiment in innovation policy.

Finally, I think it’s worth looking at these statistics again, because they contradict the stories we tell about ourselves as a country. We think of our postwar history as characterised by brilliant invention let down by poor exploitation, whereas the truth is that the UK, in the thirty post-war years, had a substantial and successful applied research and development enterprise. We imagine now that we can make our way in the world as a “knowledge economy”, based on innovation and brain-power. I know that innovation isn’t always the same as research and development, but it seems odd that we should think that innovation can be the speciality of a nation which is substantially less intensive in research and development than its competitors. We should worry instead that we’re in danger of condemning ourselves to being a low innovation, low productivity, low growth economy.

When technologies can’t evolve

In what way, and on what basis, should we attempt to steer the development of technology? This is the fundamental question that underlies at least two discussions that I keep coming back to here – how to do industrial policy and how to democratise science. But some would simply deny the premise of these discussions, and argue that technology can’t be steered, and that the market is the only effective way of incorporating public preferences into decisions about technology development. This is a hugely influential point of view which goes with the grain of the currently hegemonic neo-liberal, free market dominated world-view. It originates in the arguments of Friedrich Hayek against the 1940s vogue for scientific planning, it incorporates Michael Polanyi’s vision of an “independent republic of science”, and it fits the view of technology as an autonomous agent which unfolds with a logic akin to that of Darwinian evolution – what one might call the “Wired” view of the world, eloquently expressed in Kevin Kelly’s recent book “What Technology Wants”. It’s a coherent, even seductive, package of beliefs; although I think it’s fatally flawed, it deserves serious examination.

Hayek’s argument against planning (his 1945 article The Use of Knowledge in Society makes the case very clearly) rests on two insights. Firstly, he insists that the relevant knowledge that would underpin the rational planning of an economy or a society isn’t limited to scientific knowledge, and must include the tacit, unorganised knowledge of people who aren’t experts in the conventional sense of the word. This kind of knowledge, then, can’t rest solely with experts, but must be dispersed throughout society. Secondly, he claims that the most effective – perhaps the only – way in which this distributed knowledge can be aggregated and used is through the mechanism of the market. If we apply this kind of thinking to the development of technology, we’re led to the idea that technological development would happen in the most effective way if we simply allow many creative entrepreneurs to try different ways of combining different technologies and to develop new ones on the basis of existing scientific knowledge and what developments of that knowledge they are able to make. When the resulting innovations are presented to the market, the ones that survive will, by definition, be the ones that best meet human needs. Stated this way, the connection with Darwinian evolution is obvious.

One objection to this viewpoint is essentially moral in character. The market certainly aggregates the preferences and knowledge of many people, but it necessarily gives more weight to the views of people with more money, and the distribution of money doesn’t necessarily coincide with the distribution of wisdom or virtue. Some free market enthusiasts simply assert the contrary, following Ayn Rand. There are, though, some much less risible moral arguments in favour of free markets which emphasise the positive virtues of pluralism, and even those opponents of libertarianism who point to the naivety of believing that this pluralism can be maintained in the face of highly concentrated economic and political power need to answer important questions about how pluralism can be maintained in any alternative system.

What should be less contentious than these moral arguments is an examination of the recent history of technological innovation. This shows that the technologies that made the modern world – in all their positive and negative aspects – are largely the result of the exercise of state power, rather than of the free enterprise of technological entrepreneurs. New technologies were largely driven by large scale interventions by the Warfare States that dominated the twentieth century. The military-industrial complexes of these states began long before Eisenhower popularised this name, and existed not just in the USA, but in Wilhelmine and Nazi Germany, in the USSR, and in the UK (David Edgerton’s “Warfare State: Britain 1920–1970” gives a compelling reinterpretation of modern British history in these terms). At the beginning of the century, for example, the Haber-Bosch process for fixing nitrogen was rapidly industrialised by the German chemical company BASF. It’s difficult to think of a more world-changing innovation – more than half the world’s population wouldn’t now be here if it hadn’t been for the huge growth in agricultural productivity that artificial fertilisers made possible. However, the importance of this process for producing the raw materials for explosives ensured that the German state took much more than a spectator’s role. Vaclav Smil, in his book Enriching the Earth, quotes an estimate for the development cost of the Haber-Bosch process of US$100 million at 1919 prices (roughly US$1 billion in current money, equating to about $19 billion in terms of its share of the economy at the time), of which about half came from the government. Many more recent examples of state involvement in innovation are cited in Mariana Mazzucato’s pamphlet The Entrepreneurial State. Perhaps one of the most important stories is the role of state spending in creating the modern IT industry; computing, the semiconductor industry and the internet are all largely the outcome of US military spending.

Of course, the historical fact that the transformative, general purpose technologies that were so important in driving economic growth in the twentieth century emerged as a result of state sponsorship doesn’t by itself invalidate the Hayekian thesis that innovation is best left to the free market. To understand the limitations of this picture, we need to return to Hayek’s basic arguments. Under what circumstances does the free market fail to aggregate information in an optimal way? People are not always rational economic actors – they know what they want and need now, but they aren’t always good at anticipating what they might want if things they can’t imagine become available, or what they might need if conditions change rapidly. There’s a natural cognitive bias to give more weight to the present, and less to an unknowable future. Just like natural selection, the optimisation process that the market carries out is necessarily local, not global.
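
To make the “local, not global” point concrete, here is a minimal sketch in Python (my own illustration, using an entirely made-up fitness landscape, not anything from the original post): a greedy hill-climber settles on whichever peak happens to be nearest its starting point, which may be far below the best available.

    # A toy illustration of local optimisation: greedy hill-climbing on a
    # made-up one-dimensional "fitness" landscape. The search settles on
    # whichever local peak is nearest its starting point, which may be far
    # below the global optimum, just as market selection optimises locally.
    import math

    def fitness(x):
        # Hypothetical landscape: a modest peak near x = 1, a higher one near x = 4.
        return math.exp(-(x - 1.0)**2) + 2.0 * math.exp(-2.0 * (x - 4.0)**2)

    def hill_climb(x, step=0.01, max_steps=10000):
        for _ in range(max_steps):
            best = max((x - step, x, x + step), key=fitness)
            if best == x:
                break          # no neighbouring improvement: stuck on a local peak
            x = best
        return x

    print(round(hill_climb(0.0), 2))   # ends near x = 1, the nearby, lower peak
    print(round(hill_climb(3.0), 2))   # ends near x = 4, the higher peak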

So when does the Hayekian argument for leaving innovation to the market not apply? The free market works well for evolutionary innovation – local optimisation is good at solving present problems with the tools at hand now. But it struggles to mobilise resources on a large scale for big problems whose solution will take more than a few years. So, we’d expect market-driven innovation to fail to deliver whenever timescales for development are too long, or the expense of development too great. Because capital markets are now short-term to the point of irrationality (as demonstrated by this study (PDF) from the Bank of England by Andrew Haldane), the private sector rejects long-term investments in infrastructure and R&D, even if the net present value of those investments would be significantly positive. In the energy sector, for example, we saw widespread liberalisation of markets across the world in the 1990s. One predictable consequence of this has been a collapse of private sector R&D in the energy sector (illustrated for the case of the USA by Dan Kammen here – The Incredible Shrinking Energy R&D Budget (PDF)).
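
A toy calculation, with purely illustrative numbers rather than anything drawn from the Haldane study, shows how this plays out: a long-horizon R&D project that is comfortably worthwhile at a patient discount rate looks like a value-destroyer if future returns are discounted too steeply.

    # Hypothetical cash flows, purely for illustration: an R&D programme
    # costs 100 a year for 5 years, then returns 60 a year for the
    # following 25 years. Its net present value is positive for a patient
    # investor but negative under short-termist discounting.

    def npv(cashflows, rate):
        # cashflows[t] is the net cash flow in year t (t = 0, 1, 2, ...)
        return sum(cf / (1 + rate)**t for t, cf in enumerate(cashflows))

    project = [-100]*5 + [60]*25

    print(round(npv(project, 0.05)))   # patient discounting: about +240, worth doing
    print(round(npv(project, 0.15)))   # steep discounting: about -160, rejected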

The contrast is clear if we compare two different cases of innovation – the development of new apps for the iPhone, and the development of innovative new passenger aircraft, like the composite-based Boeing Dreamliner and Airbus A350. The world of app development is one in which tens or hundreds of thousands of people can and do try out all sorts of ideas, a few of which have turned out to fulfil an important and widely appreciated need and have made their developers rich. This is a world that’s well described by the Hayekian picture of experimentation and evolution – the low barriers to entry and the ease of widespread distribution of the products reward experimentation. Making a new airliner, in contrast, involves years of development and outlays of tens of billions of dollars in development cost before any products are sold. Unsurprisingly, the only players are two huge companies – essentially a world duopoly – each of which is in receipt of substantial state aid of one form or another. The lesson is that technological innovation doesn’t just come in one form. Some innovation – with low barriers to entry, often building on existing technological platforms – can be done by individuals or small companies, and can be understood well in terms of the Hayekian picture. But innovation on a larger scale, the more radical innovation that leads to new general purpose technologies, needs either a large company with a protected income stream or outright state action. In the past the companies able to carry out innovation on this scale would typically have been a state-sponsored “national champion”, supported perhaps by guaranteed defence contracts, or the beneficiary of a monopoly or cartel, such as the postwar Bell Labs.

If the prevalence of this Hayekian thinking about technological innovation really does mean that we’re less able now to introduce major, world-changing innovations than we were 50 years ago, this would matter a great deal. One way of thinking about this is in evolutionary terms – if technological innovation is only able to proceed incrementally, there’s a risk that we’re less able to adapt to sudden shocks, we’re less able to anticipate the future and we’re at risk of being locked into technological trajectories that we can’t alter later in response to unexpected changes in our environment or unanticipated consequences. I’ve written earlier about the suggestion that, far from seeing universal accelerating change, we’re currently seeing innovation stagnation. The risk is that we’re seeing less in the way of really radical innovation now, at a time when pressing issues like climate change, peak cheap oil and demographic transitions make innovation more necessary than ever. We are seeing a great deal of very rapid innovation in the world of information, but this rapid pace of change in one particular realm has obscured much less rapid growth in the material realm and the biological realm. It’s in these realms that slow timescales and the large scale of the effort needed mean that the market seems unable to deliver the innovation we need.

It’s not going to be possible, nor would it be desirable, for us to return to the political economies of the mid-twentieth century warfare states that delivered the new technologies that underlie our current economies. Whatever other benefits the turn to free markets may have delivered, it seems to have been less effective at providing radical innovation, and with the need for those radical innovations becoming more urgent, some rethinking is now urgently required.

A billion dollar nanotech spinout?

The Oxford University spin-out Oxford Nanopore Technologies created a stir last month by announcing that it would be bringing to market this year systems to read out the sequence of individual DNA molecules by threading them through nanopores. It’s claimed that this will allow a complete human genome to be sequenced in about 15 minutes for a few thousand dollars; the company is also introducing a cheap, disposable sequencer which will sell for less than $900. Speculation has now begun about the future of the company, with valuations of $1–2 billion being discussed if the company goes public in the next 18 months.

It’s taken a while for this idea of sequencing a single DNA molecule by directly reading out its bases to come to fruition. The original idea came from David Deamer and Harvard’s Dan Branton in the mid-1990s; from Hagan Bayley, in Oxford, came the idea of using an engineered derivative of a natural pore-forming protein to form the hole through which the DNA is threaded. I’ve previously reported progress towards this goal here, in 2005, and in more detail here, in 2007. The Oxford Nanopore announcement gives us some clues as to the key developments since then. The working system uses a polymer membrane, rather than a lipid bilayer, to carry the pore array, which undoubtedly makes the system much more robust. The pore is still created from a pore-forming protein, though this has been genetically engineered to give greater discrimination between different combinations of bases as the DNA is threaded through the hole. And, perhaps most importantly, an enzyme is used to grab DNA molecules from solution and feed them through the pore. In practice, the system will be sold as a set of modular units containing the electronics and interface, together with consumable cartridges, presumably including the nanopore arrays and the enzymes. The idea is to take single molecule analysis beyond DNA to include RNA and proteins, as well as various small molecules, with a different cartridge being available for each type of experiment. This will depend on the success of the company’s programme to develop a whole family of different pores able to discriminate between different types of molecules.

What will the impact of this development be, if everything works as well as is being suggested? (The prudent commentator should stress the if here, as we haven’t yet seen any independent trials of the technology.) Much has already been written about the implications of cheap – less than $1000 – sequencing of the human genome, but I can’t help wondering whether this may not actually be the big story here. And in any case, that goal may end up being reached with or without Oxford Nanopore, as this recent Nature News article makes clear. We still don’t know whether the Oxford Nanopore technique will be competitive on accuracy and price with the other contending approaches. I wonder, though, whether we are seeing here something from the classic playbook for a disruptive innovation. The $900 device in particular looks like it’s intended to create new markets for cheap, quick and dirty sequencing, to provide an income stream while the technology is improved further – with better, more selective pores and better membranes (inevitably, perhaps, Branton’s group at Harvard reported using graphene membranes for threading DNA in Nature last year). As computers continue to get faster, cheaper and more powerful, the technology will automatically benefit from these advances too – fragmentary and perhaps imperfect sequence information has much greater value in the context of vast existing sequence libraries and the data processing power to use them. Perhaps applications for this will be found in forensic and environmental science, diagnostics, microbiology and synthetic biology. The emphasis on molecules other than DNA is interesting too; single molecule identification and sequencing of RNA opens up the possibility of rapidly identifying what genes are being transcribed in a cell at a given moment (the so-called “transcriptome”).
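
As a toy illustration of that last point (my own sketch, nothing to do with Oxford Nanopore’s actual software), even a short, error-containing read can be located against a reference sequence simply by minimising mismatches over all alignment positions; the value of fragmentary data comes from the libraries it can be compared against.

    # Toy example only: find where a short, possibly error-containing read
    # best fits a known reference sequence by minimising the number of
    # mismatches (Hamming distance) at every alignment position. Real
    # sequence analysis uses far more sophisticated aligners, but the
    # principle is the same.

    def best_match(read, reference):
        best_pos, best_mismatches = None, len(read) + 1
        for i in range(len(reference) - len(read) + 1):
            mismatches = sum(a != b for a, b in zip(read, reference[i:i + len(read)]))
            if mismatches < best_mismatches:
                best_pos, best_mismatches = i, mismatches
        return best_pos, best_mismatches

    reference = "GATTACAGGCTTAACCGGTTACGATCCATGCA"   # made-up reference sequence
    read = "TTAACCGCTT"                             # fragment with one base-call error

    print(best_match(read, reference))              # (10, 1): position 10, one mismatch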

The impact on the investment markets for nanotechnology is likely to be substantial. Existing commercialisation efforts around nanotechnology have been disappointing so far, but a company success on the scale now being talked about would undoubtedly attract more money into the area – perhaps it might also persuade some of the companies currently sitting on huge piles of cash that they might usefully invest some of this in a little more research and development. What’s significant about Oxford Nanopore is that it is operating in a sweet spot between the mundane and the far-fetched. It’s not a nanomaterials company, essentially competing in relatively low margin speciality chemicals, nor is it trying to make a nanofactory or nanoscale submarine or one of the other more radical visions of the nanofuturists. Instead, it’s using the lessons of biology – and indeed some of the components of molecular biology – to create a functional device that operates on the true single molecule level to fill real market needs. It also seems to be displaying a commendable determination to capture all the value of its inventions, rather than licensing its IP to other, bigger companies.

Finally, not the least of the impacts of a commercial and technological success on the scale being talked about would be on nanotechnology itself as a discipline. In the last few years the field’s early excitement has been diluted by a sense of unfulfilled promise, especially, perhaps, in the UK; last year I asked “Why has the UK given up on nanotechnology?” Perhaps it will turn out that some of that disillusionment was premature.

Where the randomness comes from

For perhaps 200 years it was possible to believe that physics gave a picture of the world with no place for randomness. Newton’s laws prescribe a picture of nature that is completely deterministic – at any time, the future is completely specified by the present. For anyone attached to the idea that they have some control over their destiny, that the choices they make have any influence on what happens to them, this seems problematic. Yet the idea of strict physical determinism, the idea that free will is an illusion in a world in which the future is completely predestined by the laws of physics, remains strangely persistent, despite the fact that it isn’t (I believe) supported by our current scientific understanding.

The mechanistic picture of a deterministic universe received a blow with the advent of quantum mechanics, which seems to introduce an element of randomness to the picture – in the act of “measurement”, the state function of a quantum system discontinuously changes according to a law which is probabilistic rather than deterministic. And when we look at the nanoscale world, at least at the level of phenomenology, randomness is ever-present, summed up in the phenomenon of Brownian motion, and leading inescapably to the second law of thermodynamics. And, of course, if we are talking about human decisions (should we go outside in the rain, or have another cup of tea?) the physical events in the brain that initiate the process of us opening the door or putting the kettle on again are strongly subject to this randomness; those physical events, molecules diffusing across synapses, receptor molecules changing shape in response to interactions with signalling molecules, shock waves of potential running up membranes as voltage-gated pores in the membrane open and close, all take place in that warm, wet, nanoscale domain in which Brownian motion dominates and the dynamics is described by Langevin equations, complete with their built-in fluctuating forces. Is this randomness real, or just an appearance? Where does it come from?
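
For readers who like to see the structure of those Langevin equations, here is a minimal sketch in Python of overdamped Langevin dynamics for a single Brownian particle in a harmonic potential. The parameter values are arbitrary, chosen only to show how an explicit fluctuating force enters alongside the deterministic one.

    # Minimal overdamped Langevin dynamics in one dimension:
    # friction * dx/dt = -dU/dx + random force, with the noise amplitude
    # fixed by the fluctuation-dissipation relation. All parameter values
    # are arbitrary, for illustration only.
    import math, random

    friction = 1.0        # friction coefficient (arbitrary units)
    kT = 1.0              # thermal energy
    k_spring = 0.5        # harmonic potential U(x) = 0.5 * k_spring * x**2
    dt = 0.01
    noise = math.sqrt(2.0 * kT * dt / friction)

    x, x_squared = 0.0, []
    for step in range(100000):
        deterministic_force = -k_spring * x
        x += (deterministic_force / friction) * dt + noise * random.gauss(0.0, 1.0)
        x_squared.append(x * x)

    # However the trajectory wanders, the long-time average of x**2 should
    # approach kT / k_spring = 2, i.e. the Boltzmann distribution.
    print(sum(x_squared) / len(x_squared))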

I suspect the answer to this question, although well-understood, is not necessarily widely appreciated. It is real randomness – not just the appearance of randomness that follows from the application of deterministic laws in circumstances too complex to model – and its ultimate origin is indeed in the indeterminism of quantum mechanics. To understand how the randomness of the quantum realm gets transmitted into the Brownian world, we need to remember first that the laws of classical, Newtonian physics are deterministic, but only just. If we imagine a set of particles interacting with each other through well-known forces, defined through potentials of the kind you might use in a molecular dynamics simulation, the way in which the system evolves in time is in principle completely determined, but in practice any small perturbation to the deterministic laws (such as a rounding error in a computer simulation) will have an effect which grows with time to widen the range of possible outcomes that the system will explore, a widening that macroscopically we’d interpret as an increase in the entropy of the system.
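
The “deterministic, but only just” point can be illustrated with something far simpler than a molecular dynamics simulation. The logistic map with r = 4 is a standard textbook example of deterministic chaos (my example, not one discussed in the post): two copies started with a difference of the order of a rounding error separate roughly exponentially until they bear no relation to each other.

    # Two copies of the same deterministic rule, the logistic map with r = 4,
    # started with initial conditions that differ by about a rounding error.
    # The separation grows by roughly a factor of two per step on average,
    # so a perturbation of 1e-12 swamps the calculation within a few dozen steps.
    x1, x2 = 0.3, 0.3 + 1e-12

    for step in range(60):
        x1 = 4.0 * x1 * (1.0 - x1)
        x2 = 4.0 * x2 * (1.0 - x2)
        if step % 10 == 0:
            print(step, abs(x1 - x2))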

To understand where, physically, this perturbation might come from we have to ask where the forces between molecules originate, as they interact and bounce off each other. One ubiquitous force in the nanoscale world is known to chemists as the Van der Waals force. In elementary physics and chemistry, this is explained as a force that arises between two neutral objects when a randomly arising dipole in one object induces an opposite dipole in the other object, and the two dipoles then attract each other. Another, perhaps deeper, way of thinking about this force is due to the physicists Casimir and Lifshitz, who showed that it arises from the way objects modify the quantum fluctuations that are always present in the vacuum – the photons that come in and out of existence even in the emptiest of empty spaces. This way of thinking about the Van der Waals force makes clear that because the force arises from the quantum fluctuations of the vacuum, the force must itself be fluctuating – it has an intrinsic randomness that is sufficient to explain the randomness we observe in the nanoscale world.
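
For concreteness, the standard textbook result (not quoted in the original post) for the attractive Casimir pressure between two parallel, perfectly conducting plates a distance d apart, arising entirely from the vacuum fluctuations of the electromagnetic field, is

    P(d) = \frac{\pi^2 \hbar c}{240\, d^4}

while the corresponding non-retarded van der Waals interaction between two neutral molecules falls off as U(r) = -C_6 / r^6. That a measurable, distance-dependent force follows from “empty space” is the sense in which the vacuum fluctuations, and the randomness they carry, are physically real.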

So, to return to the question of whether free will is compatible with physical determinism, we can now see that this is not an interesting question, because the rules that govern the operation of the brain are fundamentally not deterministic. Of course, the question of how free will might emerge from a non-deterministic, stochastic system isn’t a trivial one either, but at least it starts from the right premise – we can say categorically that strict physical determinism, as applied to the operation of the brain, is false. The brain is not a deterministic system, but one in which randomness is central and inescapable.

One might go on to ask why some people are so keen to hold on to the idea of strict physical determinism, more than a hundred years after the discoveries of quantum mechanics and statistical mechanics that make determinism untenable. This is too big a question for me to even attempt to answer here, but maybe it’s worth pointing out that there seems to be quite a lot of determinism around – in addition to physical determinism, genetic determinism and technological determinism seem to be attractive to many people at the moment. Of course, the rise of the Newtonian mechanistic world-view occurred at a time when a discussion about the relationship between free will and a theological kind of determinism was very current in Christian Europe, and I’m tempted to wonder whether the appeal of these modern determinisms might be part of the lingering legacy of Augustine of Hippo and Calvin to the modern age.

Slouching towards an industrial policy

The UK’s Science Minister, David Willetts, gave a speech last week on “Our High Tech Future”. The headlines about it were dominated by one somewhat odd policy announcement, which I’ll come to later, but what’s more interesting is the fact that he chose (apparently at quite short notice) to give the speech at all, only weeks after the publication of a strategy for “Innovation and Research for Growth”, which was widely regarded as, at best, a retrospective attempt to give coherence to a series of rather random acts of policy. I’m tempted to interpret the speech as a signal that government policy, not yet completely formed, is still evolving in some quite interesting directions. In short, after 32 years, the Conservatives are rediscovering the need for industrial policy.
Continue reading “Slouching towards an industrial policy”

A little history of bionanotechnology and nanomedicine

I wrote this piece as a briefing note in connection with a study being carried out by the Nuffield Council on Bioethics about Emerging Biotechnologies. I’m not sure whether bionanotechnology or nanomedicine should be considered as emerging biotechnologies, but this is an attempt to sketch out the connections.

Nanotechnology is not a single technology; instead it refers to a wide range of techniques and methods for manipulating matter on length scales from a nanometer or so – i.e. the typical size of molecules – to hundreds of nanometers, with the aim of creating new materials and functional devices. Some of these methods represent the incremental evolution of well-established techniques of applied physics, chemistry and materials science. In other cases, the techniques are at a much earlier stage of development, with promises about their future power being based on simple proof-of-principle demonstrations.

Although nanotechnology has its primary roots in the physical sciences, it has always had important relationships with biology, both at the rhetorical level and in practical outcomes. The rhetorical relationship derives from the observation that the fundamental operations of cell biology take place at the nanoscale, so one might expect there to be something particularly powerful about interventions in biology that take place on this scale. Thus the idea of “nanomedicine” has been prominent in the promises made on behalf of nanotechnology from its earliest origins, and as a result has entered popular culture in the form of the exasperating but ubiquitous image of the “nanobot” – a robot vessel on the nano- or micro-scale, able to navigate through a patient’s bloodstream and effect cell-by-cell repairs. This was mentioned as a possibility in Richard Feynman’s 1959 lecture, “There’s Plenty of Room at the Bottom”, which is widely (though retrospectively) credited as the founding manifesto of nanotechnology, but it was already at this time a common device in science fiction. The frequency with which conventionally credentialed nanoscientists have argued that this notion is impossible or impracticable, at least as commonly envisioned, has had little effect on the enduring hold it has on the popular imagination.
Continue reading “A little history of bionanotechnology and nanomedicine”

Science in hard times

How should the hard economic times we’re going through affect the amount of money governments spend on scientific and technological research? The answer depends on your starting point – if you think that science is an optional extra that we do if we’re prosperous, then decreasing prosperity must inevitably mean we can afford to do less science. But if you think that our prosperity depends on the science we do, then if growth is starting to stall, that’s a signal telling you to devote more resources to research. This is a huge oversimplification, of course; the link between science and prosperity can never be automatic. How effective that link will be will depend on the type of science and technology you support, and on the nature of the wider economic system that translates innovations into economic growth. It’s worth taking a look at recent economic history to see some of the issues at play.

UK government spending on research and development compared with the real growth in per capita GDP. R&D data (red) from the Royal Society report The Scientific Century, adjusted to constant 2005 £s; GDP per person data (blue) from Measuring Worth; dotted blue line shows current projections from the November 2011 forecast of the UK Office for Budget Responsibility (uncorrected for population changes).

The graph shows both the real GDP per person in the UK from 1946 up to the present and the amount of money, again in real terms, spent by the government on research and development. The GDP graph tells an interesting story in itself, making very clear the discontinuity in economic policy that happened in 1979. In that year Margaret Thatcher’s new Conservative government overthrew a broad, thirty-year consensus, shared by both parties, on how the economy should be managed. Before 1979, we had a mixed economy, with substantial industrial sectors under state control, highly regulated financial markets, including controls on the flow of capital in and out of the country, and the macro-economy governed by the principles of Keynesian demand management. After 1979, it was not Keynes, but Hayek, who supplied the intellectual underpinning, and we saw progressive privatisation of those parts of the economy under state control, the abolition of controls on capital movements and deregulation of financial markets. In terms of economic growth, measured in real GDP per person, the period between 1946 and 1979 was remarkable, with a steady increase of 2.26% per year – this is, I think, the longest sustained period of high growth in the modern era. Since 1979, we’ve seen a succession of deep recessions, followed by periods of rapid, and evidently unsustainable, growth fuelled by asset price bubbles. The peaks of these periods of growth have barely attained the pre-1979 trend line, while in our current economic travails we find ourselves about 9% below trend. Not only does there seem to be no imminent prospect of the rapid growth we’d need to return to that trend line, but there now seems to be a likelihood of another recession.
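
A quick back-of-envelope check in Python, using only the figures quoted above, shows what that 2.26% trend implies and what being roughly 9% below it amounts to.

    # Back-of-envelope arithmetic using only the figures quoted above.
    import math

    growth_rate = 0.0226                      # pre-1979 trend growth per year
    doubling_time = math.log(2) / math.log(1 + growth_rate)
    print(round(doubling_time, 1))            # ~31 years to double GDP per person

    # Being ~9% below the trend line corresponds to this many years of
    # trend growth foregone:
    years_lost = math.log(1 / (1 - 0.09)) / math.log(1 + growth_rate)
    print(round(years_lost, 1))               # ~4.2 years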

The plot for public R&D spending tells its own story, which also shows a turning point with the Thatcher government. From 1980 until 1998, we see a substantial long-term decline in research spending, not just as a fraction of GDP, but in absolute terms; since 1998 research spending has increased again in real terms, though not substantially faster than the rise in GDP over the same period. Underlying the decline were a number of factors. There was a real squeeze on spending on research in universities, well remembered by those who were working in them at the time. Meanwhile the research spending in those industries that were being privatised – such as telecommunications and energy – was removed from the government spending figures. And the activities of government research laboratories – particularly those associated with defence and the nuclear industry – were significantly wound down. Underlying this winding down of research was both a political motive and an ideological one. Big government spending on high technology was associated with the corporate politics of the 1960s, subscribed to by both parties but particularly associated with Labour, and the memorable slogan “The White Heat of Technology”. To its detractors this summoned up associations with projects like the supersonic passenger aircraft Concorde, a technological triumph but a commercial disaster. To the adherents of the Hayekian free market ideology that underpinned the Thatcher government, the state had no business doing any research but the most basic and far from market. On this view, state-supported research was likely to be not only less efficient and less effectively directed than research in the private sector, but by “squeezing out” such private sector research it would actually make the economy less efficient.

The idea that state support of research “squeezes out” research spending by the private sector remains attractive to free market ideologues, but the empirical evidence points to the opposite conclusion – state spending and private sector spending on research support each other, with increases in state R&D spending leading to increases in R&D by business (see for example Falk M (2006), What drives business research and development intensity across OECD countries? (PDF), Applied Economics 38, p533). Certainly, in the UK, the near-halving of government R&D spend between 1980 and 1999 did not lead to an increase in R&D by business; instead, this also fell, from about 1.4% of GDP to 1.2%. Not only did those companies that had been privatised substantially reduce their R&D spending, but other major players in industrial R&D – such as the chemical company ICI and the electronics company GEC – substantially cut back their activities. At the time many rationalised this as the inevitable result of the UK economy changing its mix of sectors, away from manufacturing towards service sectors such as the financial services industry.

None of this answers the questions: how much should one spend on R&D, and what difference do changes in R&D spend make to economic performance? It is certainly clear that the decline in R&D spending in the UK isn’t correlated with any improvement in its economic performance. International comparisons show that the proportion of GDP spent on R&D in the UK is significantly lower than in most of its major competitors, and within this the proportion of R&D supported by business is itself unusually low. On the other hand, the performance of the UK science base, as judged by academic rather than economic measures, is strikingly good. Updating a much-quoted formula, the UK accounts for 3% of total world R&D spend and has 4.3% of the world’s researchers, who produce 6.4% of the world’s scientific articles; these attract 10.9% of the world’s citations and include 13.8% of the world’s top 1% of highly cited papers (these figures come from the analysis in the recent report The International Comparative Performance of the UK Research Base).
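
The ratios buried in that formula can be made explicit with a few lines of Python, using only the shares quoted above: relative to its share of world R&D spend, the UK is over-represented by a factor of more than three in citations and more than four in the most highly cited papers.

    # The ratios implicit in the "much-quoted formula", using only the
    # shares quoted in the text (each as a % of the world total).
    uk_share = {
        "R&D spend": 3.0,
        "researchers": 4.3,
        "articles": 6.4,
        "citations": 10.9,
        "top 1% cited papers": 13.8,
    }

    for measure, share in uk_share.items():
        ratio = share / uk_share["R&D spend"]
        print(f"{measure}: {ratio:.1f}x the UK's share of world R&D spend")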

This formula is usually quoted to argue for the productivity and effectiveness of the UK research base, and it clearly tells a powerful story about its strength as measured in purely academic terms. But does this mean we get the best out of our research in economic terms? The partial recovery in government R&D spending that we saw from 1998 until last year brought real-terms increases in science budgets (though without significantly increasing the fraction of GDP spent on science). These increases were focused on basic research, whose share of total government science spending doubled between 1986 and 2005. This has allowed us to preserve the strength of our academic research base, but the decline in more applied R&D in both government and industrial laboratories has weakened our capacity to convert this strength into economic growth.

Our national economic experiment in deregulated capitalism ended in failure, as the 2008 banking collapse and subsequent economic slump have made clear. I don’t know how much the systematic running down of our national research and development capability in the 1980s and 1990s contributed to this failure, but I suspect that it’s a significant part of the bigger picture of misallocation of resources associated with the booms and the busts, and of the disappointingly slow growth in economic productivity that accompanied them.

What should we do now? Everyone talks about the need to “rebalance the economy”, and the government has just released an “Innovation and Research Strategy for Growth”, which claims that “The Government is putting innovation and research at the heart of its growth agenda”. The contents of this strategy – in truth largely a compendium of small-scale interventions that have already been announced, which together still don’t fully reverse last year’s cuts in research capital spending – are of a scale that doesn’t begin to meet this challenge. What we should have seen is not just a commitment to maintain the strength of the fundamental science base, important though that is, but a real will to reverse the national decline in applied research.

Can plastic solar cells deliver?

The promise of polymer solar cells is that they will be cheap enough and produced on a large enough scale to transform our energy economy, unlocking the sun’s potential to meet all our energy needs in a sustainable way. But there’s a long way to go from a device in a laboratory, or even a company’s demonstrator product, to an economically viable product that can be made at scale. How big is that gap, are there insuperable obstacles standing in the way, and if not, how long might it take us to get there? Some answers to these questions are now beginning to emerge, and I’m cautiously optimistic. Although most attention is focused on efficiency, the biggest outstanding technical issue is to prolong the lifetime of the solar cells. And before plastic solar cells can be introduced on a mass scale, it’s going to be necessary to find a substitute for indium tin oxide as a transparent electrode. If we can do both of these things, the way is open for a real transformation of our energy system.

The obstacles are both technical and economic – but of course it doesn’t make sense to consider these separately, since it is technical improvements that will make the economics look better. A recent study starts to break down the likely costs and identify where we need to find improvements. The paper – Economic assessment of solar electricity production from organic-based photovoltaic modules in a domestic environment, by Brian Azzopardi, from Manchester University, with coworkers from Imperial College, Cartagena, and Risø (Energy and Environmental Science 4 p3741, 2011) – breaks down an estimate of the cost of power generated by a polymer photovoltaic fabricated on a plastic substrate by a manufacturing process already at the prototype stage. This process uses the most common combination of materials – the polymer P3HT together with the fullerene derivative PCBM. The so-called “levelised power cost” – i.e. the cost per unit of electricity, including all capital costs, averaged over the lifetime of the plant – comes in between €0.19 and €0.50 per kWh for 7% efficient solar cells with a lifetime of 5 years, assuming southern European sunshine. This is, of course, too expensive compared both to alternatives like fossil fuel or nuclear energy and to conventional solar cells, though the gap with conventional solar isn’t massive. But the technology is still immature, so what improvements in performance and reductions in cost is it reasonable to expect?
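
To see how efficiency, lifetime and cost feed into a number of this kind, here is a deliberately simplified levelised-cost estimate in Python. Every input value is my own illustrative assumption, not a figure from the Azzopardi paper, whose model (discounting, degradation, balance-of-system costs) is considerably more detailed.

    # Deliberately simplified levelised-cost estimate; all inputs are
    # illustrative assumptions, not figures from the paper discussed above.

    efficiency = 0.07            # 7% module efficiency
    insolation = 1700.0          # kWh per m2 per year, roughly southern Europe
    lifetime_years = 5           # assumed module lifetime
    system_cost_per_m2 = 120.0   # euros per m2 installed (assumed)

    lifetime_energy = efficiency * insolation * lifetime_years   # kWh per m2
    levelised_cost = system_cost_per_m2 / lifetime_energy        # euros per kWh

    print(round(lifetime_energy))      # ~595 kWh per m2 over the module's life
    print(round(levelised_cost, 2))    # ~0.20 euros per kWh with these assumptions

With these made-up inputs the answer happens to land near the bottom of the range quoted above; the point is simply that lifetime enters the denominator just as directly as efficiency does, which is why the durability question discussed next matters so much.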

The two key technical parameters are efficiency and lifetime. Most research effort so far has concentrated on improving efficiencies – values greater than 4% are now routine for the P3HT/PCBM system; a newer system, involving a different fullerene derivative, PC70BM, blended with the polymer PCDTBT (I find even the acronym difficult to remember, but for the record the full name is poly[9’-heptadecanyl-2,7-carbazole-alt-5,5-(4’,7’-di-2-thienyl-2’,1’,3’-benzothiadiazole)]), achieves efficiencies greater than 6%. These values will improve, through further tweaking of the materials and processes. Azzopardi’s analysis suggests that efficiencies in the range 7-10% may already be looking viable… as long as the cells last long enough. This is potentially a problem – it’s been understood for a while that the lifetime of polymer solar cells may well prove to be their undoing. The active materials in polymer solar cells – conjugated polymer semiconductors – are essentially overgrown dyes, and we all know that dyes tend to bleach in the sun. Five years seems to be a minimum lifetime to make this a viable technology, but up to now many laboratory devices have struggled to last more than a few days. Another recent paper, however, gives grounds for more optimism. This paper – High Efficiency Polymer Solar Cells with Long Operating Lifetimes (Advanced Energy Materials 1 p491, 2011), from the Stanford group of Michael McGehee – demonstrates a PCDTBT/PC70BM solar cell with a lifetime of nearly seven years. This doesn’t mean all our problems are solved, though – this device was encapsulated in glass, rather than printed on a flexible plastic sheet. Glass is much better than plastics at keeping harmful oxygen away from the active materials; to reproduce this lifetime in an all-plastic device will need more work to improve the oxygen barrier properties of the module.

How does the cost of a plastic solar cell break down, and what reductions is it realistic to expect? The analysis by Azzopardi and coworkers shows that the cost of the system is dominated by the cost of the modules, and the cost of the modules is dominated by the cost of the materials. The other elements of the system cost will probably continue to decrease anyway, as much of this is shared with other types of solar cells. What we don’t know yet is the extent to which the special advantages of plastic solar cells over conventional ones – their lightness and flexibility – can reduce the installation costs. As we’ve been expecting, the cheapness of processing plastic solar cells means that manufacturing costs – including the capital costs of the equipment to make them – are small compared to the cost of materials. These materials make up 60-80% of the cost of the modules. Part of this is simply the cost of the semiconducting polymers; these costs will certainly come down with time, as experience of making the polymers at scale grows. But the surprise for me is the importance of the cost of the substrate, or more accurately the cost of the thin, transparent conducting electrode which coats the substrate – this represents up to half of the total cost of materials. This is going to be a real barrier to the large-scale uptake of this technology.

The transparent electrode currently used is a thin layer of indium tin oxide – ITO. This is a very widely used material in touch screens and liquid crystal displays, and it currently represents the major use of the metal indium, which is rare and expensive. So unless a replacement for ITO can be found, it’s the cost and availability of this material that’s going to limit the use of plastic solar cells. Transparency and electrical conductivity don’t usually go together, so it’s not straightforward to find a substitute. Carbon nanotubes, and more recently graphene, have been suggested, but currently they’re neither good enough by themselves, nor is there a process to make them cheaply at scale (a good summary of the current contenders can be found in Rational Design of Hybrid Graphene Films for High-Performance Transparent Electrodes by Zhu et al, ACS Nano 5 p6472, 2011). So, to make this technology work, much more effort needs to be put into finding a substitute for ITO.

Are you a responsible nanoscientist?

This is the pre-edited version of a piece which appeared in Nature Nanotechnology 4, 336-336 (June 2009). The published version can be viewed here (subscription required).

What does it mean to be a responsible nanoscientist? In 2008, we saw the European Commission recommend a code of conduct for responsible nanosciences and nanotechnologies research (PDF). This is one of a growing number of codes of conduct being proposed for nanotechnology. Unlike other codes, such as the UK-based Responsible Nanocode, which are focused more on business and commerce, the EU code is aimed squarely at the academic research enterprise. In attempting this, it raises some interesting questions about the degree to which individual scientists are answerable for consequences of their research, even if those consequences were ones which they did not, and possibly could not, foresee.

The general goals of the EU code are commendable – it aims to encourage dialogue between everybody involved in and affected by the research enterprise, from researchers in academia and industry, through policy makers, to NGOs and the general public, and it seeks to make sure that nanotechnology research leads to sustainable economic and social benefits. There’s an important question, though, about how the responsibility for achieving this desirable state of affairs is distributed between the different people and groups involved.

One can, for example, imagine many scientists who might be alarmed at the statement in the code that “researchers and research organisations should remain accountable for the social, environmental and human health impacts that their N&N research may impose on present and future generations.” Many scientists have come to subscribe to the idea of a division of moral labour – they do the basic research, which, in the absence of direct application, remains free of moral implications, and the technologists and industrialists take responsibility for the consequences of applying that science, whether those are positive or negative. One could argue that this division of labour has begun to blur, as the distinction between pure and applied science becomes harder to make. Some scientists are themselves happy to embrace this blurring – after all, they are glad to take credit for the positive impact of past scientific advances, and to cite the potential big impacts that might hypothetically flow from their results.

Nonetheless, it is going to be difficult to convince many that the concept of accountability is fair or meaningful when applied to the downstream implications of scientific research, when those implications are likely to be very difficult to predict at an early stage. The scientists who make an original discovery may well not have a great deal of influence in the way it is commercialised. If there are adverse environmental or health impacts of some discovery of nanoscience, the primary responsibility must surely lie with those directly responsible for creating conditions in which people or ecosystems were exposed to the hazard, rather than the original discoverers. Perhaps it would be more helpful to think about the responsibilities of researchers in terms of a moral obligation to be reflective about possible consequences, to consider different viewpoints, and to warn about possible concerns.

A consideration of the potential consequences of one’s research is one possible approach to proceeding in an ethical way. The uncertainty that necessarily surrounds any predictions about the way research may end up being applied at a future date, and the lack of agency and influence over those applications that researchers often feel, can limit the usefulness of this approach. Another code – the UK government’s Universal Ethical Code for Scientists – takes a different starting point, with one general principle – “ensure that your work is lawful and justified” – and one injunction to “minimise and justify any adverse effect your work may have on people, animals and the natural environment”.

A reference to what is lawful has the benefit of clarity, and it provides a connection through the traditional mechanisms of democratic accountability with some expression of the will of society at large. But the law is always likely to be slow to catch up with new possibilities suggested by new technology, and many would strongly disagree with the principle that what is legal is necessarily ethical. As far as the test of what is “justified” is concerned, one has to ask, who is to judge this?

One controversial research area that would probably pass the test that research should be “lawful and justified” is the application of nanotechnology to defence. Developing a new nanotechnology-based weapons system would clearly contravene the EU code’s injunction to researchers that they “should not harm or create a biological, physical or moral threat to people”. Researchers working in a government research organisation with this aim might find reassurance for any moral qualms in the thought that it was the job of the normal processes of democratic oversight to ensure that their work did pass the tests of lawfulness and justifiability. But this won’t satisfy those people who are sceptical about the ability of institutions – whether in government or in the private sector – to manage the inevitably uncertain consequences of new technology.

The question we return to, then, is how responsibility is divided between the individuals who do science and the organisations, institutions and social structures in which science is done. There’s a danger that codes of ethics focus too much on the individual scientist, at a time when many scientists feel rather powerless, with research priorities increasingly being set from outside, and with the development and application of their research out of their hands. In this environment, too much emphasis on individual accountability could prove alienating, and could divert us from efforts to make the institutions in which science and technology are developed more responsible. Scientists shouldn’t underestimate their collective importance and influence, even if individually they feel rather impotent. Part of the responsibility of a scientist should be to reflect on how one would justify one’s work, and how people with different points of view might react to it; scientists who do this will be in a good position to have a positive influence on the institutions they interact with – funding agencies, for example. But we still need to think more generally about how to build responsible institutions for developing science and technology, as well as responsible nanoscientists.