The Higher Education and Research Bill, the Haldane principle, and the autonomy of science

The UK government’s Higher Education and Research Bill is currently working its way through parliament – it’s now at the committee stage in the House of Lords, where it’s encountering some turbulence. A recent newspaper article by (Lord) Chris Patten – “Leave state control of Universities to China” – gives a sense of how some are perceiving the issues … “But the bill that he [Jo Johnson, Minister for Universities, Science, Research and Innovation] recommends we swallow gives the secretary of state greater power than ever to direct the course of research.”

Any discussion of who has the power to direct government-funded research in the UK will very soon invoke something called the “Haldane principle”, which is held by many to be a quasi-constitutional principle upholding the autonomy of the research community and its immunity from direct political direction. Indeed, the government has been at pains to affirm its adherence to this tradition, stating for example in a publication about the Bill (Higher Education and Research Bill: UKRI Vision, Principles & Governance) that “The commitment to the Haldane principle will be strengthened and protected.”

Nonetheless, the Bill itself does not refer to the Haldane principle. The Bill sets up a new body to oversee science funding, UK Research and Innovation (UKRI), and section 96 of the Bill seems pretty explicit about where power will lie. Subsection (1) of section 96 states baldly: “The Secretary of State may give UKRI directions about the allocation or expenditure by UKRI of grants received”. And for the avoidance of doubt, as the lawyers say, subsection (4) underlines subsection (1) thus: “UKRI must comply with any directions given under this section”.

So, what is the Haldane principle, and if it really does protect the autonomy of science, how can this be consistent with the direct control the new Bill gives the Minister? To get to the bottom of this, it’s worth asking three separate questions. Firstly, what did the supposed originator of the principle, Lord Haldane, actually say? Secondly, what do people now understand by the “Haldane principle”, and what purposes are served by invoking it? And lastly, we should perhaps put the constitutional history aside and go back to first principles, to ask what the proper relationship between politicians and the scientific enterprise should be.

The supposed origin of the Haldane principle is in a report written in 1918 by the “Machinery of Government Committee”, chaired by Viscount Richard Haldane. (It should perhaps be pointed out that this Haldane is not the same as the famous biologist J.B.S. Haldane, who was in fact Richard Haldane’s nephew).

This is a very famous and important report in the history of UK public administration – it established some principles of executive government still followed in the UK today, recommending the foundation of the Cabinet Office, and the creation of Ministries based around key functions, such as Education, Health, Defence, and Finance. It does talk quite extensively about research and intelligence, but it’s clear that Haldane usually means something rather different from academic science when he uses these terms. His focus is much more on what we would call the evidence base for policy.

However, Haldane does discuss the arrangements for supporting medical research, which over the years evolved into the Medical Research Council, and he commends the way the Department of Scientific and Industrial Research was arranged. The key feature here is that responsibility for spending money on research proposals rests with a Minister, who has to answer to Parliament, but that “the scientific and technical soundness of all research proposals recommended for State assistance is guaranteed by constant resort to the guidance of an Advisory Council, consisting of a small number of eminent men of science, with an Administrative Chairman”*. Some sense of the kind of research being talked about is conveyed by the examples cited – “certain enquiries into questions of urgent practical importance relating to the preservation of fish”, “the mortality and illness due to TNT poisoning”. The importance of expert advice emerges clearly, but this doesn’t read like a statement of principle about the autonomy of basic science.

So if Viscount Haldane didn’t invent the Haldane principle, who did? The historian David Edgerton (in his article “The ‘Haldane Principle’ and other invented traditions in science policy”) points the finger at the Conservative politician Lord Hailsham. He invoked the “Haldane principle” in the early 1960s to score a political point off his Labour opponents.

Clearly the Haldane principle still has considerable rhetorical power, so different people attach different meanings to it, according to how well those meanings support the arguments they are making. In a general sense, there’s agreement that the Haldane principle has something to do with protecting the operation of science from direct political intervention, but within this I’d identify two rather different versions.

In the first version, the Haldane principle relates to the devolution of decisions about scientific matters to groups of experts – the Research Councils. This is actually the version that is enshrined in current legislation, to the extent that the powers and responsibilities of the Research Councils are defined by their Royal Charters. When I talk about Research Councils here, I don’t mean the organisations in their Swindon offices – I mean the 15 or so experts who are appointed by the Minister to form each of the research councils’ governing boards (this is the position I hold for the EPSRC). These boards are, in effect, the descendants of the advisory Councils specified by Viscount Haldane – but in the current legislation they do have more independence and autonomy than Haldane implies. But this autonomy is codified in the Royal Charters that the current Higher Education and Research Bill would rescind.

This version of the Haldane principle – the devolution of technical decision-making to bodies of experts – can be thought of as an example of the wider trend in government to offload its responsibilities to technocratic bodies – the granting of independence to the Bank of England and the empowering of the Monetary Policy Committee to set interest rates being perhaps the most visible example.

But another reading of the Haldane principle takes the correct level of devolution of power to be, not to a small group of the scientific great and good, but to the broader scientific community. This comes in a strong version and a weak one. The strong version holds that the scientific enterprise (as conceptualised by Michael Polanyi as a Hayekian spontaneous order – the “independent republic of science”) should not be subject to any external steering at all, and should be configured to maximise, above all, scientific excellence (for some value of “excellence”, of course). Unsurprisingly, this view is popular amongst many elite scientists.

The weak version concedes that at the micro-level of individual research proposals, decisions should be left to peer review, but insists that larger scale, strategic decisions can and should be subject to political control. This is essentially the interpretation of the Haldane principle favoured by recent governments. Of course, in this interpretation, where the line of demarcation between strategic decisions and individual research proposals falls is crucial and contested.

If the point at issue is when, and at what level, it is appropriate for politicians to directly intervene in the selection of research projects, we should ask what purposes could be served by such intervention.

An intrinsically authoritarian government might feel that it ought to have such control on principle, without any particular instrumental purpose.

A government that was excessively concerned about its short-term popularity might seek to use such decisions to gain immediate political advantage, for example by announcing the kind of new initiatives that might attract headlines and win votes in marginal constituencies. The devolution of this kind of decision making to technocratic committees can be read as a recognition by politicians that they need to be thus protected from their own worst instincts, tied to the mast, as it were.

But if a government has an overwhelming strategic purpose, then it might feel that the urgency of that purpose justifies rather direct interventions in the direction of research. To go back to the original Haldane, it’s clear that this is the historical context of that 1918 report.

The central purpose of the state then was clear – “to defeat German militarism”. And as we look back on that war now with horror as an exemplar of futile mass slaughter, it’s easy to forget the degree to which the First World War was a scientific and technological war. Britain had relied for some decades on technology rather than mass mobilisation for its global military power, and the war was born out of a naval arms race of Dreadnoughts, torpedoes and sophisticated fire control. The war itself brought aeroplanes, tanks, submarines, chemicals for new munitions (and chemical weapons), together with state control of the means of production, and Haldane himself, as prewar War Minister, had been central in the modernisation of Britain’s armed forces to exploit these technologies.

My own view is that the elected government do have the right to intervene in the direction of research, but that such interventions need to be motivated by a clearly articulated sense of strategic purpose. For most of the twentieth century, Britain’s “Warfare State” supplied, for better or worse, just such a clear sense of purpose. What’s our urgent sense of state purpose for the 21st century?

* The gendered language leaps out, of course, but in fairness to Viscount Haldane it’s worth pointing out that the report also contains a strikingly modern section about the value of diversity and the importance of opening up senior civil service roles to women. Beatrice Webb was on the committee, as was a notably reactionary Treasury Permanent Secretary, Sir George Murray, who added a note dissenting on this section to his signature.

I’m grateful to Alice Vadrot, convenor of an STS reading group at Cambridge’s Centre for Science and Policy, and its other members, for a stimulating discussion of some of these ideas at its most recent meeting.

Some books I read this year

Nick Lane – The Vital Question: energy, evolution and the origins of complex life

This is as good as everyone says it is – well written and compelling. I particularly appreciated the focus on energy flows as the driver for life, and the way the book gives the remarkable chemiosmotic hypothesis the prominence it deserves. The hypothesis Lane presents for the way life might have originated on earth is concrete and (to me) plausible; what’s more important, it suggests some experimental tests.

Todd Feinberg and Jon Mallatt – The Ancient Origins of Consciousness: how the brain created experience

How many organisms can be said to be conscious, and when did consciousness emerge? Feinberg and Mallatt’s answers are bold: all vertebrates are conscious, and in all probability so are cephalopods and some arthropods. In their view, consciousness evolved in the Cambrian explosion, associated with an arms race between predators and prey, and driven by the need to integrate different forms of long-distance sensory perception to produce a model of an organism’s environment. Even if you don’t accept the conclusion, you’ll learn a great deal about the evolution of nervous systems and the way sense perceptions are organised in many different kinds of organisms.

David MacKay – Information theory, inference, and learning algorithms

This is a textbook, so not particularly easy reading, but it’s an unusually rich and individual one.

Physical limits and diminishing returns of innovation

Are ideas getting harder to find? This question is asked in a preprint with this title by the economists Bloom, Jones, Van Reenen and Webb, who attempt to quantify decreasing research productivity, showing for a number of fields that it is currently taking more researchers to achieve the same rate of progress. The paper is discussed in blogposts by Diane Coyle, who notes sceptically that the same thing was being said in 1983, and by Alex Tabarrok, who is more depressed.

Given the slowdown in productivity growth in the developed nations, which has steadily fallen from about 2.9% a year in 1970 to about 1.2% a year now, the notion is certainly plausible. But the attempt in the paper to quantify the decline is, I think, so crude as to be pretty much worthless – except inasmuch as it demonstrates how badly growth economists need to understand the nature of technological innovation at a higher level of detail and particularity than is reflected in their current model-building efforts.

The first example is the familiar one of Moore’s law in semiconductors, where over many decades we have seen exponential growth in the number of transistors on an integrated circuit. The authors argue that to achieve this, the total number of researchers has increased by a factor of 25 or so since 1970 (this estimate is obtained by dividing the R&D expenditure of the major semiconductor companies by an average researcher wage). This is very broadly consistent with a corollary of Moore’s law (sometimes called Rock’s law), which states that the capital cost of new generations of semiconductor fabs is also growing exponentially, with a four-year doubling time; this cost is now in excess of $10 billion. A large part of this is actually the capitalised cost of the R&D that goes into developing the new tools and plant for each generation of ICs.
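To make these estimates concrete, here is a minimal sketch – my own illustration, not the paper’s code – of how the “effective researchers” deflation and a fixed-doubling-time cost law play out; the wage figure and the eight-year look-back below are illustrative assumptions, not data from the paper.

```python
# A minimal sketch of the two estimates described above: the "effective researchers"
# measure (R&D spending deflated by an average researcher wage), and a Rock's-law
# back-projection of fab capital cost with a four-year doubling time.
# The wage and the look-back period are illustrative assumptions only.

def effective_researchers(rd_spend, avg_wage):
    """Express an R&D budget as a headcount-equivalent number of researchers."""
    return rd_spend / avg_wage

def fab_cost_years_ago(current_cost, years, doubling_time=4.0):
    """Back-project a fab's capital cost, assuming it doubles every `doubling_time` years."""
    return current_cost / 2 ** (years / doubling_time)

# e.g. a $10bn R&D budget at $200k per researcher-year is ~50,000 effective researchers
print(effective_researchers(10e9, 2e5))
# a $10bn fab today implies a cost of roughly $2.5bn two generations (eight years) ago
print(fab_cost_years_ago(10e9, 8))
```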

This increasing expense of successive generations of fabs simply reflects the increasing difficulty of creating intricate, accurate and reproducible structures on ever-decreasing length scales. The problem isn’t that ideas are harder to find; it’s that as these length scales approach the atomic, many more problems arise, and they need more effort to solve. It’s the fundamental difficulty of the physics that leads to diminishing returns, and at some point a combination of the physical barriers and the economics will first slow and then stop further progress in miniaturising electronics using this technology.

For the second example, it isn’t so much physical barriers as biological ones that lead to diminishing returns, but the effect is the same. The green revolution – a period of big increases in the yields of key crops like wheat and maize – was driven by creating varieties able to use large amounts of artificial fertiliser and to put much more of their growth into the useful parts of the plant. Modern wheat, for example, has very short stems – but there’s a limit to how short you can make them, and that limit has probably been reached now. So R&D efforts are likely to be focused on areas other than pure yield increases – on disease resistance and tolerance of poorer growing conditions (the latter likely to be more important as the climate changes, of course).

For their third example, the economists focus on medical progress. I’ve written before about the difficulties of the pharmaceutical industry, which has its own exponential law of progress. Unfortunately this one goes the wrong way, with the cost of developing new drugs increasing exponentially with time. The authors focus on cancer, and try to quantify declining returns by correlating research effort, as measured by papers published, with improvements in the five year cancer survival rate.

Again, I think the basic notion of diminishing returns is plausible, but this attempt to quantify it makes no sense at all. One obvious problem is that there are very long and variable lag times between when research is done, through the time it takes to test drugs and get them approved, to when they are in wide clinical use. To give one example, the ovarian cancer drug Lynparza was approved in December 2014, so it is conceivable that its effects might start to show up in 5 year survival rates some time after 2020. But the research it was based on was published in 2005. So the hope that there is any kind of simple “production function” linking an “input” of researchers’ time with an “output” of improved health (or faster computers, or increased productivity) is a non-starter*.

The heart of the paper is the argument that an increasing number of researchers is producing fewer “ideas”. But what can they mean by “ideas”? As we all know, there are good ideas and bad ideas, profound ideas and trivial ideas, ideas that really do change the world, and ideas that make no difference to anyone. The “representative idea” assumed by the economists really isn’t helpful here, and rather than clarifying their concept in the first place, they redefine it to fit their equation, stating, with some circularity, that “ideas are proportional improvements in productivity”.

Most importantly, the value of an idea depends on the wider technological context in which it is developed. People claim that Leonardo da Vinci invented the helicopter, but even if he’d drawn an accurate blueprint of a Chinook, it would have had no value without all the supporting scientific understanding and technological innovations that were needed to make building a helicopter a practical proposition.

Clearly, at any given time there will be many ideas. Most of these will be unfruitful, but every now and again a combination of ideas will come together with a pre-existing technical infrastructure and a market demand to make a viable technology. For example, integrated circuits emerged in the 1960s, when developments in materials science and manufacturing technology (especially photolithography and the planar process) made it possible to realise monolithic electronic circuits. Driven by customers with deep pockets and demanding requirements – the US defence industry – many refinements and innovations led to the first microprocessor in 1971.

Given a working technology and a strong market demand to create better versions of that technology, we can expect a period of incremental improvement, often very rapid. A constant rate of fractional improvement leads, of course, to exponential growth in quality, and that’s what we’ve seen over many decades for integrated circuits, giving us Moore’s law. The regularity of this improvement shouldn’t make us think it is automatic, though – it represents many brilliant innovations. Here, though, these innovations are coordinated and orchestrated so that in combination the overall rate of innovation is maintained. In a sense, the rate of innovation is set by the market, and the resources devoted to innovation increased to maintain that rate.
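To spell out the arithmetic behind that statement: if quality improves by a constant fraction $r$ each year, then after $t$ years

$$Q(t) = Q_0\,(1+r)^t ,$$

which is exponential growth, with a doubling time of $\ln 2 / \ln(1+r)$; a steady 41% improvement a year, for example, doubles quality every two years, since $1.41^2 \approx 2$.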

But exponential growth can never be sustained in a physical (or biological) system – some limit must always be reached. From about 1750 to 1850, the efficiency of steam engines increased exponentially, but despite many further technical improvements, this rate of progress slowed down in the second half of the 19th century – the second law of thermodynamics, through Carnot’s law, puts a fundamental upper limit on efficiency and as that limit is approached, diminishing returns set in. Likewise, the atomic scale of matter puts fundamental limits on how far the CMOS technology of our current integrated circuits can be pushed to smaller and smaller dimensions, and as those limits are approached, we expect to see the same sort of diminishing returns.
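For concreteness, the thermodynamic ceiling invoked here is Carnot’s: a heat engine taking in heat at absolute temperature $T_{\mathrm{hot}}$ and rejecting it at $T_{\mathrm{cold}}$ can never exceed the efficiency

$$\eta_{\max} = 1 - \frac{T_{\mathrm{cold}}}{T_{\mathrm{hot}}} ,$$

however ingeniously it is engineered. Once real engines approach a fixed ceiling like this, an exponential run of improvement has nowhere left to go.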

Economic growth didn’t come to an end in 1850 when the exponential rise in steam engine efficiencies started to level out, though. Entirely new technologies were developed – electricity, the chemical industry, the internal combustion engine powered motor car – which went through the same cycle of incremental improvement and eventual saturation.

The question we should be asking now is not whether the technologies that have driven economic growth in recent years have reached the point of diminishing returns – if they have, that is entirely natural and to be expected. It is whether enough entirely new technologies are now entering their infancy, from which they can take off with the sustained incremental growth that’s driven the economy in previous technology waves. Perhaps solar energy is in that state now; quantum computing perhaps hasn’t got there yet, as it isn’t clear how the basic idea can be implemented and whether there is a market to drive it.

What we do know is that growth is slowing, and has been doing so for some years. To this extent, this paper highlights a real problem. But a correct diagnosis of the ailment and design of useful policy prescriptions is going to demand a much more realistic understanding of how innovation works.

* If one insists on trying to build a model, the “production function” would need to be not a simple function but a functional, integrating functions representing different types of research and development effort over long periods of time.
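Purely as an illustration of what this footnote has in mind – the symbols are mine, not the paper’s – such a relationship might be written

$$\frac{\dot{A}(t)}{A(t)} = \sum_i \int_0^{\infty} w_i(\tau)\, f_i\!\big(R_i(t-\tau)\big)\,\mathrm{d}\tau ,$$

where $A(t)$ is productivity (or survival rates, or transistor counts), $R_i(t-\tau)$ is the effort devoted to research of type $i$ a time $\tau$ before the improvement appears, $f_i$ allows for diminishing returns within each type of research, and the lag distributions $w_i(\tau)$ can carry significant weight at delays of a decade or more – as the Lynparza example above suggests.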

Manufacturing *is* special, but it’s important to understand why

The politics of Trump and Brexit has drawn attention again to the phenomenon of “left-behind” communities. In the US rust belt and the UK’s northern cities, de-industrialisation and the loss of manufacturing jobs have stripped communities, not just of their economic base, but of their very sense of purpose.

But to some commentators, the focus on manufacturing is misguided sentimentality, an appeal to the discredited idea that the only proper work is making stuff in factories. These jobs, they say, have gone for ever, killed by a combination of technology and globalisation; the clock cannot be turned back and we must adjust to the new reality of service based economies, which produce economic value just as real as any widget.

I agree that the world has changed, but I want to argue that, despite that, manufacturing does have a special importance for the economic health of developed countries. We need to be clear, though, about why manufacturing matters – and the reason is neither sentimentality nor conservatism – or we’ll end up with bad and counter-productive policy prescriptions.

Manufacturing is important for three reasons. Firstly, over the long run, manufacturing innovation has consistently been the most reliable way of delivering sustained productivity growth, and this productivity growth spills over into other sectors and the economy more generally.

Secondly, centres of manufacturing sustain wider clusters in which tangible and intangible assets accumulate and reinforce their collective value, and where tacit knowledge is stored in networks of skilled people and effective organisations (what Shih and Pisano call the “manufacturing commons”). These networks include, not just the core manufacturers, but suppliers and maintainers of equipment, design consultancies, R&D, and so on, which in the long term are anchored by those manufacturing activities at the core of the cluster.

Of course, the same is true in other sectors too; this brings me to the third point, which is that the diversity of types of manufacturing leaves room for clusters to be geographically dispersed. Rebalancing the economy in favour of manufacturing will at the same time rebalance it geographically, reducing the gross regional imbalances in wealth and opportunities that are such a dangerous feature of the UK now.

Recognising these as the features of manufacturing that make it so important makes it clear what an industrial strategy to promote it should not try to do. Its aim should not be to prop up failing industries as they currently exist – the whole point of supporting manufacturing is as a focus for innovation. Neither should there be any expectation that a manufacturing resurgence will lead to large scale mass employment on the old model. If productivity growth is to be the motivation, then this will not lead directly to large numbers of new jobs.

The point is to create value, not, in the first instance, to create jobs. But the jobs will follow, in those sectors that will support the new manufacturing activities – in design, marketing, data analytics, professional services. In fact, the characteristic of the new manufacturing is precisely that the lines between manufacturing and its associated service activities are becoming more blurred.

So an industrial strategy to support the new manufacturing needs to have, at its heart, a focus on innovation and skills, and the goal of creating a self-sustaining ecosystem. This doesn’t mean that one can ignore history – the future manufacturing specialisms of a region will reflect its past, because the nature of the assets one has to build on, in terms of existing firms, institutions and skills, will reflect that past. But equally, an understanding of the transformations that technology is bringing is important too.

Manufacturing is changing, through automation and robotics, new materials and manufacturing techniques, and new modes of organising manufacturing processes in more reconfigurable and customisable ways. New business models are being developed which erode the distinction between traditional manufacturing and service industries, and underlying all these changes is the power of new digital technology, and the potential of large scale data analytics and machine learning. All these demand new (often digital) skills, better management practices, and more effective mechanisms by which new technologies diffuse widely through an existing business base.

Last summer, we began the process of defining what a modern industrial strategy might look like, to support a resurgence of high value manufacturing in the traditional manufacturing heartlands of South Yorkshire and Lancashire. The outcome of this is presented in the Science and Innovation Audit commissioned by the UK government, whose report you can read here – Driving productivity growth through innovation in high value manufacturing.

As the UK government develops its own industrial strategy, I hope the policies that emerge are designed to support the right sorts of manufacturing, for the right reasons.

Time to face facts about the UK’s productivity stagnation

One positive feature of the Autumn Statement that the Chancellor of the Exchequer presented yesterday was that he gave unprecedented prominence to the UK’s serious productivity problem. What was less positive was that he had no analysis of where the problem comes from, and his proposed measures to address it are entirely inadequate.

This matters. Our ability to harness technological and other improvements to produce more value from the same inputs is the only fundamental driver for real wage increases; productivity growth drives living standards. And we rely on productivity growth to meet the future promises we’re making now – to grow our way out of our debts, and to pay for our future pensions.

Before 2007, productivity had been growing steadily, at 2.2% a year, since before 1970. That ended with the financial crisis; in the seven years since, it has barely risen at all. The government, and its independent forecaster, the Office for Budget Responsibility, have spent that time confidently expecting an upturn, a resumption of the pre-crisis growth rate. But that upturn has never arrived. My plot shows that history: it shows the successive OBR predictions for a resumption of productivity growth, together with the successive disappointing outcomes.


Labour productivity according to successive Office for Budget Responsibility Economic and Fiscal Outlooks for the years indicated, showing estimates of productivity up to the time of publication of each report (solid lines) and predictions for the future (dotted lines). Data for 2010-2014 are from the October 2015 OBR Forecast Evaluation Report, for 2015 and March 2016 from the March 2016 OBR Economic and Fiscal Outlook, and for November 2016 from the November 2016 OBR EFO.
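To give a sense of the size of the gap that has opened up, here is a back-of-envelope calculation using only the figures above – the 2.2% a year pre-crisis trend, and the simplifying assumption (mine) that productivity has been exactly flat since 2007.

```python
# Shortfall of actual productivity (assumed flat) relative to the pre-crisis trend,
# with 2007 as the base year. The flat-line assumption is a simplification of
# "barely risen at all"; the 2.2% trend growth rate is the figure quoted above.

TREND_GROWTH = 0.022
BASE_YEAR = 2007

for year in (2010, 2013, 2016):
    trend_level = (1 + TREND_GROWTH) ** (year - BASE_YEAR)   # index, 2007 = 1.0
    shortfall = 1 - 1 / trend_level
    print(f"{year}: trend index {trend_level:.2f}, actual ~{shortfall:.0%} below trend")
```

On those assumptions the shortfall against trend is already getting on for a fifth by 2016, which is why the successive OBR forecasts in the plot look so uncomfortable.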

After seven years of anomalously slow productivity growth, it’s time to face facts and acknowledge this isn’t an anomaly, it’s the new normal.

What has science policy ever done for Barnsley?

Cambridge’s Centre for Science and Policy, where I am currently a visiting fellow, held a roundtable discussion yesterday on the challenges for science policy posed by today’s politics post-Brexit, post-Trump, introduced by Harvard’s Sheila Jasanoff and myself. This is an expanded and revised version of my opening remarks.

I’m currently commuting between Sheffield and Cambridge, so the contrast between the two cities is particularly obvious to me at the moment. Cambridgeshire is one of the few regions of the UK that is richer than the average, with a GVA per head of £27,203 (the skewness of the UK’s regional income distribution, arising from London’s extraordinary dominance, leads to the statistical oddness that most of the country is poorer than the average). Sheffield, on the other hand, is one of the less prosperous provincial cities, with a GVA per head of £19,958. But Sheffield doesn’t do so badly compared with some of the smaller towns and cities in its hinterland – Barnsley, Rotherham and Doncaster, whose GVA per head, at £15,707, isn’t much more than half of Cambridge’s prosperity.

This disparity in wealth is reflected in the politics. In the EU Referendum, Cambridge voted overwhelmingly – 74% – for Remain, while Barnsley, Rotherham and Doncaster voted almost as overwhelmingly – 68 or 69% – to Leave. The same story could be told of many other places in the country – Dudley, in the West Midlands; Teesside, in the North East; Blackburn, in the North West. This is not just a northern phenomenon, as the example of Medway, in the South East, shows. These are all places with poorly performing local economies, which have failed to recover from the deindustrialisation of the 1980s. They have poor levels of educational attainment, low participation in higher education, poor social mobility, low investment, low rates of business start-ups and growth – and they all voted overwhelmingly to leave the EU.

Somehow, all those earnest and passionate statements by eminent scientists and academics about the importance for science of remaining in the EU cut no ice in Barnsley. And why should they? We heard about the importance of EU funding for science, of the need to attract the best international scientists, of how proud we should be of the excellence of UK science. If Leave voters in Barnsley thought about science at all, they might be forgiven for thinking that science was to be regarded as an ornament to a prosperous society, when that prosperity was something from which they themselves were excluded.

Of course, there is another argument for science, which stresses its role in promoting economic growth. That is exemplified, of course, here in Cambridge, where it is easy to make the case that the city’s current obvious prosperity is strongly connected with its vibrant science-based economy. This is underpinned by substantial public sector research spending, which is then more than matched by a high level of private sector innovation and R&D, both from large firms and fast growing start-ups supported by a vibrant venture capital sector.

The figures for regional R&D bear this out. East Anglia has a total R&D expenditure of €1,388 per capita – it’s a highly R&D intensive economy. This is underpinned by the €472 per capita that’s spent in universities, government and non-profit laboratories, but is dominated by the €914 per capita spent in the private sector, directly creating wealth and economic growth. This is what a science-based knowledge economy looks like.

South Yorkshire looks very different. The total level of R&D is less than a fifth of the figure for East Anglia, at €244 per capita, and this is dominated by higher education, which carries out R&D worth €156 per capita. Business R&D is less than 10% of the figure for East Anglia, at €80 per capita. This is an economy in which R&D plays very little role outside the university sector.

An interesting third contrast is Inner London, which is almost as R&D intensive overall as East Anglia, with a total R&D expenditure of €1,130 per capita. But here the figure is dominated not by the private sector, which does €323 per capita R&D, but by higher education and government, at €815 per capita. A visitor to London from Barnsley, getting off the train at St Pancras and marvelling at the architecture of the new Crick Institute, might well wonder whether this was indeed science as an ornament to a prosperous society.

To be fair, governments have begun to recognise these issues of regional disparities. I’d date the beginning of this line of thinking back to the immediate period after the financial crisis, when Peter Mandelson returned from Brussels to take charge of the new super-ministry of Business, Innovation and Skills. Newly enthused about the importance of industrial strategy, summarised in the 2009 document “New Industry, New Jobs”, he launched the notion that the economy needed to be “rebalanced”, both sectorally and regionally.

We’ve heard a lot about “rebalancing” since. At the aggregate level there has not been much success, but, to be fair, the remarkable resurgence of the automobile industry perhaps does owe something to the measures introduced by Mandelson’s BIS and InnovateUK, and continued by the Coalition, to support innovation, skills and supply chain development in this sector.

One area in which there was a definite discontinuity in policy on the arrival of the Coalition government in 2010 was the abrupt abolition of the Regional Development Agencies. They were replaced by “Local Enterprise Partnerships”, rather loosely structured confederations of local government representatives and private sector actors (including universities), with private sector chairs. One good point about LEPs was that they tended to be centred on City Regions, which make more sense as economic entities than the larger regions of the RDAs, though this did introduce some political complexity. Their bad points were that they had very few resources at their disposal, they had little analytical capacity, and their lack of political legitimacy made it difficult for them to set any real priorities.

Towards the end of the Coalition government, the idea of “place” made an unexpected and more explicit appearance in the science policy arena. A new science strategy appeared in December 2014 – “Our Plan for Growth: Science and Innovation” – which listed “place” as one of five underpinning principles (the others being “Excellence, Agility, Collaboration, and Openness”).

What was meant by “place” here was, like much else in this strategy, conceptually muddled. On the one hand, it seemed to be celebrating the clustering effect, by which so much science was concentrated in places like Cambridge and London. On the other hand, it seemed to be calling for science investment to be more explicitly linked with regional economic development.

It is this second sense that has subsequently been developed by the new, all-Conservative government. The Science Minister, Jo Johnson, announced, in a speech in Sheffield, the notion of “One Nation Science” – the idea that science should be the route for redressing the big differences in productivity between regions in the UK.

The key instrument for this “place agenda” was to be the “Science and Innovation Audits” – assessments of the areas of strength in science and innovation in the regions, and suggestions for where opportunities might exist to use and build on these to drive economic growth.

I have been closely involved in the preparation of the Science and Innovation Audit for Sheffield City Region and Lancashire, which was recently published by the government. I don’t want to go into detail about the Science and Innovation Audit process or its outcomes here – instead I want to pose the general question about what science policy can do for “left behind” regions like Barnsley or Blackburn.

It seems obvious to me that “trophy science” – science as an ornament for a prosperous society – will be no help. And while the model of Cambridge – a dynamic, science based economy, with private sector innovation, venture capital, and generous public funding for research attracting global talent – would be wonderful to emulate, that’s not going to happen. It arose in Cambridge from the convergence of many factors over many years, and there are not many places in the world where one can realistically expect this to happen again.

Instead, the focus needs to be much more on the translational research facilities that will attract inward investment from companies operating at the technology frontier, on mechanisms to diffuse the use of new technology quickly into existing businesses, on technical skills at all levels, not just the highest. The government must have a role, not just in supporting those research facilities and skills initiatives, but also in driving the demand for innovation, as the customer for the new technologies that will be needed to meet its strategic goals (for a concrete proposal of how this might work, see Stian Westlake’s blogpost “If not a DARPA, then what? The Advanced Systems Agency” ).

The question “What have you lot ever done for Barnsley?” is one that I was directly asked, by Sir Steve Houghton, leader of Barnsley Council, just over a year ago, at the signing ceremony for the Sheffield City Region Devo Deal. I thought it was a good question, and I went to see him later with a considered answer. We have, in the Advanced Manufacturing Research Centre, a great translational engineering research facility that demonstrably attracts investment to the region and boosts the productivity of local firms. We have more than 400 apprentices in our training centre, most sponsored by local firms, not only getting a first class training in practical engineering (some of it delivered in collaboration with Barnsley College), but also with the prospect of a tailored path to higher education and beyond. We do schools outreach and public engagement, and we work with Barnsley Hospital to develop new medical technologies that directly benefit his constituents. I’m sure he still thinks we can do more, but he shouldn’t think we don’t care any more.

The referendum was an object lesson in how little the strongly held views of scientists (and other members of the elite) influenced the voters in many parts of the country. For them, the interventions in the referendum campaign by leading scientists had about as much traction as the journal Nature’s endorsement of Hillary Clinton did across the Atlantic. I don’t think science policy has done anything like enough to answer the question, what have you lot done for Barnsley … or Merthyr Tydfil, or Dudley, or Medway, or any of the many other parts of the country that don’t share the prosperity of Cambridge, or Oxford, or London. That needs to change now.

Optimism – and realism – about solar energy

Ten days ago I was fortunate enough to attend the Winton Symposium in Cambridge (where I’m currently spending some time as a visiting researcher in the Optoelectronics Group at the Cavendish Laboratory). The subject of the symposium was Harvesting the Energy of the Sun, and it had a stellar cast of international speakers addressing different aspects of the subject. What follows sums up some of what I learnt from the day about the future potential of solar energy, together with some of my own reflections.

The growth of solar power – and the fall in its cost – over the last decade has been spectacular. The world is currently producing about 10 billion standard 5 W silicon solar cells a year, at a cost of €1.29 each; the unsubsidised cost of solar power in the sunnier parts of the world is heading down towards 5 cents a kWh, and at current capacity and demand levels, we should see 1 TW of solar power capacity in the world by 2030, compared to current estimates that installed capacity will reach about 300 GW by the end of this year (with 70 GW of that added in 2016).

But that’s not enough. The Paris Agreement – ratified so far by major emitters such as the USA, China, India, France and Germany (with the UK promising to ratify by the end of the year – but President-Elect Trump threatening to take the USA out) – commits countries to taking action to keep the average global temperature rise from pre-industrial times below 2°C. Already the average temperature has risen by one degree or so, and the current rate of increase is about 0.17°C a decade. The point stressed by Sir David King was that it isn’t enough just to look at the consequences of the central prediction, worrying enough though they might be – one needs to insure against the very real risks of more extreme outcomes. What concerns governments in India and China, for example, is the risk of the successive failure of three rice harvests.

To achieve the Paris targets, the installed solar capacity we’re going to need by 2030 is estimated to be in the range of 8-10 TW nominal; this would require a 22-25% annual growth rate in manufacturing capacity.
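As a rough cross-check on those growth rates, here is a sketch using only the figures quoted above – roughly 300 GW installed at the end of 2016 and roughly 70 GW added this year, with panel retirements ignored; it is an order-of-magnitude check, not a reproduction of the speakers’ calculations.

```python
# Cumulative installed solar capacity in 2030 if annual manufacturing output grows
# at a constant rate from a 2016 baseline. Baseline figures are those quoted in the
# text; degradation and retirements are ignored.

def installed_capacity_2030_gw(annual_growth, installed_2016_gw=300.0, added_2016_gw=70.0):
    capacity = installed_2016_gw
    annual_output = added_2016_gw
    for year in range(2017, 2031):
        annual_output *= 1.0 + annual_growth
        capacity += annual_output
    return capacity

for growth in (0.22, 0.25, 0.28):
    print(f"{growth:.0%} annual growth -> {installed_capacity_2030_gw(growth) / 1000:.1f} TW installed by 2030")
```

On these assumptions, sustained growth of around 25% a year delivers roughly 8 TW by 2030 and something closer to 28% is needed for 10 TW; the exact numbers are sensitive to the baseline year and capacity figures assumed, but the broad message of the quoted range stands.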

Is nuclear power obsolete?

After a summer hiccough, the new UK government has finally signed the deal with the French nuclear company EDF and its Chinese financial backers to build a new nuclear power station at Hinkley Point. My belief that this is a monumentally bad deal for the UK has not changed since I wrote about it three years ago, here: The UK’s nuclear new build: too expensive, too late.

The way the deal has been structured simultaneously maximises the cost to UK citizens while minimising the benefits that will accrue to UK industry. It’s the fallacy of the private finance initiative exposed by reductio ad absurdum; the government has signed up to a 35 year guarantee of excessively high prices for UK consumers, driven by the political desire to keep borrowing off the government’s balance sheet and maintain the fiction that nuclear power can be efficiently delivered by the private sector.

But there’s another argument against the Hinkley deal that I want to look at more critically – the idea that nuclear power is now obsolete because, with new technologies like wind, solar, electric cars and so on, we are now, or soon will be, able to supply the 3.2 GW of low-carbon power that Hinkley promises at lower marginal cost. I think this marginal cost argument is profoundly wrong – given the need to make substantial progress in decarbonising our energy system over the next thirty years, what’s important isn’t the marginal cost of the next GW of low-carbon power, it’s the total cost (and indeed feasibility) of replacing the 160 GW or so that represents our current fossil fuel based consumption (not to mention replacing the 9.5 GW of existing nuclear capacity, fast approaching the end of its working lifetime).

To get a sense of the scale of the task, in 2015 the UK used about 2400 TWh of primary energy inputs. 83% of that was in the form of fossil fuels – roughly 800 TWh each of oil and gas, and a bit less than 300 TWh of coal. The 3.2 GW output of Hinkley would contribute 30 TWh a year at full capacity, while the combined output of all wind (onshore and offshore) and solar generation in 2015 was 48 TWh. So if we increased our solar and wind capacity by a bit more than half, we could replace Hinkley’s contribution; this is indeed probably doable, and given the stupidly expensive nature of the Hinkley deal, we might well be able to do it more cheaply.

But that’s not all we need to do, not by a long way. If we are serious about decarbonising our energy supply (and we should be: for my reasons, please read this earlier post, Climate change: what do we know for sure, and what is less certain?), we need to find not 30 TWh a year of low-carbon energy, but more like 1500 TWh. It’s not one Hinkley Point we need, but 50 of them.
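The arithmetic behind “50 of them” is worth making explicit. This sketch uses only the figures above – 3.2 GW of capacity and a need for roughly 1500 TWh a year of low-carbon energy – and assumes the plant runs flat out, so the true number of plants would if anything be a little higher.

```python
# Back-of-envelope check of the "50 Hinkley Points" figure, using the numbers quoted
# in the text: 3.2 GW of capacity running continuously, and ~1500 TWh/year of
# low-carbon energy needed. A real load factor below 100% would push the count up.

HOURS_PER_YEAR = 8760

hinkley_capacity_gw = 3.2
annual_output_twh = hinkley_capacity_gw * HOURS_PER_YEAR / 1000   # ~28 TWh/year
low_carbon_needed_twh = 1500

print(f"Hinkley C at full output: {annual_output_twh:.0f} TWh/year")
print(f"Hinkley-sized plants needed: {low_carbon_needed_twh / annual_output_twh:.0f}")
```

Run as written, this comes out at a little over fifty plants – the round “50” in the text.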

What can’t be stressed too often, in thinking about the UK’s energy supply, is that most of the energy we use (82% in 2015) is not in the form of electricity, but directly burnt oil and gas.

The Rose of Temperaments

The colour of imaginary rain
falling forever on your old address…

Helen Mort

“The Rose of Temperaments” was a colour diagram devised by Goethe in the late 18th century, which matched colours with associated psychological and human characteristics. The artist Paul Evans has chosen this as the title for a project which forms part of Sheffield University’s Festival of the Mind; for it, six poets have each written a sonnet associated with a colour. Poems by Angelina D’Roza and A.B. Jackson have already appeared on the project’s website; the other four will be published there over the next few weeks, including the piece by Helen Mort from which my opening excerpt is taken.

Goethe’s theory of colour was a comprehensive cataloguing of the affective qualities of colours as humans perceive them, conceived in part as a reaction to the reductionism of Newton’s optics, much in the same spirit as Keats’s despair at the tendency of Newtonian philosophy to “unweave the rainbow”.

But if Newton’s aim was to remove the human dimension from the analysis of colour, he didn’t entirely succeed. In his book “Opticks”, he draws one important distinction, and leaves one unsolved mystery. He describes his famous experiments with a prism, which show that white light can be split into its component colours. But he checks himself to emphasise that when he talks about a ray of red light, he doesn’t mean that the ray itself is red; rather, it has the property of producing the sensation of red when perceived by the eye.

The mystery is this – when we talk about “all the colours of the rainbow”, a moment’s thought tells us that a rainbow doesn’t actually contain all the colours there are. Newton recognised that the colour we now call magenta doesn’t appear in the rainbow – but it can be obtained by mixing two different colours of the rainbow, blue and red.

All this is made clear in the context of our modern physical theory of colour, which was developed in the 19th century, first by Thomas Young, and then in detail by James Clerk Maxwell. They showed, as most people know, that one can make any colour by mixing the three primary colours – red, green and blue – in different proportions.

Maxwell also deduced the reason for this – he realised that the human eye must comprise three separate types of light receptors, with different sensitivities across the visible spectrum, and that it is through the differential response of these different receptors to incident light that the brain constructs the sensation of colour. Colour, then, is not an intrinsic property of light itself, it is something that emerges from our human perception of light.
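To make Maxwell’s argument concrete, here is a toy model – my own illustration, not anything from our research – in which three receptor types with stylised Gaussian sensitivities respond to different spectra; the peak wavelengths and bandwidth are chosen for clarity rather than realism.

```python
# A toy model of trichromatic perception: three stylised receptor types (not the
# real human cone curves) respond to monochromatic light or to mixtures of it.

import math

RECEPTOR_PEAKS_NM = {"long": 600.0, "medium": 540.0, "short": 450.0}
WIDTH_NM = 50.0

def sensitivity(wavelength_nm, peak_nm):
    """Response of one receptor type to monochromatic light of unit intensity."""
    return math.exp(-((wavelength_nm - peak_nm) / WIDTH_NM) ** 2)

def receptor_responses(spectrum):
    """Summed response of each receptor type to a spectrum given as {wavelength_nm: intensity}."""
    return {name: sum(intensity * sensitivity(w, peak) for w, intensity in spectrum.items())
            for name, peak in RECEPTOR_PEAKS_NM.items()}

# A mixture of red (650 nm) and blue (450 nm) light -- "magenta" -- excites the long
# and short receptors appreciably but the middle one hardly at all.
print("red + blue:", receptor_responses({650.0: 1.0, 450.0: 1.0}))

# No single wavelength reproduces that pattern: anything that excites both outer
# receptors lies between their peaks, where the middle receptor responds most strongly.
for w in (450, 500, 550, 600, 650):
    print(f"{w} nm:", receptor_responses({float(w): 1.0}))
```

In this toy model the red-plus-blue mixture produces a pattern of receptor responses that no single wavelength can match – which is the sense in which magenta is missing from the rainbow.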

In the last few years, my group has been exploring the relationship between biology and colour from the other end, as it were. In our work on structural colour, we’ve been studying the microscopic structures that in beetle scales and bird feathers produce striking colours without pigments, through complex interference effects. We’re particularly interested in the non-iridescent colour effects that are produced by some structures that combine order and randomness in rather a striking way; our hope is to be able to understand the mechanism by which these structures form and then reproduce them in synthetic systems.

What we’ve come to realise as we speculate about the origin of these biological mechanisms is that to understand how these systems for producing biological coloration have evolved, we need to understand something about how different animals perceive colour, which is likely to be quite alien to our own perception. Birds, for example, have not three different types of colour receptors, as humans do, but four. This means not just that birds can detect light outside the human range of perception, but that the richness of their colour perception has an extra dimension.

Meanwhile, we’ve enjoyed having Paul Evans as an artist-in-residence in my group, working with my colleagues Dr Andy Parnell and Stephanie Burg on some of our x-ray scattering experiments. In addition to the poetry and colour project, Paul has put together an exhibition for Festival of the Mind, which can be seen in Sheffield’s Millennium Gallery for a week from 17th September. Paul, Andy and I will also be doing a talk about colour in art, physics and biology on September 20th, at 5 pm in the Spiegeltent, Barker’s Pool, Sheffield.

Your mind will not be uploaded – the shorter version

The idea that it’s going to be possible, in the foreseeable future, to “upload” a human mind to a computer is, I believe, quite wrong. The difficulties are both practical and conceptual, as I explained at length and in technical detail in my earlier post Your mind will not be uploaded.

I’ve now summarised the argument against mind uploading in much shorter and more readable form in a piece for The Conversation – a syndication site for academic writers. I’m pleased to see that the piece – Could we upload a brain to a computer – and should we even try? – has had more than 100,000 readers.

It’s led to another career milestone, one that I’m a little more ambivalent about – my first by-line on the Daily Mail website: Would you upload YOUR brain to a computer? Experts reveal what it would take to live forever digitally. There was also a translation into Spanish in the newspaper El Pais: ¿Podríamos cargar nuestro cerebro en un ordenador?, and into German in the online magazine Netzpiloten: Könnten wir ein Gehirn hochladen – und sollten wir es überhaupt versuchen?