What hope against dementia?

An essay review of Kathleen Taylor’s book “The Fragile Brain: the strange, hopeful science of dementia”, published by OUP.

I am 56 years old; the average UK male of that age can expect to live to 82, at current levels of life expectancy. This, to me, seems good news. What’s less good, though, is that if I do reach that age, there’s about a 10% chance that I will be suffering from dementia, if the current prevalence of that disease persists. If I were a woman, at my age I could expect to live to nearly 85; the three extra years come at a cost, though. At 85, the chance of a woman suffering from dementia is around 20%, according to the data in the Alzheimer’s Society’s Dementia UK report. Of course, for many people of my age, dementia isn’t a focus for their own future anxieties, but a pressing everyday reality, as they look after parents or elderly relatives who are among the 850,000 people in the UK currently living with dementia. I give thanks that I have been spared this myself, but it doesn’t take much imagination to see how distressing this devastating and incurable condition must be, both for the sufferers and for their relatives and carers. Dementia is surely one of the most pressing issues of our time, so Kathleen Taylor’s impressive overview of the subject is timely and welcome.

There is currently no cure for the most common forms of dementia – such as Alzheimer’s disease – and in some ways the prospect of a cure seems further away now than it did a decade ago. The number of drugs demonstrated to cure or slow down Alzheimer’s disease remains at zero, despite billions of dollars having been spent on research and drug trials, and it’s arguable that we understand less now about the fundamental causes of these diseases than we thought we did a decade ago. If the prevalence of dementia remains unchanged, by 2051 the number of dementia sufferers in the UK will have increased to 2 million.

This increase is the dark side of the otherwise positive story of improving longevity, because the prevalence of dementia increases roughly exponentially with age. To return to my own prospects as a 56 year old male living in the UK, one can make another estimate of my remaining lifespan, adding the assumption that the increases in longevity we’ve seen recently continue. On the high longevity estimates of the Office for National Statistics, an average 56 year old man could expect to live to 88 – but at that age, there would be a 15% chance of suffering from dementia. For women, the prediction is even better for longevity – and worse for dementia – with a life expectancy of 91, but a 20% chance of dementia (there is a significantly higher prevalence of dementia for women than men at a given age, as well as systematically higher life expectancy). To look even further into the future, a girl turning 16 today can expect to live to more than 100 in this high longevity scenario – but that brings her chances of suffering dementia towards 50/50.
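
To make the arithmetic concrete, here’s a minimal sketch in Python. The anchor prevalence and doubling time are illustrative assumptions, not the Dementia UK report’s exact age- and sex-specific figures, so it reproduces the shape of the curve rather than the precise numbers quoted above:

```python
# Toy exponential model of dementia prevalence by age.
# Assumptions (illustrative only): ~2% prevalence at 65, roughly
# doubling with every ~6 further years of age.

def dementia_prevalence(age, p65=0.02, doubling_years=6.0):
    """Prevalence doubles every `doubling_years` past age 65."""
    if age < 65:
        return 0.0
    return min(1.0, p65 * 2 ** ((age - 65) / doubling_years))

for age in (82, 85, 88, 91, 100):
    print(f"age {age}: ~{dementia_prevalence(age):.0%}")
```

The point the model makes is that each extra few years of life expectancy does not add risk linearly – it multiplies it.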

What hope is there for changing this situation, and finding a cure for these diseases? Dementias are neurodegenerative diseases; as they take hold, nerve cells become dysfunctional and then die off completely. They have different effects, depending on which part of the brain and nervous system is primarily affected. The most common is Alzheimer’s disease, which accounts for more than half of dementias in the over-65s, and begins by affecting the memory, and then progresses to a more general loss of cognitive ability. In Alzheimer’s, it is the parts of the brain cortex that deal with memory that atrophy, while in frontotemporal dementia it is the frontal lobe and/or the temporal cortex that are affected, resulting in personality changes and loss of language. In motor neurone diseases (of which the most common is ALS, amyotrophic lateral sclerosis, also known as Lou Gehrig’s disease), it is the nerves in the brainstem and spinal cord that control the voluntary movement of muscles that are affected, leading to paralysis, breathing difficulties, and loss of speech. The mechanisms underlying the different dementias and other neurodegenerative diseases differ in detail, but they have features in common and the demarcations between them aren’t always well defined.

It’s not easy to get a grip on the science that underlies dementia – it encompasses genetics, cell and structural biology, immunology, epidemiology, and neuroscience in all its dimensions. Taylor’s book gives an outstanding and up-to-date overview of all these aspects. It’s clearly written, but it doesn’t shy away from the complexity of the subject, which makes it not always easy going. The book concentrates on Alzheimer’s disease, taking that story from the eponymous doctor who first identified the disease in 1901.

Dr Alois Alzheimer identified the brain pathology characteristic of Alzheimer’s disease – including the characteristic “amyloid plaques”. These consist of strongly associated, highly insoluble aggregates of protein molecules; subsequent work has identified both the protein involved and the structure it forms. The structure of amyloids – in which protein chains are bound together in sheets by strong hydrogen bonds – can be found in many different proteins (I discussed this a while ago on this blog, in Death, Life and Amyloids), and when these structures occur in biological systems they are usually associated with disease states. In Alzheimer’s, the particular protein involved is called Aβ; this is a fragment of a larger protein of unknown function called APP (for amyloid precursor protein). Genetic studies have shown that mutations involving the genes coding for APP, and for the enzymes that snip Aβ off the end of APP, lead to more production of Aβ and more amyloid formation, and are associated with increased susceptibility to Alzheimer’s disease. The story seems straightforward, then – more Aβ leads to more amyloid, and the resulting build-up of insoluble crud in the brain leads to Alzheimer’s disease. This is the “amyloid hypothesis”, in its simplest form.

But things are not quite so simple. Although the genetic evidence linking Aβ to Alzheimer’s is strong, there are doubts about the mechanism. It turns out that the link between the presence of the amyloid plaques themselves and the disease symptoms isn’t as strong as one might expect, so attention has turned to the possibility that the neurotoxic agents are not the plaques but their precursors – oligomers, smaller units in which a handful of Aβ molecules come together. Yet the mechanism by which these oligomers might damage nerve cells remains uncertain.

Nonetheless, the amyloid hypothesis has driven a huge amount of scientific effort, and it has motivated the development of a number of potential drugs, which aim to interfere in various ways with the processes by which Aβ is formed. These drugs have, so far without exception, failed to work. Between 2002 and 2012 there were 413 trials of drugs for Alzheimer’s; the failure rate was 99.6%. The single successful new drug – memantine – is a cognitive enhancer which can relieve some symptoms of Alzheimer’s, without modifying the cause of the disease. This represents a colossal investment of money – to be measured at least in the tens of billions of dollars – for no return so far.

In November last year, Eli Lilly announced that its anti-Alzheimer’s antibody, solanezumab, which was designed to bind to Aβ, failed to show a significant effect in phase 3 trials. After the failure this February of another phase 3 trial, of Merck’s beta-secretase inhibitor verubecestat, designed to suppress the production of Aβ, the medicinal chemist and long-time commentator on the pharmaceutical industry Derek Lowe wrote: “Beta-secretase inhibitors have failed in the clinic. Gamma-secretase inhibitors have failed in the clinic. Anti-amyloid antibodies have failed in the clinic. Everything has failed in the clinic. You can make excuses and find reasons – wrong patients, wrong compound, wrong pharmacokinetics, wrong dose, but after a while, you wonder if perhaps there might not be something a bit off with our understanding of the disease.”

What is perhaps even more worrying is that the supply of drug candidates currently going through the earlier stages of the process – phase 1 and phase 2 trials – looks like it is starting to dry up. A 2016 review of the Alzheimer’s drug pipeline concludes that there are simply not enough drugs in phase 1 trials to give hope that new treatments are coming through in sufficient numbers to survive the massive attrition rate we’ve seen in Alzheimer’s drug candidates (for a drug to get to market by 2025, it would need to be in phase 1 trials now). One has to worry that we’re running out of ideas.

One way we can get a handle on the disease is to step back from the molecular mechanisms, and look again at the epidemiology. It’s clear that there are some well-defined risk factors for Alzheimer’s, which point towards some of the other things that might be going on, and suggest practical steps by which we can reduce the risks of dementia. One of these risk factors is type 2 diabetes, which, according to data quoted by Taylor, increases the risk of dementia by 47%. Another is the presence of heart and vascular disease. The exact mechanisms at work here are uncertain, but on general principles these risk factors are not surprising. The human brain is a colossally energy-intensive organ, and anything that compromises the delivery of glucose and oxygen to its cells will place them under stress.

One other risk factor that Taylor does not discuss much is air pollution. There is growing evidence (summarised, for example, in a recent article in Science magazine) that poor air quality – especially the sub-micron particles produced in the exhausts of diesel engines – is implicated in Alzheimer’s disease. It’s been known for a while that environmental nanoparticles such as the ultrafine particulates formed in combustion can lead to oxidative stress, inflammation and thus cardiovascular disease (I wrote about this here more than ten years ago – Ken Donaldson on nanoparticle toxicology). The relationship between pollution and cardiovascular disease would by itself indicate an indirect link to dementia, but there is in addition the possibility of a more direct link, if, as seems possible, some of these ultrafine particles can enter the brain directly.

There’s a fairly clear prescription, then, for individuals who wish to lower their risk of suffering from dementia in later life. They should eat well, keep their bodies and minds well exercised, and as much as possible breathe clean air. Since these are all beneficial for health in many other ways, it’s advice that’s worth taking, even if the links with dementia turn out to be less robust than they seem now.

But I think we should be cautious about putting the emphasis entirely on individuals taking responsibility for improving their own lifestyles. Public health measures and sensible regulation have a huge role to play, and are likely to be very cost-effective ways of reducing what otherwise will be a very expensive burden of disease. It’s not easy to eat well, especially if you’re poor; the food industry needs to take more responsibility for the products it sells. And urban pollution can be controlled by the kind of regulation that leads to innovation – I’m increasingly convinced that the driving force for accelerating the uptake of electric vehicles is going to be pressure from cities like London and Paris, Los Angeles and Beijing, as the health and economic costs of poor air quality become harder and harder to ignore.

Public health interventions and lifestyle improvements do hold out the hope of lowering the projected numbers of dementia sufferers from that figure of 2 million by 2051. But, for those who are diagnosed with dementia, we have to hope for the discovery of a breakthrough in treatment, a drug that does successfully slow or stop the progression of the disease. What needs to be done to bring that breakthrough closer?

Firstly, we should stop overstating the progress we’re making now, and stop hyping “breakthroughs” that really are nothing of the sort. The UK’s newspapers seem to be particularly guilty of doing this. Take, for example, this report from the Daily Telegraph, headlined “Breakthrough as scientists create first drug to halt Alzheimer’s disease”. Contrast that with the reporting in the New York Times of the very same result – “Alzheimer’s Drug LMTX Falters in Final Stage of Trials”. Newspapers shouldn’t be in the business of peddling false hope.

Another type of misguided optimism comes from Silicon Valley’s conviction that all that’s required to conquer death is a robust engineering “can-do” attitude. “Aubrey de Grey likes to compare the body to a car: a mechanic can fix an engine without necessarily understanding the physics of combustion”, a recent article on Silicon Valley’s quest to live for ever comments about the founder of the Valley’s SENS Foundation (the acronym is for Strategies for Engineered Negligible Senescence). Removing extracellular junk – amyloids – is point 6 in the SENS Foundation’s 7-point plan for eternal life.

But the lesson of several hundred failed drug trials is that we do need to understand the science of dementia more before we can be confident of treating it. “More research is needed” is about the lamest and most predictable thing a scientist can ever say, but in this case it is all too true. Where should our priorities lie?

It seems to me that hubristic mega-projects to simulate the human brain aren’t going to help at all here – they consider the brain at too high a level of abstraction to help disentangle the complex combination of molecular events that is disabling and killing nerve cells. We need to take into account the full complexity of the biological environments that nerve cells live in, surrounded and supported by glial cells like astrocytes, whose importance may have been underrated in the past. The new genomic approaches have already yielded powerful insights, and techniques for imaging the brain in living patients – magnetic resonance imaging and positron emission tomography – are improving all the time. We should certainly sustain the hope that new science will unlock new treatments for these terrible diseases, but we need to do the hard and expensive work to develop that science.

In my own university, the Sheffield Institute for Translational Neuroscience focuses on motor neurone disease/ALS and other neurodegenerative diseases, under the leadership of an outstanding clinician scientist, Professor Dame Pam Shaw. The University, together with Sheffield’s hospital, is currently raising money for a combined MRI/PET scanner to support this and other medical research work. I’m taking part in one fundraising event in a couple of months with many other university staff – attempting to walk 50 miles in less than 24 hours. You can support me in this through this JustGiving page.

Trade, Power and Innovation

Trade and its globalisation are at the top of the political agenda now. After decades in which national economies have become more and more entwined, populist politicians are questioning the benefits of globalisation. Meanwhile in the UK, we are embarked on a process of turning our backs on our biggest trading partner in the quest for a new set of global relationships, which, to listen to some politicians’ rhetoric, will bring back the days of Britain as a global trading giant. There’s no better time, then, to get some historical perspective on all this, so I’ve just finished reading Ronald Findlay & Kevin H. O’Rourke’s book Power and Plenty: Trade, War, and the World Economy in the Second Millennium – a world history of a millennium of trade globalisation.

The history of world trade is one part of a history of world economic growth. Basic economics tells us that trade in itself leads to economic growth – communities that trade with each other on an equal basis mutually benefit, because they can each specialise in what they’re best at doing.
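
A stylised sketch of that argument, in the spirit of Ricardo’s classic example (the numbers are invented for illustration):

```python
# Two communities, two goods, 100 units of labour each.
# `cost` is the labour needed per unit of output (lower = better);
# each community is relatively better at one good.
labour = 100.0
cost = {"A": {"cloth": 1.0, "wine": 3.0},
        "B": {"cloth": 3.0, "wine": 1.0}}

def total_output(allocation):
    """allocation[community][good] = share of labour given to that good."""
    totals = {"cloth": 0.0, "wine": 0.0}
    for community, shares in allocation.items():
        for good, share in shares.items():
            totals[good] += share * labour / cost[community][good]
    return totals

# Self-sufficiency: each community splits its labour 50/50.
print(total_output({"A": {"cloth": 0.5, "wine": 0.5},
                    "B": {"cloth": 0.5, "wine": 0.5}}))
# -> cloth ≈ 66.7, wine ≈ 66.7

# Specialisation and trade: each makes only what it is best at.
print(total_output({"A": {"cloth": 1.0, "wine": 0.0},
                    "B": {"cloth": 0.0, "wine": 1.0}}))
# -> cloth = 100.0, wine = 100.0 — more of both goods in total
```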

But trade also drives innovation, the other mainspring of economic growth. The development of larger markets makes innovation worthwhile – the British industrial revolution would probably have fizzled out early if the new manufactured goods were restricted to home markets. Ideas and the technologies that are based on them diffuse along with traded goods. And the availability of attractive new imported goods creates demand and drives innovation to provide domestically produced substitutes. This was certainly the case in England in the 18th century, when the popularity of textiles from India and porcelain from China was so important in stimulating the domestic cotton and ceramics industries.

This view of trade is fundamentally benign, but one of the key points of the book is to insist that in history, the opening up of trade has often been a very violent process – the plenty that trade brings has come from military power.

The direct, organised, large-scale involvement of Western European powers in trade in the Far East was pioneered by the Dutch East India Company (VOC), formed in 1602.

McLaren comes to Sheffield

Last week saw the announcement that the high end car manufacturer McLaren Automotive is to open a new plant in Sheffield, to make the carbon fibre chassis assemblies for their sports cars. It was good to see that the extensive press reporting of this development (see e.g. the Guardian, the BBC and the FT (£)) gave prominence to the role of the University of Sheffield’s Advanced Manufacturing Research Centre (AMRC) in attracting this investment. The production facility will be located adjacent to the AMRC, in what’s now a growing cluster of facilities for both production and research and development in various high value manufacturing sectors, and the expansion of the AMRC’s existing Composites Research Centre will support innovation in composites manufacturing technology. The focus in some news reports on the first McLaren apprentices, who will be trained in the AMRC Training Centre, is a nice illustration of the role of the AMRC in ensuring that McLaren will have the skilled people it needs.

This investment has been a long time cooking, and I know how much work was put in by people at the AMRC, the Sheffield City Region LEP and Sheffield City Council to make it happen. A sceptic might ask, though, why is everyone getting so excited about a mere 200 new jobs? After all, a recent estimate suggested that to catch up with the average UK performance, Sheffield City Region needed to find 70,000 new jobs, a good proportion of those being high-skilled, high paid roles.

They are right to be excited; this illustrates some of the arguments I’ve been making about the importance of manufacturing. Sheffield, like most UK cities outside London and the South East, has a productivity problem; that means the focus of industrial strategy should not in the first instance be on bringing jobs to the region, but on bringing value. An investment by a company like McLaren, which operates at the technological frontier in a very high value sector, has two beneficial effects. The direct effect is that it brings value into the region, and the very high productivity jobs it provides will by themselves raise the average.

But the indirect effects are potentially even more important. Sheffield, like other cities, has a problem of a very wide dispersion in productivity performance between the best firms in a sector like manufacturing, and a long tail of less productive firms. National and international evidence suggests that the gap between the technological leaders and the laggards is widening, and that this is a major ingredient of slowing productivity growth. The presence of technologically leading firms like McLaren will help the existing manufacturing business base in Sheffield to raise its game, through access to more skilled people, through the expansion of shared research facilities such as the AMRC, and through the demands McLaren will make on the firms that want to sell stuff to it.

The McLaren investment, then, is an exemplar of the approach to regional industrial strategy we’ve been arguing for, for example in the Sheffield City Region/Lancashire Science and Innovation Audit, Driving productivity growth through innovation in high value manufacturing. Our argument was that we should develop open R&D facilities with a strong focus on translation, with very strong links both to the research base and to companies large and small, and that we should focus on developing skills in a way that joins up the landscape from apprentice-level technical training of the highest quality, through degree and higher degree level education in technology and management. It’s for this reason that the University of Sheffield has created a large scale apprenticeship programme, in partnership with business and local FE colleges, through its AMRC Training Centre. This focus on innovation and skills, we argued, would have two effects – it would in itself improve the competitiveness of the existing business base, and it would attract inward investment from internationally leading companies new to the region.

But to what end is all this skill and innovation being put? Environmentally conscious observers might wonder whether making petrol-guzzling super-cars for the super-rich should be our top priority. As someone whose interest in motor-sports is close to zero, I’m the wrong person to look to for enthusiasm for fast cars. I note that for the price of the cheapest model of McLaren sports car, I could buy more than a hundred of the cars I drive (a 2001 Toyota Yaris). The counter-argument, though, is that it’s in these very high end cars that innovative new technologies can be introduced; then, as manufacturing experience is gained, costs fall and scales increase to the point where the new technologies can be more widely available. The role of Tesla in accelerating the wider uptake of electric vehicles is a good example of this.

The technology McLaren will be developing is the use of composites. The driver here is reducing weight – weight is the key to fuel efficiency in both cars and aeroplanes, and carbon fibre is, for its weight, the strongest and stiffest material we know (carbon nanotubes and graphene feature the same sp2 carbon-carbon bonds, so are similar in stiffness, but could be stronger if they can be made with fewer defects, as I discussed a few years ago here). But carbon fibre composites are still not widely used outside the money-no-object domain of military aerospace; they’re expensive, both in terms of the basic materials cost and, perhaps more importantly, in the cost of the manufacturing processes.
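
To put rough numbers on “strongest and stiffest for its weight”, here’s a quick specific-stiffness comparison; the figures are textbook order-of-magnitude values, not supplier data:

```python
# Rough specific-stiffness comparison (Young's modulus / density).
materials = {
    # name: (E in GPa, density in g/cm^3)
    "steel":                          (200.0, 7.85),
    "aluminium alloy":                (70.0, 2.70),
    "CFRP laminate (unidirectional)": (135.0, 1.60),
    "carbon fibre (along the fibre)": (230.0, 1.80),
}

for name, (E, rho) in materials.items():
    # steel and aluminium come out almost identical (~26);
    # carbon fibre is several times stiffer for its weight
    print(f"{name:32s} E/rho ≈ {E / rho:5.0f} GPa/(g/cm^3)")
```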

The successful and most efficient use of carbon fibre composites also needs a very different design approach. When composites engineers talk about “black metal”, they’re not talking about dubious Nordic rock bands; it’s a derogatory term for a design approach which treats the composite as if it were a metal. But composites are fundamentally anisotropic – like a three dimensional version of a textile – and those properties should be not just taken account of but exploited to use the material to its full effect (as an old illustration of this, there’s a great story in Gordon’s New Science of Strong Materials about the way Madeleine Vionnet’s invention of the bias cut for dressmaking influenced post-war British missile design).

It’s my hope, then, that McLaren’s arrival in Sheffield will make a difference to the local economy far greater than just through adding some jobs, positive though that is. It’s another major step in the revitalising of the high value manufacturing sector in our region.

Steps towards an industrial strategy

It’s impossible for a government to talk about industrial strategy in the UK without mentioning British Leyland, the auto conglomerate effectively nationalised after going bankrupt in 1975, and which finally expired in 2007. As everyone knows, British Leyland illustrates the folly of governments “picking winners”, which inevitably produces outcomes like cars with square steering wheels. So it’s not surprising that the government’s latest discussion document, Green Paper: Building our Industrial Strategy, begins with a disclaimer – this isn’t a 1970s industrial strategy, but a new vision, a modern industrial strategy that doesn’t repeat the mistakes of the past.

The document isn’t actually a strategy yet, and it’s a stretch to describe much of it as new. But it is welcome, nonetheless; its analysis of the UK economy’s current position is much more candid and clear-sighted about its problems than previous government documents have felt able to be. Above all, the document focuses on the UK’s lamentable recent productivity performance (illustrated in the graph below), and the huge disparities between the performances of the UK’s different regional economies. It puts science and innovation as the first “pillar” of the strategy, and doesn’t pull punches about the current low levels of both government and private sector support for research.

UK productivity has grown less over the last ten years than over any previous decade since the late 19th century. Decadal average labour productivity growth; data from Thomas, R and Dimsdale, N (2016), “Three Centuries of Data – Version 2.3”, Bank of England.

It is a consultation document, and unlike many such, the questions don’t give the impression that the answer is already known – it does read as a genuine invitation to contribute to policy development. And what is very welcome indeed are the strong signals of high level political support: the document was launched by the Prime Minister, as “a critical part of our plan for post-Brexit Britain”, and as an exemplar of a new, active approach to government. This is in very sharp contrast to the signals coming out of the previous Conservative government.

How should we judge the success of any industrial strategy? Again, the strategy is admirably clear about how it should be judged – the Secretary of State states the objective as being “to improve living standards and economic growth by increasing productivity and driving growth across the whole country.”

I agree with this. There’s a corollary, though. Our existing situation – stagnant productivity growth, gross regional disparities in prosperity – tells us this – whatever we’ve been doing up to now, it hasn’t worked.

Industrial strategy over the decades

This is where it becomes important to look at what’s proposed in the light of what’s gone before.

The Higher Education and Research Bill, the Haldane principle, and the autonomy of science

The UK government’s Higher Education and Research Bill is working its way through parliament – it’s currently at the committee stage in the House of Lords, where it’s encountering some turbulence. A recent newspaper article by (Lord) Chris Patten – “Leave state control of Universities to China” – gives a sense of how some are perceiving the issues … “But the bill that he [Jo Johnson, Minister for Universities, Science, Research and Innovation] recommends we swallow gives the secretary of state greater power than ever to direct the course of research.”

Any discussion of who has the power to direct government funded research in the UK will very soon invoke something called the “Haldane principle”, which is held by many to be a quasi-constitutional principle upholding the autonomy of the research community and its immunity from direct political direction, and indeed the government has been at pains to affirm its adherence to this tradition, stating for example in a publication about the Bill (Higher Education and Research Bill: UKRI Vision, Principles & Governance) that “The commitment to the Haldane principle will be strengthened and protected.”

Nonetheless, the Bill itself does not refer to the Haldane principle. The Bill sets up a new body to oversee science funding, UK Research and Innovation (UKRI), and section 96 of the Bill seems pretty explicit about where power will lie. Clause 1 of section 96 states baldly “The Secretary of State may give UKRI directions about the allocation or expenditure by UKRI of grants received”. And for the avoidance of doubt, as the lawyers say, clause 4 underlines clause 1 thus: “UKRI must comply with any directions given under this section”.

So, what is the Haldane principle, and if it really does protect the autonomy of science, how can this be consistent with the direct control the new Bill gives the Minister? To get to the bottom of this, it’s worth asking three separate questions. Firstly, what did the supposed originator of the principle, Lord Haldane, actually say? Secondly, what do people now understand by the “Haldane principle”, and what purposes are served by invoking it? And lastly, we should perhaps put aside the constitutional history and go back to first principles, to ask what the proper relationship between politicians and the scientific enterprise should be.

The supposed origin of the Haldane principle is in a report written in 1918 by the “Machinery of Government Committee”, chaired by Viscount Richard Haldane. (It should perhaps be pointed out that this Haldane is not the same as the famous biologist J.B.S. Haldane, who was in fact Richard Haldane’s nephew).

This is a very famous and important report in the history of UK public administration – it established some principles of executive government still followed in the UK today, recommending the foundation of the Cabinet Office, and the creation of Ministries based around key functions, such as Education, Health, Defence, and Finance. It does talk quite extensively about research and intelligence, but it’s clear that Haldane usually means something rather different to academic science when he uses these terms. His focus is much more on what we would call the evidence base for policy.

However, Haldane does discuss the arrangements for supporting medical research, which over the years evolved into the Medical Research Council, and he commends the way the Department of Scientific and Industrial Research was arranged. The key feature here is that responsibility for spending money on research proposals rests with a Minister, who has to answer to Parliament, but that “the scientific and technical soundness of all research proposals recommended for State assistance is guaranteed by constant resort to the guidance of an Advisory Council, consisting of a small number of eminent men of science, with an Administrative Chairman”*. Some sense of the kind of research being talked about is given by the examples given – “certain enquiries into questions of urgent practical importance relating to the preservation of fish”, “the mortality and illness due to TNT poisoning”. The importance of expert advice emerges clearly, but this doesn’t read like a statement of principle about the autonomy of basic science.

So if Viscount Haldane didn’t invent the Haldane principle, who did? The historian David Edgerton (in his article “The ‘Haldane Principle’ and other invented traditions in science policy”) points the finger at the Conservative politician Lord Hailsham. He invoked the “Haldane principle” in the early 1960s to score a political point off his Labour opponents.

Clearly the Haldane principle still has considerable rhetorical power, so different meanings are attached to it by different people to the degree that these meanings support the arguments they are making. In a general sense, there’s an agreement that the Haldane principle has something to do with the protection of the operation of science from direct political interventions, but within this I’d identify two rather different versions.

In the first version, the Haldane principle relates to the devolution of decisions about scientific matters to groups of experts – the Research Councils. This is actually the version that is enshrined in current legislation, to the extent that the powers and responsibilities of the Research Councils are defined by their Royal Charters. When I talk about Research Councils here, I don’t mean the organisations in their Swindon offices – I mean the 15 or so experts who are appointed by the Minister to form each of the research councils’ governing boards (this is the position I hold for the EPSRC). These boards are, in effect, the descendants of the advisory Councils specified by Viscount Haldane – but in the current legislation they do have more independence and autonomy than Haldane implies. But this autonomy is codified in the Royal Charters that the current Higher Education and Research Bill would rescind.

This version of the Haldane principle – the devolution of technical decision-making to bodies of experts – can be thought of as an example of the wider trend in government to offload its responsibilities to technocratic bodies – the granting of independence to the Bank of England and the empowering of the Monetary Policy Committee to set interest rates being perhaps the most visible example.

But another reading of the Haldane principle takes the correct level of devolution of power to be, not to a small group of the scientific great and good, but to the broader scientific community. This comes in a strong version and a weak one. The strong version holds that the scientific enterprise (as conceptualised by Michael Polanyi as a Hayekian spontaneous order – the “independent republic of science”) should not be subject to any external steering at all, and should be configured to maximise, above all, scientific excellence (for some value of “excellence”, of course). Unsurprisingly, this view is popular amongst many elite scientists.

The weak version concedes that at the micro-level of individual research proposals, decisions should be left to peer review, but insists that larger scale, strategic decisions can and should be subject to political control. This is essentially the interpretation of the Haldane principle favoured by recent governments. Of course, in this interpretation, where the line of demarcation between strategic decisions and individual research proposals falls is crucial and contested.

If the point at issue is when, and at what level, it is appropriate for politicians to directly intervene in the selection of research projects, we should ask what purposes could be served by such intervention.

An intrinsically authoritarian government might feel that it ought to have such control on principle, without any particular instrumental purpose.

A government that was excessively concerned about its short-term popularity might seek to use such decisions to gain immediate political advantage, for example by announcing the kind of new initiatives that might attract headlines and win votes in marginal constituencies. The devolution of this kind of decision making to technocratic committees can be read as a recognition by politicians that they need to be thus protected from their own worst instincts, tied to the mast, as it were.

But if a government has an overwhelming strategic purpose, then it might feel that the urgency of that purpose might justify rather direct interventions in the direction of research. To go back to the original Haldane, it’s clear that this is the historical context of that 1918 report.

The central purpose of the state then was clear – “to defeat German militarism”. And as we look back on that war now with horror as an exemplar of futile mass slaughter, it’s easy to forget the degree to which the First World War was a scientific and technological war. Britain had relied for some decades on technology rather than mass mobilisation for its global military power, and the war was born out of a naval arms race of Dreadnoughts, torpedoes and sophisticated fire control. The war itself brought aeroplanes, tanks, submarines, chemicals for new munitions (and chemical weapons), together with state control of the means of production, and Haldane himself, as prewar War Minister, had been central in the modernisation of Britain’s armed forces to exploit these technologies.

My own view is that the elected government do have the right to intervene in the direction of research, but that such interventions need to be motivated by a clearly articulated sense of strategic purpose. For most of the twentieth century, Britain’s “Warfare State” supplied, for better or worse, just such a clear sense of purpose. What’s our urgent sense of state purpose for the 21st century?

* The gendered language leaps out, of course, but in fairness to Viscount Haldane it’s worth pointing out that the report also contains a strikingly modern section about the value of diversity and the importance of opening up senior civil service roles to women. Beatrice Webb was on the committee, as was a notably reactionary Treasury Permanent Secretary, Sir George Murray, who added a note dissenting on this section to his signature.

I’m grateful to Alice Vadrot, convenor of an STS reading group at Cambridge’s Centre for Science and Policy, and its other members, for a stimulating discussion of some of these ideas at its most recent meeting.

Manufacturing *is* special, but it’s important to understand why

The politics of Trump and Brexit has drawn attention again to the phenomenon of “left-behind” communities. In the US rust belt and the UK’s northern cities, de-industrialisation and the loss of manufacturing jobs have stripped communities, not just of their economic base, but of their very sense of purpose.

But to some commentators, the focus on manufacturing is misguided sentimentality, an appeal to the discredited idea that the only proper work is making stuff in factories. These jobs, they say, have gone for ever, killed by a combination of technology and globalisation; the clock cannot be turned back and we must adjust to the new reality of service based economies, which produce economic value just as real as any widget.

I agree that the world has changed, but I want to argue that, despite that, manufacturing does have a special importance for the economic health of developed countries. It’s important, though, to understand why manufacturing matters – for reasons that are neither sentimentality nor conservatism – or we’ll end up with bad and counter-productive policy prescriptions.

Manufacturing is important for three reasons. Firstly, consistently over the long run, manufacturing innovation remains the most reliable way of delivering sustained productivity growth, and this productivity growth spills over into other sectors and the economy more generally.

Secondly, centres of manufacturing sustain wider clusters in which tangible and intangible assets accumulate and reinforce their collective value, and where tacit knowledge is stored in networks of skilled people and effective organisations (what Shih and Pisano call the “manufacturing commons”). These networks include, not just the core manufacturers, but suppliers and maintainers of equipment, design consultancies, R&D, and so on, which in the long term are anchored by those manufacturing activities at the core of the cluster.

Of course, the same is true in other sectors too; this brings me to the third point, which is that the diversity of types of manufacturing leaves room for clusters to be geographically dispersed. Rebalancing the economy in favour of manufacturing will at the same time rebalance it geographically, reducing the gross regional imbalances in wealth and opportunities that are such a dangerous feature of the UK now.

Recognising these as the features of manufacturing that make it so important makes it clear what an industrial strategy to promote it should not try to do. Its aim should not be to prop up failing industries as they currently exist – the whole point of supporting manufacturing is as a focus for innovation. Neither should there be any expectation that a manufacturing resurgence will lead to large scale mass employment on the old model. If productivity growth is to be the motivation, then this will not lead directly to large numbers of new jobs.

The point is to create value, not, in the first instance, to create jobs. But the jobs will follow, in those sectors that will support the new manufacturing activities – in design, marketing, data analytics, professional services. In fact, the characteristic of the new manufacturing is precisely that the lines between manufacturing and its associated service activities are becoming more blurred.

So an industrial strategy to support the new manufacturing needs to have, at its heart, a focus on innovation and skills, and the goal of creating a self-sustaining ecosystem. This doesn’t mean that one can ignore history – the future manufacturing specialisms of a region will reflect their past, because the nature of the assets one has to build on, in terms of existing firms, institutions and skills, will reflect that past. But equally an understanding of the transformations that technology is bringing is important too.

Manufacturing is changing, through automation and robotics, new materials and manufacturing techniques, and new modes of organising manufacturing processes in more reconfigurable and customisable ways. New business models are being developed which erode the distinction between traditional manufacturing and service industries, and underlying all these changes is the power of new digital technology, and the potential of large scale data analytics and machine learning. All these demand new (often digital) skills, better management practices, and more effective mechanisms by which new technologies diffuse widely through an existing business base.

Last summer, we began the process of defining what a modern industrial strategy might look like, to support a resurgence of high value manufacturing in the traditional manufacturing heartlands of South Yorkshire and Lancashire. The outcome of this is presented in the Science and Innovation Audit commissioned by the UK government, whose report you can read here – Driving productivity growth through innovation in high value manufacturing.

As the UK government develops its own industrial strategy, I hope the policies that emerge are designed to support the right sorts of manufacturing, for the right reasons.

Time to face facts about the UK’s productivity stagnation

One positive feature of the Autumn Statement that the Chancellor of the Exchequer presented yesterday was that he gave unprecedented prominence to the UK’s serious productivity problem. What was less positive was that he had no analysis of where the problem comes from, and his proposed measures to address it are entirely inadequate.

This matters. Our ability to harness technological and other improvements to produce more value from the same inputs is the only fundamental driver for real wage increases; productivity growth drives living standards. And we rely on productivity growth to meet the future promises we’re making now – to grow our way out of our debts, and to pay for our future pensions.

Until 2007, productivity had been growing steadily at 2.2% a year since before 1970. That ended with the financial crisis; in the 7 years since, it has barely risen at all. The government, and its independent forecasters, the Office for Budget Responsibility, have spent that time confidently expecting an upturn, a resumption of the pre-crisis growth rate. But that upturn has never arrived. My plot shows that history: the successive OBR predictions for a resumption of productivity growth, together with the successive disappointing outcomes.

Labour productivity according to successive Office for Budget Responsibility Economic and Fiscal Outlooks for the years indicated, showing estimates of productivity up to the time of publication of each report (solid lines), and predictions for the future (dotted lines). Data for 2010–2014 from the October 2015 OBR Forecast Evaluation Report, for 2015 and March 2016 from the March 2016 OBR Economic and Fiscal Outlook, and November 2016 from the November 2016 OBR EFO.

After seven years of anomalously slow productivity growth, it’s time to face facts and acknowledge this isn’t an anomaly, it’s the new normal.
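
A back-of-envelope calculation makes the cost of that stagnation concrete, assuming the pre-crisis 2.2% trend would otherwise simply have continued:

```python
# Had the pre-crisis trend of 2.2% a year continued over the 7 flat
# years, productivity would now be about 16% higher than it is.
trend_growth = 0.022
years_flat = 7
gap = (1 + trend_growth) ** years_flat - 1
print(f"shortfall vs pre-crisis trend: ~{gap:.0%}")  # ~16%
```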

What has science policy ever done for Barnsley?

Cambridge’s Centre for Science and Policy, where I am currently a visiting fellow, held a roundtable discussion yesterday on the challenges for science policy posed by today’s politics post-Brexit, post-Trump, introduced by Harvard’s Sheila Jasanoff and myself. This is an expanded and revised version of my opening remarks.

I’m currently commuting between Sheffield and Cambridge, so the contrast between the two cities is particularly obvious to me at the moment. Cambridgeshire is one of the few regions of the UK that is richer than the average, with a GVA per head of £27,203 (the skewness of the UK’s regional income distribution, arising from London’s extraordinary dominance, leads to the statistical oddness that most of the country is poorer than the average). Sheffield, on the other hand, is one of the less prosperous provincial cities, with a GVA per head of £19,958. But Sheffield doesn’t do so badly compared with some of the smaller towns and cities in its hinterland – Barnsley, Rotherham and Doncaster, whose GVA per head, at £15,707, isn’t much more than half of Cambridge’s prosperity.
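
That “poorer than the average” oddity is just the arithmetic of a skewed distribution – a toy illustration, with invented figures:

```python
# One dominant outlier pulls the mean above the median, so most
# regions sit below the "average". GVA figures invented.
gva = [16, 17, 18, 19, 20, 21, 45]  # £k per head; 45 plays London
mean = sum(gva) / len(gva)          # ≈ 22.3
below = sum(x < mean for x in gva)  # 6 of the 7 regions
print(f"mean ≈ {mean:.1f}; {below} of {len(gva)} regions below it")
```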

This disparity in wealth is reflected in the politics. In the EU Referendum, Cambridge voted overwhelmingly – 74% – for Remain, while Barnsley, Rotherham and Doncaster voted almost as overwhelmingly – 68 or 69% – to Leave. The same story could be told of many other places in the country – Dudley, in the West Midlands; Teesside, in the Northeast; Blackburn, in the Northwest. This is not just a northern phenomenon, as shown by the example of Medway, in the Southeast. These are all places with poorly performing local economies, which have failed to recover from 1980s deindustrialisation. They have poor levels of educational attainment, low participation in higher education, poor social mobility, low investment, low rates of business start-ups and growth – and they all voted overwhelmingly to leave the EU.

Somehow, all those earnest and passionate statements by eminent scientists and academics about the importance for science of remaining in the EU cut no ice in Barnsley. And why should they? We heard about the importance of EU funding for science, of the need to attract the best international scientists, of how proud we should be of the excellence of UK science. If Leave voters in Barnsley thought about science at all, they might be forgiven for thinking that science was to be regarded as an ornament to a prosperous society, when that prosperity was something from which they themselves were excluded.

Of course, there is another argument for science, which stresses its role in promoting economic growth. That is exemplified, of course, here in Cambridge, where it is easy to make the case that the city’s current obvious prosperity is strongly connected with its vibrant science-based economy. This is underpinned by substantial public sector research spending, which is then more than matched by a high level of private sector innovation and R&D, both from large firms and fast growing start-ups supported by a vibrant venture capital sector.

The figures for regional R&D bear this out. East Anglia has a total R&D expenditure of €1,388 per capita – it’s a highly R&D intensive economy. This is underpinned by the €472 per capita that’s spent in universities, government and non-profit laboratories, but is dominated by the €914 per capita spent in the private sector, directly creating wealth and economic growth. This is what a science-based knowledge economy looks like.

South Yorkshire looks very different. The total level of R&D is less than a fifth of the figure for East Anglia, at €244 per capita; and this is dominated by higher education, which carries out R&D worth €156 per capita. Business R&D is less than 10% of the figure for East Anglia, at €80 per capita. This is an economy in which R&D plays very little role outside the university sector.

An interesting third contrast is Inner London, which is almost as R&D intensive overall as East Anglia, with a total R&D expenditure of €1,130 per capita. But here the figure is dominated not by the private sector, which does €323 per capita R&D, but by higher education and government, at €815 per capita. A visitor to London from Barnsley, getting off the train at St Pancras and marvelling at the architecture of the new Crick Institute, might well wonder whether this was indeed science as an ornament to a prosperous society.

To be fair, governments have begun to recognise these issues of regional disparities. I’d date the beginning of this line of thinking back to the immediate period after the financial crisis, when Peter Mandelson returned from Brussels to take charge of the new super-ministry of Business, Innovation and Skills. Newly enthused about the importance of industrial strategy, summarised in the 2009 document “New Industry, New Jobs”, he launched the notion that the economy needed to be “rebalanced”, both sectorally and regionally.

We’ve heard a lot about “rebalancing” since. At the aggregate level there has not been much success, but, to be fair, the remarkable resurgence of the automobile industry perhaps does owe something to the measures introduced by Mandelson’s BIS and InnovateUK, and continued by the Coalition, to support innovation, skills and supply chain development in this sector.

One area in which there was a definite discontinuity in policy on the arrival of the Coalition government in 2010 was the abrupt abolition of the Regional Development Agencies. They were replaced by “Local Enterprise Partnerships”, rather loosely structured confederations of local government representatives and private sector actors (including universities), with private sector chairs. One good point about LEPs was that they tended to be centred on City Regions, which make more sense as economic entities than the larger regions of the RDAs, though this did introduce some political complexity. Their bad points were that they had very few resources at their disposal, they had little analytical capacity, and their lack of political legitimacy made it difficult for them to set any real priorities.

Towards the end of the Coalition government, the idea of “place” made an unexpected and more explicit appearance in the science policy arena. A new science strategy appeared in December 2014 – “Our Plan for Growth: Science and Innovation” – which listed “place” as one of five underpinning principles (the others being “Excellence, Agility, Collaboration, and Openness”).

What was meant by “place” here was, like much else in this strategy, conceptually muddled. On the one hand, it seemed to be celebrating the clustering effect, by which so much science was concentrated in places like Cambridge and London. On the other hand, it seemed to be calling for science investment to be more explicitly linked with regional economic development.

It is this second sense that has subsequently been developed by the new, all-Conservative government. The Science Minister, Jo Johnson, announced in a speech in Sheffield the notion of “One Nation Science” – the idea that science should be the route for redressing the big differences in productivity between regions of the UK.

The key instrument for this “place agenda” was to be the “Science and Innovation Audits” – assessments of the areas of strength in science and innovation in the regions, and suggestions for where opportunities might exist to use and build on these to drive economic growth.

I have been closely involved in the preparation of the Science and Innovation Audit for Sheffield City Region and Lancashire, which was recently published by the government. I don’t want to go into detail about the Science and Innovation Audit process or its outcomes here – instead I want to pose the general question about what science policy can do for “left behind” regions like Barnsley or Blackburn.

It seems obvious to me that “trophy science” – science as an ornament for a prosperous society – will be no help. And while the model of Cambridge – a dynamic, science based economy, with private sector innovation, venture capital, and generous public funding for research attracting global talent – would be wonderful to emulate, that’s not going to happen. It arose in Cambridge from the convergence of many factors over many years, and there are not many places in the world where one can realistically expect this to happen again.

Instead, the focus needs to be much more on the translational research facilities that will attract inward investment from companies operating at the technology frontier, on mechanisms to diffuse the use of new technology quickly into existing businesses, on technical skills at all levels, not just the highest. The government must have a role, not just in supporting those research facilities and skills initiatives, but also in driving the demand for innovation, as the customer for the new technologies that will be needed to meet its strategic goals (for a concrete proposal of how this might work, see Stian Westlake’s blogpost “If not a DARPA, then what? The Advanced Systems Agency” ).

The question “What have you lot ever done for Barnsley” is one that I was directly asked, by Sir Steve Houghton, leader of Barnsley Council, just over a year ago, at the signing ceremony for the Sheffield City Region Devo Deal. I thought it was a good question, and I went to see him later with a considered answer. We have, in the Advanced Manufacturing Research Centre, a great translational engineering research facility that demonstrably attracts investment to the region and boosts the productivity of local firms. We have more than 400 apprentices in our training centre, most sponsored by local firms, not only getting a first class training in practical engineering (some delivered in collaboration with Barnsley College), but also with the prospect of a tailored path to higher education and beyond. We do schools outreach and public engagement, we work with Barnsley Hospital to develop new medical technologies that directly benefit his constituents. I’m sure he still thinks we can do more, but he shouldn’t think we don’t care any more.

The referendum was an object lesson in how little the strongly held views of scientists (and other members of the elite) influenced the voters in many parts of the country. For them, the interventions in the referendum campaign by leading scientists had about as much traction as the journal Nature’s endorsement of Hillary Clinton did across the Atlantic. I don’t think science policy has done anything like enough to answer the question, what have you lot done for Barnsley … or Merthyr Tydfil, or Dudley, or Medway, or any of the many other parts of the country that don’t share the prosperity of Cambridge, or Oxford, or London. That needs to change now.

How big should the UK manufacturing sector be?

Last Friday I made a visit to HM Treasury, for a round table with the Productivity and Growth Team. My presentation (PDF of the slides here: The UK’s productivity problem – the role of innovation and R&D) covered, very quickly, the ground of my two SPERI papers, The UK’s innovation deficit and how to repair it, and Innovation, research and the UK’s productivity crisis.

The plot that provoked the most discussion was this one, from a recent post, showing the contributions of different sectors to the UK’s productivity growth over the medium term. It’s tempting, at a superficial glance, to interpret it as saying the UK’s productivity problem is a simple consequence of its manufacturing and ICT sectors having been allowed to shrink too far. I think this conclusion is actually broadly correct; I suspect that the UK economy has suffered from a case of “Dutch disease”, in which more productive sectors producing tradable goods have been squeezed out by the resource boom of North Sea oil and a financial services bubble. But I recognise that this conclusion does not follow quite as straightforwardly as one might at first think from this plot alone.

Multifactor productivity growth in selected UK sectors and subsectors since 1972. Data: EU KLEMS database, rebased to 1972=1.

The plot shows multi-factor productivity (aka total factor productivity) for various sectors and subsectors in the UK. Increases in total factor productivity are, in effect, that part of the increase in output that’s not accounted for by extra inputs of labour and capital; this is taken by economists to represent a measure of innovation, in some very general sense.
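
In its simplest two-input form, this residual is the standard growth-accounting identity (my gloss, for clarity – EU KLEMS uses a more detailed decomposition over several types of input, but the idea is the same):

```latex
\frac{\Delta A}{A} \;\approx\; \frac{\Delta Y}{Y} \;-\; \alpha\,\frac{\Delta K}{K} \;-\; (1-\alpha)\,\frac{\Delta L}{L}
```

where Y is output, K and L are capital and labour inputs, α is capital’s share of income, and the residual A is multifactor productivity.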

The central message is clear. In the medium run, over a 40 year period, the manufacturing sector has seen a consistent increase in total factor productivity, while in the service sectors total factor productivity increases have been at best small, and in some cases negative. The case of financial services, which form such a dominant part of the UK economy, is particularly interesting. Although the immediate years leading up to the financial crisis (2001-2008) showed a strong improvement in total factor productivity, which has since fallen back somewhat, over the whole period, since 1972, there has been no net growth in total factor productivity in financial services at all.

We can’t, however, simply conclude from these numbers that manufacturing has been the only driver of overall total factor productivity growth in the UK economy. Firstly, these broad sector classifications conceal a distribution of differently performing sub-sectors. Over this period the two leading sub-sectors are chemicals and telecommunications (the latter a sub-sector of information and communication).

Secondly, there have been significant shifts in the composition of the economy over this period, with the manufacturing sector shrinking in favour of services. My plot only shows rates of productivity growth, and not absolute levels; the overall productivity of the economy could improve if there is a shift from manufacturing to higher value services, even if productivity in those sectors subsequently grows less fast. Thus a shift from manufacturing to financial services could lead to an initial rise in overall productivity followed eventually by slower growth.
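
A toy calculation with invented numbers shows how this composition effect works – a one-off level gain, followed by slower growth:

```python
# Moving labour into a higher-level but slower-growing sector raises
# the aggregate level immediately, at the cost of slower growth later.
def aggregate_path(services_share, years=10):
    manuf, services = 50.0, 80.0        # output per worker (levels)
    g_manuf, g_services = 0.03, 0.005   # annual productivity growth
    path = []
    for _ in range(years):
        level = (1 - services_share) * manuf + services_share * services
        path.append(round(level, 1))
        manuf *= 1 + g_manuf
        services *= 1 + g_services
    return path

print(aggregate_path(0.3))  # manufacturing-heavy: starts ~59, grows ~2%/yr
print(aggregate_path(0.8))  # services-heavy: starts ~74, grows <1%/yr
```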

Moreover, within each sector and subsector there’s a wide dispersion of productivity performances, not just at sub-sector level, but at the level of individual firms. One interpretation of the rise in manufacturing productivity in the early 1980s is that it reflects the disappearance of many lower performing firms during that period’s rapid de-industrialisation. On the other hand, a recent OECD report (The Future of Productivity, PDF) highlights what seems to be a global phenomenon since the financial crisis: a growing gap has opened up between the highest performing firms, where productivity has continued to grow, and a long tail of less well performing firms whose productivity has stagnated.

I don’t think there’s any reason to believe that the UK manufacturing sector, though small, is particularly innovative or high performing as a whole. Some relatively old data from Hughes and Mina (PDF) shows that the overall R&D intensity of the UK’s manufacturing sector – expressed as ratio of manufacturing R&D to manufacturing gross value added – was lower than competitor nations and moving in the wrong direction.

This isn’t to say, of course, that there aren’t outstandingly innovative UK manufacturing operations. There clearly are; the issue is whether there are enough of them relative to the overall scale of the UK economy, and whether their innovations and practices are diffusing fast enough to the long tail of manufacturing operations that are further from the technological frontier.