Novavax – another nanoparticle Covid vaccine

The results for the phase III trial of the Novavax Covid vaccine are now out, and the news seems very good – an overall efficacy of about 90% in the UK trial, with complete protection against severe disease and death. The prospects now look very promising for regulatory approval. What’s striking about this is that we now have a third, completely different class of vaccine that has demonstrated efficacy against COVID-19. We have the mRNA vaccines from BioNTech/Pfizer and Moderna, the viral vector vaccine from Oxford/AstraZeneca, and now Novavax, which is described as “recombinant nanoparticle technology”. As I’ve discussed before (in Nanomedicine comes of age with mRNA vaccines), the Moderna and BioNTech/Pfizer vaccines both crucially depend on a rather sophisticated nanoparticle system that wraps up the mRNA and delivers it to the cell. The Novavax vaccine depends on nanoparticles, too, but it turns out that these are rather different in their character and function to those in the mRNA vaccines – and, to be fair, are somewhat less precisely engineered. So what are these “recombinant nanoparticles”?

All three of these vaccine classes – mRNA, viral vector and Novavax – are based around raising an immune response to a particular protein on the surface of the coronavirus – the so-called “spike” protein, which binds to receptors on the surface of target cells at the start of the process through which the virus makes its entrance. The mRNA vaccines and the viral vector vaccines both hijack the mechanisms of our own cells to get them to produce analogues of these spike proteins in situ. The Novavax vaccine is less subtle – the protein itself is used as the vaccine active ingredient. It’s synthesised in bioreactors using a genetically engineered insect virus to infect a culture of cells from a moth caterpillar. The infected cells are harvested and the spike proteins collected and formulated. It’s this stage that, in the UK, will be carried out in the Teesside factory of the contract manufacturer Fujifilm Diosynth Biotechnologies.

The protein used in the vaccine is a slightly tweaked version of the molecule in the coronavirus. The optimal alteration was found by Novavax’s team, led by scientist Nita Patel, who quickly tried out 20 different versions before hitting on the variety that is most stable and immunologically active. The protein has two complications compared to the simplest molecules studied by structural biologists – it’s a glycoprotein, which means that it has short polysaccharide chains attached at various points along the molecule, and it’s a membrane protein (this means that its structure has to be determined by cryo-transmission electron microscopy, rather than X-ray diffraction). It has a hydrophobic stalk, which sticks into the middle of the lipid membrane which coats the coronavirus, and an active part, the “spike”, attached to this, sticking out into the water around the virus. For the protein to work as a vaccine, it has to have exactly the same shape as the spike protein has when it’s on the surface of the virus. Moreover, that shape changes when the virus approaches the cell it is going to infect – so for best results the protein in the vaccine needs to look like the spike protein at the moment when it’s armed and ready to invade the cell.

This is where the nanoparticle comes in. The spike protein is formulated with a soap-like molecule called Polysorbate 80 (aka Tween 80). This consists of a hydrocarbon tail – essentially the tail group of oleic acid – attached to a sugar-like molecule – sorbitan – to which are attached short chains of ethylene oxide. The whole thing is what’s known as a non-ionic surfactant. It’s like soap, in that it has a hydrophobic tail group and a hydrophilic head group. But unlike soap or common synthetic detergents, the head group is, although water soluble, uncharged. The net result is that in water Polysorbate 80 self-assembles into nanoscale droplets – micelles – in which the hydrophobic tails are buried in the core and the hydrophilic head groups cover the surface, interacting with the surrounding water. The shape and size of the micelles are set by the length of the tail group and the area of the head group, so for these molecules the optimum shape is a sphere, probably a few tens of nanometers in diameter.
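The rule of thumb connecting tail length, head area and micelle shape can be made quantitative with the standard critical packing parameter, p = v/(a₀·lc): p below 1/3 favours spheres. A minimal sketch, using Tanford’s empirical formulas for hydrocarbon tails; the ~1 nm² head-group area for the bulky ethoxylated sorbitan head is my own illustrative assumption, not a figure from the formulation.

```python
# Critical packing parameter p = v / (a0 * lc): p < 1/3 -> spherical
# micelles, 1/3 < p < 1/2 -> cylinders, p ~ 1 -> flat bilayers.
# Tail volume v and extended length lc from Tanford's empirical
# formulas for an n-carbon hydrocarbon chain (nm^3 and nm).

def tail_volume(n_carbons):
    """Approximate hydrocarbon tail volume in nm^3 (Tanford)."""
    return 0.0274 + 0.0269 * n_carbons

def tail_length(n_carbons):
    """Approximate fully extended tail length in nm (Tanford)."""
    return 0.154 + 0.1265 * n_carbons

def packing_parameter(n_carbons, head_area_nm2):
    return tail_volume(n_carbons) / (head_area_nm2 * tail_length(n_carbons))

# Oleate tail: 18 carbons; assume (illustratively) a ~1 nm^2 head area
# for the ethoxylated sorbitan head group of Polysorbate 80.
p = packing_parameter(18, 1.0)
print(f"packing parameter p = {p:.2f}")  # ~0.21, well below 1/3 -> spheres
```

With a single tail and a large uncharged head, the parameter comes out well under 1/3, consistent with the small spherical micelles described above.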

As far as the spike proteins are concerned, these somewhat squishy nanoparticles look a bit like the membrane of the virus, in that they have an oily core that the stalks can be buried in. When the protein, having been harvested from the insect cells and purified, is mixed up with a Polysorbate 80 solution, the proteins end up stuck into the sphere like a bunch of whole cloves stuck into a mandarin orange. Typically each nanoparticle will have about 14 spikes. It has to be said that, in contrast to the nanoparticles carrying the mRNA in the BioNTech and Moderna vaccines, neither the component materials nor the process for making the nanoparticles is particularly specialised. Polysorbate 80 is a very widely used, and very cheap, chemical, extensively used as an emulsifier in convenience foods and an ingredient in cosmetics, as well as in many other pharmaceutical formulations, and the formation of the nanoparticles probably happens spontaneously on mixing (though I’m sure there are some proprietary twists and tricks to get it to work properly; there usually are).

But the recombinant protein nanoparticles aren’t the only nanoparticles of importance in the Novavax vaccine. It turns out that simply injecting a protein as an antigen doesn’t usually provoke a strong enough immune response to work as a good vaccine. In addition, one needs to use one of the slightly mysterious substances called “adjuvants” – chemicals that, through mechanisms that are probably still not completely understood, prime the body’s immune system and provoke it to make a stronger response. The Novavax vaccine uses as an adjuvant another nanoparticle – a complex of cholesterol and phospholipid (major components of our own cell membranes, widely available commercially) together with molecules called saponins, which are derived from the Chilean soap-bark tree.

Similar systems have been used in other vaccines, for both animal diseases (notably foot and mouth) and human ones. The Novavax adjuvant technology was developed by a Swedish company, Isconova AB, which was bought by Novavax in 2013, and consists of two separate fractions of Quillaja saponins, separately formulated into 40 nm nanoparticles and mixed together. The Chilean soap-bark tree is commercially cultivated – the raw extract is used, for example, in the making of the traditional US soft drink, root beer – but production will need to be stepped up (and possibly redirected from fizzy drinks to vaccines) if these vaccines turn out to be as successful as it now seems they might.

Sources: This feature article on Novavax in Science is very informative, but I believe the cartoon of the nanoparticle isn’t likely to be accurate: it depicts it as cylindrical when it is much more likely to be spherical, and as based on double tailed lipids rather than the single tailed non-ionic surfactant that is in fact used in the formulation. This is the most detailed scientific article from the Novavax scientists describing the vaccine and its characterisation. The detailed nanostructure of the vaccine protein in its formulation is described in this recent Science article. The “Matrix-M” adjuvant is described here, while the story of the Chilean soap-bark tree and its products is described in this very nice article in The Atlantic Magazine.

Rubber City Rebels [1]

I’m currently teaching a course on the theory of what makes rubber elastic to Materials Science students at Manchester, and this has reminded me of two things. The first is that this is a great topic through which to introduce a number of the most central concepts of polymer physics – the importance of configurational entropy, the universality of the large scale statistical properties of macromolecules, the role of entanglements. The second is that the city of Manchester has played a recurring role in the history of the development of this bit of science, which, as always, interacts with technological development in interesting and complex ways.

One of the earliest quantitative studies of the mechanical properties of rubber was published by that great Manchester physicist, James Joule, in 1859. As part of his investigations of the relationship between heat and mechanical work, he measured the temperature change that occurs when rubber is stretched. As anyone can find out for themselves with a simple experiment, rubber is an unusual material in this respect. If you take an elastic band (or, better, a rubber balloon folded into a narrow strip), hold it close to your upper lip, suddenly stretch it and then put it to your lip, you can feel that it significantly heats up – and then, if you release the tension again, it cools down again. This is a crucial observation for understanding how it is that the elasticity of rubber arises from the reduction in entropy that occurs when a randomly coiled polymer strand is stretched.
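The lip experiment connects directly to thermodynamics through a textbook Maxwell relation – a sketch of my own, in standard notation (f is the tension, L the length, C_L the heat capacity at constant length), not something spelled out in the original post:

```latex
% Adiabatic temperature rise on stretching:
\left(\frac{\partial T}{\partial L}\right)_{\!S}
  = -\frac{T}{C_L}\left(\frac{\partial S}{\partial L}\right)_{\!T}
% For rubber, stretching straightens the randomly coiled chains, so
% (dS/dL)_T < 0 and a rapid (adiabatic) stretch raises the temperature.
% The same derivative accounts for Gough's second observation, via
\left(\frac{\partial f}{\partial T}\right)_{\!L}
  = -\left(\frac{\partial S}{\partial L}\right)_{\!T} > 0 \, ,
% so at fixed length the tension grows with temperature, and a strip
% held under constant load contracts when heated.
```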

But this wasn’t the first observation of the effect – Joule himself referred to an 1805 article by John Gough, in the Memoirs of the Manchester Literary and Philosophical Society, drawing attention to this property of natural rubber, and the related property that a strand of the material held under tension would contract on being heated. Gough was himself a fascinating figure: a Quaker from Kendal, a town on the edge of England’s Lake District, blind as a result of a childhood illness, he made a living as a mathematics tutor and was a friend of John Dalton, the Manchester-based pioneer of the atomic hypothesis. All of this is a reminder of the intellectual vitality of that time in the fast industrialising provinces, truly an “age of improvement”, while the universities of Oxford and Cambridge had slipped into the torpor of qualifying the dim younger offspring of the upper classes to become Anglican clergymen.

Joule’s experiments were remarkably precise, but there was another important difference from Gough’s pioneering observation. Joule was able to use a much improved version of the raw natural rubber (or caoutchouc) that Gough used; the recently invented process of vulcanisation produced a much stronger, stabler material than the rather gooey natural precursor. The original discovery of the process of vulcanisation was made by the self-taught American inventor Charles Goodyear, who found in 1839 that rubber could be transformed by being heated with sulphur. It wasn’t for nearly another century that the chemical basis of this process was understood – the sulphur creates chemical bridges between the long polymer molecules, forming a covalently bound network. Goodyear’s process was rediscovered – or possibly reverse engineered – by the industrialist Thomas Hancock, who obtained the English patents for it in 1843 [2].

Appropriately for Manchester, the market that Hancock was serving was for improved raincoats. The Scottish industrialist Charles Macintosh had created his eponymous garment from a waterproof fabric consisting of a sandwich of rubber between two textile sheets; Hancock meanwhile had developed a number of machines and technologies for processing natural rubber, so it was natural for the two to enter into partnership, with their Manchester factory making waterproof fabric. Their firm prospered; Goodyear, though, failed to make money from his invention and died in poverty (the Goodyear tire company was named after him, but only some years after his death).

At that time, rubber was a product of the Amazonian rain forest, harvested from wild trees by indigenous people. In a well known story of colonial adventurism, 70,000 seeds of the rubber tree were smuggled out of Brazil by the explorer Henry Wickham, successfully cultivated at Kew Gardens, with the plants exported to the British colonies of Malaya and Ceylon to form the basis of a new plantation rubber industry. This expansion and industrialisation of the cultivation of rubber came at an opportune time – the invention of the pneumatic tyre and the development of the automobile industry led to a huge new demand for rubber around the turn of the century, which the new plantations were in a position to meet.

Wild rubber was also being harvested to meet this demand in the Belgian Congo, involving an atrocious level of violent exploitation of the indigenous population by the colonisers. But most of the rubber being produced to meet the new demand came from the British Empire plantations; this cultivation may not have been accompanied by the atrocities committed in the Congo, but the competitive prices at which plantation rubber could be produced reflected not just the capital invested and high productivity achieved, but also the barely subsistence wages paid to the workforce, imported from India and China.

Back in England, in 1892 the Birmingham based chemist William Tilden had demonstrated that rubber could be synthesised from turpentine [3]. But this invention created little practical interest in England. And why would it, given that the natural product is of a very high quality, and the British Empire had successfully secured ample supplies through its colonial plantations? The process was rediscovered by the Russian chemist Kondakov in 1901, and taken up by the German chemical company Bayer in time for the synthetic product to play a role in the First World War, when German access to plantation rubber was blocked by the allies. At this time the quality of the synthetic product was much worse than that of natural rubber; nonetheless German efforts to improve synthetic rubber continued in the 1920’s and 30’s, with important consequences in the Second World War.

It’s sobering[4] to realise that by 1919, rubber constituted a global industry with an estimated value of £250 million (perhaps £12 billion in today’s money), on the cusp of a further massive expansion driven by the mass adoption of the automobile – and yet scientists were completely ignorant, not just of the molecular origins of rubber’s elasticity, but even of the very nature of its constituent molecules. It was the German chemist Hermann Staudinger who, in 1920, suggested that rubber was composed of very long, linear molecules – polymers. Obvious though this may seem now, it was a controversial suggestion at the time, creating bitter disputes in the community of German chemists – disputes that gained a political tinge with the rise of the Nazi regime. Staudinger remained in Germany throughout the Second World War, despite being regarded as deeply ideologically suspect.

Staudinger was right about rubber being made up of long-chain molecules, but he was wrong about the form those molecules would take, believing that they would naturally adopt the form of rigid rods. The Austrian scientist Herman Mark, who was working for the German chemical combine IG Farben on synthetic rubber and other early polymers, realised that these long molecules would be very flexible and take up a random coil conformation. Mark’s father was Jewish, so he left IG Farben for Austria; at the University of Vienna in the 1930’s he developed, with Eugene Guth, the statistical theory that explains the elastic behaviour of rubber in terms of the entropy changes in the chains as they are stretched and unstretched. After the Anschluss, he escaped to Canada. This theory, at last, provided the basic explanation for the effect Gough discovered more than a century before, and that Joule quantified – the rise of temperature that occurs when rubber is stretched.
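In modern textbook form, the core of the Mark–Guth argument can be sketched in a few lines – a standard derivation, assuming an ideal chain of N freely jointed links of length b:

```latex
% The end-to-end vector r of an ideal chain is Gaussian-distributed:
P(\mathbf{r}) \propto \exp\!\left(-\frac{3 r^2}{2 N b^2}\right)
% Boltzmann's S = k \ln W then gives the entropy of a stretched chain:
S(r) = \mathrm{const} - \frac{3 k r^2}{2 N b^2}
% With negligible internal energy change, F = -TS yields a restoring
% force linear in extension -- an entropic spring whose stiffness is
% proportional to temperature:
f = \frac{\partial F}{\partial r} = \frac{3 k T}{N b^2}\, r
```

The stiffness being proportional to T is exactly why a stretched rubber band, unlike a metal spring, pulls harder when heated.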

By the start of the Second World War, both Mark and Guth found themselves in the USA, where the study of rubber was suddenly to become very strategically important indeed. The entry of Japan into the war and the fall of British Malaya cut off allied supplies of natural rubber, leading to a massive scale up of synthetic rubber production. Somewhat ironically, this was based on the pre-war discovery by IG Farben of a version of synthetic rubber that had a great improvement in properties on previous versions – styrene-butadiene rubber (Buna-S). Standard Oil of New Jersey had an agreement with IG Farben to codevelop and market Buna-S in the USA.

The creation, almost from scratch, of a massive synthetic rubber industry in the USA was, of course, just one dimension of the USA’s World War 2 production miracle, but its scale is still astonishing [5]. The industry scaled up, under government direction, from producing 231 tons of general purpose rubber in 1941, to a monthly output of 70,000 tons in 1945. 51 new plants were built to produce the massive amounts of rubber needed for aircraft, tanks, trucks and warships. The programme was backed up by an intensive R&D effort, involving Mark, Guth, Paul Flory (later to win the Nobel prize for chemistry for his work on polymer science) and many others.

There was no significant synthetic rubber programme in the UK in the 1920’s and 1930’s. The British Empire was at its widest extent, providing ample supplies of natural rubber, as well as new potential markets for the material. That didn’t mean that there was no interest in improving scientific understanding of the material – on the contrary, the rubber producers in Malaya first sponsored research in Cambridge and Imperial, then collectively created a research laboratory in England, led by a young physical chemist from near Manchester, Geoffrey Gee. Gee, together with Leslie Treloar, applied the new understanding of polymer physics to understand and control the properties of natural rubber. After the war, realising that synthetic rubber was no longer just an inferior substitute, but a major threat to the markets for natural rubber, Gee introduced a programme of standardisation of rubber grades which helped the natural product maintain its market position.

Gee moved to the University of Manchester in 1953, and some time later Treloar moved to the neighbouring institution, UMIST, where he wrote the classic textbook on rubber elasticity. Manchester in the 1950’s and 60’s was a centre of research into rubber and networks of all kinds. Perhaps the most significant new developments were made in theory, by Sam Edwards, who joined Manchester’s physics department in 1958. Edwards was a brilliant theoretical physicist, who had learnt the techniques of quantum field theory with Julian Schwinger as a postdoc at Harvard. Edwards, having been interested by Gee in the fundamental problems of polymer physics, realised that there are some deep analogies between the mathematics of polymer chains and the quantum mechanical description of the behaviour of electrons. He was able to rederive, in a much more rigorous way that demonstrated the universality of the results, some of the fundamental predictions of polymer physics that had been postulated by Flory, Mark, Guth and others, before going on to results of his own of great originality and importance.

Edwards’s biggest contribution to the theory of rubber elasticity was to introduce methods for dealing with the topological constraints that occur in dense, cross-linked systems of linear chains. Polymer chains are physical objects that can’t cross each other, something that the classical theories of Guth and Mark completely neglect. But it was by then obvious that the entanglements of polymer molecules could themselves behave as cross-links, even in the absence of the chemical cross linking of vulcanisation (in fact, this is already suggested looking back at Gough’s original 1805 observations, which were made on raw, unvulcanised, rubber). Edwards introduced the idea of a “tube” to represent those topological constraints. Combined with the insight of the French physicist Pierre-Gilles de Gennes, this led not just to improved models for rubber elasticity taking account of entanglements, but a complete molecular theory of the complex viscoelastic behaviour of polymer melts [6].
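The payoff of the tube picture can be stated as a scaling law – the standard de Gennes reptation result, which I sketch here in my own words rather than quoting from the post:

```latex
% A chain of N segments escapes its tube by one-dimensional
% ("curvilinear") diffusion along its own contour. Rouse friction
% gives a curvilinear diffusion constant D_c ~ 1/N, and the tube
% contour length grows as L ~ N, so the escape ("reptation") time is
\tau_d \sim \frac{L^2}{D_c} \sim \frac{N^2}{N^{-1}} = N^3
% giving a melt viscosity \eta \sim G_0 \tau_d \sim N^3, close to the
% experimentally observed \eta \sim N^{3.4}.
```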

Another leading physicist who emerged from this Manchester school was Julia Higgins, who learnt about polymers while she was a research fellow in the chemistry department in the 1960’s. Higgins subsequently worked in Paris, where in 1974 she carried out, with Cotton, des Cloizeaux, Benoit and others, what I think might be one of the most important single experiments in polymer science. Using a neutron source to study the scattering from a melt of polymer molecules, some of which were deuterium labelled, they were able to show that even in the dense, entangled environment of a polymer melt, a single polymer chain still behaves as a classical random walk. This is in contrast with the behaviour of polymers in solution, where the chains are expanded by a so-called “excluded volume” interaction – arising from the fact that two segments of a single polymer chain can’t be in the same place at the same time. This result had been anticipated by Flory, in a rather intuitive and non-rigorous way, but it was Edwards who proved it rigorously.
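The random-walk statistics at the heart of that experiment are easy to check numerically. A minimal sketch of my own (a simulation, obviously, not the neutron experiment): generate freely jointed chains of unit-length segments and verify that the mean squared end-to-end distance grows linearly with the number of segments, ⟨R²⟩ = Nb².

```python
import random

def end_to_end_sq(n_steps, rng):
    """Squared end-to-end distance of a freely jointed 3D chain of unit steps."""
    x = y = z = 0.0
    for _ in range(n_steps):
        # uniform random direction on the unit sphere (Marsaglia's method)
        while True:
            u, v = rng.uniform(-1, 1), rng.uniform(-1, 1)
            s = u * u + v * v
            if s < 1.0:
                break
        r = 2.0 * (1.0 - s) ** 0.5
        x += u * r
        y += v * r
        z += 1.0 - 2.0 * s
    return x * x + y * y + z * z

rng = random.Random(42)
for n in (100, 400):
    mean_r2 = sum(end_to_end_sq(n, rng) for _ in range(2000)) / 2000
    print(f"N = {n:3d}:  <R^2> = {mean_r2:6.1f}  (ideal chain: {n})")
```

Quadrupling the chain length quadruples ⟨R²⟩, so the coil size R grows only as N^(1/2) – the “classical random walk” behaviour that the Paris experiment found to survive even in a dense melt.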

[1] My apologies for the rather contrived title. No-one calls Manchester “Rubber City” – it is traditionally a city built on cotton. The true Rubber City is, of course, Akron, Ohio. Neither can anyone really describe any of the figures I talk about here as “rebels” (with the possible exception of Staudinger, who in his way is rather a heroic figure). But as everyone knows [7], Akron was a centre of musical creativity in the mid-to-late 1970s, producing bands such as Devo, Pere Ubu, and the Rubber City Rebels, whose eponymous song has remained a persistent earworm for me since the late 1970s, and from which I’ve taken my title.
[2] And I do mean “English” here, rather than British or UK – it seems that Scotland had its own patent laws then, which, it turns out, influenced the subsequent development of the rubber boot industry.
[3] It’s usually stated that Tilden succeeded in polymerising isoprene, but a more recent reanalysis of the original sample of synthetic rubber has revealed that it is actually poly(2,3-dimethylbutadiene) (https://www.sciencedirect.com/science/article/pii/S0032386197000840)
[4] At least, it’s sobering for scientists like me, who tend to overestimate the importance of having a scientific understanding to make a technology work.
[5] See “U.S. Synthetic Rubber Program: National Historic Chemical Landmark” – https://www.acs.org/content/acs/en/education/whatischemistry/landmarks/syntheticrubber.html
[6] de Gennes won the 1991 Nobel Prize for Physics for his work on polymers and liquid crystals. Many people, including me, strongly believed that this prize should have been shared with Sam Edwards. It has to be said that both men, who were friends and collaborators, dealt with this situation with great grace.
[7] “Everyone” here meaning those people (like me) born between 1958 and 1962 who spent too much of their teenage years listening to the John Peel show.

How does the UK rank as a knowledge economy?

Now that the UK has withdrawn from the European single market, it will need to rethink its current and potential future position in the world economy. Some helpful context is provided, perhaps, by some statistics summarising the value added from knowledge and technology intensive industries, taken from the latest edition of the USA’s National Science Board Science and Engineering Indicators 2020.

The plot shows the changing share of world value added in a set of knowledge & technology intensive industries, as defined by an OECD industry classification based on R&D intensity. This includes five high R&D intensive industries: aircraft; computer, electronic, and optical products; pharmaceuticals; scientific R&D services; and software publishing. It also includes eight medium-high R&D intensive industries: chemicals (excluding pharmaceuticals); electrical equipment; information technology (IT) services; machinery and equipment; medical and dental instruments; motor vehicles; railroad and other transportation; and weapons. It’s worth noting that, in addition to high value manufacturing sectors, it includes some knowledge intensive services. But it does exclude public knowledge intensive services in education and health care, and, in the private sector, financial services and those business services outside R&D and IT services.

From this plot we can see that the UK is a small but not completely negligible part of the world’s advanced economy. This is perhaps a useful perspective from which to view some of the current talk of world-beating “global Britain”. The big story is the huge rise of China, and in this context it is inevitable that the rest of the world’s share of the advanced economy has fallen. But the UK’s fall is larger than its competitors’ (-46%, cf. -19% for the USA and -13% for the rest of the EU).

The absolute share tells us about the UK’s overall relative importance in the world economy, and should be helpful in stressing the need, in developing industrial strategy, for some focus. Another perspective is provided if we normalise the figures by population: this gives us a sense of the knowledge intensity of the economy, which might offer a pointer to prospects for future productivity growth. The table shows a rank-ordered list by country of value added in knowledge & technology intensive industries per head of population in 2002 and 2018. The values for Ireland, and possibly Switzerland, may be distorted by transfer pricing effects.

Measuring up the UK Government’s ten-point plan for a green industrial revolution

Last week saw a major series of announcements from the government about how they intend to set the UK on the path to net zero greenhouse gas emissions. The plans were trailed in an article (£) by the Prime Minister in the Financial Times, with a full document published the next day – The ten point plan for a green industrial revolution. “We will use Britain’s powers of invention to repair the pandemic’s damage and fight climate change”, the PM says, framing the intervention as an innovation-driven industrial strategy for post-covid recovery. The proposals are patchy, insufficient by themselves – but we should still welcome them as beginning to recognise the scale of the challenge. There is a welcome understanding that decarbonising the power sector is not enough by itself. The importance of emissions from transport, industry and domestic heating are all recognised, and there is a nod to the potential for land-use changes to play a significant role. The new timescale for the phase-out of petrol and diesel cars is really significant, if it can be made to stick. So although I don’t think the measures yet go far enough or fast enough, one can start to see the outline of what a zero-emission economy might look like.

In outline, the emerging picture seems to be of a power sector dominated by offshore wind, with firm power provided either by nuclear or fossil fuels with carbon capture and storage. Large scale energy storage isn’t mentioned much, though possibly hydrogen could play a role there. Vehicles will predominantly be electrified, and hydrogen will have a role for hard to decarbonise industry, and possibly domestic heating. Some hope is attached to the prospect for more futuristic technologies, including fusion and direct air capture.

To move on to the ten points, we start with a reassertion of the Manifesto commitment to achieve 40 GW of offshore wind installed by 2030. How much is this? At a load factor of 40%, this would produce 140 TWh a year; for comparison, in 2019, we used a total of 346 TWh of electricity. Even though this falls a long way short of what’s needed to decarbonise power, a build out of offshore wind on this scale will be demanding – it’s a more than four-fold increase on the 2019 capacity. We won’t be able to expand the capacity of offshore wind indefinitely using current technology – ultimately we will run out of suitable shallow water sites. For this reason, the announcement of a push for floating wind, with a 1 GW capacity target, is important.
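The arithmetic behind those figures is worth making explicit – a quick sanity check of my own, using 8,760 hours in a year:

```python
HOURS_PER_YEAR = 8760

def annual_twh(capacity_gw, load_factor):
    """Annual energy (TWh) from an installed capacity at a given load factor."""
    return capacity_gw * load_factor * HOURS_PER_YEAR / 1000

wind_2030 = annual_twh(40, 0.40)   # the 40 GW offshore wind target
uk_demand_2019 = 346               # TWh of electricity used in 2019
print(f"{wind_2030:.0f} TWh/yr")                       # ~140 TWh
print(f"{wind_2030 / uk_demand_2019:.0%} of 2019 UK electricity demand")
```

So the target covers roughly 40% of current electricity consumption – substantial, but well short of a fully decarbonised (and much more electrified) energy system.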

On hydrogen, the government is clearly keen, with the PM saying “we will turn water into energy with up to £500m of investment in hydrogen”. Of course, even this government’s majority of 80 isn’t enough to repeal the laws of thermodynamics; hydrogen can only be an energy store or vector. As I’ve discussed in an earlier post (The role of hydrogen in reaching net zero), hydrogen could have an important role in a low carbon energy system, but one needs to be clear about how the hydrogen is made in a zero-carbon way, and how it is used, and this plan doesn’t yet provide that clarity.

The document suggests the first use will be in a natural gas blend for domestic heating, with a hint that it could be used in energy intensive industry clusters. The commitment is to create 5 GW of low carbon hydrogen production capacity by 2030. Is this a lot? Current hydrogen production amounts to 3 GW (27 TWh/year), used in industry and (especially) for making fertiliser, though none of this is low carbon hydrogen – it is made from natural gas by steam methane reforming. So this commitment could amount to building another steam methane reforming plant and capturing the carbon dioxide – this might be helpful for decarbonising industry, on Deeside or Teesside perhaps. To give a sense of scale, total natural gas consumption in industry and homes (not counting electricity generation) equates to 58 GW (512 TWh/year), so this is no more than a pilot. In the longer term, making hydrogen by electrolysis and/or process heat from high temperature fission is more likely to be the scalable and cost-effective solution, and it is good that Sheffield’s excellent ITM Power gets a namecheck.
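The GW and TWh/year figures here are just two ways of expressing the same quantity, and it is easy to check they are consistent (my own arithmetic; the round GW numbers in the text follow from the TWh figures):

```python
HOURS_PER_YEAR = 8760

def average_gw(twh_per_year):
    """Average continuous power (GW) equivalent to an annual energy in TWh."""
    return twh_per_year * 1000 / HOURS_PER_YEAR

h2_now = average_gw(27)   # current UK hydrogen production
gas = average_gw(512)     # gas burned in industry and homes
print(f"27 TWh/yr  ~ {h2_now:.1f} GW continuous")   # ~3 GW
print(f"512 TWh/yr ~ {gas:.1f} GW continuous")      # ~58 GW
print(f"5 GW target ~ {5 / gas:.0%} of that gas demand")
```

The proposed 5 GW of low carbon hydrogen capacity thus amounts to under a tenth of the gas currently burned in industry and homes – consistent with the “no more than a pilot” judgement above.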

On nuclear power, the paper does lay out a strategy, but is light on the details of how this will be executed. For more detail on what I think has gone wrong with the UK’s nuclear strategy, and what I think should be done, see my earlier blogpost: Rebooting the UK’s nuclear new build programme. The plan here seems to be for one last heave on the UK’s troubled programme of large scale nuclear new build, followed up by a possible programme implementing a light water small modular reactor, with research on a new generation of small, high temperature, fourth generation reactors – advanced modular reactors (AMRs). There is a timeline – large-scale deployment of small modular reactors in the 2030’s, together with a demonstrator AMR around the same timescale. I think this would be realistic if there was a wholehearted push to make it happen, but all that is promised here is a research programme, at the level of £215 m for SMRs and £170m for AMRs, together with some money for developing the regulatory and supply chain aspects. This keeps the programme alive, but hardly supercharges it. The government must come up with the financial commitments needed to start building.

The most far-reaching announcement here is in the transport section – a ban on sales of new diesel and petrol cars after 2030, with hybrids being permitted until 2035, after which only fully battery electric vehicles will be on sale. This is a big deal – a major effort will be required to create the charging infrastructure (£1.3 bn is ear-marked for this), and there will need to be potentially unpopular decisions on tax or road charging to replace the revenue from fuel tax. For heavy goods vehicles the suggestion is that we’ll have hydrogen vehicles, but all that is promised is R&D.

For public transport the solutions are fairly obvious – zero-emission buses, bikes and trains – but there is a frustrating lack of targets here. Sometimes old technologies are the best – there should be a commitment to electrify all inter-city and suburban lines as fast as feasible, rather than the rather vague statement that “we will further electrify regional and other rail routes”.

In transport, though, it’s aviation that is the most intractable problem. Three intercontinental trips a year can double an individual’s carbon footprint, but it is very difficult to see how one can do without the energy density of aviation fuel for long-distance flight. The solutions offered look pretty unconvincing to me – “we are investing £15 million into FlyZero – a 12-month study, delivered through the Aerospace Technology Institute (ATI), into the strategic, technical and commercial issues in designing and developing zero-emission aircraft that could enter service in 2030.” Maybe it will be possible to develop an electric aircraft for short-haul flights, but it seems to me that the only way of making long-distance flying zero-carbon is by making synthetic fuels from zero-carbon hydrogen and carbon dioxide from direct air capture.
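The difficulty with batteries for long-haul flight comes down to specific energy, and a back-of-envelope comparison makes the point. The figures below are widely quoted approximations that I am assuming for this sketch (roughly 43 MJ/kg for kerosene-type jet fuel, roughly 250 Wh/kg for current lithium-ion packs), not numbers from the plan itself:

```python
# Back-of-envelope specific-energy comparison (assumed illustrative figures:
# ~43 MJ/kg for jet fuel, ~250 Wh/kg for current Li-ion battery packs).
JET_FUEL_MJ_PER_KG = 43.0
LI_ION_MJ_PER_KG = 250 * 3600 / 1e6   # 250 Wh/kg -> MJ/kg (1 Wh = 3600 J)

ratio = JET_FUEL_MJ_PER_KG / LI_ION_MJ_PER_KG
print(f"Jet fuel carries roughly {ratio:.0f}x more energy per kg than Li-ion")
```

Even allowing for the much higher efficiency of electric drivetrains, a gap of this order is why batteries may work for short-haul but synthetic fuels look like the only zero-carbon option for long-distance flight.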

It’s good to see the attention on the need for greener buildings, but here the government is hampered by indecision – will the future of domestic heating be hydrogen boilers or electric powered heat pumps? The strategy seems to be to back both horses. But arguably, even more important than the way buildings are heated is to make sure they are as energy-efficient as possible in the first place, and here the government needs to get a grip on the mess that is our current building regulation regime. As the Climate Change Committee says, “making a new home genuinely zero-carbon at the outset is around five times cheaper than retrofitting it later” – the housing people will be living in in 2050 is being built today, so there is no excuse for not ensuring the new houses we need now – not least in the neglected social housing sector – are built to the highest energy efficiency standards.

Carbon capture, usage and storage is the 8th of our 10 points, and there is a commendable willingness to accelerate this long-stalled programme. The goal here is “to capture 10Mt of carbon dioxide a year by 2030”, but without a great deal of clarity about what this is for. The suggestion that the clusters will be in the North East, the Humber, North West, and in Scotland and Wales suggests a goal of decarbonising energy intensive sectors, which in my view is the best use of this problematic technology (see my blogpost: Carbon Capture and Storage: technically possible, but politically and economically a bad idea). What’s the scale proposed here – is 10 Mt of CO2 a year a lot or a little? Compared to the UK’s total CO2 emissions – 350 Mt in 2019 – it isn’t much, but on the other hand it is roughly in line with the total emissions of the iron and steel industry in the UK, so as an intervention to reduce the carbon intensity of heavy industry it looks more viable. The unresolved issue is who bears the cost.
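Putting the two figures quoted above side by side makes the scale question concrete:

```python
# CCS target vs. total UK emissions, using the figures quoted in the text.
ccs_target_mt = 10.0      # Mt CO2/year to be captured by 2030
uk_total_2019_mt = 350.0  # approximate total UK CO2 emissions in 2019

share = ccs_target_mt / uk_total_2019_mt
print(f"The 2030 CCS target is about {share:.1%} of 2019 UK emissions")
```

About 3% of national emissions, in other words – small as a national decarbonisation tool, but comparable to a whole heavy-industry sector.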

There’s a nod to the effects of land-use changes, in the section on protecting the natural environment. There are potentially large gains to be had here in projects to reforest uplands and restore degraded peatlands, but the scale of ambition is relatively small.

Finally, the tenth point concerns innovation, with the promise of a “£1 billion Net Zero Innovation Portfolio” as part of the government’s aspiration to raise the UK’s R&D intensity to 2.4% of GDP by 2027. The R&D is to support the goals in the 10 point plan, with a couple of more futuristic bets – on direct air capture, and on commercial fusion power through the Spherical Tokamak for Energy Production project.

I think R&D and innovation are enormously important in the move to net zero. We urgently need to develop zero-carbon technologies to make them cheaper and deployable at scale. My own somewhat gloomy view (see this post for more on this: The climate crisis now comes down to raw power) is that, taking a global view – one that incorporates the entirely reasonable aspiration of the majority of the world’s population to enjoy the same high energy lifestyle found in the developed world – the only way we will effect a transition to a zero-carbon economy across the world is if zero-carbon technologies are cheaper, without subsidies, than fossil fuel energy. If those cheap, zero-carbon technologies can be developed in the UK, that will make a bigger difference to global carbon budgets than any unilateral action that affects the UK alone.

But there is an important counter-view, expressed cogently by David Edgerton in a recent article: Cummings has left behind a No 10 deluded that Britain could be the next Silicon Valley. Edgerton describes a collective credulity in the government about Britain’s place in the world of innovation, which overstates the UK’s ability to develop these new technologies, and underestimates the degree to which the UK will be dependent on innovations developed elsewhere.

Edgerton is right, of course – the UK’s political and commentating classes have failed to take on board the degree to which the country has, since the 1980’s, run down its innovation capacity, particularly in industrial and applied R&D. In energy R&D, according to recent IEA figures, the UK spends about $1.335 billion a year – some 4.3% of the world total, eclipsed by the contributions of the USA, China, the EU and Japan.
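Those two IEA-derived figures together imply a rough world total; this is a simple inference from the numbers quoted above, not an independent figure:

```python
# Implied world public energy R&D spend, from the UK figures quoted above.
uk_energy_rd_bn = 1.335    # UK spend, $bn per year (IEA figure)
uk_share_of_world = 0.043  # UK share of the world total, 4.3%

world_total_bn = uk_energy_rd_bn / uk_share_of_world
print(f"Implied world total: about ${world_total_bn:.0f} bn per year")
```

So the UK is operating in a global energy R&D effort of the order of $30 bn a year, dominated by much larger players.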

Nonetheless, $1.3 billion is not nothing, and in my opinion this figure ought to increase substantially both in absolute terms, and as a fraction of rising public investment in R&D. But the UK will need to focus its efforts in those areas where it has unique advantages; while in other areas international collaboration may be a better way forward.

Where are those areas of unique advantage? One such is probably offshore wind, where the UK’s Atlantic location gives it a lot of sea and a lot of wind. The UK currently accounts for about a third of all offshore wind capacity, so it represents a major market. Unfortunately, the UK has allowed a situation to develop where the prime providers of its offshore wind technology are overseas. The plan suggests more stringent targets for local content, which makes sense, and there is a strong argument that UK industrial strategy should try to ensure that more of the value of the new technologies of deepwater floating wind is captured in the UK.

While offshore wind is being deployed at scale right now, fusion remains speculative and futuristic. The government’s strategy is to “double down on our ambition to be the first country in the world to commercialise fusion energy technology”. While I think the barriers to developing commercial fusion power – largely in materials science – remain huge, I do believe the UK should continue to fund it, for a number of reasons. Firstly, there is a possibility that it might actually work, in which case it would be transformative – it’s a long odds bet with a big potential payoff. But why should the UK be the country making the bet? My answer would be that, in this field, the UK is genuinely internationally competitive; it hosts the Joint European Torus, and the sponsoring organisation UKAEA retains capacity – rare in the UK – for very complex engineering at scale. Even if fusion doesn’t deliver commercial power, the technological spillovers may well be substantial.

The situation in nuclear fission is different. The UK dramatically ran down its research capacity in civil nuclear power, and chose instead to develop a new nuclear build programme on the basis of entirely imported technology. This was initially the French EPR currently being built at Hinkley Point, with another type of pressurised water reactor, from Toshiba, to be built in Cumbria, and a third type of reactor, a boiling water reactor from Hitachi, in Anglesey. That hasn’t worked out so well, with only the EPRs now looking likely to be built. The current strategy envisages a reset, with a new programme of light water small modular reactors – that is to say, a technologically conservative PWR designed with an emphasis on driving down its capital cost – followed by work on a next generation fission reactor. These “advanced modular reactors” would be relatively small high temperature reactors. The logic for the UK to be the country to develop this technology is that it is the only country that has run an extensive programme of gas cooled reactors, but it will still probably need to collaborate with other like-minded countries.

How much emphasis should the UK put into developing electric vehicles, as opposed to simply creating the infrastructure for them and importing the technology? The automotive sector still remains an important source of added value for the UK, having made an impressive recovery from its doldrums in the 90’s and 00’s. Jaguar Land Rover, though owned by the Indian conglomerate Tata, is still essentially a UK based company, and it has an ambitious development programme for electric vehicles. But even with its R&D budget of £1.8 bn a year, it is a relative minnow by world standards (Volkswagen’s R&D budget is €13bn, and Toyota’s only a little less); for this reason it is developing a partnership with BMW. The government should support the UK industry’s drive to electrify, but care will be needed to identify where UK industry can find the most value in global supply chains.

A “green industrial strategy” is often sold on the basis of the new jobs it will create. It will indeed create more jobs, but this is not necessarily a good thing. If it takes more people, more capital, more money to produce the same level of energy services – houses being heated, iron being smelted, miles driven in cars and lorries – then that amounts to a loss of productivity across the economy as a whole. Of course this is justified by the huge costs that burning fossil fuels impose on the world as a whole through climate change, costs which are currently not properly accounted for. But we shouldn’t delude ourselves. We use fossil fuels because they are cheap, convenient, and easy to use, and we will miss them – unless we can develop new technologies that supply the same energy services at a lower cost, and that will take innovation. New low carbon energy technologies need to be developed, and existing technologies made cheaper and more effective.

To sum up, the ten point plan is a useful step forward. The contours of a zero-emissions future are starting to emerge, and it is very welcome that the government has overcome its aversion to industrial strategy. But more commitment and more realism are required.

Nanomedicine comes of age with mRNA vaccines

There have been few scientific announcements that have made as big an impact as the recent news that a vaccine, developed in a collaboration between the German biotech company BioNTech and the pharmaceutical giant Pfizer, has been shown to be effective against covid-19. What’s even more striking is that this vaccine is based on an entirely new technology. It’s an mRNA vaccine; rather than injecting weakened or dead virus material, it harnesses our own cells to make the antigens that prime our immune system to fight future infections, exactly where those antigens are needed. This is a brilliantly simple idea with many advantages over existing technologies that rely on virus material – but like most brilliant ideas, it takes a lot of effort to make it actually work.

Here I want to discuss just one aspect of these new vaccines – how the mRNA molecule is delivered to the cells where we want it to go, and then caused to enter those cells, where it does its job of making the virus proteins that cause the chain of events leading to immunity. This relies on packaging the mRNA molecules inside nanoscale delivery devices. These packages protect the mRNA from the body’s defence mechanisms, carry it undamaged into the interior of a target cell through the cell’s protective membrane, and then open up to release the bare mRNA molecules to do their job. This isn’t the first application of this kind of nanomedicine in the clinic – but if the vaccine lives up to expectations, it will unquestionably make the biggest impact. In this sense, it marks the coming of age of nanomedicine.

Other mRNA vaccines are in the pipeline too. One being developed by the US company Moderna with the National Institute of Allergy and Infectious Diseases (part of the US Government’s NIH) is also in phase 3 clinical trials, and it seems likely that we’ll see an announcement about that soon too. Another, from the German biotech company CureVac, is one step behind, in phase 2 trials. All of these use the same basic idea, delivering mRNA which encodes a protein antigen. A couple of other potential mRNA vaccines use a twist on this simple idea; a candidate from Arcturus with the Duke-National University of Singapore, and another from Imperial College, use “self-amplifying RNA” – RNA which doesn’t just encode the desired antigen, but which also carries the instructions for some machinery to make more of itself. The advantage of this in principle is that it requires less RNA to produce the same amount of antigen.

But all of these candidates have had to overcome the same obstacle: how to get the RNA into the human cells where it is needed. The problem is that, even before the RNA reaches one of its target cells, the human body is very effective at identifying any stray bits of RNA it finds wandering around and destroying them. All of the RNA vaccine candidates use more or less the same solution, which is to wrap up the vulnerable RNA molecule in a nanoscale shell made of the same sort of lipid molecules that form the cell membrane.

The details of this technology are complex, though. I believe the BioNTech, CureVac and Imperial vaccines all use the same delivery technology, developed in partnership with the Canadian biotech company Acuitas Therapeutics. The Moderna vaccine delivery technology comes from that towering figure of nanomedicine, MIT’s Robert Langer. The details in each case are undoubtedly proprietary, but from the literature it seems that both approaches use the same ingredients.

The basic membrane components are a phospholipid analogous to that found in cell membranes (DSPC – distearoylphosphatidylcholine), together with cholesterol, which makes the bilayer more stable and less permeable. Added to that is a lipid to which is attached a short chain of the water-soluble polymer PEO. This provides the nanoparticle with a hairy coat, which probably helps the nanoparticle avoid some of the body’s defences by repelling the approach of any macromolecules (artificial vesicles thus decorated are sometimes known as “stealth liposomes”), and perhaps also controls the shape and size of the nanoparticles. Finally, perhaps the crucial ingredient is another lipid, with a tertiary amine head group – an ionisable lipid. This is what the chemists call a weak base – like ammonia, it can accept a proton to become positively charged (a cation). Crucially, its charge state depends on the acidity or alkalinity of its environment.

To make the nanoparticles, these four components are dissolved in ethanol, while the RNA is dissolved in a mildly acidic solution in water. Then the two solutions are mixed together, and out of that mixture, by the marvel of self-assembly, the nanoparticles appear, with the RNA safely packaged up inside them. Of course, it’s more complicated than that simple statement makes it seem, and I’m sure there’s a huge amount of knowledge that goes into creating the right conditions to get the particles you need. But in essence, what I think is going on is something like this.

When the ionisable lipid sees the acidic environment, it becomes positively charged – and, since the RNA molecule is negatively charged, the ionisable lipid and the RNA start to associate. Meanwhile, the other lipids will be self-organising into sheets two molecules thick, with the hydrophilic head groups on the outside and the oily tails in the middle. These sheets will roll up into little spheres, at the same time incorporating the ionisable lipids with their associated mRNA, to produce the final nanoparticles, with the RNA encapsulated inside them.

When the nanoparticles are injected into the patient’s body, their hairy coating, from the PEO grafted lipids, will give them some protection against the body’s defences. When they come into contact with the membrane of a cell, the ionisable lipid is once again crucial. Some of the natural lipids that make up the membrane coating the cell are negatively charged – so when they see the positively charged head-group of the ionisable lipids in the nanoparticles, they will bind to them. This has the effect of disrupting the membrane, creating a gap to allow the nanoparticle in.

This is a delicate business – cationic surfactants like CTAB use a similar mechanism to disrupt cell membranes, but they do that so effectively that they kill the cell – that’s why we can make disinfectants out of them. The cationic lipid in the nanoparticle must have been chosen so that it disrupts the membrane enough to let the nanoparticle in, but not so much as to destroy it. Once inside the cell, the conditions must be different enough that the nanoparticle, which is only held together by relatively weak forces, breaks open to release its RNA payload.

It’s taken a huge amount of work – over more than a decade – to devise and perfect a system that produces nanoparticles, that successfully envelops the RNA payload, that can survive in the body long enough to reach a cell, and that can deliver its payload through the cell membrane and then release it. What motivated this work wasn’t the idea of making an RNA vaccine.

One of the earliest clinical applications of this kind of technology was for the drug Onpattro, produced by the US biotech company Alnylam. This uses a different RNA based technology – so called small interfering RNA (siRNA) – to silence a malfunctioning gene in liver cells, to control the rare disease transthyretin amyloidosis. More recently, research has been driven by the field of cancer immunotherapy – this is the area for which the Founder/CEO of BioNTech, Uğur Şahin, received substantial funding from the European Research Council. Even for quite translational medical research, the path from concept to clinical application can take unexpected turns!

We all have to hope that the BioNTech/Pfizer vaccine lives up to its promise, and that at least some of the other vaccine candidates – both RNA based and more conventional – are similarly successful; it will be good to have a choice, as each vaccine will undoubtedly have relative strengths and weaknesses. The big question now must be how quickly production can be scaled up to the billions of doses needed to address a world pandemic.

One advantage of the mRNA vaccines is that the vaccine can be made in a chemical process, rather than by culturing viruses in cells, making scale-up faster. Of course there will be potential bottlenecks. These can be as simple as the vials needed to store the vaccine, or the facilities needed to transport and store it – a problem especially acute for the BioNTech/Pfizer vaccine, which needs to be stored at -80 °C.

There are also some quite specialised chemicals involved. I don’t know what will be needed for scaling up RNA synthesis; for the lipids to make the nanoparticles, I believe that the Alabama-based firm Avanti Polar Lipids has the leading position. This company was recently bought, in what looks like a very well-timed acquisition, by the Yorkshire based speciality chemicals company Croda, which I am sure has the capacity to scale up production effectively. Students of industrial history might appreciate that Croda was originally founded to refine Yorkshire wool grease into lanolin, so their involvement in this most modern application of nanotechnology, which nonetheless rests on fat-like molecules of biological origin, seems quite appropriate.

References.

The paper describing the BioNTech/Pfizer vaccine is: Phase I/II study of COVID-19 RNA vaccine BNT162b1 in adults.

The key reference this paper gives for the mRNA delivery nanoparticles is: Expression kinetics of nucleoside-modified mRNA delivered in lipid nanoparticles to mice by various routes.

The process of optimising the lipids for such delivery vehicles is described here: Rational design of cationic lipids for siRNA delivery.

A paper from the Robert Langer group describes the (very similar) kind of delivery technology that I presume underlies the Moderna vaccine: Optimization of Lipid Nanoparticle Formulations for mRNA Delivery in Vivo with Fractional Factorial and Definitive Screening Designs.

UK Industrial Strategy’s three horsemen: COVID, Brexit and trade wars (and a fourth horseman of my own)

A couple of weeks ago, on 9 October 2020, I took part in a seminar for the Tony Blair Institute for Global Change, called “UK Industrial Strategy’s three horsemen: COVID, Brexit and trade wars”. The speakers were me, the economist Dame Kate Barker, and Anand Menon (Director of “UK in a changing Europe” at King’s College London), and the event was chaired by Ian Mulheirn. There is a YouTube video of the event here. Here is a slightly tidied up version of what I said.

It’s a real pleasure to be speaking at this event – and especially to be sharing a virtual platform with Kate Barker, from whom I learnt so much as a colleague working on the Industrial Strategy Commission back in 2017. Our final report then was intended to inform the discussion around the 2017 White Paper on industrial strategy. Now industrial strategy is back on the agenda – we read that the government is planning to “rip up” the 2017 strategy, producing a new document with a heavy focus on science and technology.

Despite everything that’s happened since 2017, I agree with Kate that the principles we laid down do stand the test of changing times. Since then, the focus of my own work has been on the link between R&D, innovation and productivity, and the way regional imbalances in economic performance reflect regional imbalances in state spending in R&D.

But how is this all changed by the three horsemen of the apocalypse – COVID, Brexit and trade wars – that we’re asked to discuss?

Brexit

We can talk about the link between Brexit and industrial strategy both in terms of cause and effect. Failures of industrial strategy contributed to the political conditions that led to Brexit, and the changes that Brexit will force on the UK’s economy will demand a different industrial strategy for the new economic model that the country will have to adopt.

Much has been written on the connection between “left behind communities” and the Brexit vote, and the relationship is possibly more complicated than simple accounts might suggest. But, as the economic geographer Philip McCann has demonstrated in his analysis of “geographies of discontent”, the UK is an outlier amongst developed countries in the scale of its economic imbalances. The greater Southeast looks like a prosperous Northern European country. The rest of the country looks like East Germany, Southern Italy or Portugal. In fact, East Germany has recovered from 40 years of communism faster than the North of England has recovered from the deindustrialisation of the 1980’s.

These regional imbalances are reflected in living standards and other measures of prosperity, including life expectancy and health outcomes. But at the root of the issue is a huge imbalance in productivity. In fact, the imbalances show up more strongly in productivity than in living standards, because the UK runs an effective transfer union – money is moved up from the greater Southeast – the only parts of the country to make a surplus on the government current account – to the rest of the country.

But the paradox is that while we transfer money to cover current spending, we concentrate the investments that build productivity growth in the already prosperous greater Southeast. My own focus has been on research and development: here nearly half the public spending on R&D is concentrated in London and the two subregions containing Oxford and Cambridge. R&D isn’t the only thing that matters in driving productivity growth, but it is associated with high value companies operating at the technological frontier and innovative start-ups, which anchor strong regional innovation ecosystems and produce highly prosperous knowledge intensive economies like those around Cambridge and Oxford.

Even more paradoxical is the fact that public spending on R&D is even more geographically concentrated than private sector spending. This means that potential spillovers from private sector R&D spending are being left uncaptured. We have regions like the North West, with highly productive, R&D intensive industries such as chemicals and pharmaceuticals, and the East Midlands, with its strong private sector innovation in automotive and aerospace, where the innovation potential of the regional economies isn’t being exploited to the full because the public money doesn’t follow the private.

On the other hand, there are some places that don’t have enough R&D of any kind, public or private. In Wales and the Northeast, for example, low investment in R&D leads to weak innovation economies, with poor productivity performance and low demand for skills.

It is failures of industrial strategy that have led to a divided country, and those divisions have brought us sour politics.

Trade Wars

Moving on to the effects of Brexit – and the wider sense of a retreat from globalisation – the economic model that the UK has chosen looks particularly threatened. Some of our highest productivity and most export oriented sectors have succeeded by becoming highly integrated into transnational value chains, and it is these sectors that are most at risk from the trade dislocations that Brexit threatens.

The UK’s automobile industry is a prime example. This has made a remarkable comeback from a low-point in the mid-2000’s – perhaps an unsung success of a modern industrial strategy which began with Mandelson’s time in the Business department at the end of the New Labour period, and persisted into the Coalition, with considerable policy continuity. But a finished car that rolls off a production line in Sunderland or Solihull combines components and parts that have been shuffled back and forth through a network of suppliers all across the world. This leads to efficient car production, but it’s going to be very difficult to adapt to a post-Brexit world where there are likely to be detailed rules on local origin for the export of vehicles to the EU.

Brexit isn’t the only event that’s likely to give us hard lessons about technological self-sufficiency. We can see the effects of a much colder attitude to China in the USA in the pressure on the UK to lessen the involvement of Huawei in the 5G network, and the exclusion of the UK from the EU’s Galileo satellite positioning project has resulted in a scramble for a UK alternative. I suspect that politicians and policy makers severely underestimate the degree to which the UK has lost technological self-sufficiency in a whole range of sectors. I also wonder whether a wider sense of loss of what one might call technological sovereignty has itself contributed to the anxiety that culminated in Brexit.

I believe that the UK will need to rebuild some of its technological capacity if it is to remain a prosperous country, and that this will need to be a central ingredient of a more activist industrial policy. But that leaves some big questions. How much, in what sectors? The UK is a relatively small country in a big world economy. We will need to think very deeply about our place in the evolving trading systems of a world that might look very different to the post-cold-war, globalising world that policy makers have grown up in. An industrial strategy does need to be founded on a clear view about what kind of economy the UK wants – and can realistically hope – to become.

COVID-19

We’ve known for some time that increasing globalisation puts the world at greater risk of a pandemic, and with COVID-19 those fears have been realised. The closer entanglement of natural ecosystems with human society leads to more pathogens crossing from the animal world into people; the worldwide traffic of business people and tourists then spreads the disease across the world before health systems have a chance to respond, as we have seen with such tragic consequences over the last year.

It’s too soon to unpick all the effects COVID-19 will have on the economy, and the implications that those effects have for the UK’s industrial strategy, but we can already start to see some themes emerging. Different sectors have been affected in different ways, with an obvious severe (but hopefully time-limited) blow to hospitality and tourism, and perhaps more far-reaching effects on commercial real estate as some pandemic induced changes in working practices are permanently adopted.

One very important question for the UK concerns the shape of the future civil aerospace industry. It’s difficult to know how future patterns of international mobility will change, but any permanent reduction will have a serious impact on one of the UK’s highest productivity industries. As a specific example, Rolls-Royce is one of the UK’s few world class innovative engineering companies of any size, but its dependence on long-haul air traffic makes the company – and the cities like Derby that depend on it – very vulnerable.

Rolls-Royce has been bailed out by the government once before, following its bankruptcy in 1971. I believe that letting Rolls-Royce fail now should be unthinkable, because of the dissipation of concentrations of high level skilled people, and the loss of innovation capacity in areas like the East Midlands that would follow. But what form should any bail-out take – and how should it take into account bigger imperatives such as the net zero greenhouse gas target?

What can we learn about our industrial strengths and weaknesses from the experience of our response to the pandemic? We went into the pandemic thinking we had the advantage of a world-class life sciences sector, but after the event we can’t say the UK has excelled in its response.

It’s certainly true that parts of our life sciences sector are excellent, and if the Oxford group or the Imperial group produce an effective vaccine against covid-19, and if the pharmaceutical industry is successful in rapidly scaling up its manufacture, that will be a huge achievement and an invaluable contribution to world health.

But in other important areas – in public health, diagnostics, the care sector – the UK’s weaknesses have been savagely exposed. We have learnt about weaknesses in supply chains for basic supplies like PPE and generic pharmaceuticals.

I think we made a category error in thinking that the “life sciences sector” is a sector at all. We have a strong pharmaceutical sector which has historically been enormously productive (though not without recent difficulties), exploiting the UK’s excellent biomedical science base to produce drugs for the most lucrative world markets (particularly those of the USA). But we have done much worse in driving and implementing the kinds of innovation that serve the health needs of the UK’s own population.

The fourth horseman

There is, of course, a fourth horseman of the industrial strategy apocalypse, which is more important than any of the three we have been asked to discuss. That is climate change, and the huge economic transition that the need to decarbonise our energy economy requires. It is an entirely positive development that the government has committed to a target of net zero greenhouse gas emissions by 2050, and that there is a wide political consensus in support of this target, or indeed a more ambitious goal. But I’m not convinced that policy makers and the public fully understand the magnitude of the task.

We need not just to decarbonise the electricity sector as it stands now; as we electrify other forms of energy use, we will need to at least double generating capacity. We need to decarbonise transport and domestic heating, probably using hydrogen as an energy store and vector, especially for hard to decarbonise industries like steel. We will need as much offshore wind as we can get (probably including new technologies like floating wind), we will need new nuclear build, possibly including new high temperature designs to make hydrogen from process heat.

This energy transition will be a huge dislocation. We have to do it, but we shouldn’t expect it to be without cost. People rightly talk about the new “green” jobs this transition will produce – but that’s not an unmixed blessing. If the new energy systems need to employ more people than our current fossil fuel based system, that implies a drop in productivity. We will have to apply more resources to achieve the same energy benefits, and those resources won’t be available to satisfy other wants and needs. Innovation, to create new zero carbon technologies and improve existing ones, will be urgently needed to drive down those costs, and that innovation – carried out in parallel with the deployment of existing technologies – should be a priority of industrial strategy.

The energy transition does have potential benefits for regional economic inequality, though. Much of the innovation and deployment of low carbon technologies should happen outside the prosperous Southeast – for example in Teeside and the Humber, in Cumbria and the Wirral. This should be an important part of the “levelling up” agenda.

An industrial strategy for our times

To sum up, the ravages of these four horsemen mean that our economy will need to be transformed. That transformation needs to be driven by innovation, and it needs to be informed by a clear view of the enduring challenges the UK faces and a realistic assessment of the UK’s place in the world. As Kate stressed, the challenges are obvious: climate change, weak wage growth, the cost and effectiveness of health and social care, failing places. The need for a new start does give us a chance to spread the benefits of innovation more widely across the country, and we should seize that opportunity.

Talking about industrial strategy, “levelling up” and R&D

I’ve done a number of events over the past week on the themes of industrial strategy, “levelling up” and R&D. Here’s a summary of links to the associated videos, transcripts and podcasts.

1. Foundation for Science and Technology event: “The R&D roadmap and levelling up across the UK”. 7 October 2020.

An online seminar with me, the UK Science Minister, Amanda Solloway MP, and the Welsh Government Minister for the Economy, Transport and North Wales, Ken Skates MS.
Transcripts & YouTube video can be found here.

An associated podcast of an interview with me is here.

2. Oral evidence to House of Commons Science Select Committee on “A New UK Research Agency modelled on ARPA”, 7 October 2020

An evidence session with myself and Mariana Mazzucato (Professor in the Economics of Innovation & Public Value at UCL):
transcripts;
Video.

3. Seminar for Tony Blair Institute for Global Change, 9 October 2020: “UK Industrial Strategy’s three horsemen: COVID, Brexit and trade wars”

An online seminar featuring myself, the economist Dame Kate Barker, and Anand Menon (Director of UK in a Changing Europe at King’s College London)
YouTube Video

On the UK’s chemicals industry

I did a webinar a couple of weeks ago, for the Society of Chemical Industry, about the role of the chemicals industry in addressing the UK’s problems of stagnant productivity and regional economic disparities. The recording of the talk should be on their website soon, but in the meantime here (5 MB PDF) are the slides I used. Here’s a summary of what I said.

I started by setting out the economic context the UK finds itself in. The very slow productivity growth since the 2007/8 global financial crisis has had the result that real wages have stagnated, while economic performance across the regions of the UK remains very uneven.

The most important contributor to productivity growth – and thus to rising living standards – is what economists call “total factor productivity” – the measure of how effectively an economy converts inputs, in the form of labour and capital, into valuable outputs. This includes, but is not limited to, the technological advances that allow us to produce existing products more efficiently and to create entirely new products and services.

We can thus map the different sectors of the UK’s economy on 2 dimensions – how big a share of the economy they take, and how much their total factor productivity increases. I argue that industrial strategy should focus on those areas that are both significant in scale relative to the economy as a whole, and that are dynamic in terms of showing long-term increases in total factor productivity. The three crucial sectors in the UK economy by these measures are knowledge intensive business services, information and communication technologies, and manufacturing. Within manufacturing, transport equipment – automotive and aerospace – stands out, but chemicals and pharmaceuticals are also highly significant.

Cumulative growth in total factor productivity in selected UK sectors and sub-sectors, indexed to 1995. Data from EU KLEMS Growth and Productivity Accounts database.

Looking at the changes in total factor productivity over the last couple of decades offers an instructive window on the way the UK’s economy has changed.
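The “cumulative growth, indexed to 1995” measure in the chart is simply annual TFP growth rates compounded into an index. As a minimal sketch, using invented growth rates rather than the real EU KLEMS data:

```python
# Compound a series of annual TFP growth rates (in %) into a
# cumulative index with base year = 100, the form used in charts
# like the EU KLEMS one above. The growth numbers are illustrative.
def cumulative_index(annual_growth_pct, base=100.0):
    index = [base]
    for g in annual_growth_pct:
        index.append(index[-1] * (1 + g / 100.0))
    return index

# Three illustrative years of growth: +1%, +2%, -0.5%
idx = cumulative_index([1.0, 2.0, -0.5])
print([round(x, 2) for x in idx])  # [100.0, 101.0, 103.02, 102.5]
```

A sector can thus end a period with cumulative gains even after individual bad years, which is why the long-run index is more informative than any single year’s figure.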

Because manufacturing normally grows its productivity faster than services do, we’d usually expect total factor productivity in the manufacturing sector to grow faster than in the whole market economy. In the UK, that wasn’t so in the mid-1990’s – manufacturing lagged behind the economy as a whole. But from 1998 to the global financial crisis, manufacturing TFP grew faster than the economy as a whole; since the crisis, both have stagnated.

Part of the explanation for this comes from the figures for the financial services industry. This showed very fast growth in the late 1990’s, booming right up to the financial crisis – since when it has fallen precipitately. It’s at least possible that some of the apparent boom was due to the way value is measured – or mismeasured – in financial services, but it’s clear that this sector, so influential politically, has been a drag on the whole economy over the last decade.

Focusing on manufacturing subsectors, transport equipment – including automotive and aerospace – stagnated in the late 90’s, then began a recovery in the 00’s which took off dramatically after the global financial crisis. It’s intriguing that the timing of this recovery almost exactly coincides with the UK government’s rediscovery of industrial policy – with an initial focus on the automotive and aerospace industries. Pharmaceutical total factor productivity boomed from the late 90’s to the end of the 00’s, then collapsed, for reasons I’ve discussed extensively elsewhere.

But the surprise – to many, I suspect – is the performance of the chemicals sector. Written off in the late 90’s as the “old economy”, the chemicals industry has delivered the steadiest gains in total factor productivity, its cumulative performance exceeding both financial services and pharmaceuticals.

What’s more, if we look at where the chemicals industry is located, in the context of regional economic inequality and the government’s “levelling up” agenda, we find that it sits outside the prosperous southeast, in Northwest England, the Humber and Teeside.

What sectors should industrial strategy focus on? My criteria would look at relative scale, the potential to produce significant and sustained gains in total factor productivity, and to contribute to economic growth in economically lagging parts of the UK. The chemicals industry qualifies on all counts.

What, though, of the future? Economic statistics don’t capture some of the costs of the chemicals industry, but these costs are borne by society more widely. The feedstocks it uses may be unsustainable and deplete the planet’s natural capital; pollution may damage local environments and ecosystems. Improper disposal of products – like plastic packaging – at the end of their life causes yet more environmental damage.

Perhaps most importantly, the energy the industry uses produces carbon dioxide and thus accelerates climate change. 3% of the UK’s greenhouse gas emissions are directly associated with the chemicals industry, which accounts for about 20% of all emissions associated with manufacturing.
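A quick piece of arithmetic (mine, not a figure from the statistics themselves) follows from those two percentages: if chemicals’ 3% of UK emissions is about 20% of manufacturing’s emissions, manufacturing as a whole accounts for roughly 15% of the UK total.

```python
# Back-of-envelope inference from the two quoted percentages:
# chemicals = 3% of UK emissions, and that 3% is ~20% of
# manufacturing's emissions, so manufacturing overall is
# about 3 / 0.20 = 15% of UK emissions.
chemicals_share_of_uk = 3.0      # % of UK greenhouse gas emissions
chemicals_share_of_manuf = 0.20  # fraction of manufacturing emissions
manufacturing_share_of_uk = chemicals_share_of_uk / chemicals_share_of_manuf
print(round(manufacturing_share_of_uk, 1))  # 15.0
```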

There is another side of the ledger, too. The products of the chemicals industry – like batteries and fuel cells – will be crucial in decarbonising the economy. In the future we might see the widespread use of hydrogen as an energy vector, direct capture of carbon dioxide from the air, and the synthesis of hydrocarbons from green hydrogen and captured carbon dioxide for zero-carbon aviation. Much of the net zero agenda is in fact a chemical industry agenda.

We need an industrial strategy for the UK chemicals industry, justified by its scale and its record of steady total factor productivity improvement. It’s a pity that the government hasn’t responded to the Chemistry Council’s proposed Sector Deal, which would provide a good start. In addition to a focus on productivity growth, that strategy should have a regional element, building on the existing chemical industry clusters in the North West and North East with further interventions to promote innovation and skills at all levels. Above all, it should emphasise the important role and responsibility of the chemicals industry as part of the wider economic transformation that needs to take place to achieve the government’s 2050 Net Zero emissions target.

The role of hydrogen in reaching net zero

The good news from the latest release of the UK government’s energy statistics is that the fraction of electrical power generated from renewable sources in 2019 reached a record high of 37.1%, driven largely by an increase in offshore wind of 20%, to a new high of 32 TWh a year. The bad news is how little difference this makes to the UK’s overall energy consumption – of the 2300 TWh used, 78.3% was obtained from burning fossil fuels. This is a decrease from last year’s fraction – 79.4% – but progress remains much too slow.

It’s tempting to focus on the progress we are making in decarbonising the electricity supply, and this isn’t insignificant. But while the UK used 346 TWh of electricity in 2019, the country directly burnt gas to provide 512 TWh heat for domestic and industrial purposes (not counting here the gas converted to electricity in power stations), and 152 TWh of petrol and 301 TWh of diesel to power vehicles. We’ve no chance of reaching net zero greenhouse gas emissions by 2050 without displacing this directly burnt fossil fuel contribution. And given the longevity of energy infrastructures, we haven’t got long to start building out the technologies to do this at scale.
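Putting the figures in this paragraph together (all in TWh, as quoted above from the government statistics) makes the imbalance explicit – the directly burnt fossil contribution is nearly three times the whole electricity supply:

```python
# UK 2019 energy use in TWh, figures as quoted in the text (DUKES 2020).
electricity = 346   # total electricity used
gas_direct = 512    # gas burnt directly for heat (excluding power stations)
petrol = 152
diesel = 301

directly_burnt = gas_direct + petrol + diesel
print(directly_burnt)                          # 965 ("nearly 1000 TWh")
print(round(directly_burnt / electricity, 2))  # 2.79
```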

Can hydrogen help? This technology – or more accurately, group of potential technologies – is having a moment of attention, not for the first time. I think it could well make a significant contribution, but there are some awkward choices to make. Implementing any use of hydrogen in our energy system at scale will involve massive, long-term investments, and making the right choices involves difficult economic judgements, not just about the technologies as they currently exist, but as they may evolve under the pressures of energy markets across the world. Of course that evolution can be steered by incentives, regulation, and targeted support for research and development.

To begin with the basics: because there aren’t any reserves of molecular hydrogen lying around, it isn’t a source of energy, but a way of storing, transmitting and using energy. When burnt, or combined with oxygen in a fuel cell, it produces nothing but water. So the issue is how to make it without producing carbon dioxide along the way. There are three broad options:

  • Currently, most hydrogen is made from natural gas through a process called steam methane reforming. By adding heat to water and methane, with suitable catalysts, one can obtain hydrogen and carbon dioxide. The carbon dioxide produced in the reaction, and any that results from generating the heat needed to make the reaction run, would need to be captured and stored underground in old gas fields. These processes, including the separation of the carbon dioxide, are mature technologies, used at scale, for example, to produce hydrogen for ammonia fertiliser.
  • If zero-carbon energy is available cheaply, from wind, solar or nuclear, intrinsically zero carbon hydrogen can be produced by electrolysis of water. The most effective current technology uses a proton exchange membrane to separate anode and cathode.
  • If zero-carbon process heat is available cheaply, from high temperature nuclear reactors or solar concentrators, hydrogen can be made by the thermochemical splitting of water. (As a combination of the last two ideas, given both process heat and electricity, high temperature electrolysis is another option).
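For reference, the overall stoichiometry of the first two routes can be written down explicitly (the reforming line combines steam reforming with the subsequent water-gas shift step):

```latex
% Steam methane reforming plus water-gas shift (overall):
\mathrm{CH_4 + 2\,H_2O \rightarrow CO_2 + 4\,H_2}
% Electrolysis of water:
\mathrm{2\,H_2O \rightarrow 2\,H_2 + O_2}
```

The first route produces one molecule of carbon dioxide for every four of hydrogen, which is why it only counts as low-carbon if that carbon dioxide is captured; the second produces none at all, provided the electricity is zero-carbon.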
How then might the hydrogen be used to attack the carbon dioxide currently produced by the nearly 1000 TWh of energy we derive from burning gas, petrol and diesel for heating and transport?

  • Right now we could add some hydrogen to natural gas – perhaps up to 20% – significantly lowering its carbon intensity without substantial changes to our existing systems.
  • The complete replacement of natural gas by zero-carbon hydrogen for domestic heating and many industrial processes is probably technically feasible, but quite a lot more expensive. Some changes will need to be made to the gas distribution system (e.g. replacement of iron/steel pipes with thermoplastic pipes), and boilers and appliances would probably have to be replaced too.
  • Hydrogen can be used for transport, as fuel for internal combustion engines, or more likely, converted to electricity via fuel cells to power cars and trucks.
  • Finally, hydrogen might make possible the very large scale seasonal storage of energy generated by intermittent renewables (potentially on the scale of tens or even hundreds of TWh), by storing it underground in rock salt formations.
All of these ways of making and using hydrogen are technically possible. They’re also all potentially enormously expensive, with the potential to lock the country into solutions which turn out to be inappropriate, or which are made redundant by rival technologies. Some experimentation is necessary, and some blind alleys are probably inevitable, but what needs to be taken into account as we make our choices?

To start with the basic physics and chemistry, hydrogen is a light gas which burns completely and cleanly to yield only water vapour. Perceptions of hydrogen are inevitably shaped by the Hindenburg disaster – but all flammable gases are potentially dangerous, and these are risks of the kind that industrial societies have got used to managing. Hydrogen is more easily set aflame than methane and it burns hotter, but on the other hand, at atmospheric pressure, burning a given volume of hydrogen produces less energy than burning the equivalent volume of methane, and much less than petrol vapour. In fact it’s this low volumetric energy density that poses hydrogen’s biggest problem. Even compressed to 70 MPa (as it would be in a typical compressed gas tank) its energy density is only 1.3 MWh per cubic metre, compared to petrol or aviation spirit at about 10 MWh per cubic metre. Even liquified its energy density is still only about 2 MWh per cubic metre, and this needs a temperature of -253 °C, considerably colder than liquid nitrogen.
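Those volumetric figures follow directly from hydrogen’s heating value and its density in each state. A rough check, using round numbers I’ve assumed here rather than precise values (lower heating value about 120 MJ/kg; density about 39 kg/m³ at 70 MPa and about 71 kg/m³ as a liquid):

```python
# Rough check of hydrogen's volumetric energy density.
# Assumed round numbers (my values, not from the post):
# lower heating value ~120 MJ/kg; density ~39 kg/m^3 at 70 MPa,
# ~71 kg/m^3 as a cryogenic liquid.
LHV_MJ_PER_KG = 120.0
MJ_PER_MWH = 3600.0

def energy_density_mwh_per_m3(density_kg_per_m3):
    return LHV_MJ_PER_KG * density_kg_per_m3 / MJ_PER_MWH

print(round(energy_density_mwh_per_m3(39), 1))  # 1.3  (70 MPa gas)
print(round(energy_density_mwh_per_m3(71), 1))  # 2.4  (liquid)
```

Even the best case is several times short of petrol’s roughly 10 MWh per cubic metre, which is the point the paragraph above is making.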

Moving on to economics, how can we find the most cost-effective solutions? The problem is that technologies don’t stand still – indeed, it’s essential that costs come down, and substantial research efforts are needed to make sure that happens. Where can we hope to see the biggest cost reductions? Existing technologies – like steam reforming of natural gas with carbon capture – are probably the cheapest options with current technology, but, being mature, further improvements are likely to be harder to find than with newer technologies like proton exchange membrane or high temperature electrolysis.

    It’s important to remember that the UK accounted for just 1.4% of the world’s energy consumption in 2018, and this fraction will inevitably (and desirably) fall over the next few decades. The choices we make must take into account what the rest of the world is likely to do; while the UK might hope to influence that path, perhaps by helping develop new technologies cheap enough for wide adoption, the UK isn’t a big enough market to be able to make unilateral decisions about technology directions. If battery electric vehicles win the race for zero-carbon personal transport, it would be pointless for the UK to develop a hydrogen network for fuel cell cars. Likewise, if the UK is the only country to back hydrogen boilers for domestic heating while the rest of the world chooses electric heat pumps, it won’t be a big enough market to justify the development of hydrogen domestic boilers by itself, so its plans would be left high and dry.

We have well developed existing energy distribution systems, so the question for any new energy vector is whether these systems can be incrementally adapted, or whether new ones need to be built out entirely from scratch. We currently have a well developed electricity distribution system: distributed PEM electrolysis plants could take zero-carbon electricity from the grid and produce hydrogen locally. We also have systems for distributing natural gas: it’s likely that the core high pressure network would have to be entirely rebuilt for hydrogen, but the low pressure local distribution system could be adapted. We don’t have a cryogenic liquid distribution system at scale, and this is likely to limit global trade in hydrogen.

    Finally, we have to consider our plans for low carbon electricity. Whatever we do, we need to replace the 512 TWh of gas we use for heating, and the 453 TWh of petrol and diesel we use for transport, with zero carbon alternatives. If this involves electrification – either directly or through the production of hydrogen from zero-carbon electricity – this will need a huge expansion of power generation capacity from the current 346 TWh/yr. I find it difficult to see how this can happen without both a massive increase in offshore wind – possibly including floating offshore wind – and new nuclear build, possibly next generation nuclear able to produce high temperature process heat for production of additional hydrogen.

    These are difficult choices, but we haven’t got much time. Let’s get on with it!

    Some references:

    Current UK energy statistics from DUKES 2020.
    Hydrogen supply chain evidence base.
    On hydrogen storage (US Dept of Energy PDF)
    Royal Society Policy Brief Options for producing low-carbon hydrogen at scale.

    The right road to higher UK research and development spending?

The UK government published a “Research and Development Roadmap” last week, setting out “the UK’s vision and ambition for science, research and innovation”. It’s not by itself a strategy; instead it’s a document that sets out the issues that a subsequent strategy will need to address. The goals of the government here are very ambitious, and need to be thought of as part of a wider plan to remake the UK, after its departure from the European Union, as a global centre for science and innovation. In the recent words of the Prime Minister, “though we are no longer a military superpower we can be a science superpower”. Does this Roadmap give us a realistic route for translating this aspiration into policy?

    What’s at stake?

The context for the roadmap is the commitment to raise the UK government’s R&D spend to £22 billion by 2024/25. The roadmap is important as a reassertion of this goal, set in the March 2020 budget, despite the strains that the pandemic has put on the public finances.

What does this mean in practice? Total government spending on R&D was £12.8 billion in 2018 (the most recent year for which full figures are available). The implication is that this must rise by £1.5 billion per year on average. This amounts to introducing new spending equal to the total budget of two large research councils (e.g. EPSRC and BBSRC combined), every year. There is very little clarity about how this is planned to happen. Will the change be evolutionary – just increasing spending through existing institutions – or revolutionary – introducing entirely new, large scale institutions, agencies and mechanisms?
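The arithmetic behind that £1.5 billion a year is straightforward (a sketch; it assumes a straight-line ramp from the 2018 baseline to the 2024/25 target):

```python
# Straight-line ramp from £12.8bn (2018) to £22bn (2024/25):
# roughly six annual increments.
baseline_bn = 12.8
target_bn = 22.0
years = 6

annual_increase_bn = (target_bn - baseline_bn) / years
print(round(annual_increase_bn, 2))  # 1.53
```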

In very rough terms (rounded to the nearest half billion), the Research Councils spend £4 billion a year, another £2.5 billion goes to universities through Research England and the devolved nations’ funding agencies, and InnovateUK gets a bit less than £1 billion. This is now bundled up in UKRI (except university funding in the devolved nations). We’ve seen a £1.5 billion increase for UKRI in 2020/21, mostly in new funding instruments like the Industrial Strategy Challenge Fund.

But it’s important to remember that the Research Councils are not the only means by which the government spends money on R&D. £1 billion goes into health research, mostly through the Department of Health’s National Institute for Health Research, £1.5 billion is spent on defence R&D by the MoD, and BEIS spends a bit less than £1 billion outside UKRI (e.g. for space, UKAEA for the fusion programme, the National Physical Laboratory, and various industry programmes). The other government departments spend about another £1 billion between them.

    Finally, the UK Government spends money indirectly via its participation in the EU programmes. This amounts to another notional £1 billion.

    What are the government’s current imperatives?

    Where will the extra £1.5 billion a year go? Choices will be steered by the government’s current and emerging priorities. Here is a (no doubt) incomplete list, in no particular order:

    Increasing business R&D. The £22 billion is the government’s contribution to a bigger target – to increase the UK economy’s total R&D intensity from its current proportion of 1.7% of GDP to 2.4%. But most R&D comes from the private sector, in a roughly 2:1 ratio. So to achieve the overall target, the public money must be deployed in a way that maximises the chance of the private sector increasing its own spending in that 2:1 ratio. What will best persuade businesses – both UK owned and overseas owned – to spend another £18 billion or so a year on R&D in the UK?
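That “£18 billion or so” follows from applying the roughly 2:1 private:public ratio to the planned public increase (this is my reading of the arithmetic, not a figure spelled out in the Roadmap):

```python
# If private R&D is to keep its roughly 2:1 ratio to public R&D,
# the private sector must match the planned public increase twice over.
public_baseline_bn = 12.8   # government R&D spend, 2018
public_target_bn = 22.0     # target for 2024/25
private_to_public_ratio = 2.0

required_private_increase_bn = private_to_public_ratio * (
    public_target_bn - public_baseline_bn
)
print(round(required_private_increase_bn, 1))  # 18.4
```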

    Translating R&D spending into economic outcomes. The current economic crisis makes this even more pressing, so there will be even more emphasis on interventions which will plausibly lead to productivity increases and new jobs, on timescales of years rather than decades.

“Levelling up.” The economic underperformance of the UK outside the greater Southeast – including the relative underperformance of core cities like Manchester, the difficulties of deindustrialised towns and urban fringes, and the economic and social problems of rural and coastal peripheries – has achieved real political salience as the electoral centre of gravity of the Conservative party has moved north. The concentration of public R&D resources in the prosperous Southeast – as Tom Forth and I recently highlighted in our NESTA “Missing £4 billion” report – is increasingly recognised as part of the problem.

    Solving big societal problems. I believe the commitment of the government to net zero greenhouse gas emissions by 2050 is serious, but I don’t think policy makers yet realise the full scale of this economic transition. As this realisation takes hold the expectations on innovation and technology to deliver affordable solutions will only increase. Meanwhile the aftermath of the pandemic will prompt a reassessment of whether our “life sciences sector” has the optimal shape to support national health and well-being. The problems the UK is having in deploying a large-scale testing programme illustrate that strength in biotech and pharmaceutical research doesn’t automatically translate into diagnostic capacity. If and when vaccines and antibody therapeutics for COVID come on stream, there will be a tough test of the UK’s manufacturing capacity in the face of worldwide demand.

    Perceived problems in the culture of research in the UK and internationally. There is a strong perception in parts of government that all is not well in the culture of research in the UK. There’s a view that research culture itself is unhealthy, with insufficient autonomy for younger researchers and problems in the career structure, while the culture of funding bodies is believed to be too risk averse and bureaucratic.

Life after Brexit. The position of UK science in an international context is clearly in question in the aftermath of Brexit. The immediate problem is the nature of the UK’s relationship with the EU’s science programmes. It’s clear that there is a desire for the UK to associate with Horizon Europe, but this is a second order issue for the government, so if negotiations falter for other reasons then this may not happen, in which case there will be a need to find replacement programmes (particularly for the ERC, which is highly prized by the science establishment). The longer term issues are the nature of scientific relationships with other existing and emerging science powers, and ensuring an openness to scientific talent from the rest of the world.

    Economic and technological sovereignty. Finally, the rapidly changing attitude of the government to China has raised questions of the degree to which the UK can be autonomous in key areas of strategic technology. The saga of Huawei’s involvement in the 5G network, questions about the involvement of China in the nuclear new build programme, and a realisation of the limitations of global supply chains in the pandemic, have led to talk of retaining or rebuilding some of the UK’s technological sovereign capability in key areas. I don’t think policy makers yet fully appreciate how much this capability has been run down over the last few decades.

    Possible new policies suggested by the Roadmap

The roadmap reads as a rather open-ended document, but within it are some strong hints and indications of possible new policy directions. Here I’ve tried to extract some possible new policies that seem to be suggested, expressing them in a more concrete way than the Roadmap does, where necessary reading between the lines, and possibly on occasion extrapolating somewhat. I’ve suggested some of the questions that these proposals might provoke. In considering these possible new policies, we need to keep in mind the scale of intervention implied by the £22 billion target – i.e. £1.5 billion of additional spending each year.

    Raising our research ambitions

New mechanisms for funding will be introduced, which involve less bureaucracy and taking bigger bets: more long-term, investigator-led funding. The new UK ARPA-like agency has already been announced, but at £200m a year this is relatively small. It will sit outside UKRI. Will any other new mechanisms be left to the research councils, or can we expect more new agencies to be created?

Defence-related R&D could be substantially increased. This would address the funding gap for development relative to research, and it’s a sector in which there is existing capacity which could be expanded – in both the public and private sectors. But how can we avoid the waste that defence procurement is often accused of, and maximise spillovers to the civilian economy?

The government will fund large-scale “Moonshot” projects. Again, if done seriously this would lead to more development funding. What do we mean by a “moonshot”? To me, it needs to be an ambitious engineering project that delivers a concrete outcome (i.e. at least a full scale prototype) on a defined timescale, but which is difficult enough that it drives a substantial associated R&D programme to solve the problems that arise along the way. The questions it prompts include: how would we select them, how can we be confident that the UK has the capacity to deliver, and what scale of spending is involved? My first guess on the latter question is that if it isn’t measured in £ billions, it’s either not a proper moonshot or we’re not serious about succeeding.

    What are possible concrete examples?

  • An all electric long haul aeroplane, as mentioned in the Prime Minister’s recent speech. (I think this is technologically implausible – my guess now is that if we want long-haul flying in a zero carbon world we will do it by making synthetic hydrocarbon fuel from green hydrogen and carbon dioxide captured directly from air).
  • A generation 4 advanced modular fission reactor which is low waste, intrinsically fail-safe and generates enough process heat to produce hydrogen as well as electricity (I think the government should do this).
  • A working, scalable, quantum computer (In my view this would be an example of a bad choice because the UK is not competitive with existing major projects elsewhere in the world).
  • A commercial fusion reactor supplying significant, low cost electricity to the grid by 2040 (i.e. STEP – the Spherical Tokamak for Energy Production. I think the government will do this, and it probably should, in case it works.)
Inspiring and enabling talented people and teams

    A big increase in R&D spending won’t deliver results if there aren’t the talented people – at all levels – to do the work. Much potential homegrown talent is currently missed, due to the underrepresentation of women and black and minority ethnic people in research. The roadmap announces the creation of an “Office of Talent” to make it easier for overseas researchers to work and settle in the UK.

    The relationship between higher education and further education will be rethought, especially in the context of expanding intermediate level technical training. I believe that we need much more joined-up systems for further and higher education on a regional basis, with much easier routes between the different parts of the system, and much more cooperation to expand provision for adult and continuing education.

    Catapult Centres could be given a more explicit mandate to embrace technical training in their missions. Again, this needs to be done in a regional context, working with existing HE and FE institutions.

    There will be an expansion of postgraduate research training. Will responsibility for PGR training be left with the research councils? Do we think of PhD students as primarily researchers or as trainees? Currently, PhD students are funded at a level far below the actual cost of training them, so given the current financial difficulties of universities the appropriate funding level will need to be reconsidered.

    Innovation and productivity

The proportion of public R&D funding devoted to translational and applied research will be increased, with a particular focus on new medicines and treatments, and on defence research. What agencies will this funding be pushed through? Will funding for the NIHR be substantially increased? What will be the role of Innovate UK?

    Universities will be further incentivised to carry out knowledge exchange activities: HEIF funding is being increased, and the Knowledge Exchange Framework introduced. Care will be needed to create the right incentives here – perhaps they could be structured to encourage more regional collaboration between institutions?

The Catapult Network of translational research institutes could be restructured: “We will review whether they should all continue in their current form, exploring the potential to seize new opportunities.” There’s a broader question of whether the Catapult Network should continue to be run by InnovateUK, or be developed as an independent translational research agency with greater central coordination.

    New innovation zones and clusters should be created, based around existing and new innovation assets such as Catapult Centres, and the role of Catapult Centres in promoting local and regional economic growth made more explicit in their goals. What is the right balance between the regional and national missions of Catapult Centres?

    Levelling up R&D across the UK

    “We have already committed to developing a comprehensive and ambitious UK R&D Place Strategy together with the devolved administrations over the coming months.” Tom Forth and I have published a comprehensive set of suggestions for “levelling up” in our recent NESTA paper “The Missing £4 billion”.

    Central government will support local leaders in co-creating effective innovation approaches for their local economies. Should this be made formal, with cities/regions coming forward with “innovation deals” in return for devolved funding, as Tom Forth and I suggested?

    Some proportion of national R&D funding should be ring-fenced for particular regions, in order to make progress towards “levelling-up” R&D funding across the country, and/or devolved to those cities and regions that have demonstrated the capacity to create robust innovation strategies. How much of the “levelling up” agenda should be driven top-down as opposed to created bottom-up?

    All future decisions on R&D infrastructure investments should include an explicit consideration of their impacts on local and regional economies. This commitment is explicitly made in the Roadmap, though the issue will be the weight that is in practice attached to these factors relative to national considerations.

    There should be mechanisms for more local and regional voices in the advice given to central government agencies. The emphasis so far has been on UKRI, but what about NIHR, MoD, and any new agencies that emerge?

    Being at the forefront of global collaboration

    The immediate question here is what happens to the UK’s participation in EU science programmes. The stated intention is to negotiate participation in Horizon Europe and the Euratom research programme – but there is an “if”: “It is our ambition to fully associate to both programmes if we can agree a fair and balanced deal”. So there is a plan B:

    “If we do not formally associate to Horizon Europe or Euratom R&T, we will implement ambitious alternatives as quickly as possible from January 2021 and address the funding gap. As a first step we will launch an ambitious new Discovery Fund offering sizeable grants over long periods of time to talented early, mid and late-career researchers, whether already in the UK or coming here from anywhere in the world, to pursue discovery-led, ground-breaking research.” This is clearly intended as a substitute for the European Research Council. One shouldn’t underestimate the difficulty of rapidly establishing a single-nation programme that reproduces the rigour and credibility of the ERC.

    More funding will be made available for bilateral programmes with appropriate national partners across the world, in a way that is more responsive to new opportunities. This is in part a response to a long-standing complaint by Science Ministers that they don’t have any flexibility to assign such funds during overseas visits, but it raises the problem of how to make the choice of partner countries strategic, rather than simply dependent on the travel schedule of the Minister. European partners shouldn’t be neglected here.

    Ensuring a healthy R&D system

    Public sector research establishments (PSREs) will be strengthened and integrated into the wider system. They will be allowed to bid for funding from UKRI, which should come with full economic costs. What is the right division of labour between university-based research and R&D in PSREs? Is there a danger of the two parts of the system entering into sub-optimal competition?

    The PSRE network will be integrated into a true network of national laboratories, strengthened where necessary, with new organisations being created to fill obvious gaps. This needs a very clear view of national strategic priorities. One answer to the previous question is to differentiate more clearly between strategic science in support of national priorities and discovery science, but then this needs clarity about how universities and PSREs can most effectively collaborate.

    My concrete suggestion would be to create a new “Net Zero Delivery Agency” to take responsibility for the innovation that will be needed to reach the net zero greenhouse gas goal.

    The Government Office for Science will be strengthened and its resources increased, so that it can better coordinate science advice to government and act as an authoritative technology assessment agency. Increased funding for GO Science was announced in the March budget, which I welcome.

    The Research Excellence Framework will be reformed to reduce its bureaucratic overhead and focus more on measuring change and development. How to do this without introducing perverse incentives?

    University research will be funded at closer to full economic cost. Part of the reason that a larger proportion of the UK’s public research enterprise happens in universities than in other comparable countries is that this has seemed a cheaper way of doing research than carrying it out in free-standing research institutes. But, as we’re now about to find out, that’s been an illusion – in reality, universities have subsidised the cost of research using the surplus from teaching overseas students. This subsidy – amounting to about a couple of billion pounds a year across the system – has been dramatically exposed by the pandemic.

    What’s next?

    The last section of the document begins by saying: “This Roadmap is the start of a conversation”. This conversation needs to take place with some speed: over this summer and autumn, the government needs to put in place its future spending plans in a Comprehensive Spending Review. In normal times, we’d expect this to cover the next three years – 21/22, 22/23 and 23/24. It’s for the year after that – 24/25 – that the commitment to £22 billion R&D spending has been made, so these three years need to see substantial progress towards reaching that target, with concrete plans for those £billion-scale increases. But it takes time to build new institutions, to recruit suitable people, and to make evidence-based decisions about what projects to support.

    It’s natural to ask how robust this spending target, and the general priority being attached to R&D, will be to the shifting winds of politics. While the commitment of the current Number 10 operation to R&D seems not to be in doubt, it’s not obvious that there’s a deep commitment to research throughout the Conservative Party. It’s not difficult to imagine circumstances – perhaps a change in leadership following the inevitable economic difficulties that we’ll encounter recovering from the pandemic – in which that commitment is diluted.

    Of course, the spending target isn’t the ultimate goal; it’s the means to an end. That end is a more prosperous, more productive nation, with prosperity spread more equally across the country, on track to move its energy economy rapidly to a sustainable, net zero greenhouse gas emissions basis. It is these goals that should drive our emerging R&D strategy.