UK ARPA: An experiment in science policy?

This essay was published yesterday as part of a collection called “Visions of ARPA” by the think-tank Policy Exchange, in response to the UK government’s commitment to introduce a new science funding agency devoted to high-risk, high-return projects, modelled on the US agency DARPA (originally ARPA). All the essays are well worth reading; the other authors are William Bonvillian; Julia King (Baroness Brown); two former science ministers, David Willetts and Jo Johnson; Nancy Rothwell and Luke Georghiou; and Tim Bradshaw. My thanks to Iain Sinclair for editing.

The UK’s research and innovation funding agency – UKRI – currently spends £7 billion a year supporting R&D in universities, public sector research establishments and private industry [1]. The Queen’s Speech in December set out an intention to substantially increase public funding for R&D, with the goal of raising the R&D intensity of the UK economy – including public and private spending – from its current level of 1.7% of GDP to a target of 2.4%. It’s in this context that we should judge the Government’s intention to introduce a new approach, providing “long term funding to support visionary high-risk, high-payoff scientific, engineering, and technology ideas”. What might this new approach – inevitably described as a British version of the legendary US funding agency DARPA – look like?

If we want to support visionary research, whose applications may be 10-20 years away, we should be prepared to be innovative – even experimental – in the way we fund research. And just as we need to be prepared for research not to work out as planned, we should be prepared to take some risks in the way we support it, especially if the result is less bureaucracy. There are some lessons to take from the long (and, it needs to be stressed, not always successful) history of ARPA/DARPA. To start with its operating philosophy, an agency inspired by ARPA should be built around the vision of the programme managers. But the operating philosophy needs to be underpinned by an enduring mission and clarity about who the primary beneficiaries of the research should be. And finally, there needs to be a deep understanding of how the agency fits into a wider innovation landscape.

More reactions to “Resurgence of the Regions”

The celebrity endorsement of my “Resurgence of the regions” paper has led to a certain amount of press interest, which I summarise here.

The Times Higher naturally focuses on the research policy issues. I’m interviewed in the piece “Tory election victory sets scene for UK research funding battle”, which focuses on a perceived tension between a continuing emphasis on supporting “excellence” and disruptive innovation based on existing centres, and my agenda of boosting R&D in the regions to redress productivity imbalances.

Peter Franklin asks, in UnHerd, “Is this the Tories’ real manifesto?”

“Alas, no”, I expect is the answer to that question, but this article does a really great job of summarising the content of my paper. It also includes this hugely generous quotation from Stian Westlake: “The mini-storm over Dom Cummings citing @RichardALJones’s recent paper on innovation policy prompted me to re-read it, and *boy* is it good. I agree with more or less everything, and as a bonus it is delightfully written… On a couple of occasions I’ve been asked by a new science minister ‘what should I read on innovation?’, and it was always quite a hard question to answer. But now, I’d just say ‘read that’.”

I suspect Franklin’s excellent article was instrumental in focusing some wider attention on my paper. The Sunday Times’s Economics Editor, David Smith, agreed that “A renewed focus on innovation can deliver a resurgence in the regions”, while Oliver Wright, in the Times, focused on the industrial strategy implications of the net zero greenhouse gas target, and in particular nuclear energy, in a piece entitled “Reinvigorate north with nuclear power stations”.

It was left to Alan Lockey, writing in CapX, to point out the tension between the government activism I call for and more traditional laissez-faire Conservative attitudes, putting this tension at the centre of what he called “The coming battle for modern Conservatism”. On the one hand, Lockey described the arguments as being “a bit boring”, “comfort-zone industrial policy instincts of Ed Miliband-era social democracy” from “a hitherto politically obscure physicist”… but he also described it “as an object lesson in how to construct an expansive and data-rich case for systemic public policy change … pretty near faultless. The ideas too, I find to be entirely unproblematic”. As he later graciously put it on Twitter, “I was merely just trying to convey that it seemed less controversial perhaps to those of us who are, basically, boring social democrats who see nothing wrong with industrial activism!”

On being endorsed by Dominic Cummings

The Prime Minister’s chief advisor, Dominic Cummings, wrote a blogpost yesterday about the need for leave voters to mobilise to make sure the Conservatives are elected on 12 December. At the end of the post, he writes “Ps. If you’re interested in ideas about how the new government could really change our economy for the better, making it more productive and fairer, you’ll find this paper interesting. It has many ideas about long-term productivity, science, technology, how to help regions outside the south-east and so on, by a professor of physics in Sheffield”. He’s referring to my paper “A Resurgence of the Regions: rebuilding innovation capacity across the whole UK”.

As I said on Twitter, “Pleased (I think) to see my paper “Resurgence of the regions” has been endorsed in Dominic Cummings’s latest blog. Endorsement not necessarily reciprocated, but all parties need to be thinking about how to grow productivity & heal our national divides”.

I provided a longer reaction to a Guardian journalist, which resulted in this story today: Academic praised by Cummings is remain-voting critic of Tory plans. Here are the comments I made to the journalist which formed the basis of the story:

I’m pleased that Dominic Cummings has endorsed my paper “Resurgence of the regions”. I think the analysis of the UK’s current economic weaknesses is important and we should be talking more about it in the election campaign. I single out the terrible record of productivity growth since the financial crisis, the consequences of that in terms of flat-lining wages, the role of the weak economy in the fiscal difficulties the government has in balancing the books, and (as others have done) the profound regional disparities in economic performance across the country. I’d like to think that Cummings shares this analysis – the persistence of these problems, though, is hardly a great endorsement for the last 9.5 years of Conservative-led government.

In response to these problems we’re going to need some radical changes in the way we run our economy. I think science and innovation are going to be important for this, and clearly Cummings thinks that too. I also offer some concrete suggestions for how the government needs to be more involved in driving innovation – especially for the urgent problem we have of decarbonising our energy supply to meet the target of net zero greenhouse gas emissions by 2050. It’s good that the Conservative Party has signed up to a 2050 net zero greenhouse gas target, but the scale of the measures it proposes is disappointingly timid – as I explain in my paper, reaching this goal is going to take much more investment, and more direct state involvement in driving innovation to increase the scale of low carbon energy and drive down its cost. This needs to be a central part of a wider industrial strategy.

I welcome all three parties’ commitment to raise the overall R&D intensity of the economy (to 2.4% of GDP by 2027 for the Conservatives, 3% of GDP by 2030 for Labour, and 2.4% by 2027 with a longer-term aspiration of 3% for the Lib Dems). The UK’s poor record of R&D investment compared to other developed countries is surely a big contributing factor to our stagnating productivity. But this is also a stretching target – we’re currently at 1.7%. It’s going to need substantial increases in public spending, but even bigger increases in R&D investment from the private sector, and we’re going to need to see much more concrete plans for how government might make this happen. Again, my paper has some suggestions, with a particular focus on building new capacity in those parts of the country where very little R&D gets done – and which, not coincidentally, have the worst economic performance (Wales, Northern Ireland, and the North of England in particular).
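To get a rough sense of what these percentages mean in cash terms, here is a minimal back-of-the-envelope sketch in Python. The 1.7% starting point and the 2.4% and 3% targets are the figures quoted above; the GDP figure of roughly £2.2 trillion is my own assumption, and growth in GDP between now and the target dates is ignored.

# Rough scale of the UK's R&D intensity targets (a back-of-the-envelope sketch).
# Assumption: UK GDP of roughly £2.2 trillion, held constant (growth to 2027/2030 ignored).
gdp_billion = 2200            # assumed UK GDP, in £ billion
current_intensity = 0.017     # 1.7% of GDP, from the text
targets = {"2.4% target": 0.024, "3% aspiration": 0.030}

current_spend = gdp_billion * current_intensity
print(f"Current R&D spend (public + private): ~£{current_spend:.0f}bn a year")

for name, intensity in targets.items():
    spend = gdp_billion * intensity
    print(f"{name}: ~£{spend:.0f}bn a year, an extra ~£{spend - current_spend:.0f}bn")

On these assumptions the gap is roughly £15 billion a year for the 2.4% target and nearly £30 billion a year for 3%, most of which, as noted above, would have to come from the private sector.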

As for Cummings’s views on Brexit: I voted remain, not least because I thought that a “leave” vote would result in a period of very damaging political chaos for the UK. I can’t say that subsequent events have made me think I was wrong on that. I do think that it would be possible for the UK to do OK outside the EU, but to succeed post-Brexit we’ll need to stay close to Europe in matters such as scientific cooperation (preferably through associating with EU science programmes like the European Research Council), and in matters related to nuclear technology. We will need to be a country that welcomes talented people from overseas, and provides an attractive destination for overseas investment – particularly important for innovation, where more than half of the UK’s business R&D is done by overseas-owned firms. The need to have a close relationship with our major trading partners will mean that we’ll need to stay in regulatory alignment with the EU (very important, for example, for the chemicals industry) and minimise frictions for industries like the automotive industry, where the UK is closely integrated into European supply chains, and for the high value knowledge-based services which are so important for the UK economy. It doesn’t look like that’s the direction of travel the Conservatives are currently going down.

Whatever happens in the next election, anyone who has any ambition to heal the economic and social divides in this country needs to be thinking about the issues I raise in my paper.

Rock climbing and the economics of innovation

The rock climber Alex Honnold’s free solo ascent of El Capitan is inspirational in many ways. For economist John Cochrane, watching the film of the ascent has prompted a blogpost: “What the success of rock climbing tells us about economic growth”. He concludes that “Free Solo is a great example of the expansion of ability, driven purely by advances in knowledge, untethered from machines.” As an amateur in both rock climbing and innovation theory, I can’t resist some comments of my own. I think it’s all a bit more complicated than Cochrane thinks. In particular, his argument that Honnold’s success tells us that knowledge – and the widespread communication of knowledge – is more important than new technology in driving economic growth doesn’t really stand up.

The film “Free Solo” shows Honnold’s 2017 ascent of the 3000 ft cliff El Capitan, in the Yosemite Valley, California. The climb was done free (i.e. without the use of artificial aids like pegs to make progress), and solo – without ropes or any other aids to safety. How come, Cochrane asks, rock climbers have got so much better at climbing since El Cap’s first ascent in 1958, which took 47 days, done with “siege tactics” and every artificial aid available at the time? “There is essentially no technology involved. OK, Honnold wears modern climbing boots, which have very sticky rubber. But that’s about it. And reasonably sticky rubber has been around for a hundred years or so too.”

Hold on a moment here – no technology? I don’t think the history of climbing really bears this out. Even the exception that Cochrane allows, sticky rubber boots, is more complicated than he thinks.

When the modern sport of climbing began, more than a hundred years ago, people wore boots – nailed boots – on their feet (as they would do for pretty much any outdoor activity). There is a lost technology of the best types of nails and nailing patterns to use, but it’s true that, as harder climbs were done in the 1920s and 30s, the leading climbers of the day tended to use tennis shoes or plimsolls for the hardest climbs. But these were everyday footwear, in no way designed for climbing.

I believe the first shoes designed specifically for rock climbing, of the kind that would be recognised as the ancestors of today’s shoes, came from France. These were designed by the alpinist Pierre Allain for use on the sandstone boulders of the Fontainebleau forest, a favoured training ground for the climbers of Paris. By the time I started climbing, in the 1970’s, the descendants of these shoes – the EB Super Gratton – had an almost complete worldwide monopoly on climbing shoes. They were characterised by a tight fit, a treadless rubber sole and a wide rand, allowing precise footwork and good friction on dry rock.

In 1982 the makers of EBs made a “New Coke”-like marketing blunder, introducing a new model with a moulded sole – probably cheaper to manufacture, but thicker and less precise than the original. This might not have mattered given their existing market position, but a then unheard-of Spanish shoe company – Boreal – had recently introduced a model of their own, with a sole made of a new kind of high friction rubber.

Rubber is a strange material, and the microscopic origins of friction in rubber are different to those in more conventional materials like metals. When a climber steps on a tiny foothold, the sole starts to slide against the rock, very slowly, usually imperceptibly. As the rubber slides past the asperities, the internal motions within the bulk of the rubber, of molecule against molecule, dissipate energy – and the greater the rate of energy dissipation, the higher the friction. This energy dissipation, though, is a very strongly peaked function of temperature – and as a consequence, a given rubber compound will have a temperature at which the friction is at a maximum.
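As a caricature of that temperature dependence, here is a minimal sketch. The idea that friction tracks a peaked viscoelastic loss comes from the paragraph above; the Gaussian shape and all the numbers (peak temperatures, widths, the maximum coefficient) are invented purely for illustration and are not measured properties of any real sole rubber.

import math

def toy_friction(temp_c, peak_temp_c, width_c=15.0, mu_max=1.2):
    """Toy model: the friction coefficient peaks at peak_temp_c and falls away
    on either side, mimicking a peaked viscoelastic loss. Illustrative only."""
    return mu_max * math.exp(-((temp_c - peak_temp_c) / width_c) ** 2)

# Two hypothetical compounds: one whose friction peaks well below typical crag
# temperatures, one whose peak sits near room temperature (invented parameters).
for temp in (0, 10, 20, 30):
    mu_cold_peak = toy_friction(temp, peak_temp_c=0.0)
    mu_warm_peak = toy_friction(temp, peak_temp_c=20.0)
    print(f"{temp:>2}°C: cold-peaked compound mu={mu_cold_peak:.2f}, "
          f"warm-peaked compound mu={mu_warm_peak:.2f}")

On these made-up numbers, a compound whose friction peaks near room temperature grips noticeably better over the 10–30°C range a climber is likely to meet, which is the essence of the advantage described next.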

Boreal, by accident or design, had found a rubber compound whose friction peaked much closer to room temperature than that of EBs. Boreal’s new climbing boot – the “Firé” – swept the marketplace. The increased friction, and the advantage this gave, were obvious both to the leading climbers of the day and to much more average performers. I was in the latter category, and succumbed to the trend. The improvement in performance the new shoes made possible was immediately tangible, the only downside being that Firés were cripplingly uncomfortable. Soon US and Italian competitors started selling boots with comparably high friction rubber that were actually foot-shaped.

Modern rock boots do make a difference, but this isn’t really the crucial technology that has enabled hard rock climbing. What’s made the biggest difference – both to the wider popularity of the sport and the achievements of its leading proponents – has been the development of technologies that allow one to fall off without dying.

Hold on, you might say here – wasn’t Alex Honnold climbing solo, without ropes, in a situation in which if he fell he would most certainly die? Yes, indeed, but Honnold didn’t get to be a good climber by doing a lot of soloing, he got to be a good soloist by doing a lot of climbing. Most of that climbing – especially the climbing where he was pushing himself – was done roped. To get himself ready for his El Cap solo, he spent hundreds of hours on the route, roped, working out and memorising all the moves.

When climbing started, every climb was effectively a solo, at least for the leader. Before the Second World War, climbing ropes were made of natural fibres – hemp or manila. They were strong – strong enough to hold the second on the rope if they slipped. But they were brittle, and for the leader, any fall that would put a shock load on the rope was likely to break it. “The leader must not fall” was the stern instruction of books of the time. The knowledge that a fall would lead to death or serious injury was ever-present for a pre-war climber pioneering a new hard route, and it’s not difficult to imagine that this was a brake on progress.

As in other areas of technology, the war changed things. The new artificial fibre nylon was put into mass production for parachute cord for aircrew and airborne troops; its strength, resilience and elasticity made the postwar surpluses of the fibre ideal for making climbing ropes. Together with metal snap-links, nylon ropes made it possible to imagine a leader surviving a fall – the rope could be clipped to an anchor in the rock to make a “running belay”, limiting the length of the fall. In the USA and the European Alps, the anchors would usually be metal pegs hammered into cracks, while on the smaller crags of the UK a tradition developed of using jammed machine nuts threaded on loops of nylon.
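Why the elasticity matters can be sketched with a simple spring model of the rope. This is an idealised toy calculation of my own, not anything from a rope manufacturer: the climber’s weight and the two rope stiffnesses are illustrative assumptions, chosen so that the stretchy rope gives peak forces of the order quoted for modern dynamic ropes.

import math

def peak_force_kN(mass_kg, fall_factor, rope_modulus_kN):
    """Peak force on a falling leader, treating the rope as a linear spring.
    The energy balance m*g*(h + x) = 0.5*k*x^2, with k = modulus / rope length,
    gives F = mg + sqrt((mg)^2 + 2*mg*modulus*fall_factor)."""
    g = 9.81
    mg = mass_kg * g                      # newtons
    modulus = rope_modulus_kN * 1000.0    # rope force per unit strain, newtons
    force = mg + math.sqrt(mg**2 + 2.0 * mg * modulus * fall_factor)
    return force / 1000.0                 # kilonewtons

climber = 80.0       # kg, assumed
fall_factor = 1.0    # fall height divided by length of rope paid out

# Illustrative stiffnesses: a stretchy nylon rope versus a much stiffer
# natural-fibre rope (both numbers are assumptions for this sketch).
for label, modulus in (("stretchy nylon-like rope", 25.0),
                       ("stiff hemp-like rope", 250.0)):
    print(f"{label}: peak force ~{peak_force_kN(climber, fall_factor, modulus):.1f} kN")

In this model the peak force depends on the fall factor (fall height divided by rope paid out) and on how stretchy the rope is, not on the length of the fall itself; a stiff natural-fibre rope transmits a force that neither the rope nor the climber could survive, while a stretchy nylon rope keeps it to a few kilonewtons.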

By the 1960’s and 70’s, the likelihood was that a leader would survive a fall, but you wouldn’t want to do it too often. The job of arresting the fall went to the second, who would pass the rope round their back and use the force of their grip and the friction of the rope around their body to hold the fall. You had to be attentive, quick and decisive to do this without getting a bad friction burn, or at worst letting the rope go entirely. The crudest mechanical friction devices were devised in the early 70’s, and have now been developed to the point that a second no longer needs strength or skill to hold the rope of a falling climber. Meanwhile the leader would be tied on to the rope with a simple knot round the waist, making a fall painful – and a prolonged period of dangling, after a fall from overhanging rock, potentially fatal through asphyxiation. Simple but effective harnesses were developed in the 60’s and 70’s, which spread the force of arresting a fall onto the buttocks and thighs, and made the sudden stop at the end of a leader fall bearable, if not entirely comfortable.

In California, it was the particular character of the rock and the climbs, especially in Yosemite, that drove developments in the technology for anchoring the rope to the rock. Yvon Chouinard realised that the mild steel pegs used in the European Alps weren’t suitable for the hard granite of Yosemite, and he developed optimally shaped pegs from hard chrome-molybdenum alloy steel – the bongs, blades and leepers that I just about remember from my youth. But like other technological developments, this one had its downsides – the repeated placement and removal of these pegs from the cracks led to scarring and damage, which in the climate of heightened environmental awareness in the 60’s and 70’s led to some soul-searching by US climbers. A “clean-climbing” movement developed, with Chouinard himself one of its leaders. To replace steel pegs, the British tradition of using jammed machine nuts as anchors was developed further. Purpose-designed chocks and wedges were marketed, like Chouinard’s cunningly designed “hexcentrics”, which would cam under load to hold even in parallel-sided cracks.

It was another Californian devotee of Yosemite who made the real breakthrough in clean climbing protection, though. Ray Jardine, an aerospace engineer, devised an ingenious spring-loaded camming device that was easily placed and would hold a fall even in a parallel-sided or slightly flared crack. These were patented and commercialised as “Friends”. Many developments of this idea have since been put on the market, and these form the basis of the “rack” of anchoring equipment that climbers carry today.

It’s this combination of strong and resilient nylon ropes, able to absorb the energy of a long fall, automatic braking gadgets to hold the rope when a fall happens, reliable devices for anchoring the rope to the rock, and harnesses that spread the load of a fall across the climber’s body, that has got us to where we are today, where climbers can practise harder and harder routes, (mostly) safe in the knowledge that a fall won’t be fatal, or even that uncomfortable.

This is not to say that knowledge isn’t important, of course. All this equipment needs skill to use – and knowledge has helped in the sheer physical aspects of getting up steep rock. As well as the new technology, one of the causes of the big advances in rock climbing standards in the 1980’s was undoubtedly a change in attitude amongst leading climbers. Training was taken much more seriously than it had been before: training techniques were imported from athletics and gymnastics, artificial climbing walls were developed, and the discipline of trying out very hard moves close to the ground on boulders – pioneered by the American mathematician and gymnast John Gill – became popular.

I think one kind of knowledge is particularly important in climbing – and maybe in other areas of human endeavour, too. That’s simply the knowledge that something has already been done – the existence proof that a feat is possible. Guidebooks record that a climb has been done and where it goes, though not usually how to do it. To know in advance the physical details of how a climb is done – what climbers call “beta” – is considered to lessen the achievement of a subsequent ascent. But simply to know that the climb is possible (and have some idea of how hard it is going to be) is an important piece of information in itself.

How is knowledge transmitted? We have books – instructional books of technique, and guidebooks to particular climbing areas. And now we have the internet, so one can read and post questions on climbers’ internet forums. I’m not sure how much this has added to more traditional ways of conveying information – discussions on the most popular UK climbing forum seem mostly to consist of endless arguments about Brexit. But I do think there is one change that modern times have brought that makes a huge difference to knowledge transmission, and that is the advent of cheap air travel.

My first overseas climbing trips (in 1981 and 1982) were to the French Alps. These were hugely important to my development as a climber, and undoubtedly some part of that came from interactions with climbers from other countries with different traditions and different techniques. Big climbing centres tended to have well-known places where climbers from different countries stayed and mixed (the squalid informal campsite known as Snell’s Field in the case of Chamonix, the legendary Camp 4 for Yosemite). I climbed with a couple of outstanding Australian climbers from the campsite while I was there; we picked up tips on big wall climbing from a Yosemite habitué, and I came home with half a dozen beautiful titanium ice screws, light, thin-walled, and sharp. Such things were unobtainable in the West at the time; I’d bartered them from some East European climbers, but they had undoubtedly been knocked off after hours in some Soviet aircraft factory.

But getting to Chamonix had taken me nearly 24 hours on a bus. Nowadays, with easy and cheap air travel, climbers can take several holidays a year: to the sunshine of Spain, Greece or Thailand; to the big mountains of the Himalayas or South America; to desert climbing in Morocco, Jordan or Oman, Nevada, Utah or Arizona; to the subarctic conditions of Patagonia or Baffin Island; or to the more traditional centres like the Dolomites or Yosemite. This does lead to a rapid spread of attitudes and techniques. It’s a paradox, of course, that climbers, who love the wilderness and the world’s beautiful places, and are more environmentally conscious than most, make, through their flying, such an above-average contribution to climate change. Can this go on?

So if John Cochrane has learnt the wrong lesson from rock climbing, what better lessons should we take away from all this?

Some economists love simple stories, especially when they support their ideological priors, but a bit of knowledge of history often reveals that the truth is somewhat more complicated. More importantly, perhaps, we should remember that technological innovation isn’t just about iPhones and artificial intelligence. All around us – in our homes, in everyday life, in our hobbies and pastimes – we can see, if we care to look, all kinds of technological innovation in products and the materials that make them, innovations that collectively lead to overall economic growth. Technological innovation doesn’t have to be about giant leaps and moonshots – even mundane things like shoe soles and ropes tell a story of a whole series of incremental changes that together add up to progress.

And to return to Alex Honnold, perhaps the most important lesson a free-market loving economist should draw is that sometimes people will do extraordinary things without the motivation of money.

The challenge of deep decarbonisation

This is roughly the talk I gave in the neighbouring village of Grindleford about a month ago, as part of a well-attended community event organised by Grindleford Climate Action.

Thanks so much for inviting me to talk to you today. It’s great to see such an impressive degree of community engagement with what is perhaps the defining issue we face today – climate change. What I want to talk about today is the big picture of what we need to do to tackle the climate change crisis.

The title of this event is “Without Hot Air” – I know this is inspired by the great book “Sustainable Energy without the Hot Air”, by the late David MacKay. David was a physicist at the University of Cambridge; he wrote this book – which is free to download – because of his frustration with the way the climate debate was being conducted. He became Chief Scientific Advisor to the Department of Energy and Climate Change in the last Labour government, but died, tragically young at 49, in 2016.

His book is about how to make the sums add up. “Everyone says getting off fossil fuels is important”, he says, “and we’re all encouraged to ‘make a difference’, but many of the things that allegedly make a difference don’t add up.”

It’s a book about being serious about climate change, putting into numbers the scale of the problem. As he says “if everyone does a little, we’ll achieve only a little.”

But to tackle climate change we’re going to need to do a lot. As individuals, we’re going to need to change the way we live. But we’re going to need to do a lot collectively too, in our communities, but also nationally – and internationally – through government action.

Net zero greenhouse gas emissions by 2050?

The Government has enshrined a goal of achieving net zero greenhouse gas emissions by 2050 in legislation. This is a very good idea – it’s a better target than a notional limit on the global temperature rise, because it’s the level of greenhouse gas emissions that we have direct control over.

But there are a couple of problems.

We’ve emitted a lot of greenhouse gases already, and even if we – we being the whole world here – reach the 2050 target, we’ll have emitted a lot more. So the target doesn’t stop climate change, it just limits it – perhaps to 1.5–2°C of warming or so.

Even worse, the government just isn’t being serious about doing what would need to be done to reach the target. The trouble is that 2050 sounds a long way off for politicians who think in terms of 5 year election cycles – or, indeed, at the moment, just getting through the next week or two. But it’s not long in terms of rebuilding our economy and society.

Just think how different the world is now from the world of 1990. In terms of the infrastructure of everyday life – the buildings, the railways, the roads – the answer is, not very. I’m not quite driving the same car, but the trains on the Hope Valley Line are the same ones – and they were obsolete then! Most importantly, our energy system is still dominated by hydrocarbons.

I think on current trajectory there is very little chance of achieving net zero greenhouse gas emissions by 2050 – so we’re heading for 3 or 4 degrees of warming, a truly alarming and dangerous prospect.

What do we mean by scientific productivity – and is it really falling?

This is the outline of a brief talk I gave as part of the launch of a new Research on Research Institute, with which I’m associated. The session my talk was in was called “PRIORITIES: from data to deliberation and decision-making. How can RoR support prioritisation & allocation by governments and funders?”

I want to focus on the idea of scientific productivity – how it is defined and how we can measure it, whether it is declining – and, if it is, what we can do about it.

The output of science increases exponentially, by some measures…

…but what do we get back from that? What is the productivity of the scientific enterprise – some measure of the output of science per unit input?

It depends on what we think the output of science is, of course.

We could be talking of some measure of the new science being produced and its impact within the scientific community.

But I think many of us – from funders to the wider publics who support that science – might also want to look outside the scientific community. How can we measure the effectiveness with which scientific advances are translated into wider socio-economic goals? As the discourses of “grand challenges” and “mission driven” research become more widely taken up, how will we tell whether those challenges and missions have been met?

There is a gathering sense that the productivity of the global scientific endeavour is declining or running into diminishing returns. A recent article by Michael Nielsen and Patrick Collison asserted that “Science is getting less bang for its buck”, while a group of distinguished economists have answered in the affirmative their own question: “Are ideas getting harder to find?” This connects to the view amongst some economists that we have seen the best of economic growth and are living in a new age of stagnation.

Certainly the rate of innovation in some science-led industries seems to be slowing down. The combination of Moore’s law and Dennard scaling, which brought us exponential growth in computing power in the 80’s and 90’s, started to level off around 2004 and has since slowed to a crawl, despite continuing growth in the resources devoted to it.

It’s the Industrial that enables the Artisanal

It’s come to this, even here. My village chippy has “teamed up” with a “craft brewery” in the next village to sell “artisanal ales” specially brewed to accompany one’s fish and chips. This prompts me to reflect – is this move from the industrial to the artisanal really a reversion to a previous, better world? I don’t think so – instead, craft beer is itself a product of modernity. It depends on capital equipment that is small in scale but high in technology – on stainless steel, electrical heating and refrigeration, computer-powered process control. And its ingredients aren’t locally grown and processed – the different flavours introduced by new hop varieties are the outcome of world trade. What’s going on here is not a repudiation of industrialisation, but its miniaturisation, the outcome of new technologies which erode previous economies of scale.

A craft beer from the Eyam Brewery, on sale at the Toll Bar Fish and Chip Shop, Stoney Middleton, Derbyshire.

Beer was one of the first industrial foodstuffs. In Britain, domestic-scale beer making began to be replaced by factory-scale breweries in the 18th century, as soon as transport improved enough to allow the distribution of their products beyond their immediate locality. Burton-on-Trent was an early centre, whose growth was catalysed by the opening up of the Trent navigation in 1712. This allowed beer to be transported by water via Hull to London and beyond. By the late 18th century some 2000 barrels a year of Burton beer were being shipped to Baltic ports like Danzig and St Petersburg.

As in other process industries, this expansion was driven by fossil fuels. Coal from the nearby Staffordshire and Derbyshire coalfields provided process heat. The technological innovation of coking, which produced a purer carbon fuel that burnt without sulphur-containing fumes, was developed as early as 1640 in Derby, so coal could be used to dry malt without introducing off-flavours (this use of coke long predated its much more famous use as a replacement for charcoal in iron production).

By the late 19th century, Burton-on-Trent had become a world centre of beer brewing, producing more than 500 million litres a year, for distribution by the railway network throughout the country and export across the world. This was an industry that was fossil fuel powered and scientifically managed. Coal-powered steam engines pumped the large volumes of liquid around, steam was used to provide controllable process heat, and most crucially the invention of refrigeration was the essential enabler of year-round brewing, allowing control of temperature in the fermentation process, by now scientifically understood by the cadre of formally trained chemists employed by the breweries. In a pint of Marston’s Pedigree or a bottle of Worthington White Shield, what one is tasting is the outcome of the best of 19th century food industrialisation, the mass production of high quality products at affordable prices.

How much of the “craft beer revolution” is a departure from this industrial past? The difference is one of scale – steam engines are replaced by electric pumps, coal-fired furnaces by heating elements, and master brewers by thermostatic control systems. Craft beer is not a return to a preindustrial, artisanal age – instead it’s based on industrial techniques, miniaturised with new technology, and souped up by the products of world trade. This is a specific example of a point made more generally in Rachel Laudan’s excellent book “Cuisine and Empire” – so-called artisanal food comes after industrial food, and is in fact enabled by it.

What more general lessons can we learn from this example? The energy economy is another place where some people are talking about a transition from a system that is industrial and centralised to one that is small scale and decentralised – one might almost say “artisanal”. Should we be aiming for a new decentralised energy system – a world of windmills and solar cells and electric bikes and community energy trusts?

To some extent, I think this is possible and indeed attractive, leading to a greater sense of control and involvement by citizens in the provision of energy. But we should be under no illusions – here too, the artisanal has to be enabled by the industrial.

Carbon Capture and Storage: technically possible, but politically and economically a bad idea

It’s excellent news that the UK government has accepted the Climate Change Committee’s recommendation to legislate for a goal of achieving net zero greenhouse gas emissions by 2050. As always, though, it’s not enough to will the end without attending to the means. My earlier blogpost stressed how hard this goal is going to be to reach in practice. The Climate Change Committee does provide scenarios for achieving net zero, and the bad news is that the central 2050 scenario relies to a huge extent on carbon capture and storage. In other words, it assumes that we will still be burning fossil fuels, but that we will be mitigating the effect of this continued dependence by capturing the carbon dioxide released when gas is burnt and storing it, into the indefinite future, underground. Some use of carbon capture and storage is probably inevitable, but in my view such large-scale reliance on it is, politically and economically, a bad idea.

In the central 2050 net zero scenario, 645 TWh of electricity is generated a year – more than double the 2017 value of 300 TWh, reflecting the electrification of sectors like transport. The basic strategy for deep decarbonisation has to be, as a first approximation, to electrify everything, while simultaneously decarbonising power generation: so far, so good.

But even with aggressive expansion of renewable electricity, this scenario still calls for 150 TWh to be generated from fossil fuels, in the form of gas power stations. To achieve zero carbon emissions from this fossil fuel powered electricity generation, the carbon dioxide released when the gas is burnt has to be captured at the power stations and pumped through a specially built infrastructure of pipes to disused gas fields in the North Sea, where it is injected underground for indefinite storage. This is certainly technically feasible – to produce 150 TWh of electricity from gas, around 176 million tonnes of carbon dioxide a year will be produced. For comparison, currently about 42 million tonnes of natural gas a year are taken out of the North Sea reservoirs, so reversing the process at four times the scale is undoubtedly doable.
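As a quick sanity check on the “four times the scale” comparison, using only the two figures quoted above (this is a simple mass ratio, and ignores the very different handling properties of compressed carbon dioxide and natural gas):

co2_to_store_mt = 176   # million tonnes of CO2 a year, figure quoted above
gas_extracted_mt = 42   # million tonnes of natural gas a year from the North Sea

ratio = co2_to_store_mt / gas_extracted_mt
print(f"Mass of CO2 to inject vs gas currently extracted: {ratio:.1f}x")
# ~4.2x, roughly the "four times the scale" quoted in the text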

In fact, more carbon capture and storage will be needed than the 176 million tonnes from the power sector, because the net zero greenhouse gas plan relies on it in four distinct ways. In addition to allowing us to carry on burning gas to make electricity, the plan envisages capturing carbon dioxide from biomass-fired power stations too. This should lead to a net lowering of the amount of carbon dioxide in the atmosphere – a so-called “negative emissions technology”. The idea is that one offsets the remaining positive carbon emissions from hard-to-decarbonise sectors like aviation with these “negative emissions” to achieve overall net zero emissions.

Meanwhile the plan envisages the large scale conversion of natural gas to hydrogen, to replace natural gas in industry and domestic heating. Reforming methane with steam yields hydrogen, which can be burnt in domestic boilers without carbon emissions, together with carbon dioxide, which needs to be captured at the hydrogen plant and pumped away to the North Sea reservoirs. Finally some carbon dioxide producing industrial processes will remain – steel making and cement production – and carbon capture and storage will be needed to render these processes zero carbon. These latter uses are probably inevitable.
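The stoichiometry behind this is worth setting out. The overall steam reforming reaction (CH4 + 2 H2O → CO2 + 4 H2) and the heating value of hydrogen are textbook figures; the sketch below is mine, and it ignores the energy needed to drive the process, so the real carbon dioxide burden per unit of hydrogen is higher than this floor.

# Overall steam methane reforming: CH4 + 2 H2O -> CO2 + 4 H2
M_CO2 = 44.0   # g/mol
M_H2 = 2.0     # g/mol

kg_co2_per_kg_h2 = M_CO2 / (4 * M_H2)        # 44 g of CO2 per 8 g of H2
h2_lower_heating_value_kwh_per_kg = 33.3     # ~120 MJ/kg, textbook value

kg_co2_per_kwh_h2 = kg_co2_per_kg_h2 / h2_lower_heating_value_kwh_per_kg
print(f"{kg_co2_per_kg_h2:.1f} kg CO2 per kg of H2 (stoichiometric minimum)")
print(f"{kg_co2_per_kwh_h2:.2f} kg CO2 per kWh of hydrogen burnt")

Even on this idealised accounting, roughly as much carbon dioxide is produced per unit of useful heat as when the natural gas is burnt directly; the carbon does not disappear, it is simply concentrated at the hydrogen plant, where it must be captured.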

But I want to focus on the principal envisaged use of carbon capture and storage – as a way of avoiding the need to move to entirely low carbon electricity, i.e. through renewables like wind and solar, and through nuclear power. We need to take a global perspective – if the UK achieves net zero greenhouse gas status by 2050, but the rest of the world carries on as normal, that helps no-one.

In my opinion, the only way we can be sure that the whole world will decarbonise is if low carbon energy – primarily wind, solar and nuclear – comes in at a lower cost than fossil fuels, without subsidies or other intervention. The cost of these technologies will surely come down: for this to happen, we need both to deploy them in their current form, and to do research and development to improve them. We need both the “learning by doing” that comes from implementation, and the cost reductions that will come from R&D, whether that’s making incremental process improvements to the technologies as they currently stand, or developing radically new and better versions of these technologies.

But no amount of technological improvement will ever make carbon capture and storage cheaper than simply not capturing the carbon.

It’s always tempting fate to say “never” for the potential for new technologies – but there’s one exception, and that’s when a putative new technology would need to break one of the laws of thermodynamics. No-one has ever come out ahead betting against these.

Carbon capture and storage will always require additional expenditure, over and above the cost of an unabated gas power station. It needs both:

• up-front capital costs, for the plant to separate the carbon dioxide in the first place and for the infrastructure to pipe the carbon dioxide long distances and pump it underground;
• lowered conversion efficiencies and higher running costs – i.e. more gas needs to be burnt to produce a given unit of electricity.

The latter is an inescapable consequence of the second law of thermodynamics – carbon capture always needs a separation step. Either one takes air and separates it into its component parts, taking out the pure oxygen, so that burning the gas produces a pure waste stream consisting of carbon dioxide and water. Or one takes the exhaust from burning the gas in air, and pulls the carbon dioxide out of the waste. Either way, a mixed gas has to be separated into its components – and that always takes an energy input, to pay for the reduction in entropy that separating a mixture entails.
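To put a number on that thermodynamic floor: for the second route, extracting carbon dioxide present at a mole fraction x from a large exhaust stream needs a minimum reversible work of roughly -RT ln x per mole. The sketch below is mine, not the Climate Change Committee’s; the ~4% CO2 concentration assumed for gas-turbine exhaust and the near-ambient temperature are illustrative assumptions.

import math

R = 8.314        # J/(mol K), gas constant
T = 300.0        # K, roughly ambient temperature (assumption)
x_co2 = 0.04     # mole fraction of CO2 in gas-turbine exhaust (assumption)

w_min_per_mol = -R * T * math.log(x_co2)           # J per mol of CO2, dilute limit
w_min_per_kg = w_min_per_mol / 0.044               # J per kg (CO2 molar mass 0.044 kg/mol)
w_min_kwh_per_tonne = w_min_per_kg * 1000 / 3.6e6  # 1 kWh = 3.6e6 J

print(f"Minimum separation work: {w_min_per_mol/1000:.1f} kJ per mol of CO2")
print(f"                       = {w_min_kwh_per_tonne:.0f} kWh per tonne of CO2")

Fifty-odd kilowatt-hours per tonne is the unavoidable minimum; real capture processes consume a multiple of this, and that is what shows up as the lowered efficiency and higher running costs listed above.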

The key point, then, is that no matter how much better our technology gets, power produced by a gas power station with carbon capture and storage will always be more expensive than power from unabated gas. The capital cost of the plant will be greater, and so will the revenue cost per kWh. No amount of technological progress can ever change this.

So there can only be a business case for carbon capture and storage through significant government interventions in the market, either through a subsidy, or through a carbon tax. Politically, this is an inherently unstable situation. Even after the capital cost of the carbon capture infrastructure has been written off, at any time the plant operator will be able to generate electricity more cheaply by releasing the carbon dioxide produced when the gas is burnt. Taking an international perspective, this leads to a massive free rider problem. Any country will be able to gain a competitive advantage at any time by turning the carbon capture off – so there needs to be a fully enforced international agreement to impose carbon taxes at a high enough level to make the economics work. I’m not confident that such an agreement – which would have to cover every country making a significant contribution to carbon emissions to be effective – can be relied on to hold over a timescale of many decades.

I do accept that some carbon capture and storage probably is essential, to capture emissions from cement and steel production. But carbon capture and storage from the power sector is a climate change solution for a world that does not exist any more – a world of multilateral agreements and transnational economic rationality. Any scenario that relies on carbon capture and storage is just a politically very risky way of persuading ourselves that fossil-fuelled business as usual is sustainable, and of postponing the necessary large scale implementation – and improvement through R&D – of genuine low carbon energy technologies: renewables like wind and solar, and nuclear.

A Resurgence of the Regions: rebuilding innovation capacity across the whole UK

The following is the introduction to a working paper I wrote while recovering from surgery a couple of months ago. This brings together much of what I’ve been writing over the last year or two about productivity, science and innovation policy and the need to rebalance the UK’s innovation system to increase R&D capacity outside London and the South East. It discusses how we should direct R&D efforts to support big societal goals, notably the need to decarbonise our energy supply and refocus health related research to make sure our health and social care system is humane and sustainable. The full (53 page) paper can be downloaded here.

We should rebuild the innovation systems of those parts of the country outside the prosperous South East of England. Public investments in new translational research facilities will attract private sector investment, bring together wider clusters of public and business research and development, institutions for skills development, and networks of expertise, boosting innovation and leading to productivity growth. In each region, investment should be focused on industrial sectors that build on existing strengths, while exploiting opportunities offered by new technology. New capacity should be built in areas like health and social care, and the transition to low carbon energy, where the state can use its power to create new markets to drive the innovation needed to meet its strategic goals.

This would address two of the UK’s biggest structural problems: its profound disparities in regional economic performance, and a research and development intensity – especially in the private sector and for translational research – that is low compared to competitors. By focusing on ‘catch-up’ economic growth in the less prosperous parts of the country, this plan offers the most realistic route to generating a material change in the total level of economic growth. At the same time, it should make a major contribution to reducing the political and social tensions that have become so obvious in recent years.

The global financial crisis brought about a once-in-a-lifetime discontinuity in the rate of growth of economic quantities such as GDP per capita, labour productivity and average incomes; their subsequent decade-long stagnation signals that this event was not just a blip, but a transition to a new, deeply unsatisfactory, normal. A continuation of the current policy direction will not suffice; change is needed.

Our post-crisis stagnation has more than one cause. Some sources of pre-crisis prosperity have declined, and will not – and should not – come back. North Sea oil and gas production peaked around the turn of the century. Financial services provided a motor for the economy in the run-up to the global financial crisis, but this proved unsustainable.

Beyond the unavoidable headwinds imposed by the end of North Sea oil and the financial services bubble, the wider economy has disappointed too. There has been a general collapse in total factor productivity growth – the economy is less able to create higher value products and services from the same inputs than in previous decades. This is a problem of declining innovation in its broadest sense.

There are some industry-specific issues. The pharmaceutical industry, for example, has been the UK’s leading science-led industry, and was a major driver of productivity growth before 2007; it has since been suffering from a worldwide malaise, in which lucrative new drugs seem harder and harder to find.

Yet many areas of innovation are flourishing, presenting opportunities to create new, high value products and services. It’s easy to get excited about developments in machine learning, the ‘internet of things’ and ‘Industrie 4.0’, in biotechnology, synthetic biology and nanotechnology, in new technologies for generating and storing energy.

But the productivity data shows that UK companies are not taking enough advantage of these opportunities. The UK economy is not able to harness innovation at a sufficient scale to generate the economic growth we need.

Up to now, the UK’s innovation policy has been focused on academic science. We rightly congratulate ourselves on the strength of our science base, as measured by the Nobel prizes won by UK-based scientists and the impact of their publications.

Despite these successes, the UK’s wider research and development base suffers from three faults:
• It is too small for the size of our economy, as measured by R&D intensity,
• It is particularly weak in translational research and industrial R&D,
• It is too geographically concentrated in the already prosperous parts of the country.

Science policy has been based on a model of correcting market failure, with an overwhelming emphasis on the supply side – ensuring strong basic science and a supply of skilled people. We need to move from this ‘supply side’ science policy to an innovation policy that explicitly creates demand for innovation, in order to meet society’s big strategic goals.

Historically, the main driver for state investment in innovation has been defence. Today, the largest fraction of government research and development supports healthcare – yet this is not done in a way that most effectively promotes either the health of our citizens or the productivity of our health and social care system.

Most pressingly, we need innovation to create affordable low carbon energy. Progress towards decarbonising our energy system is not happening fast enough, and innovation is needed to decrease the price of low carbon energy, increase its scale, and improve energy efficiency.

More attention needs to be paid to the wider determinants of innovation – organisation, management quality, skills, and the diffusion of innovation as much as discovery itself. We need to focus more on the formal and informal networks that drive innovation – and in particular on the geographical aspects of these networks. They work well in Cambridge – why aren’t they working in the North East or in Wales?

We do have examples of new institutions that have catalysed the rebuilding of innovation systems in economically lagging parts of the country. Translational research institutions such as Coventry’s Warwick Manufacturing Group, and Sheffield’s Advanced Manufacturing Research Centre, bring together university researchers and workers from companies large and small, help develop appropriate skills at all levels, and act as a focus for inward investment.

These translational research centres offer models for new interventions that will raise productivity levels in many sectors – not just in traditional ‘high technology’ sectors, but also in areas of the foundational economy such as social care. They will drive the innovation needed to create an affordable, humane and effective healthcare system. We must also urgently reverse decades of neglect by the UK of research into new sustainable energy systems, to hasten the overdue transition to a low carbon economy. Developing such centres, at scale, will do much to drive economic growth in all parts of the country.

Continue to read the full (53 page) paper here (PDF).

The climate crisis now comes down to raw power

Fifteen years ago it was possible to be optimistic about the world’s capacity to avert the worst effects of climate change. The transition to low carbon energy was clearly going to be challenging, and it probably wasn’t going to be fast enough. But it did seem to be going with the grain of the evolution of the world’s energy economy, in this sense: oil prices seemed to be on an upward trajectory, squeezed between the increasingly constrained supplies predicted by “peak oil” theories and the seemingly endless demand driven by fast developing countries like China and India. If fossil fuels were set on a one-way path of rising prices and tightening supply, then renewable energy would inevitably take their place – subsidies might bring forward its deployment, but the ultimate destination of a decarbonised energy system was assured.

The picture looks very different today. Oil prices collapsed in the wake of the global financial crisis, and after a short recovery have now fallen below the most pessimistic predictions of a decade ago. This is illustrated in my plot, which shows the long-term evolution of real oil prices. As Vaclav Smil has frequently stressed, long range forecasting of energy trends is a mug’s game – something also well illustrated in the plot, which includes successive decadal predictions of oil prices by the USA’s Energy Information Administration.


Successive predictions for future oil prices made by the USA’s EIA in 2000 and 2010, compared to the actual outcome up to 2016.

What underlies this fall in oil prices? On the demand side, this partly reflects slower global economic growth than expected. But the biggest factor has been a shock on the supply side – the technological revolution behind fracking and the large-scale exploitation of tight oil, which has pushed the USA ahead of Saudi Arabia as the world’s largest producer of oil. The natural gas supply situation has been transformed, too, through a combination of fracking in the USA and the development of a long-distance market in LNG from giant reservoirs in places like Qatar and Iran. Since 1997, world gas consumption has increased by 25% – but the size of proven reserves has increased by 50%. The uncomfortable truth is that we live in a world awash with cheap hydrocarbons. As things now stand, economics will not drive a transition to low carbon energy.

A transition to low carbon energy will, as things currently stand, cost money. Economist Jean Pisani-Ferry puts this very clearly in a recent article: “let’s be clear: the green transition will not be a free lunch … we’ll be putting a price on something that previously we’ve enjoyed for free”. Of course, this reflects failings of the market economy that economics already understands. If we heat our houses by burning cheap natural gas rather than installing an expensive ground-source heat pump and running that off electricity from offshore wind, we get the benefit of saving money, but we impose the costs of the climate change we contribute to on someone else entirely (perhaps the Bangladesh delta dweller whose village gets flooded). And if we are moved to pay out for the low-carbon option, the sense of satisfaction our ethical superiority over our gas-guzzling neighbours gives us might be tempered by resentment of their greater disposable income.
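To illustrate the running-cost side of that choice, here is a minimal sketch of the cost per unit of heat delivered. The retail prices and the heat pump’s coefficient of performance are illustrative assumptions of mine (roughly typical UK retail figures, not data from the article), and the much higher up-front cost of installing a ground-source heat pump is left out altogether.

# Assumed retail energy prices, pence per kWh (illustrative, not measured data)
gas_price = 4.0
electricity_price = 18.0

boiler_efficiency = 0.9   # condensing gas boiler, assumed
heat_pump_cop = 3.5       # units of heat per unit of electricity, assumed

cost_gas_heat = gas_price / boiler_efficiency
cost_heat_pump_heat = electricity_price / heat_pump_cop

print(f"Gas boiler: ~{cost_gas_heat:.1f} p per kWh of heat")
print(f"Heat pump:  ~{cost_heat_pump_heat:.1f} p per kWh of heat")

On these assumptions the heat pump’s running cost is only modestly higher, but once the capital cost of the installation is added the household’s private incentive still points to the gas boiler, which is exactly the externality problem described above.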

The problems of uncosted externalities and free riders are well known to economists. But just because the problems are understood, it doesn’t mean they have easy solutions. The economist’s favoured remedy is a carbon tax, which puts a price on the previously uncosted deleterious effects of carbon emissions on the climate, but leaves the question of how best to mitigate the emissions to the market. It’s an elegant and attractive solution, but it suffers from two big problems.

The first is that, while it’s easy to state that emitting carbon imposes costs on the rest of the world, it’s very difficult to quantify what those costs are. The effects of climate change are uncertain, and are spread far into the future. We can run a model which will give us a best estimate of what those costs might be, but how much weight should we give to tail risk – the possibility that climate change leads to less likely, but truly catastrophic outcomes? What discount rate – if any – should we use, to account for the fact that we value things now more than things in the future?
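That discount rate question is not a technicality; it dominates the answer. Here is a minimal sketch, using an arbitrary £100 of climate damage incurred 80 years from now and a handful of illustrative discount rates, none of them drawn from any particular study:

# Present value of £100 of climate damage incurred 80 years from now,
# for a range of illustrative discount rates.
damage = 100.0
years = 80

for rate in (0.00, 0.01, 0.03, 0.05):
    present_value = damage / (1 + rate) ** years
    print(f"discount rate {rate:.0%}: present value £{present_value:.2f}")

On this toy accounting the present value, and hence the implied carbon price, differs by a factor of twenty or more between a 1% and a 5% discount rate, which is why the number is so contested.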

The second is that we don’t have a world authority that can impose a single tax uniformly. Carbon emissions are a global problem, but taxes will be imposed by individual nations, and given the huge and inescapable uncertainty about what the fair level of a carbon tax would be, it’s inevitable that countries will impose carbon taxes at the low end of the range, so they don’t disadvantage their own industries and their own consumers. This will lead to big distortions of global trade, as countries attempt to combat “carbon leakage”, where goods made in countries with lower carbon taxes undercut goods which more fairly price the carbon emitted in their production.

The biggest problems, though, will be political. We’ve already seen, in the “Gilets Jaunes” protests in France, how raising fuel prices can be a spark for disruptive political protest. Populist, authoritarian movements like those led by Trump in the USA are associated with enthusiasm for fossil fuels like coal and oil and a downplaying or denial of the reality of the link between climate change and carbon emissions. To state the obvious, there are very rich and powerful entities that benefit enormously from the continued production and consumption of fossil fuels, whether those are nations, like Saudi Arabia, Australia, the USA and Russia, or companies like ExxonMobil, Rosneft and Saudi Aramco (the latter two, as state owned enterprises, blurring the line between the nations and the companies).

These entities, and those (many) individuals who benefit from them, are the enemies of climate action, and oppose, from a very powerful position of incumbency, actions that lessen our dependence on fossil fuels. How do these groups square this position with the consensus that climate change driven by carbon emissions is serious and imminent? Here, again, I think the situation has changed from 10 or 15 years ago. Then, I think many climate sceptics did genuinely believe that anthropogenic climate change was unimportant or non-existent. The science was complicated, it was plausible to find a global warming hiatus in the data, the modelling was uncertain – with the help of a little motivated reasoning and confirmation bias, a sceptical position could be reached in good faith.

I think this is much less true now, with the warming hiatus well and truly over. What I now suspect and fear is that the promoters of and apologists for continued fossil fuel burning know well that we’re heading towards a serious mid-century climate emergency, but they are confident that, from a position of power, their sort will be able to get through the emergency. With enough money, coercive power, and access to ample fossil fuel energy, they can be confident that it will be others who suffer. Bangladesh may disappear under the floodwaters, displacing millions, but rebuilding Palm Beach won’t be a problem.

We now seem to be in a world, not of peak oil, but of a continuing abundance of fossil fuels. In these circumstances, perhaps it is wrong to think that economics can solve the problem of climate change. It is now a matter of raw power.

Is there an alternative to this bleak conclusion? For many the solution is innovation. This is indeed our best hope – but it is not sufficient simply to incant the word. Nor is the recent focus in research policy on “grand challenges” and “missions” by itself enough to provide an implementation route for the major upheaval in the way our societies are organised that a transition to zero-carbon energy entails. For that, developing new technology will certainly be important, and we’ll need to understand how to make the economics of innovation work for us, but we can’t be naive about how new technologies, economics and political power are entwined.