UK Science in a post-liberal world

I took part in a panel discussion at the Royal Society last Thursday 20th November, with the topic ‘The future role of the state in a changing R&D landscape’.  Here is a slightly expanded version of my opening remarks. My intention was to be provocative; other more optimistic views are available (and were presented by the other panel members).

The UK has seen a long period of consensus about the role of the state in the R&D landscape – and that consensus has actually been very positive for academic science.  I think it’s not improbable, even quite likely, that this consensus will come to an end in the next few years.  This will have huge consequences for the R&D landscape, potentially very negative, and I think it’s really important that we start to think this through.

The consensus

There’s been a great deal of continuity in UK science policy over the last twenty years.  The 2004 10 Year Science and Innovation Investment Framework set out the foundations of an approach that has persisted through governments of different flavours, overseen by a series of influential science ministers – from Lord Sainsbury, through Lord Willetts, to Lord Vallance.

This is fundamentally a supply side science policy, with a focus on correcting market failure.  Government supports basic science, ensures a pipeline of skilled people (including through skilled migration), and supports commercialisation of university research.  There’s been some gradual change – a bumpy path to a more explicit industrial strategy since Mandelson’s return to government in 2008, to the current fully developed version.  But the basic assumptions remain the same.

The consequences have been significant real terms increases in government R&D budgets, and a system dominated by research in universities.

Auguries of a breakdown

Why might we think this consensus is at risk?  If we look internationally, to the USA – often seen as the UK’s natural partner in science, as in other areas – we see direct attacks on the autonomy of funding agencies, and on the position of leading research universities.

Continue reading “UK Science in a post-liberal world”

Good reasons and bad reasons for supporting manufacturing (and some uncertainties)

Manufacturing industries make a direct contribution to the UK economy worth about 8% of GDP; this has roughly halved since 1990. The UK has deindustrialised more than other wealthy developed nations; in Germany and Switzerland manufacturing still accounts for 18% of GDP. Amongst the fast-growing economies of East Asia, Korea has a manufacturing share of 24% of GDP and Singapore 16%. The world’s manufacturing behemoth is China, where manufacturing accounts for about 25% of its large and rapidly growing economy. The rise of China as a huge manufacturer and exporter mirrors the deindustrialisation of the USA, where the manufacturing share has fallen to 10% [1]. Rebuilding US manufacturing was an explicit policy goal of the Biden administration, while Trump promises “a new era of American manufacturing dominance”, and this is a key motivation for Trump’s tariff policy [2]. There is less consensus in the UK that manufacturing should be a larger share of the economy [3]; I think it should be, but recognise that many reasons advanced for this are wrong. This post is my attempt to separate good reasons from bad, and to highlight some areas of uncertainty.

It’s about value, not jobs

Clearly in the bad category is nostalgia for old-fashioned factory jobs – we should support manufacturing for the value it creates, not for the number of low-level jobs it provides. Overall, as shown in my first plot, manufacturing has above average productivity, though there are marked differences between subsectors [4]. Continue reading “Good reasons and bad reasons for supporting manufacturing (and some uncertainties)”

The Economics Nobel, Joel Mokyr, and the UK’s changing landscape of innovation

This year’s Nobel Prize was awarded to Joel Mokyr, Philippe Aghion and Peter Howitt, for their work on the relationship between technological innovation and economic growth.  The press release credits them with “having explained innovation-driven economic growth”, which I think overstates the case – there is much that is not yet understood about the relationship between innovation and economic growth. But the importance of their contributions is not in doubt – and it’s particularly welcome that both Mokyr and Aghion have laid out their arguments in fascinating and accessible books.  What makes the award particularly timely is that we are now in a period where economic growth has notably slowed, despite apparently continuing technological progress.  As the press release states, “perhaps most importantly, the laureates have taught us that sustained growth cannot be taken for granted”.

Continue reading “The Economics Nobel, Joel Mokyr, and the UK’s changing landscape of innovation”

What makes a manufacturing superpower?

Some reflections on Breakneck: China’s quest to engineer the future by Dan Wang.

Dan Wang’s new book on China is rightly getting great reviews. It’s a compelling read, engagingly written, reflecting both the author’s deep understanding of China’s developing economy, and his personal sympathy with the Chinese nation. It is admiring of Chinese achievements over the last couple of decades, while being entirely clear-eyed about the deficiencies of the political system and its human costs.

The big idea behind the book is to compare and contrast the two great powers of the world today – China and the USA, summarising that comparison in a neat formula. For Wang, China is the Engineering State, while the USA is the Lawyerly Society – and from that contrast, the complementary strengths and weaknesses of the two nations can be derived.

What kind of state is China? According to Wang, it is a “Leninist Technocracy with Grand Opera tendencies”.

Continue reading “What makes a manufacturing superpower?”

Another Modern Industrial Strategy

This is a slightly expanded version of an article published last week in Research Professional – The latest industrial strategy has made choices

Last week’s Industrial Strategy Policy Paper is the latest chapter in the chequered history of UK Industrial Strategies. For nearly three decades after Thatcher’s ascent to power, the UK’s strategy was not to have an industrial strategy, which was a concept associated with money-losing supersonic airliners and cars with square steering wheels. But that conventional wisdom has been challenged by a global financial crisis and nearly two decades of economic stagnation, so after a number of stops and starts over the last decade, a fully developed Industrial Strategy has now arrived. Continue reading “Another Modern Industrial Strategy”

The civic university in hard times

Universities in the UK at the moment are broke and unloved. In these circumstances, the temptation is going to be to withdraw to “core business” – teaching students, and for research intensives, doing the kind of research that pushes the institution up the international league tables, to attract the overseas students whose fees prop the whole system up. In a period of retrenchment, it might be tempting for managements to see supporting the role of universities in their communities as a dispensable luxury. I think this would be a profound mistake.

This isn’t to understate the difficulty UK universities find themselves in. Around three quarters of them are expected to be in deficit next year, and about a hundred are now actively restructuring or making staff redundant. This follows a 40% real terms erosion in fees for home students, and a business model, reliant on growing overseas student numbers, which has become both politically unpopular and exposed to geopolitical risk. The latest proposal – of a levy on international student fees – is both another financial blow, and a symbol of the way universities find themselves on the wrong side of culture war discourse. What’s quite clear is that, whatever recognition there might be in government of the university sector’s troubles, the sector is simply not a high priority for a government facing difficult issues on all sides. Continue reading “The civic university in hard times”

Moore’s Law, past and future

Moore’s Law – and the technology it describes, the integrated circuit – has been one of the defining features of the past half century. The idea of Moore’s law has been invoked in three related senses. In its original form, it was a rather precise prediction about the rate of increase of the number of transistors to be fitted on a single integrated circuit. It’s never been a law – it’s been more of an organising principle for an industry and its supply chain – and thus a self-fulfilling prophecy. In this sense, it’s been roughly true for fifty years – but is now bumping up against physical limits. Continue reading “Moore’s Law, past and future”

The economic impact of AI: three scenarios

Eighteen months ago I wrote about the potential economic effects of artificial intelligence – drawing attention to a “new Solow paradox”, in which rapid technological progress in machine learning and artificial intelligence has coexisted with continuing stagnation in productivity. What has happened since then?

Technical progress has continued, driven by massive investments, both in the companies developing the technologies themselves and in the huge server farms that are needed to provide the computing power to train and implement the models. Interestingly, there has been a significant upturn in productivity in the USA since 2023 (analysed in this recent Resolution Foundation report, PDF). Some of this is due to the USA’s increasing production of oil and gas, and some comes from the tech sector itself. But there has been significant growth in productivity in those service sectors that use, rather than develop, technology. It’s at least plausible that some of this is being driven by the adoption of AI. Perhaps we’re seeing the beginnings of a resolution of the new Solow paradox.

What’s going to happen next? To try and clarify some of the widely divergent assumptions, it’s perhaps helpful to sketch out some scenarios. My primary purpose here isn’t to try and guess the most likely future outcome – personally, I’m deeply uncertain as to what’s going to happen. Instead, I think it’s more interesting to ask what assumptions various actors are operating under, and how those assumptions themselves constrain and influence the future.

1. Intelligence explosion

In this scenario, two dynamics lead to the transformation of economy and society through AI. The first is a process of recursive self-improvement, by which the application of AI technologies to develop the AI methods themselves leads to a runaway process in the growth of the power and effectiveness of those methods. The second is an increasing application of AI to the physical world, leading to rapid technological progress in all fields. The outcome is a winner takes all economy, in which the controllers of the new technologies enjoy unprecedented political and economic power.

One of the most compelling early use cases of LLMs has been to write code, so it’s a natural extension to think that AI systems can be used to do systematic computer experiments in order to find and optimise algorithms for machine learning – it seems a fair assumption that this is already happening, contributing to the progress we’re seeing in the development of LLMs and reasoning systems.

But to make the hoped for transformational impact on the economy and society, artificial intelligence needs to have a more direct interaction with the physical world than simply through existing corpuses of text. This needs the incorporation of real time data of all kinds, together with improvements in robotics to intervene in the world. Self-driving cars are the prototype system here, but if AI is going to accelerate technological progress itself we need to move to self-driving laboratories.

Fully automated scientific discovery leads to rapid progress in medicine, but the biggest impact comes in developing the hardware infrastructure of computing itself. The planar CMOS integrated circuits that computing depends on now are replaced by new 3D assemblies of nanoscale electronic components, brought together in systems of ungraspable complexity. And so increasing computer power in turn feeds the acceleration of this intelligence explosion.

What does the economy look like in this scenario? It’s winner-take-all – the firm, organisation or nation which achieves this goal first accumulates an unprecedented degree of economic and political power. On the other hand, technological progress becomes so rapid that there’s a hope that everyone benefits.

Who believes in this scenario? My impression is that this is a relatively conservative version of the Silicon Valley consensus, summarised in the near-universal SV opinion that artificial general intelligence (a term usually left poorly defined) is imminent. If this is what one believes, then the logical course of action is to devote all possible resources to achieving this goal as soon as possible. It’s obviously a matter of self-interest to be one of the controllers of such an all-powerful technology.

But altruists can also reassure themselves that entirely focusing on this goal is the most effective way of solving any global problem. The corollary is that normal approaches to scientific and technological progress will soon become obsolete, so the existing scientific enterprise becomes less and less relevant and doesn’t need to be sustained.

2. Excel in prose

In this scenario, the development of large language models has essentially solved the problem of automating verbal reasoning, in the same way that spreadsheets automated arithmetic and bookkeeping. As happened with spreadsheets before them, this leads to the quiet transformation of most business processes. Productivity growth recovers, perhaps even doubling, to return to levels seen in the 1990s. Beyond LLMs, the application of machine learning and artificial intelligence to the physical world continues to make incremental progress, though technological progress in the physical world remains markedly slower than in the digital world.

The first killer application for LLMs was machine translation, now a solved problem. Many of the problems of information flow in big organisations are susceptible to automation by LLMs – producing meeting notes, summarising documents, generating routine communications. As in previous technologies, the key factor limiting the speed of uptake is the need to adapt existing processes to create places where LLMs can contribute, but the number of compelling use cases steadily expands.

In software engineering, LLM assistants dramatically speed up the process of writing code, leaving more time for higher order tasks such as designing system architectures. The effectiveness and efficiency of LLMs themselves is significantly improved; a focus emerges on fine-tuning LLMs on custom datasets to improve their reliability and accuracy. More generally, LLMs enable natural language interfaces to computer systems of all kinds, potentially reducing the barriers to their widespread adoption.

On the other hand, early enthusiasm for artificial intelligence as a transformational technology in areas such as biotechnology and healthcare leads to some disappointment, despite early successes like AlphaFold. It turns out that the limiting factors here are fundamental shortcomings in our understanding of how biology works, and while machine learning and laboratory automation provides useful new tools, LLMs, as a technology fundamentally based on manipulating language, provide no dramatic shortcuts to developing scientific understanding of the natural world.

What are the economic implications of this scenario? One should expect significant productivity gains, but with a delay as business processes have to be adapted to make the most of the new technology. As with any new technologies, many of the early movers go bankrupt, but the firms that survive and make the most money from the technology are those that successfully integrate the technology into wider suites of business oriented software.

Who believes in this scenario? I think this is close to a consensus view of those economists and policy makers not directly involved in the AI business. For example, a recent US National Academies Consensus Report Artificial Intelligence and the Future of Work, from a blue-ribbon committee of economists and computer scientists, co-chaired by Erik Brynjolfsson and Tom Mitchell, identifies AI as a general purpose technology with significant potential to drive productivity improvements. However, it observes that “achieving the full benefits of AI will likely require complementary investments in new skills and new organizational processes and structures.”

In this scenario, the most effective focus will probably be on technology diffusion and skills development. Technological advances in other areas will depend on continued research and development spending, with the existing scientific enterprise benefitting incrementally from machine learning techniques.

3. Crash and burn

History has featured a number of financial bubbles, in which asset prices rise beyond any seeming connection to their underlying value. Many of these are related to technological advances, in which the advent of new technologies encourages an irrational exuberance based on overoptimism, both on the speed with which the new technologies will have an impact, and on the ultimate scale of that impact. The classic recent example is the dot-com bubble of the late nineties, while one can go back in history to episodes such as the Railway Mania of the 1840s in the UK.

In this scenario the current enthusiasm for AI is revealed as one such bubble – in scale, one of the biggest in history. The bursting of that bubble exposes a scale of over-investment so large as to risk the stability of the whole financial system, while the technology itself disappoints. Ultimately, rationality returns, and the technology finds useful applications. As in the case of the dot-com bubble, the capital infrastructure installed may ultimately yield useful returns, though not to the original investors.

The initial danger signals are continuing technical difficulties limiting the reliability of large language models, and fundamental issues in the business models underpinning the very large investments being made in the computing infrastructure to support LLMs. Larger models, trained on more data, still suffer from “hallucinations” – factual statements stated with great confidence and plausibility that turn out to be incorrect. The perception that LLMs are demonstrating any kind of real intelligence turns out to be largely a case of anthropomorphism. In fact, what LLMs tell us is not how intelligent and original computers have become, but how unoriginal and derivative most human interactions are. Rather than being “stochastic parrots”, LLMs have turned out to be automated “catechisms of cliché”.

Meanwhile, for those use cases that do turn out to have some value, a panoply of rival models – many open source – destroys the pricing power of the tech giants. It turns out that there is no “moat”, no way of achieving and protecting the monopoly position that Silicon Valley tech firms aspire to. Lacking the financial returns that would justify the huge cost of building out the infrastructure for artificial intelligence, those investments are written off, and the valuations of the tech companies collapse – especially those of the “Magnificent Seven” tech giants, which saw such huge increases while the enthusiasm for AI persisted.

To get a sense of the scale of the bubble, the market capitalisation of the Magnificent Seven increased by about $7 trillion between January 2023 and May 2025. Microsoft, Alphabet, Amazon and Meta have reported combined capital expenditure on AI of $246bn in 2024, up from $151bn in 2023. OpenAI’s “Stargate” project to develop AI computing infrastructure aims to raise $100bn now, with a total target investment of $500bn. Substantial additional investments will have come from private equity and venture capital.

The overvaluation of the Magnificent Seven, together with the excess capital investments in the private market, can be thought of as a “bezzle”, in the sense discussed by Michael Pettis here. While the bubble persists, the owners of those overpriced assets are in possession of apparent wealth that doesn’t reflect the real productive capacity of the economy, but does lead to increases in GDP through a number of channels. But the bezzle always reverses, in turn depressing GDP, as the loss of apparent wealth is distributed across the economy. The outcomes include the failure of financial institutions (sometimes subsequently bailed out at the expense of the taxpayer), people across the world losing their savings, and a freezing of investments in other areas of technology as the venture capital industry retrenches.

As the situation stabilises, useful, but not transformative, applications are found for large language models, and the massive installed infrastructure of high performance computing finds new applications in science and engineering.

This is definitely a contrarian scenario – but contrarians are not always wrong. They will be bracing themselves for financial turbulence.

Last words

These are scenarios, not predictions, and it is possible to imagine many other different possibilities. But given the constant tendency to treat technological progress as unfolding along a single fixed track, it’s important to hold on to the fact that the future is open, especially when the range of plausible outcomes seems so large.

The end of wage growth in the UK

I’ve been writing about the UK’s slowdown in productivity growth for about a decade, as I discussed here. I think it’s fair to say that this issue is well-understood amongst economists and some policy people, but productivity is an abstract concept. So, it’s perhaps unsurprising that, even now, the seriousness of our economic situation isn’t fully understood by commentators and journalists, let alone the wider public.

But there’s one way in which our productivity slowdown has very visible everyday consequences – and that’s in the end of wage growth. As my plot shows, wages have flatlined in the UK over the last 15 years. This long period of stagnation is unprecedented in living memory, and marks a decisive and unwelcome break from the UK’s postwar economic trajectory.

Average real weekly UK wages. Green: Composite Average Weekly Earnings series, corrected for inflation using consumer prices index. Thomas, R and Dimsdale, N (2017) “A Millennium of UK Data”, Bank of England OBRA dataset. Brown: ONS, Real Average Weekly Earnings, total pay, using CPI (seasonally adjusted). 18/2/2025 release.

The period from the end of the Second World War right up to the mid 2000s shows a remarkably consistent record of wage growth. There are moments of economic turbulence, reflected in deviations from the trend of continuous 2.8% a year growth: a short-lived period of more rapid growth in the late 60s and early 70s – the Barber boom – with the excess growth unwinding in the mid-1970s crisis; and again, more rapid growth in the late 1980s Lawson boom, with the excess gains lost in weaker wage growth in the subsequent recession.

But nothing compares to the stagnation that we’ve seen since the global financial crisis. By the economic measure that arguably matters most to people at large – how their wages grow – the last decade and a half is by far the worst period since the war. In comparison, the economic turbulence of the 1970s looks like a golden age.
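The scale of the break is easy to quantify with a line of arithmetic. Here is a minimal back-of-envelope sketch, assuming the 2.8% a year trend rate quoted above and treating post-2008 real wages as flat – both simplifications of the actual series:

```python
# Counterfactual: had the pre-crisis trend of 2.8% a year real wage growth
# continued through 15 years of stagnation, how much higher would wages be?
# (Illustrative only; 2.8% is the trend rate quoted in the text, and "flat"
# is an approximation of the actual post-2008 series.)
trend_rate = 0.028   # pre-2008 trend growth in real wages, per year
years = 15           # approximate length of the stagnation

counterfactual = (1 + trend_rate) ** years
print(f"On trend, wages would be {counterfactual:.2f}x their 2008 level,")
print(f"i.e. roughly {counterfactual - 1:.0%} above actual (flat) wages.")
```

Compounding is what makes a seemingly modest 2.8% a year add up to a gap of around half of current wages over a decade and a half.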

UK labour productivity, index 2022=100. Data: ONS, 15/11/2024 release. Line: non-linear least squares fit to two exponential functions, continuous at the break point, which occurs at 2005 for the best fit. See When did the UK’s productivity slowdown begin? for more details of the fitting approach.
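The fitting approach described in the caption can be sketched in a few lines. This is an illustrative reconstruction, not the author's actual code: it builds a synthetic productivity-like index with a growth break, then fits a two-regime exponential, continuous at the break point, with `scipy.optimize.curve_fit`. The growth rates and break year used for the synthetic data are assumptions for illustration; real data would come from the ONS series.

```python
# Sketch: fit an index to two exponential growth regimes,
# continuous at an (unknown) break point.
import numpy as np
from scipy.optimize import curve_fit

def two_exponentials(t, level, g1, g2, t_break):
    """Growth at rate g1 before t_break and g2 after; the two branches
    share the value `level` at t = t_break, so the fit is continuous."""
    return np.where(
        t < t_break,
        level * np.exp(g1 * (t - t_break)),
        level * np.exp(g2 * (t - t_break)),
    )

# Synthetic "productivity index": 2.3% growth to 2005, 0.4% after, plus noise
rng = np.random.default_rng(0)
t = np.arange(1990, 2025, dtype=float)
truth = two_exponentials(t, 100.0, 0.023, 0.004, 2005.0)
y = truth * (1 + 0.005 * rng.standard_normal(t.size))

# Initial guesses matter: the break point makes the fit surface lumpy
p0 = [100.0, 0.02, 0.01, 2000.0]
params, _ = curve_fit(two_exponentials, t, y, p0=p0)
level, g1, g2, t_break = params
print(f"pre-break growth {g1:.1%}/yr, post-break {g2:.1%}/yr, break ~{t_break:.0f}")
```

With real, noisier data, a grid search over candidate break years (fitting the two growth rates at each) is a more robust alternative to letting the optimiser choose the break point directly.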

The end of wage growth in the UK is a direct consequence of the end of productivity growth. It’s worth making a couple of points about the link between productivity growth and wage growth. In the USA, that link is weaker than it was. But the UK is not the USA; while in the USA the labour share of GDP – the share of overall economic activity that goes to wages, rather than rewarding the owners of capital – has significantly fallen, this is not so in the UK. For whatever reason, in the UK, over the last decade, the labour share of GDP has actually increased.

Of course, my plot of wage growth presents a single average, and it’s a fair question to ask how the distribution of wages has changed with time – has this become more unequal, with more of the benefits of productivity growth going to higher earners? It turns out that, while there was a substantial increase in inequality in the 1980s, overall measures of income inequality have been relatively steady since then.

The wage growth plot explains so much about the state of UK politics today. Few people have an intuitive feel in the abstract for what productivity growth – or its absence – means, but the sense of stalling living standards, and worse prospects for young people, is all too palpable.

The world of business R&D (and the UK’s place in that world)

Most research and development (R&D) in the world is done not in universities or research institutes, but by businesses – big businesses can do more R&D than medium size countries. A useful snapshot of this world is provided by the 2024 EU Industrial R&D Investment Scoreboard, which came out in December. The scoreboard lists the top 2000 companies in the world by their annual R&D expenditure, classifying them by sector and nationality of headquarters. In total, this amounts to an R&D spend of €1257.7 billion (converted at market rates), which the authors believe accounts for 85% to 90% of worldwide R&D funded by the business enterprise sector.

Unsurprisingly, the top companies are US tech firms – Alphabet, Meta, Apple and Microsoft – which between them spend €127 billion. Number 5 is the German auto firm Volkswagen, while Asia provides numbers 6 and 7 in the shape of China’s Huawei and Korea’s Samsung.

Taking the world as a whole, the top sectors are Software, accounting for 19% of the total, Pharma at 18%, and Automobiles at 15%; Tech hardware accounts for 16% and Electronic & Electrical hardware for another 7%. These last two categories do have some overlap – the former includes Apple, Huawei, Intel, Qualcomm, Nvidia, Cisco and TSMC, while the latter includes Samsung, Siemens and Hon Hai (aka Foxconn).

How does the UK do? The share of world business R&D done by UK domiciled firms is 2.8%, and there are just two UK companies in the top 100 – the pharmaceutical companies AstraZeneca and GSK.

Of course, where a company is domiciled and where it does its R&D aren’t necessarily the same. Roughly half of UK business R&D is done by overseas owned companies – for example, the significant R&D carried out in the UK by the auto company Jaguar LandRover is ascribed in these statistics to its Indian parent, Tata Motors. This is a very high fraction of R&D done by overseas firms, by comparison with other countries of a similar size. The positive interpretation of this is that it is a testament to the attractiveness of the UK as a place to do R&D. But control matters, and this exposes the UK to the risk that this R&D may be more footloose than R&D done by domestically owned firms.

We can get a sense of the sectors that the UK focuses on by comparing the UK’s sectoral shares of R&D with the corresponding global shares.

Pharmaceuticals is a clear leader for the UK – it accounts for 49% of the UK owned business R&D, which amounts to 7.5% of the world total. There is an interesting aspect to this, however – it is completely dominated by the two giants, AstraZeneca and GSK. This is in contrast to the USA, where there is a significant tier of relatively recently founded companies that have emerged from the biotech revolution – such as Gilead, Amgen, Moderna, Regeneron and Vertex, all with € multibillion R&D spend. UK pharma scale-ups – like Bicycle Therapeutics and Immunocore – are still an order of magnitude smaller.

The other area of specialism for the UK is Banking, which accounts for 17% of the UK’s R&D, and represents 41% of world R&D in this sector. Of course, there may be issues of what is classified as R&D in different companies.

Where UK firms are largely absent is in Software, Tech hardware and Electronic & Electrical hardware. Between them, these sectors dominate global business R&D, accounting for 42% of all business R&D. But the UK accounts for just 0.6% of world Software R&D, 0.45% in Electronic & Electrical hardware, and a tiny 0.046% of world R&D in Tech hardware. Once again, this doesn’t take into account R&D carried out in the UK by overseas firms – for example, DeepMind’s work will be ascribed to its US owner, Alphabet. But it does suggest that the UK has largely missed out on innovation in the fastest moving areas of new technology in its domestically owned firms.

Finally, one might ask how effective markets are at allocating resources to the areas where the need for innovation is greatest. Given the urgency of climate change, and the need for innovation to drive down the costs of low carbon energy, it’s depressing to see that business R&D in the Alternative Energy sector accounts for just 0.23% of the world total, with Oil and Gas still accounting for 1.05%.