A Resurgence of the Regions: rebuilding innovation capacity across the whole UK

The following is the introduction to a working paper I wrote while recovering from surgery a couple of months ago. This brings together much of what I’ve been writing over the last year or two about productivity, science and innovation policy and the need to rebalance the UK’s innovation system to increase R&D capacity outside London and the South East. It discusses how we should direct R&D efforts to support big societal goals, notably the need to decarbonise our energy supply and refocus health related research to make sure our health and social care system is humane and sustainable. The full (53 page) paper can be downloaded here.

We should rebuild the innovation systems of those parts of the country outside the prosperous South East of England. Public investments in new translational research facilities will attract private sector investment, bring together wider clusters of public and business research and development, institutions for skills development, and networks of expertise, boosting innovation and leading to productivity growth. In each region, investment should be focused on industrial sectors that build on existing strengths, while exploiting opportunities offered by new technology. New capacity should be built in areas like health and social care, and the transition to low carbon energy, where the state can use its power to create new markets to drive the innovation needed to meet its strategic goals.

This would address two of the UK’s biggest structural problems: its profound disparities in regional economic performance, and a research and development intensity – especially in the private sector and for translational research – that is low compared to competitors. By focusing on ‘catch-up’ economic growth in the less prosperous parts of the country, this plan offers the most realistic route to generating a material change in the total level of economic growth. At the same time, it should make a major contribution to reducing the political and social tensions that have become so obvious in recent years.

The global financial crisis brought about a once-in-a-lifetime discontinuity in the rate of growth of economic quantities such as GDP per capita, labour productivity and average incomes; their subsequent decade-long stagnation signals that this event was not just a blip, but a transition to a new, deeply unsatisfactory, normal. A continuation of the current policy direction will not suffice; change is needed.

Our post-crisis stagnation has more than one cause. Some sources of pre-crisis prosperity have declined, and will not – and should not – come back. North Sea oil and gas production peaked around the turn of the century. Financial services provided a motor for the economy in the run-up to the global financial crisis, but this proved unsustainable.

Beyond the unavoidable headwinds imposed by the end of North Sea oil and the financial services bubble, the wider economy has disappointed too. There has been a general collapse in total factor productivity growth – the economy is less able to create higher value products and services from the same inputs than in previous decades. This is a problem of declining innovation in its broadest sense.

There are some industry-specific issues. The pharmaceutical industry, for example, has been the UK’s leading science-led industry, and was a major driver of productivity growth before 2007; it has since been suffering from a world-wide malaise, in which lucrative new drugs seem harder and harder to find.

Yet many areas of innovation are flourishing, presenting opportunities to create new, high value products and services. It’s easy to get excited about developments in machine learning, the ‘internet of things’ and ‘Industrie 4.0’, in biotechnology, synthetic biology and nanotechnology, in new technologies for generating and storing energy.

But the productivity data shows that UK companies are not taking enough advantage of these opportunities. The UK economy is not able to harness innovation at a sufficient scale to generate the economic growth we need.

Up to now, the UK’s innovation policy has been focused on academic science. We rightly congratulate ourselves on the strength of our science base, as measured by the Nobel prizes won by UK-based scientists and the impact of their publications.

Despite these successes, the UK’s wider research and development base suffers from three faults:
• It is too small for the size of our economy, as measured by R&D intensity,
• It is particularly weak in translational research and industrial R&D,
• It is too geographically concentrated in the already prosperous parts of the country.

Science policy has been based on a model of correcting market failure, with an overwhelming emphasis on the supply side – ensuring strong basic science and a supply of skilled people. We need to move from this ‘supply side’ science policy to an innovation policy that explicitly creates demand for innovation, in order to meet society’s big strategic goals.

Historically, the main driver for state investment in innovation has been defence. Today, the largest fraction of government research and development supports healthcare – yet this is not done in a way that most effectively promotes either the health of our citizens or the productivity of our health and social care system.

Most pressingly, we need innovation to create affordable low carbon energy. Progress towards decarbonising our energy system is not happening fast enough; innovation is needed to cut the price of low carbon energy, to increase its scale, and to improve energy efficiency.

More attention needs to be paid to the wider determinants of innovation – organisation, management quality, skills, and the diffusion of innovation as much as discovery itself. We need to focus more on the formal and informal networks that drive innovation – and in particular on the geographical aspects of these networks. They work well in Cambridge – why aren’t they working in the North East or in Wales?

We do have examples of new institutions that have catalysed the rebuilding of innovation systems in economically lagging parts of the country. Translational research institutions such as Coventry’s Warwick Manufacturing Group, and Sheffield’s Advanced Manufacturing Research Centre, bring together university researchers and workers from companies large and small, help develop appropriate skills at all levels, and act as a focus for inward investment.

These translational research centres offer models for new interventions that will raise productivity levels in many sectors – not just in traditional ‘high technology’ sectors, but also in areas of the foundational economy such as social care. They will drive the innovation needed to create an affordable, humane and effective healthcare system. We must also urgently reverse decades of neglect by the UK of research into new sustainable energy systems, to hasten the overdue transition to a low carbon economy. Developing such centres, at scale, will do much to drive economic growth in all parts of the country.

Continue to read the full (53 page) paper here (PDF).

The climate crisis now comes down to raw power

Fifteen years ago it was possible to be optimistic about the world’s capacity to avert the worst effects of climate change. The transition to low carbon energy was clearly going to be challenging and it probably wasn’t going to be fast enough. But it did seem to be going with the grain of the evolution of the world’s energy economy, in this sense: oil prices seemed to be on an upward trajectory, squeezed between the increasingly constrained supplies predicted by “peak oil” theories, and the seemingly endless demand driven by fast developing countries like China and India. If fossil fuel prices were on a one-way upward trajectory, then renewable energy would inevitably take their place – subsidies might bring forward its deployment, but the ultimate destination of a decarbonised energy system was assured.

The picture looks very different today. Oil prices collapsed in the wake of the global financial crisis, and after a short recovery have now fallen below the most pessimistic predictions of a decade ago. As Vaclav Smil has frequently stressed, long-range forecasting of energy trends is a mug’s game; this is well illustrated in my plot, which shows the long-term evolution of real oil prices against successive decadal predictions by the USA’s Energy Information Administration.


Successive predictions for future oil prices made by the USA’s EIA in 2000 and 2010, compared to the actual outcome up to 2016.

What underlies this fall in oil prices? On the demand side, this partly reflects slower global economic growth than expected. But the biggest factor has been a shock on the supply side – the technological revolution behind fracking and the large-scale exploitation of tight oil, which has pushed the USA ahead of Saudi Arabia as the world’s largest producer of oil. The natural gas supply situation has been transformed, too, through a combination of fracking in the USA and the development of a long-distance market in LNG from giant reservoirs in places like Qatar and Iran. Since 1997, world gas consumption has increased by 25% – but the size of proven reserves has increased by 50%. The uncomfortable truth is that we live in a world awash with cheap hydrocarbons. As things now stand, economics will not drive a transition to low carbon energy.

A transition to low carbon energy will, as things currently stand, cost money. Economist Jean Pisani-Ferry puts this very clearly in a recent article: “let’s be clear: the green transition will not be a free lunch … we’ll be putting a price on something that previously we’ve enjoyed for free”. Of course, this reflects failings of the market economy that economics already understands. If we heat our houses by burning cheap natural gas rather than installing an expensive ground-source heat pump and running that off electricity from offshore wind, we get the benefit of saving money, but we impose the costs of the climate change we contribute to on someone else entirely (perhaps the Bangladesh delta dweller whose village gets flooded). And if we are moved to pay out for the low-carbon option, the sense of satisfaction our ethical superiority over our gas-guzzling neighbours gives us might be tempered by resentment of their greater disposable income.

The problems of uncosted externalities and free riders are well known to economists. But just because the problems are understood, it doesn’t mean they have easy solutions. The economist’s favoured remedy is a carbon tax, which puts a price on the previously uncosted deleterious effects of carbon emissions on the climate, but leaves the question of how best to mitigate the emissions to the market. It’s an elegant and attractive solution, but it suffers from two big problems.

The first is that, while it’s easy to state that emitting carbon imposes costs on the rest of the world, it’s very difficult to quantify what those costs are. The effects of climate change are uncertain, and are spread far into the future. We can run a model which will give us a best estimate of what those costs might be, but how much weight should we give to tail risk – the possibility that climate change leads to less likely, but truly catastrophic outcomes? What discount rate – if any – should we use, to account for the fact that we value things now more than things in the future?
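To make the discounting point concrete, here is a minimal sketch; the damage figure, the horizon and the rates are purely illustrative assumptions, not estimates of anything:

```python
# Present value today of a climate damage incurred 80 years from now,
# under different discount rates. All inputs are illustrative assumptions.
damage = 1.0e12   # assumed future damage, in today's pounds
years = 80

for rate in [0.0, 0.01, 0.03, 0.05]:
    present_value = damage / (1 + rate) ** years
    print(f"discount rate {rate:.0%}: present value £{present_value:,.0f}")
```

Moving from a 1% to a 5% rate shrinks the present value by a factor of more than twenty, which is why the choice of discount rate dominates arguments about the social cost of carbon.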

The second is that we don’t have a world authority that can impose a single tax uniformly. Carbon emissions are a global problem, but taxes will be imposed by individual nations, and given the huge and inescapable uncertainty about what the fair level of a carbon tax would be, it’s inevitable that countries will impose carbon taxes at the low end of the range, so as not to disadvantage their own industries and their own consumers. This will lead to big distortions of global trade, as countries attempt to combat “carbon leakage”, where goods made in countries with lower carbon taxes undercut goods which more fairly price the carbon emitted in their production.

The biggest problems, though, will be political. We’ve already seen, in the “gilets jaunes” protests in France, how raising fuel prices can be a spark for disruptive political protest. Populist, authoritarian movements like those led by Trump in the USA are associated with enthusiasm for fossil fuels like coal and oil and a downplaying or denial of the reality of the link between climate change and carbon emissions. To state the obvious, there are very rich and powerful entities that benefit enormously from the continued production and consumption of fossil fuels, whether those are nations, like Saudi Arabia, Australia, the USA and Russia, or companies like ExxonMobil, Rosneft and Saudi Aramco (the latter two, as state owned enterprises, blurring the line between the nations and the companies).

These entities, and those (many) individuals who benefit from them, are the enemies of climate action, and oppose, from a very powerful position of incumbency, actions that lessen our dependence on fossil fuels. How do these groups square this position with the consensus that climate change driven by carbon emissions is serious and imminent? Here, again, I think the situation has changed over the last 10 or 15 years. Then, I think many climate sceptics did genuinely believe that anthropogenic climate change was unimportant or non-existent. The science was complicated, it was plausible to find a global warming hiatus in the data, the modelling was uncertain – with the help of a little motivated reasoning and confirmation bias, a sceptical position could be reached in good faith.

I think this is much less true now, with the warming hiatus well and truly over. What I now suspect and fear is that the promoters of and apologists for continued fossil fuel burning know well that we’re heading towards a serious mid-century climate emergency, but they are confident that, from a position of power, their sort will be able to get through the emergency. With enough money, coercive power, and access to ample fossil fuel energy, they can be confident that it will be others that suffer. Bangladesh may disappear under the floodwaters and displace millions, but rebuilding Palm Beach won’t be a problem.

We now seem to be in a world, not of peak oil, but of a continuing abundance of fossil fuels. In these circumstances, perhaps it is wrong to think that economics can solve the problem of climate change. It is now a matter of raw power.

Is there an alternative to this bleak conclusion? For many the solution is innovation. This is indeed our best hope – but it is not sufficient simply to incant the word. Nor is the recent focus in research policy on “grand challenges” and “missions” by itself enough to provide an implementation route for the major upheaval in the way our societies are organised that a transition to zero-carbon energy entails. For that, developing new technology will certainly be important, and we’ll need to understand how to make the economics of innovation work for us, but we can’t be naive about how new technologies, economics and political power are entwined.

What drives productivity growth in the UK economy?

How do you get economic growth? Economists have a simple answer – you can put in more labour, by having more people working for longer hours, or you can put in more capital, building more factories or buying more machines, or – and here things get a little more sketchy – you can find ways of innovating, of getting more outputs out of the same inputs. In the framework economists have developed for thinking about economic growth, the latter is called “total factor productivity”, and it is loosely equated with technological progress, taking this in its broadest sense. In the long run it is technological progress that drives improved living standards. Although we may not have a great theoretical handle on where total factor productivity comes from, its empirical study should tell us something important about the sources of our productivity growth. Or, in our current position of stagnation, why productivity growth has slowed down so much.
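To make the accounting concrete, here is a minimal sketch of how total factor productivity growth is extracted as the “Solow residual”; the growth figures are invented for illustration, and the 0.3 capital share is a conventional assumption:

```python
# Growth accounting: TFP growth is the residual left after subtracting
# the contributions of capital and labour from output growth:
#   g_tfp = g_output - alpha * g_capital - (1 - alpha) * g_labour
alpha = 0.3        # capital share of income (conventional assumption)
g_output = 0.025   # 2.5% output growth (illustrative)
g_capital = 0.03   # 3% capital input growth (illustrative)
g_labour = 0.01    # 1% labour input growth (illustrative)

g_tfp = g_output - alpha * g_capital - (1 - alpha) * g_labour
print(f"TFP growth (Solow residual): {g_tfp:.2%}")  # -> 0.90%
```

Because it is measured as a residual, anything mismeasured elsewhere – hours, capital quality, inflation – ends up in the TFP number, which is worth bearing in mind for what follows.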

Of course, the economy is not a uniform thing – some parts of it may be showing very fast technological progress, like the IT industry, while other parts – running restaurants, for example – might show very little real change over the decades. These differences emerge from the sector based statistics that have been collected and analysed for the EU countries by the EU KLEMS Growth and Productivity Accounts database.

Sector percentage of 2015 economy by GVA contribution versus aggregate total factor productivity growth from 1998 to 2015. Data from EU KLEMS Growth and Productivity Accounts database.

Here’s a very simple visualisation of some key results of that data set for the UK. For each sector, the relative importance of the sector to the economy as a whole is plotted on the x-axis, expressed as a percentage of the gross value added of the whole economy. On the y-axis is plotted the total change in total factor productivity over the whole 17 year period covered by the data. This, then, is the factor by which that sector has produced more output than would be expected on the basis of additional labour and capital. This may tell us something about the relative effectiveness of technological progress in driving productivity growth in each of these sectors.
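For anyone who wants to reproduce this kind of visualisation, a minimal sketch follows; the file name and column names are my assumptions about a tidied extract, not the EU KLEMS database’s actual schema:

```python
import pandas as pd
import matplotlib.pyplot as plt

# Assumed layout: one row per sector per year, with columns
# "sector", "year", "gva" and "tfp_index" (names are hypothetical).
df = pd.read_csv("uk_klems.csv")

gva_2015 = df[df.year == 2015].set_index("sector")["gva"]
share = 100 * gva_2015 / gva_2015.sum()  # x-axis: % of whole-economy GVA

tfp = df.pivot(index="year", columns="sector", values="tfp_index")
tfp_change = 100 * (tfp.loc[2015] / tfp.loc[1998] - 1)  # y-axis: % TFP change

plt.scatter(share, tfp_change[share.index])
plt.xlabel("Share of 2015 economy by GVA (%)")
plt.ylabel("Total factor productivity change, 1998–2015 (%)")
plt.show()
```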

Broadly, one can read this graph as follows: the further right a sector is, the more important it is as a proportion of the whole economy, while the nearer the top a sector is, the more dynamic its performance has been over the 17 years covered by the data. Before a more detailed discussion, we should bear in mind some caveats. What goes into these numbers are the same ingredients as go into the measurement of GDP as a whole, so all the shortcomings of that statistic are potentially issues here.

A great starting point for understanding these issues is Diane Coyle’s book GDP: A Brief but Affectionate History. The first set of issues concern what GDP measures and what it doesn’t measure. Lots of kinds of activity are important for the economy, but they only tend to count in GDP if money changes hands. New technology can shift these balances – if supermarkets replace humans at the checkouts by machines, the groceries still have to be scanned, but now the customer is doing the work for nothing.

Then there are some quite technical issues about how the measurements are done. These include properly accounting for improvements in quality where technology is advancing very quickly; failing to fully account for the increased information transferred through a typical internet connection will mean that overall inflation is overestimated, and productivity gains in the ICT sector understated (see e.g. A Comparison of Approaches to Deflating Telecoms Services Output, PDF). For some of the more abstract transactions in the modern economy – particularly in the banking and financial services sector – some big assumptions have to be made about where and how much value is added. For example, the method used to estimate the contribution of financial services – FISIM, for “Financial intermediation services indirectly measured” – has probably materially overstated the contribution of financial services to GDP by not handling risk correctly, as argued in this recent ONS article.
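The deflator point is easy to see with a toy calculation; the numbers below are invented purely to make the arithmetic clear:

```python
# If a sector's nominal output grows 5% while quality-adjusted prices
# actually fell 10%, real growth is large; a deflator that misses the
# quality improvement (say, recording 0% inflation) hides most of it.
nominal_growth = 0.05
true_inflation = -0.10      # quality-adjusted prices actually fell
measured_inflation = 0.00   # deflator that misses the quality gain

true_real_growth = (1 + nominal_growth) / (1 + true_inflation) - 1
measured_real_growth = (1 + nominal_growth) / (1 + measured_inflation) - 1
print(f"true real growth: {true_real_growth:.1%}")          # ~16.7%
print(f"measured real growth: {measured_real_growth:.1%}")  # 5.0%
```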

Finally, there’s the big question of whether increases in GDP correspond to increases in welfare. The general answer to this question is, obviously, not necessarily. Unlike some commentators, I don’t take this to mean that we shouldn’t take any notice of GDP – it is an important indicator of the health of an economy and its potential to supply people’s needs. But it does need looking at critically. A glazing company that spent its nights breaking shop windows and its days mending them would be increasing GDP, but not doing much for welfare – this is a ridiculous example, but there’s a continuum running from what economist William Baumol called unproductive entrepreneurship, through the more extractive varieties of capitalism documented by Acemoglu and Robinson, to outright organised crime.

To return to our plot, we might focus first on three dynamic sectors – information and communications, manufacturing, and professional, scientific, technical and admin services. Between them, these sectors account for a bit more than a quarter of the economy, and have shown significant improvements in total factor productivity over the period. In this sense it’s been ICT, manufacturing and knowledge-based services that have driven the UK economy over this period.

Next we have a massive sector that is important, but not dynamic, having shown slightly negative total factor productivity growth over the period. This comprises community, personal and social services – notably including education, health and social care. Of course, in service activities like health and social care it’s very easy to mischaracterise as a lowering of productivity a change that actually corresponds to an increase in welfare. On the other hand, I’ve argued elsewhere that we’ve not devoted enough attention to the kinds of technological innovation in the health and social care sectors that could deliver genuine productivity increases.

Real estate comprises a sector that is both significant in size, and has shown significant apparent increases in total factor productivity. This is a point at which I think one should question the nature of the value added. A real estate business makes money by taking a commission on property transactions; hence an increase in property prices, given constant transaction volume, leads to an apparent increase in productivity. Yet I’m not convinced that a continuous increase in property prices represents the economy generating real value for people.

Finance and insurance represents a significant part of the economy – 7% – but its overall long term increase in total factor productivity is unimpressive, and probably overstated. The importance of this sector in thinking about the UK economy represents a distortion of our political economy.

The big outlier at the bottom left of the plot is mining and quarrying, whose total factor productivity has dropped by 50% – what isn’t shown is that its share of the economy has substantially fallen over the period too. The biggest contributor to this sector is North Sea oil, whose production peaked around 2000 and which has since been rapidly falling. The drop in total factor productivity does not, of course, mean that technological progress has gone backwards in this sector. Quite the opposite – as the easy oil fields are exhausted, more resource – and better technology – are required to extract what remains. This should remind us of one massive weakness in GDP as a sole measure of economic progress – it doesn’t take account of the balance sheet, of the non-renewable natural resources we use to create that GDP. North Sea oil has largely gone now, and this represents an ongoing headwind to the UK economy that will need more innovation in other sectors to overcome.

This approach is limited by the way the economy needs to be divided up into sectors. Of course, this sectoral breakdown is very coarse – within each sector there are likely to be outliers with very high total productivity growth which dramatically pull up the average of the whole sector. More fundamentally, it’s not obvious that the complex, networked nature of the modern economy is well captured by these rather rigid barriers. Many of the most successful manufacturing enterprises add big value to their products with the services that come attached to them, for example.

We can look into the EU KLEMS data at a slightly finer grained level; the next plot shows importance and dynamism for the various subsectors of manufacturing. This shows well the wide dispersions within the overall sectors – and of course within each of these subsectors there will be yet more dispersion.

Sub-sector fraction of 2015 economy by GVA contribution versus aggregate total factor productivity growth from 1998 to 2015 for subsectors of manufacturing. Data from EU KLEMS Growth and Productivity Accounts database.

The results are perhaps unsurprising – areas traditionally considered part of high value manufacturing – transport equipment and chemicals, which include aerospace, automotive, pharmaceuticals and speciality chemicals – are found in the top right quadrant, important in terms of their share of the economy, dynamic in terms of high total factor productivity growth. The good total factor productivity performance of textiles is perhaps more surprising, for an area often written off as part of our industrial heritage. It would be interesting to look in more detail at what’s going on here, but I suspect that a big part of it could be the value that can be added by intangibles like branding and design. Total factor productivity is not just about high tech and R&D, important though the latter is.

Clearly this is a very superficial look at a very complicated area. Even within the limitations of the EU KLEMS data set, I’ve not considered how rates of TFP growth have varied over time – before and after the global financial crisis, for example. Nor have I considered the way shifts between sectors have contributed to overall changes in productivity across the economy – I’ve focused only on rates, not on starting levels. And of course, we’re talking here about history, which isn’t always a good guide to the future, where there will be a whole new set of technological opportunities and competitive challenges. But as we start to get serious about industrial strategy, these are the sorts of questions that we need to be looking into.

Eroom’s law strikes again

“Eroom’s law” is the name given by pharma industry analyst Jack Scannell to the observation that the productivity of research and development in the pharmaceutical industry has been falling exponentially for decades – discussed in my earlier post Productivity: in R&D, healthcare and the whole economy. The name is an ironic play on Moore’s law, the statement that the number of transistors on an integrated circuit increases exponentially.

It’s Moore’s law that has underlain the orders of magnitude increases in computing power we’ve grown used to. But if computing power has been increasing exponentially, what can we say about the productivity of the research and development effort that’s underpinned those increases? It turns out that in the semiconductor industry, too, research and development productivity has been falling exponentially. Eroom’s law describes the R&D effort needed to deliver Moore’s law – and the unsustainability of this situation must surely play a large part in the slow-down in the growth in computing power that we are seeing now.

Falling R&D productivity has been explicitly studied by the economists Nicholas Bloom, Charles Jones, John Van Reenen and Michael Webb, in a paper called “Are ideas getting harder to find?” (PDF). I discussed an earlier version of this paper here – I made some criticisms of the paper, though I think its broad thrust is right. One of the case studies the economists look at is indeed the electronics industry, and there’s one particular problem with their reasoning that I want to focus on here – though fixing this actually makes their overall argument stronger.

The authors estimate the total world R&D effort underlying Moore’s law, and conclude: “The striking fact, shown in Figure 4, is that research effort has risen by a factor of 18 since 1971. This increase occurs while the growth rate of chip density is more or less stable: the constant exponential growth implied by Moore’s Law has been achieved only by a massive increase in the amount of resources devoted to pushing the frontier forward.”

R&D expenditure in the microelectronics industry, showing Intel’s R&D expenditure, and a broader estimate of world microelectronics R&D including semiconductor companies and equipment manufacturers. Data from the “Are Ideas Getting Harder to Find?” dataset on Chad Jones’s website. Inflation corrected using the US GDP deflator.

The growth in R&D effort is illustrated in my first plot, which compares the growth of world R&D expenditure in microelectronics with the growth of computing power. I plot two measures from the Bloom/Jones/van Reenen/Webb data set – the R&D expenditure of Intel, and an estimate of broader world R&D expenditure on integrated circuits, which includes both semiconductor companies and equipment manufacturers (I’ve corrected for inflation using the US GDP deflator). The plot shows an exponential period of increasing R&D expenditure, which levelled off around 2000, to rise again from 2010.

The weakness of their argument – that increasing R&D effort has been needed to maintain the same rate of technological improvement – is that it selects the wrong output measure. No-one is interested in how many transistors there are per chip; what matters to the user, and to the wider economy, is that computing power continues to increase exponentially. As I discussed in an earlier post – Technological innovation in the linear age – the period of maximum growth in computing power ended in 2004. Moore’s law continued after this time, but the end of Dennard scaling meant that the rate of increase of computing power began to fall. This is illustrated in my second plot, which, after a plot in Hennessy & Patterson’s textbook Computer Architecture: A Quantitative Approach (6th edn) and using their data, shows the relative computing power of microprocessors as a function of their year of introduction. The solid lines illustrate 52% pa growth from 1984 to 2003, 23% pa growth from 2003 to 2011, and 9% pa growth from 2011 to 2014.

The growth in processor performance since 1988. Data from figure 1.1 in Computer Architecture: A Quantitative Approach (6th edn) by Hennessy & Patterson.

What’s interesting is that the slowdown in the rate of growth in R&D expenditure around 2000 is followed by a slowdown in the rate of growth of computing power. I’ve attempted a direct correlation between R&D expenditure and rate of increase of computing power in my next plot, which plots the R&D expenditure needed to produce a doubling of computer power as a function of time. This is a bit crude, as I’ve used the actual yearly figures without any smoothing, but it does seem to show a relatively constant increase of 18% per year, both for the total industry and for the Intel only figures.

Eroom’s law at work in the semiconductor industry. Real R&D expenditure needed to produce a doubling of processing power as a function of time.
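A minimal sketch of the calculation behind this plot, as I’ve described it (yearly figures, no smoothing); the numbers used here are placeholders, not the Bloom et al dataset:

```python
import math

# R&D spend per doubling of computing power: if computing power grows
# at annual rate g, one doubling takes log(2)/log(1+g) years, so the
# cost of a doubling is roughly the annual R&D spend times that time.
def cost_per_doubling(annual_rd_spend, annual_growth_rate):
    doubling_time_years = math.log(2) / math.log(1 + annual_growth_rate)
    return annual_rd_spend * doubling_time_years

# Illustrative placeholder values, not the actual data:
print(cost_per_doubling(12e9, 0.52))  # 52% pa growth era: ~$2.0e10 per doubling
print(cost_per_doubling(12e9, 0.09))  # 9% pa growth era: ~$9.7e10, about 5x more
```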

What is the cause of this exponential fall in R&D productivity? A small part reflects Baumol’s cost disease – R&D is essentially a service business done by skilled people, who command wages that reflect the growth of the whole economy rather than their own output (the Bloom et al paper accounts for this to some extent by deflating R&D expenditure by scientific salary levels rather than inflation). But this is a relatively small effect compared to the more general problem of the diminishing returns to continually improving an already very complex and sophisticated technology.

The consequence seems inescapable – at some point the economic returns of improving the technology will not justify the R&D expenditure needed, and companies will stop making the investments. We seem to be close to that point now, with Intel’s annual R&D spend – $12 billion in 2015 – only a little less than the entire R&D expenditure of the UK government, and the projected cost of doubling processor power from here exceeding $100 billion. The first sign has been the increased concentration of the industry. For the 10 nm node, only four companies remained in the game – Intel, Samsung, the Taiwanese foundry company TSMC, and GlobalFoundries, which acquired the microelectronics capabilities of AMD and IBM. As the 7 nm node is being developed, GlobalFoundries has announced that it too is stepping back from the competition to produce next-generation chips, leaving only three companies at the technology frontier.

The end of this remarkable half-century of exponential growth in computing power has arrived – and it’s important that economists studying economic growth come to terms with this. However, this doesn’t mean innovation comes to an end too. All periods of exponential growth in particular technologies must eventually saturate, whether that’s as a result of physical or economic limits. In order for economic growth to continue, what’s important is that entirely new technologies must appear to replace them. The urgent question we face is what new technology is now on the horizon, to drive economic growth from here.

Innovation, regional economic growth, and the UK’s productivity problem

A week ago I gave a talk with this title at a conference organised by the Smart Specialisation Hub. This organisation was set up to help regional authorities develop their economic plans; given the weight the government’s overall industrial strategy places on local industrial strategies, its role becomes all the more important.

Other speakers at the conference represented central government, the UK’s innovation agency InnovateUK, and the Smart Specialisation Hub itself. Representing no-one but myself, I was able to be more provocative in my own talk, which you can download here (PDF, 4.7 MB).

My talk had four sections. Opening with the economic background, I argued that the UK’s stagnation in productivity growth and its regional economic inequality have broken our political settlement. Looking at what’s going on in Westminster at the moment, I don’t think this is an exaggeration.

I went on to discuss the implications of the 2.4% R&D target – it’s not ambitious by developed world standards, but will be a stretch from our current position, as I discussed in an earlier blogpost: Reaching the 2.4% R&D intensity target.
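To give a sense of the scale of that stretch, here is a back-of-envelope sketch; the starting intensity, timescale and GDP growth rate are rough assumptions for illustration only:

```python
# If R&D intensity must rise from roughly 1.7% to 2.4% of GDP over a
# decade while GDP itself grows ~1.5% a year, total R&D spending has
# to grow much faster than the economy. All inputs are rough assumptions.
current_intensity, target_intensity = 0.017, 0.024
years, gdp_growth = 10, 0.015

required_growth = (target_intensity / current_intensity) ** (1 / years) * (1 + gdp_growth) - 1
print(f"required real-terms R&D growth: {required_growth:.1%} a year")  # ~5.1%
```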

Moving on to the regional aspects of research and innovation policy, I argued (as I did in this blog post: Making UK Research and Innovation work for the whole UK) that the UK’s regional concentration of R&D (especially public sector) is extreme and must be corrected. To illustrate this point, I used this version of Tom Forth’s plot splitting out the relative contributions of public and private sector to R&D regionally.

I argued that this plot gives a helpful framework for thinking about the different policy interventions needed in different parts of the country. I summarised this in this quadrant diagram [1].

Finally, I discussed the University of Sheffield’s Advanced Manufacturing Research Centre as an example of the kind of initiative that can help regenerate the economy of a de-industrialised area. Here a focus on translational research and skills at all levels both drives inward investment by international firms at the technology frontier and helps the existing business base upgrade.

I set this story in the context of Shih and Pisano’s notion of the “industrial commons” [2] – a set of resources that supports the collective knowledge, much of it tacit, that drives innovations in products and processes in a successful cluster. A successful industrial commons is rooted in a combination of large anchor companies and institutions, networks of supplying companies, R&D facilities, informal knowledge networks and formal institutions for training and skills. I argued that a focus of regional economic policy should be a conscious attempt to rebuild the “industrial commons” in an industrial sector which allows the opportunities of new technology to be embraced, yet which works with the grain of the existing industry and institutional base. “Smart specialisation” provides a good framework for identifying the right places to look.

1. As a participant later remarked, I’ve omitted the South East from this diagram – it should be in the bottom right quadrant, albeit with less business R&D than East Anglia, though with the benefits more widely spread.

2. See Pisano, G. P., & Shih, W. C. (2009). Restoring American Competitiveness. Harvard Business Review, 87(7-8), 114–125.

The semiconductor industry and economic growth theory

In my last post, I discussed how “econophysics” has been criticised for focusing on exchange, not production – in effect, for not concerning itself with the roots of economic growth in technological innovation. Of course, some of that technological innovation has arisen from physics itself – so here I talk about what economic growth theory might learn from an important episode of technological innovation with its origins in physics – the development of the semiconductor industry.

Economic growth and technological innovation

In my last post, I criticised econophysics for not talking enough about economic growth – but to be fair, it’s not just econophysics that suffers from this problem – mainstream economics doesn’t have a satisfactory theory of economic growth either. And yet economic growth and technological innovation provides an all-pervasive background to our personal economic experience. We expect to be better off than our parents, who were themselves better off than our grandparents. Economics without a theory of growth and innovation is like physics without an arrow of time – a marvellous intellectual construction that misses the most fundamental observation of our lived experience.

Defenders of economics at this point will object that it does have theories of growth, and there are even some excellent textbooks on the subject [1]. Moreover, they might remind us, wasn’t the Nobel Prize for economics awarded this year to Paul Romer, precisely for his contribution to theories of economic growth? This is indeed so. The mainstream approach to economic growth pioneered by Robert Solow regarded technological innovation as something externally imposed, and Romer’s contribution has been to devise a picture of growth in which technological innovation arises naturally from the economic models – the “post-neoclassical endogenous growth theory” that ex-Prime Minister Gordon Brown was so (unfairly) lampooned for invoking.

This body of work has undoubtedly highlighted some very useful concepts, stressing the non-rivalrous nature of ideas and the economic basis for investments in R&D, especially for the day-to-day business of incremental innovation. But it is not a theory in the sense a physicist might understand the term – it doesn’t explain past economic growth, so it can’t make predictions about the future.

How the information technology revolution really happened

Perhaps to understand economic growth we need to turn to physics again – this time, to the economic consequences of the innovations that physics provides. Few would disagree that a – perhaps the – major driver of technological innovation, and thus economic growth, over the last fifty years has been the huge progress in information technology, with the exponential growth in the availability of computing power that is summed up by Moore’s law.

The modern era of information technology rests on the solid-state transistor, which was invented by William Shockley at Bell Labs in the late 1940’s (with Brattain and Bardeen – the three received the 1956 Nobel Prize for Physics). In 1956 Shockley left Bell Labs and went to Palo Alto (in what would later be called Silicon Valley) to found a company to commercialise solid-state electronics. However, his key employees in this venture soon left – essentially because he was, by all accounts, a horrible human being – and founded Fairchild Semiconductor in 1957. Key figures amongst those refugees were Gordon Moore – of eponymous law fame – and Robert Noyce. It was Noyce who, in 1960, made the next breakthrough, inventing the silicon integrated circuit, in which a number of transistors and other circuit elements were combined on a single slab of silicon to make an integrated functional device. Jack Kilby, at Texas Instruments, had, more or less at the same time, independently developed an integrated circuit on germanium, for which he was awarded the 2000 Physics Nobel prize (Noyce, having died in 1990, was unable to share this). Integrated circuits didn’t take off immediately, but according to Kilby it was their use in the Apollo mission and the Minuteman ICBM programme that provided a turning point in their acceptance and widespread use [2] – the Minuteman II guidance and control system was the first mass produced computer to rely on integrated circuits.

Moore and Noyce founded the electronics company Intel in 1968, to focus on developing integrated circuits. Moore had already, in 1965, formulated his famous law about the exponential growth with time of the number of transistors per integrated circuit. The next step was to incorporate all the elements of a computer on a single integrated circuit – a single piece of silicon. Intel duly produced the first commercially available microprocessor – the 4004 – in 1971, though this had been (possibly) anticipated by the earlier microprocessor that formed the flight control computer for the F14 Tomcat fighter aircraft. From these origins emerged the microprocessor revolution and personal computers, with their giant wave of derivative innovations, leading up to the current focus on machine learning and AI.

Lessons from Moore’s law for growth economics

What should be clear from this very brief account is that classical theories of economic growth cannot account for this wave of innovation. The motivations that drove it were not economic – they arose from a powerful state with enormous resources at its disposal pursuing complex, but entirely non-economic projects – such as the goal of being able to land a nuclear weapon on any point of the earth’s surface with an accuracy of a few hundred metres.

Endogenous growth theories perhaps can give us some insight into the decisions companies made about R&D investment and the wider spillovers that such spending led to. They would need to take account of the complex institutional landscape that gave rise to this innovation. This isn’t simply a distinction between public and private sectors – the original discovery of the transistor was made at Bell Labs – nominally in the private sector, but sustained by monopoly rents arising from government action.

The landscape in which this innovation took place seems much more complex than growth economics – with its array of firms employing undifferentiated labour and capital, all benefiting from some kind of soup of spillovers – is able to handle. Semiconductor fabs are perhaps the most capital intensive plants in the world, with just a handful of bunny-suited individuals tending a clean-room full of machines that individually might be worth tens or even hundreds of millions of dollars. Yet the value of those machines represents, as much as anything physical, the embodied value of the intangible investments in R&D and process know-how.

How are the complex networks of equipment and materials manufacturers coordinated to make sure technological advances in different parts of this system happen at the right time and in the right sequence? These are independent companies operating in a market – but the market alone has not been sufficient to transmit the information needed to keep it coordinated. An enormously important mechanism for this coordination has been the National Technology Roadmap for Semiconductors (later the International Technology Roadmap for Semiconductors), initiated by a US trade body, the Semiconductor Industry Association. This was an important social innovation which allowed companies to compete in meeting collaborative goals; it was supported by the US government by the relaxation of anti-trust law and the foundation of a federally funded organisation to support “pre-competitive” research – SEMATECH.

The involvement of the US government reflected the importance of the idea of competition between nation states in driving technological innovation. Because of the cold war origins of the integrated circuit, the original competition was with the Soviet Union, which created an industry to produce ICs for military use, based around Zelenograd. The degree to which this industry was driven by indigenous innovation as against the acquisition of equipment and know-how from the west isn’t clear to me, but it seems that by the early 1980’s the gap between Soviet and US achievements was widening, contributing to the sense of stagnation of the later Brezhnev years and the drive for economic reform under Gorbachev.

From the 1980’s, the key competitor was Japan, whose electronics industry had been built up in the 1960’s and 70’s driven not by defense, but by consumer products such as transistor radios, calculators and video recorders. In the mid-1970’s the Japanese government’s MITI provided substantial R&D subsidies to support the development of integrated circuits, and by the late 1980’s Japan appeared within sight of achieving dominance, to the dismay of many commentators in the USA.

That didn’t happen, and Intel still remains at the technological frontier. Its main rivals now are Korea’s Samsung and Taiwan’s TSMC. Their success reflects different versions of the East Asian developmental state model; Samsung is Korea’s biggest industrial conglomerate (or chaebol), whose involvement in electronics was heavily sponsored by its government. TSMC was a spin-out from a state-run research institute in Taiwan, ITRI, which grew by licensing US technology and then very effectively driving process improvements.

Could one build an economic theory that encompasses all this complexity? For me, the most coherent account has been Bill Janeway’s description of the way government investment combines with the bubble dynamics that drives venture capitalism, in his book “Doing Capitalism in the Innovation Economy”. Of course, the idea that financial bubbles are important for driving innovation is not new – that’s how the UK got a railway network, after all – but the econophysicist Didier Sornette has extended this to introduce the idea of a “social bubble” driving innovation [3].

This long story suggests that the ambition of economics to “endogenise” innovation is a bad idea, because history tells us that the motivations for some of the most significant innovations weren’t economic. To understand innovation in the past, we don’t just need economics, we need to understand politics, history, sociology … and perhaps even natural science and engineering. The corollary of this is that devising policy solely on the basis of our current theories of economic growth is likely to lead to disappointing outcomes. At a time when the remarkable half-century of exponential growth in computing power seems to be coming to an end, it’s more important than ever to learn the right lessons from history.

[1] I’ve found “Introduction to Modern Economic Growth”, by Daron Acemoglu, particularly useful

[2] Jack Kilby: Nobel Prize lecture, https://www.nobelprize.org/uploads/2018/06/kilby-lecture.pdf

[3] See also that great authority, The Onion: “Recession-Plagued Nation Demands New Bubble to Invest In”.

The Physics of Economics

This is the first of two posts which began life as a single piece with the title “The Physics of Economics (and the Economics of Physics)”. In the first section, here, I discuss some ways physicists have attempted to contribute to economics. In the second half, I turn to the lessons that economics should learn from the history of a technological innovation with its origin in physics – the semiconductor industry.

Physics and economics are two disciplines which have quite a lot in common – they’re both mathematical in character, many of their practitioners are not short of intellectual self-confidence, and they both have imperialist tendencies towards their neighbouring disciplines. So the interaction between the two fields should be, if nothing else, interesting.

The origins of econophysics

The most concerted attempt by physicists to colonise an area of economics is in the behaviour of financial markets – in the field which calls itself “econophysics”. Actually, at its origins, the traffic went both ways – the mathematical theory of random walks that Einstein developed to explain the phenomenon of Brownian motion had been anticipated by the French mathematician Bachelier, who derived the theory to explain the movements of stock markets. Much later, the economic theory that markets are efficient brought this line of thinking back into vogue – it turns out that financial markets can quite often be modelled as simple random walks – but not quite always. The random steps that markets take aren’t drawn from a Gaussian distribution – the distribution has “fat tails”, so rare events – like big market crashes – aren’t anywhere near as rare as simple theories assume.
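A minimal simulation makes the fat tails point; the parameters are arbitrary, chosen only to contrast a Gaussian with a heavy-tailed step distribution:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 250 * 40  # roughly 40 years of daily returns

gaussian_steps = rng.normal(0, 1, n)
fat_tail_steps = rng.standard_t(df=3, size=n)  # Student-t: heavy tails
fat_tail_steps /= fat_tail_steps.std()         # rescale to unit variance

# How often does each process produce a ">5 sigma" day?
print("Gaussian >5 sigma days:", (np.abs(gaussian_steps) > 5).sum())    # ~0
print("Fat-tailed >5 sigma days:", (np.abs(fat_tail_steps) > 5).sum())  # dozens
```

Under the Gaussian model a five-sigma daily move should happen about once in several thousand years; with the fat-tailed steps it happens routinely – roughly the difference between textbook finance and an actual market crash.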

Empirically, it turns out that the distributions of these rare events can sometimes be described by power laws. In physics power laws are associated with what are known as critical phenomena – behaviours such as the transition from a liquid to a gas or from a magnet to a non-magnet. These phenomena are characterised by a certain universality, in the sense that the quantitative laws – typically power laws – that describe the large scale behaviour of these systems don’t strongly depend on the details of the individual interactions between the elementary objects (the atoms and molecules, in the case of magnetism and liquids) whose interaction leads collectively to the larger scale phenomenon we’re interested in.

For “econophysicists” – whose background has often been in the study of critical phenomena – it is natural to try and situate theories of the movements of financial markets in this tradition, finding analogies with other places where power laws can be found, such as the distribution of earthquake sizes and the behaviour of sand-piles. In terms of physicists’ actual impact on participants in financial markets, though, there’s a paradox. Many physicists have found (often very lucrative) employment as quantitative traders, but the theories that academic physicists have developed to describe these markets haven’t made much impact on the practitioners of financial economics, who have their own models to describe market movements.

Other ideas from physics have made their way into discussions about economics. Much of classical economics depends on ideas like the “representative household” or the “representative firm”. Physicists with a background in statistical mechanics recognise this sort of approach as akin to a “mean field theory”. The idea that a complex system is well represented by its average member is one that can be quite fruitful, but in some important circumstances fails – and fails badly – because the fluctuations around the average become as important as the average itself. This motivates the idea of agent based models, to which physicists bring the hope that even simple “toy” models can bring insight. The Schelling model is one such very simple model that came from economics, but which has a formal similarity with some important models in physics. The study of networks is another place where one learns that the atypical can be disproportionately important.
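Since the Schelling model is mentioned, here is a minimal one-dimensional sketch of it; the lattice size, occupancy and tolerance threshold are arbitrary choices, and a serious implementation would be two-dimensional:

```python
import random

# 1D Schelling segregation: agents of two types sit on a line; an agent
# is unhappy if fewer than half of its occupied neighbouring sites hold
# its own type, and unhappy agents move to a random empty site.
random.seed(1)
N, EMPTY = 100, 0
sites = [random.choice([1, 2, EMPTY]) for _ in range(N)]

def unhappy(i):
    me = sites[i]
    nbrs = [sites[j] for j in (i - 1, i + 1) if 0 <= j < N and sites[j] != EMPTY]
    return me != EMPTY and nbrs and sum(n == me for n in nbrs) / len(nbrs) < 0.5

for _ in range(10000):
    i = random.randrange(N)
    if unhappy(i):
        j = random.choice([k for k in range(N) if sites[k] == EMPTY])
        sites[j], sites[i] = sites[i], EMPTY

print("".join(".XO"[s] for s in sites))  # clusters of X and O emerge
```

The point of the model is that even a mild individual preference – merely not wanting to be in a local minority – produces strong segregation in aggregate, an emergent outcome no individual agent intended.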

If markets are about information, then physics should be able to help…

One very attractive emerging application of ideas from physics to economics concerns the place of information. Friedrich Hayek stressed the compelling insight that one can think of a market as a mechanism for aggregating information – but a physicist should understand that information is something that can be quantified, and (via Shannon’s theory) that there are hard limits on how much information can be transmitted in a physical system. Jason Smith’s research programme builds on this insight to analyse markets in terms of an information equilibrium [1].
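To illustrate what “quantified” means here, a tiny sketch: the Shannon entropy of a distribution of market signals bounds how much information a single observation can convey. The four-outcome distribution below is invented for illustration:

```python
import math

# Shannon entropy in bits: an upper bound on the information conveyed
# per symbol drawn from this distribution. The probabilities are made up.
p = {"big rise": 0.05, "small rise": 0.45, "small fall": 0.45, "big fall": 0.05}

entropy_bits = -sum(q * math.log2(q) for q in p.values())
print(f"at most {entropy_bits:.2f} bits per observation")  # ~1.47 bits
```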

Some criticisms of econophysics

How significant is econophysics? A critique from some (rather heterodox) economists – Worrying trends in econophysics – is now more than a decade old, but still stings (see also this commentary from the time from Cosma Shalizi – Why Oh Why Can’t We Have Better Econophysics? ). Some of the criticism is methodological – and could be mostly summed up by saying, just because you’ve got a straight bit on a log-log plot doesn’t mean you’ve got a power law. Some criticism is about the norms of scholarship – in brief: read the literature and stop congratulating yourselves for reinventing the wheel.
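The log-log criticism is easy to make concrete: data from a lognormal distribution – which has no power-law tail – often produces a convincingly straight stretch on a log-log rank plot. A rough sketch, contrasting the naive fit with the maximum-likelihood tail estimate advocated by Clauset, Shalizi and Newman (the sample size, parameters and tail cutoff are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.lognormal(mean=0.0, sigma=2.0, size=5000)  # NOT a power law

# Naive approach: least-squares slope on the log-log rank plot looks
# convincingly straight for lognormal data.
xs = np.sort(x)[::-1]
rank = np.arange(1, len(xs) + 1)
slope = np.polyfit(np.log(xs), np.log(rank), 1)[0]
print(f"log-log slope: {slope:.2f} (straight-ish line, but not a power law)")

# Better: maximum-likelihood exponent for the tail above xmin, which
# should then face a goodness-of-fit test before any power-law claim.
xmin = np.quantile(x, 0.9)
tail = x[x >= xmin]
alpha_hat = 1 + len(tail) / np.sum(np.log(tail / xmin))
print(f"MLE tail exponent: {alpha_hat:.2f}")
```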

But the most compelling criticism of all is about the choice of problem that econophysics typically takes. Most attention has been focused on the behaviour of financial markets, not least because these provide a wealth of detailed data to analyse. But there’s more to the economy – much, much more – than the financial markets. More generally, the areas of economics that physicists have tended to apply themselves to have been about exchange, not production – studying how a fixed pool of resources can be allocated, not how the size of the pool can be increased.

[1] For a more detailed motivation of this line of reasoning, see this commentary, also from Cosma Shalizi on Francis Spufford’s great book “Red Plenty” – “In Soviet Union, Optimization Problem Solves You”.

The UK’s top six productivity underperformers

The FT has been running a series of articles about the UK’s dreadful recent productivity performance, kicked off with this very helpful summary – Britain’s productivity crisis in eight charts. One important aspect of this was to focus on the (negative) contribution of formerly leading sectors of the economy which have, since the financial crisis, underperformed:

“Computer programming, energy, finance, mining, pharmaceuticals and telecoms — which together account for only one-fifth of the economy — generated three-fifths of the decline in productivity growth.”

The original source of this striking statistic is a paper by Rebecca Riley, Ana Rincon-Aznar and Lea Samek – Below the Aggregate: A Sectoral Account of the UK Productivity Puzzle.

What this should stress is that there’s no single answer to the productivity crisis. We need to look in detail at different industrial sectors, different regions of the UK, and identify the different problems they face before we can work out the appropriate policy responses.

So what can we say about what’s behind the underperformance of each of these six sectors, and what lessons should policy-makers learn in each case? Here are a few preliminary thoughts.

Mining. This is dominated by North Sea oil. The oil is running out, and won’t be coming back – production peaked in 2000; what oil is left is more expensive and difficult to get out.
Lessons for policy makers: more recognition is needed that the UK’s prosperity in the 90’s and early 2000’s depended as much on the accident of North Sea oil as on any particular strength of the policy framework.

Finance. It’s not clear to me how much of the apparent pre-crisis productivity boom was real, but since the crisis, increased regulation and greater capital requirements have reduced apparent rates of return in financial services. This is as it should be.
Lessons for policy makers: this sector is the problem, not the solution, so calls to relax regulation should be resisted, and so-called “innovation” that in practice amounts to regulatory arbitrage discouraged.

The end of North Sea oil and the finance bubble cannot be reversed – these are headwinds that the economy has to overcome. We have to find new sources of productivity growth rather than looking back nostalgically at these former glories (for example, there’s a risk that the enthusiasm for fracking and fintech represents just such nostalgia).

Energy. Here, a post-privatisation dysfunctional pseudo-market has prioritised sweating existing assets rather than investing. Meanwhile there’s been an unclear and inconsistent government policy environment; sometimes the government has willed the ends without providing the means (e.g. nuclear new build); elsewhere it has introduced perverse and abrupt changes of tack (e.g. in its support for onshore wind and solar).
Lessons for policy makers: develop a rational, long-term energy strategy that will deliver the necessary decarbonisation of the energy economy. Then stick to it, driving innovation to support the strategy. For more details, read chapter 4 – Decarbonisation of the energy economy – of the Industrial Strategy Commission’s final report.

Computer programming. Here I find myself on less sure ground. Are we seeing the effects of increasing outsourcing to, and competition from, India’s growing IT industry? Are we seeing the effect of the growing commoditisation of computer programming, with new business models such as “software as a service”?

Telecoms. Again, here I’m less certain of what’s been going on. Are we seeing the effect of lengthening product cycles as the growth in processor power slows? Is this the effect of overseas competition – for example, rapidly growing Chinese firms like Huawei – moving up the value chain? Here it’s also likely that measurement problems – in correctly accounting for improvements in quality – will be most acute.

Pharmaceuticals. As my last blogpost outlined, productivity growth in pharmaceuticals depends on new products being developed through formal R&D, their value being protected by patents. There has been a dramatic, long-term fall in the productivity of pharma R&D, so it is unsurprising that this is now feeding through into reduced labour productivity.
Lessons for policy makers: see the recent NESTA report “The Biomedical Bubble”.

Many of these issues were already discussed in my 2016 SPERI paper Innovation, research and the UK’s productivity crisis. Two years on, the productivity crisis seems even more pressing, and, as the FT series illustrates, is receiving more attention from policy makers and economists (though still not enough, in view of its fundamental importance for living standards and fiscal stability). The lesson I would want to stress is that, to make progress, policy makers and economists need to go beyond generalities and pay more attention to the detailed particulars of individual industries, sectors and regions, and the different ways innovation takes place – or hasn’t been taking place – within them.

The biomedical bubble

I have a new report out, written with science policy expert James Wilsdon for the innovation foundation NESTA, entitled The Biomedical Bubble: Why UK research and innovation needs a greater diversity of priorities, politics, places and people. Here’s a summary of the report:

Biomedical science and innovation has benefited from significant increases in public investment over the past 15 years. This builds on the remarkable strengths of the UK’s academic life sciences base and pharmaceutical industry. But continuing to prioritise the biomedical, in a period when government aims to boost research and development (R&D) spending to 2.4 per cent of GDP, risks unbalancing our innovation system, and is unlikely to deliver the economic benefits or improvements to health outcomes that society expects.

For too long, the pharmaceutical and biotechnology sectors have dominated policy thinking about translating research, but these sectors are in deep trouble, with R&D productivity plummeting and R&D investment falling. Meanwhile, much of the wider innovation needed for the NHS, public health and social care has been under-resourced. Greater emphasis needs to be given to the social, environmental, digital and behavioural determinants of health, and decisions about research priorities need to involve a greater diversity of perspectives, drawn from across the country. The creation of UK Research and Innovation (UKRI), which aims to bring a more strategic approach to funding and prioritisation, is the right moment to rethink this balance. This paper sets out why and how the UK needs to escape the biomedical bubble if it is to realise the economic, social and health potential of extra investment in R&D.

There are some shorter pieces discussing different aspects of our arguments:

In the Guardian Political Science blog: It’s time to burst the biomedical bubble in UK research

On the WonkHE website, arguing that building an industrial strategy around the pharma/biotech industry is a bet on the US healthcare system remaining unreformed:
Rethinking the life sciences strategy

On the ResearchProfessional website (subscription required), focusing on the task UKRI faces in balancing its portfolio:
Examine funding balance to pop ‘biomedical bubble’, UKRI told

A news piece about the report in the Times Higher:
UK’s biomedical research funding ‘bubble’ is ‘about to burst’

Bad Innovation: learning from the Theranos debacle

Earlier this month, Elizabeth Holmes, founder of the medical diagnostics company Theranos, was indicted on fraud and conspiracy charges. Just four years ago, Theranos was valued at $9 billion, and Holmes was being celebrated as one of Silicon Valley’s most significant innovators – not only the founder of one of the mythical unicorns but, through the public value of her technology, a benefactor of humanity. How this astonishing story unfolded is the subject of a tremendous book by John Carreyrou, the journalist who first exposed the scandal. “Bad Blood” is a compelling read – but it’s also a cautionary tale, with some broader lessons about the shortcomings of Silicon Valley’s approach to innovation.

The story of Theranos

The story begins in 2003. Holmes had finished her first year as a chemical engineering student at Stanford. She was particularly influenced by one of her professors, Channing Robertson; she took his seminar on drug delivery devices, and worked in his lab over the summer. Inspired by this, she was determined to apply the principles of micro- and nanotechnology to medical diagnostics, and wrote a patent application for a patch which would sample a patient’s blood, analyse it, use the information to determine the appropriate response, and release a controlled amount of the right drug. This closed-loop system would combine diagnostics with therapy – hence the name Theranos (from “theranostic”).

Holmes dropped out of Stanford in her second year to pursue her idea, encouraged by Robertson. By the end of 2004, the company she had incorporated with one of Robertson’s PhD students, Shaunak Roy, had raised $6 million from angels and venture capitalists.

The nascent company soon decided that the original theranostic patch was too ambitious, and focused on diagnostics alone. Holmes settled on the idea of doing blood tests on very small volumes – the droplets of blood you get from a finger prick, rather than the larger volumes you get by drawing blood with a needle and syringe. It’s a great pitch for those scared of needles – but the true promise of the technology was much wider than this. Automatic units could be placed in patients’ homes, cutting out all the delay and inconvenience of having to go to the clinic for the blood draw and then waiting for the results to come back. The units could be deployed in the field – with the US Army in Iraq and Afghanistan – or in places suffering from epidemics of diseases like Ebola or Zika. They could be used in drug trials to continuously monitor patients’ reactions and pick up side effects quickly.

The potential seemed huge, and so were the revenue projections. By 2010, Holmes was ready to start rolling out the technology. She negotiated a major partnership with the pharmacy chain Walgreens, while the supermarket Safeway loaned the company $30 million with a view to opening a chain of “wellness centres”, built around the Theranos technology, in its stores. The US Army – in the powerful figure of General James Mattis – was seriously interested.

In 2013, the Walgreens collaboration was ready to go live; the company had paid Theranos a $100 million “innovation fee” and extended a $40 million loan on the basis of a 2013 launch. The elite advertising agency Chiat\Day, famous for their work with Apple, were engaged to polish the image of the company – and of Elizabeth Holmes. Investors piled into a new funding round, at the end of which Theranos was valued at $9 billion – and Holmes was a paper billionaire.

What could go wrong? There turned out to be two flies in the ointment: Theranos’s technology couldn’t do even half of what Holmes had been promising, and even on the tests it could do, it was unacceptably inaccurate. Carreyrou’s book is at its most compelling as he gives his own account of how he broke the story, in the face of deception, threats, and some very expensive lawyers. None of this would have come out without some very brave whistleblowers.

At what point did the necessary optimism about a yet-to-be developed technology turn first into self-delusion, and then into fraud? To answer this, we need to look at the technological side of the story.

The technology

As is clear from Carreyrou’s account, Theranos had always taken secrecy about its technology to the point of paranoia – and it was this secrecy that enabled the deception to continue for so long. There was certainly no question of the company publishing anything about its methods and results in the open literature. But, from the insiders’ accounts in the book, we can trace the evolution of Theranos’s technical approach.

To go back to the beginning, we can get a sense of what was in Holmes’s mind at the outset from her first patent, originally filed in 2003. This patent – “Medical device for analyte monitoring and drug delivery” – is hugely broad, at times reading like a digest of everything that anybody at the time was thinking about in nanotechnology and diagnostics. But one can see the central claim: an array of silicon microneedles would penetrate the skin to extract blood painlessly; this would be pumped through 100 µm wide microfluidic channels, combined with reagent solutions, and then tested for a variety of analytes by detecting their binding to molecules attached to surfaces. In Holmes’s original patent, the idea was that this information would be processed, and then used to initiate the injection of a drug back into the body. One example quoted was the antibiotic vancomycin, which has a rather narrow window of effectiveness before side effects become severe – the idea was that the blood would be continuously monitored for vancomycin levels, which would then be automatically topped up when necessary.
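To make the closed-loop idea concrete, here is a minimal sketch of the control logic such a device would need – measure an analyte, compare it against a therapeutic window, and dose accordingly. The thresholds, units and function names below are hypothetical illustrations, not details taken from the patent.

```python
# A minimal sketch of closed-loop "theranostic" control: measure the
# analyte, compare against a therapeutic window, and release drug only
# when the level falls below the window. All values are hypothetical.

TROUGH_LOW = 10.0     # hypothetical lower bound of the window (ug/mL)
TROUGH_HIGH = 20.0    # hypothetical upper bound of the window (ug/mL)
DOSE_INCREMENT = 0.5  # hypothetical dose released per actuation (mg)

def control_step(measured_level: float) -> float:
    """Return the drug dose (mg) to release for one monitoring cycle."""
    if measured_level < TROUGH_LOW:
        return DOSE_INCREMENT  # below the window: top the drug up
    if measured_level > TROUGH_HIGH:
        return 0.0             # above the window: withhold and wait
    return 0.0                 # within the window: no action needed

# A few simulated monitoring cycles:
for level in [8.2, 14.5, 22.3]:
    print(f"measured {level:5.1f} ug/mL -> release {control_step(level):.1f} mg")
```

The hard part, of course, was never this control logic, but the microfluidic sensing and drug delivery it presupposes – which is why, as the next paragraph describes, the full closed-loop device was soon judged too ambitious.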

Holmes and Roy, having decided that the complete closed-loop theranostic device was too ambitious, began work on a microfluidic device to take a very small sample of blood from a finger prick, route it through a network of tiny pipes, and subject it to a battery of scaled-down biochemical tests. This all seems doable in principle, but it is fraught with practical difficulties. After three years of limited progress, Holmes seems to have decided that this approach wasn’t going to work in time, so in 2007 the company switched direction away from microfluidics, and Shaunak Roy parted from it amicably.

The new approach was based around a commercial robot they’d acquired, designed for the automatic dispensing of adhesives. The idea of basing their diagnostic technology on this “gluebot” is less odd than it might seem. There’s nothing wrong with borrowing bits of technology from other areas, and reliably gluing things together depends on precise, automated fluid handling, just as diagnostic analysis does. But what this did mean was that Theranos no longer aspired to be a microfluidics/nanotech firm; instead it was in the business of automating conventional laboratory testing. This is a fine thing to do, of course, but it’s an area with much more competition from existing firms, like Siemens. No longer could Theranos honestly claim to be developing a wholly new, disruptive technology. What’s not clear is whether its financial backers, or its board, were told enough – or had enough technical background – to understand this.

The resulting prototype was called Edison 1.0 – and it sort-of worked. It could only do one class of test – immunoassays – it couldn’t run many of these tests at the same time, and its results were not reproducible or accurate enough for clinical use. To fill in the gaps between what Theranos promised its proprietary technology could do and its actual capabilities, the company resorted to modifying a commercial analysis machine – the Siemens Advia 1800 – to be able to analyse smaller samples. This was essential to fulfil Theranos’s claimed USP: being able to analyse drops of blood from finger pricks rather than the larger volumes drawn for standard blood tests with a needle and syringe from a vein.

But these modifications presented their own difficulties. What they amounted to was simply diluting the small blood sample to make it go further – but of course this reduces the concentration of the molecules the analyses are looking for, often to below the instruments’ range of sensitivity. And there remained a bigger question, one that hangs over the viability of the whole enterprise: can one take blood from a finger prick that isn’t contaminated, to an unknown degree, by tissue fluid, cell debris and the like? Whatever the cause, it became clear that the test results Theranos were providing – to real patients, by this stage – were erratic and unreliable.
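The dilution arithmetic is worth spelling out, because it is unforgiving. Here is a back-of-envelope sketch – every number in it is invented for illustration, but the proportional logic is general: dilute a sample n-fold and every analyte concentration falls n-fold, potentially below the instrument’s limit of detection.

```python
# Back-of-envelope illustration (all numbers hypothetical) of why
# dilution breaks assays: stretching a finger-prick sample to the
# volume a commercial analyser needs reduces the analyte concentration
# proportionally, which can push it below the limit of detection.

sample_volume_ul = 50.0     # hypothetical finger-prick sample (uL)
required_volume_ul = 250.0  # hypothetical volume the analyser needs (uL)
analyte_conc = 12.0         # hypothetical analyte concentration (ng/mL)
detection_limit = 5.0       # hypothetical limit of detection (ng/mL)

dilution_factor = required_volume_ul / sample_volume_ul
diluted_conc = analyte_conc / dilution_factor

print(f"dilution factor: {dilution_factor:.0f}x")
print(f"diluted concentration: {diluted_conc:.1f} ng/mL")
print("below the detection limit" if diluted_conc < detection_limit
      else "still detectable")
```

In this made-up example, a five-fold dilution takes a comfortably measurable 12 ng/mL down to 2.4 ng/mL – invisible to an instrument whose limit of detection is 5 ng/mL.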

Theranos was working on a next-generation analyser – the so-called miniLab – with the goal of miniaturising existing lab testing methods to make a very versatile analyser. This project never came to fruition. It was unquestionably an avenue worth pursuing, but Theranos wasn’t alone in the venture, and it’s difficult to see what special capabilities it brought that rivals with more experience and a longer track record in the area didn’t already have. Portable analysers already existed (for example, the Piccolo Xpress), and the miniaturised technologies the miniLab would use were already in the marketplace (for example, Theranos was studying the excellent miniaturised IR and UV spectrophotometers made by Ocean Optics – used in my own research group). In any case, events overtook Theranos before it could make progress with this new device.

Counting the cost and learning the lessons

What was the cost of this debacle? There was a human cost, not fully quantified, in patients being given unreliable test results, which surely led to wrong diagnoses and to missed or inappropriate treatments. And there was an opportunity cost – Theranos spent around $900 million, some of it on technology development, but rather too much on fees for lawyers and advertising agencies. But I suspect the biggest cost was the effect Theranos had in slowing down and squeezing out innovation in an area that genuinely did have the potential to make a big difference to healthcare.

It’s difficult to read this story without starting to think that something is very wrong with intellectual property law in the United States. The original Theranos patent was astonishingly broad, and given the amount of money the company spent on lawyers, there can be no doubt that other potential innovators were dissuaded from entering the field. IP law distinguishes between the conception of a new invention and its necessary “reduction to practice”. Reduction to practice can be by the testing of a prototype, but it can also be by the description of the invention in enough detail that it can be reproduced by another worker “skilled in the art”. Interpretation of “reduction to practice” seems to have become far too loose. Rather than giving an inventor the right to benefit from a time-limited monopoly on an invention they’ve already got to work, patent law currently seems to allow the well-lawyered to carve out entire areas of potential innovation for their exclusive investigation.

I’m also struck, from Carreyrou’s account, by the importance of personal contacts in the establishment of Theranos. We might think of Silicon Valley as the epitome of American meritocracy, but key steps in funding were enabled by who was friends with whom and by family relationships. It’s obvious that far too much was taken on trust, and far too little actual technical due diligence was carried out.

Carreyrou rightly stresses just how wrong it was to apply the Silicon Valley “fake it till you make it” philosophy to a medical technology company, where what follows from the fakery isn’t just irritation at buggy software, but life-and-death decisions about people’s health. I’d add to this a lesson I’ve written about before – doing innovation in the physical and biological realms is fundamentally more difficult, expensive and time-consuming than innovating in the digital world of pure information, and if you rely on experience in the digital world to form your expectations about innovation in the physical world, you’re likely to come unstuck.

Above all, Theranos was built on gullibility and credulousness – optimism about the inevitability of technological progress, faith in the eminence of the famous former statesmen who formed the Theranos board, and a cult of personality around Elizabeth Holmes – a cult that was carefully, deliberately and expensively fostered by Holmes herself. Magazine covers and TED talks don’t by themselves make a great innovator.

But in one important sense, Holmes was convincing. The availability of cheap, accessible, and reliable diagnostic tests would make a big difference to health outcomes across the world. The biggest tragedy is that her actions have set back that cause by many years.