Did the government build the iPhone? Would the iPhone have happened without governments?

The iPhone must be one of the most instantly recognisable symbols of the modern “tech economy”. So it was an astute choice by Mariana Mazzucato to put it at the centre of her argument about the importance of governments in driving the development of technology. Mazzucato’s book – The Entrepreneurial State – argues that technologies like the iPhone depended on the ability and willingness of governments to take on technological risks that the private sector is not prepared to assume. She also notes that it is that same private sector which captures the rewards of the government’s risk-taking. The argument is a powerful corrective to the libertarian tendencies and the glorification of the free market that are particularly associated with Silicon Valley.

Her argument could, though, be caricatured as saying that the government built the iPhone. But to put it this way would be taking the argument much too far – the contributions, not just of Apple, but of many other companies in a worldwide supply chain that have developed the technologies that the iPhone integrates, are enormous. The iPhone was made possible by the power of private sector R&D, the majority of it not in fact done by Apple, but by many companies around the world, companies that most people have probably not even heard of.

And yet this private sector R&D was encouraged, driven, and sometimes funded outright, by government (in fact, by more than one government – although the USA has had a major role, other governments have played their parts too in creating Apple’s global supply chain). It drew on many results from publicly funded research, in universities and public research institutes around the world.

So, while it isn’t true to say that the government built the iPhone, it is true to say that the iPhone would not have happened without governments. We need to understand better the ways government and the private sector interact to drive innovation forward, not just to get a truer picture of where the iPhone came from, but to make sure we continue to get the technological innovations we want and need.

Integrating technologies is important, but innovation in manufacturing matters too

The iPhone (and the modern smartphone more generally) is, truly, an awe-inspiring integration of many different technologies. It’s a powerful computer, with an elegant and easy-to-use interface; it’s a mobile phone which connects to the sophisticated, computer-driven infrastructure that constitutes the worldwide cellular telephone system; and through that wireless data infrastructure it provides an interface to powerful computers and databases worldwide. Many of the new applications of smartphones (as enablers, for example, of the so-called “sharing economy”) depend on the package of powerful sensors they carry – to infer their location (the GPS unit), to determine what is happening to them physically (the accelerometers), and to record images of their surroundings (the camera sensor).

Mazzucato’s book traces back the origins of some of the technologies behind the iPod, like the hard drive and the touch screen, to government-funded work. This is all helpful and salutary to remember, though I think there are two points that are underplayed in this argument.

Firstly, I do think that the role of Apple itself (and its competitors), in integrating many technologies into a coherent design supported by usable software, shouldn’t be underestimated – though it’s clear that Apple in particular has been enormously successful in finding the position that extracts maximum value from physical technologies that have been developed by others.

Secondly, when it comes to those physical technologies, one mustn’t underestimate the effort that needs to go in to turn an initial discovery into a manufacturable product. A physical technology – like a device to store or display information – is not truly a technology until it can be manufactured. To take an initial concept from an academic discovery or a foundational patent to the point at which one has a working, scalable manufacturing process involves a huge amount of further innovation. This process is expensive and risky, and the private sector has often proved unwilling to bear these costs and risks without support from the state, in one form or another. The history of some of the many technologies that are integrated in devices like the iPhone illustrates the complexities of developing technologies to the point of mass manufacture, and shows how the roles of governments and the private sector have been closely intertwined.

For example, the ultraminiaturised hard disk drive that made the original iPod possible (now largely superseded by cheaper, bigger flash memory chips) did indeed, as pointed out by Mazzucato, depend on the Nobel prize-winning discovery by Albert Fert and Peter Grünberg of the phenomenon of giant magnetoresistance. This is a fascinating and elegant piece of physics, which suggested a new way of detecting magnetic fields with great sensitivity. But to take this piece of physics and devise a way of using it in practice to create smaller, higher capacity hard disk drives, as Stuart Parkin’s group at IBM’s Almaden Laboratory did, was arguably just as significant a contribution.

How liquid crystal displays were developed

The story of the liquid crystal display is even more complicated. Continue reading “Did the government build the iPhone? Would the iPhone have happened without governments?”

Does radical innovation best get done by big firms or little ones?

A recent blogpost by the economist Diane Coyle quoted JK Galbraith as saying in 1952: “The modern industry of a few large firms is an excellent instrument for inducing technical change. It is admirably equipped for financing technical development and for putting it into use. The competition of the competitive world, by contrast, almost completely precludes technical development.” Coyle describes this as “complete nonsense”: “big firms tend to do incremental innovation, while radical innovation tends to come from small entrants.” This is certainly conventional wisdom now – but it needs to be challenged.

As a point of historical fact, what Galbraith wrote in 1952 was correct – the great, world-changing innovations of the postwar years were indeed the products, not of lone entrepreneurs, but of the giant R&D departments of big corporations. What is true is that in recent years we’ve seen radical innovations in IT which have arisen from small entrants, of which Google’s search algorithm is the best known example. But we must remember two things. First, digital innovations like these don’t exist in isolation – they only have an impact because they can operate on a technological substrate which isn’t digital, but physical. The fast, small and powerful computers, and the worldwide communications infrastructure, that digital innovations rely on were developed, not in small start-ups, but in large, capital-intensive firms. Second, many of the innovations we urgently need – in areas like affordable low carbon energy, grid-scale energy storage, and healthcare for ageing populations – will not be wholly digital in character. Technologies don’t all proceed at the same pace (as I discussed in an earlier post – Accelerating change or innovation stagnation). In focusing on the digital domain, in which small entrants can indeed achieve radical innovations (as well as some rather trivial ones), we’re in danger of failing to support innovation in the material and biological domains, which needs the long-term, well-resourced development efforts that only big organisations can mobilise. The outcome will be a further slowing of economic growth in the developed world, as innovation slows down and productivity growth stalls.

So what were the innovations that the sluggish big corporations of the post-war world delivered? Jet aircraft, antibiotics, oral contraceptives, transistors, microprocessors, Unix, optical fibre communications and mobile phones are just a few examples. Continue reading “Does radical innovation best get done by big firms or little ones?”

Growth, technological innovation, and the British productivity crisis

The biggest current issue in the UK’s economic situation is the continuing slump in productivity. It’s this poor productivity performance that underlies slow or no real wage growth, and that also contributes to disappointing government revenues and consequently slow progress in reducing the government deficit. Yet the causes of this poor productivity performance are barely discussed, let alone understood. In the long term, productivity growth is associated with innovation and technological progress – have we stopped being able to innovate? The ONS has recently released a set of statistics which potentially throw some light on the issue. These estimates of total factor productivity – productivity controlled for inputs of labour and capital – make clear the seriousness of the problem.

[Figure: total factor (multifactor) productivity relative to 1994, whole economy – ONS estimates]

Here are the figures for the whole economy. They show that, up to 2008, total factor productivity grew steadily at around 1% a year. Then it fell precipitously, losing more than a decade’s worth of growth, and it continues to fall. This means that each year since the financial crisis, on average, we have had to work harder or put in more capital to achieve the same level of economic output. A simple-minded interpretation of this would be that, rather than seeing technological progress reflected in economic growth, we are going backwards – technologically regressing – and the only economic growth we are seeing comes from a larger population working longer hours.
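To make that interpretation concrete, it is worth recalling the growth accounting identity that estimates like these rest on. What follows is only a sketch of the standard Cobb–Douglas decomposition; the ONS’s actual methodology, with its more careful treatment of capital services and labour quality, differs in detail, but the logic is the same.

\begin{align*}
  Y_t &= A_t\,K_t^{\alpha}\,L_t^{1-\alpha}
     && \text{output $Y$ from capital $K$, labour $L$, and TFP $A$}\\
  \Delta\ln Y_t &= \Delta\ln A_t \;+\; \alpha\,\Delta\ln K_t \;+\; (1-\alpha)\,\Delta\ln L_t
     && \text{growth decomposition, with $\alpha$ the capital share of income}
\end{align*}

Total factor productivity growth – the residual term in the second line – is simply what is left of output growth once the measured contributions of capital and labour inputs have been subtracted, so a negative number means we are getting less output from the same combined inputs than we did the year before.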

Of course, things are more complicated than this. Many different sectors contribute to the economy – in some, we see substantial innovation and technological progress, while in others the situation is not so good. It’s the overall shape of the economy, the balance between growing and stagnating sectors, that contributes to the whole picture. The ONS figures do begin to break down total factor productivity growth into different sectors, and this begins to give some real insight into what’s wrong with the UK’s economy and what needs to be done to right it. Before I come to those details, I need to say something more about what’s being estimated here.

Where does sustainable, long term economic growth come from? Continue reading “Growth, technological innovation, and the British productivity crisis”

Science, Politics, and the Haldane Principle

The UK government published a new Science and Innovation Strategy just before Christmas, in circumstances that have led to a certain amount of comment (see, for example, here and here). There’s a lot to be said about this strategy, but here I want to discuss just one aspect – the document’s extended references to the Haldane Principle. This principle is widely believed to define, in UK science policy, a certain separation between politics and science, taking detailed decisions about what science to fund out of the hands of politicians and entrusting them to experts in the Research Councils, at arm’s length from the government. The new strategy reaffirms an adherence to the Haldane Principle, but it does this in a way that will make some people worry that an attempt is being made to redefine it, to allow more direct intervention in science funding decisions by politicians in Whitehall. No-one doubts that the government of the day has not just a right, but a duty, to set strategic directions and priorities for the science it funds. What’s at issue is how to make the best decisions, underpinned by the best evidence, for what are, by definition, the uncertain outcomes of research.

The key point to recognize about the Haldane Principle is that it is – as the historian David Edgerton pointed out – an invented tradition. Continue reading “Science, Politics, and the Haldane Principle”

Lecture on responsible innovation and the irresponsibility of not innovating

Last night I gave a lecture at UCL to launch their new centre for Responsible Research and Innovation. My title was “Can innovation ever be responsible? Is it ever irresponsible not to innovate?”, and in it I attempted to put the current vogue within science policy for the idea of Responsible Research and Innovation into a broader context. If I get a moment I’ll write up the lecture as a (long) blogpost, but in the meantime here is a PDF of my slides.

Rebuilding the UK’s innovation economy

The UK’s innovation system is currently under-performing: the amount of resource devoted to private sector R&D has been too low, compared with our competitors, for many years, and the situation shows no sign of improving. My last post discussed the changes in the UK economy that have led us to this situation, which contributes to the economy’s deep-seated problems of very poor productivity performance and persistent current account deficits. What can we do to improve things? Here I suggest three steps.

1. Stop making things worse.
Firstly, we should recognise the damage that has been done to the country’s innovative capacity by the structural shortcomings of our economy, and stop making things worse. R&D capacity – including private sector R&D – is a national asset, and we should try to correct the perverse incentives that lead to its destruction. Continue reading “Rebuilding the UK’s innovation economy”

Business R&D is the weak link in the UK’s innovation system

What’s wrong with the UK’s innovation system is not that we don’t have a strong science base, or even that there isn’t the will to connect the science base to the companies and entrepreneurs who might want to use its outputs. The problem is that our economy isn’t assigning enough resources to pulling the fruits of the science base through into technological innovations – the innovations that will create new products and services, bring economic growth, and help solve some of the biggest social problems we face. The primary symptom of the problem is the UK’s very poor performance in business-funded research and development (R&D). This is the weak link in the UK’s national innovation system, and it is part of a bigger picture of short-termism and under-investment which underlies the UK economy’s serious long-term problems.

For context, it’s worth highlighting two particular features of the UK economy. The first is its very poor productivity growth: currently, on one measure (annualised 6 year growth in productivity), we are seeing the worst peacetime performance of the last 150 years. Without productivity growth there will be no growth in average living standards, and that is going to lead to an increasingly sour political scene.
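To be precise about the measure: I take “annualised 6 year growth” to mean the usual geometric average over the window, so that with $P_t$ the productivity level in year $t$,

\[
  g \;=\; \left(\frac{P_t}{P_{t-6}}\right)^{1/6} - 1 .
\]

It is this figure, computed over recent six-year windows, that is at its lowest peacetime value in the last 150 years.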

The second is the huge current account deficit, which at 5.4% of GDP is worse than in the crisis years of the mid-1970s. Now, as then, the UK is unable to pay its way in the world. Unlike the 1970s, though, there’s no immediate political crisis, no humiliating appeals to the IMF for a bail-out. This time round, overseas investors are happy to finance this deficit by buying UK assets. But this isn’t cost-free. An influx of overseas capital is what is currently driving a price bubble for domestic and commercial property in London, severely unbalancing the economy and leading to a growing gulf between the capital and the regions. The assets being bought include the nation’s key infrastructure in energy and transport; there will be an inevitable loss of control and sovereignty as more of this infrastructure falls into overseas ownership. Chinese money will be paying for any new generation of nuclear power stations that will be built; that will give the UK very little leverage in insisting that some of that investment is spent to create jobs in the UK, and it will be paid for by what will effectively be a tax on everyone’s electricity bills, guaranteed for 35 years.

These are long-term problems, and so is the decline in business R&D intensity. Over the last thirty years this has fallen from 1.48% of GDP in 1981 to 1.09% now. Continue reading “Business R&D is the weak link in the UK’s innovation system”

Surely there’s more to science than money?

How can we justify spending taxpayers’ money on science when there is so much pressure to cut public spending, and so many other popular things to spend the money on, like the National Health Service? People close to the policy-making process tend to stress that if you want to persuade HM Treasury of the need to fund science, there’s only one argument they will listen to – that science spending will lead to more economic growth. Yet the economic instrumentalism of this argument grates for many people. Surely it must be possible to justify the elevated pursuit of knowledge in less mercenary, less meretricious terms? If our political economy was different, perhaps it would be possible. But in a system in which money is increasingly seen as the measure of all things, it’s difficult to see how things could be otherwise. If you don’t like this situation, it’s not science, but broader society, that you’ve got to change.

The relentless focus on the economic justification of science is relatively recent, but that doesn’t mean that what went before was a golden age. The dominant motivation for state support of science in the twentieth century wasn’t to make money, but to win wars. Continue reading “Surely there’s more to science than money?”

Spin-outs and venture capital won’t fill the pharma R&D gap

Now that Pfizer has, for the moment, been rebuffed in its attempt to take over AstraZeneca, it’s worth reflecting on the broader issues this story raised about the pharmaceutical industry in particular and technological innovation more generally. The political attention focused on the question of industrial R&D capacity was very welcome; this was the subject of my last post – Why R&D matters. Less has been said about the broader problems of innovation in the pharmaceutical industry, which I discussed in an earlier post – Decelerating change in the pharmaceutical industry. One of the responses I had to my last post argued that we shouldn’t worry about declining R&D in the pharmaceutical industry, because that represented an old model of innovation that was being rapidly superseded. In the new world, nimble start-ups, funded by far-seeing venture capitalists, are able to translate the latest results from academic life sciences into new clinical treatments in a much more cost-effective way than the old industry behemoths. It’s an appealing prospect that fits in with much currently fashionable thinking about innovation, and one can certainly find a few stories about companies founded that way that have brought useful treatments to market. The trouble is, though, if we look at the big picture, there is no evidence at all that this new approach is working.

A recent article by Matthew Herper in Forbes – The Cost Of Creating A New Drug Now $5 Billion, Pushing Big Pharma To Change – sets out pharma’s problems very starkly. Continue reading “Spin-outs and venture capital won’t fill the pharma R&D gap”