Science, Politics, and the Haldane Principle

The UK government published a new Science and Innovation Strategy just before Christmas, in circumstances that have led to a certain amount of comment (see, for example, here and here). There’s a lot to be said about this strategy, but here I want to discuss just one aspect – the document’s extended references to the Haldane Principle. This principle is widely believed to define, in UK science policy, a certain separation between politics and science, taking detailed decisions about what science to fund out of the hands of politicians and entrusting them to experts in the Research Councils, at arm’s length from the government. The new strategy reaffirms an adherence to the Haldane Principle, but it does this in a way that will make some people worry that an attempt is being made to redefine it, to allow more direct intervention in science funding decisions by politicians in Whitehall. No-one doubts that the government of the day has not just a right, but a duty, to set strategic directions and priorities for the science the government funds. What’s at issue is how to make the best decisions, underpinned by the best evidence, for what by definition are the uncertain outcomes of research.

The key point to recognize about the Haldane Principle is that it is – as the historian David Edgerton pointed out – an invented tradition. Continue reading “Science, Politics, and the Haldane Principle”

Responsible innovation and irresponsible stagnation

This long blogpost is based on a lecture I gave at UCL a couple of weeks ago, for which you can download the overheads here. It’s a bit of a rough cut but I wanted to write it down while it was fresh in my mind.

People talk about innovation now in two contradictory ways. The prevailing view is that innovation is accelerating. In everyday life, the speed with which our electronic gadgets become outdated seems to provide supporting evidence for this view, which, taken to the extreme, leads to the claim of Kurzweil and his followers that we are approaching a technological singularity. Rapid technological change always brings losers, as well as unanticipated and unwelcome consequences. The question then is whether it is possible to innovate in a way that minimises these downsides, in a way that’s responsible. But there’s another narrative about innovation that’s gaining traction, prompted by the dismally poor economic growth performance of the developed economies since the 2008 financial crisis. In this view – perhaps most cogently expressed by the economist Tyler Cowen – slow economic growth reflects a slow-down in technological innovation: a Great Stagnation. A slow-down in the rate of technological change may reassure conservatives worried about the downsides of rapid innovation. But we need technological innovation to help us overcome our many problems, many of them caused in the first place by the unforeseen consequences of earlier waves of innovation. So our failure to innovate may itself be irresponsible.

What irresponsible innovation looks like

What could we mean by irresponsible innovation? We all have our abiding cultural image of a mad scientist in a dungeon laboratory, recklessly pursuing some demonic experiment with a world-consuming outcome. In nanotechnology, the idea of grey goo undoubtedly plays into this archetype. What if a scientist were to succeed in making self-replicating nanobots, which, on escaping the confines of the laboratory, proceeded to consume the entire substance of the earth’s biosphere as they reproduced, ending human and all other life on earth for ever? I think we can all agree that this outcome would be not wholly desirable, and that its perpetrators might fairly be accused of irresponsibility. But we should also ask ourselves how likely such a scenario is. I think it is very unlikely in the coming decades, which leaves me with questions about whose purposes are served by this kind of existential risk discourse.

We should worry about the more immediate implications of genetic modification and synthetic biology, for example in their potential to make existing pathogens more dangerous, to recreate historical pathogenic strains, or even to create entirely new ones. Continue reading “Responsible innovation and irresponsible stagnation”

Lecture on responsible innovation and the irresponsibility of not innovating

Last night I gave a lecture at UCL to launch their new centre for Responsible Research and Innovation. My title was “Can innovation ever be responsible? Is it ever irresponsible not to innovate?”, and in it I attempted to put the current vogue within science policy for the idea of Responsible Research and Innovation within a broader context. If I get a moment I’ll write up the lecture as a (long) blogpost but in the meantime, here is a PDF of my slides.

Your mind will not be uploaded

The recent movie “Transcendence” will not be troubling the sci-fi canon of classics, if the reviews are anything to go by. But its central plot device – “uploading” a human consciousness to a computer – remains both a central aspiration of transhumanists and a source of queasy fascination to the rest of us. The idea is that someone’s mind is simply a computer programme, one that could in the future be run on a much more powerful computer than a brain, just as one might run an old arcade game on a modern PC in emulation mode. “Mind uploading” has a clear appeal for people who wish to escape the constraints of our flesh and blood existence, notably the constraint of our inevitable mortality.

In this post I want to consider two questions about mind uploading, from my perspective as a scientist. I’m going to use as an operational definition of “uploading a mind” the requirement that we can carry out a computer simulation of the activity of the brain in question that is indistinguishable in its outputs from the brain itself. For this, we would need to be able to determine the state of an individual’s brain to sufficient accuracy that it would be possible to run a simulation that accurately predicted the future behaviour of that individual and would convince an external observer that it faithfully captured the individual’s identity. I’m entirely aware that this operational definition already glosses over some deep conceptual questions, but it’s a good concrete starting point. My first question is whether it will be possible to upload the mind of anyone reading this now. My answer to this is no, with a high degree of probability, given what we know now about how the brain works, what we can do now technologically, and what technological advances are likely in our lifetimes. My second question is whether it will ever be possible to upload a mind, or whether there is some point of principle that will always make this impossible. I’m obviously much less certain about this, but I remain sceptical.

This will be a long post, going into some technical detail. To summarise my argument, I start by asking whether or when it will be possible to map out the “wiring diagram” of an individual’s brain – the map of all the connections between its 100 billion or so neurons. We’ll probably be able to achieve this mapping in the coming decades, but only for a dead and sectioned brain; the challenges of mapping a living brain at sub-micron scales look very hard. Then we’ll ask some fundamental questions about what it means to simulate a brain. Simulating brains at the level of neurons and synapses requires the input of phenomenological equations, whose parameters vary across the components of the brain, change with time, and are inaccessible to in-vivo experiment. Unlike artificial computers, there is no clean digital abstraction layer in the brain; given the biological history of nervous systems as evolved, rather than designed, systems, there’s no reason to expect one. The fundamental unit of biological information processing is the molecule, rather than any higher-level structure like a neuron or a synapse; molecular-level information processing evolved very early in the history of life. Living organisms sense their environment; they react to what they sense by changing the way they behave and, if they are able to, by changing the environment too. This kind of information processing, unsurprisingly, remains central to all organisms, humans included, and this means that a true simulation of the brain would need to be carried out at the molecular scale, rather than the cellular scale. The scale of the necessary simulation is out of reach of any currently foreseeable advance in computing power. Finally, I conclude with some much more speculative thoughts about the central role of randomness in biological information processing. I’ll ask where this randomness comes from, finding an ultimate origin in quantum mechanical fluctuations, and speculate about what in-principle implications that might have for the simulation of consciousness.
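To make “phenomenological equations” and the question of scale a little more concrete, here is a minimal sketch in Python. It is emphatically not the author’s model: the leaky integrate-and-fire neuron is just one of the simplest phenomenological descriptions in use, and every number in it (membrane parameters, synapse counts, per-synapse cost, molecules per synapse) is an illustrative assumption, there only to show how the arithmetic scales.

```python
# A minimal sketch, not the author's model: a leaky integrate-and-fire (LIF)
# neuron, one of the simplest "phenomenological equations" used to simulate
# neurons, plus a back-of-envelope scaling estimate. Every numerical value
# here is an illustrative assumption, not measured data.

def simulate_lif(n_steps=1000, dt=1e-4, tau=0.02, v_rest=-0.065,
                 v_thresh=-0.050, v_reset=-0.065, r_m=1e8, i_in=2e-10):
    """Euler integration of tau * dV/dt = -(V - v_rest) + R_m * I.

    Even this crude model needs several parameters per neuron (time constant,
    threshold, resistance, input current), and in a real brain those
    parameters differ from cell to cell and drift over time.
    """
    v = v_rest
    spike_times = []
    for step in range(n_steps):
        dv = (-(v - v_rest) + r_m * i_in) / tau
        v += dv * dt
        if v >= v_thresh:              # threshold crossed: record a spike, reset
            spike_times.append(step * dt)
            v = v_reset
    return spike_times

# Order-of-magnitude cost estimates (all figures rough and assumed):
NEURONS = 1e11                 # ~100 billion neurons, as in the text
SYNAPSES_PER_NEURON = 1e4      # commonly quoted order of magnitude
FLOPS_PER_SYNAPSE_PER_MS = 10  # assumed cost of one phenomenological update

synapse_level_flops = NEURONS * SYNAPSES_PER_NEURON * FLOPS_PER_SYNAPSE_PER_MS * 1e3

# If the relevant unit is the molecule rather than the synapse, multiply by an
# (assumed) number of state-carrying molecules per synapse:
MOLECULES_PER_SYNAPSE = 1e5    # purely illustrative
molecule_level_flops = synapse_level_flops * MOLECULES_PER_SYNAPSE

if __name__ == "__main__":
    print(f"spikes in 0.1 s of simulated time: {len(simulate_lif())}")
    print(f"synapse-level estimate:  {synapse_level_flops:.1e} FLOP/s")
    print(f"molecule-level estimate: {molecule_level_flops:.1e} FLOP/s")
```

Even if you grant the synapse-level figure to some foreseeable supercomputer, the point of the final multiplication is that a molecular-scale description adds several further orders of magnitude on top of it – which is the nub of the scaling argument above.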

Why would people think mind uploading will be possible in our lifetimes, given the scientific implausibility of this suggestion? I ascribe this to a combination of over-literal interpretation of some prevalent metaphors about the brain, over-optimistic projections of the speed of technological advance, a lack of clear thinking about the difference between evolved and designed systems, and above all wishful thinking arising from people’s obvious aversion to death and oblivion.

On science and metaphors

I need to make a couple of preliminary comments to begin with. First, while I’m sure there’s a great deal more biology to learn about how the brain works, I don’t see yet that there’s any cause to suppose we need fundamentally new physics to understand it. Continue reading “Your mind will not be uploaded”

Transhumanism has never been modern

Transhumanists are surely futurists, if they are nothing else. Excited by the latest developments in nanotechnology, robotics and computer science, they fearlessly look ahead, projecting consequences from technology that are more transformative, more far-reaching, than the pedestrian imaginations of the mainstream. And yet their ideas, their motivations, do not come from nowhere. They have deep roots, perhaps surprising roots, and following those intellectual trails can give us some important insights into the nature of transhumanism now. From antecedents in the views of the early 20th century British scientific left, and in the early Russian ideologues of space exploration, we’re led back, not to rationalism, but to a particular strand of religious apocalyptic thinking that’s been a persistent feature of Western thought since the Middle Ages.

Transhumanism is an ideology, a movement, or a belief system, which predicts and looks forward to a future in which an increasing integration of technology with human beings leads to a qualitative, and positive, change in human nature. Continue reading “Transhumanism has never been modern”

Rebuilding the UK’s innovation economy

The UK’s innovation system is currently under-performing; the amount of resource devoted to private sector R&D has been too low, compared to our competitors, for many years, and the situation shows no sign of improving. My last post discussed the changes in the UK economy that have led us to this situation, which contributes to the economy’s deep-seated problems of very poor productivity performance and persistent current account deficits. What can we do to improve things? Here I suggest three steps.

1. Stop making things worse.
Firstly, we should recognise the damage that has been done to the country’s innovative capacity by the structural shortcomings of our economy, and stop making things worse. R&D capacity – including private sector R&D – is a national asset, and we should try to correct the perverse incentives that lead to its destruction. Continue reading “Rebuilding the UK’s innovation economy”

Business R&D is the weak link in the UK’s innovation system

What’s wrong with the UK’s innovation system is not that we don’t have a strong science base, or even that there isn’t the will to connect the science base to the companies and entrepreneurs who might want to use its outputs. The problem is that our economy isn’t assigning enough resource to pulling through the fruits of the science base into technological innovations, the innovations that will create new products and services, bring economic growth, and help solve some of the biggest social problems we face. The primary symptom of the problem is the UK’s very poor performance in business-funded research and development (R&D). This is the weak link in the UK’s national innovation system, and it is part of a bigger picture of short-termism and under-investment which underlie the UK economy’s serious long-term problems.

For context, it’s worth highlighting two particular features of the UK economy. The first is its very poor productivity growth: currently, on one measure (annualised six-year growth in productivity), we’re seeing the worst peacetime performance of the last 150 years. Without productivity growth, there will be no growth in average living standards, and that’s going to lead to an increasingly sour political scene.
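For readers who want to pin that measure down: on my reading of “annualised six-year growth” (a plausible interpretation, not an official statistical definition), it is the constant yearly growth rate that would carry the productivity index from its level six years ago to its level today, as in the sketch below.

```python
# A rough sketch of what an "annualised six-year growth" measure presumably
# means (my reading, not an official definition): the constant yearly growth
# rate that would take productivity from its level six years ago to today's.

def annualised_growth(p_now: float, p_then: float, years: int = 6) -> float:
    """Equivalent constant annual growth rate over `years` years."""
    return (p_now / p_then) ** (1 / years) - 1

# Illustrative numbers only: an index that has barely moved, 100 -> 101 over
# six years, annualises to well under 0.2% a year.
print(f"{annualised_growth(101.0, 100.0):.2%}")
```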

The second is the huge current account deficit, which at 5.4% of GDP is worse than in the crisis years of the mid-1970s. Now, as then, the UK is unable to pay its way in the world. Unlike the 1970s, though, there’s no immediate political crisis, no humiliating appeals to the IMF for a bail-out. This time round, overseas investors are happy to finance this deficit by buying UK assets. But this isn’t cost-free. An influx of overseas capital is currently driving a price bubble for domestic and commercial property in London, severely unbalancing the economy and leading to a growing gulf between the capital and the regions. The assets being bought include the nation’s key infrastructure in energy and transport; there will be an inevitable loss of control and sovereignty as more of this infrastructure falls into overseas ownership. Chinese money will pay for any new generation of nuclear power stations that is built; that will give the UK very little leverage in insisting that some of that investment is spent to create jobs in the UK, and it will be paid for by what will effectively be a tax on everyone’s electricity bills, guaranteed for 35 years.

These are long-term problems, and so is the decline in business R&D intensity. The last thirty years have seen it drop from 1.48% of GDP in 1981 to 1.09% now. Continue reading “Business R&D is the weak link in the UK’s innovation system”

Surely there’s more to science than money?

How can we justify spending taxpayers’ money on science when there is so much pressure to cut public spending, and so many other popular things to spend the money on, like the National Health Service? People close to the policy-making process tend to stress that if you want to persuade HM Treasury of the need to fund science, there’s only one argument they will listen to – that science spending will lead to more economic growth. Yet the economic instrumentalism of this argument grates for many people. Surely it must be possible to justify the elevated pursuit of knowledge in less mercenary, less meretricious terms? If our political economy was different, perhaps it would be possible. But in a system in which money is increasingly seen as the measure of all things, it’s difficult to see how things could be otherwise. If you don’t like this situation, it’s not science, but broader society, that you’ve got to change.

The relentless focus on the economic justification of science is relatively recent, but that doesn’t mean that what went before was a golden age. The dominant motivation for state support of science in the twentieth century wasn’t to make money, but to win wars. Continue reading “Surely there’s more to science than money?”

Spin-outs and venture capital won’t fill the pharma R&D gap

Now that Pfizer has, for the moment, been rebuffed in its attempt to take over AstraZeneca, it’s worth reflecting on the broader issues this story raised about the pharmaceutical industry in particular and technological innovation more generally. The political attention focused on the question of industrial R&D capacity was very welcome; this was the subject of my last post – Why R&D matters. Less has been said about the broader problems of innovation in the pharmaceutical industry, which I discussed in an earlier post – Decelerating change in the pharmaceutical industry. One of the responses I had to my last post argued that we shouldn’t worry about declining R&D in the pharmaceutical industry, because that represented an old model of innovation that was being rapidly superseded. In the new world, nimble start-ups, funded by far-seeing venture capitalists, are able to translate the latest results from academic life sciences into new clinical treatments in a much more cost-effective way than the old industry behemoths. It’s an appealing prospect that fits in with much currently fashionable thinking about innovation, and one can certainly find a few stories about companies founded that way that have brought useful treatments to market. The trouble is, though, if we look at the big picture, there is no evidence at all that this new approach is working.

A recent article by Matthew Herper in Forbes – The Cost Of Creating A New Drug Now $5 Billion, Pushing Big Pharma To Change – sets out pharma’s problems very starkly. Continue reading “Spin-outs and venture capital won’t fill the pharma R&D gap”