At the end of last year, the Nuffield Council on Bioethics published a report on the ethics of emerging biotechnologies, called Emerging Biotechnologies: technology, choice and the public good. I was on the working party for that report, and this piece reflects a personal view of some of its findings. A shorter version was published in Research Fortnight (subscription required).
In a speech at the Royal Society last November George Osborne said that, as Chancellor of the Exchequer, it is his job “to focus on the economic benefits of scientific excellence”. He then listed eight key technologies in which he challenged the scientific community in Britain to lead the world, and for which he promised continuing financial support. Among these technologies were synthetic biology, regenerative medicine and agri-science, key examples of what a recent report from the Nuffield Council on Bioethics calls emerging biotechnologies. Picking technology winners is clearly high on the UK science policy agenda, and this kind of list will increasingly inform the science funding choices the government and its agencies, like the research councils, make. So the focus of the Nuffield Council’s report, on how those choices are made and what kind of ethics should guide them, couldn’t be more timely.
These emerging technologies are not short of promises. According to Osborne, synthetic biology will have an £11 billion market by 2016 producing new medicines, biofuels and food – “they say that synthetic biology will heal us, heat and feed us.” Regenerative medicine is “set to transform current clinical approaches to replacing or regenerating damaged human organs or tissue,” while through new agri-science “we can design better seeds and more productive farm animals”; a doubling of wheat yield in the UK would generate £1.5 billion at the farm gate.
But how do we choose these problems as the ones for which we most need technological solutions? How do we know that these particular emerging technologies offer the best way of delivering these goals while minimizing the risk of unintended and undesirable consequences? And given that, by definition, these technologies are still immature, what’s the best way of ensuring that they really can make the transition from the laboratory into the real world, to yield those promised benefits and fulfil those heady financial predictions?
In a world of limited resources, choosing one approach implicitly means not choosing another, and other potential technological solutions go unexplored. We may also neglect potential solutions whose innovations are social rather than technical in nature. For example, food security could be improved by increases in wheat yield, but in the light of estimates that 30-50% of all food produced is wasted (according to this recent report from the IMechE), one should look at ways the food distribution system could be changed too. Technology choices need to be underpinned by evidence. This evidence will usually be multidisciplinary in character, social factors are likely to be important, and public engagement may often have an important role in making sure that innovations support widely shared societal goals.
Where do the promises come from, and how much can we rely on them? Some in the scientific community resent the idea that the government would attempt to direct research at all. But the idea of synthetic biology, for example, wasn’t hatched in Whitehall – it is from the research community itself that these ideas emerge and in which the promises are generated. Some would argue that the pressures on the scientific enterprise to deliver economic and other benefits themselves create perverse incentives for researchers to make over-optimistic promises about the potential impacts of their research. I believe this arises from a misinterpretation of the “impact agenda”, as it is understood by the research councils, but there are real dangers, as I discussed in an earlier piece about The Economy of Promises. Researchers may not be well placed to appreciate the broader societal dimensions of the problems that their technologies might be in a position to solve – here, moves to incorporate emerging ideas about “responsible innovation” into research council practice are very welcome. And researchers – particularly in academia – are often not well placed to understand the difficulties of putting new technologies into practice and commercializing them.
It’s a cliché that the UK is excellent at doing fundamental science, but not so good at commercializing it. This has never actually been true, and it’s certainly not true now – universities have never been more active in partnerships with the private sector and in making the most of the intellectual property their researchers produce. But what is true is that the economic environment for bringing potentially valuable innovations to market seems to be particularly difficult at the moment.
Reading about emerging biotechnologies in the mainstream press, it’s the breakthroughs and marvels that catch the eye. But in the financial press the story is as much the apparent failure of anyone to be able to make any money from these marvels. The travails of the big pharmaceutical companies are well known, as the patents on their most lucrative products expire and the cost and difficulty of developing new drugs escalates, to more than $1 billion per new medicine, according to one recent estimate. The response of the drug companies has been increasingly to rely on innovation in small biotech companies, often spin-outs from universities. But the environment for such spin-outs hasn’t been easy, either; for more than 10 years the venture capital industry, taken overall, has taken in more money from investors than it has paid out (see for example this FT article: Funding woes threaten next tech revolution (£)). Meanwhile financially hard-pressed health systems like the NHS increasingly balk at the very high prices pharmaceutical companies demand for new treatments.
For new biotechnologies outside the biomedical arena, there’s an easier regulatory environment. But in many cases these technologies aren’t producing something completely new, they are offering another way – perhaps a more sustainable way – of making a product that already exists. For these technologies, the challenge of displacing the incumbent can be too great, particularly if the price of the existing technology doesn’t fully reflect its total costs to society. For example, it is reported that one of the pioneers of commercial synthetic biology, Amyris, is scaling back its ambitions to produce biodiesel, because, with a production cost of $29 a liter, it has no chance of competing with its fossil fuel competitor.
One company that does look like it may be successful in commercializing a genuinely innovative bionanotechnology product is Oxford Nanopore. Its technique for sequencing single molecules of DNA, if it can deliver on its promises, will offer greater speed at lower cost in a market that has already been developed by others, with relatively low regulatory barriers and secure protection of its intellectual property for long enough to make a good return. These conditions for success don’t apply over much of the pharmaceutical, biomedical or agricultural biotechnology sectors.
Our system isn’t set up to reward innovation and the development of those emerging biotechnologies that would bring widespread public benefit, and this needs to be changed. New biotechnologies which promise environmental benefits won’t be able to compete with unsustainable incumbent technologies unless their environmental impact is correctly priced. For new therapies, we may need to separate the reward for innovation from the price paid for products in order to get the innovation people want.
Technology choice is an ethical issue. When we make the wrong choices – whether consciously, or through not understanding the institutional pressures that make those choices by default – we not only suffer from the downsides of inappropriate technologies, but we don’t get the benefit of the better technologies we didn’t choose. But choosing between technologies is inevitable, so we need to organise our research policies and our broader innovation systems to make sure these new technologies can and do fulfil their promises.
A very thought-provoking read!
Excellent piece.
Referring back to your earlier point, I am also interested in how the decision-making process that chooses one solution over another works, and for what purpose. So where a systemic or non-technical solution is promising, or complementary, but doesn’t come with the kudos, market £££, jobs or excitement, or is a bit mundane or unappealing – then what levers are there?
It would be interesting to see better ‘benefit assessment’, where solutions are considered in the round and prioritised, and where open and honest discussions of options involve more of us than they do at the moment.
I am going to make some provocative comments…
Following your last two posts, it appears that the Capitalist System has hit a Fundamental Brick Wall in dealing with Complexity.
I call it the Suckers’ Paradox.
Basically, New Technologies require buyers who are willing to pay exorbitant prices until the hoped-for Increasing Returns to Scale occur.
In the past, it was Governments who did this, via either the Military or Healthcare systems. However, due to admittedly Stupid Conservative / Liberal False Economies these two sectors are being squeezed, and it appears the Politicians have not got a clue!
So my question to Richard is this: how do we get out of this mess?!
Thanks for your wonderful Blog highlighting THE urgent issues of our Time.
Zelah