This long blogpost is based on a lecture I gave at UCL a couple of weeks ago, for which you can download the overheads here. It’s a bit of a rough cut but I wanted to write it down while it was fresh in my mind.
People talk about innovation now in two contradictory ways. The prevailing view is that innovation is accelerating. In everyday life, the speed with which our electronic gadgets become outdated seems to provide supporting evidence for this view, which, taken to the extreme, leads to the position of Kurzweil and his followers that we are approaching a technological singularity. Rapid technological change always brings losers, as well as unanticipated and unwelcome consequences. The question then is whether it is possible to innovate in a way that minimises these downsides – in a way that’s responsible. But there’s another narrative about innovation that’s gaining traction, prompted by the dismally poor economic growth performance of the developed economies since the 2008 financial crisis. In this view – perhaps most cogently expressed by the economist Tyler Cowen – slow economic growth reflects a slow-down in technological innovation: a Great Stagnation. A slow-down in the rate of technological change may reassure conservatives worried about the downsides of rapid innovation. But we need technological innovation to help us overcome our many problems, many of them caused in the first place by the unforeseen consequences of earlier waves of innovation. So our failure to innovate may itself be irresponsible.
What irresponsible innovation looks like
What could we mean by irresponsible innovation? We all have our abiding cultural image of a mad scientist in a dungeon laboratory, recklessly pursuing some demonic experiment with a world-consuming outcome. In nanotechnology, the idea of grey goo undoubtedly plays into this archetype. What if a scientist were to succeed in making self-replicating nanobots, which on escaping the confines of the laboratory proceeded to consume the entire substance of the earth’s biosphere as they reproduced, ending human and all other life on earth for ever? I think we can all agree that this outcome would be not wholly desirable, and that its perpetrators might fairly be accused of irresponsibility. But we should also ask ourselves how likely such a scenario is. I think it is very unlikely in the coming decades, which leaves me with questions about whose purposes are served by this kind of existential risk discourse.
We should worry about the more immediate implications of genetic modification and synthetic biology, for example in their potential to make existing pathogens more dangerous, to recreate historical pathogenic strains, or even to create entirely new ones. So-called “gain-of-function” research has just been suspended in the USA, but even here one can argue that there are some legitimate grounds for research in this area. What no-one will dispute, though, is that if this kind of research needs to be carried out at all, it must be with the most stringent safeguards in place.
Most areas of innovation, though, are much more debatable. Geo-engineering remains in the future for the moment – perhaps we should get the research done, in case we need it. But maybe doing the research will lead us into moral hazard, relieving the pressure on us to decarbonise our economies and leaving us in a worse mess if the geo-engineering, with all its uncertainties, doesn’t work out. Little wonder that geo-engineering research has already been exposed to serious scrutiny as a case study for responsible innovation. Meanwhile hydraulic fracturing for winning shale gas and oil is a new technology already being deployed at scale, with dramatic effects on the world’s energy economy. Is this a welcome development, bringing cheaper energy with a lower carbon intensity than coal, perhaps as a bridge to a lower carbon economy? Or is it locking us into fossil fuel dependency, with the unwelcome side effects of local environmental degradation?
For a final example, consider the British on-line payday loan company Wonga. Wonga has been lauded for its technological innovation, with an entirely on-line, 24-hour-a-day service and automated credit-checking methods aggregating multiple personal data sources. Yet recent headlines have been more negative, with the company agreeing with the UK regulator to write off £220 million of loans made to people who couldn’t afford them. Wonga’s innovation took place, of course, entirely in the private sector, and was arguably as much social in character as technical. But it depended on substantial technical underpinnings – in hardware and software – that were already in place, the result of much research and development in both private and public sectors. This particular application of these technologies was probably not foreseen by their originators (though perhaps it should have been, given the previous propensity of pornography, gambling and loan sharking to be the first sectors to take advantage of new technologies). The control came after the event, by regulation, rather than being anticipated. But this is a sector in which “disruption” is considered a virtue.
Not enough: why a failure to innovate is irresponsible too
Since technological innovation has so often led to unanticipated downsides, perhaps it is more responsible not to innovate at all, and instead to work to distribute the fruits of the technology we have more fairly? This is the argument made, in different ways, both by deep greens and romantic conservatives, and articulated very cogently by Bill McKibben in his book “Enough: staying human in an engineered age”. Perhaps the new technologies coming along now – like genetic modification and nanotechnology – are simply too powerful, have too much potential to cause irreversible damage, and should be abjured.
I think this view is too optimistic. The predicament we are in is this – we depend existentially on technology, but we cannot continue to depend on the technology we have. So we must develop new technologies to replace the unsustainable technologies we have now. I don’t say this in the spirit of optimism that thinks that there’s no need to worry about the state we’re in, because new technologies will surely come along to save us, but with a sense of urgency that recognises that the development of the right new technologies is far from inevitable, and will need a serious change of course.
If a single example were needed of the way in which we depend on technology for our very existence, there would be none better than the Haber-Bosch process. A century ago, in wartime Germany, Fritz Haber invented the chemistry for fixing nitrogen – for use in fertilisers, or explosives – and Karl Bosch implemented this as an industrial process. Haber-Bosch fixed nitrogen, made possible by substantial energy inputs from fossil fuels, has been the basis for a transformation in farming. Between 1900 and 1990, on Vaclav Smil’s estimates, there was a 30% increase in the area of cultivated land, but a more than eighty-fold increase in energy inputs to farming, in the form of artificial fertilisers and mechanised farming implements. This resulted in increases in yield per hectare in some cases approaching a factor of ten. It isn’t much of an exaggeration to say that we eat oil; a typical tonne of English winter wheat embodies about 20 kg of fixed nitrogen in artificial fertiliser, which itself embodies about 13 kg of oil. Without fossil-fuel-enabled Haber-Bosch fixed nitrogen, again on Smil’s estimate, more than half the world’s population would starve.
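To make the “we eat oil” arithmetic concrete, here is a minimal back-of-the-envelope sketch in Python. The per-tonne figures are the ones quoted above; the harvest size is a hypothetical round number chosen purely for illustration, not a statistic.

```python
# Back-of-the-envelope sketch of the "we eat oil" arithmetic above.
# The per-tonne figures come from the text; the total harvest size is a
# hypothetical round number chosen purely for illustration.

nitrogen_per_tonne_wheat_kg = 20.0  # fixed N in fertiliser per tonne of wheat (from the text)
oil_per_tonne_wheat_kg = 13.0       # oil embodied in that fertiliser (from the text)

hypothetical_harvest_tonnes = 15_000_000  # illustrative national-scale wheat harvest

embodied_nitrogen_tonnes = hypothetical_harvest_tonnes * nitrogen_per_tonne_wheat_kg / 1000.0
embodied_oil_tonnes = hypothetical_harvest_tonnes * oil_per_tonne_wheat_kg / 1000.0

print(f"Fixed nitrogen embodied in the harvest: {embodied_nitrogen_tonnes:,.0f} tonnes")
print(f"Oil embodied in the harvest:            {embodied_oil_tonnes:,.0f} tonnes")
```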
What has been the effect of the two centuries of economic and population growth that were enabled by fossil fuels? As is well known, atmospheric carbon dioxide levels have increased from pre-industrial values around 275 parts per million to more than 400, and are continuing to rise, and as a result the earth’s surface is getting hotter. It’s important to realise how much we still depend on fossil fuels – across the world, they account for about 86% of energy inputs. In the decades to come, it is close to a certainty that fossil fuel consumption will continue to increase, driven by the rising energy demands of large, fast-industrialising countries like China and India. So carbon dioxide emissions will increase too; for all the uncertainties of climate modelling, rising temperatures through the century seem inevitable. Just to stabilise temperatures after 2050 will require changes that are both social and technological. We need better, cheaper low carbon sources of energy, implemented at scales orders of magnitude bigger than currently seems likely in the near future.
Responsible research and innovation
We can discuss what irresponsible innovation looks like, but not to innovate is irresponsible too. So what would responsible innovation look like? The question, then, is whether we can steer the development of science and technology in a way that meets widely shared societal goals. This, of course, is a very old idea, but every generation needs to re-examine it in the new context in which it finds itself, including both the broader political context and the specific ways in which science, technology and innovation are developed. For example, the arguments from the 1930s between J.D. Bernal, with his views on the social purposes of science, and his opponents repay revisiting in the context of the wider arguments of the time about the merits of central planning. To be personal, as a student in the early 1980s I was exposed to the tail-end of the radical science movement. Some of the issues in dispute then remain current – the effects of automation, environmental degradation, the possibility, embodied in the Lucas Plan, of repurposing innovation for socially valuable ends. But rather than worrying about climate change, we lived under the ominous shadow of a nuclear war that felt horribly imminent.
Now, responsible innovation is a term of art in science policy. Richard Owen, Jack Stilgoe and Phil Macnaghten, writing for the UK research council EPSRC, define it as “a commitment to care for the future through collective stewardship of science and innovation in the present”, while Rene von Schomberg, in the context of the EU’s Framework program, writes that “Responsible Research and Innovation is a transparent, interactive process by which societal actors and innovators become mutually responsive to each other with a view to the (ethical) acceptability, sustainability and societal desirability of the innovation process and its marketable products (in order to allow a proper embedding of scientific and technological advances in our society).” Von Schomberg also helpfully identifies four signatures of irresponsible innovation. Unreflective technology push marked, for example, the story of the failed introduction of agricultural biotechnology in Europe, while it was a neglect of fundamental ethical principles that marred the introduction of e-patient records in the Netherlands. Policy pull – the need to be seen to be doing something, anything, to address a perceived policy need – underlies the phenomenon of “security theatre” in airports round the world, while it is a lack of precautionary measures and technology foresight that has led to the tragedy of asbestos.
The role of public engagement
The focus on aligning technological development with widely shared societal goals raises the question of how we know that our societal goals are indeed widely shared among publics more broadly. One answer would be to rely on the standard forms of representative democracy, and assume that elected politicians are in a position to steer innovation wisely and in a way that commands wide public confidence. Given the current public mood in many countries, it isn’t clear that this assumption is entirely satisfied. A second, perhaps more compelling, suggestion is that the mechanisms of the market are sufficient for this purpose; I will discuss this in more detail later. A final possibility is the straightforward one: that we accomplish this by the direct involvement of people in deliberative processes of public engagement.
There have been changing views in the UK about the way scientists should engage with the public. A “public understanding of science” movement, given momentum by a 1985 report, was subsequently criticised by social scientists, notably Lancaster’s Brian Wynne, as implicitly embodying a “deficit model” of the public – one which assumed that the task was to remedy the public’s deficient understanding of science, and that once this knowledge deficit was corrected, the public would eagerly embrace new technologies. In the view of these critics, engagement between scientists and the public needed to go both ways, and should also result in reflection and questioning by scientists of their previously unarticulated assumptions and values. When nanotechnology rose to public prominence as a potentially transformative and controversial new technology, around 2003, the response of the scientific community was informed by this new consensus in favour of two-way, “upstream” engagement. A report commissioned by the government from the Royal Society and Royal Academy of Engineering recommended that “a constructive and proactive debate about the future of nanotechnologies should be undertaken now – at a stage when it can inform key decisions about their development and before deeply entrenched or polarised positions appear.”
Of course, there were different views about what problem public engagement was trying to solve here. For many enthusiasts for the new technology, the fear was that there would be a backlash against the technology (or more properly, an anticipatory backlash, since the technology in its full form had not yet materialised) akin to the one that dogged the introduction of genetic modification for agricultural biotechnology. The background for this was a public discourse that featured, on the one hand, the far-fetched but evocative spectre of “grey goo” and the destruction of the ecosphere, and on the other, the linkage of not unfounded fears about the potential toxicity of nanoparticles with the tragedies of asbestos. For some science policy actors, public engagement seemed to offer a contribution to the difficult problem of how to make sounder decisions about highly interdisciplinary science in the context of societal needs. Idealists, on the other hand, welcomed public engagement as a way of reasserting wider considerations of the public value of science in the face of the perception of its increasing marketisation.
Against this background, between 2005 and 2008 there was a flurry of public engagement activity around nanotechnology, funded and run variously by NGOs, government and research councils. This had positive outcomes – it influenced funding decisions, and I would argue that it led to some better ones. It may also have led to a richer public dialogue – certainly the public discourse lacked the acrimony and hardened positions that characterised the GM debate. It also helped develop a cadre of reflective and socially engaged nanoscientists.
Why responsible innovation is difficult
Fundamentally, responsible innovation is difficult because we don’t know how the future will turn out. Can we be responsible in the way we think about the future? One group of people who would answer no to this are technological determinists, who believe that the future is essentially pre-ordained, and that technology essentially has its own logic – as succinctly expressed in the title of Kevin Kelly’s book, “What Technology Wants”. Followers of Friedrich von Hayek would also answer in the negative, on the basis of their views on the radical unknowability of the future. Historically, believers in our power collectively to rationally plan for the outcomes we desire, such as J.D. Bernal, would have answered positively. Supporters of responsible innovation believe that, even if we don’t know how the future will turn out, we can reflexively adjust the process of innovation as it happens through a process of “anticipation, reflection and inclusive deliberation”.
Perhaps the most pointed expression of the difficulties of steering technology is known, after its originator, as Collingridge’s control dilemma. When a technology is young enough for you to influence its future trajectory, you can’t know where it will lead. But by the time a technology is mature enough for you to have a good idea of its consequences, it’s too late to change it – it’s locked in. It’s easy now to see, for example, the downsides of a society based on the motor car. But those downsides probably couldn’t have been predicted when the technology was new, and now that they are apparent our dependence on the technology means that we’re committed to it, for all its drawbacks. It’s difficult to see a resolution of this dilemma, other than to take away from it the need for a realistic understanding of the complete path that innovations must follow as they move from invention to widespread adoption.
The critique from Hayek
For me, the most powerful critique of the notion of responsible research and innovation, and the idea that public engagement can have any effect on steering technology to socially desirable goals, arises from the neo-liberal thinking derived from Hayek. In this view, basic science provides a resource that innovators can apply in ways unpredicted and unpredictable by the science’s originators. Entrepreneurs make innovations and test them in the market, and it is their success or failure in the market that provides the only way of assessing whether innovation is societally desirable. The crucial point is that the market operates as a device – the only device that is conceivable – for aggregating information distributed across countless people across society, all with their own valuable tacit knowledge.
Hayek’s friend Michael Polanyi extended these ideas to the scientific enterprise itself, coming up with the enduring notion of the “independent republic of science”. According to Polanyi, “the pursuit of science by independent self-co-ordinated initiatives assures the most efficient possible organization of scientific progress. And we may add, again, that any authority which would undertake to direct the work of the scientist centrally would bring the progress of science virtually to a standstill.” It follows from this view that there should be a moral division of labour, between basic scientists, who can’t and shouldn’t consider the ethics of potential applications, and applied scientists, who should. Polanyi’s view that it is futile and counter-productive to attempt to direct or shape the progress of science in any way makes it enduringly popular amongst elite scientists, while the notion of the scientific enterprise as a self-made order, analogous to a market economy, appeals to Hayekians.
Even if one accepts (as I do) the power of the idea of the market as an information aggregating device, there is still much to criticise in the Hayekian view as it applies to the development of new technologies. In a market economy, preferences of individuals are weighted by how much money they have. Naturally this directs technologies in ways that neglect problems suffered by poor people (such as the diseases of underdeveloped tropical countries) and overemphasise the desires of the very rich (for example private space travel). Wide inequalities in wealth are associated with wide imbalances in power, about which Hayekians are naive or disingenuous. Not everyone agrees that markets always satisfy people’s genuine needs, rather than their transient desires that themselves are created through advertising and social pressures. Others would insist on respecting the right of people to make their own choices, and it is certainly true that it’s unreasonable to expect people to know what they want from new technology until its fruits are actually on offer. As Henry Ford is reputed to have said, “If I had asked people what they wanted, they would have said faster horses”.
Innovation in the neo-liberal political economy
Classical economic theory teaches us that it is hard, in a fully competitive system, for an innovator to capture the full societal value of an innovation, because others can quickly copy the innovation and thus benefit from it without having had to pay the costs of developing it. Many of the innovations of the twentieth century were developed in conditions in which competition was suppressed. The global chemicals industry, whose extensive corporate laboratories gave us plastics and the modern pharmaceuticals industry, was effectively cartelised, for example, and the astonishing inventive fecundity of Bell Labs, in which the transistor and Unix were developed, was underpinned by an effective monopoly granted to the Bell Telephone System through government regulation. Other industries – notably aerospace and electronics, which were central parts of the military-industrial complexes of the cold war warfare states – were underpinned by generous government support.
Neoliberal economic policy recognises the difficulty of innovation in a system of pure competition as a market failure, which it attempts to correct through measures such as the concept of “intellectual property”, permitting highly constrained, time-limited monopolies on the exploitation of discoveries. Government support for basic science is recognised as the necessary provision of what are in effect pure public goods, and tax credits on research and development spending can be used to offset the disincentives to R&D that arise in competitive markets.
One specific area in which we saw a substantial move from highly regulated to more competitive markets in the late twentieth century was energy. The consequence has been a worldwide collapse in energy R&D. Given the urgency with which we need to find economical, low carbon sources of energy to begin to mitigate climate change, this needs to be rapidly reversed.
Technoscience bubbles
In 2009, the satirical website “The Onion” ran a story with the headline “Recession-plagued nation demands new bubble to invest in”, which like all good satire seemed to capture something important and true, in this case about the way our economies are working now. One thing that our neo-liberal political economies do seem to be good at generating is financial bubbles, in which asset prices rise beyond any seeming connection to their underlying value. Some kinds of asset bubble – in real estate, for example – probably have a negative effect on innovation, by diverting resources away from productive sectors of the economy. But others, such as the turn-of-the-century dot-com bubble, can positively drive innovation, by directing resources into new technologies on a scale that wouldn’t be justified by a more clear-headed assessment of the fundamental returns to investors from the technology. In his book “Doing Capitalism in the Innovation Economy”, the venture capitalist William Janeway ascribes much of the recent growth in information and communication technology to the combination of state-sponsored innovation and financial bubbles.
The idea of a bubble, as a self-reinforcing social phenomenon, is one that has been extended to technoscience by the complexity theorists Monika Gisler and Didier Sornette. They examined the Human Genome Project, arguing that it represented a kind of “social bubble”, in which “strong social interactions between enthusiastic supporters of the Human Genome Project weaved a network of reinforcing feedbacks that led to a widespread endorsement and extraordinary commitment by those involved in the project, beyond what would be rationalized by a standard cost-benefit analysis in the presence of extraordinary uncertainties and risks.”
The idea of a technoscience bubble allows us to understand much contemporary discourse about science and technology. Candidates for a technoscience bubble need to be founded on some genuinely interesting science, but diagnostic signs of bubblehood might include techno-nationalist appeals by the practitioners and their supporters for special funding initiatives: “we mustn’t be left behind in this global race”. Declarations that this will be “the next industrial revolution” are routine, along with greatly foreshortened timelines to predicted transformational societal impacts. Financial implications aren’t neglected, with confident predictions of “multiple-billion dollar markets”, often by people with an interest in inflating an accompanying financial bubble, for example in the value of start-up companies in the field. Concerned observers will worry about negative consequences, often painting extreme scenarios – “the end of the world as we know it”. Speculative techno-ethics and the discourse of existential risk are all part of the process of inflating the perceived significance of the area of technoscience in question (an important point made separately by philosopher Alfred Nordmann and cultural critic Dale Carrico).
I leave it to the reader to compare what has been written about fields such as biotechnology, nanotechnology, synthetic biology, artificial intelligence, 3D printing, graphene and computational neuroscience with this list.
Can bubble-induced innovation ever be responsible? On the positive side, a bubble built the UK a railway infrastructure that still serves us, and more recently the dot-com bubble gave us much of the internet’s fibre-optic infrastructure, so bubbles have undeniable social value in financing innovation. The obvious cost, of course, is the financial loss suffered by the unfortunates who lose their money when the bubble bursts. Less obvious is the opportunity cost of the potentially more worthwhile innovation that is foregone; it doesn’t seem right that mutual delusion is the most effective way of deciding where our innovation efforts are best concentrated.
Are we in an age of technological stagnation?
The libertarian tech entrepreneur Peter Thiel has expressed his frustration with the current state of innovation in the slogan “We wanted flying cars, instead we got 140 characters”. This captures a sense that, for all the rapidity of innovation in the information and communication technology sectors, and the rapid growth of companies like Facebook and Twitter, there’s something essentially trivial about these developments – frequently aimed at smoothing over the minor inconveniences in the lives of the already privileged – in comparison with the aspirations of those who grew up in the technological optimism of the 1960s and 70s. Certainly in terms of its effect on economic growth, the golden years of technological innovation were the period 1940-1980. The economist Robert Gordon highlights data showing that, in the USA, total factor productivity (the Solow residual, a measure of the net economic growth left over once growth in working hours and capital inputs has been accounted for, often used as a proxy for technological progress) grew fastest in the 1960s, and is currently growing at only a fifth of this peak decadal rate. As data in one of my earlier blog posts showed, growth in real GDP per head across the G7 nations was slowing down substantially even before the recent financial crisis.
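For readers unfamiliar with the Solow residual, here is a minimal sketch of the standard growth-accounting decomposition behind it. The growth rates and the capital share in the example are made-up illustrative numbers, not Gordon’s data.

```python
# Growth-accounting sketch: total factor productivity (TFP) growth as the
# Solow residual, i.e. the part of output growth not explained by growth
# in capital and labour inputs. All numbers below are illustrative.

def tfp_growth(output_growth: float, capital_growth: float,
               labour_growth: float, capital_share: float = 0.3) -> float:
    """Solow residual for a Cobb-Douglas production function:
    g_TFP = g_Y - alpha * g_K - (1 - alpha) * g_L."""
    return (output_growth
            - capital_share * capital_growth
            - (1.0 - capital_share) * labour_growth)

# Hypothetical annual growth rates, as fractions per year:
g_output, g_capital, g_labour = 0.025, 0.030, 0.010
print(f"Implied TFP growth: {tfp_growth(g_output, g_capital, g_labour):.2%} per year")
```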
How do we resolve this paradox between the perception that technology is accelerating ever faster, and the dismal reality of the economic statistics, showing stagnating growth? As I’ve argued before, the starting point is to recognise that technology isn’t a single thing that grows or doesn’t grow at different rates. Some areas of technology may be accelerating, others advancing less fast, and indeed some areas may be going backwards as people retire and expertise is forgotten. I think it’s helpful to distinguish three realms of technological innovation. There is a digital realm, in which innovation currently is relatively easy and fast, and a material realm, in which progress is less easily won. Most difficult of all is innovation in the biological realm.
Innovation in the digital realm takes creativity, but the barriers to entry are low. With the right ideas, a handful of engineers and some low-cost hardware can build a global business. Those 140 characters didn’t result directly from a massive R&D program.
Innovation in the material realm, in contrast, takes a sustained and long-term input of capital and people, in the form of that late 19th century social innovation, formal Research and Development (R&D). The big advances in chemicals, materials, energy, and electronics of the twentieth century depended on large-scale R&D investments, both public and private. But between 1871 and 1991, whether public or private, the motivations for these investments were as much the magnification of state power as the desire to promote economic growth. To go back to an example I introduced earlier, the Haber-Bosch process for fixing nitrogen was converted into an industrial-scale process by an investment of about $100 million at 1919 prices – roughly $1 billion in today’s money, and a significant share of German GDP at the time. About half of this came from the German government. The long-term impact of this work was to transform world agricultural productivity through artificial fertilisers, though the immediate motivation was the importance of nitric acid in making explosives.
Innovation in the biological realm is yet more difficult, because biology is very complex, and organisms have agency of their own – they fight back, as we’re seeing with the development of antibiotic resistance. In some important areas, notably the pharmaceutical industry, innovation is slowing down and becoming unaffordable. The cost of developing a single new drug has been increasing exponentially for the last sixty years, doubling every 9 years, and passing the $1 billion mark at the turn of the millennium. This inexorable rise seems unaffected by the remarkable advances in fundamental biology in that period.
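To get a feel for what a nine-year doubling time implies, here is a minimal sketch. Only the doubling time comes from the text above; the starting cost and the time points are rough, hypothetical values chosen for illustration. Sixty years of such doubling multiplies the cost roughly a hundredfold.

```python
# How fast does a cost grow if it doubles every 9 years?
# Only the 9-year doubling time comes from the text; the starting cost
# below is a rough, hypothetical value chosen for illustration.

DOUBLING_TIME_YEARS = 9.0

def cost_after(years: float, initial_cost: float) -> float:
    """Cost after `years` of exponential growth with a fixed doubling time."""
    return initial_cost * 2.0 ** (years / DOUBLING_TIME_YEARS)

initial_cost_usd = 10e6  # hypothetical development cost per drug at the start of the period
for elapsed in (0, 18, 36, 54, 60):
    print(f"after {elapsed:2d} years: ${cost_after(elapsed, initial_cost_usd) / 1e9:.2f} billion")
```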
Many factors undoubtedly contribute to the slowdown in innovation and technological change that seems to be reflected in the slowing economies of the developed world. It is striking that this coincides with a trend towards more financialised economies, underpinned by a growing free market fundamentalism. Free markets are good at driving continuous, incremental improvements. But, if we think of technological change as an optimisation process, the landscape in which that optimisation is carried out is probably rough and complex. Major technological change has in the past been driven by large-scale saltations rather than incremental changes, and free market economies can’t deliver these. This is not a new insight: as Joseph Schumpeter wrote in his 1942 book Capitalism, Socialism and Democracy, “a system – any system, economic or otherwise – that at every given point of time fully utilizes its possibilities to the best advantage may yet in the long run be inferior to a system that does so at no given point in time, because the latter’s failure to do so may be a condition for the level or speed of long-run performance”.
Getting big stuff done – responsibly?
Another public figure who has recently agonised about the shrinking of our ambitions for invention and innovation is the science fiction author Neal Stephenson. In his article Innovation Starvation he bemoans our inability to “get big stuff done”. That’s a good slogan to describe what’s going to be needed to decarbonise the world’s energy economy, adapt to the climate change we’re already committed to, and ensure the health and welfare of a growing and ageing world population. Like Schumpeter, Stephenson blames our current system’s emphasis on local optimisation for “innovation starvation”: “Any strategy that involves crossing a valley—accepting short-term losses to reach a higher hill in the distance—will soon be brought to a halt by the demands of a system that celebrates short-term gains and tolerates stagnation, but condemns anything else as failure. In short, a world where big stuff can never get done”.
Getting big stuff done needs to be a collective effort, mobilising people and capital on a big scale. This kind of collective effort was possible in the twentieth century. In the context of hot and cold wars, and superpower rivalries, the will to develop large-scale new technologies was created by the perceived need to maintain state power. No-one should want to return to a world so dominated by real or threatened wars, nor should anyone sane want destructive power to be the goal of our new technologies. And yet we need to reflect on the fact that this period coincided with the fastest, most sustained period of economic growth and technological innovation in the world’s most developed countries.
It’s scarcely possible to imagine a kind of innovation that’s less responsible than the mid-twentieth century creation of massive nuclear arsenals and the capacity to deliver that destructive power anywhere in the world. And yet, there seems to be a rhetorical mismatch between talking about “responsible innovation” and “getting big stuff done”. In this context, “responsible” seems a prissy, pinching sort of word, incompatible with ambition, vision, and reach. Is it ridiculous to imagine that we can “get big stuff done” in a “responsible” way?
Some things are clear. No major technology will be introduced if we can’t tolerate the risk that it won’t work, or that it will have consequences we haven’t foreseen – so responsible innovation needs to be reflective, open and adaptive about those risks rather than averse to them. And if this kind of large-scale innovation is going to need large collective investments, there needs to be a way of mobilising a consensus in favour of that effort and the visions of the future it anticipates.
Many dimensions of responsibility
When we talk about responsible research and innovation, there are many dimensions of responsibility. Clearly, there needs to be a responsibility in the way science itself is practised, and as innovations are introduced there needs to be responsibility about their potential consequences, for example on health and the environment. Uncertainty inevitably accompanies new technologies, whose consequences can rarely be fully foreseen, and responsible innovation must always involve an explicit recognition of that uncertainty and a constant willingness to anticipate different possible outcomes and adapt what we are doing to new evidence.
Visions of the future are important too; they motivate scientists and, through their wider cultural influence, they motivate public support for science and provide the context in which science policy is framed. We need to be responsible about these visions as well. Our visions of possible futures are inescapably political; that’s not a bad thing, but it needs to be made explicit, and we should be prepared for those visions to be contested. The idea that the future is preordained and predetermined has to be challenged; our visions of the future should be plural and provisional, and their uncertainty acknowledged. That uncertainty doesn’t absolve us from the need to subject them to critical thinking about their plausibility; futurology is too prone to wishful thinking. And we need responsible analysis of the challenges and constraints we face with issues like climate change, and of the degree to which we are locked in to our current unsustainable energy system.
Finally, we need responsible salesmanship of science and technologies and their possibilities, to governments and funding agencies, but also to investors and to the public. We shouldn’t knowingly inflate technoscience bubbles.
Responsible Innovation or irresponsible stagnation?
We need to innovate responsibly, and yet, we do need to innovate. If it’s irresponsible to innovate without a reflexive process of alignment with widely held societal priorities, it’s irresponsible not to innovate in the face of pressing societal challenges. This necessary innovation is not happening.