This is the pre-edited version of a piece which appeared in Nature Nanotechnology 4, 336-336 (June 2009). The published version can be viewed here (subscription required).
What does it mean to be a responsible nanoscientist? In 2008, we saw the European Commission recommend a code of conduct for responsible nanosciences and nanotechnologies research (PDF). This is one of a growing number of codes of conduct being proposed for nanotechnology. Unlike other codes, such as the UK-based Responsible Nanocode, which are focused more on business and commerce, the EU code is aimed squarely at the academic research enterprise. In attempting this, it raises some interesting questions about the degree to which individual scientists are answerable for the consequences of their research, even if those consequences were ones which they did not, and possibly could not, foresee.
The general goals of the EU code are commendable – it aims to encourage dialogue between everybody involved in and affected by the research enterprise, from researchers in academia and industry, through policy makers, to NGOs and the general public, and it seeks to make sure that nanotechnology research leads to sustainable economic and social benefits. There’s an important question, though, about how the responsibility for achieving this desirable state of affairs is distributed between the different people and groups involved.
One can, for example, imagine many scientists who might be alarmed at the statement in the code that “researchers and research organisations should remain accountable for the social, environmental and human health impacts that their N&N research may impose on present and future generations.” Many scientists have come to subscribe to the idea of a division of moral labour – they do the basic research, which, in the absence of direct application, remains free of moral implications, while the technologists and industrialists take responsibility for the consequences of applying that science, whether those are positive or negative. One could argue that this division of labour has begun to blur, as the distinction between pure and applied science becomes harder to make. Some scientists are themselves happy to embrace this – after all, they are glad to take credit for the positive impact of past scientific advances, and to cite the potential big impacts that might hypothetically flow from their results.
Nonetheless, it is going to be difficult to convince many that the concept of accountability is fair or meaningful when applied to the downstream implications of scientific research, given that those implications are likely to be very difficult to predict at an early stage. The scientists who make an original discovery may well not have a great deal of influence on the way it is commercialised. If there are adverse environmental or health impacts from some discovery of nanoscience, the primary responsibility must surely lie with those directly responsible for creating the conditions in which people or ecosystems were exposed to the hazard, rather than with the original discoverers. Perhaps it would be more helpful to think about the responsibilities of researchers in terms of a moral obligation to be reflective about possible consequences, to consider different viewpoints, and to warn about possible concerns.
A consideration of the potential consequences of one’s research is one possible approach to proceeding in an ethical way. But the uncertainty that necessarily surrounds any predictions about the way research may end up being applied at a future date, and the lack of agency and influence over those applications that researchers often feel, can limit the usefulness of this approach. Another code – the UK government’s Universal Ethical Code for Scientists – takes a different starting point, with one general principle – “ensure that your work is lawful and justified” – and one injunction to “minimise and justify any adverse effect your work may have on people, animals and the natural environment”.
A reference to what is lawful has the benefit of clarity, and it provides a connection through the traditional mechanisms of democratic accountability with some expression of the will of society at large. But the law is always likely to be slow to catch up with new possibilities suggested by new technology, and many would strongly disagree with the principle that what is legal is necessarily ethical. As far as the test of what is “justified” is concerned, one has to ask, who is to judge this?
One controversial research area that probably would pass the test that research should be “lawful and justified” is the application of nanotechnology to defence. Developing a new nanotechnology-based weapons system would, however, clearly contravene the EU code’s injunction to researchers that they “should not harm or create a biological, physical or moral threat to people”. Researchers working in a government research organisation with this aim might find reassurance for any moral qualms in the thought that it was the job of the normal processes of democratic oversight to ensure that their work did pass the tests of lawfulness and justifiability. But this won’t satisfy those people who are sceptical about the ability of institutions – whether they are in government or in the private sector – to manage the inevitably uncertain consequences of new technology.
The question we return to, then, is how responsibility is divided between the individuals who do science and the organisations, institutions and social structures in which science is done. There’s a danger that codes of ethics focus too much on the individual scientist, at a time when many scientists feel rather powerless, with research priorities increasingly being set from outside, and with the development and application of their research out of their hands. In this environment, too much emphasis on individual accountability could prove alienating, and could divert us from efforts to make the institutions in which science and technology are developed more responsible. Scientists shouldn’t underestimate their collective importance and influence, even if individually they feel rather impotent. Part of the responsibility of a scientist should be to reflect on how one would justify one’s work, and how people with different points of view might react to it; scientists who do this will be in a good position to have a positive influence on the institutions they interact with – funding agencies, for example. But we still need to think more generally about how to create responsible institutions for developing science and technology, as well as responsible nanoscientists.
…and of course responsible organisations to translate the science into innovations for us all. The Responsible Nano Code for business sought to help with that. I am always disappointed that the conversation is about responsible science only, not responsible innovation – the commercial end, where many of the problems actually manifest themselves. When, I wonder, is someone going to pay more attention to that? Our work looking at public engagement in recent years suggests that’s the thing people are most worried about.
I agree with you entirely, Hilary. Part of the problem, though, is that there are strongly held (though not always loudly articulated) contrary views. For example, many influential people hold the Hayekian view that the only proper and reliable way in which people’s views should be incorporated into steering the commercial development of innovations is through the market. Meanwhile, spreading from the USA particularly, we have the opinion that the only proper way to police the private sector is through the law of tort. This, of course, is all part of the belief system that supports the “bad capitalism” about which I wrote a few weeks ago.
First of all, hello Richard and others, I hope you are all doing well.
Secondly, Richard, would you please check out http://www.incanautchallenge.com and tell me what you think of this concept for a universal construction kit/nano assembly system? I would appreciate your feedback.
Thirdly, let us say that the Drexler/Merkle type machine-part nanotech is not as practical as the more biomimetic type. Do you believe it will still be possible, using the more biomimetic nanosystems you advocate, to construct materials that are hard, strong, dry, and tough, like carbon nanotubes, ceramics, carbides (tungsten, boron, etc.), sapphire, silicates/quartz, and the like? Can these types of material structures be integrated into biochemical systems, as we see with biomineralization in nature? I think that even if we went that route, we would still end up with the great ability to fabricate diamondoids, fullerenes, and macroscopic structural materials with atomic precision, as well as nanomedical devices, food, clothing fabrics, and more, cheaply and abundantly, using clean solar power and clean chemical power. What are your thoughts on this? i.e. Is the Drexler/Merkle type nanosystem necessary to assemble HARD/DRY parts and materials, or not?
I also look at the work of people such as Angela Belcher, who is working with engineered viruses – she herself has said she would like to see a sort of “programmable genetics” that allows us to make any sort of material with any sort of property – as well as the DNA nanotech of Ned Seeman and others.
Pardon me, that particular website seems to be down. Here is the basic patent of the Inter Nodal Connector Architecture:
http://www.y-pod.com/
Basically, the inventor, Steve Bridgers, shows that you can mimic the fullerene structure using his system. I would like your take on this, Richard: do you think this is a good route to nano-assembly, and also to macroscopic snap-together structures and electronics?
Here is a good link on using viruses engineered to assemble nanotubes and other components into solar cells:
http://www.popsci.com/technology/article/2011-04/mit-researchers-use-viruses-build-more-efficient-solar-panels
Also:
http://www.bbc.co.uk/news/technology-15023175