The Nottingham nanotechnology debate – transcript now available

Last summer, a debate was held at Nottingham in which proponents and sceptics of the Drexlerian vision of molecular nanotechnology exchanged views (with me in the latter camp). I think it was notable both for its constructive tone, and for the high quality of the debate, helped by the presence of many very distinguished UK nanoscientists in the audience. At the time it was promised that a film of the event and a transcript would be published. These things often take longer than expected to come to fruition, but Philip Moriarty now reports, via a comment here, that the transcript, published in the journal Nanotechnology Perceptions, is now available for download from the Nottingham Nanoscience Group’s webpages. Since the links themselves are a bit obscure, the PDF of the transcript is here, and a short introductory piece by Philip is here.

To quote Philip Moriarty’s words: “The transcript on the following pages is the first time that a public (and lengthy) debate on the feasibility of nanomachines and molecular manufacturing, involving a significant number of world-leading surface- and nano-scientists, has been published in its entirety in the scientific literature.”

Which nation’s scientific output is rising fastest?

China, you might say, but you’d be wrong, according to a study of world rankings in science published recently by the UK government (latest DTI study into the outputs and outcomes from UK science – 920 kB PDF). This looks at a variety of input and output measures to construct a fairly complete picture of the distribution of scientific activity and impact around the world. Notwithstanding the surprising answer to my trick question (revealed at the end of this post), this report confirms the rapid growth of China as a scientific power, the lessening of the formerly unchallenged dominance of the USA, and (from a parochial perspective) the rather strong performance of the UK, which spends less on research and has fewer researchers than its competitors, but nonetheless produces proportionately more science, with greater impact.

It’s in spending on science research that the rise of China is most obvious – in real terms (adjusted for purchasing power parity) China’s research spend has increased four-fold in the last decade; it now exceeds that of all other individual countries except the USA and Japan, and has reached half the European Union total. In terms of output of scientific publications, China now has a 5% world share, up by a factor of three in the last decade, and now greater than France’s. Again, in terms of individual nations the USA still leads by this output measure, with almost exactly one third of world output, but the European Union nations taken together have now outstripped the USA, with 37.9% of publications. The UK, at just less than 9%, is the second-placed individual nation, having recently overtaken Japan. Taken together, the Asia-Pacific group of China, Korea, Taiwan and Singapore would account for 10% of world output.

What about quality and impact? Here the USA still has a clear lead; taking as a measure of world impact the share of the most highly cited papers (the top 1% in each discipline), the USA comes first with 61%, while the UK outperforms its volume share with 13% of highly cited papers. China still underperforms on this measure, but the gap is closing, and is likely to close further, because citation counts are a lagging indicator – it takes some years for spending on science to translate first into publication outputs, and only later into citations of those papers by other workers.

The country whose output of scientific publications has increased the most over the last decade is Iran, whose output has increased by a factor of ten, albeit from a low base (China’s increased by a factor of three, the second fastest rate of growth). It will be interesting to see, in the light of recent political developments, whether Iran’s good performance will continue.

The best of both worlds – organic semiconductors in inorganic nanostructures

Today’s picture is a scanning electron micrograph of a hybrid structure in which organic light emitters are confined in a micropillar by a pair of inorganic multilayer mirrors. These hybrid organic/inorganic structures have interesting photonic properties that may have applications in quantum cryptography and quantum computing; this work comes from my colleagues in the physics department here at Sheffield in collaboration with some of our electrical engineers.

SEM image of a micropillar
Image by Wen-Chang Hung, image post-treatment by Andy Eccleston.

This structure is made by laying down, by chemical vapour deposition, 12 pairs of alternating layers of silicon oxide and silicon nitride, each one quarter of the optical wavelength in thickness. This is coated with a 240 nm (half a wavelength) thick layer of the polymer polystyrene, doped with an organic dye called Lumogen red, which in turn is coated with another 12 pairs of layers, this time of thermally evaporated tellurium oxide and lithium fluoride. The pillar is carved out of the resulting layer-cake structure using a focused ion beam.
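To make the quarter-wave condition concrete, here is a minimal sketch of the arithmetic; the design wavelength, refractive indices and surrounding media below are my own illustrative assumptions, not values taken from the paper.

```python
# Quarter-wave mirror ("distributed Bragg reflector") arithmetic.
# All numbers below are illustrative assumptions, not the values
# used in the Sheffield device.

wavelength = 600e-9   # assumed design (vacuum) wavelength, metres

n_high = 2.0          # assumed index, roughly silicon nitride
n_low = 1.46          # assumed index, roughly silicon oxide

# quarter-wave condition: each layer is lambda / (4 n) thick
d_high = wavelength / (4 * n_high)
d_low = wavelength / (4 * n_low)
print(f"high-index layer thickness: {d_high * 1e9:.0f} nm")
print(f"low-index layer thickness:  {d_low * 1e9:.0f} nm")

# peak reflectivity of N pairs between incident medium n0 and substrate ns
# (one common textbook form for a quarter-wave stack at normal incidence)
N = 12
n0, ns = 1.0, 1.46    # assumed: air above, oxide-like medium below
ratio = (n0 / ns) * (n_low / n_high) ** (2 * N)
R = ((1 - ratio) / (1 + ratio)) ** 2
print(f"peak reflectivity of {N} pairs: {R:.5f}")
```

With these assumed indices, 12 pairs already give a reflectivity of better than 99.8%, which is why stacks of this depth behave, for practical purposes, as the near-perfect mirrors described below.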

The multilayers act as nearly perfect mirrors. Imagine putting a light source between two parallel mirrors – you’d get an infinite (if the mirrors were perfect) series of reflections of the light. In our structure the dye molecule is the light source; when it emits a single photon, that photon interferes with its ghostly counterparts emitted from the reflections, which are all in phase with each other. This makes the structure a very efficient source of single photons – potentially these could be used for quantum cryptography or quantum computing.

All of this has already been demonstrated using quantum dots – tiny particles of inorganic semiconductor – as the light emitter. What’s the advantage of using an organic dye instead? In these devices, photons are emitted when an electron and a hole annihilate. These electron-hole pairs – called excitons – are very weakly bound in ordinary semiconductors, which means that such devices only work at rather low temperatures, around 50 K. In organic molecules the charges distort the structure of the molecule itself, which means that the exciton is much more strongly bound and the device will work at room temperature. Needless to say, this makes the possibility of an economically viable quantum computer seem much closer. To be fair, though, the organic materials have disadvantages too – they are susceptible to being bleached by bright light.
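The temperature argument is just a comparison of the exciton binding energy with the thermal energy kB·T. A back-of-envelope sketch, using binding energies that are my own order-of-magnitude assumptions rather than figures from the paper:

```python
# Why weakly bound excitons need cryogenic temperatures: compare the
# exciton binding energy with the thermal energy kB*T.
# The binding energies below are assumed orders of magnitude only.

k_B = 8.617e-5  # Boltzmann constant, eV per kelvin

def thermal_energy_meV(T_kelvin):
    """Thermal energy kB*T in milli-electronvolts."""
    return k_B * T_kelvin * 1000

for T in (50, 300):
    print(f"kB*T at {T} K = {thermal_energy_meV(T):.1f} meV")

E_inorganic = 10.0   # meV, assumed: weakly bound exciton in a III-V structure
E_organic = 500.0    # meV, assumed: tightly bound exciton in an organic dye

# an exciton survives only if its binding energy comfortably exceeds kB*T
print(E_inorganic > thermal_energy_meV(50))    # True: works when cold
print(E_inorganic > thermal_energy_meV(300))   # False: dissociates at room T
print(E_organic > thermal_energy_meV(300))     # True: organics work warm
```

At 50 K the thermal energy is about 4 meV, comfortably below a weakly bound exciton’s binding energy; at room temperature it is about 26 meV, which is why only the strongly bound organic excitons survive.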

The work is a collaboration between my colleagues in physics, Ali Adawi, Ashley Cadby, Daniele Sanvitto, Liam Connolly and Richard Dean, who are postdocs and grad students in the groups of David Lidzey, Mark Fox, and Maurice Skolnick. Device fabrication was done with the help of Wen-Chang Hung and Abbes Tahraoui, in Tony Cullis’s group in our Electronic and Electrical Engineering Department. It’s reported in the current edition of Advanced Materials here (subscription required).

The road to nanomedicine may not always be quick or easy

Of the six volunteers who became seriously ill during a drug trial last week, four, mercifully, seem to be beginning to recover, while two are still critical, according to the most recent BBC news story. It’s still too early to be sure what went so tragically wrong; there are informative articles, with some informed comment, on the websites of both New Scientist and Nature. What we should learn from this is that even as medicine gets more sophisticated and molecularly specific, many things can go wrong in the introduction of new therapies. The length of time it takes new treatments to get regulatory approval can be frustratingly, agonisingly long, but we need to be very careful about the calls we sometimes hear to speed these processes up. The delays are not just gratuitous red tape.

The drug behind this news story was developed by a small German company, TeGenero Immunotherapeutics. It’s a monoclonal antibody, code-named TGN1412: a protein molecule which binds specifically to a receptor on T-cells, a type of white blood cell central to the body’s immune response. The target receptor – designated CD28 – is a glycoprotein – a combination of a protein with a carbohydrate segment – which provides the signal to activate the T-cells. What’s special about TGN1412 is that the action of this drug alone is sufficient to activate the T-cells; normally, simultaneous binding to two different receptors is required. It’s as if TGN1412 overrides the safety catch, allowing the T-cells to be activated by a single trigger. It’s these activated T-cells that then carry out the therapeutic purpose, killing cancer cells, for example.

Few people have connected these events with bionanotechnology (an exception is the science journalist Niels Boeing in this piece on the German Technology Review blog). There are now a number of monoclonal antibody based drugs in clinical use, and they are not normally considered to be the product of nanomedicine. But they do illustrate some of the strategies that underlie developments in nanomedicine – they are exquisitely targeted to particular cells, they exploit the chemical communication strategies that cells use, and they increasingly co-opt biology’s own mechanisms for clinical purposes. Biology is so complex that it’s always going to spring surprises, and the worry must be that as our interventions in complex biological systems become more targeted, so the potential for unpleasant surprises may increase. Whenever one hears blithe assurances that nanotechnology will soon cure cancer or arrest ageing if only those bureaucratic regulators would allow it, one needs to think of those two men struggling for their lives in a North London hospital. There may be good reasons why the pace of innovation in medicine can sometimes be slow.

Forthcoming nano events in Sheffield

A couple of forthcoming events might interest nano-enthusiasts at a loose end in South Yorkshire in the next few weeks. Next Monday at 7pm, as part of National Science Week, there’s a public lecture in the Crucible Theatre called “A robot in the blood”. In it, my colleagues Tony Ryan and Noel Sharkey will discuss what a real medical nanobot might look like. Both are accomplished public performers – Tony Ryan is a chemist (with whom I collaborate extensively) who gave the Royal Institution Christmas lectures a couple of years ago, and Noel Sharkey is an engineer and roboticist who regularly appears in the TV program “Robot Wars”.

Looking further ahead, on Monday April 3rd there is a one-day meeting on “Nanotechnology in Society: The wider issues”. This will involve talks from commentators on nanotechnology from different viewpoints, followed by a debate. Speakers include Olaf Bayer, from the campaigning group Corporate Watch, Jack Stilgoe, from the public policy thinktank Demos, Stephen Wood, co-author (with me and Alison Geldart) of the Economic and Social Research Council report “The Social and Economic Challenges of Nanotechnology”, and Rob Doubleday, a social scientist working in the Cambridge Nanoscience Centre. The day is primarily intended for the students of our Masters course in Nanoscale Science and Technology, but anyone interested is welcome to attend; please register in advance as described here.

How much should we worry about bionanotechnology?

We should be very worried indeed about bionanotechnology, according to Alan Goldstein, a biomaterials scientist from Alfred University, who has written a long article called I, Nanobot on this theme in the online magazine Salon.com. According to this article, we are stumbling into creating a new form of life, which is, naturally, out of our control: “And Prometheus has returned. His new screen name is nanobiotechnology.” I think that some very serious ethical issues will be raised by bionanotechnology and synthetic biology as they develop. But this article is not a good start to the discussion; when you cut through Goldstein’s overwrought and overheated writing, quite a lot of what he says is simply wrong.

Goldstein makes a few interesting and worthwhile points. Life isn’t just about information, you have to have metabolism too. A virus isn’t truly alive, because it consists only of information – it has to borrow a metabolism from the host it parasitises to reproduce. And our familiarity with one form of life – our form, based on DNA for information storage, proteins for metabolic function, and RNA to intercede between information and metabolism – means that we’re too unimaginative about conceiving entirely alien types of life. But the examples he gives of potentially novel, man-made forms of life reveal some very deep misconceptions about how life itself, at its most abstract, works.

I don’t think Goldstein really understands the distinction between equilibrium self-assembly, by which lipid molecules form vesicles, for example, and the fundamentally out-of-equilibrium character of the self-organisation characteristic of living things. I am literally not the same person I was when I was twenty; living organisms constantly turn over the molecules they are made from; the patterns persist, but the molecules that make up the patterns are always changing. So his notion – that if we make an anti-cancer drug delivery device with an antibody that targets a certain molecule on a cell membrane, the device will stay stuck there for the lifetime of the organism, and that if it finds its way to a germ cell it will be passed down from generation to generation like a retrovirus – is completely implausible. The molecule it is stuck to will soon be turned over, and the device itself will be similarly transient. It’s because the device lacks a way to store the information needed to continually regenerate itself that it can’t be considered living in any sensible way.
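To put a rough number on that transience: suppose the target molecule is turned over with first-order kinetics and a half-life of a few days (an assumed, illustrative figure – real turnover rates vary widely between proteins). The fraction of devices still bound collapses long before anything could be “inherited”:

```python
# Toy model of molecular turnover: a device antibody-bound to a membrane
# protein disappears when that protein is degraded and replaced. Assume
# first-order turnover with an illustrative half-life of a few days.

import math

half_life_days = 3.0                   # assumed half-life of the target protein
rate = math.log(2) / half_life_days    # corresponding first-order rate constant

def fraction_still_bound(t_days):
    return math.exp(-rate * t_days)

for t in (1, 7, 30, 365):
    print(f"after {t:3d} days: {fraction_still_bound(t):.2e} of devices remain bound")
```

On these assumptions, essentially nothing remains bound after a month, let alone over the lifetime of an organism.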

If rogue, powered vesicles lodging in our sperm and egg cells aren’t scary enough, Goldstein next invokes the possibility of meddling with the spark of life itself – electricity: “But the moment we close that nano-switch and allow electron current to flow between living and nonliving matter, we open the nano-door to new forms of living chemistry – shattering the ‘carbon barrier.’ This is, without doubt, the most momentous scientific development since the invention of nuclear weapons.” This sounds serious, but it seems to be founded on a misconception of how biology uses electricity. Our cells burn sugar, Goldstein says, which “yields high-energy electrons that are the anima of the living state.” Again, this is highly misleading. The energy currency of biology isn’t electricity, it’s chemistry – specifically, the energy-containing molecule ATP. And when electrical signals are transmitted, through our nerves, or to make our heart work, it isn’t electrons that are moving, it’s ions.

Goldstein makes a big deal of the idea of a Biomolecule-to-Material (BTM) interface between a nanofabricated pacemaker and the biological pacemaker cells of the heart: “A nanofabricated pacemaker with a true BTM interface will feed electrons from an implanted nanoscale device directly into electron-conducting biomolecules that are naturally embedded in the membrane of the pacemaker cells. There will be no noise across this type of interface. Electrons will only flow if the living and nonliving materials are hard-wired together. In this sense, the system can be said to have functional self-awareness: Each side of the BTM interface has an operational knowledge of the other.” This sounds like a profound and disturbing blurring of the line between the artificial and the biological. The only trouble is, it’s based on a simple error. Pacemaker cells don’t have electron-conducting biomolecules embedded in their membranes; the membrane potentials are set up and relaxed by the flow of ions through ion channels. There can be no direct interface of the kind that Goldstein describes.

Of course, we can and do make artificial interfaces between organisms and artefacts – the artificial pacemakers that Goldstein mentions are one example, and cochlear implants are another. The increasing use of this kind of interface between artefacts and human beings does already raise ethical and philosophical issues, but discussion of these isn’t helped by this kind of mysticism built on misconception.

In an attempt to find an abstract definition of life, Goldstein revives a hoary old error about the relationship between the second law of thermodynamics and life: “The second law of thermodynamics tells us that all natural systems move spontaneously toward maximum entropy. By literally assembling itself from thin air, biological life appears to be the lone exception to this law.” As I spent several lectures explaining to my first-year physics students last semester, what the second law actually says is that isolated systems tend to maximum entropy. Systems that can exchange energy with their surroundings are bound only by the weaker constraint that, as they change, the total entropy of the universe must not decrease. If a lake freezes, the entropy of the water decreases, but as the ice forms it expels heat, which raises the entropy of its surroundings by at least as much as the water’s entropy falls. Biology is no different, trading local decreases of entropy for global increases. Goldstein does at least concede this point, noting that “geodes are not alive”, but he then goes on to say that “nanomachines could even be designed to use self-assembly to replicate”. This statement is, at best, half-true; self-assembly is one of the most important design principles used by biology, and it’s increasingly being exploited in nanotechnology too. But self-assembly is not, in itself, biology – it’s a tool used by biology. A system that is organised purely by equilibrium self-assembly is moving towards thermodynamic equilibrium, and things that are at equilibrium are dead.
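The lake example can be made quantitative with textbook numbers; the surroundings temperature below is an assumed illustrative value.

```python
# Entropy bookkeeping for the freezing lake: the water's entropy falls,
# but the expelled heat raises the surroundings' entropy by more.
# The surroundings temperature is an assumed illustrative value.

L_f = 334.0       # latent heat of fusion of water, joules per gram
T_melt = 273.15   # K, temperature at which the water freezes
T_surr = 263.15   # K, assumed temperature of the cold air above the lake

# per gram of water frozen:
dS_water = -L_f / T_melt   # entropy lost by the water as it orders into ice
dS_surr = +L_f / T_surr    # entropy gained by the colder surroundings

print(f"dS(water):        {dS_water:+.3f} J/(g K)")
print(f"dS(surroundings): {dS_surr:+.3f} J/(g K)")
print(f"dS(total):        {dS_water + dS_surr:+.3f} J/(g K)")
```

The total comes out positive (about +0.05 J/(g·K) per gram frozen with these numbers), exactly as the second law requires: a local decrease in entropy paid for by a larger global increase.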

The problem at the heart of this article is that, in insisting that life is not about DNA but metabolism, Goldstein has thrown the baby out with the bathwater. Life isn’t just about information, but it needs information in order to replicate and, most centrally, it needs some way of storing information in order to evolve. It’s true that that information could be carried in vehicles other than DNA, and it need not necessarily be encoded by a sequence of monomers in a macromolecule. I believe that it might in principle be possible in the future to build an artificial system that fulfils some general definition of life. I agree that this would constitute a dramatic scientific development with far-reaching implications that should be discussed well in advance. But I don’t think it does anyone a service to overstate the significance of the developments in nanobiotechnology we are seeing at the moment, and I think that scientists commenting on these issues do have some obligation to maintain standards of scientific accuracy.

Taking the high road to large scale solar power

In principle there’s more than enough sunlight falling on the earth to meet all our energy needs in a sustainable way, but the prospects for large scale solar energy are dimmed by a dilemma. We have very efficient solar cells made from conventional semiconductors, but they are too expensive and difficult to manufacture in very large areas to make a big dent in our energy needs. On the other hand, there are prospects for unconventional solar cells – Graetzel cells or polymer photovoltaics – which can perhaps be made cheaply in large areas, but whose efficiencies and lifetimes are too low. In an article in this month’s Nature Materials (abstract, subscription required for full article, see also this press release), Imperial College’s Keith Barnham suggests a way out of the dilemma.

The efficiencies of the best solar cells available today exceed 30%, and there is every reason to suppose that this figure can be substantially increased with more research. These solar cells are based not on crystalline silicon, like standard solar cell modules, but on carefully nanostructured compound semiconductors such as gallium arsenide (III-V semiconductors, in the jargon). By building up complex layered structures it is possible to harvest the energy of light of all wavelengths efficiently. The problem is that these solar cells are expensive to make, relying on sophisticated techniques for building up the different semiconductor layers, such as molecular beam epitaxy, and they are currently used only in applications where cost doesn’t matter, such as on satellites. Barnham argues that the cost disadvantage can be overcome by combining these efficient solar cells with low-cost systems for concentrating sunlight – in his words, “our answer to this particular problem is ‘Smart Windows’, which use small, transparent plastic lenses that track the sun and act as effective blinds for the direct sunlight, when combined with innovative light collectors and small 3rd-generation cells”, and he adds, “Even in London a system like this would enable a typical office behind a south-facing wall to be electrically self-sufficient.”

Even with conventional technologies, Barnham calculates that if all roofs and south-facing walls were covered in solar cells, this would represent three times the total generating capacity of the UK’s current nuclear program – that is, 36 GW. This would be a really substantial dent in the energy needs of the UK, and if we believe Barnham’s calculation that his system would deliver about three times as much energy as conventional solar cells, it amounts to pretty much a complete solution to our energy problems. What is absent from the article, though, is an estimate of the total production capacity that is likely to be achievable; Barnham merely observes that the UK semiconductor industry has substantial spare capacity after the telecoms downturn. This is the missing calculation that needs to be done before we can accept his optimism.
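As a crude sanity check on figures like this, here is the kind of back-of-envelope estimate one can do; every input number below is my own round-number assumption, not a figure from Barnham’s article, so at best this checks the order of magnitude.

```python
# Back-of-envelope estimate of UK building-mounted solar capacity.
# Every input is an assumed round number, NOT a figure from Barnham's
# article; the point is only to check the order of magnitude.

dwellings = 25e6            # assumed number of UK dwellings
area_per_dwelling = 20.0    # m^2 of usable roof/south wall each, assumed
peak_insolation = 1000.0    # W/m^2, the standard "peak sun" figure
efficiency = 0.15           # conventional module efficiency, assumed
capacity_factor = 0.10      # assumed UK solar capacity factor

area = dwellings * area_per_dwelling                  # total usable area, m^2
peak_capacity = area * peak_insolation * efficiency   # watts at peak sun
average_power = peak_capacity * capacity_factor       # time-averaged watts

print(f"usable area:   {area / 1e6:.0f} km^2")
print(f"peak capacity: {peak_capacity / 1e9:.0f} GW")
print(f"average power: {average_power / 1e9:.1f} GW")
```

With these assumptions the peak capacity lands within a factor of a few of Barnham’s 36 GW figure, which is about as much agreement as an estimate this crude can claim; it does at least show the figure is not wildly out.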

Nanoscience in the European Research Area

Most research in Europe, in nanotechnology or any other field, is not funded by the European Union. Somewhere between 90% and 95% of research funding comes from national research agencies, working with their own procedures and to their own national priorities. This bothers some people, who see it as yet another example of the way in which Europe doesn’t get its act together and thus fails to live up to its potential. In research, the European Commission fears that, compared to rivals in the USA or the Far East, European efforts suffer from fragmentation and duplication. Its solution is the concept of the “European Research Area”, in which different national funding agencies work to create a joint approach to funding, as well as doing what they can to ensure free movement of researchers and ideas across the continent. As part of this initiative, national research agencies have come together to form thematic networks. Nanoscience has such a network, and it is meeting this week in Amsterdam to finalise the details of a joint funding call on the theme of singly addressable nanoscale objects.

Another way of looking at the many different approaches to funding nanoscience across Europe is that they give us a laboratory of approaches, a kind of controlled experiment in science funding models. Yesterday’s meeting was devoted to a series of overviews of the national nanoscience landscape in each country. The contrasts were instructive; among the large countries there was the German approach, with major groups across the country supported by really substantial infrastructure. The French had the most logical and comprehensive overall plan, while the talk describing the British effort (given by me) couldn’t entirely hide its ad hoc and largely unplanned character. The presentations from the smaller countries varied from really rather impressive displays of focused activity (from the Netherlands, Finland and Austria in particular) to more aspirational talks from countries like Portugal and Slovakia.

How do the European nations rank in nanoscience? The undisputed leader is clearly Germany, with France and the UK vying for second place. Readers of this blog will know that I’m suspicious of bibliometric measures, but some interesting data showed France second and the UK third by total numbers of nanoscience papers, with that order reversed when only highly cited papers were considered. The efforts of the rich, smaller European countries are also very significant; these are countries with high per capita GDP which typically spend a higher proportion of GDP on research than the larger countries, and they combine this with a very focused and targeted approach to the science they support. The Netherlands, in particular, looks very strong indeed in the areas it has chosen to concentrate on.

Computing with molecules

It’s easy to forget that, looking at biology as a whole, computing and information processing are more often done by individual molecules than by brains and nervous systems. After all, most organisms don’t have a nervous system at all, yet they still manage to sense their environment and respond to what they discover. And a multicellular organism is itself a colony of many differentiated cells, all of which need to communicate and cooperate for the organism to function at all. In these processes, signals are communicated not by electrical pulses, but by the physical movement of molecules, and logic is performed not by circuits of transistors, but by enzymes. Modern systems biology is just starting to unravel the operation of these complex and effective chemical computers, but we’re very far from being able to build anything like them with our currently available nanotechnology.

A news story on the New Scientist website (seen via Martyn Amos’s blog) reports an interesting step along the way: an experimental demonstration of an enzyme-based system that chemically implements simple logic operations, a half-adder and a half-subtractor. The report, from Itamar Willner’s group at the Hebrew University of Jerusalem, is published in Angewandte Chemie International Edition (abstract here, subscription required for the full paper). No-one is going to be doing complicated sums with these devices for a while; the inputs are provided by supplying particular chemical species (glucose and hydrogen peroxide, in this case), and the answers are read out as the appearance or non-appearance of reaction products. But where this system could come in useful is in giving a nanoscale system, such as a drug delivery device, some rudimentary means of sensing its environment and acting on the information – perhaps by swimming towards the source of some chemical, or releasing its contents when it has detected a particular combination of chemicals around it.
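To be clear about what the chemistry is computing: a half-adder is just an XOR gate (the sum bit) and an AND gate (the carry bit) acting on the same two inputs. Here is that logic in a few lines of Python; representing “chemical present or absent” as a boolean is my illustrative abstraction, not how the paper reports its results.

```python
# The logic the enzyme system implements: a half-adder is an XOR gate
# (the sum bit) plus an AND gate (the carry bit) on the same two inputs.
# Treating "chemical present/absent" as True/False is an abstraction.

from itertools import product

def half_adder(a: bool, b: bool):
    sum_bit = a != b   # XOR: product appears when exactly one input is present
    carry = a and b    # AND: product appears only when both inputs are present
    return sum_bit, carry

print("  A     B   | sum carry")
for a, b in product((False, True), repeat=2):
    s, c = half_adder(a, b)
    print(f"{a!s:5} {b!s:5} |  {int(s)}    {int(c)}")
```

In the chemical version, each row of this truth table corresponds to supplying, or withholding, the two input species, and reading the two bits off as distinct reaction products.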

This is still not quite a fully synthetic analogue of a cellular information processing system; it uses enzymes of biological origin, and it doesn’t use the ubiquitous chemical trick of allostery, in which the binding of one molecule to an enzyme changes the way the enzyme processes another molecule, effectively allowing a single molecule to act as a logic gate. But it suggests many fascinating possibilities for the future.

Critical Design

I spent an interesting afternoon last Tuesday at the Royal College of Art with the students on the Interaction Design course, who are just beginning a project on nanotechnology. The department began life focusing on Computer Related Design, applying the lessons of fine art and graphic design to human-centred design for computer interfaces, but it has recently broadened its scope to a wider consideration of the way people and societies interact with technology. It’s in this context that the students are being asked to visualise possible nanotechnology-based futures.

My host for the visit was the Head of Department, Tony Dunne, the author of (among other works) Hertzian Tales and Design Noir. He uses the space between industrial design, conceptual art and social theory to question the relationship between technology and society; on his appointment to the RCA he wrote: “Interaction Design can be a test space where designers engage with different technologies (not just electronics) before they enter the market place, exploring their possible impact on everyday life through design proposals – from a variety of perspectives: commercial, aesthetic, functional, critical, even ethical. I believe we need to educate designers to a higher level than we presently do, if they are to have a significant and meaningful role to play in the 21st Century and not just sit at the margins producing pleasant distractions.”

To see why this approach to design might be useful for nanotechnology, take a look at the Nanofactory animation made by John Burch and Eric Drexler to illustrate their vision of the future of nanotechnology. Making no judgements for the moment about its technical feasibility, it’s worth looking at the symbolism of this vision. What’s striking is how amazingly conservative it is. The nano-fabricator itself looks like an upmarket bread-making machine, while the final product is a palm-top computer that could, in design terms, have come from your local PC World. It’s worth contrasting this vision with the much more radical vision of manufacturing outlined in Drexler’s original book, Engines of Creation, which imagined a rocket motor growing, as if from a seed, in a huge tank of milky fluid. I’m sure this retreat to a more conservative, less challenging vision was deliberate, part of the attempt to defuse the “grey goo” controversy. If we are going to be prepared for what technological change brings us, we are going to need some more challenging visions of future artefacts, and I look forward to seeing the radical concepts that the design students come up with.