Save the planet by insulating your house

A surprisingly large fraction of the energy used in developed countries goes into heating and lighting buildings – in the European Union, 40% of all energy is used in buildings. This is an obvious place to look for savings if one is trying to reduce energy consumption without compromising economic activity. A few weeks ago, I reported a talk by Colin Humphreys explaining how much energy could be saved by replacing conventional lighting with light emitting diodes. A recent report commissioned by the UK Government’s Department for Environment, Food and Rural Affairs, Environmentally beneficial nanotechnology – Barriers and Opportunities (PDF file), ranks building insulation as one of the areas in which nanotechnology could make a substantial and immediate contribution to saving energy.

The problem doesn’t arise so much from new buildings; current building regulations in the UK and the EU are quite strict, and the technologies for making very heat-efficient buildings are fairly well understood, even if they aren’t always used to the full. It is the existing building stock that is the problem. My own house illustrates this very well; its 3 foot thick solid limestone walls look as handsome and sturdy as when they were built 150 years ago, but the absence of a cavity makes them very poor insulators. To bring them up to modern insulating standards I’d need to dryline the walls with plasterboard backed by a foam-filled cavity, at a thickness that would lose a significant amount of the interior volume of the rooms. Is there some magic nanotechnology-enabled solution that would allow us to retrofit proper insulation to the existing housing stock in an acceptable way?

The claims made by manufacturers of various products in this area are not always crystal clear, so it’s worth reminding ourselves of the basic physics. Heat is transferred by convection, conduction and radiation. Stopping convection is essentially a matter of controlling the draughts. The amount of heat transmitted by conduction is proportional to the temperature difference and to a material constant called the thermal conductivity, and inversely proportional to the thickness of the material. For solids like brick, concrete and glass, thermal conductivities are around 0.6 – 0.8 W/m.K. As everyone knows, still air is a very good thermal insulator, with a thermal conductivity of 0.024 W/m.K, and the goal of traditional insulation materials, from sheep’s wool to plastic foam, is to trap air to exploit its insulating properties. Standard building insulation materials, like polyurethane foam, are actually pretty good. A typical commercial product has a thermal conductivity of 0.021 W/m.K; it manages to do a bit better than pure air because the cells of the foam are actually filled with a gas that is heavier than air.
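To make the arithmetic concrete, here is a minimal sketch in Python of the steady-state conduction sum. The conductivities are the representative values quoted above; the limestone conductivity, wall thicknesses and temperature difference are my own illustrative assumptions.

# Steady-state heat flow through a single layer, by Fourier's law:
# flux (W/m^2) = k * deltaT / thickness, and U-value = k / thickness.

def u_value(k, thickness):
    # U-value (W/m2.K) of a layer of conductivity k (W/m.K) and thickness (m)
    return k / thickness

def heat_flux(k, thickness, delta_t):
    # heat flux (W/m2) through the layer for a temperature difference delta_t (K)
    return u_value(k, thickness) * delta_t

# A 3-foot (0.91 m) solid limestone wall (assuming k ~ 0.8 W/m.K, at the
# top of the masonry range quoted above) versus 70 mm of polyurethane foam
# (k = 0.021 W/m.K), for an assumed 15 K difference between indoors and out.
for name, k, t in [("limestone, 0.91 m", 0.8, 0.91),
                   ("PU foam, 70 mm", 0.021, 0.070)]:
    print(f"{name}: U = {u_value(k, t):.2f} W/m2.K, "
          f"flux = {heat_flux(k, t, 15.0):.1f} W/m2")

Even at three feet thick, the masonry wall comes out at around U = 0.9 W/m2.K – nearly three times the heat loss through a modest 70 mm of foam.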

The best known thermal insulators are the fascinating materials known as aerogels. These are incredibly diffuse foams – their densities can be as low as 2 mg/cm3, not much more than air – that resemble nothing so much as solidified smoke. One makes an aerogel by making a cross-linked gel (typically from water soluble polymers of silica) and then drying it above the critical point of the solvent, preserving the structure of the gel, in which the strands are essentially single molecules. An aerogel can have a thermal conductivity around 0.008 W/m.K. This is substantially less than the conductivity of the air it traps, essentially because the nanoscale strands of material disrupt the transport of the gas molecules.
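The physics behind this is worth a sentence more: a gas conducts heat through molecules carrying energy over their mean free path between collisions, and when the pores confining the gas are smaller than that distance, gas-phase conduction is suppressed (the Knudsen effect). A rough kinetic-theory estimate of the mean free path, taking an assumed effective molecular diameter for air:

import math

# Mean free path of a gas molecule, from kinetic theory:
# lambda = k_B * T / (sqrt(2) * pi * d^2 * p)

K_B = 1.381e-23   # Boltzmann constant, J/K
T = 293.0         # room temperature, K
P = 101325.0      # atmospheric pressure, Pa
D = 3.7e-10       # assumed effective diameter of an air molecule, m

mean_free_path = K_B * T / (math.sqrt(2) * math.pi * D**2 * P)
print(f"mean free path of air: {mean_free_path * 1e9:.0f} nm")

This comes out at roughly 65 nm, comparable to or larger than the pores in an aerogel, so the trapped gas can no longer shuttle heat as effectively as it does in bulk.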

Aerogels have been known for a long time, mostly as a laboratory curiosity, with some applications in space where their outstanding properties have justified their very high cost. But it seems that there have been some significant process improvements that have brought the price down to a point where one could envisage using them in the building trade. One of the companies active in this area is the US-based Aspen Aerogels, which markets sheets of aerogel made, for strength, in a fabric matrix. These have a thermal conductivity in the range 0.012 – 0.015 W/m.K. This represents a worthwhile improvement on the standard PU foams. However, one shouldn’t overstate its impact; it means that to achieve a given level of thermal insulation one needs an insulating sheet a bit more than half the thickness of a standard material.
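The “a bit more than half” follows directly from the numbers: for a fixed target U-value, the required thickness scales linearly with conductivity, so the ratio of thicknesses is just the ratio of conductivities. A quick check, taking the mid-range of the figures quoted above:

# Required thickness for a target U-value is t = k / U, so two materials
# achieving the same U-value have thicknesses in the ratio of their
# thermal conductivities.

k_pu_foam = 0.021        # W/m.K, typical polyurethane foam
k_aerogel_sheet = 0.013  # W/m.K, mid-range of the 0.012 - 0.015 quoted

ratio = k_aerogel_sheet / k_pu_foam
print(f"aerogel sheet needs {ratio:.2f}x the thickness of PU foam")
# ~0.62, i.e. a bit more than half the thickness, as stated.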

Another product, from a company called Industrial Nanotech Inc, looks more radical in its impact. This is essentially an insulating paint; the makers claim that three layers of this material – Nansulate – will provide significant insulation. If true, this would be very important, as it would easily and cheaply solve the problem of retrofitting insulation to the existing housing stock. So, is the claim plausible?

The company’s website gives little in the way of detail, either of the composition of the product or, in quantitative terms, of its effectiveness as an insulator. The active ingredient is referred to as “hydro-NM-Oxide”, a term not well known in science. However, a recent patent filed by the inventor gives us some clues. US patent 7,144,522 discloses an insulating coating consisting of aerogel particles in a paint matrix. This has a thermal conductivity of 0.104 W/m.K. This is probably pretty good for a paint, but it is quite a lot worse than typical insulating foams. What, of course, makes matters much worse is that as a paint it will be applied as a very thin film (the recommended procedure is to use three coats, giving a dry thickness of 7.5 mils, a little less than 0.2 millimeters). Since one needs a thickness of at least 70 millimeters of polyurethane foam to achieve an acceptable value of thermal insulation (a U-value of 0.35 W/m2.K), it’s difficult to see how a layer that is both 350 times thinner than this and has a significantly higher thermal conductivity could make a significant contribution to the thermal insulation of a building.
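Putting the patent’s numbers through the same conduction sum makes the point starkly; this sketch uses only the figures quoted above.

# Compare the insulating paint of US patent 7,144,522 with 70 mm of
# polyurethane foam. U-value of a single layer = k / thickness;
# lower means better insulation.

k_paint = 0.104    # W/m.K, from the patent
t_paint = 0.0002   # m, ~7.5 mils dry thickness from three coats

k_foam = 0.021     # W/m.K, typical polyurethane foam
t_foam = 0.070     # m

u_paint = k_paint / t_paint
u_foam = k_foam / t_foam
print(f"paint: U = {u_paint:.0f} W/m2.K")    # ~520
print(f"foam:  U = {u_foam:.2f} W/m2.K")     # 0.30
print(f"paint layer passes ~{u_paint / u_foam:.0f}x more heat than the foam")

The paint layer falls short of the foam by more than three orders of magnitude: whatever its merits as a coating, it cannot substitute for bulk insulation.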

More on synthetic biology and nanotechnology

There’s a lot of recent commentary about synthetic biology on Homunculus, the consistently interesting blog of the science writer Philip Ball. There’s lots more detail about the story of the first bacterial genome transplant that I referred to in my last post; Ball’s commentary on the story was published last week as a Nature News and Views article (subscription required).

Philip Ball was a participant in a recent symposium organised by the Kavli Foundation, “The merging of bio and nano: towards cyborg cells”. The participants in this produced an interesting statement: A vision for the convergence of synthetic biology and nanotechnology. The signatories to this statement include some very eminent figures, both from synthetic biology and from bionanotechnology, including Cees Dekker, Angela Belcher, Steven Chu and John Glass. Although the statement is bullish on the potential of synthetic biology for addressing problems such as renewable energy and medicine, it is considerably more nuanced than the sorts of statements reported by the recent New York Times article.

The case for a linkage between synthetic biology and bionanotechnology is well made at the outset: “Since the nanoscale is also the natural scale on which living cells organize matter, we are now seeing a convergence in which molecular biology offers inspiration and components to nanotechnology, while nanotechnology has provided new tools and techniques for probing the fundamental processes of cell biology. Synthetic biology looks sure to profit from this trend.” The writers divide the enabling technologies for synthetic biology into hardware and software. For this perspective on synthetic biology, which concentrates on the idea of reprogramming existing cells with synthetic genomes, the crucial hardware is the capability for cheap, accurate DNA synthesis, about which they write: “The ability to sequence and manufacture DNA is growing exponentially, with costs dropping by a factor of two every two years. The construction of arbitrary genetic sequences comparable to the genome size of simple organisms is now possible.” This, of course, also has implications for the use of DNA as a building block for designed nanostructures and devices (see here for an example).

The authors are much more cautious on the software side. “Less clear are the design rules for this remarkable new technology—the software. We have decoded the letters in which life’s instructions are written, and we now understand many of the words – the genes. But we have come to realize that the language is highly complex and context-dependent: meaning comes not from linear strings of words but from networks of interconnections, with its own entwined grammar. For this reason, the ability to write new stories is currently beyond our ability – although we are starting to master simple couplets. Understanding the relative merits of rational design and evolutionary trial-and-error in this endeavor is a major challenge that will take years if not decades.”

The new new thing

It’s fairly clear that nanotechnology is no longer the new new thing. A recent story in Business Week – Nanotech Disappoints in Europe – is not atypical. It takes its lead from the recent difficulties of the UK nanotech company Oxonica, which it describes as emblematic of the nanotechnology sector as a whole: “a story of early promise, huge hype, and dashed hopes.” Meanwhile, in the slightly neophilic world of the think-tanks, one detects the onset of a certain boredom with the subject. For example, Jack Stilgoe writes on the Demos blog “We have had huge fun running around in the nanoworld for the last three years. But there is a sense that, as the term ‘nanotechnology’ becomes less and less useful for describing the diversity of science that is being done, interesting challenges lie elsewhere… But where?”

Where indeed? A strong candidate for the next new new thing is surely synthetic biology. (This will not, of course, be new to regular Soft Machines readers, who will have read about it here two years ago.) An article in the New York Times at the weekend gives a good summary of some of the claims. The trigger for the recent prominence of synthetic biology in the news is probably the recent announcement from the Craig Venter Institute of the first bacterial genome transplant. This refers to an advance paper in Science (abstract, subscription required for full article) by John Glass and coworkers. There are some interesting observations on this in a commentary (subscription required) in Science. It’s clear that much remains to be clarified about this experiment: “But the advance remains somewhat mysterious. Glass says he doesn’t fully understand why the genome transplant succeeded, and it’s not clear how applicable their technique will be to other microbes.” The commentary from other scientists is interesting: “Microbial geneticist Antoine Danchin of the Pasteur Institute in Paris calls the experiment “an exceptional technical feat.” Yet, he laments, “many controls are missing.” And that has prevented Glass’s team, as well as independent scientists, from truly understanding how the introduced DNA takes over the host cell.”

The technical challenges of this new field haven’t prevented activists from drawing attention to its potential downsides. Those veterans of anti-nanotechnology campaigning, the ETC group, have issued a report on synthetic biology, Extreme Genetic Engineering, noting that “Today, scientists aren’t just mapping genomes and manipulating genes, they’re building life from scratch – and they’re doing it in the absence of societal debate and regulatory oversight”. Meanwhile, the Royal Society has issued a call for views on the subject.

Looking again at the NY Times article, one can perhaps detect some interesting parallels with the way the earlier nanotechnology debate unfolded. We see, for example, some fairly unrealistic expectations being raised: ““Grow a house” is on the to-do list of the M.I.T. Synthetic Biology Working Group, presumably meaning that an acorn might be reprogrammed to generate walls, oak floors and a roof instead of the usual trunk and branches. “Take over Mars. And then Venus. And then Earth” – the last items on this modest agenda.” And just as the radical predictions of nanotechnology were underpinned by what were in my view inappropriate analogies with mechanical engineering, much of the talk in synthetic biology is underpinned by explicit, but as yet unproven, parallels between cell biology and computer science: “Most people in synthetic biology are engineers who have invaded genetics. They have brought with them a vocabulary derived from circuit design and software development that they seek to impose on the softer substance of biology. They talk of modules — meaning networks of genes assembled to perform some standard function — and of “booting up” a cell with new DNA-based instructions, much the way someone gets a computer going.”

It will be interesting to see how the field of synthetic biology develops, and whether it does a better job of steering between overpromised benefits and overdramatised fears than nanotechnology arguably did. Meanwhile, nanotechnology won’t be going away. Even the sceptical Business Week article concluded that better times lay ahead as the focus in commercialising nanotechnology moved from simple applications of nanoparticles to more sophisticated applications of nanoscale devices: “Potentially even more important is the upcoming shift from nanotech materials to applications—especially in health care and pharmaceuticals. These are fields where Europe is historically strong and already has sophisticated business networks.”

Enough talk already

Nature this week carries an editorial about the recent flurry of activity around public engagement over nanotechnology. This is generally upbeat and approving, reporting the positive side of the messages from the final report of the Nanotechnology Engagement Group, and highlighting some of the interesting outcomes of the Nanodialogues experiments. The Software Control of Matter blog even gets a mention as a “taste of true upstream thinking by nanoscientists”.

As usual, the editorial castigates the governments of the USA and the UK for not responding to the results of this public engagement, particularly in failing to get enough research going on potential environmental and health risks of nanoparticles. “These governments and others not only need to act on this outcome of public engagement, but must also integrate such processes into their departments’ and agencies’ activities.” To be fair, I think we are beginning to see the start of this, in the UK at least.

Nanotechnology for solar cells

This month’s issue of Physics World has a useful article giving an overview of the possible applications of nanotechnology to solar cells, under the strapline “Nanotechnology could transform solar cells from niche products to devices that provide a significant fraction of the world’s energy”.

The article discusses both the high road to nano-solar, using the sophistication of semiconductor nanotechnology to make highly efficient (but expensive) solar cells, and the low road, which uses dye-sensitised nanoparticles or semiconducting polymers to make relatively inefficient, but potentially very cheap, materials. One thing the article doesn’t talk about much is the issue of production and scaling, which is currently the main barrier in the way of these materials fully meeting their potential. We will undoubtedly hear much more about this over the coming months and years.

Shot by both sides

Tuesday’s launch event for two new reports on public engagement in nanotechnology – All Talk? Nanotechnologies and public engagement – was a packed and interesting day. I didn’t, though, go home in an entirely optimistic frame of mind, and that wasn’t just because of the very real difficulties I had travelling through the flooded English midlands.

The morning was spent in presentations of, and discussions arising from, the two reports. The Demos project, Nanodialogues (the full report can be downloaded here), is presented as a series of experiments in public engagement, and that’s a very apt way of putting it. Each of the four activities addressed a very different aspect of science policy, and managed to throw a great deal of light on areas that have in the past stayed out of sight. The most straightforward of these exercises concerned a relatively simple and bounded policy choice – should iron nanoparticles be used in environmental remediation? An exercise carried out with the research councils proved problematic, illustrating just how strange and remote the nuts and bolts of government science funding procedures can be when looked at with fresh eyes. The third project was carried out in Zimbabwe, and very graphically illustrated the practical gulf between the rhetoric one sees about how nanotechnology might help the developing world and the reality of the problems that exist there. The fourth experiment found a way into the notoriously closed world of business, bringing together focus groups and corporate researchers at the laboratories of Unilever, the multinational behind household brands such as Sunsilk shampoo, Dove soaps, Ponds skin creams and Surf soap powder. Here what was explored was the tension between the company’s view that they are innovating to respond to consumer demand, and the slightly different view of some of the public that “innovation is not following their needs; it is imagining their wants, fulfilling them and leading them somewhere.”

The final report of the Nanotechnology Engagement Group summarises lessons learnt from the variety of public engagement exercises that have taken place in the UK around nanotechnology in the last couple of years. The report highlights some of the difficulties that have been encountered. Sometimes there has been a lack of clarity about the purpose of public engagement, which has led to frustration and disappointment among participants and sponsors. It has sometimes not been clear how the results of public engagement feed back into policy. But, to be more positive, there is a great deal of evidence from participants of how rewarding these exercises can be. This was underlined at the event by presentations from two of the members of the public who were involved. Bill Cusack, from Halifax, who took part in Nanojury UK, and Deborah Perry, from London, who took part in Nanodialogues, both spoke eloquently about the rewards of taking part, as well as some of the frustrations they felt.

One of the recommendations of the NEG report was that the institutions that commission or participate in public engagement exercises should always be prompt in making a public response. The Research Councils who supported Nanodialogues have made a response, which can be read here. I’m quite optimistic that EPSRC, at least, will respond in quite a substantial way, and will develop further public engagement activities that feed directly into policy decisions.

If, after the morning session, I was feeling reasonably optimistic that, despite well-recognised difficulties, a consensus was developing about how to incorporate public engagement into scientific policy-making, by the end of the afternoon sessions, “Where next for public engagement in science” and “A new social contract for science?”, I was left thinking that this optimism might be naive and Pollyanna-ish. It became clear that there were some strongly held views in opposition to this comfortable position, from both sides. On the one hand, there was the view that Science itself provides clear answers to policy questions; for example, given the correct information, the need for GM food to feed the world’s population and nuclear energy to power it would become obvious. As for policy, we have a representative democracy to ensure that the people’s views are represented; it is politicians, not opinion polls, who ought to decide these issues, and public engagement is really just a question of the print and broadcast media informing people. On the other hand, we heard that the direct action and controversy-stoking of the social movements are the only way that opposing views can properly be heard, and that irrationality is a legitimate tool in the face of the entrenched hegemony of technoscience.

So, by the end, I was feeling rather lonely in my Menshevik position of looking for moderate progress within the bounds of societal and institutional constraints. I wasn’t the only one to be feeling uncomfortable, though, as this account of the meeting from industrial scientist turned policy maker David Bott makes clear.

Nano on the Today program

The BBC’s morning radio news show – Today – ran a couple of items about nanotechnology this morning, and I made a brief appearance myself. The occasion was the launch of the nano-task force that I wrote about on Friday. The highlights of the program can be downloaded as an mp3 file.

The coverage was relatively positive in tone, focusing on the potential importance of nanotechnology in areas like sustainable energy, but pointing out the strength of the competition from other countries. But it has to be said that (as Tim Harper notes) it wasn’t hugely clear after the interviews what people thought actually needed to be done. To be honest, this question wasn’t that much clearer at the meeting in the Houses of Parliament itself; the discussion didn’t really find much of a focus.

I was slightly surprised to get a call this evening from the Today program yet again, who were wondering whether to run another item about nanotechnology tomorrow, in connection with the Demos/NEG launch. But it looks like nanotechnology is going to be displaced by coverage of the rain and floods that have afflicted the country today. I’m not surprised; I can’t remember rain so heavy, and it certainly made my journey down to London today very painful, taking me 6 hours for what’s normally a 2 hour train ride.

Nanotechnology in the UK news next week

Some high profile events in London next week mean that nanotechnology may move a little way up the UK news agenda. On Monday, there’s an event at the Houses of Parliament: Nano Task Force Conference: Nanotechnology – is Britain leading the way? The Nano Task Force in question is a ginger group set up by Ravi Silva, at the University of Surrey, with political support from Ian Gibson MP. Gibson is a Labour Member of Parliament, one of the rare breed of legislators with a science PhD, and he has a reputation for being somewhat independent-minded.

On Tuesday, public engagement is the theme, with an all-day event, “All Talk? Nanotechnologies and public engagement”, at the Institute of Physics. This is a joint launch; the thinktank Demos and the Nanotechnology Engagement Group are both launching reports. The Demos report covers a series of public engagement exercises, The Nanodialogues, while the Nanotechnology Engagement Group’s final report is an overview of the lessons learnt from all the engagement activities around nanotechnology conducted so far in the UK. The keynote speaker is Sir David King, the government’s chief scientific advisor.

I’m involved in both, giving a talk on the potential of nanotechnology for sustainable energy on Monday, and on Tuesday chairing one session and sitting on the panel for another. Other participants include Sheila Jasanoff from Harvard; David Edgerton, the author of the recently published book “The Shock of the Old”; Ben Goldacre, the writer of the Guardian’s entertaining ‘Bad science’ column; Andy Stirling, from Sussex; James Wilsdon and Jack Stilgoe from Demos; Doug Parr from Greenpeace; and David Guston, the Director of the Center for Nanotechnology in Society at Arizona State University. It promises to be a fascinating day.

The Kroto Research Institute

You wait for years for an interdisciplinary nanoscience and nanotechnology centre to be opened somewhere in the English Midlands or south Yorkshire, and then two come along at once. Having spent yesterday 40 miles south of Sheffield, in Nottingham, at the official opening of the Nottingham nanoscience and nanotechnology centre, today I’m back home in Sheffield for the official opening of the Kroto Research Institute and Centre for Nanoscale Science and Technology. As yesterday, the man doing the opening was the Nobel Laureate Sir Harry Kroto (Harry is an alumnus of Sheffield).

Actually, the Kroto centre covers a little more than just nanotechnology. It houses the UK’s national facility for fabricating nanostructures from III-V semiconductors; a well-equipped microscopy facility, which will soon commission an aberration-corrected high resolution electron microscope capable of chemical analysis at the single-atom level; and a tissue engineering centre which spans the range from surface analysis to putting cultured skin onto patients. But there’s also a centre for computational biology, one for environmental engineering, and one for virtual reality.

Having talked about the Nottingham centre yesterday, I should say something about the ways in which our two operations complement each other. Nottingham has what’s probably the best department of pharmacy in the country; they have long operated at the nanoscale, and have been leaders in applying surface science and scanning probe techniques to look at systems of biological and biomedical interest. And when they talk about nanomedicine, they have the strong links with the pharmaceutical industry that are needed to turn ideas into therapies. They’ve been successful in collaborating with the Department of Physics, whose interest in applying physical techniques to biological systems goes back to the discovery there of magnetic resonance imaging. Like Sheffield, they have real strength in semiconductor nanotechnology, and they also have the UK’s leaders in single molecule manipulation using scanning probe techniques.

There are already some major collaborations between Nottingham and Sheffield. These include the Nanorobotics project, which aims to combine nanoscale actuator technology with live electron microscopy observation, each at a resolution down to 0.1 nm. The Snomipede project, which also includes Glasgow and Manchester, aims to combine near-field scanning probe microscopy, as a way of patterning molecules, with massive parallelisation of the kind familiar from the IBM millipede technology. There is undoubtedly room for more collaboration between the two universities in this area. One should probably never regret all those failed research proposals one has put in, but back in 2000 we did put together a joint bid with Leeds to host one of the two Interdisciplinary Research Collaborations in Nanotechnology that were being funded then. The money went to Oxford and Cambridge, and I don’t want to cast aspersions on the good work that’s come out of both places, but I’m sure we would have done a good job.

The Nottingham nanotechnology and nanoscience centre

Today saw the official opening of the Nottingham nanotechnology and nanoscience centre, which brings together some existing strong research areas across the University. I’ve made the short journey down the motorway from Sheffield to listen to a very high quality program of talks, with Sir Harry Kroto, co-discoverer of buckminsterfullerene, topping the bill. Also speaking were Don Eigler, from IBM (the originator of perhaps the most iconic image in all nanotechnology, the IBM logo made from individual atoms), Colin Humphreys, from the University of Cambridge, and Sir Fraser Stoddart, from UCLA.

There were some common themes in the first two talks (common, also, with Wade Adams’s talk in Norway, described below). Both talked about the great problems of the world, and looked to nanotechnology to solve them. For Colin Humphreys, the solutions to the problems of sustainable energy and clean water are to be found in the material gallium nitride, or more precisely in the compounds of aluminium, indium and gallium nitride which allow one to make not just blue light emitting diodes, but LEDs that can emit light of any wavelength between the infra-red and the deep ultra-violet. Gallium nitride based blue LEDs were invented as recently as 1996 by Shuji Nakamura, but this is already a $4 billion market, and everyone will be familiar with torches and bicycle lights using them.

How can this help the problem of access to clean drinking water? We should remind ourselves that 10% of world child mortality is directly related to poor water quality, and that half the hospital beds in the world are occupied by people with water-related diseases. One solution would be to use deep ultraviolet light to sterilise contaminated water. Deep UV works well for sterilisation because biological organisms never developed a tolerance to these wavelengths, which don’t penetrate the atmosphere. UV at a wavelength of 270 nm does the job well, but existing lamps are not practical because they need high voltages and are not efficient, and some contain mercury. AlGaN LEDs work well, and in principle they could be powered by solar cells at 4 V, which might allow every household to sterilise its water supply easily and cheaply. The problem is that the efficiency is still too low to treat flowing water. At blue wavelengths (400 nm) efficiency is very good, at 70%, but it drops precipitously at shorter wavelengths, and this is not yet understood theoretically.
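As an aside, a quick sum shows why a few-volt supply is the right scale for these devices: the energy of the emitted photon sets a floor for the LED’s operating voltage. A minimal sketch:

# Photon energy E = h * c / lambda. Expressed in electron-volts, it is
# numerically the minimum voltage scale an LED needs to emit that photon.

H = 6.626e-34         # Planck constant, J.s
C = 2.998e8           # speed of light, m/s
E_CHARGE = 1.602e-19  # elementary charge, C

for wavelength_nm in (270.0, 400.0):
    energy_ev = H * C / (wavelength_nm * 1e-9) / E_CHARGE
    print(f"{wavelength_nm:.0f} nm photon: {energy_ev:.2f} eV")

A 270 nm photon carries about 4.6 eV, against about 3.1 eV for blue light at 400 nm, so a deep-UV LED needs a forward voltage in the region of 4 to 5 volts – the sort of voltage a small stack of solar cells can supply.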

The contribution of solid state lighting to the energy crisis arises from the efficiency of LEDs compared to tungsten light bulbs. People often underestimate the amount of energy used in lighting domestic and commercial buildings. Globally, it accounts for 1,900 megatonnes of CO2; this is 70% of the total emissions from cars, and three times the amount due to aviation. In the UK, it amounts to 20% of electricity generated, and in Thailand, for example, it is even more, at 40%. But tungsten light bulbs, which account for 79% of sales, have an efficiency of only 5%. There is much talk now of banning tungsten light bulbs, but the replacement, fluorescent lights, is not perfect either. Compact fluorescents have an efficiency of 15%, which is an improvement, but what is less well appreciated is that each bulb contains 4 mg of mercury. This would lead to tonnes of mercury ending up in landfills if tungsten bulbs were replaced by compact fluorescents.
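To see how those milligrams add up, here is a back-of-envelope sum; the number of lamps is my own illustrative assumption, not a figure from the talk.

# Mercury sent to landfill if filament bulbs are swapped for compact
# fluorescents, at the 4 mg of mercury per bulb quoted in the talk.

MG_PER_BULB = 4.0

# Illustrative assumption (mine): ~25 million UK households, averaging
# ~10 light bulbs each.
bulbs_uk = 25e6 * 10

tonnes_uk = bulbs_uk * MG_PER_BULB / 1e9   # mg -> tonnes
print(f"one UK-wide replacement cycle: {tonnes_uk:.1f} tonnes of mercury")

That is roughly a tonne of mercury for a single UK-wide replacement cycle; repeated over the bulbs’ few-year lifetimes, and scaled up worldwide, the total easily runs to many tonnes.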

Could solid-state lighting do the job? Currently what you can buy are blue LEDs (made from InGaN) which excite a yellow phosphor. The colour balance of these leaves something to be desired, and soon we will see blue or UV LEDs exciting red/green/blue phosphors, which will have a much better colour balance (you could also use a combination of red, green and blue LEDs, but currently green efficiencies are too low). The best efficiency in a commercial white LED is 30% (from Seoul Semiconductor), but the best in the lab (Nichia) is currently 50%. The target is an efficiency of 50-80% at high drive currents, which would put them at a higher efficiency than the current most efficient light source, the sodium lamp, whose familiar orange glow converts electricity at 45% efficiency. This target would make them 10 times more efficient than filament bulbs and 3 times more efficient than compact fluorescents, with no mercury. In the US, replacing 50% of filament bulbs would save 41 GW of power station capacity; in the UK, 100% replacement would save 8 GW. The problem at the moment is cost, but the rapidity of progress in this area means that Humphreys is confident that costs will fall dramatically within a few years.
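The multiples quoted follow directly from those efficiency figures; a quick check at the bottom of the target range:

# Check the efficiency multiples quoted for solid-state lighting,
# using the figures from the talk.

efficiencies = {
    "tungsten filament": 0.05,
    "compact fluorescent": 0.15,
    "sodium lamp": 0.45,
}
led_target = 0.50   # lower bound of the 50-80% target

for name, eff in efficiencies.items():
    print(f"LED target is {led_target / eff:.1f}x the {name}")

That gives 10x the filament bulb and about 3.3x the compact fluorescent, as quoted, with even the bottom of the target range edging past the sodium lamp.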

Don Eigler also talked about societal challenges, but with a somewhat different emphasis. His talk was entitled “Nanotechnology: the challenge of a new frontier”. The questions he asked were “What challenges do we face as a society in dealing with this new frontier of nanotechnology, and how should we as a society make decisions about a new technology like nanotechnology?”

There are three types of nanotechnology, he said: evolutionary nanotechnology (historically larger technologies that have been shrunk to nanoscale dimensions), revolutionary nanotechnology (entirely new nanometer-scale technologies) and natural nanotechnology (cell biology, offering inspiration for our own technologies). Evolutionary nanotechnologies include semiconductor devices and nanoparticles in cosmetics. Revolutionary nanotechnologies include carbon nanotubes, for potential new logic structures that might supplant silicon, and the IBM millipede data storage system. Natural nanotechnologies include bacterial flagellar motors.

Nanohysteria comes in different varieties too. Type 1 nanohysteria is represented by greed-driven “irrational exuberance”, and is based on the idea that nanotechnology will change everything very soon, as touted by investment tipsters and consultants who want to take people’s money off them. What’s wrong with this is the absence of critical thought. Type 2 nanohysteria is the opposite – fear-driven irrational paranoia, exemplified by the grey goo scenario of out-of-control self-replicating molecular assemblers or nanobots. What’s wrong with this is, again, the absence of critical thought. Prediction is difficult, but Eigler thinks that self-replicating nanobots are not going to happen any time soon, if ever.

What else do people fear about nanotechnology? Eigler recently met a young person with strong views: that nanotech is scary, that it will harm the biosphere, that it will create new weapons, that it is being driven by greedy individuals and corporations – in summary, that it is not just wrong, it is evil. Where did these ideas come from? If you look on the web, you see talk of superweapons made from molecular assemblers. What you don’t find on the web are statements like “My grandmother is still alive today because nanotechnology saved her life”. Why is this? Nanotechnology has not yet provided a tangible benefit to grandmothers!

Some candidates include gold nanoshell cancer therapy, as developed by Naomi Halas at Rice. This particular therapy may not work in humans, but something similar will. Another example is the work of Sam Stupp at Northwestern, making nanofibers that cause neural progenitor cells to turn into new neurons rather than scar tissue, holding out the hope of regenerative medicine to repair spinal cord damage.

As an example of how intuitions can mislead, Eigler cited the smallest logic circuit, 12 nm by 17 nm, which he made from carbon monoxide molecules. But carbon monoxide is a deadly poison – shouldn’t we worry about this? Let’s do the sum: 18 CO molecules are needed for one transistor, and I breathe 2 billion trillion molecules a day, so every day I breathe enough to make 160 million computers.
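The scale of that sum is easy to check from the figures as reported:

# Eigler's sum, using the figures quoted above.

molecules_per_day = 2e21        # "2 billion trillion" molecules a day
molecules_per_transistor = 18   # CO molecules per transistor

transistors_per_day = molecules_per_day / molecules_per_transistor
print(f"{transistors_per_day:.1e} transistors' worth of CO per day")

That is around 10^20 transistors’ worth of carbon monoxide a day – comfortably enough for the 160 million computers quoted, on any reasonable assumption about how many transistors a computer needs. The point stands: the carbon monoxide in a nanoscale circuit is a vanishingly small quantity.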

What could the green side of nanotechnology be? We could have better materials that are lighter, stronger and more easily recyclable, and this will reduce energy consumption. Perhaps we can use nanotechnology to reduce the consumption of natural resources and to help recycling. We can’t prove yet that these benefits will follow, but Eigler believes they are likely.

There is a real risk from nanotechnology if it is used without evaluating the consequences; the widespread introduction of nanoparticulates into the environment would be an example of this. So how do we know if something is safe? We need to think it through, but we can’t guarantee that anything is absolutely safe. The principles should be that we eliminate fantasies, understand the different motivations that people have, and honestly assess risk and benefit. We need informed discussion that is critical, creative, inclusive and respectful. We need to speak with knowledge and respect, and listen with zeal. Scientists have not always been good at this, and we need to get much better. Our best weapons are our traditions of rigorous honesty and our tolerance for diverse beliefs.