Who do we think we are?

I’m grateful for this glowing endorsement from TNTlog, and I’m impressed that it takes as few as two scientist bloggers to make a trend. But I’m embarrassed that Howard Lovy’s response seems to have taken the implied criticism so personally. I’ve always enjoyed reading NanoBot. I don’t always agree with Howard’s take on various issues, but he’s always got interesting things to say and his insistence on the importance of appreciating the interaction between nanotechnology and wider culture is spot-on.

But I think Howard’s pained sarcasm – “Scientists, go write about yourselves, and we in the public will read with wide-eyed wonder about the amazing work you’re doing and thank you for lowering yourselves to speak what you consider to be our language” – misses the mark. There are many ways in which scientists can contribute to this debate besides this crude and demeaning de haut en bas caricature, and many of them reflect real deficiencies in the ways in which mainstream journalists cover science.

To many journalists, science is marked by breakthroughs, which are conveniently announced by press releases from publicity-hungry university or corporate press offices, or from the highly effective news offices of the scientific glamour magazines, Nature and Science. But scientists never read press releases, and they very rarely write them, because the culture of science doesn’t marry at all well with the event-driven mode of working of journalism. Very occasionally, real breakthroughs are made, though often their significance isn’t recognised at the time. But the usual pattern is of incremental advances, continuous progress and a mixture of cooperation and competition between labs across the world working in the same area. If scientists can write about science as it really is practised, with all its debates and uncertainties, unfiltered by press offices, that seems to me to be entirely positive. It’s also less likely, rather than more likely, to lead to the glorification and self-aggrandisement of scientists that Howard seems to think is our aim.

Artificial life and biomimetic nanotechnology

Last week’s New Scientist contained an article on the prospects for creating a crude version of artificial life (teaser here), based mainly on the proposals of Steen Rasmussen’s Protocell project at Los Alamos. Creating a self-replicating system with a metabolism, capable of interacting with its environment and evolving, would be a big step towards a truly radical nanotechnology, as well as giving us a lot of insight into how our form of life might have begun.

More details of Rasmussen’s scheme are given here, and some detailed background information can be found in this review in Science (subscription required), which discusses a number of approaches being taken around the world (see also this site, with links to research around the world, also run by Rasmussen). Minimal life probably needs some way of separating the organism from its environment, and Rasmussen proposes the most obvious route of using self-assembled lipid micelles as his “protocells”. The twist is that the lipids are generated by light activation of an oil-soluble precursor, which effectively constitutes part of the organism’s food supply. Genetic information is carried in a peptide nucleic acid (PNA), which reproduces itself in the presence of short precursor PNA molecules, which also need to be supplied externally. The claim is that “this is the first explicit proposal that integrates genetics, metabolism, and containment in one chemical system”.

It’s important to realise that this, currently, is just that – a proposal. The project is just getting going, as is a closely related European Union funded project PACE (for programmable artificial cell evolution). But it’s a sign that momentum is gathering behind the notion that the best way to implement radical nanotechnology is to try and emulate the design philosophies that cell biology uses.

If this excites you enough that you want to invest your own money in it, the associated company Protolife is looking for first round investment funding. Meanwhile, a cheaper way to keep up with developments might be to follow this new blog on complexity, nanotechnology and bio-computing from Exeter University based computer scientist Martyn Amos.

Making and doing

Eric Drexler is quoted in Adam Keiper’s report from the NRC nanotechnology workshop in DC as saying:

“What’s on my wish list: … A clear endorsement of the idea that molecular machine systems that make things … with atomic precision is a natural and important goal for the development of nanoscale technologies … with the focus of that endorsement being the recognition that we can look at biology, and beyond…. It would be good to have more minds, more critical thought, more innovation, applied in those directions.”

I almost completely agree with this, particularly the bit about looking at biology and beyond. Why only almost? Because “systems that make things” should only be a small part of the story. We need systems that do things – we need to process energy, process information, and, in the vital area of nanomedicine, interact with the cells that make up humans and their molecular components. This makes a big difference to the materials we choose to work with. Leaving aside, for the moment, the question of whether Drexler’s vision of diamondoid-based nanotechnology can be made to work at all, let’s ask the question, why diamond? It’s easy to see why you would want to use diamond for structural applications, as it is strong and stiff. But its bandgap is too big for optoelectronic applications (like solar cells) and its use in medicine will be limited by the fact that it probably isn’t that biocompatible.

In the very interesting audio clip that Adam Keiper posts on Howard Lovy’s Nanobot, Drexler goes on to compare the potential of universal, general purpose manufacture with that of general purpose computing. Who would have thought, he asks (I paraphrase from memory here), that we could have one machine that we can use to do spreadsheets, play our music and watch movies on? Who indeed? … but this technology depends on the fact that documents, music and moving pictures can all be represented by 1’s and 0’s. For the idea of general purpose manufacturing to be convincing, one would need to believe that there was an analogous way in which all material things could be represented by a simple low level code. I think this leads to an insoluble dilemma – the need to find simple low level operations drives one to use a minimum number – preferably one – basic mechanosynthesis step. But in limiting ourselves in this way, we make life very difficult for ourselves in trying to achieve the broad range of functions and actions that we are going to want these artefacts for. Material properties are multidimensional, and it’s difficult to believe that one material can meet all our needs.
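The force of this comparison is easier to see if you make the digital premise concrete. As a trivial sketch, any document reduces to a string of bits – here, four characters of text become 32 bits:

```python
# General-purpose computing rests on one universal low-level
# representation: text, audio and video all reduce to bits.
# Here, four ASCII characters are turned into their 32-bit encoding.
text = "nano"
bits = "".join(f"{byte:08b}" for byte in text.encode("ascii"))
print(bits)
```

There is no analogous universal encoding for matter: a material’s stiffness, bandgap and biocompatibility cannot all be recovered from one low-level building block, which is the nub of the argument above.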

Matter is not digital.

The irresistible rise of nano-gizmology

What would happen if nanotechnology suddenly went out of fashion in the academic world, all the big nano-funding initiatives dried up, and putting the nano word in grant applications doomed them to certain failure? Would all the scientists who currently label themselves nanoscientists just go back to being chemists and physicists as before? This interesting question was posed to me on Monday during a fascinating afternoon seminar and discussion with social scientists from the Institute for Environment, Philosophy and Public Policy at Lancaster University.

My first reaction was to say that nothing would change. Scientists can be a cynical bunch when it comes to funding, and it’s tempting to assume that they would just relabel their work yet again to conform with whatever the new fashion was, and carry on just as before. But on reflection my answer was that the rise of nanoscience and nanotechnology as a label in academic science has been accompanied by two real and lasting cultural changes. The first is so well-rehearsed that it’s a cliché to say it, but it is nonetheless true – nanoscientists really have got used to interdisciplinary working in a way that was very rare in academia twenty years ago (of course, it has always been the rule in industry). The second change is less obvious, though I think I first noticed it as a marked change six or seven years ago. This was a shift in emphasis away from testing theories and characterising materials towards making widgets or gizmos – things that, although usually still far away from being a real, viable product, did something or produced some functional effect. More than any use of the label “nano”, this seems to me to be a lasting change in the way scientists judge the value of their own and other people’s work; it’s certainly very much reflected in the editorial policies of the glamour journals like Nature and Science. Some will mourn the eclipse of the values of pure science, while others will anticipate a more direct economic return on our societies’ investments in science as a result, but it remains to be seen what the overall outcome of this shift will be.

Nanotech at Hewlett-Packard

There’s a nice piece in Slate by Paul Boutin reporting on his trip round the Hewlett-Packard labs in Palo Alto. The opening stresses the evolutionary, short-term character of the research going on there, noting that these projects only get funded if they are going to make a fast return for the company, usually within five years. The first projects he mentions are about RFID (radio frequency identification), and these are discussed in terms of Walmart, supply chains and keeping track of your pallets. I can relate to this because my wife used to be a production planner. She used to wake up in the night worrying about whether there were enough plastic overcaps in the warehouse to pack the next week’s production, but she knew that the only way to find out for sure, despite all their smart SAP systems, was to walk down to the warehouse and look. But despite these mundane immediate applications, it’s the technologies that will underlie RFID that have such uncomfortable implications for a universal surveillance society.

The article moves on to talk about HP’s widely reported recent development of crossbar latches as a key component for molecular electronic logic circuits (see for example this BBC report, complete with a good commentary from Soft Machines’ frequent visitor, Philip Moriarty). The author rightly highlights the need to develop new, defect-tolerant computer architectures if these developments in molecular electronics are to be converted into useful products. This nicely illustrates the point I made below, that in nanotechnology you may well need to develop systems architectures that accommodate the physical realities of the nanoscale, rather than designing the architecture first and hoping that you’ll be able to find low-level operations that will suit your preconceived notions.

The mechanosynthesis debate

Now that the 130 or so people who have downloaded the Moriarty/Phoenix debate about mechanosynthesis so far have had a chance to digest it, here are some thoughts that occur to me on re-reading it. This is certainly not a summary of the debate; rather these are just a few of the many issues that emerge that I think are important.

On definitions. A lot of the debate revolves around questions of how one actually defines mechanosynthesis. There’s an important general point here – one can have both narrow definitions and broad definitions, and there’s a clear role for both. For example, I have been very concerned in everything I write to distinguish between the broad concept of radical nanotechnology, and the specific realisation of a radical nanotechnology that is proposed by Drexler in Nanosystems. But we need to be careful not to imagine that a finding that supports a broad concept necessarily also validates the narrower definition. So, to use an example that I think is very important, the existence of cell biology is compelling evidence that a radical nanotechnology is possible, but it doesn’t provide any evidence that the Drexlerian version of the vision is workable. Philip’s insistence on a precise definition of mechanosynthesis in distinction from the wider class of single molecule manipulation experiments stems from his strongly held view that the latter don’t yet provide enough evidence for the former. Chris, on the other hand, is in favour of broader definitions, on the grounds that if the narrowly defined approach doesn’t work, then one can try something else. This is fair enough if one is prepared to be led to wherever the experiments take you, but I don’t think it’s consistent with having a very closely defined goal like the Nanosystems vision of diamondoid based MNT. If you let the science dictate where you go (and I don’t think you have any choice but to do this), your path will probably take you somewhere interesting and useful, but it’s probably not going to be the destination you set out towards.

On the need for low-level detail. The debate makes very clear the distinction between the high-level systems approach exemplified by Nanosystems and by the Phoenix nanofactory paper, and the need to work out the details at the “machine language” level. “Black-boxing” the low-level complications can only take you so far; at some point one needs to work out what the elementary “machine language” operations are going to be, or even whether they are possible at all. Moreover, the nature of these elementary operations can’t always be divorced from the higher level architecture. A good example comes from the operation of genetics, where the details of the interaction between DNA, RNA and proteins means that the distinction between hardware and software that we are used to can’t be sustained.

On the role of background knowledge from nanoscience. A widely held view in the MNT community is that very little research has been done in pursuit of the Drexlerian project since the publication of Nanosystems. This is certainly true in the sense that science funding bodies haven’t supported an overtly Drexlerian research project; but it neglects the huge amount of work in nanoscience that has a direct bearing, in detail, on the proposals in Nanosystems and related work. This varies from the centrally relevant work done by groups (including the Nottingham group, and a number of other groups around the world) which are actively developing the manipulation of single molecules by scanning probe techniques, to the important background knowledge accumulated by very many groups round the world in areas such as surface and cluster physics and chemical vapour deposition. This (predominantly experimental) work has greatly clarified how the world at the nanoscale works, and it should go without saying that theoretical proposals that aren’t consistent with the understanding gained in this enterprise aren’t worth pursuing. Commentators from the MNT community are scornful, with some justification, of nanoscientists who make pronouncements about the viability of the Drexlerian vision of nanotechnology without having acquainted themselves with the relevant literature, for example by reading Nanosystems. But this obligation to read the literature goes both ways.

I think the debate has moved us further forward. I think it is clear that the Freitas proposal that sparked the discussion off does have serious problems that will probably prevent its implementation in its original form. But the fact that a proposal concrete enough to sustain this detailed level of criticism has been presented is itself immensely valuable and positive, and it will be interesting to see what emerges when the proposal is refined and further scrutinised. It is also clear that, whatever the ultimate viability of this mechanosynthetic route to full MNT turns out to be (and I see no grounds to revise my sceptical position), there’s a lot of serious science to be done, and claims of a very short timeline to MNT are simply not credible.

Is mechanosynthesis feasible? The debate continues.

In my post of December 16th, Is mechanosynthesis feasible? The debate moves up a gear, I published a letter from Philip Moriarty, a nanoscientist from Nottingham University, which offered a detailed critique of a scheme, due to Robert Freitas, for achieving the first steps towards the mechanosynthesis of diamondoid nanostructures. The Center for Responsible Nanotechnology’s Chris Phoenix began a correspondence with Philip responding to the critique. Chris Phoenix and Philip Moriarty have given permission for the whole correspondence to be published here. It is released in a mostly unedited form; however the originals contained some quotations from Dr K. Eric Drexler which he did not wish to be published; these have therefore been removed.

The correspondence is long and detailed, amounting to 56 pages in total. It’s broken up into three PDF documents:
Part I
Part II
Part III.

I’m going to refrain from adding any comment of my own for the moment, so readers can form their own judgements, though I’ll probably make some observations on the correspondence in a few days time.

The correspondence between Philip Moriarty and Chris Phoenix, for the time being, ends here. However Philip Moriarty has asked me to include this statement, which he has agreed with Robert Freitas:

“Freitas and Moriarty have recently agreed to continue discussions related to the fundamental science underlying mechanosynthesis and the experimental implementation of the process. These discussions will be carried out in a spirit of collaboration rather than as a debate and, therefore, will not be published on the web. In the event that this collaborative effort produces results that impact (either positively or negatively) on the future of mechanosynthesis, those results will be submitted for publication in a peer-reviewed journal.”

Converging technologies in Europe and the USA

Last Thursday saw a meeting in London to introduce to the UK a report that came out last summer on the convergence of nanotechnology, biotechnology, information technology and neuroscience. Converging technologies for a diverse Europe can essentially be thought of as the European answer to the 2002 report from the USA, Converging Technologies for Improving Human Performance. The speaker line-up, besides me, included social scientists, futurologists, an arms control expert and an official from the European Commission. What was striking to me was how much this debate was framed in terms of Europe trying to position itself somewhat apart from the USA, though perhaps this isn’t surprising in view of the broader flow of international politics at the moment.

It’s almost a cliché that public opinion is very different on the two continents, with the USA being much more uninhibited in its welcoming of new technology than the more technophobic Europeans. George Gaskell, a sociologist from the London School of Economics, presented survey data that at first seems to confirm this view. In his 2002 surveys, he found that while 50% of people in the USA were sure that nanotechnology would be positive in its outcome, only 29% of Europeans were so optimistic. But the picture isn’t as simple as it first appears; the figures for the proportion who thought that nanotechnology would make things worse were not actually that different – 4% in the USA compared to 6% in Europe. The Europeans were simply taking the attitude that they didn’t know enough to judge. The absence of any across-the-board distrust of technology is shown by a comparison of attitudes to three key technologies – nuclear energy, computers and information technology, and biotechnology. The data showed almost overwhelming opposition to nuclear power, equally overwhelming enthusiasm for computers and communication technology, and a mixed picture for biotech. The key issues for acceptance prove not to be any deep enthusiasm or distrust for technology in general; it’s simply a balance of the benefits and risks together with a judgement on how much the governance and regulation of the technology can be trusted.
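To make the contrast in those figures explicit: assuming the remaining respondents in each survey simply expressed no clear expectation (an assumption about how the survey categories were structured), the undecided share on each continent works out as follows:

```python
# Survey figures quoted above (Gaskell, 2002): percentage expecting
# nanotechnology to make things better or worse, by region.
surveys = {
    "USA":    {"better": 50, "worse": 4},
    "Europe": {"better": 29, "worse": 6},
}

# Treat everyone else as undecided (an assumption about the survey design).
for region, s in surveys.items():
    undecided = 100 - s["better"] - s["worse"]
    print(f"{region}: {undecided}% undecided")
```

On this reading, the transatlantic difference is almost entirely in the size of the undecided group, not in outright opposition.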

Where there is a big difference between Europe and the USA is in the importance of the military in driving research. Jürgen Altmann, a physicist turned arms-control expert from the University of Dortmund, is very worried about the military applications of nanotechnology, and his worries are nicely summarised in this pdf handout. His view is that the USA is currently undertaking an arms race against itself, wasting resources that could otherwise be used both to boost economic competitiveness and to counter the real threat that both the USA and Europe face by more appropriate and low-tech means. Others, of course, will differ on the nature of the threat and the best way to counter it.

The balance between civil and military research and development was also highlighted by Elie Faroult, from the Research Directorate of the European Commission, who pointed out with some glee that the EU was now considerably ahead of the USA in investment in most civil research, and that this trend is accelerating as the USA squeezes spending on non-military science. For him, this gave Europe the opportunity to develop a distinctive set of research goals which emphasised social coherence and environmental sustainability as well as economic competitiveness. But having taken the obligatory side-swipe at the USA he finished by saying that of course, looking to the future, it wasn’t the USA that Europe was in competition with. The real competitor for both the USA and Europe was China.

Nanomagnetics

Nature has some very elegant and efficient solutions to the problems of making nanoscale structures, exploiting the self-assembling properties of information-containing molecules like proteins to great effect. A very promising approach to nanotechnology is to use what biology gives us to make useful nanoscale products and devices. I spent Monday visiting a nanotechnology company that is doing just that. Nanomagnetics is a Bristol based company (I should disclose an interest here, in that I’ve just been appointed to their Science Advisory Board) which exploits the remarkable self-assembled structure of the iron-storage protein ferritin to make nanoscale magnetic particles with uses in data storage, water purification and medicine.

Ferritin

The illustration shows the ferritin structure; 24 individual identical protein molecules come together to form a hollow spherical shell 12 nm in diameter. The purpose of the molecule is to store iron until it is needed; iron ions enter through the pores and are kept inside the shell – given the tendency of iron to form a highly insoluble oxide, if we didn’t have this mechanism for storing the stuff our insides would literally rust up. Nanomagnetics is able to use the hollow shell that ferritin provides as a nanoscale chemical reactor, producing nanoparticles of magnetic iron oxide or other metals of great uniformity in size, and with a protein coat that both stops them sticking together and makes them biocompatible.
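The geometry is what makes the particles so uniform: growth is confined to a cavity of fixed size. As a rough sketch (the text gives only the 12 nm outer diameter; the ~8 nm interior cavity diameter below is my assumption for a typical ferritin shell):

```python
import math

# Volume of the ferritin cavity, treating it as a sphere.
# The 12 nm outer diameter is from the text; the ~8 nm interior
# cavity diameter is an assumed typical value, not from the text.
outer_d_nm = 12.0
inner_d_nm = 8.0  # assumption

cavity_volume_nm3 = (4.0 / 3.0) * math.pi * (inner_d_nm / 2) ** 3
print(f"Cavity volume ~ {cavity_volume_nm3:.0f} nm^3")
```

Every particle grown inside such a shell is capped at the same maximum size, which is why the protein template yields such monodisperse nanoparticles.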

One simple, but rather neat, application of these particles is in water purification, in a process called forward osmosis. If you filled a bag made of a nanoporous membrane with sugar syrup, and immersed the bag in dirty water, water would be pulled through the membrane by the osmotic pressure exerted by the concentrated sugar solution. Microbes and contaminating molecules wouldn’t be able to get through the membrane, if its pores are small enough, and you end up with clean sugar solution. There’s a small company from Oregon, USA, HTI, which has commercialised just such a product. Essentially, it produces something like a sports drink from dirty or brackish water, and as such it’s started to prove its value for the military and in disaster relief situations. But what happens if you want to produce not sugar solution, but clean water? If you replace the sugar by magnetic nanoparticles then you can sweep the particles away with a magnetic field and then use them again to produce another batch of water, producing clean water from simple equipment with only a small cost in energy.
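The size of the osmotic driving force can be estimated from the van ’t Hoff relation, π = cRT. The sketch below assumes a 1 mol/L draw solution and ideal-solution behaviour, so treat it as an order-of-magnitude estimate rather than a design calculation:

```python
# Order-of-magnitude estimate of the osmotic driving force in
# forward osmosis, via the van 't Hoff relation: pi = c * R * T.
# (Strictly valid only for dilute ideal solutions; a concentrated
# syrup or a nanoparticle suspension will deviate from this.)

R = 8.314    # molar gas constant, J/(mol K)
T = 298.0    # room temperature, K
c = 1000.0   # draw-solute concentration, mol/m^3 (= 1 mol/L, assumed)

pi_pa = c * R * T  # osmotic pressure, Pa
print(f"Osmotic pressure ~ {pi_pa / 1e6:.1f} MPa ({pi_pa / 101325:.0f} atm)")
```

A pressure of a couple of megapascals is why water is pulled through the membrane without any external pumping; the remaining energy cost is in recovering the draw agent, which is where magnetically separable nanoparticles come in.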

The illustration of ferritin is taken from the Protein Data Bank’s Molecule of the Month feature. The drawing is by David S. Goodsell, based on the structure determined by Lawson et al., Nature 349, 541 (1991).

Competitive Consumption

Partisans of molecular nanotechnology keep coming back to the theme of the devastation that they say will be caused to the world’s economic systems when it becomes possible to manufacture anything at no cost. Surely, they say, when goods cost nothing to make, then the money economy must wither away? I don’t accept the premise of this argument, but even if I did I think it is based on a misunderstanding of how economics works. The laws of economics, inasmuch as anything in that discipline can be described as a law, are really observations about human nature, and as such are not likely to be overturned on the basis of a mere technological advance. The key fallacy in this way of thinking is very succinctly put in an excellent book I’ve just finished: A nation of rebels: why counterculture became consumer culture, by Joseph Heath and Andrew Potter.

This book is mainly an entertaining polemic against the counterculture and the anti-globalisation movement. What’s relevant to us here is its gleeful demolition of the idea of postscarcity economics, as proposed by Herbert Marcuse and Murray Bookchin. This is the idea that once machines were able to take care of all our material needs and wants, we would be able to form a society based not on the demands of economic production, but on fellowship and love. It’s very easy to see the connection between this and the arguments made by the proponents of molecular nanotechnology.

The key concept in understanding what’s wrong with these ideas is the notion of a “positional good”. Positional goods get their value from the fact that not everyone can have them; people pay lots of money for an expensive and rare sports car like an Aston Martin, not simply because it is a nice piece of engineering, but explicitly because possession of one signals, in the view of the purchaser, something about their exalted status in society. The whole aim of much advertising and brand building is to increase the value of artefacts which often cost very little to make, by associating them with status messages of this kind. Very few people are immune to this, unless they live in cabins in the wilderness; for most of the middle class majorities of rich countries their biggest expenditure is on a house to live in, which by virtue of the importance of location and neighbourhood is an archetypal positional good.

When one realises how important positional goods are in market economies, the fallacy of the idea that molecular manufacturing would cause the end of the money economy becomes clear. In the words of Heath and Potter:

“What eventually led to the undoing of these views was the failure to appreciate the competitive nature of our consumption and the significance of positional goods. Houses in good neighborhoods, tasteful furniture, fast cars, stylish restaurants and cool clothes are all intrinsically scarce. We cannot manufacture more of them, because their value is based on the distinction they provide to consumers. The idea of overcoming scarcity through increased production is incoherent; in our society, scarcity is a social, not a material, phenomenon.”