Software control of matter at the atomic and molecular scale

The UK’s physical sciences research council, the EPSRC, has just issued a call for an “ideas factory” with the theme “Software control of matter at the atomic and molecular scale”, a topic proposed by Nottingham University nanophysicist Philip Moriarty. The way these programs work is that 20-30 participants, selected from many different disciplines, spend a week trying to think through new and innovative approaches to a very challenging problem. At the end of the process, it is hoped that some definite research proposals will emerge, and £1.5 million (i.e. not far short of US$ 3 million) has been set aside to fund these. The challenge, as defined by the call, is as follows:

“Can we design and construct a device or scheme that can arrange atoms or molecules according to an arbitrary, user-defined blueprint? This is at the heart of the idea of the software control of matter – the creation, perhaps, of a “matter compiler” which will interpret software instructions to output a macroscopic product in which every atom is precisely placed. Even partial progress towards this goal would significantly open up the range of available functional materials, permitting meta-materials with interesting electronic, optoelectronic, optical and magnetic properties.

One route to this goal might be to take inspiration from 3-d rapid prototyping devices, and conceive of some kind of pick-and-place mechanism operating at the atomic or molecular level, perhaps based on scanning probe techniques. On the other hand, the field of DNA nanotechnology gives us examples of complex structures built by self-assembly, in which the program to guide the construction is implicit within the structure of the building blocks themselves. This problem, then, goes beyond surface chemistry and the physics of self-assembly to some fundamental questions in computer science.

This ideas factory should attract surface physicists and chemists, including specialists in scanning probe and nanorobotic techniques, and those with an interest in self-assembling systems. Theoretical chemists, developmental biologists, and computer scientists, for example those interested in agent-based and evolutionary computing methods and emergent behaviour, will also be able to contribute.”

I’d encourage anyone who is eligible to receive EPSRC research funding (i.e. scientists working in UK universities and research institutes, broadly speaking) who is interested in taking part in this event to apply using the form on the EPSRC website. One person who won’t be getting any funding from this is me, because I’ve accepted the post of director of the activity.

Two forthcoming books

I’ve recently been looking over the page proofs of two interesting popular science books which are due to be published soon, both on subjects close to my heart. “The Middle World – the Restless Heart of Reality” by Mark Haw, is a discursive, largely historical book about Brownian motion. Of all the branches of physics, statistical mechanics is the one that is least well known in the wider world, but its story has both intellectual fascination and real human interest. The phenomenon of Brownian motion is central to understanding the way biology works, and indeed, as I’ve argued at length here and in my own book, learning how to deal with it and how to exploit it is going to be a prerequisite for success in making nanoscale machines and devices. Mark’s book does a nice job of bringing together the historical story, the relevance of Brownian motion to current science in areas like biophysics and soft matter physics, and its future importance in nanotechnology.

Martyn Amos (who blogs here) has a book called “Genesis Machines: The New Science of Biocomputing” coming out soon. Here the theme is the emerging interaction between computing and biology. This interaction takes a number of forms; the bulk of the book concerns Martyn’s own speciality, the various ways in which the biomolecule DNA can be used to do computations, but this leads on to synthetic biology and the re-engineering of the computing systems of individual cells. To me this is perhaps the most fascinating and potentially important area of science there is at the moment, and this book is an excellent introduction.

Neither book is out yet, but both can be preordered: The Middle World – the Restless Heart of Reality from Amazon, and Genesis Machines: The New Science of Biocomputing from Amazon UK.

ETC makes the case against nanomedicine

The most vocal and unequivocal opponent of nanotechnology – the ETC group – has turned its attention to nanomedicine, with a new report Nanotech Rx taking a sceptical look at the recent shift of emphasis we’ve seen towards medical applications of nanotechnology. The report, though, reads more as a critique of modern medicine in general than as a set of specific points about nanotechnology. Particularly in the context of health in the third world, the main thrust of the case is that enthusiasts of technocentric medicine have systematically underplayed the importance of non-technological factors (hygiene, better food, etc.) in improving general health. As they say, “the global health crisis doesn’t stem from a lack of science innovation or medical technologies; the root problem is poverty and inequality. New medical technologies are irrelevant for poor people if they aren’t accessible or affordable.” However, in an important advance from ETC’s previous blanket opposition to nanotechnology, they do concede that “nanotech R&D related to water is potentially significant for the developing world. Access to clean water could make a greater contribution to global health than any single medical intervention.”

The debate about human enhancement also gets substantial discussion, with a point of view strongly influenced by disability rights activist Gregor Wolbring. (Newcomers to this debate could do a lot worse than to start with the recent Demos pamphlet, Better Humans? which collects essays by those from a variety of points of view, including Wolbring himself.) ETC correctly identifies the crypto-transhumanist position taken in some recent government publications, and gets succinctly to the nub of the matter as follows: “Certain personality traits (e.g., shyness), physical traits (e.g., “average” strength or height), cognitive traits (e.g., “normal” intelligence) will be deemed undesirable and correctable (and gradually unacceptable, not to be tolerated). The line between enhancement and therapy – already blurry – will be completely obliterated.” I agree that there’s a lot to be concerned about here, but the issue as it now stands doesn’t have a lot to do with nanotechnology – current points of controversy include the use of SSRIs to “treat” shyness, and modafinil to allow soldiers to go without sleep. However, in the future nanotechnology certainly will be increasingly important in permitting human enhancement, in areas such as the development of interfaces with the brain and in regenerative medicine, and so it’s not unreasonable to flag the area as one to watch.

Naturally, the evils of big pharma get a lot of play. There are the well publicised difficulties big pharma seems to have in maintaining their accustomed level of innovation, the large marketing budgets and the concentration on “me-too” drugs for the ailments of the rich west, and the increasing trend to outsource clinical trials to third world countries. Again, these are all very valid concerns, but they don’t seem to have a great deal of direct relevance to nanotechnology.

In the context of the third world, one of the most telling criticisms of the global pharmaceutical industry has been the lack of R&D spend on diseases that affect the poor. Things have recently changed greatly for the better, thanks to Bill and Melinda and their ilk. ETC recognise the importance of public-private partnerships of the kind supported by organisations like the Bill and Melinda Gates Foundation, despite some evident distaste that this money has come from the disproportionately rich. “Ten years ago, there was not a single PPP devoted to the development of “orphan drugs” – medicines to treat diseases with little or no financial profit potential – and today there are more than 63 drug development projects aimed at diseases prevalent in the global South.” As an example of a Bill and Melinda supported project, ETC cite a project to develop a new synthetic route to the anti-malarial agent artemisinin. This is problematic for ETC, as the project uses synthetic biology, to which ETC is instinctively opposed; yet since artemisinin-based combination treatments seem to be the only effective way of overcoming the problem of drug resistant malaria, it seems difficult to argue that these treatments shouldn’t be universally available.

The sections of the report that are directly concerned with those areas of nanomedicine that are currently receiving the most emphasis seem rather weak. The section on the use of nanotechnology for drug delivery discusses only one example, a long way from the clinic, and doesn’t really make any comments at all on the current big drive to develop new anti-cancer therapies based on nanotechnology. I’m also surprised that ETC don’t talk more about the current hopes for the widespread application of nanotechnology in diagnostics and sensor devices, not least because this raises some important issues about the degree to which diagnosis can be simply equated to the presence or absence of some biochemical marker.

At the end of all this, ETC are still maintaining their demand for a “moratorium on nanotechnology”, though this seems at odds with statements like this: “Nanotech R&D devoted to safe water and sustainable energy could be a more effective investment to address fundamental health issues.” I actually find more to agree with in this report than in previous ETC reports. And yet I’m left with the feeling that, even more than in previous reports, ETC has not managed to get to the essence of what makes nanotechnology special.

Is nanoscience different from nanotechnology?

It has now become conventional to distinguish between nanoscience and nanotechnology. One definition that is now very widely used is the one introduced by the 2004 Royal Society report, which defined these terms thus:

“Nanoscience is the study of phenomena and manipulation of materials at atomic, molecular and macromolecular scales, where properties differ significantly from those at a larger scale. Nanotechnologies are the design, characterisation, production and application of structures, devices and systems by controlling shape and size at nanometre scale.”

This echoed the definitions introduced earlier in the 2003 ESRC report, Social and Economic Challenges of Nanotechnology (PDF), which I coauthored, in which we wrote:

“We should distinguish between nanoscience, which is here now and flourishing, and nanotechnology, which is still in its infancy. Nanoscience is a convergence of physics, chemistry, materials science and biology, which deals with the manipulation and characterisation of matter on length scales between the molecular and the micron-size. Nanotechnology is an emerging engineering discipline that applies methods from nanoscience to create usable, marketable, and economically viable products.”

And this formulation itself was certainly derivative; I was strongly influenced at the time by a very similar formulation from George Whitesides.

Despite having played a part in propagating this conventional wisdom, I’m now beginning to wonder how valid or helpful the distinction between nanoscience and nanotechnology actually is. Increasingly, it seems to me that the distinction tends to presuppose a linear model of technology transfer. In this picture, which was very widely held in post-war science policy discussions, we imagine a simple progression from fundamental research, predominantly curiosity driven, through a process of applied research, by which possible applications of the knowledge derived from fundamental science are explored, to the technological development of these applications into products or industrial processes. What’s wrong with this picture is that it doesn’t really describe how innovations in the history of technology have actually occurred. In many cases, inventions have been put into use well before the science that explains how they work was developed (the steam engine being one of many examples of this), and in many others it is actually the technology that has facilitated the science.

Meanwhile, the way science and technology are organised has changed greatly from the situation of the 1950’s, 60’s and ’70’s. At that time, a central role both in the generation of pure science and in its commercialisation was played by the great corporate laboratories, like AT&T’s Bell Labs in the USA, and in the UK the central laboratories of companies like ICI and GEC. For better or worse, these corporate labs have disappeared or been reduced to shadows of their former size, as deregulation and global competition have stripped away the monopoly rents that ultimately financed them. Without the corporate laboratories to broker the process of taking innovation from the laboratory to the factory, we are left with a much more fluid and confusing situation, in which there’s much more pressure on universities to move beyond pure science to find applications for their research and to convert this research into intellectual property to provide future revenue streams. Small research-based companies are founded whose main assets are their intellectual property and the knowledge of their researchers, and bigger companies talk about “open innovation”, in which invention is just another function to be outsourced.

A useful concept for understanding the limitations of the linear model in this new environment is the idea of “mode II knowledge production” (introduced, I believe, by Gibbons, M., et al. (1994) The New Production of Knowledge. London: Sage). Mode II science would be fundamentally interdisciplinary, and motivated explicitly by applications rather than by the traditional discipline-based criteria of academic interest. These applications don’t necessarily have to be immediately convertible into something marketable; the distinction is that in this kind of science one is motivated not by exploring or explaining some fundamental phenomenon, but by the drive to make some device or gadget that does something interesting (nano-gizmology, as I’ve called this phenomenon in the past).

So in this view, nanotechnology isn’t simply the application of nanoscience. Its definition is as much sociological as scientific. Prompted, perhaps, by observing the material success of many academic biologists who’ve founded companies in the biotech sector, and motivated by changes in academic funding climates and the wider research environment, we’ve seen physicists, chemists and materials scientists taking a much more aggressively application-driven and commercially oriented approach to their science. Or to put it another way, nanotechnology is simply the natural outcome of an outbreak of biology envy amongst physical scientists.

Nanotechnology in the UK – judging the government’s performance

The Royal Society report on nanotechnology – Nanoscience and nanotechnologies: opportunities and uncertainties – was published in 2004, and the government responded to its recommendations early in 2005. At that time, many people were disappointed by the government response (see my commentary here); now the time has come to judge whether the government is meeting its commitments. The body that is going to make that judgement is the Council for Science and Technology. This is the government’s highest level advisory committee, reporting directly to the Prime Minister. The CST Nanotechnology Review is now underway, with a public call for evidence open. Yesterday I attended a seminar in London organised by the working party.

I’ve written already of my disappointment with the government response so far, for example here, so you might think that I’d be confident that this review would be rather critical of the government. However, close reading of the call for evidence reveals a fine piece of “Yes Minister” style legerdemain; the review will judge, not whether the government’s response to the Royal Society report was itself adequate, but solely whether the government had met the commitments it made in that response.

One of the main purposes of yesterday’s seminar was to see if there had been any major new developments in nanotechnology since the publication of the Royal Society report. Some people expressed surprise at how rapid the introduction of nanotechnology into consumer products had been, though as ever it is difficult to judge how many of these applications can truly be described as nanotechnology, and equally how many other applications are in the market which do involve nanotechnology, but which don’t advertise the fact. However, one area in which there has been a demonstrable and striking proliferation is in nanotechnology road-maps, of which there are now, apparently, a total of seventy-six.

Was Feynman the founder of nanotechnology?

Amidst all the controversy about what nanotechnology is and isn’t, one thing that everyone seems to agree on is the visionary role of Richard Feynman as the founding father of the field, through his famous lecture, There’s plenty of room at the bottom. In a series of posts I made here a year or so ago (Re-reading Feynman Part 1, Part 2, Part 3), I looked again, with the benefit of hindsight, at what Feynman actually said in this lecture, in the light of the way nanoscience and technology has developed. But does the claim that this lecture launched nanotechnology stand up to critical scrutiny?

This question was considered in a fascinating article by Chris Toumey called Apostolic Succession (PDF file). The article, published last year in Caltech’s house science magazine (just as Feynman’s original lecture was), takes a cool look at the evidence that might underpin the claim that “Plenty of Room at the Bottom” really was the foundational text for nanotechnology. The first place to look is in citations – occasions when the article was cited by other writers. Perhaps surprisingly, Plenty of Room was cited just 7 times in the two decades of the 60’s and 70’s, and the annual citation rate didn’t get into double figures until 1992. Next, Toumey directly questioned leading figures from nanoscience on the degree to which they were influenced by the Feynman lecture. The answers – from scientists of the standing of Binnig, Rohrer, Eigler, Mirkin and Whitesides – were overwhelmingly negative. The major influence of the Feynman lecture, Toumey concludes, has been through the mediation of Drexler, who has been a vocal champion of the paper since coming across it around 1980.

Toumey draws three conclusions from all this. As he puts it, “The theory of apostolic succession posited that first there was “Plenty of Room”; then there was much interest in it; and finally that caused the birth of nanotechnology. My analysis suggests something different: first there was “Plenty of Room”; then there was very little interest in it; meanwhile, there was the birth of nanotechnology, independent of it; and finally there was a retroactive interest in it. I believe we can credit much of the rediscovery to Drexler, who has passionately championed Feynman’s paper.” As for why such a retroactive interest appeared, Toumey makes the obvious point that attaching one’s vision to someone with the genius, vision and charisma of Feynman is an obvious temptation. Finally, though, Toumey asks “how selective is the process of enhancing one’s work by retroactively claiming the Feynman cachet?” The point here, and it is an important one, is that, as I discussed in my re-readings of Feynman, this lecture talked about many things and it requires a very selective reading to claim that Feynman’s musings supported any single vision of nanotechnology.

Toumey (who is from the centre for nanoScience & Technology Studies at the University of South Carolina) is an anthropologist by training, so it’s perhaps appropriate that his final conclusion is expressed in rather anthropological terms: “We can speculate about why “Plenty of Room” was rediscovered. Perhaps it shows us that a new science needed an authoritative founding myth, and needed it quickly. If so, then pulling Feynman’s talk off the shelf was a smart move because it gave nanotech an early date of birth, it made nanotech coherent, and it connected nanotech to the Feynman cachet.”

My thanks to Peter Rodgers for bringing this article to my attention.

In Australia

I’ve been to Australia for a brief trip, attending a closed public policy conference run by the Australian think-tank the Centre for Independent Studies. The terms of engagement of the conference prevent me from reporting on it in detail; it’s meant to be unreported and off-the-record. The attendance list was certainly a cut above the usual scientific conferences I go to; it included present and former cabinet ministers from the Australian and New Zealand governments, central bankers and senior judges, industry CEOs and prominent journalists.

A session of the conference was devoted to nanotechnology; I spoke, together with a couple of prominent Australian nanoscientists and the science correspondent of one of Australia’s major dailies. I was nervous about how I would be received, and I think many of the audience, more used to hearing about terrorism in Indonesia or commodity price fluctuations, were similarly nervous about whether they would find anything to interest them in such a specialised and futuristic sounding topic. In the event, I think, everyone was very pleasantly surprised at the success of the session and the lively debate it sparked.

I don’t want to divert this blog too much into discussing politics, but I can’t help observing that the tone of the meeting was a little bit more right wing than I am used to. The CIS clearly occupies rather a different part of think-tank space to my centrist friends in Demos, for example, and I regretted having left my Ayn Rand t-shirts at home. Nonetheless, I think it’s hugely important that science and technology do start to play a larger role in policy discussions.

A brief update

My frequency of posting has gone down in the last couple of weeks due to a combination of excessive busy-ness and a not wholly successful attempt to catch up with stuff before going on holiday. Here’s a brief overview of some of the things I would have written about if I’d had more time.

The Nanotechnology Engagement Group (which I chair) met last week to sketch out some of the directions of its second policy report, informed in part by an excellent workshop – Terms of Engagement – held in London a few weeks ago. The workshop brought together policy-makers, practitioners of public engagement, members of the public who had been involved in public engagement events about nanotechnology, and scientists, to explore the different expectations and aspirations these different actors have, and the tensions that arise when these expectations aren’t compatible.

The UK government’s funding body for the physical sciences, EPSRC, held a town meeting to discuss its new draft nanotechnology strategy last week. About 50 of the UK’s leading nanoscientists attended; to summarise the mood of the meeting, people were pleased that EPSRC was drawing up a strategy, but they thought that the tentative plan was not nearly ambitious enough. EPSRC and its Strategic Working Group on Nanotechnology (of which I am a member) will be revising the draft strategy in line with these comments and the result should be presented to EPSRC Council for approval in October.

The last two issues of Nature have much to interest the nanotechnologist. Nanotubes unwrapped introduces the idea of using exfoliated graphite as a reinforcing material in composites; this should produce many of the advantages that people hope for in nanotube composites (but which have so far not fully materialised) at much lower cost. Spintronics at the atomic level describes a very elegant experiment in which a single manganese atom is introduced as a substitutional dopant on a gallium arsenide surface using a scanning tunnelling microscope, to probe its magnetic interactions with the surroundings. This week’s issue also includes a very interesting set of review articles about microfluidics, including pieces by George Whitesides and Harold Craighead, to which there is free access.

Rob Freitas has put together a website for his Nanofactory collaboration. Having complained on this blog before that my own critique of MNT proposals has been ignored by MNT proponents, I should in fairness recognise that this site has a section about technical challenges which explicitly acknowledges such critiques with these positive words:
“This list, which is almost certainly incomplete, parallels and incorporates the written concerns expressed in thoughtful commentaries by Philip Moriarty in 2005 and Richard Jones in 2006. We welcome these critiques and would encourage additional constructive commentary – and suggestions for additional technical challenges that we may have overlooked – along similar lines by others.”

Finally, in a not totally unrelated development, the UK’s funding council, EPSRC, will be running an Ideas Factory on the subject of Matter compilation via molecular manufacturing: reconstructing the wheel. The way this program works is that participants spend a week generating new ideas and collaborations, and at the end of it £1.45 million funding is guaranteed for the best proposals. I’ve been asked to act as the director of this activity, which should take place early in the New Year.

A cross-section of science at the Royal Society

I’ve been attending the New Fellows seminar at the Royal Society, the UK’s national academy of science. This is the occasion for the 44 new fellows who are elected each year (one of whom, this year, was me) to give a brief talk about their research. The resulting seminar is a fascinating snapshot of the whole breadth of current science and technology, of a kind that one rarely sees in today’s world of science specialisation. Here are some impressions of the first day.

Biology is strongly represented, with a cluster of talks on various aspects of cell signalling, ranging from the detailed mechanisms by which signalling molecules are switched on and off, to the ways stem cells are regulated. A revealing talk showed how electron microscopy could unravel the mechanism by which the remarkable machines that ensure proteins fold correctly – chaperonins – work. From environmental and earth science we had talks on the effects on our environment both of the forces of nature – in the shape of the relationship between long term climate change and variations in the sun’s activity – and of the effects of man, through the impact of our industrial society on atmospheric chemistry. In physics, there was a spread from the most pure aspects of the subject (how to measure the spin of a black hole) to the applied and commercially important (the molecular beam epitaxy technique that underlies much of current semiconductor nanotechnology). One thing that comes out very strongly from the talks is the unexpected unifying threads that run through what appear on the face of it to be very different pieces of science. Ideas from statistical mechanics, like entropy, are obviously important for understanding self-assembly in soft matter, but they also cropped up in talks about signal processing in the brain and in modelling the growth of cities.

The important relationship between science and society was highlighted in two contrasting talks about the application of science to solve problems in the developing world. In one, the talk was at an abstract level, highlighting the problems of governance and economics in Africa that made it difficult to apply existing science to solve pressing problems. These abstract ideas were made very concrete in a fascinating talk about the development of new combination therapies to overcome the problems caused by drug-resistance in malaria. The foundation of these therapies is a new anti-malarial, artemisinin, recently discovered by Chinese scientists on the basis of a remedy from traditional Chinese herbal medicine. Now that effective remedies are available, the problems to overcome are the social, economic and political barriers that prevent them from being universally available.

A round-up of nano-blogs

To mark the growing popularity of science-based blogs, here’s a quick roundup of some blogs devoted to nanotechnology. Nanotechnology means different things to different people, and this diversity of points of view is reflected in the wide variety of perspectives on offer in the blogs.

From the point of view of business and the financial markets, TNTlog comes from Tim Harper, of the European consulting firm Cientifica. His posting frequency has dropped off recently, which is a pity, since this is a blog that manages to be both entertaining and well-informed, with a healthy scepticism about some of the wilder claims made on behalf of the “nanotechnology industry”. The web-portal nanotechnology.com hosts a contrasting pair of blogs. blog | nano, by Darrell Brookstein, is at the shriller end of the nanobusiness spectrum, while Steve Edwards’s blog combines commentary on nano financial markets with the odd extract from his (rather good) book – The Nanotech Pioneers.

Among blogs written by academics, there are those that come from scientists working inside the field, and some from social scientists whose interests run more towards the social issues surrounding nanotechnology. In the first category we have Nanoscale Views, by academic nanophysicist Doug Natelson. This combines capsule reviews of new condensed matter preprints and conference reports with more general observations about life as a junior faculty member, and is at quite a high technical level. Martyn Amos is a computer scientist; his blog covers issues such as synthetic biology and chemical computing. The authors of Molecular Torch seem to be keen to keep their identities quiet, but from what they cover I’m guessing they work in the field of nanochemistry, with a particular interest in quantum dots. If you want to know what Soft Machines is about, just look around.

From the social science side of things, David Berube’s Nanohype casts a sceptical eye on the scene, leavening fairly detailed commentary on various reports and conferences with his enjoyably acerbic humour. Nano|Public, by Dietram Scheufele, similarly covers public engagement issues from an academic point of view. Nanotechbuzz, by George Elvin, is more general in its coverage, which reflects the interests of its author, an architecture professor with an interest in the relationship between nanotechnology and design.

A couple of blogs reflect the views of those interested in Drexler’s vision of molecular nanotechnology. The current market leader in the faith-based end of this space is Responsible Nanotechnology, from the Center for Responsible Nanotechnology, aka Mike Treder and Chris Phoenix. This pair have the most impressive output in terms of sheer volume. Their analysis is predicated on the unsupported assertion that desktop nanofactories could be with us in 10-15 years; any dissent from this view is met, not with rational argument, but with accusations of bad faith or scientific fraud. Nanodot, from the Foresight Nanotech Institute’s Christine Peterson, represents the more acceptable face of Drexlerism, combining reporting on current nanoscience developments and commentary about social and economic issues, with discussion of longer-range prospects, albeit in a framework of thorough-going technological determinism.

Finally, we have a couple of blogs written by professional writers. Howard Lovy’s Nanobot was a useful source of nano-commentary, particularly strong on charting the influence of nanotechnology on popular culture, before Howard’s move to the dark side of public relations led to a quiet period. Nanobot has recently gently restarted. A very welcome newcomer is homunculus from my favourite science writer, Philip Ball. The scope of homunculus goes well beyond nanotechnology, covering aspects of chemistry and physics ranging from the application of statistical mechanics to financial markets to the historical links between chemistry and fine arts. His most recent post contains much of the useful background information that didn’t make it into his recent news piece for Nature about the potential neurotoxicity of nanoscale titania.

My apologies to anyone I’ve missed out.