Moving on

For the last two years, I’ve been the Senior Strategic Advisor for Nanotechnology for the UK’s Engineering and Physical Sciences Research Council (EPSRC), the government agency with lead responsibility for funding nanotechnology in the UK. I’m now stepping down from this position to return to the University of Sheffield, where I’m taking up a new, full-time role; EPSRC is currently in the process of appointing my successor.

In these two years, a substantial part of a new strategy for nanotechnology in the UK has been implemented. We’ve seen new Grand Challenge programmes targeting nanotechnology for harvesting solar energy and nanotechnology for medicine and healthcare, with a third programme, looking for new ways of using nanotechnology to capture and utilise carbon dioxide, shortly to be launched. At the more speculative end of nanotechnology, the “Software Control of Matter” programme received supplementary funding. Some excellent individual scientists have been supported through personal fellowships, and, looking to the future, the three new Doctoral Training Centres in nanotechnology will produce, over the next five years, up to 150 additional PhDs in nanotechnology over and above EPSRC’s existing substantial support for graduate students.

After a slow response to the 2004 Royal Society report on nanotechnology, I think we now find ourselves in a somewhat more defensible position with respect to the funding of nanotoxicology and ecotoxicology studies, with some useful projects in these areas being funded by the Medical Research Council and the Natural Environment Research Council respectively, and a joint programme with the USA’s Environmental Protection Agency about to be launched. With the public engagement exercise that was run in conjunction with the Grand Challenge on nanotechnology in medicine and healthcare, I think EPSRC has gone substantially further than any other funding agency in opening up decision making about nanotechnology funding. I’ve found this experience fascinating and rewarding; my colleagues in the EPSRC nanotechnology team, led by John Wand, have been a pleasure to work with, and I’ve had a huge amount of encouragement and support from many scientists across the UK academic community.

In the process, I’ve learned a great deal; nanotechnology, of course, takes in physics, chemistry, and biology, as well as elements of engineering and medicine. I’ve also come into contact with philosophers and sociologists, as well as artists and designers, and I’ve gained new insights from all of them. This education will stand me in good stead in my new role at Sheffield – as Pro-Vice-Chancellor for Research and Innovation I’ll be responsible for the health of research right across the University.

Accelerating evolution in real and virtual worlds

Earlier this week I was in Trondheim, Norway, for the IEEE Congress on Evolutionary Computation. Evolutionary computing, as its name suggests, refers to a group of approaches to computer programming that draw inspiration from the natural processes of Darwinian evolution, hoping to capitalise on the enormous power of evolution to find good solutions to complex problems from a very large range of possibilities. How, for example, might one program a robot to carry out a variety of tasks in a changing and unpredictable environment? Rather than attempting to anticipate all the possible scenarios that your robot might encounter, and then writing control software that specifies appropriate behaviours for all these possibilities, one could use evolution to select the robot controller that works best for your chosen task in a variety of environments.

Evolution may be very effective, but in its natural incarnation it’s also very slow. One way of speeding things up is to operate in a virtual world. I saw a number of talks in which people were using simulations of robots to do the evolution: something like a computer game environment is used to simulate a robot doing a simple task, like picking up an object or recognising a shape, with success or failure feeding into a fitness function through which the robot controller is allowed to evolve.
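To make this concrete, here is a minimal sketch of that evolve-in-the-virtual-world loop. The controller representation (a short parameter vector), the mutation scheme and the stand-in “task” are all invented for illustration; in the talks described above, the fitness score would come from running, say, a neural-network controller on a simulated robot and measuring its success.

```python
import random

# Minimal sketch of an evolve-in-simulation loop. The "controller" is just a
# short parameter vector and the fitness function is a stand-in; in practice
# it would come from running the controller in a simulated robot environment.

def evaluate_fitness(controller):
    """Score a controller in the virtual world (here: closeness to a hidden target)."""
    target = [0.2, -0.5, 0.9, 0.0]
    return -sum((c - t) ** 2 for c, t in zip(controller, target))

def mutate(controller, rate=0.1):
    """Return a copy of the controller with small random perturbations."""
    return [c + random.gauss(0, rate) for c in controller]

# Start from a random population of candidate controllers.
population = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(50)]

for generation in range(100):
    ranked = sorted(population, key=evaluate_fitness, reverse=True)
    survivors = ranked[: len(ranked) // 3]          # selection
    offspring = [mutate(random.choice(survivors))   # variation
                 for _ in range(len(ranked) - len(survivors))]
    population = survivors + offspring

best = max(population, key=evaluate_fitness)
print("best controller:", [round(c, 2) for c in best])
```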

Of course, you could just use a real computer game. Simon Lucas, from Essex University, explained to me why classic computer games – his favourite is Ms Pac-Man – offer really challenging exercises in developing software agents. It’s sobering to realise that, while computers can beat a chess grandmaster, humans still have a big edge over computers in arcade games. The human high score for Ms Pac-Man is 921,360; in the competition at the 2008 IEEE CEC meeting, the winning bot achieved 15,970. Unfortunately I had to leave Trondheim before the results of the 2009 competition were announced, so I don’t know whether this year produced a big breakthrough in this central challenge to computational intelligence.

One talk at the meeting was very definitely rooted in the real, rather than the virtual, world – it came from Harris Wang, a graduate student in the group of Harvard Medical School’s George Church. This was a really excellent overview of the potential of synthetic biology. At the core of the talk was a report of a recent piece of work due to appear in Nature shortly. This described the re-engineering of a micro-organism to increase its production of the molecule lycopene, the pigment that makes tomatoes red (and probably confers significant health benefits, the basis for the seemingly unlikely claim that tomato ketchup is good for you). Notwithstanding the rhetoric of precision and engineering design that often accompanies synthetic biology, what made this project successful was the ability to generate a great deal of genetic diversity and then very rapidly screen the variants to identify the desired changes. To achieve a 500% increase in lycopene production, they needed to make up to 24 simultaneous genetic modifications, knocking out genes involved in competing processes and modifying the regulation of other genes. This produced a space of about 15 billion possible combinatorial variations, from which they screened 100,000 distinct new cell types to find their winner. This certainly qualifies as real-world accelerated evolution.

How to engineer a system that fights back

Last week saw the release of a report on synthetic biology from the UK’s Royal Academy of Engineering. The headline call, as reflected in the coverage in the Financial Times, is for the government to develop a strategy for synthetic biology so that the country doesn’t “lose out in the next industrial revolution”. The report certainly plays up the likelihood of high-impact applications in the short term – within five to ten years, we’re told, we’ll see synbio-based biofuels, “artificial leaf technology” to fix atmospheric carbon dioxide, industrial-scale production of materials like spider silk, and, in medicine, the realisation of personalised drugs. An intimation that progress towards these goals may not be entirely smooth can be found in this news piece from a couple of months ago – A synthetic-biology reality check – which described the abrupt winding up earlier this year of one of the most prominent synbio start-ups, Codon Devices, founded by some of the leading US players in the field.

There are a number of competing visions of what synthetic biology might be; this report concentrates on just one of them. This is the idea of identifying a set of modular components – biochemical analogues of simple electronic components – with the aim of creating a set of standard parts from which desired outcomes can be engineered. This way of thinking relies on a series of analogies and metaphors, relating the functions of cell biology to the constructs of human engineering. Some of these analogies have a sound empirical (and mathematical) basis, like the biomolecular realisation of logic gates and of positive and negative feedback.
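As a flavour of the mathematics sitting behind those analogies, here is a minimal sketch of the simplest feedback motif in such circuits – a gene whose protein product represses its own production. The Hill-function form is a standard textbook model, but the parameter values here are arbitrary and purely illustrative.

```python
# Negative autoregulation: a protein that represses its own production.
# dx/dt = beta / (1 + (x/K)**n) - gamma * x
# (Hill-function repression plus first-order degradation; all parameter
# values are arbitrary and chosen only for illustration.)

beta, K, n, gamma = 10.0, 1.0, 2, 1.0   # max production, threshold, cooperativity, decay
x, dt = 0.0, 0.01                        # initial protein level, time step

for step in range(5000):                 # simple Euler integration
    production = beta / (1 + (x / K) ** n)
    x += (production - gamma * x) * dt

print(f"steady-state protein level: {x:.3f}")
# The feedback pulls the protein level to a fixed point and damps fluctuations
# around it -- the same qualitative behaviour an engineer expects of negative feedback.
```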

There is one metaphor used a lot in the report that seems to me potentially problematic – the idea of a chassis. What’s meant by this is a cell – for example, a bacterium like E. coli – into which the artificial genetic components are introduced in order to produce the desired products. This conjures up an image like the box into which one slots the circuit boards to make a piece of electronic equipment – something that supplies power and interconnections, but which doesn’t have any real intrinsic functionality of its own. It seems to me difficult to argue that any organism is ever going to provide such a neutral, predictable substrate for human engineering – these are complex systems with their own agenda. To quote from the report on a Royal Society Discussion Meeting about synthetic biology, held last summer: “Perhaps one of the more significant challenges for synthetic biology is that living systems actively oppose engineering. They are robust and have evolved to be self-sustaining, responding to perturbations through adaptation, mutation, reproduction and self-repair. This presents a strong challenge to efforts to ‘redesign’ existing life.”

Are electric cars the solution?

We’re seeing enthusiasm everywhere for electric cars, with government subsidies being directed both at buyers and at manufacturers. The attractions seem obvious – clean, emission-free transport, seemingly resolving without effort the conflict between people’s desire for personal mobility and our need to move to a lower-carbon energy economy. Widespread use of electric cars, though, simply moves the energy problem out of sight – from the petrol station and the exhaust pipe to the power station. A remarkably clear opinion piece in today’s Financial Times by Richard Pike, of the UK’s Royal Society of Chemistry, poses the problem in numbers.

The first question to ask is how the energy efficiency of electric cars compares with that of cars powered by internal combustion engines. Electric motors are much more efficient than internal combustion engines, but a fair comparison has to take into account the losses incurred in generating and transmitting the electricity. Pike cites figures showing that the comparison is actually surprisingly close. Petrol engines, on average, have an overall efficiency of 32%, whereas the more efficient diesel engine converts 45% of the energy in the fuel into useful output. Conversion efficiencies in power stations, on the other hand, come in at a bit more than 40%; add to this a transmission loss in getting from the power station to the plug, and a further loss from the charging/discharging cycle in the batteries, and you end up with an overall efficiency of about 31%. So, on pure efficiency grounds, electric cars do worse than either petrol or diesel vehicles. One further factor needs to be taken into account, though – the amount of carbon dioxide emitted per joule of energy supplied by different fuels. Clearly, if all our electricity were generated by nuclear power or by solar photovoltaics, the advantages of electric cars would be compelling, but if it all came from coal-fired power stations the situation would be substantially worse. With the current mix of energy sources in the UK, Pike estimates a small advantage for electric cars, with an overall potential reduction in emissions of one seventh. I don’t know the corresponding figures for other countries; presumably, given France’s high proportion of nuclear power, the advantage of electric cars there would be much greater, while in the USA, given the importance of coal, things may be somewhat worse.
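To see how the numbers stack up, here is a minimal sketch of the well-to-wheel comparison. The petrol and diesel figures are those quoted above; the power-station, grid and battery figures are illustrative assumptions chosen to land near the roughly 31% overall figure, not numbers taken from Pike’s article.

```python
# Back-of-the-envelope well-to-wheel comparison. Petrol and diesel figures are
# those quoted above; the remaining stage efficiencies are assumptions chosen
# to illustrate how the ~31% overall figure for the electric chain arises.

petrol_engine = 0.32       # average petrol engine efficiency
diesel_engine = 0.45       # average diesel engine efficiency

power_station = 0.42       # fossil-fuel power station (assumed)
grid_transmission = 0.93   # transmission and distribution (assumed)
battery_round_trip = 0.80  # charge/discharge cycle (assumed)

electric_chain = power_station * grid_transmission * battery_round_trip

print(f"petrol:   {petrol_engine:.0%}")
print(f"diesel:   {diesel_engine:.0%}")
print(f"electric: {electric_chain:.0%}")  # ~31%, before the (small) electric motor losses
```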

Pike’s conclusion is that the emphasis on electric cars is misplaced, and that the subsidy money would be better spent on R&D into renewable energy and carbon capture. The counter-argument would be that a push for electric cars now won’t make a serious difference to patterns of energy use for ten or twenty years, given the inertia attached to the current installed base of conventional cars and the plant that manufactures them, but that it is necessary to begin the process of changing that. In the meantime, one should be pursuing low-carbon routes to electricity generation, whether nuclear, renewable, or coal with carbon capture. It would be comforting to think that this is what will happen, but we shall see.

Another step towards (even) cheaper DNA sequencing

An article in the current Nature Nanotechnology – Continuous base identification for single-molecule nanopore DNA sequencing (abstract; subscription required for the full article) – marks another important step towards the goal of using nanotechnology for fast and cheap DNA sequencing. The work comes from the group of Hagen Bayley at Oxford University.

The original idea in this approach to sequencing was to pull a single DNA chain through a pore with an electric field, and to detect the different bases one by one through changes in the current through the pore. I wrote about this in 2007 – Towards the $1000 human genome – and in 2005 – Directly reading DNA. Difficulties in executing this appealing scheme directly mean that Bayley is now taking a slightly different approach – rather than threading the DNA through the hole directly, he uses an enzyme to chop single bases off the end of the DNA; as each base passes through the pore, the change in current it produces is characteristic enough to identify which base it is. The main achievement reported in this paper is in engineering the pore – this is based on a natural membrane protein, alpha-haemolysin, but a chemical group is covalently bonded to the inside of the pore to optimise its discrimination and throughput. What still needs to be done is to mount the enzyme next to the nanopore, to make sure bases are chopped off the DNA strand and read in sequence.

Nonetheless, commercialisation of the technology seems to be moving fast, through a spin-out company, Oxford Nanopore Technologies Ltd. Despite the current difficult economic circumstances, this company managed to raise another £14 million in January.

Despite the attractiveness of this technology, commercial success isn’t guaranteed, simply because the competing, more conventional technologies are developing so fast. These so-called “second generation” sequencing technologies have already brought the price of a complete human genome sequence down well below $100,000 – itself an astounding feat, given that the original Human Genome Project probably cost about $3 billion to produce its complete sequence in 2003. There’s a good overview of these technologies in the October 2008 issue of Nature Biotechnology – Next-generation DNA sequencing (abstract; subscription required for the full article). It’s these technologies that underlie the commercial instruments, such as those made by Illumina, that have brought large-scale DNA sequencing within the means of many laboratories; a newly started company, Complete Genomics, plans to introduce a service this year at $5,000 for a complete human genome. As is often the case with a new technology, competition from incremental improvements of the incumbent technology can be fierce. It’s interesting, though, that Illumina regards the nanopore technology as significant enough to take a substantial equity stake in Oxford Nanopore.

What’s absolutely clear, though, is that the age of large-scale, low-cost DNA sequencing is now imminent, and we need to think through its implications without delay.

How cells decide

One of the most important recent conceptual advances in biology, in my opinion, is the realisation that much of the business carried out by the nanoscale machinery of the cell is as much about processing information as about processing matter. Dennis Bray pointed out, in an important review article (8.4 MB PDF) published in Nature in 1995, that mechanisms such as allostery, by which the catalytic activity of an enzyme can be switched on and off by the binding of another molecule, mean that proteins can form the components of logic gates, which can themselves be linked together to form biochemical circuits. These information processing networks can take information about the environment from sensors at the cell surface, compute an appropriate action, and modify the cell’s behaviour in response. My eye was recently caught by a paper from 2008 which illustrates rather nicely just how significant the information processing capacity of a single cell can be.

The paper – Emergent decision-making in biological signal transduction networks (abstract; subscription required for the full article in PNAS) – comes from Tomáš Helikar, John Konvalina, Jack Heidel, and Jim A. Rogers at the University of Nebraska. What these authors have done is to construct a large-scale, realistic model of the cell signalling network of a generic eukaryotic cell. To do this, they mined the literature for data on 130 different network nodes. Each node represents a protein; in a crucial simplification, they reduce the complexities of the biochemistry to simple Boolean logic – each node is either on or off, depending on whether the protein is active or not, and for each node there is a truth table expressing its interactions with other proteins. In some more complicated cases, a single protein may be represented by more than one node, reflecting the fact that it can exist in a number of different modified states.

This model of the cell takes in information from the outside world: sensors at the cell membrane measure the external concentrations of growth factors, extracellular matrix proteins, and calcium. This is the input to the cell’s information processing system. The outputs of the system are essentially decisions by the cell about what to do in response to its environment. The key result of the simulations is that the network can take a wide variety of input signals, often including random noise, and for each combination of inputs produce one of a small number of biologically appropriate responses – as the authors write, “this nonfuzzy partitioning of a space of random, noisy, chaotic inputs into a small number of equivalence classes is a hallmark of a pattern recognition machine and is strong evidence that signal transduction networks are decision-making systems that process information obtained at the membrane rather than simply passing unmodified signals downstream.”
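To give a feel for what such a model looks like computationally, here is a toy Boolean network in the same spirit. The three nodes and their update rules are invented for illustration, and are vastly simpler than the 130-node network in the paper.

```python
# Toy Boolean signalling network in the spirit of Helikar et al. Each node is
# ON/OFF and its next state is given by a Boolean function (a truth table) of
# the current states of its inputs. The three nodes and their rules here are
# invented for illustration; the real model has 130 nodes mined from the literature.

def update(state, growth_factor):
    """One synchronous update of the three-node toy network."""
    receptor, kinase, tf = state["receptor"], state["kinase"], state["tf"]
    return {
        # The receptor simply follows the external input.
        "receptor": growth_factor,
        # The kinase switches on when the receptor is active, unless the
        # transcription factor is feeding back to shut it off.
        "kinase": receptor and not tf,
        # The transcription factor integrates the kinase signal.
        "tf": kinase,
    }

state = {"receptor": False, "kinase": False, "tf": False}
for t in range(10):
    state = update(state, growth_factor=True)
    print(t, state)

# Even with a constant input the network quickly falls onto a short repeating
# cycle of states (an attractor) -- a miniature version of the way the full
# network maps a huge space of inputs onto a small number of outcomes.
```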

Can carbon capture and storage work?

Across the world, governments are placing high hopes on carbon capture and storage as the technology that will allow us to go on meeting a large proportion of the world’s growing energy needs from high-carbon fossil fuels like coal. The basic technology is straightforward enough; in one variant, one burns the coal as normal, then passes the flue gases through a process to separate the carbon dioxide, which is piped off and shut away in a geological reservoir – for example, down an exhausted natural gas field. There are two alternatives to this simplest scheme: one can separate the oxygen from the nitrogen in the air and burn the fuel in pure oxygen, producing nearly pure carbon dioxide for immediate disposal; or, in a process reminiscent of that used a century ago to make town gas, one can gasify coal to produce a mixture of carbon dioxide and hydrogen, remove the carbon dioxide from the mixture and burn the hydrogen. Although the technology all sounds straightforward enough, a rather sceptical article in last week’s Economist, Trouble in Store, points out some difficulties. The embarrassing fact is that, for all the enthusiasm from politicians, no energy utility in the world has yet built a large power plant using carbon capture and storage. The problem is purely one of cost. The extra capital cost of the plant is high, and significant amounts of energy need to be diverted to drive the necessary separation processes. This puts a high (and uncertain) price on each tonne of carbon not emitted.

Can technology bring this cost down? This question was considered in a talk last week by Professor Mercedes Maroto-Valer, from the University of Nottingham’s Centre for Innovation in Carbon Capture and Storage. The occasion was a meeting held last Friday to discuss environmentally beneficial applications of nanotechnology; this formed part of the consultation process for the third nanotechnology Grand Challenge to be funded by EPSRC. A good primer on the basics can be found in the IPCC special report on carbon capture. At the heart of any carbon capture method is always a gas separation process. This might be helped by better nanotechnology-enabled membranes, or by nanoporous materials (like molecular sieves) that can selectively absorb and release carbon dioxide. These would need to be cheap and capable of sustaining many regeneration cycles.

This kind of technology might help by bringing the cost of carbon capture and storage down from its current, rather frightening, levels. I can’t help feeling, though, that carbon capture and storage will remain an unsatisfactory technology for as long as its costs remain a pure overhead – so finding something useful to do with the carbon dioxide is a hugely important step. This is another reason why I think the “methanol economy” deserves serious attention. The idea here is to use methanol as an energy carrier, for example as a transport fuel compatible with existing fuel distribution infrastructure and the huge installed base of internal combustion engines. A long-term goal would be to remove carbon dioxide from the atmosphere and use solar energy to convert it into methanol, for use as a completely carbon-neutral transport fuel and as a feedstock for the petrochemical industry. The major research challenge here is to develop scalable systems for the photocatalytic reduction of carbon dioxide, or alternatively to do this in a biologically based system. Intermediate steps to a methanol economy might use renewably generated electricity to provide the energy for making methanol from water and carbon dioxide captured from coal-fired power stations, extracting “one more pass” of energy from the carbon before it is released into the atmosphere. Alternatively, process heat from a new-generation nuclear power station could be used to generate hydrogen for the synthesis of methanol from carbon dioxide captured from a neighbouring fossil fuel plant.
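For reference, the overall stoichiometry behind that last route – hydrogenation of captured carbon dioxide to methanol – is the standard one:

```latex
\mathrm{CO_2} + 3\,\mathrm{H_2} \;\longrightarrow\; \mathrm{CH_3OH} + \mathrm{H_2O}
```

The energetic cost sits almost entirely in making the hydrogen (or, in the photocatalytic route, in driving the reduction directly), which is why the scheme only makes sense when it is coupled to low-carbon primary energy.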

Natural complexity, engineering simplicity

One of the things that makes mass production possible is the large-scale integration of nearly identical parts. Much engineering design is based on this principle, which is taken to extremes in microelectronics: a modern microprocessor contains several hundred million transistors, every one of which needs to be manufactured to very tight tolerances if the device is to work at all. One might think that similar considerations would apply to biology. After all, the key components of biological nanotechnology – the proteins that make up most of the nanoscale machinery of the cell – are specified by the genetic code down to the last atom, and in many cases fold into a unique three-dimensional configuration. It turns out, though, that this is not the case; biology actually has sophisticated mechanisms whose entire purpose is to introduce extra variation into its components.

This point was forcefully made by Dennis Bray in a 2003 article in Science magazine called Molecular Prodigality (PDF version from Bray’s own website). Protein sequences can be chopped and changed, after the DNA code has been read, by processes of RNA editing and splicing and other types of post-translational modification, and these can lead to distinct changes in the operation of machines made from these proteins. Bray cites as an example the potassium channels in squid nerve axons; one of the component proteins can be altered by RNA editing at up to 13 distinct places, changing the channel’s operating parameters. He calculates that the random combination of all these possibilities means that there are 4.5 × 10¹⁵ subtly different possible types of potassium channel. This isn’t an isolated example; Bray estimates that up to a half of human structural genes allow some such variation, with the brain and nervous system being particularly rich in molecular diversity.
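For what it’s worth, the arithmetic is presumably as follows: if each of the 13 editing sites can independently be edited or not, and the channel assembles from four such subunits (potassium channels are tetramers), the number of distinct combinations is

```latex
\left(2^{13}\right)^{4} = 2^{52} \approx 4.5 \times 10^{15}
```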

It isn’t at all clear what all this variation is for, if anything. One can speculate that some of this variability has evolved to increase the adaptability of organisms to unpredictable changes in environmental conditions. This is certainly true in the case of the adaptive immune system: a human can make around 10¹² different types of antibody, using combinatorial mechanisms to generate a huge library of different molecules, each of which has the potential to recognise characteristic target molecules on pathogens we’ve yet to be exposed to. This is an example of biology’s inherent complexity; human engineering, in contrast, strives for simplicity.

Nanobots, nanomedicine, Kurzweil, Freitas and Merkle

As Tim Harper observes, with the continuing publicity surrounding Ray Kurzweil, it seems to be nanobot week. In one further contribution to the genre, I’d like to address some technical points made by Rob Freitas and Ralph Merkle in response to my article from last year, Rupturing the Nanotech Rapture, in which I was critical of their vision of nanobots (my thanks to Rob Freitas for bringing their piece to my attention in a comment on my earlier entry). Before jumping straight into the technical issues, it’s worth trying to make one point clear. While I think the vision of nanobots that underlies Kurzweil’s extravagant hopes is flawed, the enterprise of nanomedicine itself has huge promise. So what’s the difference?

We can all agree on why nanotechnology is potentially important for medicine. The fundamental operations of cell biology all take place on the nanoscale, so if we wish to intervene in those operations, there is a logic to carrying out the interventions at the same scale. But the physical environment of the warm, wet nano-world is a very unfamiliar one, dominated by violent Brownian motion, the viscosity-dominated regime of low Reynolds number fluid dynamics, and strong surface forces. This means that the operating principles of cell biology rely on phenomena that are completely unfamiliar in the macroscale world – phenomena like self-assembly, molecular recognition, molecular shape change, diffusive transport and molecule-based information processing. It seems to me that the most effective interventions will use this same “soft nanotechnology” paradigm, rather than the mechanical paradigm that underlies the Freitas/Merkle vision of nanobots, which is inappropriate for the warm, wet nanoscale world in which our biology works. We can expect to see increasingly sophisticated drug delivery devices, targeted to the cellular sites of disease, able to respond to their environment, and even able to perform simple molecule-based logical operations to decide appropriate responses to their situation. This isn’t to say that nanomedicine of any kind is going to be easy. We’re still some way from being able to disentangle the sheer complexity of the cell biology that underlies diseases such as cancer or rheumatoid arthritis, while for other hugely important conditions, like Alzheimer’s, there isn’t even consensus on the ultimate cause of the disease. It’s certainly reasonable to expect improved treatments and better prospects for sufferers of serious diseases, including age-related ones, in twenty years or so, but this is a long way from the seamless nanobot-mediated neuron–computer interfaces and indefinite life extension that Kurzweil hopes for.

I now move on to the specific issues raised in the response from Freitas and Merkle.

Several items that Richard Jones mentions are well-known research challenges, not showstoppers.

Until the show has actually started, this of course is a matter of opinion!

All have been previously identified as such along with many other technical challenges not mentioned by Jones that we’ve been aware of for years.

Indeed, and I’m grateful that the cited page acknowledges my earlier post Six Challenges for Molecular Nanotechnology. However, being aware of these and other challenges doesn’t make them go away.

Unfortunately, the article also evidences numerous confusions: (1) The adhesivity of proteins to nanoparticle surfaces can (and has) been engineered;

Indeed, polyethylene oxide/glycol end-grafted polymers (brushes) are commonly used to suppress protein adsorption at liquid/solid interfaces (and, less commonly, brushes of other water-soluble polymers, as in the link, can be used). While these methods work pretty well in vitro, they don’t work very well in vivo, as evidenced by the relatively short clearing times of “stealth” liposomes, which use a PEG layer to avoid detection by the body. The reasons for this still aren’t clear, as the fundamental mechanisms by which brushes suppress protein adsorption aren’t yet fully understood.

(2) nanorobot gears will reside within sealed housings, safe from exposure to potentially jamming environmental bioparticles;

This assumes that “feed-throughs” permitting traffic in and out of the controlled environment while perfectly excluding contaminants are available (see point 5 of my earlier post Six Challenges for Molecular Nanotechnology). To date I don’t see a convincing design for these.

(3) microscale diamond particles are well-documented as biocompatible and chemically inert;

They’re certainly chemically inert, but the use of “biocompatible” here betrays a misunderstanding; the fact that proteins adsorb to diamond surfaces is experimentally verified and to be expected. Diamond-like carbon is used as a coating in surgical implants and stents, and is biocompatible in the sense that it doesn’t cause cytotoxicity or inflammatory reactions. Its biocompatibility with blood is also good, in the sense that it doesn’t lead to thrombus formation. But this isn’t because proteins don’t adsorb to the surface; it is because there’s preferential adsorption of albumin rather than fibrinogen, which is correlated with a lower tendency of platelets to attach to the surface (see e.g. R. Hauert, Diamond and Related Materials 12 (2003) 583). For direct experimental measurements of protein adsorption to an amorphous diamond-like film see, for example, here. Almost all this work has been done not on single-crystal diamond but on polycrystalline or amorphous diamond-like films, though there’s no reason to suppose the situation will be any different for single crystals; these are simply hydrophobic surfaces of the kind that proteins all too readily adsorb to.

(4) unlike biological molecular motors, thermal noise is not essential to the operation of diamondoid molecular motors;

Indeed, in contrast to the operation of biological motors, which depend on thermal noise, noise is likely to be highly detrimental to the operation of diamondoid motors. This, to state the obvious, is a difficulty in the environment of the body, where such thermal noise is inescapable.

(5) most nanodiamond crystals don’t graphitize if properly passivated;

That depends on what you mean by “most”, I suppose. Raty et al. (Phys. Rev. Lett. 90, 037401 (2003)) carried out quantum simulation calculations showing that 1.2 nm and 1.4 nm ideally terminated diamond particles undergo spontaneous surface reconstruction even at low temperature. The equilibrium surface structure will depend on shape and size, of course, but you won’t know until you do the calculations or have some experiments.

(6) theory has long supported the idea that contacting incommensurate surfaces should easily slide and superlubricity has been demonstrated experimentally, potentially allowing dramatic reductions in friction inside properly designed rigid nanomachinery;

Superlubricity is an interesting phenomenon in which friction falls to very low (though probably non-zero) values when rigid crystalline surfaces are put together out of register and slide past one another. The key phrase above is “properly designed rigid nanomachinery”. Diamond has very low friction macroscopically because it is very stiff, but nanomachines aren’t going to be built out of semi-infinite blocks of the stuff. Measured by, for example, the average relative thermal displacements at 300 K, diamondoid nanomachines are going to be rather floppy. It remains to be seen how important this will be in permitting leakage of energy out of the driving modes of the machine into thermal energy, and we need to see some simulations of dynamic friction in “properly designed rigid nanomachinery”.

(7) it is hardly surprising that nanorobots, like most manufactured objects, must be fabricated in a controlled environment that differs from the application environment;

This is a fair point as far as it goes. But consider why it is that an integrated circuit, made in a controlled ultra-clean environment, works when it is brought out into the scruffiness of my office. It’s because it can be completely sealed off, with traffic in and out of the IC carried out entirely by electrical signals. Our nanobot, on the other hand, will need to communicate with its environment by the actual traffic of molecules, hence the difficulty of the feed-through problem referred to above.

(8) there are no obvious physical similarities between a microscale nanorobot navigating inside a human body (a viscous environment where adhesive forces control) and a macroscale rubber clock bouncing inside a clothes dryer (a ballistic environment where inertia and gravitational forces control);

The somewhat strained nature of this simile illustrates the difficulty of conceiving the very foreign and counter-intuitive nature of the warm, wet, nanoscale world. This is exactly why the mechanical engineering intuitions that underlie the diamondoid nanobot vision are so misleading.

and (9) there have been zero years, not 15 years, of “intense research” on diamondoid nanomachinery (as opposed to “nanotechnology”). Such intense research, while clearly valuable, awaits adequate funding

I have two replies to this. Firstly, even accepting the very narrow restriction to diamondoid nanomachinery, I don’t see how the claim of “zero years” squares with what Freitas and Merkle have been doing themselves – I know that both were employed as research scientists at Zyvex, and subsequently at the Institute for Molecular Manufacturing. Secondly, there has been a huge amount of work in nanomedicine and nanoscience directly related to these issues. For example, the field of manipulation and reaction of individual atoms on surfaces, which directly underlies the visions of mechanosynthesis so important to the Freitas/Merkle route to nanotechnology, dates back to Don Eigler’s famous 1990 Nature paper; that paper has since been cited by more than 1300 other papers, which gives an indication of how much work there has been in this area worldwide.

— as is now just beginning.

And I’m delighted by Philip Moriarty’s fellowship too!

I’ve responded to these points at length, since we frequently read complaints from proponents of MNT that no-one is prepared to debate the issues at a technical level. But I do this with some misgivings. It’s very difficult to prove a negative, and none of my objections amounts to a proof of physical impossibility. But what is not forbidden by the laws of physics is not necessarily likely, let alone inevitable. When one is talking about such powerful human drives as the desire not to die, and the urge to reanimate deceased loved ones, it’s difficult to avoid the conclusion that rational scepticism may be displaced by deeper, older human drives.

Brain interfacing with Kurzweil

The ongoing discussion of Ray Kurzweil’s much-publicised plans for a Singularity University prompted me to take another look at his book “The Singularity is Near”. It also prompted me to look up the full context of the somewhat derogatory quote from Douglas Hofstadter that the Guardian used and that I reproduced in my earlier post. It can be found in this interview: “it’s a very bizarre mixture of ideas that are solid and good with ideas that are crazy. It’s as if you took a lot of very good food and some dog excrement and blended it all up so that you can’t possibly figure out what’s good or bad. It’s an intimate mixture of rubbish and good ideas, and it’s very hard to disentangle the two, because these are smart people; they’re not stupid.” Looking again at the book, it’s clear this is right on the mark. One difficulty is that Kurzweil makes many references to current developments in science and technology, and most readers are going to take it on trust that his account of these developments is accurate. All too often, though, what one finds is that there’s a huge gulf between the conclusions Kurzweil draws from these papers and what they actually say – it’s the process I described in my article The Economy of Promises taken to extremes: “a transformation of vague possible future impacts into near-certain outcomes”. Here’s a fairly randomly chosen, but important, example.

In this prediction, we’re in the year 2030 (p313 in my edition). “Nanobot technology will provide fully immersive, totally convincing virtual reality”. What is the basis for this prediction? “We already have the technology for electronic devices to communicate with neurons in both directions, yet requiring no direct physical contact with the neurons. For example, scientists at the Max Planck Institute have developed “neuron transistors” that can detect the firing of a nearby neuron, or alternatively can cause a nearby neuron to fire or suppress it from firing. This amounts to two-way communication between neurons and the electronic-based neuron transistors. As mentioned above, quantum dots have also shown the ability to provide non-invasive communication between neurons and electronics.” The statements are supported by footnotes, with impressive-looking references to the scientific literature. The only problem is that, if one goes to the trouble of looking up the references, one finds that they don’t say what he says they do.

The reference to “scientists at the Max Planck Institute” is to Peter Fromherz, who has been extremely active in developing ways of interfacing nerve cells with electronic devices – field-effect transistors, to be precise. I discussed this research in an earlier post – Brain chips – and the paper cited by Kurzweil is Weis and Fromherz, Phys. Rev. E 55, 877 (1997) (abstract). Fromherz’s work does indeed demonstrate two-way communication between neurons and transistors. However, it emphatically does not do this in a way that requires no physical contact with the neurons – the neurons need to be in direct contact with the gate of the FET, and this is achieved by culturing neurons in situ. This restricts the method to specially grown, two-dimensional arrays of neurons, not real brains. The method hasn’t been demonstrated to work in vivo, and it’s actually rather difficult to see how this could be done. As Fromherz himself says, “Of course, visionary dreams of bioelectronic neurocomputers and microelectronic neuroprostheses are unavoidable and exciting. However, they should not obscure the numerous practical problems.”

What of the quantum dots, that “have also shown the ability to provide non-invasive communication between neurons and electronics”? The paper referred to here is Winter et al., Recognition Molecule Directed Interfacing Between Semiconductor Quantum Dots and Nerve Cells, Advanced Materials 13, 1673 (2001).