Science Horizons

One of the problems with events that aim to gauge public views about emerging issues like nanotechnology is that it isn’t always easy to provide information in the right format, or to account for the fact that much of the publicly available information is contested and controversial in ways that are difficult to appreciate unless one is deeply immersed in the subject. It’s also very difficult for anybody – lay person or expert – to judge what impact any particular development in science or technology might actually have on everyday life. Science Horizons is a public engagement project that is trying to deal with this problem. The project is funded by the UK government; its aim is to start a public discussion about the possible impacts of future technological change by providing a series of stories about possible futures, each focusing on the everyday dilemmas that people may face.

The stories, which are available in interactive form on the Science Horizons website, focus on issues like human enhancement, privacy in a world of universal surveillance, and problems of energy supply. These, of course, will be very familiar to most readers of this blog. The scenarios are very simple, but they draw on the large amount of work that has been done for the UK government recently by its new Horizon Scanning Centre, which reports to the Government’s Chief Scientific Advisor, Sir David King. This centre published its first outputs earlier this year: the Sigma Scan, concentrating on broader social, economic, environmental and political trends, and the Delta Scan, concentrating on likely developments in science and technology.

The idea is that the results of the public engagement work based on the Science Horizons material will inform the work of the Horizon Scanning centre as it advises government about the policy implications of these developments.

Nanoscale swimmers

If you were able to make a nanoscale submarine to fulfil the classic “Fantastic Voyage” scenario of swimming through the bloodstream, how would you power and steer it? As readers of my book “Soft Machines” will know, our intuitions are very unreliable guides to the wet nanoscale world, and the design principles that work on the human scale simply won’t work on the nanoscale. Swimming is a good example: on small scales water behaves not as the free-flowing liquid we are used to on the human scale, but as a medium in which viscosity utterly dominates. To get a feel for what it would be like to try to swim on the nanoscale, one has to imagine trying to swim through the most viscous molasses. In my group we’ve been doing some experiments to demonstrate the realisation of one scheme for making a nanoscale object swim, the results of which are summarised in this preprint (PDF), “Self-motile colloidal particles: from directed propulsion to random walk”.

The brilliantly simple idea underlying these experiments was thought up by my colleague and co-author, Ramin Golestanian, together with his fellow theoretical physicists Tannie Liverpool and Armand Ajdari, and was analysed theoretically in a recent paper in Physical Review Letters, “Propulsion of a Molecular Machine by Asymmetric Distribution of Reaction Products” (abstract here, subscription required for full paper). If a particle has a patch of catalyst on one side, and that catalyst drives a reaction which produces more product molecules than it consumes in fuel molecules, then the solution around the particle ends up more concentrated on one side than the other. This leads to an osmotic pressure gradient, which in turn results in a force that pushes the particle along.

Jon Howse, a postdoc working in my group, has made an experimental system that realises this theoretical scheme. He coated micron-sized polystyrene particles, on one side only, with platinum, which catalyses the reaction by which hydrogen peroxide is broken down into water and oxygen: every two hydrogen peroxide molecules consumed yield two water molecules and one oxygen molecule. Using optical microscopy, he tracked the motion of particles in four different situations. In three of these – control particles, uncoated with platinum, in both water and hydrogen peroxide solution, and coated particles in plain water – he found identical results: the expected Brownian motion of a micron-sized particle. But when the coated particles were put in hydrogen peroxide solution, they clearly moved further and faster.
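
For readers who want to analyse this kind of tracking data for themselves, the quantity to compute is the mean-squared displacement as a function of lag time. Here is a minimal sketch in Python (this is not our analysis code, and the trajectory below is just illustrative random data standing in for real particle tracks):

```python
import numpy as np

def mean_squared_displacement(x, y, max_lag):
    """Time-averaged MSD of a single 2D trajectory, for lag times 1..max_lag frames."""
    msd = np.empty(max_lag)
    for lag in range(1, max_lag + 1):
        dx = x[lag:] - x[:-lag]
        dy = y[lag:] - y[:-lag]
        msd[lag - 1] = np.mean(dx**2 + dy**2)
    return msd

# Illustrative only: a fake Brownian trajectory in place of real tracking data.
rng = np.random.default_rng(0)
steps = rng.normal(scale=0.05, size=(2, 10_000))  # per-frame displacements, in microns
x, y = np.cumsum(steps, axis=1)
msd = mean_squared_displacement(x, y, max_lag=100)
# For pure Brownian motion the MSD grows linearly with lag time; a propelled
# particle shows an extra, faster-growing contribution at short times.
```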

Detailed analysis of the particle motion showed that, in addition to the Brownian motion that all micron-sized particles are subject to, the propelled particles moved with a velocity that depended on the concentration of the hydrogen peroxide fuel – the more fuel that was present, the faster they went. But Brownian motion is still present, and it has an important effect even on the fastest propelled particles. Brownian motion makes particles rotate randomly as well as jiggle around, so the propelled particles don’t travel in straight lines. In fact, at longer times the effect of the random rotation is to make the particles revert to a random walk, albeit one in which the step length is essentially the propulsion velocity multiplied by the characteristic time for rotational diffusion. This kind of motion has an interesting analogy with the way bacteria move when they are swimming. Bacteria, if they are trying to swim towards food, don’t simply swing the rudder round and propel themselves directly towards it. Like our particles, they actually perform a kind of random walk in which stretches of straight-line motion are interrupted by episodes in which they change direction – this has been called run-and-tumble motion. Counterintuitively, it seems that this is a better strategy for getting around in the nanoscale world, in which the random jostling of Brownian motion is unavoidable. What the bacteria do is change the length of time for which they move in a straight line according to whether they are getting closer to or further from their food source. If we could do the same trick in our synthetic system – changing the length of the run time – that would suggest a strategy for steering our nanoscale submarines, as well as propelling them.
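
To put the last point on a slightly more quantitative footing, the standard result for a self-propelled Brownian particle (a textbook-style sketch rather than a quotation from our preprint; the prefactors assume the motion is tracked in two dimensions) is that a particle propelled at speed v, with its orientation randomised by rotational diffusion over a characteristic time τ_R, moves ballistically at short times and reverts to an enhanced random walk at long times:

```latex
% Limiting forms of the mean-squared displacement for a self-propelled particle:
% v = propulsion speed, D = translational diffusion coefficient,
% \tau_R = rotational relaxation time; prefactors assume 2D tracking.
\langle \Delta r^{2} \rangle \simeq
\begin{cases}
4Dt + v^{2}t^{2}, & t \ll \tau_R \quad \text{(directed, ballistic regime)} \\[4pt]
\left(4D + v^{2}\tau_R\right) t, & t \gg \tau_R \quad \text{(enhanced random walk)}
\end{cases}
\qquad\Longrightarrow\qquad
D_{\mathrm{eff}} = D + \tfrac{1}{4}\, v^{2} \tau_R .
```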

Brain chips

There can be few more potent ideas in futurology and science fiction than that of the brain chip – a direct interface between the biological information processing systems of the brain and nervous system and the artificial information processing systems of microprocessors and silicon electronics. It’s an idea that underlies science fiction notions of “jacking in” to cyberspace, or uploading one’s brain, but it also provides hope to the severely disabled that lost functions and senses might be restored. It’s one of the central notions in the idea of human enhancement: perhaps through a brain chip one might increase one’s cognitive power in some way, or have direct access to massive banks of data. Because of the potency of the idea, even the crudest scientific developments tend to be reported in the most breathless terms. Stripping away some of the wishful thinking, what are the real prospects for this kind of technology?

The basic operations of the nervous system are pretty well understood, even if the complexities of higher-level information processing remain obscure and the problem of consciousness is a truly deep mystery. The basic units of the nervous system are the highly specialised, excitable cells called neurons. Information is carried long distances by the propagation of pulses of voltage along long extensions of the cell called axons, and transferred between different neurons at junctions called synapses. Although the pulses carrying information are electrical in character, they are very different from the electrical signals carried in wires or through semiconductor devices. They arise from the fact that the contents of the cell are kept out of equilibrium with their surroundings by pumps which selectively transport charged ions across the cell membrane, resulting in a voltage across the membrane. This voltage can be relaxed when channels in the membrane, which are triggered by changes in voltage, open up. The information-carrying impulse is actually a shock wave of reduced membrane potential, enabled by the transport of ions through the membrane.
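
For orientation, the size of the membrane voltage associated with a single ionic species at equilibrium is given by the textbook Nernst equation (quoted here just to put a scale on things; the real resting potential involves several ionic species, plus the pumps that keep them out of equilibrium):

```latex
% Nernst potential for one ionic species: R = gas constant, T = temperature,
% z = ionic charge number, F = Faraday constant. For potassium at 37 C, with
% roughly thirty times more K+ inside the cell than outside, this gives
% V_eq ~ 27 mV x ln(1/30) ~ -90 mV, comparable to measured resting potentials.
V_{\mathrm{eq}} \;=\; \frac{RT}{zF}\,\ln\!\frac{[\mathrm{ion}]_{\mathrm{out}}}{[\mathrm{ion}]_{\mathrm{in}}}
```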

To find out what is going on inside a neuron, one needs to be able to measure the electrochemical potential across the membrane. Classically, this is done by inserting an electrochemical electrode into the interior of the nerve cell. The original work, carried out by Hodgkin, Huxley and others in the 1950s, used squid neurons, because they are particularly large and easy to handle. So, in principle, one could get a readout of the state of a human brain by measuring the potential at a representative series of points in each of its neurons. The problem, of course, is that there are a phenomenal number of neurons to be studied – around 20 billion in a human brain. Current technology has managed to miniaturise electrodes and pack them into quite dense arrays, allowing the simultaneous study of many neurons. A recent paper (Custom-designed high-density conformal planar multielectrode arrays for brain slice electrophysiology, PDF) from Ted Berger’s group at the University of Southern California shows a good example of the state of the art – an array of 64 electrodes, each 28 µm in diameter and separated by 50 µm. These electrodes can both read the state of a neuron and stimulate it. This kind of electrode array forms the basis of brain interfaces that are close to clinical trials – for example, the BrainGate product.

In a rather different class from these direct, but invasive, probes of nervous system activity at the single-neuron level are some powerful but indirect measures of brain activity, such as functional magnetic resonance imaging and positron emission tomography. These don’t directly measure the electrical activity of neurons, either individually or in groups; instead they rely on the fact that thinking is (literally) hard work, and locally raises the rate of metabolism. Functional MRI and PET allow one to localise nervous activity to within a few cubic millimetres, which is hugely revealing in terms of identifying which parts of the brain are involved in which kinds of mental activity, but it remains a long way from the goal of unpicking the brain’s activity at the level of individual neurons.

There is another approach that does probe activity at the single-neuron level, but doesn’t involve the invasive procedure of inserting an electrode into the nerve itself: the neuron-silicon transistors developed in particular by Peter Fromherz at the Max Planck Institute for Biochemistry. These really are nerve chips, in that there is a direct interface between neurons and silicon microelectronics of the sort that can be highly miniaturised and integrated. On the other hand, these methods are currently restricted to two dimensions, and require careful control of the growing medium that seems to rule out, or at least present big problems for, in-vivo use.

The central ingredient of this approach is a field effect transistor which is gated by the excitation of a nerve cell in contact with it (i.e., the current passed between the source and drain contacts of the transistor depends strongly on the voltage state of the membrane in proximity to the insulating gate dielectric layer). This provides a read-out of the state of a neuron; input to the neurons can be provided by capacitors fabricated on the same chip. The basic idea was established 10 years ago – see, for example, Two-Way Silicon-Neuron Interface by Electrical Induction. The strength of this approach is that it is entirely compatible with the powerful methods of miniaturisation and integration of CMOS planar electronics. In more recent work, an individual mammalian nerve cell has been probed – “Signal Transmission from Individual Mammalian Nerve Cell to Field-Effect Transistor” (Small, 1 p 206 (2004), subscription required) – and an integrated circuit with 16384 probes, capable of imaging a neural network with a resolution of 7.8 µm, has been built – “Electrical imaging of neuronal activity by multi-transistor-array (MTA) recording at 7.8 µm resolution” (abstract, subscription required for full article).
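
As a rough sanity check on what 16384 probes at 7.8 µm resolution means in terms of coverage, the arithmetic is simple (this assumes, plausibly but it isn’t stated above, that the sensor sites sit on a square grid):

```python
# Rough geometry of the multi-transistor array described above.
# Assumption (not stated in the text): the 16384 sites form a square grid.
n_sites = 16_384
pitch_um = 7.8                            # centre-to-centre spacing in micrometres

side_sites = int(round(n_sites ** 0.5))   # 128 sites on a side
side_um = side_sites * pitch_um           # ~1000 micrometres, i.e. about 1 mm
area_mm2 = (side_um / 1000) ** 2          # ~1 mm^2 of tissue covered

print(side_sites, side_um, round(area_mm2, 2))   # 128 998.4 1.0
```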

Fromherz’s group have demonstrated two types of hybrid silicon/neuron circuits (see, for example, this review “Electrical Interfacing of Nerve Cells and Semiconductor Chips”, abstract, subscription required for full article). One circuit is a prototype for a neural prosthesis – an input from a neuron is read by the silicon electronics, which does some information processing and then outputs a signal to another neuron. Another, inverse, circuit is a prototype of a neural memory on a chip. Here there’s an input from silicon to a neuron, which is connected to another neuron by a synapse. This second neuron makes its output to silicon. This allows one to use the basic mechanism of neural memory – the fact that the strength of the connection at the synapse can be modified by the type of signals it has transmitted in the past – in conjunction with silicon electronics.

This is all very exciting, but Fromherz cautiously writes: “Of course, visionary dreams of bioelectronic neurocomputers and microelectronic neuroprostheses are unavoidable and exciting. However, they should not obscure the numerous practical problems.” Among the practical problems are the fact that it seems difficult to extend the method into in-vivo applications, it is restricted to two dimensions, and the spatial resolution is still quite large.

Pushing down to smaller sizes is, of course, the province of nanotechnology, and a couple of interesting recent papers suggest directions this might take in the future.

Charles Lieber at Harvard has taken the basic idea of the neuron-gated field effect transistor and executed it using FETs made from silicon nanowires. A paper published last year in Science – Detection, Stimulation, and Inhibition of Neuronal Signals with High-Density Nanowire Transistor Arrays (abstract, subscription needed for full article) – demonstrated that this method permits the excitation and detection of signals from a single neuron with a resolution of 20 nm. This is enough to follow the progress of a nerve impulse along an axon, giving a picture of what’s going on inside a living neuron with unprecedented resolution. But it’s still restricted to systems in two dimensions, and it only works when one has cultured the neurons one is studying.

Is there any prospect, then, of mapping out in a non-invasive way the activity of a living brain at the level of single neurons? This still looks a long way off. A paper from the group of Rodolfo Llinas at the NYU School of Medicine makes an ambitious proposal. The paper – Neuro-vascular central nervous recording/stimulating system: Using nanotechnology probes (Journal of Nanoparticle Research (2005) 7: 111–127, subscription only) – points out that if one could detect neural activity using probes within the capillaries that supply oxygen and nutrients to the brain’s neurons, one would be able to reach right into the brain with minimal disturbance. They have demonstrated the principle in-vitro using a 0.6 µm platinum electrode inserted into one of the capillaries supplying the neurons in the spinal cord. Their proposal is to further miniaturise the probe using 200 nm diameter polymer nanowires, and they further suggest making the probe steerable using electrically stimulated shape changes – “We are developing a steerable form of the conducting polymer nanowires. This would allow us to steer the nanowire-probe selectively into desired blood vessels, thus creating the first true steerable nano-endoscope.” Of course, even one steerable nano-endoscope is still a long way from sampling a significant fraction of the 25 km of capillaries that service the brain.

So, in some senses the brain chip is already with us. But there’s a continuum of complexity and sophistication in such devices, and we’re still a long way from the science fiction vision of brain downloading. In the sense of creating an interface between the brain and the outside world, the brain chip is clearly possible now and has in some form already been realised. Hybrid structures which combine the information processing capabilities of silicon electronics with nerve cells cultured outside the body are very close. But a full, two-way integration of the brain and artificial information processing systems remains a long way off.

Keeping on keeping on

There are some interesting reflections on the recent Ideas Factory Software Control of Matter from the German journalist Niels Boeing, in a piece called Nano-Elvis vs Nano-Beatles. He draws attention to the irony that a research programme with such a Drexlerian feel had as its midwife someone like me, who has been such a vocal critic of Drexlerian ideas. The title comes from an analogy which I find very flattering, if not entirely convincing – roughly translated from the German, he says: “It’s intriguingly reminiscent of the history of pop music, which developed through a transatlantic exchange. The American Elvis began things, but it was the British Beatles who really got the epochal phenomenon rolling. The solo artist Drexler launched his vision on the world, but in practice the crucial developments could be made by a British big band of researchers. We have just one wish for the Brits – keep on rocking!” Would that it were so.

In other media, there’s an article by me in the launch issue of the new nanotechnology magazine from the UK’s Institute of Nanotechnology – NanoNow! (PDF, freely downloadable). My article has the strap-line “Only Skin Deep – Cosmetics companies are using nano-products to tart up their face creams and sun lotions. But are they safe? Richard A.L. Jones unmasks the truth.” I certainly wouldn’t claim to unmask the truth about controversial issues like the use of C60 in face creams, but I hope I managed to shed a little light on a very murky and much-discussed subject.

My column in Nature Nanotechnology this month is called “Can nanotechnology ever prove that it is green?” This is only available to subscribers. As Samuel Johnson wrote, “No man but a blockhead ever wrote, except for money.” I don’t think he would have approved of blogs.

Do naturally formed nanoparticles make ball lightning?

Ball lightning is an odd and obscure phenomenon; reports describe glowing globes the size of footballs, which float along at walking speed, sometimes entering buildings, and whose existence sometimes comes to an end with a small explosion. Observations are generally associated with thunderstorms. I’ve never seen ball lightning myself, though when I was a physics undergraduate at Cambridge in 1982 there was a famous sighting in the Cavendish Laboratory itself. This rather elusive phenomenon has generated a huge range of potential explanations, ranging from the exotic (anti-matter meteorites, tiny black holes) to the frankly occult. But there seems to be growing evidence that ball lightning may in fact be the manifestation of slowly combusting, loose aggregates of nanoparticles formed by the contact of lightning bolts with the ground.

The idea that ball lightning consists of very low-density aggregates of finely divided material originates with a group of Russian scientists. A pair of scientists from New Zealand, Abrahamson and Dinnis, showed some fairly convincing electron micrographs of chains of nanoparticles produced by the contact of electrical discharges with the soil, as reported in this 2000 Nature paper (subscription required for full paper). Abrahamson’s theory is also described in this news report from 2002, while a whole special issue of the Royal Society’s journal Philosophical Transactions from that year puts the Abrahamson theory in context with the earlier Russian work and the observational record. The story is brought up to date with some very suggestive experimental results reported a couple of weeks ago in the journal Physical Review Letters, in a letter entitled Production of Ball-Lightning-Like Luminous Balls by Electrical Discharges in Silicon (subscription required for full article), by a group from the Universidade Federal de Pernambuco in Brazil. In their very simple experiment, an electric arc was struck against a silicon wafer in ambient conditions. This produced luminous balls, from 1–4 cm in diameter, which moved erratically along the ground, sometimes squeezing through gaps, and disappeared after 2–5 seconds leaving no apparent trace. Their explanation is that the discharge created silicon nanoparticles which aggregated to form very open, low-density clusters, and these subsequently oxidised to produce the heat that made the balls glow.

The properties of nanoparticles which make this explanation at least plausible are fairly familiar. They have a very high surface area, and so are substantially more reactive than their parent bulk materials. They can aggregate into very loose, fractal structures whose effective density can be very low (not much greater, it seems in this case, than that of air itself). And they can be made by a variety of physical processes, some of which occur in nature.
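
The first of these points is easy to quantify: for a spherical particle the surface-to-volume ratio goes inversely with the radius, so dividing a given mass of material into ever smaller particles multiplies the reactive surface area by the same factor by which the radius shrinks (a standard geometric estimate, not a figure from the paper):

```latex
% Surface-to-volume ratio of a sphere of radius r. Dividing the same mass of
% material from 1 mm grains into 100 nm particles (a factor of 10^4 in radius)
% increases the total surface area by the same factor of 10^4.
\frac{A}{V} \;=\; \frac{4\pi r^{2}}{\tfrac{4}{3}\pi r^{3}} \;=\; \frac{3}{r}
```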

Al Gore’s global warming roadshow

Al Gore visited Sheffield University yesterday, so I joined the growing number of people round the world who have seen his famous Powerpoint presentation on global warming (to be accurate, he did it in Keynote, being a loyal Apple board member). As a presentation it was undoubtedly powerful, slick, sometimes moving, and often very funny. His comic timing has clearly got a lot better since he was a Presidential candidate, even though some of his jokes didn’t cross the Atlantic very effectively. However, it has to be said that they worked better than the efforts of Senator George Mitchell, who introduced him. It is possible that Gore’s rhetorical prowess was even further heightened by the other speakers who preceded him; these included a couple of home-grown politicians, a regional government official and a lawyer, none of whom were exactly riveting. But, it’s nonetheless an interesting signal that this event attracted an audience of this calibre, including one government minister and an unannounced appearance by the Deputy Prime Minister.

Since a plurality of the readers of this blog are from the USA, I need to explain that this is one way in which the politics of our two countries fundamentally differ. None of the major political parties doubts the reality of anthropogenic climate change, and indeed there is currently a bit of an auction between them about who takes it most seriously. The ruling Labour Party commissioned a distinguished economist to write the Stern Report, a detailed assessment of the potential economic costs of climate change and of the cost-effectiveness of taking measures to combat it, and gave Al Gore an official position as an advisor on the subject. Gore’s UK apotheosis has been made complete by the announcement that the government is to issue all schools with a copy of his DVD “An Inconvenient Truth”. This announcement was made, in response to the issue of the latest IPCC summary for policy makers (PDF), by David Miliband, the young and undoubtedly very clever environment minister, who is often spoken of as being destined for great things in the future, and has been recently floating some very radical, even brave, notions about personal carbon allowances. The Conservatives, meanwhile, have demonstrated their commitment to alternative energy by their telegenic young leader David Cameron sticking a wind-turbine on top of his Notting Hill house. It’s gesture politics, of course, but an interesting sign of the times. The minority third party, the Liberal Democrats, believe they invented this issue long ago.

What does this mean for the policy environment, particularly as it affects science policy? The government’s Chief Scientific Advisor, Sir David King, has long been a vocal proponent of the need for urgent action on energy and climate. Famously, he went to the USA a couple of years ago to announce that climate change was a bigger threat than terrorism, to the poorly concealed horror of a flock of diplomats and civil servants. But (oddly, one might think), Sir David doesn’t actually directly control the science budget, so it isn’t quite the case that the entire £3.4 billion (i.e., nearly $7 billion) will be redirected to a combination of renewables research and nuclear (which Sir David is also vocally in favour of). Nonetheless, one does get the impression that a wall of money is just about to be thrown at energy research in general, to the extent that it isn’t entirely obvious that the capacity is there to do the research.

Integrating nanosensors and microelectronics

One of the most talked-about near-term applications of nanotechnology is in nanosensors – devices which can detect the presence of specific molecules at very low concentrations. There are some obvious applications in medicine; one can imagine tiny sensors implanted in one’s body, which continuously monitor the concentration of critical biochemicals, or the presence of toxins and pathogens, allowing immediate corrective action to be taken. A paper in this week’s edition of Nature (editor’s summary here, subscription required for full article) reports an important step forward – a nanosensor made using a process that is compatible with the standard methods for making integrated circuits (CMOS). This makes it much easier to imagine putting these nanosensors into production and incorporating them into reliable, easy-to-use systems.

The paper comes from Mark Reed’s group at Yale. The fundamental principle is not new – the idea is that one applies a voltage across a very thin semiconductor nanowire. If molecules adsorb at the interface between the nanowire and the solution, there is a change in electrical charge at the interface. This creates an electric field which has the effect of changing the electrical conductivity of the nanowire; the amount of current flowing through the wire then tells you how many molecules have stuck to the surface. By coating the surface with molecules that specifically bind the chemical one wants to look for, one can make the sensor specific for that chemical. Clearly, the thinner the wire, the bigger the proportional effect of the surface, hence the need to use nanowires to make very sensitive sensors.
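
To see why thinner is better, here is a back-of-envelope version of the argument (my own sketch, not an expression from the paper): the adsorbed molecules deposit a surface charge density σ around the perimeter P of the wire, and this gates carriers in or out of the cross-section A, which contains mobile charge at density n. The fractional change in conductance therefore scales with the surface-to-bulk ratio P/A, which grows as the wire gets thinner.

```latex
% Back-of-envelope gating estimate (illustrative, not taken from the paper):
%   sigma = surface charge density from adsorbed molecules
%   P, A  = perimeter and cross-sectional area of the nanowire
%   q, n  = electronic charge and carrier density in the semiconductor
\frac{\Delta G}{G} \;\sim\; \frac{\sigma\,P}{q\,n\,A}
```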

In the past, though, such nanowire sensors have been made by chemical processes, and then painstakingly wired up to the necessary micro-circuit. What the Reed group has done is to devise a way of making the nanowires in situ on the same silicon wafer that is used to make the rest of the circuitry, using the standard techniques that are used to make microprocessors. This makes it possible to envisage scaling up production of these sensors to something like a commercial scale, and integrating them into a complete electronic system.

How sensitive are these devices? In a test case, using a very well known protein-receptor interaction, they were able to detect a specific protein at a concentration of 10 fM – that translates to 6 billion molecules per litre. As expected, small sensors are more sensitive than large ones; a typical small sensor had a nanowire 50 nm wide and 25 nm thick. From the published micrograph, the total size of the sensor is about 20 microns by 45 microns.
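
The conversion quoted above is just Avogadro’s-number arithmetic; here is a two-line check (nothing in it comes from the paper except the 10 fM figure):

```python
# Convert the quoted detection limit of 10 femtomolar into molecules per litre.
N_AVOGADRO = 6.022e23          # molecules per mole
conc_molar = 10e-15            # 10 fM = 1e-14 mol per litre
molecules_per_litre = conc_molar * N_AVOGADRO
print(f"{molecules_per_litre:.1e}")   # ~6.0e+09, i.e. about 6 billion per litre
```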

The pharmaceutical nanofactory

Drug delivery is becoming one of the most often cited applications of nanotechnology in the medical arena. For the kind of very toxic molecules that are used in cancer therapy, for example, substantial increases in effectiveness, and reductions in side-effects, can be obtained by wrapping the molecule in a protective wrapper – a liposome, for example – which isolates it from the body until it reaches its target. Drug delivery systems of this kind are already in clinical use, as I discussed here. But what if, instead of making these drugs in a pharmaceutical factory and wrapping them up in a nanoscale container for injection into the body, you put the factory inside the delivery device, and synthesised the drug when it was needed, where it was needed, inside the body? This intriguing possibility is discussed in a commentary (subscription probably required) in the January issue of Nature Nanotechnology. The article is itself based on a discussion held at a National Academies Keck Futures Initiative conference, which is summarised here.

One of the reasons for wanting to do this is to be able to make drug molecules that aren’t stable enough to be synthesised in the usual way. In a related vein, such a medical nanofactory might be used to help the body dispose of molecules it can’t otherwise process – one example the authors give is phenylketonuria, a relatively common condition in which the amino acid phenylalanine, instead of being converted to tyrosine, is converted to phenylpyruvic acid, whose accumulation causes incurable brain damage.

What might one need to achieve this goal? The first requirement is a container to separate the chemical machinery from the body. The most likely candidates for such a container are probably polymersomes, robust spherical containers self-assembled from block copolymers. The other requirements for the nanofactory are perhaps less easy to fulfill; one needs ways of getting chemicals in and out of the nanofactory, one needs sensing functions on the outside to tell the nanofactory when it needs to start production, one needs the apparatus to do the chemistry (perhaps a system of enzymes or other catalysts), one needs to be able to target the nanofactory to where one needs it, and finally, one needs to ensure that the nanofactory can be safely disposed of when it has done its work. Cell biology suggests ways to approach some of these requirements, for example one can imagine analogues to the pores and channels which transport molecules through cell membranes. None of this will be easy, but the authors suggest that it would constitute “a platform technology for a variety of therapeutic approaches”.

Nanotechnology discussion on the American Chemical Society website

I am currently participating in a (ahem…) “blogversation” about nanotechnology on the website run by the publications division of the American Chemical Society. There’s an introduction to the event here, and you can read the first entry here; the conversation has got started around those hoary issues of nanoparticle toxicity and nanohype. Contributors, besides me, include David Berube, Janet Stemwedel, Ted Sargent, and Rudy Baum, Editor in Chief of Chemical and Engineering News.

New projects for the Software Control of Matter

The Ideas Factory on Software Control of Matter that has been dominating my life for the last couple of weeks has produced its outcome, and brief descriptions of the three projects that are likely to go forward for funding have been published on the Ideas Factory blog. There are two experimental projects: Software-controlled assembly of oligomers aims to build a machine that synthesises a controlled sequence of molecular building blocks from a sequence coded by a stretch of DNA, while Directed Reconfigurable Nanomachines aims to use the positional assembly of molecules and nanoscale building blocks to make prototype functional nanoscale devices. The third project, The Matter Compiler, brings together computer science, computational chemistry and materials science to prototype the engineering control and computer science aspects of directed molecular assembly. Between them, these projects will initially be funded to the tune of the £1.5 million set aside for the Ideas Factory, but there’s no doubt in my mind that the ideas generated during the week are worthy of a lot more support than this in the future.