Graphene and the foundations of physics

Graphite, familiar from pencil leads, is a form of carbon consisting of stacks of sheets, each of which is a hexagonal mesh of carbon atoms. The sheets are held together only weakly; this is why graphite is such a good lubricant, and when you run a pencil across a piece of paper the mark is made of rubbed-off sheets. In 2004, Andre Geim and Konstantin Novoselov, from the University of Manchester, made the astonishing discovery that you could obtain large, near-perfect sheets of graphite only one atom thick, simply by rubbing graphite against a single-crystal silicon substrate – these sheets are called graphene. Even more amazing were the electronic properties of these sheets – they conduct electricity, and the electrons move through the material at great speed and with very few collisions. There’s been a gold-rush of experiments since 2004, uncovering the remarkable physics of this material. All this is reviewed in a recent article by Geim and Novoselov – The rise of graphene (Nature Materials 6, p183, 2007). It’s worth taking a look at Geim’s group website, which contains many downloadable papers and articles – Geim is a remarkably creative, original and versatile scientist; besides his discoveries in the graphene field, he’s done very significant work on optical metamaterials and gecko-like nanostructured adhesives, not to mention his notorious frog-levitation exploits.

From the technological point of view, the very high electron mobility of graphene, and the possibility of shrinking the dimensions of graphene-based devices right down to atomic dimensions, make it a very attractive candidate for electronics when the further miniaturisation of silicon-based devices stalls.

At the root of much of the strange physics of graphene is the fact that electrons in it behave like highly relativistic, massless particles. This arises from the way the electrons interact with the regular, two-dimensional lattice of carbon atoms. Normally, when an electron (which, according to quantum mechanics, we need to think of as a wave) moves through a lattice of ions, the scattering of the wave from the ions, and the interference between the scattered waves, means that the electron behaves as if it had a different mass from its real, free-space value. But in graphene the effective mass is zero: the energy is simply proportional to the wave-vector, as for a photon, rather than being proportional to the square of the wave-vector, as it would be for a normal, non-relativistic particle with mass.
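
To make the contrast concrete, the two dispersion relations can be written down directly. For an ordinary electron of (effective) mass m the energy grows quadratically with the wave-vector k, whereas for the carriers in graphene it grows linearly, with the Fermi velocity v_F (roughly 10^6 m/s, about 1/300th of the speed of light) playing the role that c plays for a photon:

    E_{\text{ordinary}}(k) = \frac{\hbar^2 k^2}{2m}, \qquad E_{\text{graphene}}(k) = \hbar v_F |k|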

The weird way in which electrons in graphene mimic ultra-relativistic particles allows one to test predictions of quantum field theory that would be inaccessible to experiments using fundamental particles. Geim writes about this in this week’s Nature, under the provocative title Could string theory be testable? (subscription needed). Graphene is an example in which, from the complexity of the interactions between electrons and a two-dimensional lattice of ions, simple behaviour emerges that seems to be well described by the theories of fundamental high-energy physics. Geim asks “could we design condensed-matter systems to test the supposedly non-testable predictions of string theory too?” The other question to ask, though, is whether what we think of as the fundamental laws of physics, such as quantum field theory, themselves emerge from some complex inner structure that remains inaccessible to us.

Quaint folk notions of nanotechnologists

Most of us get through our lives with the help of folk theories – generalisations about the world that may have some grounding in experience, but which are not systematically checked in the way that scientific theories might be. These theories can be widely shared amongst a group with common interests, and they serve both as lenses through which to view and interpret the world and as guides to action. Nanotechnologists aren’t exempt from the grip of such folk theories, and Arie Rip, from the University of Twente, one of the leading lights in European science studies, has recently published an analysis of them – Folk theories of nanotechnologists (PDF), (Science as Culture 15, p349, 2006).

He identifies three clusters of folk theories. The first is the idea that new technologies inevitably follow a “wow-to-yuck” trajectory, in which initial public enthusiasm for the technology is followed by a backlash. The exemplar of this phenomenon is the reaction to genetically modified organisms, which, it is suggested, followed exactly this pattern, with widespread acceptance in the ’70s, then a backlash in the ’80s and ’90s. Rip suggests that this doesn’t at all represent the real story of GMOs, and questions the underlying characterisation of the public as essentially fickle.

Another folk theory of nanotechnology implies a similar narrative of initial enthusiasm followed by subsequent disillusionment; this is the “cycle of hype” idea popularised by the Gartner Group. The idea is that all new technologies are initially accompanied by a flurry of publicity and unrealistic expectations, leading to a “peak of inflated expectations”. This is inevitably followed by disappointment and loss of public interest, and the technology falls into a “trough of disillusionment”. Only then does the technology start to deliver, with a “slope of enlightenment” leading to a “plateau of productivity”, in which the technology brings real benefits, albeit less dramatic ones than those promised in the first stage of the cycle. Rip regards this as a plausible storyline masquerading as an empirical finding. But the key issue he identifies at its core is the degree to which it is regarded as acceptable – or even necessary – to exaggerate claims about the impact of a technology. In Rip’s view, we have seen a divergence in strategies between the USA and Europe, with advocates of nanotechnology in Europe making much more modest claims (and thus perhaps positioning themselves better for the aftermath of a bubble bursting).

Rip’s final folk theory concerns how nanotechnologists view the public. In his view, nanotechnologists are excessively concerned about public concern, projecting onto the public a fear of the technology out of all proportion to what empirical studies actually find. Of course, this is connected to the folk theory about GMOs implicit in the “wow-to-yuck” theory. The most telling example Rip offers is the widespread fear amongst nanotechnology insiders that a film of Michael Crichton’s thriller “Prey” would lead to a major backlash. Rip diagnoses a widespread outbreak of nanophobia-phobia.

The act of creation – or just scrapheap challenge?

It was fairly predictable that last Saturday’s headline in the Guardian about Craig Venter’s latest synthetic biology activities – I am creating artificial life, declares US gene pioneer – would generate some reaction from that paper’s readers. The form of that reaction, though, wasn’t, as one might have expected, outrage about scientists “playing God”, or worries about the potential dangers of a supercharged version of genetic modification. Instead, yesterday the paper printed an extended response from Nick Gay, a biochemist at the University of Cambridge.

Gay makes the (to me, entirely reasonable) point that you can’t really describe this as creating life from scratch; it’s “as if he had selected a set of car parts, assembled them into a car and then claimed to have invented the car”. Gay’s own research is into the intricacies and complexities of cellular signalling, so perhaps it is not surprising that he regards the thinking underlying Venter’s approach as “the crudest and most facile kind of reductionism”. It would be interesting to know how widely his point of view is shared by other biochemists and molecular biologists.

Nobels, Nanoscience and Nanotechnology

It’s interesting to see how various newspapers have reported yesterday’s award of the physics Nobel prize to the discoverers of giant magnetoresistance (GMR). Most have picked up on the phrase used in the press release of the Nobel Foundation, that this was “one of the first real applications of the promising field of nanotechnology”. Of course, this raises the question of what we should then make of all those things listed in the various databases of nanotechnology products, such as the famous sunscreens and stain-resistant fabrics.

References to iPods are compulsory, and this is entirely appropriate. It is quite clear that GMR is directly responsible for making possible the miniaturised hard disk drives on which entirely new product categories, such as hard disk MP3 players and digital video recorders, depend. The more informed papers (notably the Financial Times and the New York Times) have noticed that one name was missing from the award – Stuart Parkin, a physicist at IBM’s Almaden Research Center in California, who was arguably the person who took the basic discovery of GMR and did the demanding science and technology needed to make a product out of it.

The Nobel Prize for Chemistry announced today also highlights the relationship between nanoscience and nanotechnology. It went to Gerhard Ertl, of the Fritz-Haber-Institut in Berlin, for his contributions to surface chemistry. In particular, using the powerful tools of nanoscale surface science, he was able to elucidate the fundamental mechanisms operating in catalysis. For example, he worked out the basic steps of the Haber-Bosch process. A large proportion of the world’s population quite literally depends for their lives on the Haber-Bosch process, which artificially fixes nitrogen from the atmosphere to make the fertilizer on which the high crop yields that feed the world depend.
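
The overall chemistry that Ertl dissected into its elementary surface steps is simple to state – nitrogen and hydrogen combine over an iron catalyst to give ammonia:

    \mathrm{N_2 + 3H_2 \rightarrow 2NH_3}

The difficulty, and Ertl’s achievement, lay in working out how this actually proceeds, step by step, on the surface of the catalyst.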

The two prizes illustrate the complexity of the interaction between science and technology. In the case of GMR, the discovery came out of fundamental solid-state physics. This illustrates how what might seem to the scientists involved to be very far removed from applications can, if the effect turns out to be useful, be exploited in products very quickly (though the science and technology needed to make this transition will itself often be highly demanding, and is perhaps not always appreciated enough). The surface science rewarded in the chemistry prize, by contrast, represents a case in which science is used not to discover new effects or processes, but to understand better a process that is already technologically hugely important. This knowledge, in turn, can then underpin improvements to the process or the development of new, but analogous, processes.

Giant magnetoresistance – from the iPod to the Nobel Prize

This year’s Nobel Prize for Physics, it was announced today, has been awarded to Albert Fert, from Orsay, near Paris, and Peter Grünberg, from the Jülich research centre in Germany, for their discovery of giant magnetoresistance – an effect whereby a structure of alternating layers of magnetic and non-magnetic materials, each only a few atoms thick, has an electrical resistance that is very strongly changed by the presence of a magnetic field.
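
A standard way of quantifying the effect (my notation, though the definition is the conventional one) compares the high resistance found when the magnetisations of neighbouring magnetic layers point in opposite directions with the low resistance found when an applied field aligns them:

    \mathrm{GMR\ ratio} = \frac{R_{AP} - R_{P}}{R_{P}}

In the original multilayers this ratio reached tens of percent at low temperature – far larger than the ordinary magnetoresistance of a few percent that earlier sensors relied on – hence “giant”.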

The discovery was made in 1988, and at first seemed an interesting but obscure piece of solid state physics. But very soon it was realised that this effect would make it possible to make very sensitive magnetic read heads for hard disks. On a hard disk drive, information is stored as tiny patterns of magnetisation. The higher the density of information one is trying to store on a hard drive, the weaker the resulting magnetic field, and so the more sensitive the read head needs to be. The new technology was launched onto the market in 1997, and it is this technology that has made possible the ultra-high density disk drives that are used in MP3 players and digital video recorders, as well as in laptops.

The rapidity with which this discovery was commercialised is remarkable. One probably can’t rely on this happening very often, but it is a salutary reminder that discoveries can sometimes move from the laboratory to a truly industry-disrupting product very quickly indeed, if the right application can be found and if the underlying technology (in this case the nanotechnology required for making highly uniform films only a few atoms thick) is in place.

Venter in the Guardian

The front page of yesterday’s edition of the UK newspaper the Guardian was, unusually, dominated by a science story: I am creating artificial life, declares US gene pioneer. The occasion for the headline was an interview with Craig Venter, who fed the paper a pre-announcement that his team had successfully transplanted a wholly synthetic genome into a stripped-down bacterium, replacing its natural genetic code with an artificial one. In the newspaper’s somewhat breathless words: “The Guardian can reveal that a team of 20 top scientists assembled by Mr Venter, led by the Nobel laureate Hamilton Smith, has already constructed a synthetic chromosome, a feat of virtuoso bio-engineering never previously achieved. Using lab-made chemicals, they have painstakingly stitched together a chromosome that is 381 genes long and contains 580,000 base pairs of genetic code.”

We’ll see what, in detail, has been achieved when the work is properly published. It’s significant, though, that this story was felt to be important enough to occupy most of the front page of a major UK newspaper at a time of some local political drama. Craig Venter is visiting the UK later this month, so we can expect the current mood of excitement or foreboding around synthetic biology to continue for a while yet.

George Whitesides interview in ACS Nano

The American Chemical Society has launched a new journal devoted to nanotechnology, ACS Nano, to accompany its existing, and very successful, letters journal, Nano Letters, about which I wrote a little while ago. In contrast to the short report format of Nano Letters, ACS Nano publishes full-length papers about original research, together with some perspectives and editorial material. The journal is now on its second issue, and features an interesting interview (I think this is available without subscription) with one of the leading figures of US academic nanotechnology, Harvard’s George Whitesides.

The interview is worth reading in its entirety, but a few points are worth picking out. Firstly, contrary to the hype that has surrounded nanotechnology, Whitesides exhibits rather a lack of confidence that nanotechnology will ever have a revolutionary impact, in the sense of supplying a fundamentally new capability. He doesn’t doubt that it is “a big, big deal”, but he sees its importance coming more through enabling incremental developments in many different industries and sectors. In the far future, the ability to exploit fundamentally quantum objects at room temperature, which nanoscale fabrication can facilitate, is his one possible exception to this pessimism: “we talk about quantum computation, and quantum entanglement, and quantum communications, and the concepts are there, but the realization is going to require nanotechnology to make it work. If there is something there (I don’t know whether there is), what we’re seeing now is the beginning of the materials base that will lead to that, and that could be revolutionary in some major way.”

Whitesides is famous, among other achievements, for inventing soft lithography, and he tells a rueful but instructive story about the original motivation for this new technology. In the mid-’90s, it was felt that the continued miniaturisation of electronic circuits was threatened by limits on how far optical lithography could be scaled down. This turned out to be a misconception, one which greatly underestimated how effective the semiconductor industry would be at driving down the working length scale in incremental (though immensely clever) ways. Nonetheless, soft lithography found many other uses, exploiting its unique advantages. As Whitesides says, “you don’t know until you get into it, you find out what works”.

Finally, he has excellent advice for young scientists: whatever else you do, make sure the problems you are working on are the really important ones, even if they seem more difficult and challenging than less interesting problems on which you might feel you had a better chance of success. His logic is that it’s better to fail on an important problem than to succeed on a boring one.

Towards the $1000 human genome

It currently costs about a million dollars to sequence an individual human genome. One can expect incremental improvements in current technology to drop this price to around $100,000, but the need that current methods have to amplify the DNA will make it difficult for the price to drop further. So, to meet the widely publicised target of a $1000 genome, a fundamentally different technology is needed. One very promising approach uses the idea of threading a single DNA molecule through a nanopore in a membrane, and identifying each base by changes in the ion current flowing through the pore. I wrote about this a couple of years ago, and a talk I heard yesterday from one of the leaders in the field prompts me to give an update.

The original idea for this came from David Deamer and Dan Branton, who filed a patent for the general scheme in 1998. Hagan Bayley, from Oxford, whose talk I heard yesterday, has been collaborating with Reza Ghadiri, from Scripps, to implement this scheme using a naturally occurring pore-forming protein, alpha-hemolysin, as the reader.

The key issues are the need to get resolution at the single-base level, and the correct identification of the bases. They get extra selectivity through a combination of modifying the pore by genetic engineering and inserting small ring molecules – cyclodextrins – into the pore. At the moment, reading speed is a problem – when the molecules are pulled through by an electric field they tend to go a little too fast. But in an alternative scheme, in which bases are chopped off the chain one by one and dropped into the pore sequentially, they are able to identify individual bases reliably.
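
As a toy illustration of the read-out principle – this is not Bayley’s actual analysis, and the current levels below are invented for the example – each cleaved base can be assigned to whichever characteristic blockade level its measured current falls closest to:

    # Toy nanopore base-caller: the blockade currents below are invented;
    # real values depend on the pore, the cyclodextrin adapter, the voltage, etc.
    TYPICAL_BLOCKADE_PA = {"A": 35.0, "C": 28.0, "G": 22.0, "T": 31.0}

    def call_base(measured_pa):
        """Return the base whose characteristic blockade current is nearest."""
        return min(TYPICAL_BLOCKADE_PA,
                   key=lambda base: abs(TYPICAL_BLOCKADE_PA[base] - measured_pa))

    readings = [34.6, 27.5, 22.3, 30.8]  # one current measurement per cleaved base
    print("".join(call_base(r) for r in readings))  # prints "ACGT"

The real difficulty, of course, is that the measured currents are noisy and the levels for different bases overlap, which is exactly why the engineered pores and cyclodextrin adapters matter.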

Given that the human genome has about 6 billion bases (counting both copies of each chromosome), they estimate that at a reading time of 1 millisecond per base they’ll need to use 1000 pores in parallel to sequence a genome in under a day (taking into account the need for a certain amount of redundancy for error correction). To prepare the way for commercialisation of this technology, they have a start-up company – Oxford NanoLabs – which is working on making a miniaturised and rugged device, about the size of a palm-top computer, to do this kind of analysis.
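
The arithmetic behind that estimate is easy to check; here is a minimal back-of-envelope sketch, in which the ten-fold coverage for error correction is my assumption rather than a figure they quoted:

    # Back-of-envelope check of the nanopore sequencing time estimate
    GENOME_BASES = 6e9   # diploid human genome, both copies of each chromosome
    READ_TIME_S = 1e-3   # 1 millisecond per base
    N_PORES = 1000       # pores reading in parallel
    COVERAGE = 10        # assumed redundancy for error correction

    total_s = GENOME_BASES * READ_TIME_S * COVERAGE / N_PORES
    print(f"{total_s / 3600:.1f} hours")  # prints "16.7 hours" - under a day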

[Figure: stochastic sensing – schematic of a DNA reader using the pore-forming protein alpha-hemolysin. As the molecule is pulled through the pore, the ionic conduction through the pore varies, giving a readout of the sequence of bases. From the website of the Theoretical and Computational Biophysics Group at the University of Illinois at Urbana-Champaign.]

Soft Machines in Korean

[Image: cover of the Korean edition of Soft Machines]

My book “Soft Machines: nanotechnology and life” is now available in a Korean translation by Dr Tae-Erk Kim, published by Kungree at a price of 18,000 won.

The publication of the English paperback version is imminent: in the UK, OUP gives the publication date as October 2007 (OUP catalogue entry), with a price of £9.99. Readers in the USA will have to wait until December 17th, when their version will be priced at $17.99.