Less than Moore?

Some years ago, the once-admired BBC science documentary slot Horizon ran a program on nanotechnology. This was preposterous in many ways, but one sequence stands out in my mind. Michio Kaku appeared in front of scenes of rioting and mayhem, opining that “the end of Moore’s Law is perhaps the single greatest economic threat to modern society, and unless we deal with it we could be facing economic ruin.” Moore’s law, of course, is the observation, or rather the self-fulfilling prophecy, that the number of transistors on an integrated circuit doubles about every two years, with corresponding exponential growth in computing power.

As Gordon Moore himself observes in a presentation linked from the Intel site, “No Exponential is Forever … but We can Delay Forever” (2 MB PDF). Many people have prematurely written off the semiconductor industry’s ability to maintain its forty-year record of delivering a nearly constant year-on-year shrinkage in circuit dimensions and growth in computing power. Nonetheless, there will be limits to how far the current CMOS-based technology can be pushed. These limits could arise from fundamental constraints of physics or materials science, from engineering problems like the difficulty of managing the increasingly problematic heat output of densely packed components, or simply from the economic difficulty of finding business models that can make money in the face of the exponentially increasing cost of plant. The question, then, is not if Moore’s law, for conventional CMOS devices, will run out, but when.

What has underpinned Moore’s law is the International Technology Roadmap for Semiconductors, a document which effectively choreographs the research and development required to deliver the continual incremental improvements to our current technology that are needed to keep Moore’s law on track. It’s a document that outlines the requirements for an increasingly demanding series of linked technological breakthroughs as time marches on; somewhere between 2015 and 2020 a crunch comes, with many problems for which solutions look very elusive. Beyond that point there are three possible outcomes. It could be that these problems, intractable though they look now, will indeed be solved, and Moore’s law will continue through further incremental developments. The history of the semiconductor industry tells us that this possibility should not be lightly dismissed; Moore’s law has already been written off a number of times, only for the creativity and ingenuity of engineers and scientists to overcome what seemed like insuperable problems. The second possibility is that a fundamentally new architecture, quite different from CMOS, will be developed, giving Moore’s law a new lease of life, or even permitting a new jump in computing power. This, of course, is the motivation for a number of fields of nanotechnology. Perhaps spintronics, quantum computing, molecular electronics, or new carbon-based electronics using graphene or nanotubes will be developed to the point of commercialisation in time to save Moore’s law. The most recent version of the semiconductor roadmap raised this possibility for the first time, so it deserves to be taken seriously, and there is much interesting physics coming out of laboratories around the world in this area. But none of these developments is close to making it out of the lab into a process or a product, so we need at least to consider the third possibility: that nothing arrives in time to save Moore’s law. What happens if, for the sake of argument, Moore’s law peters out in about ten years’ time, leaving us with computers perhaps one hundred times more powerful than the ones we have now, which then take rather longer than a few years to become obsolete? Will our economies collapse and our streets fill with rioters?
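(As a rough, purely illustrative check on that “one hundred times” figure, here is a minimal back-of-envelope sketch. It simply assumes, for the sake of argument, a doubling period of somewhere between eighteen months and two years; neither number is a forecast.)

```python
# Back-of-envelope arithmetic behind "about one hundred times more powerful
# in ten years". Purely illustrative: the doubling periods are assumptions,
# not predictions.

def moore_factor(years: float, doubling_period_years: float) -> float:
    """Multiplicative improvement after `years`, given a fixed doubling period."""
    return 2 ** (years / doubling_period_years)

for period in (1.5, 2.0):
    factor = moore_factor(10, period)
    print(f"doubling every {period} years -> roughly {factor:.0f}x in a decade")

# doubling every 1.5 years -> roughly 102x in a decade
# doubling every 2.0 years -> roughly 32x in a decade
```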

It seems unlikely. Undoubtedly, innovation is a major driver of economic growth, and the relentless pace of innovation in the semiconductor industry has contributed greatly to the growth we’ve seen in the last twenty years. But it’s a mistake to suppose that innovation is synonymous with invention; new ways of using existing inventions can be as great a source of innovation as new inventions themselves. We shouldn’t expect that a period of relatively slow innovation in hardware would mean no developments in software; on the contrary, as raw computing power became less superabundant, we’d expect ingenuity in making the most of the available power to be greatly rewarded. The economics of the industry would change dramatically, of course. As the development cycle lengthened, the time needed to amortise the huge capital cost of plant would stretch out and the business would become increasingly commoditised. Even as the performance of chips plateaued, their cost would drop, possibly quite precipitously; these would be the circumstances in which ubiquitous computing truly would take off.

For an analogy, one might look a century earlier. Vaclav Smil has argued, in his two-volume history of the technology of the late nineteenth and twentieth centuries (Creating the Twentieth Century: Technical Innovations of 1867-1914 and Their Lasting Impact and Transforming the Twentieth Century: Technical Innovations and Their Consequences), that we should view the period 1867 – 1914 as a great technological saltation. Most of the significant inventions that underlay the technological achievements of the twentieth century – for example, electricity, the internal combustion engine, and powered flight – were made in this short period, with the rest of the twentieth century dominated by the refinement and expansion of these inventions. Perhaps we will, in the future, look back on the period 1967 – 2014 in a similar way, as a huge spurt of invention in information and communication technology, followed by a long period in which the reach of these inventions continued to spread throughout the economy. Of course, this relatively benign scenario depends on our continued access to the things on which our industrial economy is truly existentially dependent – sources of cheap energy. Without those, we truly will see economic ruin.

The uses and abuses of speculative futurism

My post last week – “We will have the power of the gods” – about Michio Kaku’s upcoming TV series generated a certain amount of heat amongst transhumanists and singularitarians unhappy about my criticism of radical futurism. There’s been a lot of vigorous discussion on the blog of Dale Carrico, the Berkeley rhetorician who coined the very useful phrase “superlative technology discourse” for this strand of thinking, and who has been subjecting its underpinning cultural assumptions to some sustained criticism, with some robust responses from the transhumanist camp.

Michael Anissimov, founder of the Immortality Institute, has made an extended reply to my post. Michael takes particular issue with my worry that these radical visions of the future are primarily championed by transhumanists who have a “strong, pre-existing attachment to a particular desired outcome”, stating that “transhumanism is not a preoccupation with a narrow range of specific technological outcomes. It looks at the entire picture of emerging technologies, including those already embraced by the mainstream.”

It’s good that Michael recognises the danger of the situation I identify, but some other comments on his blog suggest to me that what he is doing here is, in Carrico’s felicitous phrase, sanewashing the transhumanist and singularitarian movements with which he is associated. He writes, with some urgency, in the same post: “If any transhumanists do have specific attachments to particular desired outcome, I suggest they drop them — now”, while an earlier post on his blog is entitled Emotional Investment. In it he asks the crucial question: “Should transhumanists be emotionally invested in particular technologies, such as molecular manufacturing, which could radically accelerate the transhumanist project? My answer: for fun, sure. When serious, no.” Michael is perceptive enough to realise the dangers here, but I’m not at all convinced that the same is true of many of his transhumanist fellow-travellers. The key point is that I think transhumanists genuinely don’t realise quite how few informed people outside their own circles think that the full, superlative version of the molecular manufacturing vision is plausible (it’s worth quoting Don Eigler here again: “To a person, everyone I know who is a practicing scientist thinks of Drexler’s contributions as wrong at best, dangerous at worst. There may be scientists who feel otherwise, I just haven’t run into them”). The only explanation I can think of for the attachment of many transhumanists to the molecular manufacturing vision is that it is indeed a symptom of the coupling of group-think and wishful thinking.

Meanwhile, Roko, on his blog Transhuman Goodness, expands on comments made to Soft Machines in his post “Raaa! Imagination is banned you foolish transhumanist”. He thinks, not wholly accurately, that what I am arguing against is any kind of futurism: “But I take issue with both Dale and Richard when they want to stop people from letting their imaginations run wild, and instead focus attention only onto things which will happen for certain (or almost for certain) and which will happen soon…. Transhumanists look over the horizon and – probably making many errors – try to discern what might be coming…. If we say that we see something like AGI or Advanced Nanotechnology over that horizon, don’t take it as a certainty… But at least take the idea as a serious possibility….”

Dale Carrico responded at length to this. I want to stress here just one point; my problem is not that I think that transhumanists have let their imaginations run wild. Precisely the opposite, in fact; I worry that transhumanists have just one fixed vision of the future, which is now beginning to show its age somewhat, and are demonstrating a failure of imagination in their inability to conceive of the many different futures that have the potential to unfold.

Anne Corwin, who was interviewed for the Kaku program, makes some very balanced comments that get us closer to the heart of the matter: “most sensible people, I think, realize that utopia and apocalypse are equally unrealistic propositions — but projecting forward our present-day dreams, wishes, hopes, and deep anxieties can still be a useful (and, dare I say, enjoyable) exercise. Just remember that there’s a lot we can do now to help improve things in the world — even in the absence of benevolent nanobot swarms.”

There are two key points here. Firstly, there’s the crucial insight that futurism is not, in fact, about the future at all – it’s about the present, and the hopes and fears that people have about the direction society seems to be taking now. This is precisely why futurism ages so badly, giving us the opportunity for all those cheap laughs about the non-arrival of flying cars and silvery jump-suits. The second is that futurism is (or should be) an exercise – a thought experiment, in other words. Alfred Nordmann reminds us (in If and Then: A Critique of Speculative NanoEthics) that both physics and philosophy have a long history of using improbable scenarios to illuminate deep problems. “Think of Descartes conjuring an evil demon who deceives us about our sense perceptions, think more recently of Thomas Nagel’s infamous brain in a vat.” So, for example, interrogating the thought experiment of a nanofactory that could reduce all matter to the status of software might give us useful insights into the economics of a post-industrial world. But, as Nordmann says, “Philosophers take such scenarios seriously enough to generate insights from them and to discover values that might guide decisions regarding the future. But they do not take them seriously enough to believe them.”

Science journals take on poverty and human development

Science journals around the world are participating in a Global theme issue on poverty and human development; as part of this the Nature group journals are making all their contributions freely available on the web. Nature Nanotechnology is involved, and contributes three articles.

Nanotechnology and the challenge of clean water, by Thembela Hillie and Mbhuti Hlophe, gives a perspective from South Africa on this important theme. Also available is one of my own articles, this month’s opinion column, Thesis. I consider the arguments that are sometimes made that nanotechnology will lead to economic disruptions in developing countries that depend heavily on natural resources. Will, for example, the development of carbon nanotubes as electrical conductors impoverish countries like Zambia that depend on copper mining?

“We will have the power of the gods”

According to a story in the Daily Telegraph today, science has succeeded in its task of unlocking the secrets of matter, and now it’s simply a question of applying this knowledge to fulfill all our wants and dreams. The article is trailing a new BBC TV series fronted by Michio Kaku, who explains that “we are making the historic transition from the age of scientific discovery to the age of scientific mastery in which we will be able to manipulate and mould nature almost to our wishes.”

A series of quotes from “today’s pioneers” covers some painfully familiar ground: nanobot armies will punch holes in the blood vessels of enemy soldiers, leading Nick Bostrom to opine that “In my view, the advanced form of nanotechnology is arguably the greatest existential risk humanity is likely to confront in this century.” Ray Kurzweil tells us that within 10 to 15 years we will be able to “reprogram biology away from cancer, away from heart disease, to really overcome the major diseases that kill us.” Other headlines speak of “an end to aging”, “perfecting the human body” and taking “control over evolution”. At the end, though, it’s loss of control that we should worry about, once we have succeeded in creating superhuman artificial intelligence: Paul Saffo tells us “There’s a good chance that the machines will be smarter than us. There are two scenarios. The optimistic one is that these new superhuman machines are very gentle and they treat us like pets. The pessimistic scenario is they’re not very gentle and they treat us like food.”

This all offers a textbook example of what Dale Carrico, a rhetoric professor at Berkeley, calls a superlative technology discourse. It starts with an emerging technology with interesting and potentially important consequences, like nanotechnology, or artificial intelligence, or the medical advances that are making (slow) progress in combatting the diseases of aging. The discussion leaps over the issues that such technologies might give rise to at present and in the near future, and goes straight to the most radical projections of these technologies. The fact that the plausibility of these radical projections may be highly contested is by-passed by a curious foreshortening. This process has been forcefully identified by Alfred Nordmann, a philosopher of science at TU Darmstadt, in his article “If and then: a critique of speculative nanoethics” (PDF): “If we can’t be sure that something is impossible, this is sufficient reason to take its possibility seriously. Instead of seeking better information and instead of focusing on the programs and presuppositions of ongoing technical developments, we are asked to consider the ethical and societal consequences of something that remains incredible.”

What’s wrong with this way of talking about technological futures is that it presents a future which is already determined; people can talk about the consequences of artificial general intelligence with superhuman capabilities, or a universal nano-assembler, but the future existence of these technologies is taken as inevitable. Naturally, this renders irrelevant any thought that the future trajectory of technologies should be the subject of any democratic discussion or influence, and it distorts and corrupts discussions of the consequences of technologies in the here and now. It’s also unhealthy that these “superlative” technology outcomes are championed by self-identified groups – such as transhumanists and singularitarians – with a strong, pre-existing attachment to a particular desired outcome – an attachment which defines these groups’ very identity. It’s difficult to see how the judgements of members of these groups can fail to be influenced by the biases of group-think and wishful thinking.

The difficulty that this situation leaves us in is made clear in another article by Alfred Nordmann – “Ignorance at the heart of science? Incredible narratives on Brain-Machine interfaces”. “We are asked to believe incredible things, we are offered intellectually engaging and aesthetically appealing stories of technical progress, the boundaries between science and science fiction are blurred, and even as we look to the scientists themselves, we see cautious and daring claims, reluctant and self-declared experts, and the scientific community itself at a loss to assert standards of credibility.” This seems to summarise nicely what we should expect from Michio Kaku’s forthcoming series, “Visions of the future”. That the program should take this form is perhaps inevitable; the more extreme the vision, the easier it is to sell to a TV commissioning editor. And, as Nordmann says: “The views of nay-sayers are not particularly interesting and members of a silent majority don’t have an incentive to invest time and energy just to “set the record straight.” The experts in the limelight of public presentations or media coverage tend to be enthusiasts of some kind or another and there are few tools to distinguish between credible and incredible claims especially when these are mixed up in haphazard ways.”

Have we, as Kaku claims, “unlocked the secrets of matter”? On the contrary, there are vast areas of science – areas directly relevant to the technologies under discussion – in which we have barely begun to understand the issues, let alone solve the problems. Claims like this exemplify the triumphalist, but facile, reductionism that is the major currency of so much science popularisation. And Kaku’s claim that soon “we will have the power of gods” may be intoxicating, but it doesn’t prepare us for the hard work we’ll need to do to solve the problems we face right now.

Graphene and the foundations of physics

Graphite, familiar from pencil leads, is a form of carbon consisting of stacks of sheets, each of which is a hexagonal mesh of atoms. The sheets are held together only weakly; this is why graphite is such a good lubricant, and when you run a pencil across a piece of paper the mark is made of rubbed-off sheets. In 2004, Andre Geim, from the University of Manchester, made the astonishing discovery that you could obtain large, near-perfect sheets of graphite only one atom thick, simply by rubbing graphite against a single crystal silicon substrate – these sheets are called graphene. What was even more amazing was the electronic properties of these sheets – they conduct electricity, and the electrons move through the material at great speed and with very few collisions. There’s been a gold-rush of experiments since 2004, uncovering the remarkable physics of this material. All this has been reviewed in a recent article by Geim and Novoselov (Nature Materials 6, p183, 2007) – The rise of graphene. (It’s worth taking a look at Geim’s group website, which contains many downloadable papers and articles – Geim is a remarkably creative, original and versatile scientist; besides his discoveries in the graphene field, he’s done very significant work on optical metamaterials and gecko-like nanostructured adhesives, not to mention his notorious frog-levitation exploits.) From the technological point of view, the very high electron mobility of graphene, and the possibility of shrinking graphene-based devices right down to atomic dimensions, make it a very attractive candidate for electronics when the further miniaturisation of silicon-based devices stalls.

At the root of much of the strange physics of graphene is the fact that electrons in it behave like highly relativistic, massless particles. This arises from the way the electrons interact with the regular, two-dimensional lattice of carbon atoms. Normally, when an electron (which we need to think of as a wave, according to quantum mechanics) moves through a lattice of ions, the scattering of the wave from the ions, and the interference between the scattered waves, mean that the electron behaves as if it had a different mass from its real, free-space value. But in graphene the effective mass is zero: the energy is simply proportional to the wave-vector, as for a photon, rather than to the wave-vector squared, as it would be for a normal, non-relativistic particle with mass.
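(To put that contrast in symbols: this is a standard textbook statement rather than anything specific to the review cited above, with $v_F$ the Fermi velocity, roughly $10^6$ m/s in graphene, and $m^*$ the effective mass an electron would have in an ordinary material.)

```latex
% Linear, 'massless' dispersion of electrons near graphene's Dirac points:
E(\mathbf{k}) = \hbar v_F |\mathbf{k}|

% versus the parabolic band of a conventional, non-relativistic electron:
E(\mathbf{k}) = \frac{\hbar^2 |\mathbf{k}|^2}{2 m^*}
```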

The weird way in which electrons in graphene mimic ultra-relativistic particles allows one to test predictions of quantum field theory that would be inaccessible to experiments using fundamental particles. Geim writes about this in this week’s Nature, under the provocative title Could string theory be testable? (subscription needed). Graphene is an example in which, from the complexity of the interactions between electrons and a 2-d lattice of ions, simple behaviour emerges that seems to be well described by the theories of fundamental high-energy physics. Geim asks “could we design condensed-matter systems to test the supposedly non-testable predictions of string theory too?” The other question to ask, though, is whether what we think of as the fundamental laws of physics, such as quantum field theory, themselves emerge from some complex inner structure that remains inaccessible to us.

Quaint folk notions of nanotechnologists

Most of us get through our lives with the help of folk theories – generalisations about the world that may have some grounding in experience, but which are not systematically checked in the way that scientific theories might be. These theories can be widely shared amongst a group with common interests, and they serve both as lenses through which to view and interpret the world and as guides to action. Nanotechnologists aren’t exempt from the grip of such folk theories, and Arie Rip, from the University of Twente, one of the leading lights in European science studies, has recently published an analysis of them – Folk theories of nanotechnologists (PDF) (Science as Culture 15, p349, 2006).

He identifies three clusters of folk theories. The first is the idea that new technologies inevitably follow a “wow-to-yuck” trajectory, in which initial public enthusiasm for the technology is followed by a backlash. The exemplar of this phenomenon is the reaction to genetically modified organisms, which, it is suggested, followed exactly this pattern, with widespread acceptance in the ’70s, then a backlash in the ’80s and ’90s. Rip suggests that this doesn’t at all represent the real story of GMOs, and questions the fundamental characterisation of the public as essentially fickle.

Another folk theory of nanotechnology implies a similar narrative of initial enthusiasm followed by subsequent disillusionment; this is the “cycle of hype” idea popularised by the Gartner Group. The idea is that all new technologies are initially accompanied by a flurry of publicity and unrealistic expectations, leading to a “peak of inflated expectations”. This is inevitably followed by disappointment and loss of public interest; the technology then falls into a “trough of disillusionment”. Only then does the technology start to deliver, with a “slope of enlightenment” leading to a “plateau of productivity”, in which the technology does deliver real benefits, albeit less dramatic ones than those promised in the first stage of the cycle. Rip regards this as a plausible storyline masquerading as an empirical finding. But the key issue he identifies at its core is the degree to which it is regarded as acceptable – or even necessary – to exaggerate claims about the impact of a technology. In Rip’s view, we have seen a divergence in strategies between the USA and Europe, with advocates of nanotechnology in Europe making much more modest claims (and thus perhaps positioning themselves better for the aftermath of a bursting bubble).

Rip’s final folk theory concerns how nanotechnologists view the public. In his view, nanotechnologists are excessively concerned about public concern, projecting onto the public a fear of the technology out of proportion to what empirical findings actually measure. Of course, this is connected to the folk theory about GMOs implicit in the “wow-to-yuck” theory. The most telling example Rip offers is the widespread fear amongst nanotechnology insiders that a film of Michael Crichton’s thriller “Prey” would lead to a major backlash. Rip diagnoses a widespread outbreak of nanophobia-phobia.

The act of creation – or just scrapheap challenge?

It was fairly predictable that last Saturday’s headline in the Guardian about Craig Venter’s latest synthetic biology activities – I am creating artificial life, declares US gene pioneer – would generate some reaction from that paper’s readers. The form of that reaction, though, wasn’t, as one might have expected, outrage about scientists “playing God”, or worries about the potential dangers of a supercharged version of genetic modification. Instead, the paper yesterday printed an extended response from Nick Gay, a biochemist at the University of Cambridge.

This makes the (to me, entirely reasonable) point that you can’t really describe this as creating life from scratch; it’s “as if he had selected a set of car parts, assembled them into a car and then claimed to have invented the car”. Gay’s own research is into the intricacies and complexities of cellular signalling, so perhaps it is not surprising that he regards the thinking underlying Venter’s approach as “the crudest and most facile kind of reductionism”. It would be interesting to know how widely his point of view is shared by other biochemists and molecular biologists.

Nobels, Nanoscience and Nanotechnology

It’s interesting to see how various newspapers have reported the story of yesterday’s award of the physics Nobel prize to the discoverers of giant magnetoresistance (GMR). Most have picked up on the phrase used in the press release of the Nobel Foundation, that this was “one of the first real applications of the promising field of nanotechnology”. Of course, this raises the question of what, in that case, we should make of all the things listed in the various databases of nanotechnology products, such as the famous sunscreens and stain-resistant fabrics.

References to iPods are compulsory, and this is entirely appropriate. It is quite clear that GMR is directly responsible for making possible the miniaturised hard disk drives on which entirely new product categories, such as hard-disk MP3 players and digital video recorders, depend. The more informed papers (notably the Financial Times and the New York Times) have noticed that one name was missing from the award – Stuart Parkin, a physicist working at IBM’s Almaden laboratory in California, who was arguably the person who took the basic discovery of GMR and did the demanding science and technology needed to make a product out of it.

The Nobel Prize for Chemistry announced today also highlights the relationship between nanoscience and nanotechnology. It went to Gerhard Ertl, of the Fritz-Haber-Institut in Berlin, for his contributions to surface chemistry. In particular, using the powerful tools of nanoscale surface science, he was able to elucidate the fundamental mechanisms operating in catalysis. For example, he worked out the basic steps of the Haber-Bosch process. A large proportion of the world’s population quite literally depends for their lives on the Haber-Bosch process, which artificially fixes nitrogen from the atmosphere to make the fertilizer on which the high crop yields that feed the world depend.
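(For reference, the overall reaction is the familiar textbook one sketched below; Ertl’s contribution was to establish how its elementary steps proceed on the surface of the iron catalyst.)

```latex
% Overall Haber-Bosch reaction: atmospheric nitrogen fixed as ammonia
% over an iron catalyst at high temperature and pressure.
\mathrm{N_2} + 3\,\mathrm{H_2} \;\rightleftharpoons\; 2\,\mathrm{NH_3}
```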

The two prizes illustrate the complexity of the interaction between science and technology. In the case of GMR, the discovery was one that came out of fundamental solid state physics; it illustrates how what might seem to the scientists involved to be very far removed from applications can, if the effect turns out to be useful, be exploited in products very quickly (though the science and technology needed to make this transition will itself often be highly demanding, and is perhaps not always appreciated enough). The surface science rewarded in the chemistry prize, by contrast, represents a case in which science is used not to discover new effects or processes, but to better understand a process that is already technologically hugely important. This knowledge, in turn, can underpin improvements to the process or the development of new, but analogous, processes.

Giant magnetoresistance – from the iPod to the Nobel Prize

This year’s Nobel Prize for Physics, it was announced today, has been awarded to Albert Fert, of the Université Paris-Sud in Orsay, and Peter Grünberg, of the Jülich research centre in Germany, for their discovery of giant magnetoresistance, an effect whereby a structure of alternating layers of magnetic and non-magnetic materials, each only a few atoms thick, has an electrical resistance that is very strongly changed by the presence of a magnetic field.
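(The size of the effect is conventionally quoted as a resistance ratio, as sketched below; this is the standard definition used in the field rather than anything taken from the prize citation.)

```latex
% Conventional definition of the GMR ratio: R_AP is the resistance with the
% magnetisations of adjacent magnetic layers antiparallel, R_P with them parallel.
\mathrm{GMR} = \frac{R_{\mathrm{AP}} - R_{\mathrm{P}}}{R_{\mathrm{P}}}
```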

The discovery was made in 1988, and at first seemed an interesting but obscure piece of solid state physics. But very soon it was realised that this effect would make it possible to make very sensitive magnetic read heads for hard disks. On a hard disk drive, information is stored as tiny patterns of magnetisation. The higher the density of information one is trying to store on a hard drive, the weaker the resulting magnetic field, and so the more sensitive the read head needs to be. The new technology was launched onto the market in 1997, and it is this technology that has made possible the ultra-high density disk drives that are used in MP3 players and digital video recorders, as well as in laptops.

The rapidity with which this discovery was commercialised is remarkable. One probably can’t rely on this happening very often, but it is a salutary reminder that discoveries can sometimes move from the laboratory to a truly industry-disrupting product very quickly indeed, if the right application can be found, and if the underlying technology (in this case the nanotechnology required for making highly uniform films only a few atoms thick) is in place.