Transhumanists look forward to a technological singularity, which we should expect to take place on or around 2045, if Ray Kurzweil is to be relied on. The technological singularity is described as something akin to an event horizon, a date at which technological growth becomes so rapid that what lies beyond it is quite unknowable to us mere cis-humans. In some versions this is correlated with the time when, due to the inexorable advance of Moore’s Law, machine intelligence surpasses human intelligence and enters a recursive cycle of self-improvement.
The original idea of the technological singularity is usually credited to the science fiction writer Vernor Vinge, though earlier antecedents can be found, for example in the writing of the British Marxist scientist J.D. Bernal. Even amongst transhumanists and singularitarians there are different views about what might be meant by the singularity, but I don’t want to explore those here. Instead, I note this – when we talk of the technological singularity we’re using a metaphor, a metaphor borrowed from mathematics and physics. It’s the Singularity as a metaphor that I want to probe in this post.
A real singularity happens in a mathematical function when, for some value of the argument, the result of the function is undefined. So a function like 1/(t-t0), as t gets closer and closer to t0, takes a larger and larger value, growing without limit as t approaches t0. Kurzweil’s thinking about technological advance revolves around the idea of exponential growth, as exemplified by Moore’s Law, so it’s worth making the obvious point that an exponential function doesn’t have a singularity. An exponentially growing function – exp(t/T) – certainly gets larger as t gets larger, and indeed its absolute rate of increase goes up too, but the function never becomes infinite for any finite t.
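To make the point concrete, here is a minimal numerical sketch (Python, with purely illustrative values of t0 and T; the divergent function is written as 1/(t0-t) so that it is positive as t approaches t0 from below):

```python
import math

t0 = 10.0  # location of the singularity (purely illustrative)
T = 2.0    # time constant of the exponential (purely illustrative)

# 1/(t0 - t) diverges as t approaches t0; exp(t/T) merely gets large.
for t in [5.0, 9.0, 9.9, 9.99, 9.999]:
    print(f"t = {t:6.3f}   1/(t0-t) = {1.0 / (t0 - t):10.1f}   exp(t/T) = {math.exp(t / T):8.1f}")
```

However close t gets to t0, the exponential stays modest while the hyperbolic function runs away without bound.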
An exponential function is, of course, what you get when you have a constant fractional growth rate – if you charge your engineers to make your machine or device 20% better every year, then for as long as they succeed in meeting their annual target you will get exponential growth. To get a technological singularity from a Moore’s law-like acceleration of technology, the fractional rate of technological improvement must itself be increasing in time (let me leave aside for the moment my often expressed conviction that technology isn’t a single thing, and that it makes no sense at all to imagine that there’s some simple scalar variable that can be used to describe “technological progress” in general).
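To spell out the mathematics (a sketch only, with illustrative constants): a constant fractional growth rate means dx/dt = kx, whose solution is the familiar exponential x0·exp(kt). If instead the fractional growth rate grows in proportion to the level already reached – dx/dt = kx² – the solution is x0/(1 − k·x0·t), which genuinely diverges at the finite time t = 1/(k·x0). A crude Euler integration shows the difference:

```python
# Crude Euler integration contrasting a constant fractional growth rate
# with one that increases as the level x grows. Constants are illustrative,
# not calibrated to any real technology trend.
dt, k = 1e-4, 1.0
x_exp = x_hyp = 1.0   # both start at x0 = 1, so the blow-up time is 1/(k*x0) = 1
t = 0.0
while t < 0.99:
    x_exp += k * x_exp * dt       # dx/dt = k*x     -> exp(k*t), no singularity
    x_hyp += k * x_hyp**2 * dt    # dx/dt = k*x**2  -> 1/(1-t), singular at t = 1
    t += dt

print(f"at t = {t:.2f}: constant-rate x = {x_exp:.2f}, increasing-rate x = {x_hyp:.1f}")
```

By t = 0.99 the constant-rate curve has only reached about e, while the increasing-rate curve is already far larger and heading for infinity at t = 1.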
It isn’t totally implausible that something like this should happen – after all, we use technology to develop more technology. Faster computers should help us design more powerful microprocessors. On the other hand, as the components of our microprocessors shrink, the technical problems we have to overcome to develop the technology themselves grow more intractable. The question is, do our more powerful tools outstrip the greater difficulty of our outstanding tasks? The past has certainly seen periods in which the rate of technological progress accelerated, due to the recursive, self-reinforcing effects of technological and social innovation. This is one way of reading the history of the first industrial revolution, of course – but the industrial revolution wasn’t a singularity, because the increase in the rate of change wasn’t sustained; it merely settled down at a higher value. What isn’t at all clear is whether what is happening now corresponds even to a one-off increase in the rate of change, let alone the sustained and limitless increase in the rate of change that is needed to produce a mathematical singularity. The hope or fear of singularitarians is that this is about to change through the development of true artificial intelligence. We shall see.
Singularities occur in physics too. Or, to be more precise, they occur in the theories that physicists use. When we ask physics to calculate the self-energy of an electron, say, or the structure of space-time at the centre of a black hole, we end up with mathematical bad behaviour, singularities in the mathematics of the theories we are using. Does this mathematical bad behaviour correspond to bad behaviour in the physical world, or is it simply alerting us to the shortcomings of our understanding of that physical world? Do we really see infinity in the singularity – or is it just a signal to say we need different physics? Some argue it’s the latter, and here’s an example from my own field to illustrate why one might think that.
The great physicist Sam Edwards (who died a month ago) made his name, and founded the branch of physics I’ve worked in, by realising that you could describe the statistical mechanics of polymer molecules with a theory that had the formal structure of the quantum field theories he himself had learnt as a postdoc with Julian Schwinger.
Like those quantum field theories, Edwards’s theories of macromolecules produce some inconvenient, and unphysical, infinities that one has to work around. To Edwards, this was not a worry at all – as he was quoted as saying, “I know there are atoms down there, but I don’t care”. Edwards’s theories treated polymer molecules as wiggly worms that are wiggly on all scales, no matter how small. This works brilliantly if you want to know what’s happening on scales larger than the size of individual atoms, but it’s the existence of those very atoms that means the polymer isn’t wiggly all the way down, as it were. So we don’t worry that the theory doesn’t work at scales smaller than atoms, and we know what the different physics is that we’d need to use to understand behaviour on those scales. In the quantum field theories that describe electrons and other sub-atomic particles, one might suspect that there’s some similar graininess that intervenes to save us from the bad mathematical behaviour of our theories, but we don’t yet know what new kind of theory might be needed below the Planck scale, where we think the graininess might set in.
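As a toy illustration of this insensitivity to microscopic detail (a sketch only, assuming the simplest ideal-chain model, with no self-avoidance): model the polymer as a random walk of N freely-jointed steps. Its root-mean-square end-to-end distance grows as the step length times √N, whatever the microscopic “monomer” actually looks like.

```python
import math, random

def rms_end_to_end(n_steps, step=1.0, n_chains=500):
    """Monte Carlo estimate of the RMS end-to-end distance of an ideal
    freely-jointed chain in 3D. Large-scale behaviour ~ step * sqrt(N)."""
    total = 0.0
    for _ in range(n_chains):
        x = y = z = 0.0
        for _ in range(n_steps):
            # pick a random direction uniformly on the unit sphere
            cos_t = 2.0 * random.random() - 1.0
            sin_t = math.sqrt(1.0 - cos_t * cos_t)
            phi = 2.0 * math.pi * random.random()
            x += step * sin_t * math.cos(phi)
            y += step * sin_t * math.sin(phi)
            z += step * cos_t
        total += x * x + y * y + z * z
    return math.sqrt(total / n_chains)

for n in (100, 400, 1600):
    print(f"N = {n:5d}   R_rms = {rms_end_to_end(n):7.2f}   sqrt(N) = {math.sqrt(n):7.2f}")
```

Quadruple N and the size only doubles, whatever the individual step “really” is – which is the sense in which the atomistic detail drops out of the large-scale behaviour.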
The most notorious singularities in physics are the ones that are predicted to occur in the middle of black holes – here it is the equations of general relativity that predict divergent behaviour in the structure of space-time itself. But like other singularities in physics, what the mathematical singularity is signalling to us is that near the singularity, we have different physics, physics that we don’t yet understand. In this case the unknown is the physics of quantum gravity, where quantum mechanics meets general relativity. The singularity at the centre of a black hole is a double mystery; not only do we not understand what the new physics might be, but the phenomena of this physical singularity are literally unobservable, hidden by the event horizon which prevents us from seeing inside the black hole. The new physics beyond the Planck scale is unobservable, too, but for a different, less fundamental reason – the particle accelerators that we’d need to probe it would have to be unfeasibly huge in scale and energy, far beyond anything attainable by earth-bound humans. Is it always a given that physical singularities are unobservable? Naked singularities are difficult to imagine, but don’t seem to be completely ruled out.
The biggest singularity in physics of all is the one where we think it all began – the Big Bang, a singularity in time that we cannot imagine seeing through, just as the end of the universe in a big crunch would provide a singularity in time that we can’t conceive of seeing beyond. Now we enter the territory of thinking about the creation of the universe and the ultimate end of the world, which of course have long been rich themes for religious speculation. This connects us back to the conception of a technologically driven singularity in human history, as a discontinuity in the quality of human experience and the character of human nature. I’ve already argued at length that this conception of the technological singularity is a metaphor that owes a great deal to these religious forebears.
So here we’re back at the metaphorical singularity – and perhaps metaphors are best left to creative writers. If we want a profound treatment of the metaphors of singularity, we should look, not to futurists, but to science fiction. I know of no more thought-provoking treatment of singularities and the singularity than that of M. John Harrison in his brilliant trilogy, “Light”, “Nova Swing” and “Empty Space”.
At the astrophysical centre of the trilogy is a vast, naked singularity. Bits of this drop off onto nearby planets, leading to ragged borders beyond which things are familiar but weirdly distorted, a ragged edge across which one can with some risk move back and forth, and which is crossed and recrossed by herds of inscrutable cats. The human narrative crosses back and forth between a near-present and a further future which feels very much post-singularity. This future is characterised by routine faster-than-light travel; “shadow operators”, disembodied pieces of code which find unexplained, nanobot-like substrates to run on; and radical, cheap genetic engineering leading to widespread, wholesale (and indeed retail) human modification. There is a fully realised nano-medicine, and widely available direct brain interfaces, one application of which turns humans into the cyborg controllers of the highest-performing faster-than-light spaceships. And yet the motivations that persuade a young girl to sign up to this irreversible transformation seem all too recognisable, and indeed the familiarity of this post-singularity world seems all too plausible.
Beyond the singularities, beyond the space opera setting and Harrison’s brilliant and stylish writing, the core of the trilogy concerns the ways people construct, and reconstruct, and sometimes fabricate, their own identities. It’s this theme that is claimed by transhumanism, but it’s one that seems to me to be very much more universal than that.
Richard, as the ten-year anniversary of your challenge to MNT is approaching, I’m curious as to what experimental/computational work has occurred to answer the concerns you raised. Also, it seems Drexler’s blog has gone silent, and others (e.g. Freitas) have stopped work as well.
Finally, a recent report from MIT with a similar argument to what you have been proposing:
‘The Future Postponed: Why Declining Investment in Basic Research Threatens a US Innovation Deficit’
http://dc.mit.edu/sites/default/files/innovation_deficit/Future%20Postponed.pdf
Six challenges for molecular nanotechnology
http://www.softmachines.org/wordpress/?p=175
Thanks for the pointer to the MIT report, very interesting, and as you say, making some similar arguments to me.
Ten years, is it? Good grief. I should mark that with a post reviewing progress, certainly.
As far as I can tell Drexlerian nanotechnology has close to zero support in the scientific community and almost no active research programs. I have come to think it is like nuclear rockets (not impossible, but almost nobody works on it, for a variety of reasons). So the update could be as simple as: MNT has not been pursued and there has been almost no progress.
That being said, soft wet nanotech does seem to be a very active and productive research area. A post updating the last ten years of soft wet nanotech development would be a more interesting one.
> . . . Vernon Vinge . . .
It’s actually Vernor.
(And you can delete this comment. ;-> ).
Thank you Jim F, corrected now (and your comment left undeleted, credit where credit’s due).
I think your assessment, Jim M, is pretty much correct – and an update on where we are with soft and wet nano would indeed make an interesting post, when I have a moment. I think it’s fair to say that Drexler himself now strongly favours the soft and wet approach (though he would regard this as an intermediate step towards goals that resemble more the kinds of ambitions he sketched in “Nanosystems”).
Kurzweil’s singularity is based on a mathematical error, but it’s not quite the one that you describe. He correctly observes that the time-to-adoption of technological innovations is dropping at an accelerating rate. It took thousands of years for the wheel to become widespread, hundreds of years for the steam engine, decades for television, but only a couple of years for the smartphone. (Let’s ignore the minuscule rate of improvement of the steam turbine for the last 50 years, and the 9-year doubling cycle for improvements in photovoltaic power.) So “Kurzweil’s Law of technology acceleration” is really exp(exp(t)). This can get really big in a very short time, but it still doesn’t reach infinity in any finite time. “Bigger than most people can imagine, including Ray Kurzweil”, will have to do as a definition.
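As a quick numerical check of that claim (Python, arbitrary time units): a double exponential becomes astronomically large very quickly, yet remains finite for every finite t.

```python
import math

# A double exponential grows faster than any simple exponential,
# but is still finite at every finite time t.
for t in range(1, 6):
    print(f"t = {t}:  exp(exp(t)) = {math.exp(math.exp(t)):.3e}")
```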
The best way to prevent AI is to slow down and stop diamond semiconductor R&D now and in the future.
At worst it is a red herring, but it is likely this is the easiest way to prevent Judgment Day.