Are we imminently facing The Singularity? This is the hypothesised moment of technological transcendence, a concept introduced by mathematician and science fiction writer Vernor Vinge, when accelerating technological change leads to a recursively self-improving artificial intelligence of superhuman capabilities, with literally unknowable and ineffable consequences. The most vocal proponent of this eschatology is Ray Kurzweil, whose promotion of the idea takes to the big screen this year with the forthcoming release of the film The Singularity is Near.
Kurzweil describes the run-up to The Singularity: “Within a quarter century, nonbiological intelligence will match the range and subtlety of human intelligence. It will then soar past it … Intelligent nanorobots will be deeply integrated in our bodies, our brains, and our environment, overcoming pollution and poverty, providing vastly extended longevity … and vastly enhanced human intelligence. The result will be an intimate merger between the technology-creating species and the technological evolutionary process it spawned.” Where will we go from here? To The Singularity – “We’ll get to a point where technical progress will be so fast that unenhanced human intelligence will be unable to follow it”. This will take place, according to Kurzweil, in the year 2045.
The film is to be a fast-paced documentary, but to leaven the interviews with singularitarian thinkers like Aubrey de Grey, Eric Drexler and Eliezer Yudkowsky, there’ll be a story-line to place the technology in social context. This follows the touching struggle of Kurzweil’s female, electronic alter ego, Ramona, to achieve full personhood, foiling an attack of self-replicating nanobots on the way, before finally being coached to pass a Turing test by self-help guru Tony Robbins.
For those who might prefer a less dramatic discussion of The Singularity, IEEE Spectrum is running a special report on the subject in its June edition. According to a press release (via Nanowerk), “the editors invited articles from half a dozen people who have worked on and written about subjects central to the singularity idea in all its loopy glory. They encompass not just hardware and wetware but also economics, consciousness, robotics, nanotechnology, and philosophy.” One of those writers is me; my article, on nanotechnology, ended up with the title “Rupturing the Nanotech Rapture”.
We’ll need to wait a week or two to read the articles – I’m particularly looking forward to reading Christof Koch’s article on machine consciousness, and Alfred Nordmann’s argument against “technological fabulism” and the “baseless extrapolations” it rests on. Of course, even before reading the articles, transhumanist writer Michael Anissimov fears the worst.
Update. The IEEE Spectrum Singularity Special is now online, including my article, Rupturing the Nanotech Rapture. (Thanks to Steven for pointing this out in the comments).
My main objection was to perceived bias in the press release, because that’s all I’ve seen. Regarding the articles, I only said, “seems to be a mashup of sympathetic and contrarian views”. Note that I am not just interested in sympathetic views; I welcome contrarian views, as long as they are well-argued. Similarly, I reject “sympathetic” views (some of Kurzweil’s futuristic determinism comes to mind) if they rest on shaky footing.
In general, I think a lot of people (academics included) are confronting the confusing idea of the “singularity” for the first time around now, and because the term has so many meanings, singularity “critiques” tend to be especially all-over-the-map and poorly researched. Even some people who “embrace” the singularity can’t describe exactly what they mean by it when pressed.
I look forward to seeing this movie: http://www.youtube.com/watch?v=TA1-dz_ZmQw&feature=related (Yeees Master..)
Wait.. who is Michael Anissimov? How old is he? What are the Lifeboat Foundation and acceleratingfuture.com?
How did R. Jones get to know about nanonews before me?
Michael, I appreciate it that you acknowledge that I have some substantive arguments, but I note that you still can’t resist imputing a (very implausible) ulterior motive to me. I don’t quite understand what you mean by bias, though. If I don’t agree with you, it’s because I think you’re wrong, not because I’m biased against you.
Perhaps it is true that the term “singularity” has many meanings; this could point to an essential lack of coherence in the idea in the first place, or to the fact that many adherents of the idea project their own preconceptions on to it. In fact, it is probably true that many critiques are not directed at the notion of the singularity itself, but at elements of the belief-package (as an anthropologist would put it) of singularitarians. For myself, my main critique is against the Drexlerian, mechanical vision of nanotechnology. I can quite see how you could construct a vision of the singularity without this (which I might have some different arguments against), but it’s clear that MNT is a central part of the package espoused by Kurzweil and others and so it’s a legitimate target of criticism.
Going back to your perception of bias, perhaps this mostly reflects your irritation with the connection between the singularity and “the rapture”. I can see why “the rapture of the nerds” annoys you so much, because it is so snidely funny. But your friend’s rejection of the parallel reveals a certain degree of parochial thinking. I think the comparison between the notion of the singularity and religious visions of the apocalypse is both fair and rather revealing, but this doesn’t mean there’s a direct comparison between the singularity and the idea of the “rapture” as understood by some American protestants in the early 21st century. The latter notion is simply one parochial interpretation of the idea of apocalypse that has been an undercurrent of western thought for at least a millennium, in Christian, Jewish and Islamic traditions. The singularity is not the first time this line of thinking has reappeared in an apparently secular guise, either; a number of writers, most notably Norman Cohn, have convincingly traced the roots of modern utopian political movements such as communism to early modern religious millenarianism.
In any case, the connection between singularitarianism and religious notions of the apocalypse is made absolutely explicit in Kurzweil’s book “The Singularity is Near”. To quote: “Evolution moves towards greater complexity, greater elegance, greater knowledge, greater intelligence, greater beauty, greater creativity and greater levels of subtle attributes such as love. In every monotheistic tradition God is likewise described as all of these qualities, only without any limitation: infinite knowledge, infinite intelligence, infinite beauty, infinite creativity and infinite love, and so on. Of course, even the accelerating growth of evolution never achieves an infinite level, but as it explodes exponentially it certainly moves rapidly in that direction. So evolution moves inexorably toward this conception of God, although never quite reaching this ideal. We can regard, therefore, the freeing of our thinking from the severe limitations of its biological form to be an essentially spiritual undertaking.”
Richard, what you say is “the idea of the ‘rapture’ as understood by some American protestants in the early 21st century” is in fact what most people, including the ones using the “rapture of the nerds” joke, understand the word “rapture” to refer to. If you mean “apocalypse” then you should say “apocalypse” and not “rapture”.
By the way, as a European I don’t think I’d even heard of the rapture before I saw people comparing it to the singularity, so wherever the parochialism is coming from, it’s not me.
A lot of the articles (now online) could be improved simply by substituting “Ray Kurzweil’s personal ideas” everywhere it says “singularitarianism” or “the singularity”.
I think comparisons of transhumanist and religious ideas are uninformative and distract from substantive technical issues. Enough religious thought has been produced over the ages that it’s possible to find a religious analogue for any concept that involves going beyond some of the limits that have tended to define human life. But just that religions were there first (because they had a lower intellectual tennis net to jump over) doesn’t mean they now own that part of concept space.
Steven, of course, if you didn’t know what the rapture was before you heard the phrase, and you didn’t understand the relationship of the idea of the rapture to the previous history of Christian apocalyptic thought, then you wouldn’t get the joke, or understand the substantive point underlying it.
I’m sorry you don’t think Kurzweil is a good representative of singularitarian thought. Since he is by far the most prominent and visible spokesman for these views, you’d better start working hard to counteract them.
I agree Kurzweil is the most prominent spokesman for transhumanist/singularity ideas, but he’s also atypical in some ways. Attributing his views to “transhumanists” or “singularitarians” in general is just not accurate.
I think most of what he says makes good sense, but there are a few areas (Moore’s law determinism, spirituality, confidence in what technologies will come in what order) where he goes way beyond what’s defensible, and those are the ones criticism tends to get focused on. There are also areas (the effects of transhuman intelligence) where I think he’s overly conservative.
Kurzweil is a knowledgeable and accomplished guy who has found a way to make transhumanist ideas appealing to many people, but for something authoritative or typical, you’re much better off looking at e.g. the Transhumanist FAQ.
(You could say that the Transhumanist FAQ presents the building block ideas, shared by transhumanists, out of which Kurzweil constructs his particular grand narrative.)
Whatever you say about how atypical of transhumanists Kurzweil might be, you can surely appreciate that if you are producing a mass-circulation magazine, the point of reference you are going to work from is more likely to be the work of a best-selling author than the website of a fringe organisation.
To be honest, though, I’m not entirely convinced by your protestations of how out on a limb Kurzweil is. We shall see – I wonder, in his upcoming film, whether the interviews with all those prominent singularitarian thinkers are going to consist of them gently pointing out to Kurzweil the errors of his views? Somehow, I suspect not.
Of course it makes sense for them to focus somewhat on Kurzweil’s views; my problem is with them attributing Kurzweil’s views to people who aren’t Kurzweil and who don’t share those views.
I don’t really see what else critics of singularitarian or transhumanist ideas are expected to do, other than address the published views of their most prominent spokesmen. If there is a singularitarian “movement” that thinks these views aren’t representative in some way, that seems to me to be a problem for singularitarians, rather than their critics.
Richard, your essay was relatively conservative in its opposition.
It seems that the stated position is not that nanorobots are impossible forever, but that they are impossible by 2030, unlikely before 2050, and more likely to have a nanobio basis if they are accomplished. You then indicate the specific problems with nanorobots operating in human bodies.
It seems likely to me that any more complicated nanodevices would be building upon what is working now.
More complicated and functional nanoparticles, or particles up to several microns in size, or particles carried within bloodstream robots.
There are devices and robots that are placed into the bloodstream.
Cellular repair does not have to be in vivo.
There is a coated straw-like device with coatings that attach to stem cells or to cancer cells.
Blood and body fluids can be taken out of the body, and nanofiltering and modification of the blood and blood products can be performed (advanced dialysis).
External magnets are used to guide nanoparticles.
There is laser-activated release of drugs and other agents.
Richard, sorry for attaching an ulterior motive to you, I withdraw that claim and apologize.
The fundamental idea of a smarter-than-human thinker does seem coherent — obviously humans are not the smartest possible intelligent being. That’s how I define “Singularity”, as smarter-than-human intelligence, like Vinge basically did.
MNT is a component of transhumanist futurism, yes. As a transhumanist and director of the WTA, I want to make it clear that every person should examine the arguments and draw their own conclusions, based on as deep a technical understanding of the issue as they can muster. Professing certainty in the feasibility of MNT is overconfident, but based on what I know, I consider the eventual development of bottom-up nanomanufacturing to be likely.
The singularity/rapture analogy is so appealing that, in the minds of many, it becomes a substitute for actual thought. Steven’s post addresses numerous blatant differences between the two stances that make it obvious any comparison is too distant. It’s perfectly possible to critique ideas that fall under the banner of “Singularity”, one at a time, without making reference to the rapture connection.
In any case, people like Dale Carrico even say that those like Aubrey de Grey, who has raised millions of dollars for life extension research, are delusional and pseudo-religious in their desire to extend human life. Atheists and agnostics (me being the former) sometimes tend to be trigger-happy when comparing new, unsettling-sounding ideas (like superintelligence) to past religious movements. Calling something a parallel to religion is the ultimate insult — considering how ridiculous religion, and especially Millennialism, so obviously is.
I, and the majority of people who call themselves “singularitarians”, explicitly reject many of Ray Kurzweil’s ideas, including the notion that the Singularity has a spiritual component. Sounds like an opportunity for a petition of some sort.
I just read your article, and thought you articulated it quite well. Two points:
We wouldn’t need to have nanoscale assemblers in the human body. We’d use nanofactories to build heat-tolerant microbots, maybe on the size scale of a micrometer or so, to do medical work. This is the general stance of Freitas, Drexler, CRN, etc.
The human body could easily be too hot or chaotic of an environment for the first generations of molecular assemblers (if they are possible at all). It makes sense to say that some categories of synthetic nanomachine might not operate well in a biological context, but to say that none of them will is somewhat excessive. More research is needed, of course.
Second point, I disagree with this sentence:
“such devices can function only at low temperatures and in a vacuum, their impact and economic importance would be virtually nil.”
If you could mass produce solar cells, factories, chemical processing plants, etc., in an entirely or almost-entirely automated way, then I think the manufacturing units would pay for themselves even if they required high vacuum and liquid helium for cooling. (Liquid nitrogen may be more likely.)
Brian, I’m not sure what you mean by conservative here. In my article, I followed the brief I was given, which was to discuss both the prospects for nanotechnology contributing to a singularity on Kurzweilian time-scales, and what the actual achievements of nanotechnology were likely to be over that period. It sounds like we more or less agree that progress towards a medical “nanobot” is likely to be essentially evolutionary, moving on from things like the crude drug delivery devices we have today by adding increasingly complex functionality: propulsion (perhaps), targeting and steering, sensing of the environment, and some capability to do logical operations.
Michael, I accept your apology. The rapture issue is a digression, but an interesting one. I’m not particularly religious, neither am I a militant atheist. I do think that the modern version of scientific atheism has a huge blind spot in failing to see that a study of religious thought can be interesting and instructive, even necessary. Religion has been intimately mixed up with the intellectual currents that have contributed to the making of our modern world-picture (I’m fascinated, for example, that the greatest physicist of all time, Isaac Newton, combined his fantastic insight into the physical world with a set of cosmological ideas that strike us as being utterly loopy – including, significantly, a conviction that a religious apocalypse was fast approaching). So simply dismissing the history of religious thought as wrong and therefore uninteresting leaves one, in one’s ignorance, unprepared to make critical judgements about new ideas and vulnerable to the reappearance of old, seductive notions in new guises. If I can adapt the words of J.M.Keynes: “Practical men, who believe themselves to be quite exempt from any religious influence, are usually the slaves of some defunct theologian.”
Since people have been promising the prospect of eternal life and of a transcendence of earthly limitations for some time, the first question that anyone with any knowledge of this history of ideas must surely ask is, “what’s different, this time round?” The point is that notions like “superintelligence” are not “new, unsettling-sounding ideas”; on the contrary, they’re rather old ideas, and if you don’t know this history, you’re failing to equip yourself with the necessary tools to make a proper critical judgement.
And yes, I do think that Aubrey de Grey is seriously misguided in his glib assumption that the very hard problems that stand in the way of his project, like solving the multiple problems of cancer or neurodegenerative diseases, will be easily susceptible to his nostrums. Yes, it’s good that he’s attracted a few millions for research, but keep some perspective here. The cancer research charity here in the UK, in just one small country, raises more than two million dollars a day, mostly from small donations and grass-roots fundraising. There is a value in raising awareness of the special problems of diseases of old age; one can certainly question the cultural preferences we have that mean that a child with leukaemia attracts so much more sympathy than an 80-year-old with Parkinson’s. Certainly I would like to see much more work on diseases like Alzheimer’s.
But isn’t the crucial test here the fact that these transcendent technologies are promised in our lifetimes? Saying that it may, in some unspecified future, be possible to arrest the aging process is a statement that may be controversial or provocative, but that is nonetheless interesting and suggestive of new research directions. But saying (particularly to an audience of potential financial donors) that it will be possible to halt aging in time to save your audience is so transparent an appeal for an emotional, rather than rational, response, that you’d better have some pretty strong evidence if people aren’t to place you in the long historical tradition of people selling that particular snake-oil.
I think where we may differ is that I believe the current state of nanomedicine is more advanced than you suggest.
There are also mostly non-bio nanoelectronics being developed and designed into nanorobotics. The work being implemented would have more kinship with Drexler’s approach than with the bio-approach. Clearly the bio details of the environment cannot be ignored, and they are not being ignored in the latest design work (computer simulations). Note: Freitas has been involved in these designs.
http://nextbigfuture.com/2008/05/new-work-on-nanorobotics-design.html
http://www.mdpi.org/sensors/papers/s8052932.pdf
http://www.nanorobotdesign.com/
Michael, I’m aware of course that it isn’t proposed to put assemblers inside the body. My point is that all the other functionalities that are envisaged for MNT-based medical nanobots – powering, sensing, information processing – are all based on the same mechanical paradigm that I argue is inappropriate for the warm wet world. As my reply to Brian should make clear, I do think that sub-micron devices with increasing degrees of functionality for use inside the body will come, and indeed are being developed now. It’s fine by me if we call these medical nanobots, as long as we remember that their operating principles are likely to be very different to those envisaged in MNT.
As for the economic acceptability of things that demand UHV and ultralow temperatures (and remember that liquid nitrogen, on an absolute scale, isn’t that cold – it only reduces thermal vibration amplitudes by about a factor of two), we’ll see. There are, of course, commercial processes that rely on either UHV or liquid helium temperatures (I can’t immediately think of one that needs both at the same time) and they are very, very painful and expensive.
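A back-of-the-envelope sketch of where that factor of two comes from (assuming a classical harmonic vibrational mode obeying equipartition, so the mean-square displacement scales linearly with temperature, $\kappa\langle x^2\rangle = k_B T$):

$$\frac{x_{\mathrm{rms}}(300\,\mathrm{K})}{x_{\mathrm{rms}}(77\,\mathrm{K})} = \sqrt{\frac{300\,\mathrm{K}}{77\,\mathrm{K}}} \approx 2,$$

so dropping from room temperature to liquid-nitrogen temperature only roughly halves the thermal vibration amplitude.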
Brian, I think prospects for nanomedicine are very interesting, and it’s at the forefront of my mind at the moment. We are currently finalising the next stage of the UK research councils’ nanotechnology strategy, which will involve a new $60 million boost to the UK’s investment in nanomedicine. It’s been fascinating getting input from the leading academics, industrialists and clinicians working in the area as we’ve developed the scope of the programme.
Richard: If I can adapt the words of J.M.Keynes: “Practical men, who believe themselves to be quite exempt from any religious influence, are usually the slaves of some defunct theologian.”
I prefer Voltaire’s “Anyone who has the power to make you believe absurdities has the power to make you commit injustices.”
Philip
That’s not an un-apt thought, either, Philip, when one is considering ideas like the singularity. Plenty of people seem quite capable of believing absurdities without any external compulsion.
Like Richard here, I am very skeptical that solution-phase chemistry will ever be replaced by the “mechanical” chemistry that is the basis of “drexlerian” nanotech. Also, all of the research around the world is on the solution-phase variety, which is showing great promise in its ability to make the things that we want. There have been interesting developments in the “dry” approach, but they all come with some catch, the most common one being that the reaction or device must be run at cryogenic temperatures.
However, even if drexlerian nanotech is possible, its effect on the economy will be relatively modest. The wildest prediction of drexlerian nanotech is that it will reduce the capital cost of manufacturing (not the running costs) to near zero. At the present time, the capital cost of manufacturing makes up about 5% of the GDP of the U.S. It makes up maybe 10-15% of the GDP of a typical developing country’s economy.
This suggests that the development of “drexlerian” nanotech will provide a “one-shot” boost to economic productivity comparable to the internet/I.T. technologies of the late 90s. That is a much less significant impact than the development of, say, electricity or mechanical motors in the late 1800s.
Also, its impact will actually be more significant in places like China and developing countries, where much more of the economy is based on manufacturing, than it will be in the developed countries, whose economies are based mostly on services.
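To make that bound explicit, here is a minimal Python sketch, taking at face value the shares quoted above (roughly 5% of GDP for the U.S. and 10-15% for a typical developing country) and the assumed reduction of the capital cost of manufacturing to near zero; the figures are the rough ones asserted here, not sourced data:

# Back-of-the-envelope bound: if the capital cost of manufacturing falls to
# (near) zero, the one-shot GDP-equivalent gain is capped by that cost's
# current share of GDP. Shares below are the rough figures quoted above.

def one_shot_gdp_gain(capital_cost_share: float, cost_reduction: float = 1.0) -> float:
    """Upper bound on the one-time gain, as a fraction of GDP, when the capital
    cost of manufacturing falls by `cost_reduction` (1.0 = falls to zero)."""
    return capital_cost_share * cost_reduction

for label, share in [("U.S.", 0.05),
                     ("developing country (low end)", 0.10),
                     ("developing country (high end)", 0.15)]:
    print(f"{label}: one-shot gain of at most ~{one_shot_gdp_gain(share):.0%} of GDP")

The point of the sketch is only that the gain is a one-time level shift bounded by that share, not a change in the long-run growth rate.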
Drexlerian nanotech would have more than 10-15% GDP impact.
It would reduce the cost of large-scale solar power through exponential manufacturing of high-efficiency solar cells. (Drexlerian nanotech is not necessarily required to achieve this, but once MNT happens then that result follows.)
If you get diamondoid MNT, you also get advanced thermoelectrics.
If you get diamondoid MNT, then you get radical transportation improvement.
If you get diamondoid MNT, then you get massive amounts of inexpensive and highly functional biomarker sensors. Medicine gets transformed.
The other parts of the economy do get affected.
If I have diamondoid MNT then I have single-stage-to-orbit spacecraft or space elevators. I can go to space for the cost of electricity. I can tap into the material resources of the asteroids.
Many of these things could be done even without MNT but it requires that we be more clever in how we design things and how we put systems together. With MNT it becomes easy.
Atomically precise positional chemistry would also enable rapid development of other technologies: better computers, which lead to more advances in computational science, and so on. This is where the accelerating-technology part comes in.
If I have MNT or anything close to it, then testing of many different experiments in parallel (combinatorial science experiments) becomes faster, cheaper, more efficient, and possible at a larger scale.
If Ramona ever becomes human, I’m really sorry for my lewd comments.
I’ve been told the original concept of the Singularity was V. Vinge’s declaration that semiconductor computers would experience consciousness so capable of creative thought that it makes us seem like ants. This is false. I’m sure there are a handful of people who know enough about how computers work (I assume silicon switches flipping back and forth, or dipoles on a film rotating) and how human brains work (synchronized neural firings, plus much more); these people would never assume a human brain can multiply 50-digit numbers together in half a second, the same way many Singularitarians assume the converse.
I like R. Kurzweil’s definition. I think what he is saying is that at some point in a few decades, with all these technologies, the complexity of the environment that key human decision-makers face will regress society unless the decision-makers are given technologies that enhance their decision-making capabilities. I mildly disagree with this.
The Cuban Missile Crisis, and WWIII close calls in general, almost redeem the idea that humans are too stupid to handle advanced technologies.
JFK had all his military advisors (Republicans like J. McCain, I assume) tell him to attack Cuba, despite not knowing that Cuba would have been capable of, and almost forced into, nuking America in retaliation. JFK had his hawkish brother turn dove throughout the crisis. He was also facing conflicting messages from the Soviets in negotiating the terms of an exit strategy, an exit strategy whereby he did not want to leave the world more likely to (try to) endure a nuclear war by compromising West German safety. There were spy plane incidents over Siberia and Cuba that could have been interpreted as acts of war. There were complicated assemblies of Soviet naval vessels threatening to “kind of” breach an American naval blockade. There were his own highest naval officers (Republicans like J. McCain, I assume) foolishly ignoring his orders to shrink the area of the blockade, thereby forcing Khrushchev to make a rash decision. There were Soviet naval vessels that actually breached the blockade (unknowingly)…
Many Singularitarians take the attitude that an AI must be built to deal with advanced technologies (no mention that these technologies include people building AIs?!), and that this AI should be entrusted with running essential human infrastructures. I disagree. If an AI or any other advanced technology becomes potentially dangerous, classify it as a WMD and don’t build it. This suggests an unsettling surveillance society, but I think there are perfectly appropriate ways of policing dangerous technologies while respecting privacy rights.
Once we stop killing each other for ego masturbation and for consumer crap, we’ll probably get around to making longevity and/or quality of living supreme. Maybe America will keep looking for new Cold Wars to fight and new retarded consumer status symbols, but the rest of the world will probably come around to wherever Northern Europe is trending, minus the vitriol against immigrants. After that, I see deciding whether to harness powerful energy sources as a decision where a potentially dangerous AI might be useful. Building a time machine out of a galaxy is far beyond R. Kurzweil’s time-scale. His time-scale was based upon a false equation of brain and computer processing power. His methodology might be correct if a different application were selected. Maybe the AI needs a minimum level of processing power to create a superhacking program or to prototype weaponry needed to militarily conquer humanity, or something, and this could be timelined.
Assuming E. Yudkowsky and others are right that someone will create a dangerous software program, and that it might be possible (I think EY is pessimistic about this) to create a harder-to-program “safe” software program to contain (via taking over the world in some fashion) any subsequent software programs, I’d still prefer strategies other than blindly believing that one group of people are the best software programmers on Earth (to my knowledge, Microsoft or the NSA hire the best). For instance, you could make all military and essential infrastructures intranets, using quantum technologies to encrypt communications so they can’t be hacked. You could treat computers the way the world should treat GHGs by working to phase out non-essential computers and, in 2030 or whenever, begin to retire most high-bandwidth consumer electronics. You could monitor supercomputer programs, and could better fund antivirus…
Singularitarians always respond with: humans are flawed. Yet they don’t even realize humans will program their flawless AI. And being flawed is fine as long as a flawed human can still handle the complications of the world.
B. Wang, I agree. If MNT diamond works like Drexler proposes, GDP becomes a meaningless measure. It would make electricity almost free. Capital stock turnovers would drop, plunging GDP, yet quality of life would rise. People would see the service economy as a volunteer economy, and big decisions (IDK, say you were considering Google-Earthing a water planet orbiting a nearby star, using a diamond observatory) presumably couldn’t be bought.
Hi,
There is a lot of utopian economics out there associated with MNT!
Why does everyone believe that MNT will suspend the laws of scarcity?
Mining asteroids will require a lot of energy. Has anyone done a study of how much energy would be required?
1. If one is using solar energy, then either the asteroid would have to be moved from the asteroid belt, or we are back to scarcity because only the asteroids in nearby orbits can be mined!
2. If one is using nuclear power, then why have we not been mining asteroids already?
3. Finally, you would have to put people up there, with all of the associated health problems and costs.
The only way I can see radical economic change such that everyone becomes rich requires, in my opinion, that all of humanity becomes cyborgs and lives near cheap sources of energy and materials, like the gas giants.
Zelah
Quote from the article:
“This vision holds wide currency among those anticipating a singularity, in which the creation of hyperintelligent, self-replicating machines triggers runaway technological advancement and economic growth, transforming human beings into cyborgs that are superhuman and maybe even immortal.”
“Many of these projects will almost certainly prove to be useful, lucrative, or even transformative, but none of them are likely to bring about the transhumanist rapture foreseen by singularitarians. Not in the next century, anyway.”
– how are you able to predict with such certainty what will and will not happen in the year 2108? Do you own a crystal ball?