This is the second part of an article I was asked to write to explain nanotechnology and the debates surrounding it to a non-scientific audience with interests in social and policy issues. This article was published in the Summer 2007 issue of the journal Soundings. The first installment can be read here.
Ideologies
There are many debates about nanotechnology: what it is, what it will make possible, and what its dangers might be. On one level these may seem to be very technical in nature. So a question about whether a Drexler-style assembler is technically feasible can rapidly descend into details of surface chemistry, while issues about the possible toxicity of carbon nanotubes turn on the procedures for reliable toxicological screening. But it’s at least arguable that the focus on the technical obscures the real causes of the argument, which are actually based on clashes of ideology. What are the ideological divisions that underlie debates about nanotechnology?
Transhumanism
Underlying the most radical visions of nanotechnology is an equally radical ideology – transhumanism. The basis of this movement is a teleological view of human progress which views technology as the vehicle, not just for the improvement of the lot of humanity, but for the transcendence of those limitations that non-transhumanists would consider to be an inevitable part of the human condition. The most pressing of these limitations is, of course, death, so transhumanists look forward to nanotechnology providing a permanent solution to this problem. In the first instance, this will be effected by nanomedicine, which they anticipate will make possible cell-by-cell repair of any damage. Beyond this, some transhumanists believe that computers of such power will become available that they will constitute true artificial intelligence. At this point, they imagine a merging of human and machine intelligence, in a way that would effectively constitute the evolution of a new and improved version of humankind.
The notion that the pace of technological change is continually accelerating is an article of faith amongst transhumanists. From this follows the idea that the accelerating rate of change will reach a point beyond which the future is literally inconceivable. This point they refer to as “the singularity”, and discussions of this hypothetical event take on a highly eschatological tone. This is captured in science fiction writer Cory Doctorow’s dismissive but apt phrase for the singularity: “the rapture of the nerds”.
This worldview carries with it the implication that an accelerating pace of innovation is not just a historical fact, but also a moral imperative. This is because it is through technology that humanity will achieve its destiny, which is nothing less than to transcend its own current physical and mental limitations. The achievement of radical nanotechnology is central to this project, and for this reason transhumanists tend to share a strong conviction not only that radical nanotechnology along Drexlerian lines is possible, but also that its development is morally necessary.
Transhumanism can be considered to be the extreme limit of views that combine strong technological determinism with a highly progressive view of the development of humanity. It is a worldwide movement, but it’s probably fair to say that its natural home is California, its main constituency is amongst those involved in information technology, and it is associated predominantly, if not exclusively, with a strongly libertarian streak of politics. Paradoxically, though, not dissimilar views seem to be attractive to a certain class of former Marxists.
Given that transhumanism as an ideology does not seem to have a great deal of mass appeal, it’s tempting to underplay its importance. This may be a mistake; amongst its adherents are a number of figures with very high media profiles, particularly in the United States, and transhumanist ideas have entered mass culture through science fiction, films and video games. Certainly some conservative and religious figures have felt threatened enough to express some alarm, notably Francis Fukuyama, who has described transhumanism as “the world’s most dangerous idea”.
Global capitalism and the changing innovation landscape
If it is the radical futurism of the transhumanists that has put nanotechnology into popular culture, it is the prospect of money that has excited business and government. Nanotechnology is seen by many worldwide as the major driver of economic growth over the next twenty years, filling the role that information technology has filled over the last twenty years. Breathless projections of huge new markets are commonplace, with the prediction by the US National Nanotechnology Initiative of a trillion dollar market for nanotechnology products by 2015 being the most notorious of these. It is this kind of market projection that underlies a worldwide spending boom on nanotechnology research, which encompasses not only established science and technology powerhouses like the USA, Germany and Japan, but also fast-developing countries like China and India.
The emergence of nanotechnology has corresponded with some other interesting changes in the commercial landscape in technologically intensive sectors of the economy. The types of incremental nanotechnology that have been successfully commercialised so far have involved nanoparticles, such as the ones used in sunscreens, or coatings, of the kind used in stain-resistant fabrics. This sort of innovation is the province of the speciality chemicals sector, and one cynical view of the prominence of the nanotechnology label amongst new and old companies is that it has allowed companies in this rather unfashionable sector of the market to rebrand themselves as being part of the newest new thing, with correspondingly higher stock market valuations and easier access to capital. On the other hand, this does perhaps signal a more general change in the way science-driven innovations reach the market.
Many of the large industrial conglomerates that were such a prominent part of the industrial landscape in Western countries up to the 1980s have been broken up or drastically shrunk. Arguably, the monopoly rents that sustained these combines were what made possible the very large and productive corporate laboratories that were the source of much innovation at that time. That world has been replaced by a much more fluid scene in which many functions of companies, including research and innovation, have been outsourced. In this landscape, one finds nanotechnology companies like Oxonica, which are essentially holding companies for intellectual property, with functions that in the past would have been regarded as of core importance, such as manufacturing and marketing, outsourced to contractors, often located in different countries.
Even the remaining large companies have embraced the concept of “open innovation”, in which research and development is regarded as a commodity to be purchased on the open market (and, indeed, outsourced to low cost countries) rather than a core function of the corporation. It is in this light that one should understand the new prominence of intellectual property as something fungible and readily monetised. Universities and other public research institutes, strongly encouraged to seek new sources of funding other than direct government support, have made increasing efforts to spin out new companies based on intellectual property developed by academic researchers.
In the light of all this, it’s easy to see nanotechnology as one aspect of a more general shift to what the social scientist Michael Gibbons has called Mode 2 knowledge production[4]. In this view, traditional academic values are being eclipsed by a move to more explicitly goal-oriented and highly interdisciplinary research, in which research priorities are set not by the values of the traditional disciplines, but by perceived market needs and opportunities. It is clear that this transition has been underway for some time in the life sciences, and in this view the emergence of nanotechnology can be seen as a spread of these values to the physical sciences.
Environmentalist opposition
In the UK at least, the opposition to nanotechnology has been spearheaded by two unlikely bedfellows. The issue was first propelled into the news by the intervention of Prince Charles, who raised the subject in newspaper articles in 2003 and 2004. These articles directly echoed concerns raised by the small campaigning group ETC[5]. ETC cast nanotechnology as a direct successor to genetic modification; to summarise this framing, whereas in GM scientists had directly intervened in the code of life, in nanotechnology they meddle with the very atomic structure of matter itself. ETC’s background included a strong record of campaigning on behalf of third world farmers against agricultural biotechnology, so in their view nanotechnology, with its spectre of the possible patenting of new arrangements of atoms and the potential replacement of commodities such as copper and cotton by nanoengineered substitutes controlled by multinationals, was to be opposed as an intrinsic part of the agenda of globalisation. Complementing this rather abstract critique was a much more concrete concern that nanoscale materials might be more toxic than their conventional counterparts, and that current regulatory regimes for the control of environmental exposure to chemicals might not adequately recognise these new dangers.
The latter concern has gained a considerable degree of traction, largely because there has been a very widespread consensus that the issue has some substance. At the time of the Prince’s intervention in the debate (and quite possibly because of it) the UK government commissioned a high-level independent report on the issue from the Royal Society and the Royal Academy of Engineering. This report recommended a programme of research and regulatory action on the subject of possible nanoparticle toxicity[6]. Public debate about the risks of nanotechnology has largely focused on this issue, fuelled by a government response to the Royal Society that has been widely considered quite inadequate. However, one might regret that the debate has become so focused on this rather technical issue of risk, to the exclusion of wider issues about the potential impacts of nanotechnology on society.
To return to the more fundamental worldviews underlying this critique of nanotechnology, whether they be the rather romantic, ruralist conservatism of the Prince of Wales, or the anti-globalism of ETC, the common feature is a general scepticism about the benefits of scientific and technological “progress”. An extremely eloquent exposition of one version of this point of view is to be found in a book by US journalist Bill McKibben[7]. The title of McKibben’s book – “Enough” – is a succinct summary of its argument; surely we now have enough technology for our needs, and new technology is likely only to lead to further spiritual malaise, through excessive consumerism, or in the case of new and very powerful technologies like genetic modification and nanotechnology, to new and terrifying existential dangers.
Bright greens
Despite the worries about the toxicology of nanoscale particles, and the involvement of groups like ETC, it is notable that all-out opposition to nanotechnology has not yet fully crystallised. In particular, groups such as Greenpeace have not yet articulated a position of unequivocal opposition. This reflects the fact that nanotechnology really does seem to have the potential to provide answers to some pressing environmental problems. For example, there are real hopes that it will lead to new types of solar cells that can be produced cheaply in very large areas. Applications of nanotechnology to problems of water purification and desalination have obvious potential impacts in the developing world. Of course, these kinds of problems have major political and social dimensions, and technical fixes by themselves will not be sufficient. However, the prospects that nanotechnology may be able to make a significant contribution to sustainable development have proved convincing enough to keep mainstream environmental movements at least neutral on the issue.
While some mainstream environmentalists may still remain equivocal in their view of nanotechnology, another group seems to be embracing new technologies with some enthusiasm as providing new ways of maintaining high standards of living in a fully sustainable way. Such “bright greens” dismiss the rejection of industrialised economies and the yearning to return to a rural lifestyle implicit in the “deep green” worldview, and look to the use of new technology, together with imaginative design and planning, to create sustainable urban societies[8]. On this view, nanotechnology may help, not just by enabling large scale solar power, but by facilitating an intrinsically less wasteful industrial ecology.
Conclusion
If there is (or indeed, ever was) a time in which there was an “independent republic of science”, disinterestedly pursuing knowledge for its own sake, nanotechnology is not part of it. Nanotechnology, in all its flavours and varieties, is unashamedly “goal-oriented research”. This immediately raises the question “whose goals?” It is this question that underlies recent calls for a greater degree of democratic involvement in setting scientific priorities[9]. It is important that these debates don’t simply concentrate on technical issues. Nanotechnology provides a fascinating and evolving example of the complexity of the interaction between science, technology and wider currents in society. Nanotechnology, along with other new and emerging technologies, will have a huge impact on the way society develops over the next twenty to fifty years. Recognising the importance of this impact does not by any means imply that one must take a technologically deterministic view of the future, though. Technology co-evolves with society, and the direction it takes is not necessarily pre-determined. Underlying the directions in which it is steered are a set of competing visions about the directions society should take. These ideologies, which are often left implicit and unexamined, need to be made explicit if a meaningful discussion of the implications of the technology is to take place.
[4] Gibbons, M, et al. (1994) The New Production of Knowledge. London: Sage.
[5] David Berube (in his book Nano-hype, Prometheus, NY 2006) explicitly links the two interventions, and identifies Zac Goldsmith, millionaire organic farmer and editor of “The Ecologist” magazine, as the man who introduced Prince Charles to nanotechnology and the ETC critique. This could be significant, in view of Goldsmith’s current prominence in Conservative Party politics.
[6] Nanoscience and nanotechnologies: opportunities and uncertainties, Royal Society and Royal Academy of Engineering, available from http://www.nanotec.org.uk/finalReport.htm
[7] Enough: staying human in an engineered age, Bill McKibben, Henry Holt, NY (2003)
[8] For a recent manifesto, see Worldchanging: a user’s guide for the 21st century, Alex Steffen (ed.), Harry N. Abrams, NY (2006)
[9] See for example See-through Science: why public engagement needs to move upstream, Rebecca Willis and James Wilsdon, Demos (2004)
It sounds to me like you’re conflating the ideas of one person, Ray Kurzweil, with transhumanism in general. Transhumanism in general says nothing more specific than that human enhancement technologies may be developed in the not too far future, and that, if these technologies are handled right, this will be a good thing. No “teleology” or “destiny” or “technological determinism” is involved, nor do transhumanists need to believe that technology is currently accelerating, that technology will always keep accelerating, that this acceleration is necessarily a good thing, or that human and machine intelligence will merge.
It’s true that transhumanists tend to think Drexlerian nanotechnology is possible. (I personally don’t have any strong opinion on the subject, but I do think that if it’s possible, it’s a Big Deal.) But Drexlerian nanotechnology isn’t the essential part of the transhumanist worldview that you seem to think it is. There are many other technologies that could serve to enhance human nature.
There are a number of different definitions of “the singularity”, and the one involving accelerating change isn’t the most interesting one.
I think there’s less of an eschatological tone in singularity writings than many people say. Here’s a blog post I wrote some time ago arguing that the phrase “rapture of the nerds” is not apt.
The first link should have been The Word “Singularity” Has Lost All Meaning.
Steven, I can entirely understand that transhumanists may well think about the singularity in many different ways, and I certainly plead guilty to being influenced by Ray Kurzweil (though actually my introduction to the idea came through Damien Broderick’s book “The Spike”). It does seem to me, though, that both in the major popularizations of the idea and in its roots in von Neumann via Vinge, accelerating change does seem to play a central role (and I note in passing that your interesting blog lives on a domain called “accelerating future”). The essence seems captured in Stan Ulam’s report of a conversation with von Neumann in the 1950s: “Our conversation centred on the ever-accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity beyond which human affairs, as we know them, cannot continue”. It seems overwhelmingly likely that, as this conversation was between two mathematical physicists, they were alluding to the technical meaning of “essential singularity”, and thinking of some function like exp(B/(T0−T)), which has the required properties of a constantly increasing rate of change and a divergence at T=T0, and this of course is the thinking behind all Kurzweil’s curves.
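To spell the mathematics out (a minimal illustration; the constant B > 0 and the date T0 are purely notional parameters, not anything von Neumann specified):

```latex
% A function with an essential singularity at T = T_0: it grows ever
% faster as T approaches T_0 from below, and diverges at T = T_0.
\[
  f(T) = \exp\!\left(\frac{B}{T_0 - T}\right), \qquad B > 0, \; T < T_0,
\]
\[
  \frac{df}{dT} = \frac{B}{(T_0 - T)^2}\, f(T) \;\to\; \infty
  \quad \text{as } T \to T_0^{-}.
\]
```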
As for eschatology, your piece is interesting but it focuses on comparing your view of the singularity with the particular vision of the apocalypse favoured by some American evangelical Christians. This misses the point of the argument, which is that both ideas draw on a much deeper well of apocalyptic thinking in western thought, both political and religious. A provocative book which you might want to look at on the subject is John Gray’s latest, “Black Mass: apocalyptic religion and the death of Utopia”.
Richard – With a word count total of 5372, according to the ever-diligent Jocelyn, even if it is a re-engineering of the Soundings article, which is a 10/10 on the Way Cool Scale Venue List, it is worthy indeed of praise. I think the beach did you some good in spite of it all.
That said, Transhumanism is a topic which created such heated debate on the floor when we first started down the road with NanoTechnology that it was voted ‘Off the island’ until they can be rational and real. That is not to say that Ray Kurzweil, Mike Treder and company do not have valid things to say, but to couch the speech in terms of seeking, in some form, eternal life, even if achievable, is questionable as to whether it is worth it. With Eric Drexler’s programs on the 8/10 list after Mathematica and Catia, we are remaining interested in the core technology, yet not the semantics and rhetoric.
There are many directions which the field of Nanotechnology can take us. Our focus is rapidly becoming its validity in power systems and water purification, remembering that this is how Camp One started in the first place. Reducing the necessity to be tied to the structures and strictures of the urban environment while remaining fully found and connected is a concept which occupies a lot of time on the floor.
A tome, recently retrieved from a library recycling bin, has made the rounds at Camp One. Sleeping Where I Fall, the 1998 memoir by Peter Coyote, tells the tale of the ’60s revolution in San Francisco and has struck a chord which has resonated throughout the community. While, on the surface, a correlation might be hard to see, it has sparked interest, steeling the minds to constructive engagement in how technology can be used for the common good.
I have rung, and will again ring, the bell of focus, a focus I contend should be the underlying dictum of this advancing field: that it advance along this road with full disclosure and goals embracing the common good.
I suggest there will be, in the end, no ‘winners’ unless we are all winners.
Your conclusion concurs in a major way with our own; ‘See-through Science’ [http://www.demos.co.uk/files/Seethroughsciencefinal.pdf] is a much-read tome. As the community matures, so shall we all.
I consider myself a transhumanist and support much of their agenda. However, I simply do not buy into the singularity idea. Developments are coming fast and furious in several technical fields, specifically biotechnology and synthetic biology. However, other fields, like aviation and space, seem to be quite stagnant. I can go down to PDX and get on the flight to Tokyo and I have to sit for 10 hours to get there. This is no different than 30 years ago when it also took 10 hours to go to Tokyo.
Even in biotech and nanotech, I do not buy into the singularity idea. Almost all of the developments of a “nanotech manufacturing” capability are occurring in the biological approach, particularly synthetic biology. Indeed, I think synthetic biology will be the real nanotech, as far as real nanotech is realizable. However, other areas, such as A.I. and dry nanotech, do not seem to be going anywhere. I also think technology tends to follow an “S” curve, with the technology maturing at the top of the “S”. Semiconductors are still going fast and furious, but I think they will reach their limits when they get down to the molecular level around 2030 or so.
I am a classic “1980’s” style transhumanist, in the sense that I believe that a biological cure to aging is possible and desirable and that the main “frontier” (after we cure aging) will be space development. To many of those who call themselves transhumanists today, this is decidedly old-fashioned, but I really think this is how it can and will go.
A lot of the ideas that are associated with transhumanism, like A.I., uploading, and Drexlerian nanotech, strike me as being this current decade’s vision of the future, in the same sense that “Metropolis” was the 1920s vision of the future and the helicopter in every garage was the 1950s future vision.
As I mentioned before, even if I prove to be wrong and “dry” nanotech is possible, the economic revolution will be comparable to that of 1900-1920, when we got electricity, motor cars, and indoor plumbing for the first time. If only “wet” nanotech is possible (as I think), the economic impact will be comparable to that of the infotech bubble of the 1990s, hardly a world-changing event.
In other words, our capabilities will grow tremendously in the future, but much of this growth will be incremental rather than revolutionary in nature. A singularity is simply an ever-receding horizon that lies just ahead in the future as our technology and capabilities continue to grow and develop.
If there is (or indeed, ever was) a time in which there was an “independent republic of science”, disinterestedly pursuing knowledge for its own sake, nanotechnology is not part of it. Nanotechnology, in all its flavours and varieties, is unashamedly “goal-oriented research”.
Hmmm. I’m just putting the finishing touches to an EPSRC proposal on self-organised nanoparticle assemblies where my opening line under the “Relevance to Beneficiaries” heading is “This is unashamedly curiosity-driven, blue skies research.” It seems I must have misunderstood the fundamental role of a University when I signed up as an academic…
Philip
Philip, I thought you were too young to remember the golden age (if there was one)! In the context of UK science, a key date was 1993, when William Waldegrave’s White Paper “Realising our potential” explicitly linked government’s policy for science with “innovation” and industrial exploitation, and moved responsibility for science from the Cabinet Office to the Department of Trade and Industry. At the time I seem to remember at least some of the scientific community regarded it as a step forward that the Thatcher government thought there was anything worthwhile about science at all, having spent the last ten years or so regarding universities as another branch of “the enemy within”.
Government spin-doctor: “The time has come for a major initiative in education. We’re going to close down half the universities in the country.”
“You’re going to do what?!”
“We’re going to stop wasting public money on self-indulgent young people reading damn silly subjects like history or… philosophy. I mean, philosophy – for pity’s sake.”
From the BBC R4 series “Absolute Power” starring Stephen Fry and John Bird. Episode 8: “Promoting Philosophy” (broadcast in 2001).
——————–
Hi, Richard.
Yes, I also don’t have any abiding memories of having lived through a golden age for science. Strange that…
I arrived in Nottingham in 1994 and so missed UK science’s darkest days under Thatcher. (However, I too have heard the frightening stories of half-starved academics eking out a living in cold, damp basements equipped with only a battered old oscilloscope…). I am nevertheless well aware of the impact of Waldegrave’s White Paper (difficult not to be!) and I think that if we compare 1993 to 2007 it’s very much a case of “plus ça change, plus c’est la même chose” when it comes to the increasing dominance of the private sector in science funding. (Interestingly, of course, Thatcher was very much in favour of basic research – she just didn’t see why it should be publicly funded!)
Labour – oops, sorry New Labour – have embraced and built on Waldegrave’s core concept: we fund science simply because it drives economic growth. And let’s not kid ourselves – in general, industry, just like the New Labour government, has very little interest in the type of long-term economic benefits that might accrue from basic research; it has shareholders to placate.
Moreover, it is this unquestioning assumption that economic growth/wealth creation is always a societal good (i.e. necessarily produces a better quality of life for society as a whole) that rankles so much with me. I prefer the direct, honest-to-goodness crassness of Thatcher – even though I despise her – when she stated that “there is no such thing as society” as compared to the “iron fist in a velvet glove” disingenuity of the current government.
And on the subject of crassness, how about this: “How can searching for life elsewhere in the Universe help economic development on Earth?”. What a wonderful way to “sell” astronomy and astrophysics to the next generation of scientists: forget about the big, exciting fundamental questions – focus on the cash. I weep.
Philip
P.S. On a slightly more upbeat note, talking about Waldegrave brings back many happy memories of watching Spitting Image in the eighties and nineties!
I’d agree with Steven and others; there are more varieties of transhumanism than what you picked out. Vinge himself has varied in what he says about the Singularity, from “ever-increasing acceleration real soon now” to simply “at some point, somehow, superhuman intelligences, which make my authorial brain hurt.” Me, I prefer to avoid the Singularity now, and think instead of a Cognitive Revolution — understanding how our minds really work should have Effects. And transhumanism can go from “global upload by 2030!” to “is there any fundamental reason we can’t make full spare parts for the whole body?”
“Rapture of the nerds” may be popularized by Doctorow, but I saw it first in that wording from Ken MacLeod, who himself (http://www.acceleratingfuture.com/steven/?p=21#comment-181) credits “an early Extropian”, which from my memory is Timothy C. May, though I remember his saying “the Techno-Rapture”.
As far as the plausibility of changing the human condition, one could make snarky comments about the utopian dreaming of liberating most people from working on the land, or reducing infant mortality from 50% to 0.3%, or the liberated condition of Western women.
I think transhumanism is a red herring in the context of nanotechnological ideologies. It is a consequence rather than a cause.
I believe the key feature of the viewpoint you were trying to describe is ‘the singularity’ itself. My understanding of this phenomenon is as the limit of a number of projections. For example, there is a well-known observation about computational power known as Moore’s Law. Combining Moore’s Law with details of existing brain simulation experiments, one can project that we will be able to simulate and then exceed the human brain within the next two or three decades. Notice that this does not depend on any ‘radical’ nanotechnology, just on continuing development of the technologies used to manufacture computing machinery.
Norio Taniguchi made equally interesting observations about the rate of improvement of the precision with which engineers can manipulate matter. If projected, his curves indicate atomic precision in a similar timescale. There are other observations that can be used to make equally bold projections over the same time period.
So what I would characterize as the extreme singularity viewpoint is the view that at that time our development will accelerate very rapidly and in unpredictable ways. Given its strong dependence on artificial intelligence, it is not surprising that the singularity is associated with those involved with mathematics and information technology, from von Neumann and Ulam to Good, Vinge and onwards.
Transhumanism is just one possible consequence.
Damien, it’s entirely defensible to maintain that the key element in the singularity is the development of artificial superhuman intelligence. But achieving this cognitive revolution still needs dramatic enabling technological developments, both to allow us a better experimental understanding of what’s going on in the human brain, and to give us the computer power to simulate it.
Of course one can make a very strong case that as a result of technology substantial changes for the better have taken place in the human condition. What’s at issue in transhumanism, though, are not changes in the human condition, but changes in human nature, which is a somewhat different thing.
Dave, what brain simulations are you thinking of? I’m not sure I’m aware of any that would allow you to make a convincing estimate of how much computer power you would need to simulate a human brain. As for Moore’s law, I don’t think incremental developments of existing technology will permit that to be extended for 20 to 30 years – the ITRS predicts that by 2019 scaling CMOS down further will be becoming asymptotically difficult, and looks to non-CMOS approaches such as spintronics, molecular electronics or some form of quantum computing to keep Moore’s law going. This is an interesting subject that I’ll return to soon.
Philip, it’s undoubtedly right that the Waldegrave White Paper set the framework for science policy that has continued into the Labour administration. Lord Sainsbury’s report on science policy for Gordon Brown will appear soon, though one would be very surprised if this marked a big departure from the knowledge transfer/wealth creation focus of science policy to date. The only thing I can try and cheer you up with is the perception that the research councils really are serious about feeding the results of public engagement back into policy, which I hope will lead to a broader definition of “societal good” than simple money-making.
Richard,
I must agree that I’d be “very surprised” – the term I’d use would be “gobsmacked” – if there were even the slightest departure from the knowledge transfer/wealth creation agenda. I hope that you’re right about the Research Councils feeding the results of public engagement back into science policy but, unfortunately, and as you might guess, I’m fairly cynical about this. Even if public engagement data are fed back into science policy development, I’m pretty confident that the data will be cherry-picked and “spun” as much as possible to ensure “alignment” with the “vision” of the “Science and Innovation 2004-2014” framework document.
My most pressing concern is that the Science and Technology Facilities Council are probably not that far ahead of EPSRC in terms of their implementation of the “Economic Impact Advisory Board”. (I must admit that I’m very glad that I’ve significantly ramped down my synchrotron activity over the past couple of years!)
Thanks for engaging with me on this topic – I know that my ranting must get tiresome. However, I feel very strongly that RCUK is currently implementing changes which will have disastrous consequences for basic research in the UK. Once put in place, those changes, like so many other dumb New Labour (i.e. Tory) ideas (PFI, the maintenance of school league tables,…) will get embedded in the system and will become next to impossible to remove.
I’ll give it a rest for now!
Best wishes,
Philip
P.S. One final thing (I promise!): In EPSRC’s Knowledge Transfer and Economic Impact Strategy document, we have the buttock-clenchingly awful:
“The primary role of EPSRC in the innovation ecosystem…”
The “innovation ecosystem”?! The Dilbert mission statement generator must have been running on overdrive…
I was asked when “smarter than human” AI would occur, and I took this to mean when AI software would output more research than researchers like P. Moriarty. I answered that Philip would be out of work by 2070. This is very different from the supposition that software can be conscious. I think the latter isn’t possible on a substrate that doesn’t utilize things and qualities like proteins, phase transformations, solitons, and ion flows. Toggling a tiny silicon switch back and forth doesn’t bring about sentience.
The latter (believing software to be intrinsically more valuable than humans) isn’t my main concern with Transhumanism; my main H+ beef is that Libertarian over-representation jeopardizes its goal: H+ ideals need strong public health and biotech funding.
About science policy: one reason public (and open access) research is good is that more people (the whole planet) can use open research to further their own projects than if the research is segregated among less than a dozen people. If Phil-Altruism University openly publishes a chemical recipe for extracting platinum from road dust, all Rare Earth element chemical engineers can read it. If Phil-Clamp Corporation keeps the same research as a trade secret or patent, only their circle of researchers and allied company researchers will get a chance to use it, for 20 years or until another group pointlessly copies the same research.
I’m not even that worried about military research. I read somewhere that $2 billion of the $6 billion annual USA nanotechnology funding (I forget under what umbrella, maybe the big program created by Clinton) is used for military research. To take one example: a very big problem for beekeepers, of unknown cause until earlier this year, was that half their colonies were dying globally (CCD). It sounds minor, but the apiculture industry punches 10x above its weight by forwarding agriculture yield gains (canola, fruits, etc). It was a USA military lab-on-a-chip that discovered CCD was caused by a parasite/virus combo. Regardless of the initial application and who funds it, as long as scientific research is allowed to diffuse, at some point it should command a greater return than most other capital uses, excluding some offensive military applications (no need for better nerve gas, for example).
Phillip,
Much though I’m looking forward to interacting with my AI “colleagues” at the ripe old age of 102 (which will be well below retirement age in 2070, I’m sure), I have no idea on what you base your 2070 prediction. I’ve posted the following passage from Steven Levy’s “Artificial Life” on the Centre for Responsible Nanotechnology’s blog in the past. It’s as true for nanotechnology as it is for AI:
“But with the benefit of hindsight, Langton and most of the others consciously expressed caution when speaking of the promise of their studies [of artificial life]. They recalled the embarrassment, not to mention the bad science of it all, when the poker-faced predictions of the AI pioneers – who promised the likes of human-level intelligence in ten years – wound up as quaint buffoonery a couple of decades later, when computers still could not replicate the cognition of an average one year old. In retrospect, plenty had been done (a computer playing grand-master chess, for instance, is nothing to sneeze at) but the failure to meet exaggerated expectations had tarnished the whole enterprise”.
Philip
Why the recent aversion to synchrotron research? Is it more industrially driven than scholarly, recently? If so, is this a local phenomenon or a shift in the world’s priorities? It’s too bad some sort of royalty system doesn’t funnel down to basic research from applied corporate research resulting in big innovations, analogous to stock options.
Phillip,
I was heavily involved in synchrotron-based research for quite some time – at one point, we were spending three to four months a year at various synchrotron sources (in the UK and Europe). I’ve moved away from synchrotron research (SR) of late to focus more heavily on my “first love” – scanning probe microscopy. In particular, we’ve recently been fortunate to have had a large injection of funding into infrastructure (including SPM) in Nottingham. If you’re interested, we’ve updated our website over the last few weeks and have included an online lab tour. (Sorry for the shameless self-promotion, Richard!)
My comment with regard to SR relates to the recent formation of the Science and Technology Facilities Council. This now funds all large scale facilities research (synchrotron, neutrons, central laser facility, high performance computing) and astronomy, particle physics, nuclear physics, and space science. Before the STFC was established we had two separate funding bodies – the Council for the Central Laboratories of the Research Councils (CCLRC) and the Particle Physics and Astronomy Research Council (PPARC). It is STFC’s short-sighted and ultimately extremely damaging commitment to place economic impact on a similar footing to research quality when assessing proposals (see my posts above) which makes me grateful that SR no longer plays such a major role in my research.
Best wishes,
Philip
I think I should have made it clearer that I was only trying to explain the viewpoint, not defend it!
I don’t follow the brain simulation world but a quick Google produced this story – http://news.bbc.co.uk/1/hi/technology/6600965.stm – which talks about simulating around 1/1250 of the number of neurons in a human brain (IIRC) at 1/10 speed. So if we double power every 18 months or so, that’s in the 20-30 year range. Of course, the simulation wasn’t good enough but then the hardware wasn’t state of the art and we haven’t allowed for software development etc etc.
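To make that back-of-envelope arithmetic explicit, here is a minimal sketch in Python (the 1/1250 and 1/10 figures are those quoted above; the 18-month doubling time and the assumption that the required computing power scales linearly with neuron count and with speed are crude assumptions, not claims about the actual simulation):

```python
import math

# Figures quoted above: today's simulation covers ~1/1250 of the
# brain's neurons and runs at ~1/10 of real-time speed.
neuron_shortfall = 1250
speed_shortfall = 10
doubling_time_years = 1.5  # "double power every 18 months or so"

# Total factor by which computing power must grow, assuming (crudely)
# that cost scales linearly with both neuron count and speed.
total_factor = neuron_shortfall * speed_shortfall  # 12,500x

doublings = math.log2(total_factor)      # ~13.6 doublings
years = doublings * doubling_time_years  # ~20 years

print(f"{total_factor:,}x shortfall -> {doublings:.1f} doublings "
      f"-> ~{years:.0f} years")
```

That lands at the low end of the 20-30 year range; a slower doubling time or worse-than-linear scaling pushes it out.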
I agree that existing technologies will not sustain Moore’s law. There are two things that I find impressive about Moore’s law. The first is his boldness and success in prediction based on six years’ data, at a time when there were 50 components on a chip and the dominant technology was not yet established. The second is the ability of the semiconductor industry to make the prediction hold for over forty years to date. And yes, I know the original prediction didn’t hold. But I wouldn’t bet against it continuing to hold.
I don’t think it’s possible to reliably predict either of these outcomes. But it’s interesting to speculate.
Dave, thanks for looking that up. I can’t find any published paper from the group that would substantiate those claims. But it did prompt me to do some searching myself. “Trends in Neurosciences” in 2005 had a useful review called “Biophysically detailed modelling of microcircuits and beyond”, which cites as state of the art a study of a sub-section of the nervous system of a lamprey, with 3000 neurons and 110,000 synapses (this is the bit that controls the beating motion of the eel-like lamprey’s body during swimming), which seems to produce realistic results. This is a long way from the 20 billion neurons of a human brain. There are other issues, too. Firstly, one needs to find out how the neurons are wired up. Then there is the question of at what level you model the system; this kind of study uses the neuron as the basic unit, with the basic biophysics of the neuron and its connections put in by hand. But of course, the essence of brain behaviour is that these connections are themselves mutable, and this mutability underlies central phenomena like learning. As for Moore’s law, I feel a post on the subject coming on…
Philip, just to respond to a couple of your points. As regards what EPSRC is doing about public engagement, you may know that a new advisory committee has been set up called the “Societal Issues Panel” (I’m on this). This is at the same level as the existing “User Panel” and “Technical Opportunities Panel”, directly advising Council, which is effectively EPSRC’s board. Crudely, TOP, made up mainly of academic scientists, is supposed to give advice about what neat science is on the horizon, and UP, mostly industrial people, is supposed to tell EPSRC what industry wants. The idea of SIP is to provide an input at the same strategic level of what comes back from public engagement, with its chair (currently Robert Winston) sitting on Council. Of course, the difficulty is in turning these fine aspirations into concrete actions, and one of my September jobs is to produce a proposal for SIP of how, in detail, we will incorporate public engagement into the development of nanotechnology policy.
As for “the innovation ecosystem” – the point I would make about this is that it implies the existence of some theory about how innovation actually works. One thing I have been saying at length to anyone who will listen (and many, no doubt, who don’t) is that we need to examine these tacitly held theories of innovation that underlie the knowledge transfer policy that irks you so, ask what the actual evidence for them is, and consider whether there is a need for a more sophisticated understanding of this.
Richard,
Thanks for the carefully considered response to my comments – it’s much appreciated. I will follow with great interest the “evolution” and embedding of public engagement within nanotechnology/ nanoscience policy.
Best wishes,
Philip
There’s another IBM project [1] that announced they had simulated 10,000 neurons in 2006, which is of a similar magnitude to the project you found. They state that their goal is ‘to create a biologically accurate, functional model of the brain’.
That’s not a goal that needs to be reached by proponents of the singularity of course, but whether it is an easier or harder goal than ‘strong AI’ is an open question. I believe there’s a clearer path to the biologically accurate model and some past AI efforts have disappointed. But I don’t know of any evidence that rules out some breakthrough.
For example, before the AFM was invented who believed that its capabilities could be achieved by such a simple device? Become a futurologist, charge a few thousand dollars a day and wishful thinking is rewarded!
[1] http://bluebrain.epfl.ch/page17871.html
Further to the discussion above re. the commercialization of academic research and RCUK’s “wealth creation” agenda, a blog dedicated to this subject, http://stormbreaking.blogspot.com, has very recently been set up by Andrew Chitty, a philosopher at the University of Sussex.