It’s conventional wisdom that science is very different from technology, and that it makes sense to distinguish between pure science and applied science. Largely as a result of thinking about nanotechnology (as I discussed a few years ago here and here), I’m no longer so confident that there’s such a clean break between science and technology, or, for that matter, between pure and applied science.
Historians of science tell us that the origin of the distinction goes back to the ancient Greeks, who distinguished between episteme, which is probably best translated as natural philosophy, and techne, translated as craft. Our word technology derives from techne, but careful scholars remind us that technology literally refers to writing about craft, rather than doing the craft itself. They would prefer to call the actual business of making machines and gadgets technique (in the same way that the Germans call it Technik), rather than technology. Of course, for a long time nobody wrote about technique at all, so there was in this literal sense no technology. Craft skills were regarded as secrets, to be handed down in person from masters to apprentices, who came from a lower social class than the literate philosophers considering more weighty questions about the nature of reality.
The sixteenth century saw some light being thrown on the mysteries of technique, with books (often beautifully illustrated) being published about topics like machines and metal mining. But one could argue that the biggest change came with the development of what was then called experimental philosophy, which we now see as the beginning of modern science. The experimental philosophers certainly had to engage with craftsmen and instrument makers to do their experiments, but what was perhaps more important was the need to commit the experimental details to writing, so that their counterparts and correspondents elsewhere in the country or elsewhere in Europe could reliably replicate the experiments. Complex pieces of scientific apparatus, like Robert Boyle’s air-pump, certainly were among the most advanced (and expensive) pieces of technology of their day. And, conversely, it’s no accident that James Watt, who more than anyone else made the industrial revolution possible with his improved steam engine, learned his engineering as an instrument maker at the University of Glasgow.
But surely there’s a difference between making a piece of experimental apparatus to help unravel the ultimate nature of reality, and making an engine to pump out a mine? In this view, the aim of science is to understand the fundamental nature of reality, while technology seeks merely to alter the world in some way, with its success judged simply by whether it does its intended job. In actuality, the aspect of science as natural philosophy, with its claims to deep understanding of reality, has always coexisted with a much more instrumental type of science whose success is judged by the power over nature it gives us (Peter Dear’s book The Intelligibility of Nature is a fascinating reflection on the history of this dual character of science). Even the keenest defenders of science’s claim to make reliable truth-claims about the ultimate nature of reality often resort to entirely instrumental arguments: “if you’re so sceptical about science”, they’ll ask a relativist or social constructionist, “why do you fly in airplanes or use antibiotics?”
It’s certainly true that different branches of science are, to different degrees, applicable to practical problems. But which science is an applied science and which is a pure science depends as much on what problems society, at a particular time and in a particular place, needs solving, as on the character of the science itself. In the sixteenth and seventeenth centuries astronomy was a strategic subject of huge importance to the growing naval powers of the time, and was one of the first recipients of large-scale state funding. The late nineteenth and early twentieth centuries were the heyday of chemistry, with new explosives, dyes and fertilizers making fortunes and transforming the world only a few years after their discovery in the laboratory. A contrarian might even be tempted to say that “a pure science is an applied science that has outlived its usefulness”.
Another way of seeing the problems of a supposed divide between pure science, applied science and technology is to ask what it is that scientists actually do in their working lives. A scientist building a detector for CERN or writing an image analysis program for some radio astronomy data may be doing the purest of pure science in terms of their goals – understanding particle physics or the distant universe – but what they’re actually doing day to day will look very similar indeed to their applied scientist counterparts designing medical imaging hardware or software for interpreting CCTV footage for the police. Of course, this is the origin of the argument that we should support pure science for the spin-offs it produces (such as the World Wide Web, as the particle physicists continually remind us). A counter-argument would say, why not simply get these scientists to work on medical imaging (say) in the first place, rather than trying to look for practical applications for the technologies they develop in support of their “pure” science? Possible answers to this might point to the fact that the brightest people are motivated to solve deep problems in a way that might not apply to more immediately practical issues, or that our economic system doesn’t provide reliable returns for the most advanced technology developed on a speculative basis.
If it was ever possible to think that pure science could exist as a separate province from the grubby world of application, like Hesse’s “The Glass Bead Game”, that illusion was shattered in the second world war. The purest of physicists delivered radar and the fission bomb, and in the cold war that we emerged into, it seemed that the final destiny of the world was going to be decided by the atomic physicists. In the west, the implications of this for science policy were set out by Vannevar Bush. Bush, an engineer and perhaps the pre-eminent science administrator of the war, set out the framework for government funding of science in the USA in his report “Science: the endless frontier”.
Bush’s report emphasised, not “pure” research, but “basic” research. The distinction between basic research and applied research was not to be understood in terms of whether it was useful or not, but in terms of the motivations of the people doing it. “Basic research is performed without thought of practical ends” – but those practical ends do, nonetheless, follow (albeit unpredictably), and it’s the job of applied research to fill in the gaps. It had in the past been possible for a country to make technological progress without generating its own basic science (as the USA did in the 19th century) but, Bush asserted, the modern situation was different, and “A nation which depends upon others for its new basic scientific knowledge will be slow in its industrial progress and weak in its competitive position in world trade”.
Bush thus left us with three ideas that form the core of the postwar consensus on science policy. The first was that basic research should be carried out in isolation from thoughts of potential use – that it should result from “the free play of free intellects, working on subjects of their own choice, in the manner dictated by their curiosity for exploration of the unknown”. The second was that, even though the scientists who produced this basic knowledge weren’t motivated by practical applications, these applications would follow, by a process in which potential applications were picked out and developed by applied scientists, and then converted into new products and processes by engineers and technologists. This one-way flow of ideas from science into application is what innovation theorists call the linear model of innovation. Bush’s third assertion was that a country that invested in basic science would recoup that investment through capturing the rewards from new technologies.
All three of these assertions have subsequently been extensively criticised, though the basic picture retains a persistent hold on our thinking about science. Perhaps the most influential critique, from the science policy point of view, came in a book by Donald Stokes called Pasteur’s Quadrant. Stokes argued from history that the separation of basic research from thoughts of potential use often didn’t happen; his key example was Louis Pasteur, who created the new field of microbiology in his quest to understand the spoilage of milk and the fermentation of wine. Rather than thinking of a linear continuum between pure and applied research, Stokes thought in terms of two dimensions – the degree to which research was motivated by a quest for fundamental understanding, and the degree to which it was motivated by applications. Some research was driven solely by the quest for understanding, typified by Bohr, while an engineer like Edison typified a search for practical results untainted by any deeper curiosity. But the example of Pasteur showed that the two motivations could coexist. Stokes suggested that research in this “Pasteur’s quadrant” – use-inspired basic research – should be a priority for public support.
Where are we now? The idea of Pasteur’s quadrant underlies the idea of “Grand Challenges” inspired by societal goals as an organising principle for publicly supported science. From innovation theory and science and technology studies come new terms and concepts, like technoscience and Mode 2 knowledge production. One might imagine that nobody believes in the linear model anymore; it’s widely accepted that technology drives science as often as science drives technology. As David Willetts, the UK’s Science Minister, put it in a speech in July this year, “A very important stimulus for scientific advance is, quite simply, technology. We talk of scientific discovery enabling technical advance, but the process is much more inter-dependent than that.” But the linear model is still deeply ingrained in the way policy makers talk – in phrases like “technology readiness levels” and “pull-through to application”. From a more fundamental point of view, though, there is still a real difference between finding evidence to support a hypothesis and demonstrating that a gadget works. Intervening in nature is a different goal from understanding nature, even though the processes by which we achieve these goals are very much mixed up.
Thoughtful post as ever. You might enjoy The Atlantic’s “tech canon”, though arguably it is too West-Coast in focus and could do with more on policy. Dare say you could provide them with some recommendations.
Fascinating. The episteme and techne stuff reminds me that Bent Flyvbjerg talks about the value of social science as being the third leg of Aristotle’s knowledge stool – phronesis (a sort of ethical, practical wisdom). I like this, as it avoids having to decide between physics envy and dry data-gathering. Maybe phronetic STS could contribute to better science policy by questioning such distinctions.
The physics envy is hard to shake, mind…
Thanks for that pointer, Alice; a very interesting list. Some books I’ve read, some I know I ought to have read but haven’t, some I haven’t come across (and some conspicuous omissions). But I’m not sure there’s enough universally agreed common ground about innovation to talk about a “canon”.
You’re welcome to your physics envy, Jack; I’ve got my biology envy to deal with!
Richard, you say that Pasteur’s quadrant underlies the current ‘Grand Challenges’, but there is one crucial distinction you didn’t discuss. Pasteur was a brilliant individual who was able to synthesise both fundamental science and the end-use/problem to come up with a solution that has stood the test of time, as well as a new field. However, the Grand Challenges are inherently large team efforts directed at a single end point, with the synthesis being achieved by a plurality of inputs. I am not sure how significant you think this difference is.
I pose this question because it reflects a comment made on my own blog on a post about interdisciplinary research, pointing me in the direction of an article by Sean Eddy http://www.ploscompbiol.org/article/info:doi%2F10.1371%2Fjournal.pcbi.0010006. Eddy’s complaint was that the expectation (in the case he cited, of the NIH) that interdisciplinary science had to be done in teams was tantamount to saying that science has become too hard for individual humans to cope with. He obviously felt the ‘best’ interdisciplinary science was done by individuals – he cited Howard Berg, who had picked up elements of physics, chemistry, biology, mathematics and so on, and done beautiful stuff on bacterial chemotaxis with it. I don’t really agree with this position: a smattering of everything does not necessarily lead to expertise, though I would not say that of Berg himself. And for huge projects such as the Human Genome Project, at the very least you need lots of pairs of hands, but also, I would suggest, lots of different angles from which to approach the problem in order to overcome inevitable bottlenecks.
So, for your not-so-linear model of pure to applied science, do you think Pasteur, as an individual, can still be a good role model, or are the interdisciplinary teams of Grand Challenges of the 21st century a necessity?
I do think it’s much harder now to be an individual making big progress in interdisciplinary science than it used to be. It’s not that an individual can’t learn the language of another discipline, get up to date with its literature and see the big problems in that discipline that could be solved from a different perspective. I think the barrier comes from the different sets of practical craft skills that different disciplines have – it is these that are very difficult to learn except by spending long periods of time in a laboratory. Of course, this doesn’t apply to theoretical or computing work. But then, I don’t really think that big, difficult problems can be solved without recourse to experiments.