Howard Lovy’s Nanobot draws attention to an interesting piece in SciDevNet discussing bibliometric measures of the volume and impact of nanotechnology research in various parts of the world. This kind of measurement – in which databases are used to count the number of papers published and the number of times those papers are cited by other papers – is currently very popular among governments attempting to assess whether the investments they make in science are worthwhile. I was shown a similar set of data about the UK, commissioned by the Engineering and Physical Sciences Research Council, at a meeting last week. The attractions of this kind of analysis are obvious: it is relatively easy to commission and carry out, and it yields results that can be plotted in plausible and scientific-looking graphs.
The drawbacks are perhaps less obvious, but they are rather serious. How do you tell which papers are actually about nanotechnology, given the difficulties of defining the subject? The obvious thing to do is to search for papers with “nano” somewhere in the title or abstract – this is what the body charged with evaluating the USA’s National Nanotechnology Initiative has done. What’s wrong with this is that many of the best papers on nanotechnology simply don’t feel the need to include the nano- word in their title. Why should they? The title tells us what the paper is about, which is generally a much more restricted and specific subject than this catch-all word. I’ve been looking up papers on single-molecule electronics today. I’d have thought that everyone would agree that the business of trying to measure the electrical properties of single molecules, one at a time, and wiring them up to make ultra-miniaturised electronic devices, is about as hardcore as nanotechnology comes. But virtually none of the crucial papers on the subject over the last five years would have shown up in such a search.
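To make the point concrete, here is a minimal sketch (in Python, with made-up records and a hypothetical helper name, not the actual code behind any of these studies) of the kind of keyword filter they rely on, and of how it misses a single-molecule electronics paper:

```python
# Toy illustration of a "nano in title or abstract" filter; the records and
# helper are invented for this example, not taken from any real study.

papers = [
    {"title": "Nanostructured TiO2 films for photocatalysis",
     "abstract": "We grow nanoscale TiO2 layers and measure their activity."},
    {"title": "Conductance of a single-molecule junction",
     "abstract": "We wire up one molecule between gold electrodes and measure its electrical properties."},
]

def looks_like_nano(paper):
    """Crude test: does the string 'nano' appear in the title or abstract?"""
    text = (paper["title"] + " " + paper["abstract"]).lower()
    return "nano" in text

selected = [p["title"] for p in papers if looks_like_nano(p)]
print(selected)
# Only the TiO2 paper is counted; the single-molecule electronics paper,
# which is unambiguously nanotechnology, slips through the net.
```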
The big picture these studies paint does ring true: the majority of research in nanoscience and nanotechnology is done outside the USA, and this kind of research in China has been growing exponentially in both volume and impact in recent years. But we shouldn’t take the numbers too seriously; if we do, it’s only a matter of time before some science administrator realises that the road to national nanotechnology success is simply to order all the condensed matter physicists, chemists and materials scientists to stick “nano-” somewhere in the titles of all their papers.
Leydesdorff isn’t saying “look at all of China’s citations, they must be getting good”; rather, he is pointing out that the growth of citations is proceeding at an exponential rate, unlike the linear growth found in most OECD countries. China and Singapore may not be neatly assimilated into his ‘world system of science’. This may still support the use of bibliometric data, but it leaves open the use of additional metrics for understanding the robustness of national scientific activity. So if bibliometric data is somewhat misleading (as I agree it is), other metrics would need to be taken as seriously as the number of citations or patents. Collaboration, networks, etc. are more difficult to measure. Suggestions?
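For readers who want that distinction made concrete, here is a hedged sketch (in Python, with invented citation counts rather than Leydesdorff’s data) of how exponential growth differs from linear growth, and how fitting the logarithm of the counts recovers an annual growth factor:

```python
# Illustration with invented numbers: linear growth adds a fixed number of
# citations each year, exponential growth multiplies by a fixed factor.
import numpy as np

years = np.arange(1998, 2006)
linear_counts = 200 + 30 * (years - 1998)         # steady, OECD-like growth
exponential_counts = 200 * 1.4 ** (years - 1998)  # compounding growth

# A straight-line fit describes the first series well; for the second series,
# only the logarithm of the counts grows linearly, which is the signature of
# exponential growth.
linear_slope = np.polyfit(years, linear_counts, 1)[0]
log_slope = np.polyfit(years, np.log(exponential_counts), 1)[0]

print("citations added per year (linear series):", linear_slope)
print("annual growth factor (exponential series):", np.exp(log_slope))
```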
There are two issues at stake here: the general one of whether bibliometrics is a good way of assessing scientific activity, and the specific one of whether these studies, focused on nanotechnology, are using a methodology that appropriately identifies some subset of the scientific literature that can be meaningfully identified as “nanotechnology”. I’m objecting here to these specific methodologies. Although there are some general issues about bibliometrics, I think that, used carefully, this approach probably is pretty valuable. But I’m sure I need to learn more about it to say more.