Delivering genes

Gene therapy holds out the promise of correcting a number of diseases whose origin lies in the deficiency of a particular gene. Given our growing knowledge of the human genome, and our ability to synthesise arbitrary sequences of DNA, one might think that introducing new genetic material into cells to remedy the effects of abnormal genes would be straightforward. This isn’t so. DNA is a relatively delicate molecule, and organisms have evolved efficient mechanisms for finding and eliminating foreign DNA. Viruses, on the other hand, whose entire modus operandi is to introduce foreign nucleic acids into cells, have evolved effective ways of packaging their payloads of DNA or RNA and smuggling them inside. One approach to gene therapy co-opts viruses to deliver the new genetic material, though this sometimes has unpredicted and undesirable side-effects. So an effective, non-viral method of wrapping up DNA, introducing it into target cells and releasing it would be very desirable. My colleagues at Sheffield University, led by Beppe Battaglia, have demonstrated an effective and elegant way of introducing DNA into cells, in work recently reported in the journal Advanced Materials (subscription required for full paper).

The technique is based on the use of polymersomes, which I’ve described here before. Polymersomes are bags formed when detergent-like polymer molecules self-assemble into a membrane which folds round on itself to form a closed surface. They are analogous to the cell membranes of biology, which are formed from soap-like molecules called phospholipids, and to the liposomes that can be made in the laboratory from the same materials. Liposomes are already used to wrap up and deliver molecules in some commercial applications, including some drug delivery systems and some expensive cosmetics. They’ve also been used in the laboratory to deliver DNA into cells, though they aren’t ideal for this purpose, as they aren’t very robust. Synthetic block copolymers give one a great deal more flexibility in designing polymersomes with the properties one needs, and this flexibility is exploited to the full in Battaglia’s experiments.

To make a polymersome, one needs a block copolymer – a polymer with two or three chemically distinct sections joined together. One of these blocks needs to be hydrophobic, and one needs to be hydrophilic. The block copolymers used here, developed and synthesised in the group of Sheffield chemist Steve Armes, have two very nice features. The hydrophilic section is composed of poly(2-(methacryloyloxy)ethyl phosphorylcholine) – a synthetic polymer that presents the same chemistry to the adjoining solution as a naturally occurring phospholipid in a cell membrane. This means that polymersomes made from this material are able to circulate undetected within the body for longer than those made with other water-soluble polymers. The hydrophobic block is poly(2-(diisopropylamino)ethyl methacrylate). This is a weak base, so its state of ionisation depends on the acidity of the solution. In a basic solution it is un-ionised, and in this state it is strongly hydrophobic, while in an acidic solution it becomes charged, and in this state it is much more soluble in water. This means that polymersomes made from this material will be stable in neutral or basic conditions, but will fall apart in acid. Conversely, if one has the polymers in an acidic solution, together with the DNA one wants to deliver, and then neutralises the solution, polymersomes will spontaneously form, encapsulating the DNA.
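
To get a feel for how sharp this acid switch is, here is a minimal sketch using the Henderson–Hasselbalch relation. The pKa of about 6.3 used for the weak-base block is an assumed, illustrative value, not a number taken from the paper.

```python
# Sketch: fraction of protonated (charged, hydrophilic) amine groups on a
# weak polybase as a function of pH, via the Henderson-Hasselbalch relation.
# The pKa of ~6.3 for the PDPA block is an assumed, illustrative value.

def ionised_fraction(pH, pKa=6.3):
    """Fraction of amine groups that are protonated, hence water-soluble."""
    return 1.0 / (1.0 + 10.0 ** (pH - pKa))

for pH in [5.0, 6.0, 6.3, 7.0, 7.4]:
    f = ionised_fraction(pH)
    state = ("chains soluble, membrane dissolves" if f > 0.5
             else "block hydrophobic, membrane stable")
    print(f"pH {pH:.1f}: {f:6.1%} ionised -> {state}")
```

The switch happens over roughly one pH unit around the pKa, which is what makes the small pH drop inside an endosome enough to dissolve the membrane.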

The way these polymersomes work to introduce DNA into cells is sketched in the diagram below. On encountering a cell, the polymersome triggers the process of endocytosis, whereby the cell engulfs the polymersome in a little piece of cell membrane that is pinched off inside the cell. It turns out that the solution inside these endosomes is significantly more acidic than the surroundings, and this triggers the polymersome to fall apart, releasing its DNA. This, in turn, generates an osmotic pressure sufficient to burst open the endosome, releasing the DNA into the cell interior, where it is free to make its way to the nucleus.
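
The endosome-bursting step is an osmotic effect, and its plausibility can be checked with a back-of-envelope van ’t Hoff estimate. The endosome size and the number of dissolved species used below are assumed, illustrative numbers, not values from the paper.

```python
# Back-of-envelope van 't Hoff estimate (Pi = c*R*T) of the osmotic pressure
# generated when a polymersome's chains and their counterions dissolve inside
# an endosome. Endosome radius and particle count are assumed values.

import math

R = 8.314        # gas constant, J/(mol K)
T = 310.0        # body temperature, K
N_A = 6.022e23   # Avogadro's number, 1/mol

endosome_radius = 200e-9                                  # m (assumed)
volume = (4.0 / 3.0) * math.pi * endosome_radius ** 3     # m^3

n_dissolved = 5e5                                         # chains + counterions (assumed)
c = n_dissolved / N_A / volume                            # mol per m^3

pressure = c * R * T                                      # pascals
print(f"osmotic pressure ~ {pressure / 1e3:.0f} kPa ({pressure / 1.013e5:.2f} atm)")
```

Even with these modest numbers the pressure comes out at an appreciable fraction of an atmosphere, acting on a membrane only a few nanometres thick.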

The test of the theory is to see whether one can introduce a section of DNA into a cell and then demonstrate how effectively the corresponding gene is expressed. The DNA used in these experiments was the gene that codes for a protein that fluoresces – the famous green fluorescent protein, GFP, originally obtained from certain jellyfish – making it easy to detect whether the protein coded for by the introduced gene has actually been made. In experiments using cultured human skin cells, the fraction of cells into which the new gene was introduced was very high, and few toxic effects were observed. This contrasts with a control experiment using an existing, commercially available gene delivery system, which was both less effective at introducing genes and killed a significant fraction of the cells.

Polymersome endocytosis: a switchable polymersome as a vehicle for gene delivery. Image: Beppe Battaglia, University of Sheffield.

Soft Machines in paperback

My book, Soft Machines: nanotechnology and life, has now been released in the UK as a paperback, with a price of £9.99. It should be available in the USA early in the new year. It’s available from Amazon UK here, and can be preordered from Amazon USA here.

Having an opportunity to make corrections, I re-read the book in the summer. One very embarrassing numerical error needed correcting, and anything to do with the dimensions of semiconductor processes needed to be updated to account for four more years of Moore’s law. But in general I think what I wrote has stood the test of time pretty well.

Bangalore Nano

I’m on my way back from India, where I’ve been at the conference Bangalore Nano 07. The enthusiasm for nanotechnology in India has been well publicised; it’s traditional to bracket the country with China as two rising science powers that see nano as an area in which they can compete on equal terms with the USA, Europe and Japan. So it was great to get an opportunity to see for myself something of what’s going on.

I’ll just mention a couple of highlights from the conference itself. Prof Ramgopal Rao from the Indian Institute of Technology Bombay described a very nice looking project to make an inexpensive point-of-care system for cardiac diagnostics. He began with the gloomy thought that soon more than half the cases of cardiac disease in the world will be in India. If acute myocardial infarction can be detected early enough, a heart attack can be prevented, but this currently needs expensive and time-consuming tests. The need, then, is for a simple test that’s cheap and reliable enough to be done in a doctor’s office or clinic.

To do this one needs to integrate a microfluidic system to handle the blood sample, a sensor array to detect the appropriate biochemical markers, and a box of electronics to analyse the results. The sensor array and fluid handling system need to be disposable, and to cost no more than a few hundred rupees (i.e. a couple of dollars), while the box should only cost a few thousand rupees, even though the protocols for diagnosis need to be quite sophisticated and robust. Rao is aiming for a working prototype very soon; the biosensor is based on a cantilever which bends when the marker binds to a bound antibody. He uses a polymer photoresist to make the cantilever, with an embedded poly-silicon piezo-resistor to measure the deflection (this isn’t trivial at all, as the change in resistance amounts to about 10 parts per million).
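
To see why that last figure is demanding, here is a minimal sketch of the readout arithmetic, assuming the piezo-resistor sits in a standard quarter Wheatstone bridge; the actual readout scheme wasn’t described in the talk, and the bias voltage is an invented number.

```python
# Why a 10 ppm resistance change "isn't trivial at all": in a quarter
# Wheatstone bridge the output voltage is roughly V_bias/4 * (dR/R).
# The bias voltage below is an assumed, illustrative value.

dR_over_R = 10e-6   # ~10 parts per million, the figure quoted in the talk
V_bias = 3.3        # volts (assumed)

V_out = (V_bias / 4.0) * dR_over_R
print(f"bridge output ~ {V_out * 1e6:.1f} microvolts")
# A signal of a few microvolts is comparable to amplifier offsets and
# thermal drifts, hence the need for careful, temperature-compensated
# low-noise electronics in a cheap disposable device.
```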

Another nice talk was from Prof T. Pradeep, a surface chemist from the Indian Institute of Technology Madras. He described a water filter incorporating gold and silver nanoparticles mounted on a substrate of alumina, which is particularly effective at removing halogenated organic compounds such as pesticide residues. This is already marketed, with the filter cartridge costing a few hundred rupees. He also mentioned a kit that can test for such pesticide residues with a detection limit of around 25 parts per billion.

The closing talk was given by Prof CNR Rao, and consisted of reflections on the future of nanotechnology. His opinions are worth paying attention to, if for no other reason than that he is undoubtedly the most powerful and influential scientist in India, and his views will shape the way nanotechnology is pursued there. What follows are my notes on his talk, tidied up but not verbatim.

Rao is a materials chemist, and he started by observing that we can now make pretty much any material in any form. But the question is, how can we use them, how can we assemble them and integrate them into devices? This is the biggest gap – we need products, devices and machines from nano-objects, and this is still probably at least 10 years, maybe 15 years, away. But we shouldn’t worry just about products and devices – nanotechnology is a new type of science, which will dissolve barriers between physics, chemistry and biology and bring in engineering. As an example – many people have made molecular motors. But… can they be connected together to do something? This sort of thing needs combinations of molecular science and nanoscience.

Soft matter is another area with many good people and interesting work, including some in Bangalore. But there’s still a gap in applying it – what about active gels? Similarly, we see big successes in sensors and imaging, but there’s much left to do. As an example of one very big challenge, many people suffer in Bangalore and everywhere else from dementia; we know this is related to the nanoscale phenomenon of peptide aggregation, but we need to understand why it happens and how to stop it. Drug delivery and tissue engineering are other examples where nanotechnology can make a real impact on human suffering.

If one wants a role model, Robert Langer is a great example of someone who has produced many new results in tissue engineering and drug delivery, many graduate students and many companies; science in India should be done like this. We must remove the barriers and bureaucracy to give more freedom to scientists and engineers. At the moment public servants, academics included, cannot get involved in private enterprise, and this must change. Nanotechnology doesn’t take much money – it’s the archetypal knowledge-based industry, and as such it should lead to much more linkage between industry and academia.

Nanotechnology in Korea

One of my engagements in a slightly frantic period last week was to go to a UK-Korea meeting on collaboration in nanotechnology. This included some talks which gave a valuable insight into how the future of nanotechnology is seen in Korea. It’s clearly seen as central to their programme of science and technology; according to some slightly out-of-date figures I have to hand on government spending on nanotechnology, Korea ranks 5th, after the USA, Japan, Germany and France, and somewhat ahead of the UK. Dr Hanjo Lim, of the Korea Science and Engineering Foundation, gave a particularly useful overview.

He started out by identifying the different ways in which going small helps. Nanotechnology exploits a confluence of three types of benefit. Nanomaterials exploit surface matter: their high surface-to-volume ratio brings advantages, most obviously in catalysis. They exploit quantum matter: size-dependent quantum effects that are so important for band-gap engineering and making quantum dots. And they can exploit soft matter, which is so important for the bio-nano interface. As far as Korea is concerned, as a small country with a well-developed industrial base, he sees four important areas. Applications in information and communication technology will directly impact the strong position Korea has in the semiconductor and display industries, as well as having an impact on automobiles. Robots and ubiquitous devices play to Korea’s general comparative advantage in manufacturing, but applications in nano-foods and medical science are relatively weak in Korea at the moment. Finally, the environmentally important applications in fuel and solar cells and in air and water treatment will be of growing importance in Korea, as everywhere else.
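
To illustrate the “quantum matter” point, here is a minimal particle-in-a-box sketch of how the confinement shift of a nanocrystal’s band gap grows as the dot shrinks. The effective mass is an assumed, CdSe-like value, and the numbers are order-of-magnitude only.

```python
# Particle-in-a-box estimate of quantum confinement: the band-gap shift of a
# semiconductor nanocrystal grows roughly as 1/d^2 as the dot shrinks.
# The reduced effective mass (~0.1 electron masses) is an assumed value.

h = 6.626e-34      # Planck's constant, J s
m0 = 9.109e-31     # electron mass, kg
eV = 1.602e-19     # joules per electron volt
m_eff = 0.1 * m0   # reduced effective mass (assumed, CdSe-like)

for d_nm in [10.0, 5.0, 3.0, 2.0]:
    d = d_nm * 1e-9
    shift = h ** 2 / (8.0 * m_eff * d ** 2)   # confinement energy shift
    print(f"d = {d_nm:4.1f} nm -> band-gap shift ~ {shift / eV:.2f} eV")
```

Shifts of a few tenths of an electron volt over this size range are what let one tune a quantum dot’s colour simply by choosing its diameter.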

Korea ranks 4th or 5th in the world in terms of nano-patents; the plan, up to 2010, is to expand existing strength in nanotechnology and industrialise it by developing technology specific to applications. Beyond that the emphasis will be on systems-level integration and commercialisation of those developments. Clearly, in electronics we are already in the nano-era. Korea has a dominant position in flash memory, where Hwang’s law – that memory density doubles every year – represents a more aggressive scaling than Moore’s law. To maintain this will require perhaps carbon nanotubes or silicon nanowires. Lim finds nanotubes very attractive, but given the need for control of chirality and position, his prediction is that commercialisation is still more than 10 years away. An area that he thinks will grow in importance is the integration of optical interconnects in electronics. This, in his view, will be driven by the speed and heat issues that arise from metal interconnects in CPUs – he reminds us that a typical CPU has 10 km of electrical wire, so it’s no wonder that heat generation is a big problem, and Google’s data centres come equipped with five-storey cooling towers. Nanophotonics will enable the integration of photonic components within silicon multi-chip CPUs – but the problem that silicon is not good for lasers will have to be overcome. Either lasers off the chip will have to be used, or silicon laser diodes developed. His prognosis, recognising that we have box-to-box optical interconnects now and that board-to-board interconnects are coming, is that we will have chip-to-chip interconnects on the 1–10 cm scale by 2010, with intrachip interconnects by 2010–2015.
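
The difference between the two scaling laws compounds quickly, as this bit of illustrative arithmetic shows (taking Moore’s law, for the sake of the comparison, as a doubling roughly every two years, one common statement of it).

```python
# Comparing the quoted scaling laws: flash density doubling every year
# ("Hwang's law") versus doubling roughly every two years (one common
# statement of Moore's law). Purely illustrative arithmetic.

def density_multiple(years, doubling_period_years):
    """Factor by which density grows after a given number of years."""
    return 2.0 ** (years / doubling_period_years)

for years in [5, 10]:
    hwang = density_multiple(years, 1.0)
    moore = density_multiple(years, 2.0)
    print(f"after {years:2d} years: Hwang x{hwang:6.0f}, Moore x{moore:4.0f}")
```

After a decade, doubling every year gives a thousandfold improvement against Moore’s thirtyfold, which is why sustaining Hwang’s law is such an aggressive target.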

Anyone interested in more general questions of the way the Korean innovation system is developing will find much to interest them in a recent Demos pamphlet: Korea: Mass innovation comes of age. Meanwhile, I’ll be soon reporting on nanotechnology in another part of Asia; I’m writing this from Bangalore/Bengalooru in India, where I will be talking tomorrow at Bangalore Nano 2007.

Questions and answers

Tomorrow I am going to Birmingham to take part in a citizens’ jury on the use of nanotechnology in consumer products, run by the consumer organisation Which? They are running a feature on nanotechnology in consumer products in the New Year, and in advance of this they asked me, and a number of other people, a series of questions. Here are my answers.

How are nanomaterials created?

A wide variety of ways. A key distinction to make is between engineered nanoparticles and self-assembled nanostructures. Engineered nanoparticles are hard, covalently bonded clusters of atoms which, fundamentally, can be made in two ways. You can break down bigger particles by milling them, or you can make the particles by a chemical reaction which precipitates them, either from solution or from a vapour (a bit like making smoke with very fine particles). Examples of engineered nanoparticles are the nanoscale titanium dioxide particles used for some sunscreens, and the fullerenes, forms of carbon nanoparticles that can be thought of as well-bred soot. Because nanoparticles have such a huge surface area relative to their mass, it’s often very important to control the properties of their surfaces (if for no other reason than that most nanoparticles have a very strong tendency to stick together, thus ceasing to be nanoparticles and losing the properties you presumably wanted them to have in the first place). So it would be very common to make the nanoparticle with an outer coating of molecules that might make it less chemically reactive.
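
A quick calculation makes the surface-to-mass point concrete: for spherical particles the specific surface area scales as 6/(ρd). The density used below is roughly that of titanium dioxide; the diameters are illustrative.

```python
# Sketch of why nanoparticles are "mostly surface": the specific surface
# area of a sphere of diameter d and density rho is 6/(rho*d). The density
# is roughly that of titanium dioxide; diameters are illustrative.

rho = 4200.0  # kg/m^3, TiO2 (approximate)

for d_nm in [10_000, 1_000, 100, 20]:
    d = d_nm * 1e-9                      # diameter in metres
    ssa = 6.0 / (rho * d)                # surface area, m^2 per kg
    print(f"d = {d_nm:6d} nm -> {ssa / 1000.0:7.2f} m^2 per gram")
```

Going from a 10-micron pigment particle to a 20-nanometre one multiplies the surface area per gram by a factor of 500, which is why surface chemistry dominates nanoparticle behaviour.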

Self-assembly, on the other hand, is a process by which rather soft and mutable nanostructures are formed by particular types of molecules sticking together in small clusters. The classic example of this is soap. Soap molecules have a tail that is repelled from water, and a head that is soluble in water. In a dilute solution in water they make the best of these conflicting tendencies by arranging themselves in clusters of maybe 50 or so molecules, with the headgroups on the outside and the tails shielded from the water in the middle. These nanoparticles are called micelles. Biology relies extensively on self-assembly to construct the nanostructures that all living organisms, including ourselves, are made of. For this reason, most food is naturally nanostructured. For example, in milk, protein molecules called caseins form self-assembled nanoparticles, and traditional operations like cheese-making involve making these nanoparticles stick together to make something more solid. Of course, we don’t call cooking nanotechnology, because we don’t intentionally manipulate the nanostructure of the foods, even if this is what happens without us knowing about it; but, armed with modern techniques for studying the nanoscale structure of matter, people are increasingly seeking to make artificial nanostructures for applications in food and health. An example of an artificial self-assembled nanostructure that’s becoming important in medicine is the liposome (small ones are sometimes called nanosomes). Here one has soap-like molecules that arrange themselves into sheets exactly two molecules thick; a common material would be the phospholipid lecithin, obtained from soya beans, which is currently widely used as a food emulsifier, for example as an important ingredient of chocolate. If one can arrange the sheet to fold round onto itself, one gets a micro- or nano-scale bag that can be filled with molecules one wants to protect from the environment (or vice versa).

Can you tell us about the existing and expected applications of developments in nanotechnology in the areas of food and health (including medical applications)?

In food applications, the line separating nanotechnology from conventional food processing to change the structure and properties of food is rather blurred. For example, it was reported that an ice cream company was using nanotechnology to make low-fat ice cream; this probably involved a manipulation of the size of the natural fat particles in the ice cream. This really isn’t very different from conventional food processing, the only difference being that modern instrumentation makes it possible for the food scientists involved to see what they are doing to the nanoscale structure. This sort of activity will, I’m sure, increase in the future, driven largely by the perceived market demand for more satisfying low-fat food.

One area that is very important in health, and may become important in food, is the idea of wrapping up and delivering particular types of molecules. In medicine, some drugs, particularly the anti-cancer drugs used in chemotherapy, are actually quite toxic and lead to serious side-effects. If the molecules could be wrapped up and only released at the point at which they were needed – the tumour, in the case of an anticancer drug – then the side-effects would be much reduced and the drug would be much more effective. This is beginning to happen, with drugs being wrapped up in liposomes for delivery. Another way in which nanotechnology can help in medicine is for drugs which can’t easily be dissolved, and thus can’t easily be introduced into the body. These can be prepared as nanoparticles, in which form the molecules can be absorbed by the body (a new anti-breast-cancer drug – Abraxane – is in this category). In food, additives which are believed to be good for the health (so-called nutraceuticals) may in the future be added to food in this way.

Other applications in health are in fast diagnostic tests. The idea here is that, instead of a GP having to send off a patient’s blood sample for a test to detect certain bacteria or biochemical abnormalities, and having to wait a week or so for the result to come back, nanotechnology would make possible a simple and reliable test that could be done on the spot. Looking further ahead, it’s possible to imagine a device that automatically tested for some abnormality, and then if it detected it automatically released a drug to correct it (for example, a diabetic might have a device implanted under their skin that automatically tested blood sugar levels and released the right amount of insulin in response).
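
A toy sketch of the sense-and-release idea follows; it is nothing more than a proportional feedback loop with invented numbers, and a real implanted insulin device would be a far harder, safety-critical control problem.

```python
# Toy sketch of the closed-loop idea: sense a level, compare it with a
# target, release a proportional dose. All numbers are invented for
# illustration; a real insulin controller is a safety-critical problem.

glucose = 9.0    # mmol/L, starting blood sugar (assumed)
target = 5.5     # mmol/L, desired level
gain = 0.4       # dose units per mmol/L of excess (assumed)

for hour in range(6):
    excess = max(0.0, glucose - target)
    dose = gain * excess        # release insulin only when sugar is high
    glucose -= 0.8 * dose       # assumed response to the dose
    glucose += 0.2              # assumed background upward drift
    print(f"hour {hour}: glucose {glucose:.2f} mmol/L, dose {dose:.2f}")
```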

Another area is tissue engineering – the growing of artificial tissues and organs to replace those damaged by disease or injury. Here it’s important to have a “scaffold” on which to grow human cells (ideally the patient’s own cells) in such a way that they make a working organ. Growing replacement skin for burn victims is currently at a fairly advanced state of development.

Are manufacturers required to disclose the presence of nanomaterials on their labelling?

Currently, no.

What are the risks or concerns about using manufactured nanomaterials in health or food products?

There are concerns that some engineered nanoparticles might be more toxic than the same chemical material present in larger particles, both because the increased surface area might make them more reactive, and because they might be able to penetrate into tissues and cells more easily than larger particles.

Are some nanomaterials more risky than others?

This is very likely. Engineered nanoparticles, made from covalently bonded inorganic materials, seem the most likely to cause concern, but even among these it is important to consider each type of nanoparticle individually. Moreover, it may well be that the dangers posed by nanoparticles might be altered by the surface coatings they are given.

Are some applications of nanotechnology more risky than others?

Yes. In my opinion the biggest risk is in the use of engineered nanoparticles in situations in which they could be ingested or breathed in. The control of naturally occurring nanostructure in foods, the use of self-assembled objects like liposomes, and the kind of nanotechnology that is likely to be used in diagnostic devices, should present few if any risks.

In your opinion, should consumers be concerned about the use of manufactured nanomaterials in health or food products?

Somewhat, but not very. The key dangers come from the potential use of engineered nanoparticles without adequate information about their toxicity. Food additive regulations don’t generally discriminate by size; for example, a material like titanium dioxide, which is a permitted food additive (E171), could be used in a nanoscale form without additional testing. In principle it is possible to specify permitted size ranges for particles – this is done for microcrystalline cellulose – so this measure should be extended to other materials that could be used in the form of engineered nanoparticles, on the basis of testing that discriminates between particles of different sizes.

If any, what protections need to be put in place?

The government should act on the recommendations of the March 2007 report by the Council for Science and Technology.

On the radio

The BBC World Service programme World Business Review devoted yesterday’s edition to nanotechnology, with a half-hour discussion between me, Michio Kaku and Peter Kearns from the OECD. I haven’t managed to bring myself to listen to it yet, and as it’s difficult to get an accurate impression of a radio programme while you are recording it, I’ll make no comment about it. You can listen to it through the internet from this link (this will work until next Saturday).

Fantastic Voyage vs Das Boot

New Scientist magazine carries a nice article this week about the difficulties of propelling things on the micro- and nano-scales. The online version of the article, by Michelle Knott, is called Fantastic Voyage: travel in the nanoworld (subscription required); we’re asked to “prepare to dive into the nanoworld, where water turns to treacle and molecules the size of cannonballs hurtle past from every direction.”

The article refers to our work demonstrating self-motile colloid particles, which I described earlier this year here – Nanoscale swimmers. Also mentioned is the work from Tom Mallouk and Ayusman Sen at Penn State; very recently this team demonstrated an artificial system that shows chemotaxis; that is, it swims in the direction of increasing fuel concentration, just as some bacteria can swim towards food.
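
One can capture the flavour of chemotaxis with a toy “run-and-tumble” simulation; this is a generic biased random walk, not the Penn State group’s model. The swimmer simply changes direction less often when the fuel concentration along its path is rising, and that alone produces a net drift up the gradient.

```python
# Toy "run-and-tumble" chemotaxis: a biased random walk in which the
# particle tumbles less often when the fuel concentration along its path
# is rising. A generic sketch with invented numbers, not the Penn State
# group's model.

import math
import random

random.seed(1)

def fuel(x, y):
    """Assumed fuel field: concentration increases linearly along +x."""
    return x

x, y, angle, speed = 0.0, 0.0, 0.0, 1.0
last_c = fuel(x, y)

for _ in range(10_000):
    x += speed * math.cos(angle)
    y += speed * math.sin(angle)
    c = fuel(x, y)
    p_tumble = 0.05 if c > last_c else 0.3   # tumble less when improving
    if random.random() < p_tumble:
        angle = random.uniform(0.0, 2.0 * math.pi)
    last_c = c

print(f"net displacement: x = {x:.0f}, y = {y:.0f} (drift up the gradient)")
```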

The web version of the story has a title that, inevitably, refers back to the classic film Fantastic Voyage, with its archetypal nanobot and magnificent period special effects, in which the nanoscale environment inside a blood vessel looks uncannily like the inside of a lava lamp. The title of the print version, though, Das (nano) Boot, references instead Wolfgang Peterson’s magnificently gloomy and claustrophobic film about a German submarine crew in the second world war – as Knott concludes, riding in nanoscale submarines is going to be a bumpy business.

Home again

I’m back from my week in Ireland, regretting as always that there wasn’t more time to look around. After my visit to Galway, I spent Wednesday in Cork, visiting the Tyndall National Institute and the University, where I gave a talk in the Physics Department. Thursday I spent at the Intel Ireland site at Leixlip, near Dublin; this is the largest Intel manufacturing site outside the USA, but I didn’t see very much of it apart from getting an impression of its massive scale, as I spent the day talking about some rather detailed technical issues. On Friday I was in the Physics department of Trinity College, Dublin.

Ireland combines being one of the richest countries in the world (with a GDP per person higher than both the USA’s and the UK’s) with a recent sustained high rate of economic growth. Until relatively recently, though, it did not spend much on scientific research. That’s changed in the last few years; the government agency Science Foundation Ireland has been investing heavily. This investment has been carried out in a very focused way, concentrating on biotechnology and information technology. The evidence for this investment was very obvious in the places I visited, both in facilities and equipment and in people, with whole teams being brought in in important areas like photonics. The aim is clearly to emulate the success of the other small, rich countries of Europe, like Finland, Sweden, the Netherlands and Switzerland, whose contributions to science and technology are well out of proportion to their size.

Not that there’s a lack of scientific tradition in Ireland – the lecture theatre I spoke in at Trinity College was the same one in which Schrödinger delivered his famous series of lectures “What is life?”, and as a keepsake I was given a reprint of the lectures at Trinity given by Richard Helsham and published in 1739, which constitute one of the first textbook presentations of the new Newtonian natural philosophy. My thanks go to the Institute of Physics Ireland, and my local hosts Ray Butler, Sile Nic Chormaic and Cormac McGuinness.

Super-vision

I’m in Ireland for the week, at the invitation of the Institute of Physics Ireland, giving talks at a few universities here. My first stop was at the National University of Ireland, Galway. In addition to the pleasure of spending a bit of time in this very attractive country, it’s always interesting to get a chance to learn what people are doing in the departments one visits. The physics department at Galway is small, but it has received a lot of investment recently; the Irish government has started spending some quite substantial sums on research, recognising the importance of technology to its currently booming economy.

One of the groups at Galway, run by Chris Dainty, does applied optics, and one of the projects I was shown was about using adaptive optics to correct the shortcomings of the human eye. Adaptive optics was originally developed for astronomy (and some defence applications as well) – the idea is to correct for a rapidly changing distortion of an image on the fly, using a mirror whose shape can be changed. Although the astronomical implementations of adaptive optics are very sophisticated and very expensive, we’re starting to see much cheaper implementations of the principle. For example, some DVD players now have an adaptive optics element to correct for DVDs that don’t quite meet specifications. One idea that has excited a number of people is the hope that one might be able to use adaptive optics to achieve better than perfect vision; after all, the eye, considered as an optical system, is very far from perfect, and even after one has corrected the simple failings of focus and astigmatism with glasses, there are many higher-order aberrations due to the eye’s lens being far from the perfect shape. The Galway group does indeed have a system that can correct these aberrations, but the lesson from this work isn’t entirely what one might first expect.

What the work shows is that adaptive optics can indeed make a significant improvement to vision, but only in those conditions in which the pupil is dilated. As photographers know, distortions due to imperfections in a lens are most apparent at large apertures, and stopping down the aperture always has the effect of forgiving the lens’s shortcomings. In the case of the eye, in normal daytime conditions the pupil is rather narrow, so it turns out that adaptive optics only helps if the pupil is dilated, as would happen under the influence of some drugs. Of course, at night the pupil is open wide to let in as much light as possible. So, does adaptive optics help you get super-vision in dark conditions? Actually, it turns out that it doesn’t – in the dark, you form the image with the more sensitive rod cells, rather than the cones that work in brighter light. The rods are effectively more widely spaced, so the sharpness of the image you see at night isn’t limited by the shortcomings of the lens, but by the effective pixel size of the detector. So it seems that super-vision through adaptive optics is likely to be somewhat less useful than it first appeared.
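
The trade-off can be made quantitative with a rough comparison of the eye’s optical cutoff against the retina’s sampling limit. The focal length, pupil size and effective rod-pool spacing below are assumed, textbook-level values, not numbers from the Galway work.

```python
# Rough comparison of the eye's optical resolution limit with the retina's
# sampling limit in dim light. Focal length, pupil diameter and effective
# rod-pool spacing are assumed, textbook-level values.

import math

wavelength = 510e-9        # m, near the rods' peak (scotopic) sensitivity
eye_focal_length = 17e-3   # m (assumed reduced-eye value)

def diffraction_limit_arcmin(pupil_d):
    """Rayleigh criterion, 1.22*lambda/D, converted to arcminutes."""
    theta = 1.22 * wavelength / pupil_d
    return math.degrees(theta) * 60.0

def sampling_limit_arcmin(spacing):
    """Nyquist: two samples per cycle, so resolvable angle ~ 2*spacing/f."""
    theta = 2.0 * spacing / eye_focal_length
    return math.degrees(theta) * 60.0

print(f"optics, 7 mm pupil      : {diffraction_limit_arcmin(7e-3):.2f} arcmin")
print(f"retina, ~10 um rod pools: {sampling_limit_arcmin(10e-6):.2f} arcmin")
# Even aberration-free optics can't help if the detector samples an order
# of magnitude more coarsely than the optics can resolve.
```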

Nanotechnology and the developing world

On Wednesday, I spent the day in London, at the headquarters of the think-tank Demos, who were running a workshop on applications of nanotechnology in the developing world. Present were other nano-scientists, people from development NGOs like Practical Action and WaterAid, and industry representatives. I was the last speaker, so I was able to reflect some of the comments from the day’s discussion in my own talk. This, more or less, is what I said:

When people talk about nanotechnology and the developing world, what we generally hear is one of two contrasting views – “nanotechnology can save the developing world” or “nanotechnology will make rich/poor gap worse”. We need to move beyond this crude counterpoint.

The areas in which nanotechnology has the potential to help the developing world are now fairly well rehearsed. Here’s a typical list –
• Cheap solar power
• Solutions for clean water
• Inexpensive diagnostics
• Drug release
• Active ingredient release – pesticides for control of disease vectors

What these have in common is that in each case one can see in principle that they might make a difference, but it isn’t obvious that they will. Not least of the reasons for this uncertainty is that many existing technological solutions to obvious and pressing problems, many much simpler and more widely available than these promised nanotechnology solutions, haven’t been implemented yet. This is not to say that we don’t need new technology – clearly, on a global scale, we very much do. Throughout the world we are existentially dependent on technology, but the technology we have is not sustainable and must be superseded. Arguably, though, this is more a problem for rich countries.

Amongst the obvious barriers, there is profound ignorance in the scientific and technical communities of the real problems of the developing world, and of the practical realities that can make it hard to implement technological solutions. This was very eloquently expressed by Mark Welland, the director of the Cambridge Nanoscience Centre, who has recently been spending a lot of time working with communities and scientists in Egypt and other Middle Eastern countries. There are fundamental difficulties in implementing solutions in a market-driven environment. Currently we rely on the market – perhaps with some intervention by governments, NGOs or foundations, of greater or lesser efficacy – to take developments from the lab into useful products. To put it bluntly, there is a problem in designing a business model for a product whose market consists of people who haven’t got much money, and one of the industry representatives described a technically excellent product whose implementation has been stranded for just this reason.

Ways of getting round this problem include the kind of subsidies and direct market interventions now being tried for the distribution of the new (and expensive) artemisinin-based combination therapies for malaria (see this article in the Economist). The alternative is to put one’s trust in the process of trickle-down innovation, as Jeremy Baumberg called it; this is the hope that technologies developed for rich-country problems might find applications in the developing world. For example, controlled pesticide release technologies marketed to protect Florida homes from termites might find applications in controlling mosquitoes, or water purification technology developed for the US military might be transferred to poor communities in arid areas.

Another challenge is the level of locally available knowledge and capacity to exploit technology in developing countries. One must ensure that technology is robust, scalable and can be maintained with local resources. Mark Welland reminds us that generating local solutions with local manpower, aside from its other benefits, helps build educational capacity in those countries.

On the negative side of the ledger, people point to problems like:
• The further lock-down of innovation through aggressive intellectual property regimes
• The possibility of environmental degradation due to dumping of toxic nanoparticles
• Problems for developing countries that depend on commodity exports, as new technologies substitute for those commodities.

These are all issues worth considering, but they aren’t really specific to nanotechnology; they are more general consequences of the way new technology is developed and applied. It’s worth making a few more general comments about the cultures of science and technology.

It needs to be stressed first that science is a global enterprise, and it is a trans-national culture that is not very susceptible to central steering. We’re in an interesting time now, with the growth of new science powers: China and India have received the most headlines, but we shouldn’t neglect other countries, like Brazil and South Africa, that are consciously emphasising nanotechnology as they develop their science base. Will these countries focus their science efforts on the needs of industrialisation and their own growing middle classes, or does their experience put them in a better position to propose realistic solutions to development problems? Meanwhile, in more developed countries like the UK, it is hard to overstate the emphasis the current political climate puts on getting science to market. The old idea of pure science leading naturally to applied science that then feeds into wealth-creating technology – the “linear model” – is out of favour both politically and intellectually, and we see an environment in which the idea of “goal-oriented” science is exalted. In the UK this has been construed in a very market-focused way – how can we generate wealth by generating new products? “Users” of research – primarily industry, with some representation from government departments, particularly those in the health and defence sectors – have an increasingly influential voice in setting science policy. One could ask, who represents the potential “users” of research in the developing world?

One positive message is that there is a lot of idealism amongst scientists, young and old, and this idealism is often a major driving force for people taking up a scientific career. The current climate, in which the role of science in underpinning wealth creation is emphasised above all else, isn’t necessarily very compatible with idealism. There is a case for more emphasis on the technology that delivers what people need, as well as what the market wants. In practical terms, many scientists might wish to spend time on work that benefits the developing world, but career pressures and institutional structures make this difficult. So how can we harness the idealism that motivates many scientists, while tempering it with realism about the institutional structures that they live in and understanding the special characteristics that make scientists good at their job?