Food nanotechnology – their Lordships deliberate

Today I found myself once again in Westminster, giving evidence to a House of Lords Select Committee, which is currently carrying out an inquiry into the use of nanotechnology in food. Readers not familiar with the intricacies of the British constitution need to know that the House of Lords is one of the branches of Parliament, the UK legislature, with powers to revise and scrutinise legislation and, through its select committees, to hold the executive to account. Originally its membership was drawn from the hereditary peerage, with a few bishops thrown in; recently, as part of a slightly ramshackle programme of constitutional reform, the influence of the hereditaries has been much reduced, with the majority of the chamber now made up of members appointed for life by the government. These are drawn from former politicians and others prominent in public life. Whatever the shortcomings of this system from a democratic point of view, it does mean that the membership includes some very well informed people. This inquiry, for example, is being chaired by Lord Krebs, a very distinguished scientist who previously chaired the Food Standards Agency.

All the evidence submitted to the committee is publicly available on its website; this includes submissions from NGOs, industry organisations, scientific organisations and individual scientists. There’s a lot of material there, but taken together it’s actually a pretty good overview of all sides of the debate. I’m looking forward to seeing their Lordships’ final report.

Environmentally beneficial nanotechnology

Today I’ve been at Parliament in London, at an event sponsored by the Parliamentary Office of Science and Technology to launch the second phase of the Environmental Nanoscience Initiative. This is a joint UK-USA research programme led by the UK’s Natural Environment Research Council and the USA’s Environmental Protection Agency, and a very welcome initiative to give more focus to existing efforts to quantify possible detrimental effects of engineered nanoparticles on the environment. It’s important to put more effort into filling gaps in our knowledge about what happens to nanoparticles when they enter the environment and start entering ecosystems, but equally it’s important not to forget that a major motivation for doing research in nanotechnology in the first place is its potential to ameliorate the very serious environmental problems the world now faces. So I was very pleased to be asked to give a talk at the event to highlight some of the positive ways that nanotechnology could benefit the environment. Here are some of the key points I tried to make.

Firstly, we should ask why we need new technology at all. There is a view (eloquently expressed, for example, in Bill McKibben’s book “Enough”) that our lives in the West are comfortable enough, the technology we have now is enough to satisfy our needs without any more gadgets, and that the new technologies coming along – such as biotechnology, nanotechnology, robotics and neuro-technology – are so powerful and have such potential to cause harm that we should consciously relinquish them.

This argument is seductive to some, but it’s profoundly wrong. Currently the world supports more than six billion people; by the middle of the century that number may be starting to plateau out, perhaps between 8 and 10 billion people. It is technology that allows the planet to support these numbers; to give just one instance, our food supplies depend on the Haber-Bosch process, which uses fossil fuel energy to fix nitrogen to use in artificial fertilizers. It’s estimated that without Haber-Bosch nitrogen, more than half the world’s population would starve, even if everyone adopted a minimal, vegetarian diet. So we are existentially dependent on technology – but the technology we depend on isn’t sustainable. To escape from this bind, we must develop new, and more sustainable, technologies.

Energy is at the heart of all these issues; the availability of cheap and concentrated energy is what underlies our prosperity, and as the world’s population grows and becomes more prosperous, demand for energy will grow too. It is important to appreciate the scale of these needs, which are measured in tens of terawatts (remember that a terawatt is a thousand gigawatts, a gigawatt being the scale of a large coal-fired or nuclear power station). Currently the sources of this energy are dominated by fossil fuels, and it is the relentless growth of fossil fuel use since the late 18th century that has directly led to the rise in atmospheric carbon dioxide concentrations. This rise, together with that of other greenhouse gases, is driving climate change, which in turn will directly lead to other problems, such as pressure on clean water supplies and growing insecurity of food supplies. It is this background that sets the agenda for the new technologies we need.
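To make the scale concrete, here’s a back-of-envelope calculation; the round figure of 15 TW for current world primary power demand is my own assumption, not a number from the sources above:

```python
# Back-of-envelope: how many large power stations is the world's energy demand?
# The 15 TW figure for world primary power demand is a round-number assumption.

world_demand_tw = 15   # terawatts, assumed round figure
station_gw = 1         # a large coal-fired or nuclear plant is roughly 1 GW

stations_needed = world_demand_tw * 1000 / station_gw
print(f"Equivalent number of 1 GW power stations: {stations_needed:,.0f}")
```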

At the moment we don’t know for certain which of the many new technologies being developed to address these problems will work, either technically or socio-economically, so we need to pursue many different avenues, rather than imagining that some single solution will deliver us. Nanotechnology is at the heart of many of these potential solutions, in the broad areas of sustainable energy production, storage and distribution, in energy conservation, clean water, and environmental remediation. Let me focus on a couple of examples.

It’s well known that the energy we use is a small fraction of the total amount of energy arriving on the earth from the sun; in principle, solar energy could provide for all our energy needs. The problems are ones of cost and scale. Even in cloudy Britain, if we could cover every roof with solar cells we’d end up with a significant fraction of the 42.5 GW which represents the average rate of electricity use in the UK. We don’t do this, firstly because it would be too expensive, and secondly because the total world output of solar cells, at about 2 GW a year, is a couple of orders of magnitude too small. A variety of nanotechnology enabled potential solutions exist; for example plastic solar cells offer the possibility of using ultra-cheap, large area processing technologies to make solar cells on a very large scale. This is the area supported by EPSRC’s first nanotechnology grand challenge.
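A quick sketch of that scale argument, using the figures quoted above; the capacity factor is my own assumption, added to show why the shortfall is even bigger than the raw production figures suggest:

```python
# Rough scale argument for solar PV, using the figures quoted in the text.
uk_average_demand_gw = 42.5      # average UK electricity demand (from the text)
world_pv_output_gw_per_year = 2  # annual world solar cell production (from the text)

# Years of the entire world's PV output needed just to match UK average demand,
# counting nameplate (peak-rated) capacity only.
years_needed = uk_average_demand_gw / world_pv_output_gw_per_year
print(f"Years of world PV production to match UK demand (nameplate): {years_needed:.0f}")

# Assuming a UK capacity factor of ~10% (my assumption, not from the text),
# the shortfall is another order of magnitude worse.
capacity_factor = 0.10
print(f"With a 10% capacity factor: {years_needed / capacity_factor:.0f} years")
```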

It’s important to recognise, though, that all these technologies still have major technical barriers to overcome; they are not going to come to market tomorrow. In the meantime, the continued large-scale use of fossil fuels looks inevitable, so the need to mitigate their impact by carbon capture and storage is becoming increasingly compelling to politicians and policy-makers. This technology is do-able today, but the costs are frightening. Carbon capture and storage increases the price of coal-derived electricity by between 43% and 91%; this is a pure overhead. Nanotechnologies, in the form of new membranes and sorbents, could reduce this. Another contribution would be finding a use for the carbon dioxide, perhaps using photocatalytic reduction to convert water and CO2 into hydrocarbons and methanol, which could be used as transport fuels or chemical feedstocks. Carbon capture and utilisation is the general area of the third nanotechnology grand challenge, whose call for proposals is open now.
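To make the 43–91% figure concrete, here’s a minimal illustration; the baseline electricity price is a placeholder of my own, not a number from the sources above:

```python
# Illustration of the carbon capture and storage cost overhead quoted in the text
# (43% to 91%). The baseline price is a placeholder, not a figure from the post.
baseline_price = 5.0   # pence per kWh for coal-derived electricity, placeholder

for overhead in (0.43, 0.91):
    with_ccs = baseline_price * (1 + overhead)
    print(f"CCS overhead {overhead:.0%}: {baseline_price:.1f}p/kWh -> {with_ccs:.1f}p/kWh")
```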

How can we make sure that our proposed innovations are responsible? The idea of the “precautionary principle” is often invoked in discussions of nanotechnology, but there are aspects of this notion that make me very uncomfortable. Certainly, we can all agree that we don’t want to implement “solutions” that bring their own, worse, problems. The potential impacts of any new technology are necessarily uncertain. But on the other hand, we know that there are near-certain negative consequences of failing to act. Not to actively seek new technologies is itself a decision, with impacts and consequences of its own, and in the situation we are now in those consequences are likely to be very bad ones.

Responsible innovation, then, means that we must speed up research to fill the knowledge gaps and reduce uncertainty; this is the role of the Environmental Nanoscience Initiative. We need to direct our search for new technologies towards areas of societal need, where public support is assured by a broad consensus about the desirability of the goals. This means increasing our efforts in the area of public engagement, and ensuring a direct connection between that public engagement and decisions about research priorities. And we need to recognise that there will always be uncertainty about the actual impacts of new technologies, but we should do our best to choose directions that we won’t regret, even if things don’t turn out the way we first imagined.

To sum up, nanotechnologies, responsibly implemented, are part of the solution for our environmental difficulties.

Moving on

For the last two years, I’ve been the Senior Strategic Advisor for Nanotechnology for the UK’s Engineering and Physical Sciences Research Council (EPSRC), the government agency that has the lead responsibility for funding nanotechnology in the UK. I’m now stepping down from this position to return full-time to the University of Sheffield, in a new role; EPSRC is currently in the process of appointing my successor.

In these two years, a substantial part of a new strategy for nanotechnology in the UK has been implemented. We’ve seen new Grand Challenge programmes targeting nanotechnology for harvesting solar energy and nanotechnology for medicine and healthcare, with a third programme, looking for new ways of using nanotechnology to capture and utilise carbon dioxide, shortly to be launched. At the more speculative end of nanotechnology, the “Software Control of Matter” programme received supplementary funding. Some excellent individual scientists have been supported through personal fellowships, and, looking to the future, the three new Doctoral Training Centres in nanotechnology will produce, over the next five years, up to 150 PhDs in nanotechnology over and above EPSRC’s existing substantial support for graduate students. After a slow response to the 2004 Royal Society report on nanotechnology, I think we now find ourselves in a somewhat more defensible position with respect to funding of nanotoxicology and ecotoxicology studies, with some useful projects in these areas being funded by the Medical Research Council and the Natural Environment Research Council respectively, and a joint programme with the USA’s Environmental Protection Agency about to be launched. With the public engagement exercise that was run in conjunction with the Grand Challenge on nanotechnology in medicine and healthcare, I think EPSRC has gone substantially further than any other funding agency in opening up decision-making about nanotechnology funding. I’ve found this experience to be fascinating and rewarding; my colleagues in the EPSRC nanotechnology team, led by John Wand, have been a pleasure to work with, and I’ve had a huge amount of encouragement and support from many scientists across the UK academic community.

In the process, I’ve learned a great deal; nanotechnology, of course, takes in physics, chemistry and biology, as well as elements of engineering and medicine. I’ve also come into contact with philosophers and sociologists, as well as artists and designers, from all of whom I’ve gained new insights. This education will stand me in good stead in my new role at Sheffield – as Pro-Vice-Chancellor for Research and Innovation I’ll be responsible for the health of research right across the University.

Accelerating evolution in real and virtual worlds

Earlier this week I was in Trondheim, Norway, for the IEEE Congress on Evolutionary Computation. Evolutionary computing, as its name suggests, refers to a group of approaches to computer programming that draw inspiration from the natural processes of Darwinian evolution, hoping to capitalise on the enormous power of evolution to find good solutions to complex problems from a very large range of possibilities. How, for example, might one program a robot to carry out a variety of tasks in a changing and unpredictable environment? Rather than attempting to anticipate all the possible scenarios that your robot might encounter, and then writing control software that specified appropriate behaviours for all these possibilities, one could use evolution to select a robot controller that worked best for your chosen task in a variety of environments.

Evolution may be very effective, but in its natural incarnation it’s also very slow. One way of speeding things up is to operate in a virtual world. I saw a number of talks in which people were using simulations of robots to do the evolution; something like a computer game environment is used to simulate a robot doing a simple task like picking up an object or recognising a shape, with success or failure being used as the input to a fitness function, through which the robot controller is allowed to evolve.
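For readers who haven’t met evolutionary computing before, here’s a minimal sketch of the kind of loop involved: a population of candidate controllers is scored by a fitness function, the best are kept, and mutated copies fill out the next generation. The fitness function below is just a stand-in for the simulated robot task, and the population size and mutation scale are illustrative choices of mine:

```python
import random

# Minimal evolutionary loop: evolve a parameter vector to maximise a fitness function.
# The "fitness" here stands in for scoring a simulated robot on its task.
TARGET = [0.2, -0.5, 0.9, 0.1]   # pretend these are the ideal controller gains

def fitness(params):
    # Higher is better; in a real setup this would run the robot simulation.
    return -sum((p - t) ** 2 for p, t in zip(params, TARGET))

def mutate(params, scale=0.1):
    return [p + random.gauss(0, scale) for p in params]

population = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(20)]

for generation in range(100):
    population.sort(key=fitness, reverse=True)
    survivors = population[:5]                                   # keep the fittest
    population = survivors + [mutate(random.choice(survivors)) for _ in range(15)]

best = max(population, key=fitness)
print("Best controller found:", [round(p, 2) for p in best])
print("Fitness:", round(fitness(best), 4))
```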

Of course, you could just use a real computer game. Simon Lucas, from Essex University, explained to me why classic computer games – his favourite is Ms Pac-Man – offer really challenging exercises in developing software agents. It’s sobering to realise that, while computers can beat a chess grandmaster, humans still have a big edge over computers in arcade games. The human high score for Ms Pac-Man is 921,360; in a competition at the 2008 IEEE CEC meeting, the winning bot achieved 15,970. Unfortunately I had to leave Trondheim before the results of the 2009 competition were announced, so I don’t know whether this year produced a big breakthrough in this central challenge to computational intelligence.

One talk at the meeting was very definitely rooted in the real, rather than the virtual, world – this came from Harris Wang, a graduate student in the group of Harvard Medical School’s George Church. It was a really excellent overview of the potential of synthetic biology. At the core of the talk was a report of a recent piece of work that is due to appear in Nature shortly. This described the re-engineering of a micro-organism to increase its production of the molecule lycopene, the pigment that makes tomatoes red (and that probably confers significant health benefits, the basis for the seemingly unlikely claim that tomato ketchup is good for you). Notwithstanding the rhetoric of precision and engineering design that often accompanies synthetic biology, what made this project successful was the ability to generate a great deal of genetic diversity and then very rapidly screen the variants to identify the desired changes. To achieve a 500% increase in lycopene production, they needed to make up to 24 simultaneous genetic modifications, knocking out genes involved in competing processes and modifying the regulation of other genes. This produced a space of about 15 billion possible combinatorial variations, from which they screened 100,000 distinct new cell types to find their winner. This certainly qualifies as real-world accelerated evolution.

How to engineer a system that fights back

Last week saw the release of a report on synthetic biology from the UK’s Royal Academy of Engineering. The headline call, as reflected in the coverage in the Financial Times, is for the government to develop a strategy for synthetic biology so that the country doesn’t “lose out in the next industrial revolution”. The report certainly plays up the likelihood of high-impact applications in the short term – within five to ten years, we’re told, we’ll see synbio-based biofuels, “artificial leaf technology” to fix atmospheric carbon dioxide, industrial-scale production of materials like spider silk, and, in medicine, the realisation of personalised drugs. An intimation that progress towards these goals may not be entirely smooth can be found in this news piece from a couple of months ago – A synthetic-biology reality check – which described the abrupt winding up earlier this year of one of the most prominent synbio start-ups, Codon Devices, founded by some of the leading US players in the field.

There are a number of competing visions for what synthetic biology might be; this report concentrates on just one of these. This is the idea of identifying a set of modular components – biochemical analogues of simple electronic components – with the aim of creating a set of standard parts from which desired outcomes can be engineered. This way of thinking relies on a series of analogies and metaphors, relating the functions of cell biology with constructs of human-created engineering. Some of these analogies have a sound empirical (and mathematical) basis, like the biomolecular realisation of logic gates and positive and negative feedback.
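To illustrate what the logic-gate analogy amounts to, here’s a toy model of a transcriptional AND gate – a gene expressed only when both of two input transcription factors are present – written with Hill functions and simple Euler integration. It’s my own cartoon, not a construct from the report, and the parameter values are arbitrary:

```python
# Toy model of a transcriptional AND gate: the output protein is produced only when
# both inputs A and B are present. Hill-function activation, simple Euler integration.
# Parameter values are illustrative, not taken from the report.

def hill(x, k=1.0, n=2):
    return x**n / (k**n + x**n)

def simulate(a, b, t_end=50.0, dt=0.01):
    out, production, decay = 0.0, 1.0, 0.1
    for _ in range(int(t_end / dt)):
        d_out = production * hill(a) * hill(b) - decay * out
        out += d_out * dt
    return out

for a, b in [(0, 0), (0, 5), (5, 0), (5, 5)]:
    print(f"A={a}, B={b} -> output after 50 time units ≈ {simulate(a, b):.2f}")
```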

There is one metaphor used a lot in the report that seems to me potentially problematic – the idea of a chassis. What’s meant by this is a cell – for example, a bacterium like E. coli – into which the artificial genetic components are introduced in order to produce the desired products. This conjures up an image like the box into which one slots the circuit boards to make a piece of electronic equipment – something that supplies power and interconnections, but which doesn’t have any real intrinsic functionality of its own. It seems to me difficult to argue that any organism is ever going to provide such a neutral, predictable substrate for human engineering – these are complex systems with their own agenda. To quote from the report on a Royal Society Discussion Meeting about synthetic biology, held last summer: “Perhaps one of the more significant challenges for synthetic biology is that living systems actively oppose engineering. They are robust and have evolved to be self-sustaining, responding to perturbations through adaptation, mutation, reproduction and self-repair. This presents a strong challenge to efforts to ‘redesign’ existing life.”

Are electric cars the solution?

We’re seeing enthusiasm everywhere for electric cars, with government subsidies being directed both at buyers and at manufacturers. The attractions seem obvious – clean, emission-free transport, seemingly resolving effortlessly the conflict between people’s desire for personal mobility and our need to move to a lower-carbon energy economy. Widespread use of electric cars, though, simply moves the energy problem out of sight – from the petrol station and the exhaust pipe to the power station. A remarkably clear opinion piece in today’s Financial Times, by Richard Pike of the UK’s Royal Society of Chemistry, puts numbers on the problem.

The first question to ask is how the energy efficiency of electric cars compares with that of cars powered by internal combustion engines. Electric motors are much more efficient than internal combustion engines, but a fair comparison has to take into account the losses incurred in generating and transmitting the electricity. Pike cites figures showing that the comparison is actually surprisingly close. Petrol engines, on average, have an overall efficiency of 32%, whereas the much more efficient diesel engine converts 45% of the energy in the fuel into useful output. Conversion efficiencies in power stations, on the other hand, come in at a bit more than 40%; add to this the transmission loss in getting from the power station to the plug, and a further loss from the charging/discharging cycle in the batteries, and you end up with an overall efficiency of about 31%. So, on pure efficiency grounds, electric cars do worse than either petrol or diesel vehicles. One further factor needs to be taken into account, though – the amount of carbon dioxide emitted per joule of energy supplied from different fuels. Clearly, if all our electricity were generated by nuclear power or by solar photovoltaics, the advantages of electric cars would be compelling, but if it all came from coal-fired power stations they would make the situation substantially worse. With the current mix of energy sources in the UK, Pike estimates a small advantage for electric cars, with a potential overall reduction of emissions of one seventh. I don’t know the corresponding figures for other countries; presumably, given France’s high proportion of nuclear power, the advantage of electric cars there would be much greater, while in the USA, given the importance of coal, things may be somewhat worse.
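The chain of losses Pike describes is easy to reproduce; the individual transmission and battery figures below are my own indicative assumptions, chosen to be consistent with the ~31% overall number quoted:

```python
# Well-to-wheel efficiency chain for electric cars, as described in the text.
# The ~31% overall figure is Pike's; the individual transmission and battery
# losses below are indicative assumptions chosen to be consistent with it.

power_station_efficiency = 0.42   # "a bit more than 40%"
grid_transmission = 0.93          # assumed transmission efficiency
battery_round_trip = 0.80         # assumed charge/discharge efficiency

electric_overall = power_station_efficiency * grid_transmission * battery_round_trip
petrol_overall = 0.32             # from the text
diesel_overall = 0.45             # from the text

print(f"Electric (grid mix): {electric_overall:.0%}")
print(f"Petrol engine:       {petrol_overall:.0%}")
print(f"Diesel engine:       {diesel_overall:.0%}")
```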

Pike’s conclusion is that the emphasis on electric cars is misplaced, and that the subsidy money would be better spent on R&D on renewable energy and carbon capture. The counter-argument is that a push for electric cars now won’t make a serious difference to patterns of energy use for ten or twenty years, given the inertia attached to the current installed base of conventional cars and the plant to manufacture them, but is necessary to begin the process of changing that; in the meantime, one should be pursuing low-carbon routes to electricity generation, whether nuclear, renewable, or coal with carbon capture. It would be comforting to think that this is what will happen, but we shall see.

Another step towards (even) cheaper DNA sequencing

An article in the current Nature Nanotechnology – Continuous base identification for single-molecule nanopore DNA sequencing (abstract; subscription required for full article) – marks another important step towards the goal of using nanotechnology for fast and cheap DNA sequencing. The work comes from the group of Hagen Bayley at Oxford University.

The original idea in this approach to sequencing was to pull a single DNA chain through a pore with an electric field, and to detect the different bases one by one through changes in the current through the pore. I wrote about this in 2007 – Towards the $1000 human genome – and in 2005 – Directly reading DNA. Difficulties in executing this appealing scheme directly mean that Bayley is now taking a slightly different approach – rather than threading the DNA through the hole directly, he uses an enzyme to chop a single base off the end of the DNA; as each base passes through the pore, the change in current is characteristic enough to establish its chemical identity. The main achievement reported in this paper is in engineering the pore – this is based on a natural membrane protein, alpha-haemolysin, but a chemical group is covalently bonded to the inside of the pore to optimise its discrimination and throughput. What still needs to be done is to mount the enzyme next to the nanopore, to make sure bases are chopped off the DNA strand and read in sequence.
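As a cartoon of the read-out step: each base produces a characteristic reduction in the current through the pore, and identification amounts to matching a noisy measurement against the four known levels. The current values and noise in this sketch are entirely illustrative, not figures from the paper:

```python
import random

# Cartoon of nanopore base identification: each base gives a characteristic
# residual current as it passes through the pore, and calling a base is a matter
# of matching a noisy measurement to the nearest known level.
# The current levels (in picoamps) and the noise are purely illustrative.

levels = {"A": 50.0, "C": 44.0, "G": 38.0, "T": 32.0}

def call_base(measured_current):
    return min(levels, key=lambda base: abs(levels[base] - measured_current))

sequence = "GATTACA"
noisy_reads = [levels[b] + random.gauss(0, 1.5) for b in sequence]
called = "".join(call_base(i) for i in noisy_reads)

print("True sequence:  ", sequence)
print("Called sequence:", called)
```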

Nonetheless, commercialisation of the technology seems to be moving fast, through a spin-out company, Oxford Nanopore Technologies Ltd. Despite the current difficult economic circumstances, this company managed to raise another £14 million in January.

Despite the attractiveness of this technology, commercial success isn’t guaranteed, simply because the competing, more conventional, technologies are developing so fast. These so-called “second generation” sequencing technologies have already brought the price of a complete human genome sequence down well below $100,000 – itself an astounding feat, given that the original Human Genome Project probably cost about $3 billion to produce its complete sequence in 2003. There’s a good overview of these technologies in the October 2008 issue of Nature Biotechnology – Next-generation DNA sequencing (abstract; subscription required for full article). It’s these technologies that underlie the commercial instruments, such as those made by Illumina, that have brought large-scale DNA sequencing within the means of many laboratories; a newly started company, Complete Genomics, plans to introduce a service this year at $5,000 for a complete human genome. As is often the case with a new technology, competition from incremental improvements of the incumbent technology can be fierce. It’s interesting, though, that Illumina regards the nanopore technology as significant enough to take a substantial equity stake in Oxford Nanopore.

What’s absolutely clear, though, is that the age of large scale, low cost, DNA sequencing is now imminent, and we need to think through the implications of this without delay.

How cells decide

One of the most important recent conceptual advances in biology, in my opinion, is the realization that much of the business carried out by the nanoscale machinery of the cell is as much about processing information as processing matter. Dennis Bray pointed out, in an important review article (8.4 MB PDF) published in Nature in 1995, that mechanisms such as allostery, by which the catalytic activity of an enzyme can be switched on and off by the binding of another molecule, mean that proteins can form the components of logic gates, which themselves can be linked together to form biochemical circuits. These information processing networks can take information about the environment from sensors at the cell surface, compute an appropriate action, and modify the cell’s behaviour in response. My eye was recently caught by a paper from 2008 which illustrates rather nicely how it is that the information processing capacity of a single cell can be quite significant.

The paper – Emergent decision-making in biological signal transduction networks (abstract, subscription required for full article in PNAS), comes from Tomáš Helikar, John Konvalina, Jack Heidel, and Jim A. Rogers at the University of Nebraska. What these authors have done is construct a large scale, realistic model of a cell signalling network in a generic eukaryotic cell. To do this, they’ve mined the literature for data on 130 different network nodes. Each node represents a protein; in a crucial simplification they reduce the complexities of the biochemistry to simple Boolean logic – the node is either on or off, depending on whether the protein is active or not, and for each node there is a truth table expressing the interactions of that node with other proteins. For some more complicated cases, a single protein may be represented by more than one node, expressing the fact that there may be a number of different modified states.

This model of the cell takes in information from the outside world; sensors at the cell membrane measure the external concentrations of growth factors, extracellular matrix proteins, and calcium. These are the inputs to the cell’s information processing system. The outputs of the system are essentially decisions by the cell about what to do in response to its environment. The key result of the simulations is that the network can take a wide variety of input signals, often including random noise, and for each combination of inputs produce one of a small number of biologically appropriate responses – as the authors write, “this nonfuzzy partitioning of a space of random, noisy, chaotic inputs into a small number of equivalence classes is a hallmark of a pattern recognition machine and is strong evidence that signal transduction networks are decision-making systems that process information obtained at the membrane rather than simply passing unmodified signals downstream.”
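To give a flavour of how a Boolean model of this kind works, here’s a three-node toy network of my own devising – far smaller than the 130-node model in the paper – showing how truth-table-style updates at each time step generate the network’s dynamics (here, a simple negative-feedback loop):

```python
# A toy Boolean signalling network in the spirit of the Helikar et al. model:
# each node is ON/OFF and is updated synchronously from a rule over its inputs.
# This three-node toy is my own illustration, not the 130-node network in the paper.
#
# Rules:
#   receptor    follows the external growth-factor signal
#   kinase      is active if the receptor is active and the phosphatase is not
#   phosphatase is active if the kinase was active (negative feedback)

def step(state, growth_factor):
    return {
        "receptor":    growth_factor,
        "kinase":      state["receptor"] and not state["phosphatase"],
        "phosphatase": state["kinase"],
    }

state = {"receptor": False, "kinase": False, "phosphatase": False}
for t in range(8):
    state = step(state, growth_factor=True)
    # The negative feedback produces a sustained oscillation in kinase activity.
    print(t, {node: int(active) for node, active in state.items()})
```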

Can carbon capture and storage work?

Across the world, governments are placing high hopes on carbon capture and storage as the technology that will allow us to go on meeting a large proportion of the world’s growing energy needs from high carbon fossil fuels like coal. The basic technology is straightforward enough; in one variant one burns the coal as normal, and then takes the flue gases through a process to separate the carbon dioxide, which one then pipes off and shuts away in a geological reservoir, for example down an exhausted natural gas field. There are two alternatives to this simplest scheme; one can separate the oxygen from the nitrogen in the air and then burn the fuel in pure oxygen, producing nearly pure carbon dioxide for immediate disposal. Or in a process reminiscent of that used a century ago to make town gas, one can gasify coal to produce a mixture of carbon dioxide and hydrogen, remove the carbon dioxide from the mixture and burn the hydrogen. Although the technology for this all sounds straightforward enough, a rather sceptical article in last week’s Economist, Trouble in Store, points out some difficulties. The embarrassing fact is that, for all the enthusiasm from politicians, no energy utility in the world has yet built a large power plant using carbon capture and storage. The problem is purely one of cost. The extra capital cost of the plant is high, and significant amounts of energy need to be diverted to do the necessary separation processes. This puts a high (and uncertain) price on each tonne of carbon not emitted.

Can technology bring this cost down? This question was considered in a talk last week by Professor Mercedes Maroto-Valer from the University of Nottingham’s Centre for Innovation in Carbon Capture and Storage. The occasion for the talk was a meeting held last Friday to discuss environmentally beneficial applications of nanotechnology; this formed part of the consultation process about the third Grand Challenge to be funded in nanotechnology by the UK’s research council. A good primer on the basics of the process can be found in the IPCC special report on carbon capture. At the heart of any carbon capture method is always a gas separation process. This might be helped by better nanotechnology-enabled membranes, or nanoporous materials (like molecular sieve materials) that can selectively absorb and release carbon dioxide. These would need to be cheap and capable of sustaining many regeneration cycles.

This kind of technology might help by bringing the cost of carbon capture and storage down from its current rather frightening levels. I can’t help feeling, though, that carbon capture and storage will always remain a rather unsatisfactory technology for as long as its costs remain a pure overhead – thus finding something useful to do with the carbon dioxide is a hugely important step. This is another reason why I think the “methanol economy” deserves serious attention. The idea here is to use methanol as an energy carrier, for example as a transport fuel which is compatible with existing fuel distribution infrastructures and the huge installed base of internal combustion engines. A long-term goal would be to remove carbon dioxide from the atmosphere and use solar energy to convert it into methanol for use as a completely carbon-neutral transport fuel and as a feedstock for the petrochemical industry. The major research challenge here is to develop scalable systems for the photocatalytic reduction of carbon dioxide, or alternatively to do this in a biologically based system. Intermediate steps to a methanol economy might use renewably generated electricity to provide the energy for the creation of methanol from water and carbon dioxide from coal-fired power stations, extracting “one more pass” of energy from the carbon before it is released into the atmosphere. Alternatively, process heat from a new generation nuclear power station could be used to generate hydrogen for the synthesis of methanol from carbon dioxide captured from a neighbouring fossil fuel plant.
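As a rough indication of why methanol is an attractive energy carrier, here’s a back-of-envelope energy balance for the hydrogenation route CO2 + 3H2 → CH3OH + H2O; the heating values are standard textbook figures, but the result should be read as indicative only, since it ignores all process losses:

```python
# Back-of-envelope energy balance for making methanol from CO2 and hydrogen:
#     CO2 + 3 H2 -> CH3OH + H2O
# Lower heating values are standard textbook figures; treat the result as
# indicative only, since real processes add compression, separation and other losses.

LHV_H2_MJ_PER_KG = 120.0      # lower heating value of hydrogen
LHV_MEOH_MJ_PER_KG = 19.9     # lower heating value of methanol
M_H2 = 2.016e-3               # kg per mole
M_MEOH = 32.04e-3             # kg per mole

energy_in_h2 = 3 * M_H2 * LHV_H2_MJ_PER_KG          # per mole of methanol made
energy_in_methanol = M_MEOH * LHV_MEOH_MJ_PER_KG

print(f"Energy in 3 mol H2:       {energy_in_h2:.2f} MJ")
print(f"Energy in 1 mol methanol: {energy_in_methanol:.2f} MJ")
print(f"Fraction retained (before process losses): {energy_in_methanol / energy_in_h2:.0%}")
```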

Natural complexity, engineering simplicity

One of the things that makes mass production possible is the large-scale integration of nearly identical parts. Much engineering design is based on this principle, which is taken to extremes in microelectronics; a modern microprocessor will contain several hundred million transistors, every one of which needs to be manufactured to very high tolerances if the device is to work at all. One might think that similar considerations would apply to biology. After all, the key components of biological nanotechnology – the proteins that make up most of the nanoscale machinery of the cell – are specified by the genetic code down to the last atom, and in many cases are folded into a unique three-dimensional configuration. It turns out, though, that this is not the case; biology actually has sophisticated mechanisms whose entire purpose is to introduce extra variation into its components.

This point was forcefully made by Dennis Bray in an article in Science magazine in 2003 called Molecular Prodigality (PDF version from Bray’s own website). Protein sequences can be chopped and changed, after the DNA code has been read, by processes of RNA editing and splicing and other types of post-translational modification, and these can lead to distinct changes in the operation of machines made from these proteins. Bray cites as an example the potassium channels in squid nerve axons; one of the component proteins can be altered by RNA editing in up to 13 distinct places, changing the channel’s operating parameters. He calculates that the random combination of all these possibilities means that there are 4.5 × 10¹⁵ subtly different possible types of potassium channel. This isn’t an isolated example; Bray estimates that up to half of human structural genes allow some such variation, with the brain and nervous system being particularly rich in molecular diversity.
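For what it’s worth, Bray’s number is consistent with treating the channel as a tetramer of independently edited subunits – that framing is my own inference, not something spelled out above:

```python
# One way Bray's figure can be reproduced (my inference: treat the channel as a
# tetramer, with each of four subunits carrying 13 independent binary edit sites).
edit_sites_per_subunit = 13
subunit_variants = 2 ** edit_sites_per_subunit   # 8,192 variants per subunit
channel_variants = subunit_variants ** 4         # four subunits per channel

print(f"Variants per subunit: {subunit_variants:,}")
print(f"Variants per channel: {channel_variants:.2e}")   # ~4.5e15
```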

It isn’t at all clear what all this variation is for, if anything. One can speculate that some of this variability has evolved to increase the adaptability of organisms to unpredictable changes in environmental conditions. This is certainly true for the case of the adaptive immune system. A human has the ability to make 10¹² different types of antibody, using combinatorial mechanisms to generate a huge library of different molecules, each of which has the potential to recognise characteristic target molecules on pathogens that we’ve yet to be exposed to. This is an example of biology’s inherent complexity; human engineering, in contrast, strives for simplicity.