It’s easy to be ambivalent about nuclear power, as my last post illustrated. Nuclear power does provide a low carbon source of energy at scale – if we are serious about decarbonising our energy systems we are going to need a new wave of nuclear power stations, not least to replace an earlier generation of ageing reactors. But nuclear enthusiasts seriously underestimate the scale of the problems that need to be overcome to achieve a large scale expansion of nuclear power. Civil nuclear power has a troubled history everywhere; in Japan the consequences of the Fukushima disaster are very much part of current affairs, and its repercussions have spread to countries like Germany. To move beyond this troubled history, to a future in which nuclear power does provide safe and affordable low-carbon energy, we need to understand how this technology got to its current state.
The way in which the technology of civil nuclear power has unfolded was not inevitable; it was the result of the specific circumstances in which it was born and developed. In this sense nuclear power is a great example of the way in which technological trajectories are not pre-ordained; there were many possible paths that nuclear energy could have gone down. What happened is an example of “technological lock-in” – the particular historical environment in which nuclear power was born put the technology on one particular trajectory, from which it is difficult to make a big jump (as argued in this article by Robin Cowan). This matters because it explains why the current state of the technology is probably not the best place to be, given the problems we now need to solve.
We can’t understand where we are with nuclear power without appreciating its roots in military technology. This point should be obvious given the role of nuclear weapons in the twentieth century, but I still think its consequences are under-appreciated. For many anti-nuclear people the connection with weapons in itself marks the technology of nuclear power as immoral, so no further discussion is required. For many pro-nuclear people, on the other hand, there’s a tendency to de-emphasise the military roots of nuclear power. For the moment I want to discuss the issue from a perspective that is neither pro nor anti, but which explores how the design constraints imposed by military needs shaped the technology and put it on the path that led to where we are now.
To remind ourselves of the basics, recall that both civil nuclear power and nuclear fission weapons depend on harnessing the energy released in a chain reaction of fissioning nuclei. There are two important fissile elements – uranium and plutonium. Uranium is very common – about as common as tin – but only one of its naturally occurring isotopes, uranium-235, is fissile. So to make a nuclear weapon from uranium one needs to do the difficult job of enriching the uranium to a much higher concentration of uranium-235 than the 0.72% that occurs in nature. Plutonium, on the other hand, does not occur in nature. But the fissile isotope plutonium-239 can be obtained by irradiating the common isotope of uranium – uranium-238 – with neutrons.
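To get a feel for why enrichment is such a difficult job, here is a minimal sketch of the standard two-stream mass balance and separative work (SWU) calculation; the product and tails assays below are illustrative assumptions, not figures from any particular plant.

```python
# Sketch: how much natural uranium feed, and how much separative work,
# it takes to make enriched product. Standard two-stream mass balance;
# the assay values below are illustrative assumptions.

import math

def value_function(x):
    """Separative-work 'value function' V(x) = (2x - 1) * ln(x / (1 - x))."""
    return (2 * x - 1) * math.log(x / (1 - x))

def feed_and_swu(product_kg, x_product, x_feed=0.0072, x_tails=0.003):
    """Feed mass and separative work (kg-SWU) needed per batch of product."""
    feed_kg = product_kg * (x_product - x_tails) / (x_feed - x_tails)
    tails_kg = feed_kg - product_kg
    swu = (product_kg * value_function(x_product)
           + tails_kg * value_function(x_tails)
           - feed_kg * value_function(x_feed))
    return feed_kg, swu

# Reactor-grade product (~4% U-235) versus weapons-grade (~90% U-235):
for label, assay in [("reactor-grade, 4%", 0.04), ("weapons-grade, 90%", 0.90)]:
    feed, swu = feed_and_swu(1.0, assay)
    print(f"{label}: {feed:.0f} kg feed, {swu:.0f} kg-SWU per kg of product")
```

On these assumptions, a kilogram of weapons-grade uranium needs over two hundred kilograms of natural uranium feed and roughly forty times more separative work than a kilogram of reactor fuel – the gulf that made plutonium attractive to states without enrichment plants.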
The first application of nuclear fission was the atomic bomb, as developed in the Manhattan project. The choice between making a weapon from uranium-235 and plutonium-239 involves trade-offs, and the US programme pursued both options simultaneously. A uranium bomb is easy to make, but the uranium is hard to enrich; getting hold of plutonium, on the other hand, is more straightforward, but it’s technically trickier to make a weapon from it. The Manhattan Project succeeded in both endeavours, and the Americans used one of each type against the Japanese cities of Hiroshima and Nagasaki in 1945.
After the war, the USA had a nuclear weapons monopoly, and they wanted to keep it that way. Keeping nuclear weapons out of the hands of their former ally turned cold war enemy, the Soviet Union, was the top priority, but the USA saw no reason to encourage their former partners in the Manhattan project, the British, to make their own bomb either. For the British, struggling to maintain a place as a great power in the postwar world, developing their own bomb became a priority, despite being cut off from the technology they had helped to develop in the Manhattan project. Given technical knowledge but no access to the huge infrastructure required to enrich uranium, a plutonium weapon was the natural choice. To make plutonium without access to enriched uranium, one needs a nuclear reactor that can run on natural uranium, and to do the job efficiently one needs to be able to move fuel rods in and out without shutting the reactor down. The second requirement comes from the fact that to make weapons grade plutonium one should only irradiate the uranium for a short time – longer irradiation introduces other radioactive contaminants which complicate the business of bomb making. These requirements dictated the design of what became the Magnox reactor in the UK. This gas cooled, graphite moderated reactor entered service as an essentially dual use technology, generating electricity but also making military plutonium. One result of the Magnox programme which followed directly from its military purpose was a ghastly legacy of poorly managed nuclear waste at Sellafield, which is still generating substantial clean-up costs. In the UK, a single Magnox reactor, at Wylfa on Anglesey, is still producing power. Meanwhile, in North Korea, a reactor based on published Magnox designs supplies plutonium for that nation’s nuclear weapons programme.
Despite the USA’s best efforts to hold on to their nuclear secrets, the Soviet Union tested their own first nuclear weapon in 1949. They too chose the plutonium route, and their version of a graphite moderated reactor running on natural uranium also led to a dual use military/civil design, the RBMK. Unlike Magnox, however, the RBMK was cooled not by gas but by water. Because water absorbs neutrons, boiling reduces that absorption; so in some circumstances, if the reactor overheats and the water starts to boil, the reactor power increases – a positive feedback that can lead to a runaway loss of control. Such a runaway happened at Chernobyl in 1986, causing an explosion and the spread of radioactive fallout across Europe.
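To see why the sign of that feedback is the whole story, here is a deliberately crude toy loop – emphatically not a reactor simulation; every coefficient is invented for illustration.

```python
# Toy feedback loop, NOT a reactor model: all coefficients are made up.
# The point is the sign of the void coefficient: negative damps a power
# excursion, positive amplifies it.

def power_after(void_coefficient, steps=50, dt=0.1):
    power = 1.0  # relative power, starting at nominal
    for _ in range(steps):
        void = min(1.0, 0.1 * power)          # hotter -> more steam voids
        reactivity = void_coefficient * void  # voids change reactivity
        power *= (1.0 + reactivity * dt)      # power responds
    return power

print("negative void coefficient:", round(power_after(-2.0), 3))  # decays away
print("positive void coefficient:", round(power_after(+2.0), 3))  # runs away
```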
Back in the USA, another military application of nuclear power was driving the technology in a different direction. Even before the end of the war the possibility of using nuclear power to propel submarines was being considered – the need for diesel-electric submarines to surface to charge their batteries was a serious drawback. Above all, a submarine reactor needs to be compact, which rules out rather large gas cooled, graphite moderated reactors like Magnox. But given the availability of enriched uranium from the weapons programme, a design using a dense core with light water as both moderator and coolant became possible. (Normal water is called light to distinguish it from heavy water, in which the hydrogen is replaced by deuterium; because light water absorbs neutrons, it can’t be used as the moderator in a reactor running on natural uranium.) This design was strongly favoured by Admiral Rickover, who headed the submarine programme. In 1954 the first nuclear powered submarine, Nautilus, was launched, powered by a Westinghouse-manufactured light water reactor, the pressurised water reactor (PWR).
At the time of the launch of the Nautilus, the USA decided to establish civil nuclear power and export nuclear reactor technology to allies in the rest of the world. The submarine experience had given it a working design – the pressurised water reactor – with two companies – Westinghouse and GE – actively involved in the nuclear business. Reactors were built for US utilities and aggressively marketed in Europe, through a combination of cheap loans from the US government and loss-leader pricing from Westinghouse and GE. By the 1970s, with the wind-up of the UK’s troubled second generation programme of gas cooled reactors, light water reactors had achieved a dominant position. Westinghouse’s Pressurised Water Reactor design was the basis of the very large scale French nuclear programme, while GE’s simpler variant, the Boiling Water Reactor, captured markets in the USA and Japan. Moving to the present day, if one wants to buy a reactor, the descendants of these designs are the only ones on offer (with one exception: a Canadian heavy water moderated and cooled reactor design, CANDU).
Light water reactors need enriched uranium as their fuel. The process of separating the two isotopes of uranium – yielding uranium enriched in U-235 (at low enrichments for reactors, higher enrichments for weapons) and nearly pure U-238, depleted uranium, as a residue – began as a military technology in the Manhattan Project, effected either by gaseous diffusion or by an electromagnetic method. These methods are hugely expensive in terms of the size and cost of the plant and its energy inputs. A much cheaper and easier enrichment technology was developed originally in the Soviet Union; this uses centrifuges to separate the isotopic variants of the gas uranium hexafluoride on the basis of their slightly different masses (the sketch below puts numbers on the advantage). In the west the method is associated with the scientist Gernot Zippe. The UK joined forces with Germany and the Netherlands in 1971 to create the consortium URENCO to enrich uranium using this method.

The spread of centrifuge enrichment technology has been an important factor in the geopolitics of nuclear weapons proliferation. A key figure is the Pakistani scientist A.Q. Khan. He worked for URENCO in the early 1970s, during which time it is believed he acquired advanced centrifuge designs. On his return to Pakistan, he led the enrichment programme which culminated in the successful Pakistani nuclear tests of 1998. In the 1990s he is believed to have developed a worldwide network for the sale of centrifuges and other nuclear technology; known recipients of this technology include Libya, North Korea and Iran. Libya relinquished its centrifuges and abandoned its nuclear programme in 2003, leading to some rapprochement between the Ghaddafi regime and the West. North Korea is believed to be pursuing a uranium enrichment programme as part of its nuclear weapons effort, in parallel to the plutonium route. Iran maintains that its centrifuge-based uranium enrichment programme is solely directed at civil nuclear power, a position that is not universally believed. As for URENCO itself, it is in the news again as a result of the current UK government’s intention to privatise its share in the consortium.
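The physics behind the centrifuge’s advantage fits in one formula: the ideal single-stage separation factor. A minimal sketch using textbook expressions; the peripheral speed is an assumed round number, not a real URENCO machine specification.

```python
# Why centrifuges beat gaseous diffusion: ideal single-stage separation
# factors for UF6. Textbook physics; the machine speed is an assumed,
# illustrative figure.

import math

R = 8.314                        # gas constant, J/(mol K)
M_LIGHT, M_HEAVY = 0.349, 0.352  # kg/mol, 235-UF6 vs 238-UF6
DELTA_M = M_HEAVY - M_LIGHT      # 0.003 kg/mol mass difference

# Gaseous diffusion: limited to the square root of the mass ratio.
alpha_diffusion = math.sqrt(M_HEAVY / M_LIGHT)

# Centrifuge: grows exponentially with the square of peripheral speed.
def alpha_centrifuge(peripheral_speed_m_s, temperature_k=300.0):
    exponent = DELTA_M * peripheral_speed_m_s**2 / (2 * R * temperature_k)
    return math.exp(exponent)

print(f"diffusion:            {alpha_diffusion:.4f}")   # ~1.004
print(f"centrifuge @ 500 m/s: {alpha_centrifuge(500):.3f}")  # ~1.16
```

A single diffusion stage barely separates anything, which is why diffusion plants needed thousands of stages and enormous power supplies; a centrifuge achieves a usefully large factor in one machine.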
Despite the dominance of the light water reactor, it isn’t at all obvious that these are the best of all possible designs. There are many other designs that were possible but not built, that didn’t make it beyond the prototype stage, that were implemented in small numbers and abandoned, or that never achieved more than niche status. One feature of some of these designs – such as the heavy water reactor CANDU, or the UK’s second generation gas cooled, graphite moderated reactor, the AGR – is that the core is relatively large compared to the very compact core of the light water reactors. The small size of the PWR’s core makes it cheaper to build than the competition, and of course it’s a crucial advantage in the original application in submarines. But for a power reactor this small size carries a big downside. A reactor can be shut down in an emergency by inserting neutron absorbing rods, but even though the chain reaction has stopped, the intense radioactivity of the fission products means that the core continues to emit significant heat over a period of days or even weeks. If the core is large, this heat can be dissipated even when an accident has stopped the circulation of coolant. But in the dense core of a light water reactor a loss of coolant circulation is very serious: hot, high pressure steam can react with the metal of the fuel cladding to form hydrogen, which can explode, while the core itself can get so hot that it melts, and can melt its way through the base of the reactor vessel.
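The scale of that residual heat can be estimated from the classic Way-Wigner empirical fit for fission-product decay heat – a rough engineering approximation. The 3 GW(thermal) core and one year of prior operation in this sketch are illustrative assumptions.

```python
# How much heat a shut-down core still emits: the empirical Way-Wigner
# fit for fission-product decay heat. A rough approximation, for an
# assumed 3 GW(thermal) core that has run for one year before shutdown.

THERMAL_POWER_MW = 3000.0   # assumed full thermal power
OPERATING_TIME_S = 3.15e7   # assumed one year at power, in seconds

def decay_heat_fraction(t_s, t_op_s=OPERATING_TIME_S):
    """Fraction of full power emitted t_s seconds after shutdown."""
    return 0.0622 * (t_s ** -0.2 - (t_s + t_op_s) ** -0.2)

for label, t in [("1 minute", 60), ("1 hour", 3600),
                 ("1 day", 86_400), ("1 week", 604_800)]:
    frac = decay_heat_fraction(t)
    print(f"{label:>8} after shutdown: {100*frac:.2f}% of full power "
          f"= {frac*THERMAL_POWER_MW:.0f} MW")
```

Megawatts of heat, days after the chain reaction has stopped – this is what the emergency cooling of a compact core has to cope with.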
It was this kind of loss of coolant accident that happened first at Three Mile Island in the USA in 1979. That accident led to a release of radiation and the irreparable loss of the reactor, but no loss of life. A much more serious release of radiation followed the Fukushima disaster in Japan in 2011, when a tsunami cut power to three boiling water reactors, causing a series of hydrogen explosions and breaches of containment. The direct costs of this accident are estimated to be in excess of $50 billion.
The benefit of settling on a single type of nuclear reactor should have been reduced costs and higher reliability, as people learnt from experience how to build and operate them better. As I mentioned in the last blog, it doesn’t seem to have turned out that way, as this study by Grubler (£) shows. In the USA, the capital cost per kilowatt of nuclear power stations went up from about $1000 in the early 1970s to around $5000 in the 1990s (these are 2004 dollars, corrected for inflation). Even in France, widely considered the best-run nuclear build-out, real costs doubled over a similar period. The most recent experiences are even unhappier. Construction of a new PWR, ordered by Finland from the French company Areva, began in 2005. The original estimate for the cost of the 1.6 GW plant was €3 billion, with a 2009 delivery date. It is still unfinished; the expected start-up date has slipped to 2016 and the cost is currently estimated at €8.5 billion.
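A quick back-of-the-envelope calculation puts a number on this “negative learning”; the 25-year interval is my own rough reading of the Grubler figures quoted above.

```python
# Implied real annual cost escalation from ~$1000/kW (early 1970s)
# to ~$5000/kW (1990s); the 25-year span is an assumption.

start_cost, end_cost, years = 1000.0, 5000.0, 25
annual_escalation = (end_cost / start_cost) ** (1 / years) - 1
print(f"~{100 * annual_escalation:.1f}% per year, in real terms")
# Roughly 6.6% a year: costs quintupled while most technologies get cheaper.
```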
Safety and cost remain big issues for nuclear power, while the question of the final disposal of nuclear waste remains unsolved. Related to these is the question of the public acceptability of the technology. In fact, it’s possible to argue that the real risks of nuclear power remain relatively low compared to other energy technologies, and that the waste problem for current versions of the technology should be much smaller than it was earlier, when the needs of the military programme greatly magnified the amount of radioactive material that had to be processed. But this misses the point; we should know by now that what’s important in determining the way people react to technologies isn’t the scale of the risk they perceive, but the degree of trust they have in the institutions in charge. The – correct – perception that the civil nuclear programme has had close links with the military nuclear weapons programme is one source of distrust. Another arises from the fact that in several major nuclear countries – the USA, the UK and Japan – nuclear power plants are operated by private utility companies that don’t have a very good reputation for openness, or indeed competence. The caricature of the nuclear power station owner, Montgomery Burns, in the cartoon “The Simpsons” may well be very unfair, but that’s how public perception is shaped.
A programme of new nuclear build is currently getting underway here in the UK. It’s difficult to imagine circumstances less likely to improve public trust in nuclear energy than what is now being planned. An attempt by the government to look as if it is abiding by unwise political promises is driving it to make potentially very disadvantageous long-term commitments. Government policy opposes overt subsidy for nuclear power; instead a covert subsidy, paid for through increased electricity bills, is being put in place. The government is putting a guaranteed floor under the price the generators will receive for their nuclear energy – around twice the current wholesale price of power – for thirty-five years. At the moment the government can borrow money at historically very low long-term interest rates; this makes it an ideal time for the government itself to finance an infrastructure programme that could provide an immediate boost to the economy and deliver the increase in low-carbon electricity capacity that everyone knows we need. But political constraints mean that the government will not borrow the money directly; instead it will raise the funds from overseas sources, including Chinese state controlled organisations. This will cost more money, it will lead in effect to a degree of direct overseas government control over a sensitive area of strategic infrastructure, and it will leave the government in a very weak position when negotiating for some of the work of building the power stations to come back to UK businesses. These opaque and expensive arrangements are guaranteed to increase suspicion and reduce trust. All three main political parties bear some responsibility for this highly sub-optimal situation.
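To get a feel for the scale of that commitment, here is a rough sketch with entirely illustrative numbers – the wholesale price, plant size, capacity factor and strike price below are my assumptions, not figures from any actual contract.

```python
# Rough scale of a 35-year price-floor guarantee, with made-up
# illustrative numbers; none of these figures are from a real contract.

WHOLESALE_GBP_PER_MWH = 45.0                     # assumed wholesale price
STRIKE_GBP_PER_MWH = 2 * WHOLESALE_GBP_PER_MWH   # "around twice" wholesale
PLANT_GW, CAPACITY_FACTOR, YEARS = 3.2, 0.9, 35  # assumed plant and term

annual_mwh = PLANT_GW * 1000 * 8766 * CAPACITY_FACTOR
annual_topup = annual_mwh * (STRIKE_GBP_PER_MWH - WHOLESALE_GBP_PER_MWH)
print(f"annual top-up via bills: ~£{annual_topup/1e9:.1f} billion")
print(f"over {YEARS} years (undiscounted, flat wholesale price): "
      f"~£{YEARS*annual_topup/1e9:.0f} billion")
```

On these assumed numbers the top-up runs to the order of a billion pounds a year; the real figure depends on how wholesale prices actually move, but the order of magnitude shows why the terms of such a guarantee matter so much.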
It’s hard to be an enthusiast for nuclear power in the current situation. But we do need to move – and move urgently – to decarbonise our energy system, and it is difficult to see how this will happen without nuclear energy making a substantial contribution. That will need a new nuclear building programme. What should we do? To say that it would be much better not to be starting from here is tempting, but not very helpful. I think we do have to get moving and do something; we need to address the issues of cost and affordability, and the problems of safety. If we are to start building anything this decade, we probably have to accept that we will be building light water reactors of some kind.
My own view is that we should rethink the obsession with size and scale. In principle, building larger and larger nuclear reactors brings economies of scale, making the capital cost per GW of capacity smaller and thus reducing the overall cost of the electricity produced. But the history of escalating costs and receding delivery dates suggests that there’s a big gap between in principle and in practice. Moreover, as we’ve seen with the story of British nuclear new build, size brings economic problems too. The capital cost of a single new nuclear power station is now significant compared to the overall capitalisation of a utility company, so a decision to build a new nuclear plant effectively puts the whole company at risk; avoiding this requires complicated and opaque arrangements to share the risk, and expensive long-term guarantees underwritten by the tax-payer or energy consumer. An alternative approach uses much smaller individual reactors – so-called “small modular reactors” (see this OECD report, PDF, for a useful overview). The idea is to trade the elusive economies of scale for economies of manufacture.
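What does that trade mean in numbers? A minimal sketch, assuming a Wright learning curve with an illustrative 10% cost reduction per doubling of cumulative production – the rate is my assumption, not a figure from the SMR literature.

```python
# The "economies of manufacture" bet behind small modular reactors:
# unit costs falling along a learning curve as factory-built units
# accumulate. The 10% learning rate is an illustrative assumption.

import math

LEARNING_RATE = 0.10  # assumed cost reduction per doubling of output

def unit_cost(first_unit_cost, n):
    """Cost of the n-th unit under a Wright learning curve."""
    b = -math.log2(1 - LEARNING_RATE)  # progress exponent
    return first_unit_cost * n ** -b

for n in [1, 2, 4, 8, 16, 32, 64]:
    print(f"unit {n:>2}: {unit_cost(1.0, n):.2f} x first-unit cost")
```

On these assumptions the 64th unit comes in at roughly half the cost of the first – which is precisely the bet being made.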
Instead of building very large reactors on site, the idea is to manufacture much smaller reactors in a factory and install them as multiple units at the power station site. The advantages of these small modular reactors are that we can exploit learning effects to bring the cost down as multiple units are manufactured, we can impose a much higher degree of uniformity and quality control than is possible for on-site assembly of the much bigger reactors that have been favoured recently, and there are potential safety advantages too. To get moving with a new nuclear programme, we should begin a serious design and manufacturing programme for light water small modular reactors. But at the same time we should resume serious research on reactor design. As I wrote in an earlier blog, there has been a worldwide run-down of energy research over the last twenty years, and a neglect of nuclear power research has been very much part of this. The aim of this research should be to explore the space of better designs that were locked out by the military origins of civil nuclear power.