Revisiting the UK’s nuclear AGR programme: 2. What led to the AGR decision? On nuclear physics – and nuclear weapons

This is the second of a series of three blogposts exploring the history of the UK's nuclear programme. The pivot point of that programme was the decision, in the late 1960s, to choose, as the second generation of nuclear power plants, the UK's home-developed Advanced Gas Cooled Reactor (AGR) design, instead of a light water reactor design from the USA. This has been described as one of the worst decisions ever made by a UK government. In my first post, "On the uses of White Elephants", I discussed the way the repercussions of this decision have influenced UK government thinking about large infrastructure projects, and in particular the influence of an article by David Henderson that was highly critical of the AGR decision. In this post, I go into some detail to try to understand why the decision was made.

According to Thomas Kelsey, writing in his article When Missions Fail: Lessons in "High Technology" from post-war Britain, the decision to choose the Advanced Gas Cooled Reactor design for the UK's second-generation reactor programme was forced through by "state technocrats, hugely influential scientists and engineers from the technical branches of the civil service"; sceptics did exist, but they were isolated in different departmental silos, and unable to coordinate their positions to present a compelling counter-view.

But why might the scientists and engineers have been so convinced that the AGR was the right way to go, rather than the rival US-designed Pressurised Water Reactor, thereby making what Henderson argued, in his highly influential article "Two British Errors: Their Probable Size and Some Possible Lessons", was one of the UK government's biggest policy errors? To go some way to answering that, it's necessary to consider both physics and history.

Understanding the decision to choose advanced gas cooled reactors: the physics underlying nuclear reactor design choices

To start with the physics, what are the key materials that make up a fission reactor, and what influences the choice of materials?

Firstly, one needs a fissile material, which will undergo a chain reaction – a nucleus that, when struck by a neutron, will split, releasing energy and emitting a handful of extra neutrons that go on to cause more fissions. The dominant fissile material in today's civil nuclear programmes is uranium-235, the minority isotope that makes up 0.72% of natural uranium (the rest being uranium-238, which is mildly radioactive but not fissile). To make reactor fuel, one generally needs to "enrich" the uranium, increasing the concentration of U-235 – typically, for civil purposes, to a few percent. Enrichment is a complex technology inextricably connected with nuclear weapons – the enrichment needed to make weapons-grade uranium is different in degree, not in kind, from that needed for civil power. One also needs to consider how the fissile material – the nuclear fuel – is to be packaged in the reactor.
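
To give a feel for what enrichment means in material terms, here is a minimal sketch of the standard feed/product mass balance. The numbers are illustrative assumptions (0.72% U-235 in the natural uranium feed, a 0.25% tails assay, a 4% enriched product), not figures for any particular plant:

```python
# Rough feed/product mass balance for uranium enrichment.
# Assay values are illustrative; real plants choose the tails assay on economic grounds.

def natural_uranium_feed(product_kg, x_product=0.040, x_feed=0.0072, x_tails=0.0025):
    """Return the mass of natural uranium feed needed to make product_kg of enriched product.

    Conservation of U-235: feed * x_feed = product * x_product + tails * x_tails,
    with feed = product + tails.
    """
    return product_kg * (x_product - x_tails) / (x_feed - x_tails)

# Roughly 8 tonnes of natural uranium are needed per tonne of ~4% enriched fuel.
print(f"{natural_uranium_feed(1000):.0f} kg of natural uranium per tonne of 4% product")
```

The point is simply that most of the mined uranium ends up as depleted "tails", which is one reason why enrichment is such a substantial industrial undertaking.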

Secondly, one needs a moderator. The neutrons produced in fission reactions are going too fast to be efficient at inducing further fissions, so they need to be slowed down. (As I’ll discuss below, it is possible to have a reactor without moderation – a so-called fast-neutron reactor. But because of the lower absorption cross-section for fast neutrons, this needs to use a much higher fraction of fissile material – highly enriched uranium or plutonium).

In a normal reactor, the purpose of the moderator is to slow down the neutrons. A moderator needs to be made of a light element which doesn't absorb too many neutrons. The main candidates are carbon (in the form of graphite), hydrogen (in the form of ordinary water) and deuterium, the heavier isotope of hydrogen (in the form of heavy water). Hydrogen absorbs neutrons more than deuterium does, so it's a less ideal moderator, but it is obviously much cheaper.
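
To see why light elements make the best moderators, here is a minimal back-of-the-envelope sketch, using the standard elastic-scattering result for the average logarithmic energy loss per collision. The start and end energies (a 2 MeV fission neutron slowed to thermal energies of about 0.025 eV) are round numbers for illustration, not a design calculation:

```python
import math

# Average logarithmic energy loss per elastic collision (xi) for a nucleus of mass number A,
# and the average number of collisions needed to slow a neutron from ~2 MeV to thermal energy.

def xi(A):
    if A == 1:
        return 1.0  # hydrogen is the limiting case of the formula below
    alpha = ((A - 1) / (A + 1)) ** 2
    return 1 + alpha * math.log(alpha) / (1 - alpha)

def collisions_to_thermalise(A, E0=2e6, E_thermal=0.025):
    return math.log(E0 / E_thermal) / xi(A)

for name, A in [("hydrogen", 1), ("deuterium", 2), ("carbon", 12)]:
    print(f"{name:9s}: ~{collisions_to_thermalise(A):.0f} collisions")
# roughly 18 collisions for hydrogen, 25 for deuterium, 115 for carbon
```

A heavy nucleus would need many hundreds of collisions to do the same job, which is why the practical moderators are all light elements; the choice between them is then a matter of how strongly they absorb neutrons, and of cost.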

Finally, one needs a coolant, which takes away the heat the fission reactor produces, so the heat can be extracted and converted to electricity in some kind of turbine. The choice here, in currently operating reactors, is between normal water, heavy water, and a non-reactive gas (either carbon dioxide or helium). Experimental designs use more exotic cooling materials like molten salts and liquid metals.

So the fundamental design choice for a reactor is the choice of moderator and coolant – which dictate, to some extent, the nature of the fuel. The variety of possible combinations of moderators and coolants means that the space of possible reactor designs is rather large, but only a handful of these potential technologies is in widespread use. The most common choice is to use ordinary water as both coolant and moderator – in so-called light water reactors ("light water" in contrast to "heavy water", in which the normal hydrogen of ordinary water is replaced by hydrogen's heavier isotope, deuterium). Light water is an excellent coolant, cheap, and convenient for driving a steam turbine to generate electricity. But it's not a great moderator – it absorbs neutrons, so a light water reactor needs to use enriched uranium as fuel, and the core needs to be relatively small.

These weren't problems for the original use of pressurised water reactors (PWRs), the most common type of light water reactor. (The other variety, the Boiling Water Reactor, similarly uses light water as both coolant and moderator, the difference being that steam is generated directly in the reactor core rather than in a secondary circuit.) PWRs were designed to power submarines, in a military context where enriched uranium was readily available, and where a compact size is a great advantage. But the design has an inherent weakness – the susceptibility of light water reactors to what's known as a "loss of coolant accident". The problem is that, if for some reason the flow of cooling water is stopped, then even if the chain reaction is quickly shut down (and this isn't difficult to do), the fuel produces so much heat through its radioactive decay that the fuel rods can melt, as happened at Three Mile Island. What's worse, the alloy that the fuel rods are clad in can react with hot steam to produce hydrogen, which can explode, as happened at Fukushima.
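
To put rough numbers on the decay heat problem, here is a minimal sketch using the old Wigner-Way empirical approximation for the heat output of a reactor core after shutdown. The operating time (about a year) and the 3000 MW of thermal power are illustrative assumptions, and the formula is only a rough empirical fit; this is an order-of-magnitude sketch, not a safety calculation:

```python
# Rough decay heat after shutdown, using the empirical Wigner-Way approximation:
#   P(t)/P0 ~ 0.0622 * (t**-0.2 - (t + T)**-0.2)
# where t is the time since shutdown and T the prior operating time, both in seconds.

def decay_heat_fraction(t_seconds, operating_seconds=3.15e7):  # ~1 year of operation
    return 0.0622 * (t_seconds ** -0.2 - (t_seconds + operating_seconds) ** -0.2)

thermal_power_MW = 3000  # illustrative figure for a large power reactor
for label, t in [("1 second", 1), ("1 minute", 60), ("1 hour", 3600), ("1 day", 86400)]:
    frac = decay_heat_fraction(t)
    print(f"{label:>9s} after shutdown: {100*frac:.1f}% of full power, ~{frac*thermal_power_MW:.0f} MW")
```

Even a fraction of a percent of full power is tens of megawatts of heat that has to be removed somehow, which is why getting rid of decay heat after shutdown looms so large in reactor safety.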

In contrast to light water, heavy water is an excellent moderator. Although deuterium and (normal) hydrogen are (nearly) chemically identical, the interaction of neutrons with their nuclei is very different – deuterium absorbs neutrons much less than hydrogen. Heavy water is just as good a coolant as light water, so a reactor with heavy water as both moderator and coolant can be run with unenriched uranium oxide as fuel. The tradeoff, then, is the ability to do without a uranium enrichment plant, at the cost of having to use expensive and hard-to-make heavy water in large quantities. This is the basis of the Canadian CANDU design.

Another highly effective moderator is graphite (if it's of sufficiently high purity). But since graphite is a solid, a separate coolant is needed. The UK's Magnox stations used carbon dioxide as a coolant and natural, unenriched uranium metal as a fuel; it was a development of this design that formed the Advanced Gas Cooled Reactor (AGR), which used lightly enriched uranium oxide as a fuel. The use of gas rather than water as the coolant makes it possible to run the reactor at a higher temperature, which allows a more efficient conversion of heat to electricity, while the fact that graphite and carbon dioxide absorb fewer neutrons than light water means that the core is less compact.
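
The efficiency point can be illustrated with a very crude thermodynamic sketch. The temperatures below are round, assumed figures (a gas outlet temperature of about 640 °C for an AGR, against a little over 300 °C for the water leaving a PWR core); the Carnot limit is only an upper bound, but it shows why the higher-temperature plant comes out ahead:

```python
# Crude illustration of why a higher coolant temperature helps: the Carnot limit on
# converting heat to work rises with the temperature at which the heat is delivered.
# Temperatures are round, illustrative figures, not design values.

def carnot_limit(t_hot_c, t_cold_c=30.0):
    t_hot, t_cold = t_hot_c + 273.15, t_cold_c + 273.15
    return 1 - t_cold / t_hot

for name, t_out_c in [("AGR, CO2 outlet ~640 C", 640.0), ("PWR, water outlet ~320 C", 320.0)]:
    print(f"{name}: Carnot limit ~{100 * carnot_limit(t_out_c):.0f}%")
# Real plants fall well short of these limits, but the ordering survives:
# the AGRs achieve thermal efficiencies of roughly 40%, against low-to-mid 30s for a PWR.
```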

Another approach is to use graphite as the moderator, but to use light water as the coolant. The use of light water reduces the neutron efficiency of the design, so the fuel needs to be lightly enriched. This is the basis of the Soviet Union’s RBMK reactor. This design is cheap to build, but it has a very ugly potential failure mode. If the cooling water starts to boil, the bubbles of steam absorb fewer neutrons than the water they replace, and this means the efficiency of the chain reaction can increase, leading to a catastrophic runaway loss of control of the fission reaction. This is what happened at Chernobyl, the world’s worst nuclear accident to date.

Understanding the decision to choose advanced gas cooled reactors: the history of the UK nuclear weapons programme, and its influence on the civil nuclear programme

In the beginning, the purpose of the UK’s nuclear programme was to produce nuclear weapons – and the same can be said of other nuclear nations, USA and USSR, France and China, India and Pakistan, Israel and North Korea. The physics of the fission reaction imposes real constraints on the space of possible reactor designs – but history sets a path-dependence to the way the technology evolved and developed, and this reflects the military origins of the technology.

A nuclear weapon relies on the rapid assembly of a critical mass of a highly fissile material. One possible material is uranium – but since it's only the minority uranium-235 isotope that is fissile, it's necessary to separate this from the uranium-238 that constitutes 99.28% of the metal as it is found in nature. The higher the degree of enrichment, the smaller the critical mass required; in practice, enrichments over 60% are needed for a weapon. There is an alternative – to use the wholly artificial element plutonium. The fissile isotope plutonium-239 is formed when uranium-238 absorbs a neutron, most conveniently in a fission reactor.

As the history of nuclear weapons is usually told, it is the physicists who are given the most prominent role. But there's an argument that the crucial problems to be overcome were as much ones of chemical engineering as of physics. There is no chemical difference between the two uranium isotopes that need to be separated, so any separation process has to rely on physical properties that depend on the tiny difference in mass between the two isotopes. On the other hand, to obtain enough plutonium to build a weapon, one needs not just to irradiate uranium in a reactor, but then to use chemical techniques to extract the plutonium from a highly radioactive fuel element.

In 1941, the wartime UK government had concluded, based on the work of the so-called MAUD committee, that nuclear weapons were feasible, and began an R&D project to develop them – codenamed "Tube Alloys". In 1943 the UK nuclear weapons programme was essentially subsumed by the Manhattan Project, but it was always the intention that the UK would develop nuclear weapons itself when the war ended. The pre-1943 achievements of Tube Alloys are often overlooked in the light of the much larger US programme, but one feature is worth pointing out. The UK programme was led by the chemical giant ICI; this was resented by the academic physicists who had established the principles by which nuclear weapons would work. However, it arguably represented a realistic appraisal of where the practical difficulties of making a weapon would lie – in obtaining sufficient quantities of the fissile materials needed. Tube Alloys pursued an approach to uranium enrichment based on the slightly different mass-dependent diffusion rates of uranium hexafluoride through porous membranes. This relied on the expertise in fluorine chemistry developed by ICI at Runcorn in the 1930s, and came to fruition with the establishment of a full-scale gaseous diffusion plant at Capenhurst, Cheshire, in the late 1940s and early 1950s.
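
The physical basis of gaseous diffusion can be illustrated with a couple of lines of arithmetic. The single-stage separation factor follows from the ratio of molecular masses of the two uranium hexafluoride species (Graham's law), and it is tiny, which is why a diffusion plant needs a cascade of hundreds of stages even for low-enriched material. A minimal sketch, using round numbers and the idealised single-stage factor:

```python
import math

# Ideal separation factor for gaseous diffusion of uranium hexafluoride (Graham's law):
# the two molecules differ only by the 3-mass-unit difference between U-235 and U-238.
m_u235f6 = 235.04 + 6 * 18.998
m_u238f6 = 238.05 + 6 * 18.998
alpha = math.sqrt(m_u238f6 / m_u235f6)
print(f"separation factor per stage: {alpha:.4f}")   # about 1.0043

# Minimum number of stages to go from natural uranium (0.72% U-235) to ~4% enrichment,
# treating each stage as achieving the full ideal factor (real cascades need more stages).
def abundance_ratio(x):
    return x / (1 - x)

stages = math.log(abundance_ratio(0.04) / abundance_ratio(0.0072)) / math.log(alpha)
print(f"minimum stages for ~4% product: {stages:.0f}")  # several hundred
```

For highly enriched uranium the number of stages required is several times larger again, which gives some sense of why these plants were such enormous industrial undertakings.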

After the war, the UK was cut off from the technology developed by the USA in the Manhattan project, with the 1946 McMahon Act formally prohibiting any transfer of knowledge or nuclear materials outside the USA. The political imperative for the UK to build its own nuclear weapon is summed up by the reported comments of Ernest Bevin, the Foreign Secretary in the postwar Labour government: “We’ve got to have this thing over here, whatever it costs. We’ve got to have the bloody Union Jack on top of it.”

But even before the formal decision to make a nuclear weapon was taken, in 1947, the infrastructure for the UK's own nuclear weapons programme had been put in place, reflecting the experience of the returning UK scientists who had worked on the Manhattan Project. The first decision was to build a nuclear reactor in the UK to make plutonium; the Manhattan Project had highlighted the potential of the plutonium route to a weapon.

To put it crudely, it turned out to be easier to make a bomb from highly enriched uranium than from plutonium, but it was easier to make plutonium than highly enriched uranium. The problem with the plutonium route to the bomb is that irradiating uranium-238 with neutrons produces not just the fissile isotope plutonium-239, but also smaller amounts of another isotope, plutonium-240. Plutonium-240 undergoes spontaneous fission, emitting neutrons. Because of this, the simplest design of nuclear weapon – the gun design used for the Hiroshima bomb – will not work for plutonium, as the spontaneous fission causes premature detonation and a low explosive yield. This problem was solved by the development of the much more complex implosion design, but there are still hard limits on the level of plutonium-240 that can be tolerated in weapons-grade plutonium, and these impose constraints on the design of reactors used to produce it.

The two initial UK plutonium production reactors were built at Sellafield – the Windscale Piles. The fuel was natural, unenriched uranium (necessarily, because the uranium enrichment plant at Capenhurst had not yet been built), and this dictated the use of a graphite moderator. The reactors were air-cooled. The first pile started operating in 1951, with the first plutonium produced in early 1952, enabling the UK's first nuclear weapon test, carried out successfully in October 1952.

But even as the UK's first atom bomb test succeeded, it was clear that the number of weapons the UK's defence establishment was calling for would demand more plutonium than the Windscale piles could produce. At the same time, there was growing interest in using nuclear energy to generate electricity, at a time when coal was expensive and in short supply, and oil had to be imported and paid for with scarce US dollars. The decision was made to combine the two goals, with second-generation plutonium-producing reactors also generating power. The design would use graphite moderation, as in the Windscale piles, and natural uranium as a fuel, but rather than being air-cooled, the coolant was high-pressure carbon dioxide. The exclusion of air made it possible to use a magnesium alloy as the casing for the fuel, which absorbed fewer neutrons than the aluminium used before.

The first of this new generation of dual-purpose reactors – at Calder Hall, near Sellafield – was opened in 1956, just four years after the decision to build it. Ultimately eight reactors of this design were built – four at Calder Hall, and four at Chapelcross in Scotland. It's important to stress that, although these reactors did supply power to the grid, they were optimised to produce plutonium for nuclear weapons, not to produce electricity efficiently. The key feature that this requirement dictated was the ability to remove the fuel rods while the reactor was running; for weapons-grade plutonium the exposure of uranium-238 to neutrons needs to be limited, to keep the level of undesirable plutonium-240 low. From the point of view of power production this is sub-optimal, as it significantly lowers the effective fuel efficiency of the reactor; it also produces significantly greater quantities of nuclear waste.

The first generation of UK power reactors – the Magnox power stations – were an evolution of this design. Unlike Calder Hall and Chapelcross, they were under the control of the Central Electricity Generating Board, rather than the Atomic Energy Authority, and were run primarily to generate electricity rather than weapons-grade plutonium, using longer burn-up times that produced plutonium with high concentrations of Pu-240. This so-called "civil plutonium" was separated from the irradiated fuel – there is now a stockpile of about 130 tonnes of it. Did the civil Magnox reactors produce any weapons-grade plutonium? I don't know, but I believe that there is no technical reason that would have prevented it.

Fast neutron reactors and the breeder dream

A reactor that doesn’t have a moderator is known as a fast-neutron reactor. This uses neutrons at the energy they have when emitted from the fission reaction, without slowing them down in a moderator. As mentioned above, the probability of a fast neutron colliding with a fissile nucleus is smaller than for a slow neutron, so this means that a fast-neutron reactor needs to use a fuel with a high proportion of fissile isotopes – either uranium highly enriched in U-235, or plutonium (both need to be in the form of the oxide, so the fuel doesn’t melt). In the absence of a moderator, the core of a fast neutron reactor is rather small, producing a lot of heat in a very small volume. This means that neither water nor gas is good enough as a coolant – fast neutron reactors to date have instead used liquid metal, most commonly molten sodium. As one might imagine, this poses considerable engineering problems.

But fast-neutron reactors have one remarkable advantage which has made many countries persist with a fast-neutron reactor programme, despite the difficulties. A fission reaction prompted by a fast neutron produces, on average, more additional neutrons than fission prompted by a slow neutron. This means that a fast-neutron reactor can produce more neutrons than are needed to maintain the chain reaction, and these additional neutrons can be used to “breed” additional fissile material. In effect, a fast-neutron reactor can produce more reactor fuel than it consumes, for example by converting non-fissile uranium-238 into fissile plutonium-239, or converting non-fissile thorium-232 into another fissile isotope of uranium, uranium-233.
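
The neutron bookkeeping behind this can be sketched very simply. If, for each neutron absorbed in the fissile fuel, an average of η new neutrons are released, then one of them is needed to keep the chain reaction going, a fraction L is inevitably lost to leakage and to absorption in structural materials and coolant, and the remainder can be captured in a fertile nucleus such as uranium-238. In this simplified accounting (a sketch of the standard argument, not a design calculation), breeding requires:

```latex
\underbrace{\eta}_{\text{neutrons per absorption in fuel}}
\;-\; \underbrace{1}_{\text{sustaining the chain}}
\;-\; \underbrace{L}_{\text{losses}}
\;>\; 1
\qquad\Longleftrightarrow\qquad
\eta \;>\; 2 + L
```

The margin by which η exceeds 2 is slim for the fissile isotopes in a thermal neutron spectrum, but considerably more comfortable for plutonium-239 fissioned by fast neutrons, which is why breeder programmes have been built around fast reactors.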

In the 1940s and 50s, the availability of uranium relative to the demands of weapons programmes was severely limited, so the prospect of extracting energy from the much more abundant U-238 isotope was very attractive. Design studies for a UK fast neutron reactor started as early as 1951, with the strong backing of Christopher Hinton, the hard-driving ex-ICI engineer who ran the UK's nuclear programme. An experimental fast reactor was built at Dounreay, in Caithness, and completed by 1959. On the basis of this experience, it was decided in 1966 to build a prototype fast power reactor, cooled with liquid sodium, with a design electrical output of 250 MW.

The worldwide expansion of nuclear power in the 1970s seemed to strengthen the case for a breeder reactor even further, so the commissioning of the prototype fast reactor in 1974 seemed timely. However, in common with the experience of fast reactors elsewhere in the world, reliability was a problem, and the Dounreay reactor never achieved even 50% of its design output. Moreover, following the 1979 Three Mile Island accident, the worldwide expansion of nuclear power stalled, and the price of uranium collapsed, undercutting the economic rationale for breeder reactors.

The winding down of the UK’s experiment with fast breeders was announced in Parliament in 1988: “The Government have carried out a review of the programme in the light of the expectation that commercial deployment of fast reactors in the United Kingdom will not now be required for 30 to 40 years. Our overall aim in the review has been to retain a position in the technology for the United Kingdom at economic cost.” Operations on the Dounreay prototype fast breeder came to an end in 1994, and in effect the UK’s position in the technology was lost. In the UK, as elsewhere in the world, the liquid metal cooled fast neutron breeder reactor proved a technological dead-end, where it remains – for now.

Submarines

Bombs are not the only military application of nuclear energy. Even before the Second World War ended, it was appreciated that a nuclear reactor would be an ideal power source for a submarine. Diesel-electric submarines need to surface frequently to run their engines and recharge their batteries; a submarine with a long-term power source that didn't need oxygen, able to remain underwater for months on end, would be transformational for naval warfare. In the UK, work on a naval reactor began in the early 1950s, and the UK's first nuclear-powered submarine, HMS Dreadnought, was launched in 1960. But HMS Dreadnought didn't use UK nuclear technology; instead it was powered by a reactor of US design, a pressurised water reactor, using light water both as moderator and as coolant.

The father of the US nuclear navy was an abrasive and driven figure, Admiral Rickover. Rickover ran the US Navy's project to develop a nuclear submarine, initially working at Oak Ridge National Laboratory in the late 1940s. He selected two potential reactor designs – the pressurised water reactor devised by the physicist Alvin Weinberg, and a liquid-sodium-cooled, beryllium-moderated reactor. Both were developed to the point of implementation, but it was the PWR that was regarded as the better (and, in particular, the more reliable) design, and it has subsequently been used for all Western nuclear submarines.

The prototype reactor went critical at a land-based test installation in 1953. At this time the first submarine was already under construction; the USS Nautilus went to sea only two years later, in 1955. The UK's effort lagged considerably behind. In 1958, following the thawing of nuclear relations between the UK and the USA, Admiral Rickover offered the UK a complete nuclear propulsion system. It seems that this deal was sealed entirely on the basis of the personal relationship between Rickover and the UK's Admiral of the Fleet, Lord Mountbatten. It came with two conditions. The first was that it should be a company-to-company deal, between the US contractor Westinghouse and the UK firm Rolls-Royce, rather than a government-to-government agreement. The second was that it was a one-off – Rolls-Royce would have a licence to the Westinghouse design for a pressurised water reactor, but after that the UK was on its own. These two conditions have meant that there has been a certain separation between the UK's naval reactor programme – in which Rolls-Royce has developed further iterations of the naval PWR design – and the rest of the national nuclear enterprise.

Rickover's rapid success in creating a working power reactor for submarines had far-reaching consequences for civil nuclear power. President Eisenhower's 1953 "Atoms for Peace" speech committed the USA to developing civilian applications, and the quickest way to deliver on that was to build a nuclear power station based on the submarine work. Shippingport opened in 1957 – it was essentially a naval reactor repurposed to power a static power station, and was wholly uneconomic as an energy source, but it established Westinghouse's position as a supplier of civil nuclear power plants. Pressurised water reactors designed at the outset for civil use would evolve in a different direction to submarine reactors. For a submarine, a reactor needs to be highly compact and self-contained, and should be able to go for long periods without being refuelled, all of which dictates the use of highly enriched – essentially weapons-grade – uranium. In civil use, to have any chance of being economic, uranium at much lower enrichment levels must be used, but designs can be physically bigger, and refuelling can be more frequent. By the 1960s, Westinghouse was able to export civil PWRs to countries like Belgium and France, and it was a descendant of this design that was built in the UK at Sizewell B.

Imagined futures, alternative histories, and technological lock-in

The path of technological progress isn’t preordained, but instead finds a route through a garden of forking paths, where at each branch point the choice is constrained by previous decisions, and is influenced by uncertain guesses about where each of the different paths might lead.

So it's a profound mistake to suppose that, in choosing between different technological approaches to nuclear power, it is simply a question of picking from a menu of ready-made options. The choice depends on history – a chain of previous choices which have established which potential technological paths have been pursued and which ones have been neglected. It's this that establishes the base of technological capability and underpinning knowledge – both codified and tacit – that will be exploited in the new technology. It depends on the existence of a wider infrastructure. A national nuclear programme comprises a system, which could include uranium enrichment facilities, fuel manufacturing, plutonium separation and other waste handling facilities – and, as we've seen, the scope of that system depends not just on a nation's ambitions for civil nuclear power, but on its military ambitions and its weapons programme. And it depends on visions of the future.

In the early years of the Cold War, those visions were driven by paranoia, and a not unjustified fear of apocalypse. The McMahon Act of 1946 had shut the UK out of any collaboration on nuclear weapons with the USA; the Soviet Union had demonstrated an atom bomb in 1949, following up in 1955 with a thermonuclear weapon in the megaton range. The architects of the UK nuclear programme – the engineer Christopher Hinton, and the physicists William Penney and John Cockcroft – drove it forward with huge urgency. Achievements like delivering Calder Hall in just four years were remarkable – but they came at the cost of cut corners and the accumulation of massive technical debt. We are still living with the legacy of that time – for example, in the ongoing, hugely expensive clean-up of the nuclear waste left at Sellafield from that period.

Energy worries dominated the 1970s, nationally and internationally. Conflicts in the Middle East led to an oil embargo and a major spike in the price of oil. The effect of this was felt particularly strongly in the USA, where domestic oil production had peaked in 1970, giving rise to fundamental worries about the worldwide exhaustion of fossil fuels. In the UK, industrial action in the coal mining industry led to rolling power cuts and a national three-day week, with the resulting sense of national chaos contributing to the fall of the Heath government. Fuel prices of all kinds – oil, coal and gas – seemed to be rising inexorably. For energy importers – and the UK was still importing around half its energy in the early 1970s – security of energy supplies suddenly seemed fragile. In this environment, there was a wide consensus that the future of energy was nuclear, with major buildouts of nuclear power carried out in France, Germany, Japan and the USA.

By the 1990s, things looked very different. In the UK, the exploitation of North Sea oil and gas had turned the UK from an energy importer to an energy exporter. All aspects of fossil fuel energy generation and distribution had been privatised. In this world of apparent energy abundance, energy was just another commodity whose supply could safely be left to the market. And in an environment of high interest rates and low fuel prices, there was no place in the market for nuclear energy.

But if decisions about technological directions are driven by visions of the future, they are constrained by the past. What is possible is determined by the infrastructure that has already been built – uranium enrichment plants, reprocessing facilities, and so on. The stock of knowledge acquired in past R&D programmes is shaped by the problems that emerged during those programmes, so starting work on a different class of reactors would render that knowledge less useful and necessitate new, expensive programmes of research. The skills and expertise developed in past programmes – whether in the understanding of reactor physics needed to run them efficiently, or in the construction and manufacturing techniques needed to build them cheaply and effectively – will be specific to the particular technologies that have been implemented in the past.

All this contributes to what is called “technological lock-in”. It isn’t obvious that the first class of power reactor ever developed – the pressurised water reactor – must be the optimum design, out of the large space of possible reactor types, particularly as it was originally designed for a different application – powering submarines – to the one it ended up being widely implemented for – generating power in static, civil power stations.

The UK’s decision to choose the Advanced Gas Cooled Reactor

So why did the UK’s state technocrats make the decision to roll out Advanced Gas Cooled reactors – and having made that decision, why did it take so long to reverse it? The straightforward answer is that this was another case of technological lock-in – the UK had developed an expertise in gas-cooled reactors which was genuinely world-leading, as a result of its decision in the Magnox programme to merge the goals of generating electricity and producing military plutonium. I believe there was a real conviction that the gas-cooled reactor was technically superior to the light-water designs, coupled with a degree of pride that this was an area that the UK had led in. As a UKAEA expert on gas-cooled reactors wrote in 1983, “Few other countries had the skills or resources to pioneer [gas-cooled reactors]; the easy option of the light water reactor developed by someone else has been irresistible”.

There were specific reasons to favour the AGR over PWRs – in particular, within the UK programme there were worries about the safety of PWRs. These were expressed particularly forcefully by Sir Alan Cottrell, an expert on metallurgy and its applications in the nuclear industry, who was the government's Chief Scientific Adviser between 1971 and 1974. Perhaps, after Three Mile Island and Fukushima, one might conclude that these worries were not entirely misplaced.

Later in the programme, even its proponents may have conceded that the early AGR builds hadn't gone well, but there was a view that the teething problems had been more or less ironed out. I haven't managed to find an authoritative figure for the final cost of the later AGR builds, but in 1980 it was reported in Parliament that Torness was on track to be delivered for a budget of £1.1 bn (1980 prices), which is not greatly different from the final cost of the Sizewell B PWR. Torness, like Sizewell B, took 8 years to build.

But I wonder whether the biggest factor in the UK nuclear establishment's preference for the AGR over the PWR was a sense that the AGR represented another step on a continuing path of technological progress, while the PWR was a mature technology whose future was likely to consist simply of incremental improvements. Beyond the AGRs, the UK's nuclear technologists could look to the next generation of high temperature reactors, whose prototype – Dragon, at Winfrith – was already in operation, with the fast breeder reactor promising effectively unlimited fuel for a nuclear-powered future. But that future was foreclosed by the final run-down of the UK's nuclear programme in the 80s and 90s, driven by the logic of energy privatisation and cheap North Sea gas.

In the third and final part of this series, I will consider how this history has constrained the UK's faltering post-2008 effort to revive a nuclear power industry, and what the future might hold.

Sources

For the history of the UK’s nuclear programme, both civil and military, I have relied heavily on: An Atomic Empire: A Technical History Of The Rise And Fall Of The British Atomic Energy Programme, by Charles Hill (2013)

Churchill’s Bomb, by Graham Farmelo (2013) is very illuminating on the early history of the UK’s atomic weapons programme, and on the troubled post-war nuclear relationship between the UK and USA.

On the technical details of nuclear reactors, Nuclear power technology. Volume 1. Reactor technology, edited by Walter Marshall (OUP, 1983) is still very clear. Marshall was Chair of the UK Atomic Energy Authority, then Chief Executive of the Central Electricity Generating Board, and most of the contributors worked for the UKAEA, so in addition to its technical value, the tone of the book gives some flavour of the prevailing opinion in the UK nuclear industry at the time.

On Sir Alan Cottrell’s opposition to PWRs on safety grounds, see his biographical memoir. This also provides an interesting glimpse at how intimately linked the worlds of academia, government scientific advice, and the UK’s nuclear programme (with the occasional incursion by Royalty) were in the 1960s and 70s.