## Problems with risk calculations

While we are often told that severe nuclear accidents are ‘impossible’ (a claim that is itself impossible to make – perhaps a subject for a future post), a more measured response is for an ‘expert’ from the nuclear industry to quote probabilities and risk factors. This is a quick post about why those risk assessments are wrong, or at least misleading.

Risk is the *probability* of an event times the *consequences* of that event.

Risk = *C(A)* x *P(A)*

where *C(A)* is the consequences of event A and *P(A)* is the probability of that event.

Since I am currently backing up my computer let us use that as an example.

Assume that my hard disk crashes, losing all the data on it, once every five years on average. That works out as a *probability* of about 0.05% per day. The *consequences* of this depend on how much work is stored on my computer – say I do 38 hours of work a week and that I back up my computer every week. So I risk 0.02 hours (38 x 0.05%) every week, which is about an hour a year.
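The arithmetic above can be sketched in a few lines. The figures are the assumptions made in this post (one crash per five years, 38 hours of work a week, weekly backups), not measured values:

```python
# Expected data loss from a disk crash, using the post's assumptions.
failure_interval_years = 5                        # assume one crash every five years
p_per_day = 1 / (failure_interval_years * 365)    # ~0.055% chance of a crash per day

hours_per_week = 38                               # work accumulated between weekly backups

# Risk = consequences x probability: hours at risk times the failure probability.
hours_lost_per_week = hours_per_week * p_per_day
hours_lost_per_year = hours_lost_per_week * 52

print(round(100 * p_per_day, 3))    # daily probability as a percentage, ~0.055
print(round(hours_lost_per_year, 2))  # ~1 hour of work per year
```

Backing up twice as often roughly halves `hours_per_week`, and hence the expected loss – which is the point made in the next paragraph.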

If I do less work on the computer then I risk a lot less. I can also decrease that risk by backing up more than once a week.

That all sounds fine, but I have made some mistakes. I have not taken into account that my computer could be stolen, or that the house could burn down, in which case I lose not only my hard disk but also the backups on CD-ROM if I have not taken the precaution of moving them off site. Another problem is that I have simply assumed that the disk failure rate is once every five years. Finally, I made a (very insignificant, in this case) error when calculating the probability, since I assumed that I could lose the data more than once in any given week.

I will not take this example any further; I just wanted to show that the idea that risk is simply *consequences* times *probability* is wrong. There is also a finite probability that my model for calculating the probability is itself incorrect. The risk is therefore:

Risk = *C(A) P(A|M) P(M) + C(A) P(A|¬M) P(¬M)*

where *P(A|M)* is the probability of event A given that our method (M) is correct, *P(A|¬M)* is the probability of event A given that the method/calculation is incorrect, *P(M)* is the probability that the method/calculation is correct, and *P(¬M)* is the probability that it is incorrect.
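A small sketch shows why the second term matters. All the numbers here are illustrative assumptions, not real figures: a claimed 1-in-10,000-per-year event, a 1% chance the model is wrong, and a true frequency 100 times higher if it is:

```python
# Risk including the chance that the risk model itself is wrong.
# All numbers below are illustrative assumptions, not real figures.
def risk(consequence, p_event_given_model, p_model_ok, p_event_given_wrong):
    """Risk = C(A)*P(A|M)*P(M) + C(A)*P(A|notM)*P(notM)."""
    p_model_wrong = 1 - p_model_ok
    return (consequence * p_event_given_model * p_model_ok
            + consequence * p_event_given_wrong * p_model_wrong)

naive = 1e6 * 1e-4                              # ignoring model error entirely
with_model_error = risk(1e6, 1e-4, 0.99, 1e-2)  # 1% chance the model is wrong

print(round(naive, 2), round(with_model_error, 2))  # → 100.0 199.0
```

Even with 99% confidence in the model, the model-error term here contributes as much risk as the model's own prediction – the calculated risk roughly doubles.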

As you might expect if you are familiar with my blog, I am going to bring this around to nuclear power. I will talk about probability in this post; however, it is important to note that the consequences of a nuclear power accident can be very severe. I have seen it said that Chernobyl or Fukushima were ‘as bad as it can get’. This is not correct: at Chernobyl they managed to dig under the reactor and pour concrete to stop the core reaching the water table, and at Fukushima they did manage to cool the reactor cores and cooling ponds. Had they not done so, things could have been a lot worse.

For the rest of this post I shall just talk about probabilities.

I could get a better estimate of the probability of a disk failure, since this has been studied^{1}. The data are reasonably good because disk crashes are reasonably frequent and there are a large number of hard disks to study. High-probability, low-consequence – and hence high-frequency – events are easy to study.

However, there is a problem with high-consequence events such as a major nuclear power station accident. If the risk is to remain acceptable then the probabilities have to be very small, and therefore the frequency of these events is very small – so small that they may never have been observed at all. For example, the risk of flooding is supposed to be a one-in-10,000-year (10^{-4} per annum) event (see this post where I have already discussed some of these issues). In these cases we have much less data against which to test our model, and this leads to a large increase in the probability of our method being incorrect, *P(¬M)*. It is not just external events: the behaviour of systems and materials within the nuclear plant is also not well enough understood, which leads to errors in the calculated probabilities of failure. Unfortunately it is often very difficult to say how wrong they are.
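One way to see the data problem: if an event has never been observed in N trials, the data alone can only *bound* its probability – the so-called ‘rule of three’ gives an approximate 95% upper confidence limit of 3/N. The reactor-years figure below is illustrative, not a real operating-experience total:

```python
# "Rule of three": if an event has not occurred in n independent
# trials (e.g. reactor-years of operation), an approximate 95% upper
# confidence bound on its per-trial probability is 3/n.
def rule_of_three_upper_bound(n_trials_without_event):
    return 3 / n_trials_without_event

# Even ~15,000 reactor-years of experience (an illustrative figure)
# with no event of a given kind observed only bounds the annual
# probability at about 2e-4 - it cannot confirm a claimed 1e-4 figure.
print(rule_of_three_upper_bound(15000))  # → 0.0002
```

The observation record can rule a claimed probability *out*, but for sufficiently rare events it can never confirm it; the confirmation has to come from the model, which is exactly where *P(¬M)* enters.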

These incorrect assumptions do not always lead to an accident. For example, the earthquake that hit the North Anna power plant in the USA was above the design basis^{2}, and the flood defences at the Fort Calhoun nuclear plant were improved only the year before floods that would otherwise have been above the design basis^{3}. In other cases – Fukushima – the errors are only too obvious.

^{1} Failure Trends in a Large Disk Drive Population, Eduardo Pinheiro, Wolf-Dietrich Weber and Luiz André Barroso, Google Inc., 2007 (https://static.googleusercontent.com/external_content/untrusted_dlcp/research.google.com/en//archive/disk_failures.pdf)

^{2} North Anna Earthquake Summary, NRC (http://www.nrc.gov/about-nrc/emerg-preparedness/virginia-quake-info/va-quake-summary.pdf)

^{3} A Nuclear Plant’s Flood Defenses Trigger a Yearlong Regulatory Confrontation, New York Times, 24 June 2011 (http://www.nytimes.com/cwire/2011/06/24/24climatewire-a-nuclear-plants-flood-defenses-trigger-a-ye-95418.html)

In addition to nuclear ‘accidents’, the overriding reason to shut down all existing reactors is that they all routinely discharge poisonous radioactive gases into the atmosphere and radioactive liquids into the river or sea that provides their cooling water. These undetectable discharges are inhaled and/or ingested by local communities, causing excess cancers, premature deaths from cardiovascular illnesses, central nervous system fatalities and heritable genetic mutations. See the US Environmental Protection Agency paper:

http://www.epa.gov/radiation/radionuclides and Gordon Taylor’s ‘The Real Lessons of Fukushima’ from http://www.energypolicy.co.uk/FukushimaRealLessons.pdf.