# Peter Lux

## The Problem of Complex Systems

This post is a bit abstract, but I hope it will help people understand the problems involved in dealing with complex systems. It is abstract because I want to make the ‘complex systems’ as simple as possible. We are told that nuclear power is ‘certainly’ safe, that there is ‘no possibility’ of an accident and that ‘all possible’ accidents have been considered. This post explains why that is not true.

I define a few words, such as state and event, below. These definitions are not connected to other definitions you may see elsewhere.

“When I use a word, it means just what I choose it to mean—neither more nor less.” – Humpty Dumpty

Our system will consist of several components, each of which can be either on (O) or off (X). They can interact: for example, if component A switches from off to on, then component B switches to on. Changing B to on might change component D to on, and so on. However, I am not going to worry about what all these interactions are.

The system is fully connected – all components interact with at least one other component in this way or else they are not part of our system.

A change of state of one of the components (from on to off or off to on) is called an event.

We also assume that our system is not perfect and sometimes a component fails – e.g. component A switches to on but fails to switch B to on or component C fails and switches from on to off.

We can now define a state of the system, which is just one of the possible configurations of its components. For example, if we had 4 components (A, B, C, D), we could have A on, B off, C off, D off (OXXX), or A off, B on, C off, D off (XOXX), and so on. In fact there are 16 possible states:

| ABCD | ABCD | ABCD | ABCD |
|------|------|------|------|
| XOOO | XXOO | XXXO | OXXX |
| OXOO | OXXO | XXOX | OXOX |
| OOXO | OOXX | XOXX | OOOO |
| OOOX | XOOX | XOXO | XXXX |
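These 16 states can be enumerated mechanically. A minimal sketch in Python, using O for on and X for off as above:

```python
from itertools import product

# Every state of a 4-component system is one assignment of
# on (O) or off (X) to each of the components A, B, C, D.
states = ["".join(bits) for bits in product("OX", repeat=4)]

print(len(states))  # prints 16
```

With `repeat=4` this produces exactly the 16 configurations listed in the table; changing `repeat` gives the counts in the next section.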

The number of different states rises rapidly if we increase the number of components:

| Number of components | 4 | 8 | 16 | 32 |
|---|---|---|---|---|
| Number of states | 16 | 256 | 65,536 | 4,294,967,296 |

So even our rather simple binary (on or off) system can have an extremely large number of states, even with a relatively small number of components. The number of possible sequences of events is larger still, since several different sequences can lead to the same state.

| Number of components | 4 | 8 | 16 | 32 |
|---|---|---|---|---|
| Number of sequences | 24 | 40,320 | 20,922,789,888,000 | ≈ 263,130,836,933,694,000,000,000,000,000,000,000 |
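The pattern behind both tables is simple: n on/off components give 2^n states, and n events can be ordered in n! different ways. A quick check in Python:

```python
import math

# States: each extra on/off component doubles the count, so 2**n.
# Sequences: n distinct events can occur in n! different orders.
for n in (4, 8, 16, 32):
    print(n, "components:", 2**n, "states,", math.factorial(n), "sequences")
```

Python integers are arbitrary precision, so `math.factorial(32)` prints the full 36-digit number rather than a rounded approximation.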

It is feasible to analyse a very simple system with only a few components. However, we have no way of fully analysing a system of 32 components – even with an extremely powerful computer and literally ‘all the time in the world’.
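To put a number on ‘all the time in the world’, here is a back-of-the-envelope estimate, assuming a hypothetical computer that checks one billion event sequences per second:

```python
import math

# A (hypothetical) computer checks a billion event sequences per
# second; how long would all 32! sequences take?
checks_per_second = 10**9
seconds_per_year = 365 * 24 * 3600
years = math.factorial(32) / checks_per_second / seconds_per_year
print(f"about {years:.1e} years")
```

The answer is on the order of 10^18 years, vastly longer than the age of the universe (roughly 1.4 × 10^10 years).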

However, we might know that we start in a certain state – for example, all components are off. For the state to change we need an external ‘initiating event’: for example, someone comes along and changes component A to on. This should change component B to on and component C to off. Changing B to on will switch component D to on, and so on. We can think through all the possible outcomes of this ‘initiating event’. We can probably also think about what will happen if there is a fault.
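This kind of cascade can be sketched in a few lines. The interaction rules below are invented for illustration, matching the example just given (A on forces B on and C off; B on forces D on):

```python
# Hypothetical interaction rules, invented for illustration:
# switching a component triggers further switches.
rules = {
    ("A", "O"): [("B", "O"), ("C", "X")],  # A on -> B on, C off
    ("B", "O"): [("D", "O")],              # B on -> D on
}

def apply_event(state, component, value):
    """Apply one event and propagate its knock-on effects."""
    state = dict(state)
    queue = [(component, value)]
    while queue:
        comp, val = queue.pop()
        if state[comp] == val:
            continue                # no change, so no further effects
        state[comp] = val
        queue.extend(rules.get((comp, val), []))
    return state

start = {"A": "X", "B": "X", "C": "X", "D": "X"}  # all off: XXXX
after = apply_event(start, "A", "O")              # the initiating event
print("".join(after[c] for c in "ABCD"))          # prints OOXO
```

Starting from XXXX, the single initiating event cascades to the state OOXO, exactly as the worked example describes. A ‘fault’ would be a transition not in `rules` – which is precisely what the analysis below has to worry about.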

We could then write down a list of what we think are all the possible ‘initiating events’ and work through them. If we are applying this to a real system, we can use our judgement. Below, on the left, is a diagram of various states (not components) and how they are interconnected. For example, one dot may represent the configuration on, off, off, off and another on, off, on, off.

*Connections between different states (not all lines drawn). An initiating event (black arrow) leads to a number of changes of state (green lines). Other possible states (orange lines) can also be looked at.*

In the diagram on the right, the initiating event (black arrow) changes the state of one of the components. This leads to another change of state, and then another – along the green line. We can also ask “what happens if there is a fault?”. There is a small probability that the system changes to a different state (along an orange line) rather than the one it should (along the green line).

We then only have to consider certain states (green dots) and events (green or orange lines).

Our analysis therefore relies on judgement about the possible initiating events and their probabilities. We cannot be certain that particular states can never be reached, except in the simplest, most trivial systems.

If you do end up in a state that has not been analysed, that state may have its own sequence of events and consequences. But since nobody has considered it, there will be no procedures to tell you what to do in those circumstances. This is what happened during the Three Mile Island accident and at Fukushima.

Judgements can only be shown to be errors after real events have proved them wrong. Accidents will happen.