If you flip an unbiased coin ten times and get 3 heads and 7 tails, you might conclude that the next flip has only a three-in-ten (30%) chance of landing on heads. This, as everyone knows, is completely untrue: the coin is fair, so every flip has a 50% chance of heads regardless of the small sample you happened to observe.
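A quick sketch makes the point. The function below (a hypothetical helper, not from the article) flips a simulated fair coin: a small sample can easily stray from 50% heads, while a large one settles close to it.

```python
import random

random.seed(42)  # fixed seed so the illustration is repeatable

def heads_proportion(flips):
    """Flip a fair coin `flips` times and return the fraction of heads."""
    return sum(random.random() < 0.5 for _ in range(flips)) / flips

# Ten flips can easily give 3 heads, or 7, or anything in between...
print(f"10 flips: {heads_proportion(10):.0%} heads")

# ...but over many flips the proportion converges towards the true 50%.
print(f"100,000 flips: {heads_proportion(100_000):.1%} heads")
```

The true probability never changed; only the size of the sample we used to estimate it did.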
Take the simple coin-flipping example above and make it more complex, and you can end up with something like the ‘healthcare chance calculator’ found on the BBC news website.
The calculator samples from a distribution to determine whether each patient lives or dies following an operation. If you are unlucky you might end up with 20 hospitals with an unacceptable rating (based on the number of deaths that occur), purely because of how the coin fell for each patient.
In real life we can’t rewind and start again. A hospital with a high number of deaths will be branded unacceptable and will presumably have action taken against it, even though it may not have had complete control over that outcome.
If we were able to rewind and start again with the same figures, a patient who died might instead live, and all of a sudden the hospital sits clear of the unacceptable category. So what has changed? The doctors are just as skilled, the patient had exactly the same chance of dying – it’s just that chance flipped its coin again and this time the patient survived.
So how does this relate to simulation? Running a simulation once is exactly the same as clicking the ‘calculator’ once: you may be lucky and your hospital is acceptable, or unlucky and find yourself on the end of an investigation. We need to run a simulation multiple times to strip out the part of the result that is purely down to chance, leaving only the true behaviour of the model. So identify your key results, add them to the results summary and run a trial made up of many runs – you wouldn’t want to leave anything down to chance!
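The idea can be sketched with a toy mortality model. All the numbers below are illustrative assumptions, not figures from the BBC calculator: a per-patient death risk, a patient count per run, and a death count at which a hospital is branded unacceptable. One run gives a verdict decided by luck; a trial of many runs reveals the model's true behaviour.

```python
import random

random.seed(1)  # fixed seed so the sketch is repeatable

DEATH_PROB = 0.07   # assumed true per-patient mortality risk (illustrative)
PATIENTS = 100      # patients treated per run (illustrative)
THRESHOLD = 10      # deaths at which a run is branded 'unacceptable' (hypothetical)

def one_run():
    """One replication: chance flips its coin for each patient."""
    return sum(random.random() < DEATH_PROB for _ in range(PATIENTS))

# A single run is one click of the 'calculator' - its verdict is partly luck.
deaths = one_run()
verdict = "unacceptable" if deaths >= THRESHOLD else "acceptable"
print(f"Single run: {deaths} deaths -> {verdict}")

# A trial of many runs averages the chance away.
runs = [one_run() for _ in range(10_000)]
mean_deaths = sum(runs) / len(runs)
share_flagged = sum(d >= THRESHOLD for d in runs) / len(runs)
print(f"Trial of {len(runs)} runs: mean {mean_deaths:.1f} deaths, "
      f"{share_flagged:.0%} of runs flagged unacceptable")
```

With these assumed figures the long-run mean settles near the expected 7 deaths, while a non-trivial share of individual runs still cross the threshold – exactly the gap between a single run and a trial that the paragraph above describes.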