The Probability Problem


In Occupational Health and Safety (OHS), risk is a function of the severity of injury and the probability of the injury occurring. Understanding the severity portion is usually fairly easy, although advanced technologies, like lasers, require advanced hazard analysis methods to assess correctly. Probability, on the other hand, is a challenge. Mathematically, probability calculations range from sublimely simple to insanely complex. The simplest forms, like the probability of getting a number from 1 to 6 when you throw a single die, are easy to understand: on any one throw, you have a 1 in 6 chance of getting any particular number, assuming the die is not weighted in any way. When we're talking about OHS risk, it's never that easy.

The First Problem: No Data

Risk assessment can be done quantitatively, that is, using numeric data. This approach is often taken when numeric data is available, or where reasonable numeric estimates can be made. The problem with using numeric estimates in quantitative assessments is this: math lends credibility to the answer for most people. Consider these two statements:

  1. After analyzing the available information, we believe that the risk is pretty low, because it is unlikely that the reactor will melt down.
  2. After analyzing the available information, we believe that the risk of a fatality in the surrounding population is very low, because the probability that the reactor will melt down is less than 1 in 1 million.

Which of these statements sounds more 'correct' or more 'authoritative' to you?

Attaching numbers to the statement makes it sound more authoritative, even if there is no reliable data to back it up! If you are going to use quantitative estimates in a risk assessment, be sure you can back the estimates up with verifiable data. Frequently there is no numeric data, and that forces you to move from a quantitative approach to either a semi-quantitative approach, where numbers are assigned to estimates on a scale, like 1–5 or 1–10 representing least likely to most likely, or a fully qualitative approach, where the scales are only descriptive, like 'unlikely, likely, very likely'. Because the data used in these assessments is the opinion of the assessors, they are much easier to make, as long as the scales are well designed, with clear descriptions for each increment. Here's an example, taken from Chris Steel's 1990 article [1]:

Table 1: LO — Likelihood of Occurrence [1]
Each description below is assigned a numeric scale value, ordered from least to most likely:
  • Impossible, cannot happen
  • Almost impossible, possible in extreme circumstances
  • Highly unlikely, though conceivable
  • Unlikely, but could occur
  • Possible, but unusual
  • Even chance, could happen
  • Probable, not surprised
  • Likely, to be expected
  • Certain, no doubt
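
To see how a semi-quantitative scale like this gets used in practice, here is a minimal sketch in Python. The numeric scale values and the simple likelihood × severity scoring are illustrative assumptions of mine, not Steel's actual figures:

```python
# Minimal semi-quantitative risk scoring sketch.
# NOTE: these scale values are illustrative placeholders,
# not the actual values from Steel's 1990 article.

LIKELIHOOD = {
    "impossible": 0,
    "almost impossible": 1,
    "highly unlikely": 2,
    "unlikely": 3,
    "possible": 4,
    "even chance": 5,
    "probable": 6,
    "likely": 7,
    "certain": 8,
}

SEVERITY = {
    "negligible": 1,   # first aid only
    "minor": 2,        # medical treatment, full recovery
    "serious": 3,      # permanent injury
    "fatal": 4,        # one or more fatalities
}

def risk_score(likelihood: str, severity: str) -> int:
    """Combine ordinal likelihood and severity scores into a single risk score."""
    return LIKELIHOOD[likelihood] * SEVERITY[severity]

# Example: an unlikely but potentially fatal event.
print(risk_score("unlikely", "fatal"))  # 12
```

A scoring tool like this buys consistency between assessors rather than mathematical precision: the numbers are ordinal labels, not measured probabilities.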

Some people might say that this scale is too complex, or that the descriptions are not clear enough. I know that the subtleties sometimes get lost in translation, as I discovered when trying to train a group of non-native-English-speaking engineers in the use of the scale. Linguistic challenges can be a major hurdle to overcome! Simpler scales, like the one used in CSA Z432 [2], can be easier to use, but may result in gaps that are not easily dealt with. For example:

Table 2: Avoidance [2, Table A.2]
  • Not likely — Cannot move out of the way; or inadequate reaction time; or machine speed greater than 250 mm/s.
  • Likely — Can move out of the way; or sufficient warning/reaction time; or machine speed less than 250 mm/s.

A scale like the previous one may not be specific enough, or fine enough (a quality sometimes referred to as 'granularity'), to be really useful. There are software packages for risk assessment available as well. One popular product, CIRSMA™, uses a scale that looks like this:

Table 3: Probability of the Hazardous Event
Possible — Easily able to avoid
Normally used to describe hazardous motions or events that:
  • Occur in plain view of the exposed person, and occur at a speed of less than 125 mm/s;
  • Can be readily foreseen or detected by the exposed person before the hazardous event occurs;
  • Are a result of the actions of the anticipated exposed person and are under the direct control of the exposed person; and
  • Circumstances can be easily and readily modified or corrected to avoid harm once a hazardous situation has materialized.
Possible — Potentially able to avoid
Normally used to describe hazardous motions or events that:
  • Occur in plain view of the exposed person, and occur at a speed of more than 125 mm/s but less than 250 mm/s;
  • Could possibly be foreseen or detected by the exposed person before the hazardous event occurs;
  • Are a result of the actions of the anticipated exposed person and are partially under the control of the exposed person; and
  • Circumstances can possibly be modified or corrected in order to avoid harm once a hazardous situation has materialized.
Unlikely — Unable to avoid
Normally used to describe hazardous motions or events that:
  • Occur either in plain view of the exposed person at a speed of more than 250 mm/s, or not in plain view of the exposed person at a speed of less than 250 mm/s;
  • Are not likely to be foreseen or detected by the exposed person before the hazardous event occurs; and
  • Are not a result of the actions of the anticipated exposed person, but could be partially under the control of the exposed person.
Impossible — Injury is unavoidable
Normally used to describe hazardous motions or events that:
  • Regardless of the location of the hazard, occur at such a speed that the exposed person would have little or no opportunity to escape harm;
  • Could not be foreseen or detected by the exposed person before the hazardous event occurs;
  • Are not a result of the actions of the anticipated exposed person and are not under the control of the exposed person; and
  • Circumstances cannot be modified or corrected in order to avoid harm once a hazardous situation has materialized.

A scale like this is more descriptive than the CSA scale, but less granular and a bit easier to use than the Steel table.
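
As an illustration of how a tool might apply parameters like these, here is a rough Python sketch that selects an avoidance category from motion speed and visibility. The thresholds follow Table 3, but the function itself is my own simplification, not CIRSMA's actual logic:

```python
def avoidance_category(speed_mm_s: float, in_plain_view: bool,
                       controllable: bool) -> str:
    """Pick a probability-of-avoidance category from a few parameters.

    A simplified reading of Table 3; a real tool weighs more factors,
    such as foreseeability and whether circumstances can be corrected.
    """
    if not in_plain_view and speed_mm_s >= 250:
        return "Impossible - injury is unavoidable"
    if speed_mm_s >= 250 or not in_plain_view:
        return "Unlikely - unable to avoid"
    if speed_mm_s >= 125 or not controllable:
        return "Possible - potentially able to avoid"
    return "Possible - easily able to avoid"

# Example: a visible motion at 200 mm/s, only partially under the
# exposed person's control.
print(avoidance_category(200, in_plain_view=True, controllable=False))
# -> "Possible - potentially able to avoid"
```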

Probability is also influenced by the frequency of exposure to the hazard, and each of the tools mentioned above has scales for this parameter as well. I'm not going to spend any time on those scales here, but know that they are similar to the ones displayed in terms of granularity and clarity.

The Second Problem: Perception

This is the really big problem, and it's one that even the greatest minds in risk assessment and communication have yet to solve effectively. People judge risk in all sorts of ways, and the human mind has an outstanding ability to mislead us in this area. In a recent article published in the June 2012 issue of Manufacturing Automation magazine, Dick Morley talks about the 'Monty Hall problem' [3]. In the article, Morley quotes columnist Marilyn vos Savant from her 'Ask Marilyn' column in Parade magazine:

"Suppose you're on a game show and you are given the choice of three doors. Behind one door is a car; behind the others, goats. You pick a door, say, number one (but the door is not opened). And the host, who knows what's behind the doors, opens another door, say, number three, which has a goat. He says to you, 'Do you want to pick door number two?' Is it to your advantage to switch your choice?"

Here is where things start to go astray. If you keep your original choice, your chance of winning the car is 1 in 3, since your first pick is right only one time in three. If you switch, you win whenever your first pick was wrong, so your chance of winning the car becomes 2 in 3: the host's reveal of the goat behind door three concentrates the remaining probability on the other unopened door. By changing your decision, your chances go from 33% to 66% in one move, yet most people get this wrong. Mathematically it's easy to see, but humans tend to get emotionally distracted at times like this, and make the wrong choice. According to Morley, studies show that pigeons are actually better at this than humans! When we start to talk about risk in abstract numbers, like 'one fatality per year per 1 million population', or stated another way, '1 × 10⁻⁶ fatalities per year' [4], people lose track of what this could mean. We like to reassure ourselves by attaching a time frame to these numbers, so we might tell ourselves that, since it's June now and no one has died, the risk is somehow half of what was stated, since half the year is gone. In fact, the risk is exactly the same today as it was on January 1, assuming nothing else has changed.
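
If the 2-in-3 result feels wrong, simulation is a quick way to convince yourself. Here is a minimal Python sketch of the game; it is not from Morley's article, it just plays the game many times and counts the wins for each strategy:

```python
import random

def play(switch: bool) -> bool:
    """Play one round of Monty Hall; return True if the player wins the car."""
    doors = [0, 1, 2]
    car = random.choice(doors)
    pick = random.choice(doors)
    # Host opens a door that is neither the player's pick nor the car.
    host = random.choice([d for d in doors if d != pick and d != car])
    if switch:
        # Switch to the one remaining unopened door.
        pick = next(d for d in doors if d != pick and d != host)
    return pick == car

trials = 100_000
stay = sum(play(switch=False) for _ in range(trials)) / trials
swap = sum(play(switch=True) for _ in range(trials)) / trials
print(f"stay: {stay:.3f}, switch: {swap:.3f}")  # ~0.333 vs ~0.667
```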

In a recent court case involving a workplace fatality, one expert witness developed a theory of the risk of the fatality using the Human Factors approach commonly used in the process and nuclear industries. Using estimates that had no supporting data, he came to the conclusion that the likelihood of a fatality on this particular machine was 1 × 10⁻⁸, or roughly two orders of magnitude less than the chance of being hit by lightning. In OHS, we believe that if a hazard exists, it will eventually do harm to someone, as it did in this case. We know without a doubt that a fatality has occurred. The manufacturer's sales department estimated that there were 80–90 units of the same type in the marketplace at the time of the fatality. If we use that estimate of the number of that model of machine in the marketplace, we could calculate the risk of a fatality on that model as 1:80 to 1:90, or roughly 1 × 10⁻², significantly greater than the risk of being struck by lightning, and some six orders of magnitude more than estimated by the expert witness. Estimating risk based on unproven data will result in underestimation of the risk and overconfidence in the safety of the workers involved.
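
The arithmetic behind that comparison is worth making explicit. A few lines of Python, using the figures from the paragraph above (with 85 machines taken as the midpoint of the 80–90 estimate), show the gap between the claimed and observed rates:

```python
import math

claimed = 1e-8     # expert witness's estimated probability of a fatality
observed = 1 / 85  # one known fatality across roughly 80-90 machines (midpoint)

print(f"observed rate: {observed:.2e}")    # 1.18e-02
print(f"ratio: {observed / claimed:.1e}")  # 1.2e+06
print(f"orders of magnitude apart: {math.log10(observed / claimed):.1f}")  # 6.1
```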


Once a risk assessment is completed and the appropriate risk controls are implemented following the hierarchy of controls, the residual risk must be communicated to the people who are exposed to it. This allows them to make an informed decision about the risk: to do the task, to modify the task, or to decline the task altogether. This is called 'informed consent', and it is exactly the same idea used by doctors when discussing treatment options with patients. If the risk changes for some reason, the change also needs to be communicated. Communication about risk helps us to resist complacency about the risks that we deal with every day, and helps to avoid confusion about what the risk 'really is'.

Risk Perception

Risk perception is an area of study that is trying to help us better understand how various kinds of risks are perceived, and perhaps how best to communicate those risks to the people who are exposed. In a report prepared at the UK's Health and Safety Laboratory in 2005 [5], authors Williamson and Weyman discuss several ways of assessing risk perception.

One approach, described by Renn [6], attempts to chart four different aspects of risk perception in people's thinking.

Fig. 1 — Graphing Risk Perception Factors

An example of these factors plotted on a graph is shown in Fig. 2 below. The data points plotted on the chart are developed by surveying the exposed population and then charting the frequency of their responses to the questions.

Fig. 2 — Graphing 'Dread'

There are two factors charted on this graph. On the vertical axis, 'Factor 2' is the perceptibility of the risk, or how easily the risk is detected. On the horizontal axis is 'Factor 1 — The Dread Risk', or how much 'dread' we have of certain outcomes. In Fig. 3 below you can see the assignment of factors to the positive and negative directions on these axes.

Fig. 3 — Dread Factors
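
To make the survey-charting idea concrete, here is a small matplotlib sketch that places a few hazards on the dread and perceptibility axes. The hazard names and coordinates are invented for illustration; real studies like Renn's derive them from survey response frequencies:

```python
import matplotlib.pyplot as plt

# Hypothetical (dread, perceptibility) scores; real values come from surveys.
hazards = {
    "hand tools": (-1.5, 1.2),
    "machine entanglement": (0.8, -0.5),
    "radiation leak": (2.2, -1.8),
}

fig, ax = plt.subplots()
for name, (dread, percept) in hazards.items():
    ax.scatter(dread, percept)
    ax.annotate(name, (dread, percept))
ax.axhline(0, color="grey")
ax.axvline(0, color="grey")
ax.set_xlabel("Factor 1 - dread risk")
ax.set_ylabel("Factor 2 - perceptibility of risk")
plt.show()
```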



At this point, I can say that we are a long way from being able to use this approach effectively when considering machinery safety, but as practitioners, we need to begin to consider these approaches when we communicate risk to our customers, users and workers.


When you are thinking about risk, it's important to be clear about the basis for the risk you are considering. Make sure that you are using valid, verifiable data, especially if you are calculating a numeric value to represent the probability. Where numeric data isn't available, use the semi-quantitative and qualitative scoring tools that are available to simplify the process and enable you to develop sound evaluations of the risk involved.

Need more help? Contact me!


[1]     C. Steel, "Risk estimation," The Safety and Health Practitioner, pp. 20–22, June 1990.

[2]     Safeguarding of Machinery, CSA Standard Z432-1994 (R1999).

[3]     R. Morley, "Analyzing Risk: The Monty Hall problem," Manufacturing Automation, June 2012, p. 26.

[4]     J. D. Rimington and S. A. Harbison, "The Tolerability of Risk from Nuclear Power Stations," Health and Safety Executive, Her Majesty's Stationery Office, London, UK, 1992.

[5]     J. Williamson and A. Weyman, "Review of the Public Perception of Risk, and Stakeholder Engagement," The Crown, London, UK, Rep. HSL/2005/16, 2005.

[6]     O. Renn, "The Role of Risk Perception for Risk Management," in Pollution Risk Assessment and Management, P. E. T. Douben, Ed. Chichester, UK: John Wiley & Sons, 1998, pp. 429–450.


Author: Doug Nix

Doug Nix is Managing Director and Principal Consultant at Compliance InSight Consulting, Inc. in Kitchener, Ontario, and is Lead Author and Senior Editor of the Machinery Safety 101 blog. Doug's work includes teaching machinery risk assessment techniques privately and through Conestoga College Institute of Technology and Advanced Learning in Kitchener, Ontario, as well as providing technical services and training programs to clients related to risk assessment, industrial machinery safety, safety-related control system integration and reliability, laser safety and regulatory conformity. For more, see Doug's LinkedIn profile.