Machinery Safety 101

The Probability Problem

This entry is part 9 of 9 in the series Risk Assessment.

In Occupational Health and Safety (OHS), risk is a function of the severity of the injury and the probability of the injury occurring. Understanding the severity portion is usually fairly easy, although advanced technologies, like lasers, for instance, take advanced hazard analysis methods to assess correctly. Probability, on the other hand, is a challenge. Mathematically, probability calculations go from sublimely simple to insanely complex. The simplest forms, like the probability of getting a number from 1 to 6 when you throw a single die, can be understood pretty easily. On any single throw of the die, you have a 1 in 6 chance of getting any one particular number, assuming the die is not weighted in any way. When we're talking about OHS risk, it's never that easy.
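The fair-die case is simple enough to check empirically. A minimal Python sketch (the function name is my own, purely for illustration):

```python
import random

def estimate_die_probability(face: int, trials: int = 100_000) -> float:
    """Estimate the probability of rolling a given face on a fair six-sided die."""
    hits = sum(1 for _ in range(trials) if random.randint(1, 6) == face)
    return hits / trials

# The theoretical value is 1/6 (about 0.167); the estimate converges toward it.
print(round(estimate_die_probability(3), 2))
```

With enough trials the estimate lands close to 1/6 for any face, which is exactly the "unweighted die" assumption in the text.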

The First Problem: No Data

Risk assessment can be done quantitatively, that is, using numeric data. This approach is often taken when numeric data is available, or where reasonable numeric estimates can be made. The problem with using numeric estimates in quantitative assessments is this: math lends credibility to the answer for most people. Consider these two statements:

  1. After analyzing the available information, we believe that the risk is pretty low, because it is unlikely that the reactor will melt down.
  2. After analyzing the available information, we believe that the risk of a fatality in the surrounding population is very low, because the probability that the reactor will melt down is less than 1 in 1 million.

Which of these statements sounds more 'correct' or more 'authoritative' to you?

Attaching numbers to the statement makes it sound more authoritative, even if there is no reliable data to back it up! If you are going to attempt to use quantitative estimates in a risk assessment, be sure you can back the estimates up with verifiable data. Frequently there is no numeric data, and that forces you to move from a quantitative approach to a semi-quantitative approach, meaning that numbers are assigned to estimates, usually on a scale like 1-5 or 1-10 representing least likely to most likely, or to a fully qualitative approach, meaning that the scales are only descriptive, like 'unlikely, likely, very likely'. These kinds of assessments are much easier to make, as long as the scales used are well designed with clear descriptions for each increment, because the data used in the assessment is the opinion of the assessors. Here's an example, taken from Chris Steel's 1990 article [1]:

Table 1: LO – Likelihood of Occurrence [1]

  • Impossible, cannot happen
  • Almost impossible, possible in extreme circumstances
  • Highly unlikely, though conceivable
  • Unlikely, but could occur
  • Possible, but unusual
  • Even chance, could happen
  • Probable, not surprised
  • Likely, to be expected
  • Certain, no doubt

Some people might say that this scale is too complex, or that the descriptions are not clear enough. I know that the subtleties sometimes get lost in translation, as I discovered when trying to train a group of non-native-English-speaking engineers in the use of the scale. Linguistic challenges can be a major hurdle to overcome! Simpler scales, like the one used in CSA Z432 [2], can be easier to use but may result in gaps that are not easily dealt with. For example:

Table 2: Avoidance [2, Table A.2]

  • Not likely — Cannot move out of the way; or inadequate reaction time; or machine speed greater than 250 mm/s.
  • Likely — Can move out of the way; or sufficient warning/reaction time; or machine speed less than 250 mm/s.

A scale like the previous one may not be specific enough, or fine enough (a quality sometimes referred to as 'granularity'), to be really useful. There are software packages for risk assessment available as well. One now-defunct risk analysis software package, CIRSMA™, used a probability scale that looks like this:

Table 3: Probability of the Hazardous Event

Possible — Easily able to avoid
Normally used to describe hazardous motions or events that:
  • Occur in plain view of the exposed person, and occur at a speed of less than 125 mm/s;
  • Can be readily foreseen or detected by the exposed person before the hazardous event occurs;
  • Are a result of the actions of the anticipated exposed person and are under the direct control of the exposed person; and
  • Arise in circumstances that can be easily and readily modified or corrected to avoid harm once a hazardous situation has materialized.

Possible — Potentially able to avoid
Normally used to describe hazardous motions or events that:
  • Occur in plain view of the exposed person, and occur at a speed of more than 125 mm/s but less than 250 mm/s;
  • Could possibly be foreseen or detected by the exposed person before the hazardous event occurs;
  • Are a result of the actions of the anticipated exposed person and are partially under the control of the exposed person; and
  • Arise in circumstances that can possibly be modified or corrected in order to avoid harm once a hazardous situation has materialized.

Unlikely — Unable to Avoid
Normally used to describe hazardous motions or events that:
  • Occur either in plain view of the exposed person at a speed of more than 250 mm/s, or not in plain view of the exposed person at a speed of less than 250 mm/s;
  • Are not likely to be foreseen or detected by the exposed person before the hazardous event occurs; and
  • Are not a result of the actions of the anticipated exposed person, but could be partially under the control of the exposed person.

Impossible — Injury is Unavoidable
Normally used to describe hazardous motions or events that:
  • Regardless of the location of the hazard, occur at such a speed that the exposed person would have little or no opportunity to escape harm;
  • Could not be foreseen or detected by the exposed person before the hazardous event occurs;
  • Are not a result of the actions of the anticipated exposed person and are not under the control of the exposed person; and
  • Arise in circumstances that cannot be modified or corrected in order to avoid harm once a hazardous situation has materialized.

A scale like this is more descriptive than the CSA scale, but less granular and a bit easier to use than the Steel scale.
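In software, a descriptive scale like these boils down to a lookup from the assessor's chosen description to its position on the scale. A minimal sketch; the five increments and their index values below are illustrative placeholders of my own, not the published values from Steel, CSA Z432, or CIRSMA:

```python
# Illustrative five-increment likelihood scale; the descriptions and the
# 1-based scores are placeholders, not taken from any of the cited tools.
LIKELIHOOD_SCALE = [
    "Impossible, cannot happen",
    "Highly unlikely, though conceivable",
    "Possible, but unusual",
    "Probable, not surprised",
    "Certain, no doubt",
]

def likelihood_score(description: str) -> int:
    """Map an assessor's descriptive choice to a semi-quantitative score (1-5)."""
    if description not in LIKELIHOOD_SCALE:
        raise ValueError(f"Not an increment on the scale: {description!r}")
    return LIKELIHOOD_SCALE.index(description) + 1

print(likelihood_score("Possible, but unusual"))  # → 3
```

Rejecting anything that is not an exact increment is one way of enforcing the "clear descriptions for each increment" requirement discussed above.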

Probability is also influenced by the frequency of exposure to the hazard, and each of the tools mentioned above has a scale for this parameter as well. I'm not going to spend any time on those scales here, but know that they are similar to the ones displayed in terms of granularity and clarity.
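Semi-quantitative methods typically combine the scored parameters into a single ranking number, often as a simple product. A hedged sketch of that pattern; the 1-5 ranges and the product are illustrative, not the formula from Steel's article or from any cited standard:

```python
def risk_score(severity: int, likelihood: int, exposure: int) -> int:
    """Combine semi-quantitative factor scores (each on an illustrative 1-5
    scale) into a single relative risk score. Higher products rank higher
    for attention; the number has no absolute meaning on its own."""
    for name, value in (("severity", severity),
                        ("likelihood", likelihood),
                        ("exposure", exposure)):
        if not 1 <= value <= 5:
            raise ValueError(f"{name} must be on the 1-5 scale, got {value}")
    return severity * likelihood * exposure

print(risk_score(severity=4, likelihood=3, exposure=2))  # → 24
```

The output only makes sense relative to other scores produced with the same scales, which is why well-designed, clearly described increments matter so much.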

The Second Problem: Perception

This is the really big problem, and it's one that even the greatest minds in risk assessment and communication have yet to solve effectively. People judge risk in all sorts of ways, and the human mind has an outstanding ability to mislead us in this area. In an article published in the June 2012 issue of Manufacturing Automation magazine, Dick Morley talks about the 'Monty Hall problem' [3]. In this article, Morley quotes columnist Marilyn vos Savant from her 'Ask Marilyn' column in Parade magazine:

"Suppose you're on a game show and you are given the choice of three doors. Behind one door is a car, behind the others, goats. You pick a door, say, number one (but the door is not opened). And the host, who knows what's behind the doors, opens another door, say, number three, which has a goat. He says to you, 'Do you want to pick door number two?' Is it to your advantage to switch your choice?"

Here is where things start to go astray. If you keep your original choice, your chance of winning the car stays at 1 in 3: nothing the host does changes the fact that your first pick had a 1-in-3 chance of being right. If you switch, you win whenever your first pick was wrong, which happens 2 times in 3, because the host, who knows where the car is, has already eliminated the only losing door among the other two. Switching therefore raises your chances from 33% to 67%, yet most people get this wrong. Mathematically it's easy to see, but humans tend to get emotionally distracted at times like this and make the wrong choice. According to Morley, studies show that pigeons are actually better at this than humans! When we start to talk about risk in abstract numbers, like 'one fatality per year per 1 million population', or stated another way, '1 × 10⁻⁶ fatalities per year' [4], people lose track of what this could mean. We like to fool ourselves with the time frame attached to these figures, so we might tell ourselves that, since it's June now and no one has died, the risk is somehow half of what was stated because half the year is gone. In fact, the risk is exactly the same today as it was on January 1, assuming nothing else has changed.
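The Monty Hall argument is easy to verify by simulation. A short Python sketch comparing the two strategies:

```python
import random

def monty_hall(trials: int = 100_000) -> tuple[float, float]:
    """Simulate the Monty Hall game; return (stay win rate, switch win rate)."""
    stay_wins = switch_wins = 0
    for _ in range(trials):
        car = random.randrange(3)    # door hiding the car
        pick = random.randrange(3)   # contestant's first choice
        # The host opens a door that is neither the pick nor the car.
        opened = next(d for d in range(3) if d != pick and d != car)
        # Switching means taking the one remaining closed door.
        switched = next(d for d in range(3) if d != pick and d != opened)
        stay_wins += (pick == car)
        switch_wins += (switched == car)
    return stay_wins / trials, switch_wins / trials

stay, switch = monty_hall()
print(round(stay, 2), round(switch, 2))  # stay ≈ 0.33, switch ≈ 0.67
```

Run it a few times: staying hovers around one third and switching around two thirds, exactly as the probability argument predicts.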

In a recent court case involving a workplace fatality in Michigan, one expert witness developed a theory of the risk of fatality using the Human Factors approach commonly used in the process and nuclear industries. Using estimates that had no supporting data, he came to the conclusion that the likelihood of a fatality on this particular machine was 1 × 10⁻⁸, or roughly two orders of magnitude less than being hit by lightning. In OHS, we believe that if a hazard exists, it will eventually do harm to someone, as it did in this case. We know without a doubt that a fatality has occurred. The manufacturer's sales department estimated that there were 80-90 units of the same type of machine in the marketplace at the time of the fatality. If we use that estimate of the quantity of that model of machine in the marketplace, we could calculate the risk of a fatality on that model as 1:80 or 1:90 (roughly 1.25 × 10⁻² or 1.1 × 10⁻²), significantly greater than the risk of being struck by lightning, and about six orders of magnitude more than estimated by the expert witness. Estimating risk based on unproven data will result in underestimation of the risk and overconfidence in the safety of the workers involved.
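As a sanity check on estimates like this, the gap between a claimed rate and an observed rate can be computed directly (using the 80-unit end of the sales department's estimate):

```python
import math

expert_estimate = 1e-8   # expert witness's claimed probability of a fatality
observed_rate = 1 / 80   # one known fatality across roughly 80 machines

gap = math.log10(observed_rate / expert_estimate)
print(f"observed rate ≈ {observed_rate:.4f}, "
      f"about {gap:.1f} orders of magnitude above the estimate")
```

One fatality in 80 machines is 1.25 × 10⁻², about six orders of magnitude above the 1 × 10⁻⁸ figure, which is the kind of discrepancy that unsupported quantitative estimates can hide.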


Once a risk assessment is completed and the appropriate risk controls are implemented following the Hierarchy of Controls, the residual risk must be communicated to the people who are exposed to it. This allows them to make an informed decision about the risk, choosing to do the task, modify the task, or not do the task at all. This is called 'informed consent', and it is the same principle used by doctors when discussing treatment options with patients. If the risk changes for some reason, the change also needs to be communicated. Communication about risk helps us to resist complacency about the risks that we deal with every day, and helps to avoid confusion about what the risk 'really is'.

Risk Perception

Risk perception is an area of study that tries to help us better understand how various kinds of risks are perceived, and perhaps how best to communicate those risks to the people who are exposed. In a report prepared at the UK's Health and Safety Laboratory in 2005 [5], authors Williamson and Weyman discuss several ways of assessing risk perception.

One approach, described by Renn [6], attempts to chart four different aspects of risk perception in people's thinking.

Fig. 1 – Graphing Risk Perception Factors [6]

An example of these factors plotted on a graph is shown in Fig. 2 below. The data points plotted on the chart are developed by surveying the exposed population and then charting the frequency of their responses to the survey questions.

Fig. 2 – Graphing 'dread' [4]

There are two factors charted on this graph. On the vertical axis, 'Factor 2' is the perceptibility of the risk, or how easily the risk is detected. On the horizontal axis is 'Factor 1 – The Dread Risk', or how much 'dread' we have of certain outcomes. In Fig. 3 below you can see the assignment of factors to the positive and negative directions on these axes.

Fig. 3 – 'Dread' factors [4]

At this point, I can say that we are a long way from being able to use this approach effectively when considering machinery safety, but as practitioners we need to consider these approaches when we communicate risk to our customers, users and workers.


When you are thinking about risk, it's important to be clear about the basis for the risk you are considering. Make sure that you are using valid, verifiable data, especially if you are calculating a numeric value to represent the probability of harm. Where numeric data isn't available, use the semi-quantitative and qualitative scoring tools that are available to simplify the process and enable you to develop sound evaluations of the risk involved.

Need more help? Contact me! Doug Nix


[1]   C. Steel, "Risk estimation," The Safety and Health Practitioner, pp. 20-22, June 1990.

[2]   Safeguarding of Machinery, CSA Standard Z432-1994 (R1999).

[3]   R. Morley, "Analyzing Risk: The Monty Hall problem," Manufacturing Automation, June 2012, p. 26.

[4]   J. D. Rimington and S. A. Harbison, "The Tolerability of Risk from Nuclear Power Stations," Health and Safety Executive, Her Majesty's Stationery Office, London, UK, 1992.

[5]   J. Williamson and A. Weyman, "Review of the Public Perception of Risk, and Stakeholder Engagement," The Crown, London, UK, Rep. HSL/2005/16, 2005.

[6]   O. Renn, "The Role of Risk Perception for Risk Management," in Pollution Risk Assessment and Management, P. E. T. Douben, Ed. Chichester: John Wiley & Sons, 1998, pp. 429-450.

