
The Probability Problem

2012 June 25
by Doug Nix
This entry is part of the series Risk Assessment

In Occupational Health and Safety (OHS), risk is a function of the severity of injury and the probability of the injury occurring. Understanding the severity portion is usually fairly easy, although advanced technologies, like lasers for instance, take advanced hazard analysis methods to assess correctly. Probability, on the other hand, is a challenge. Mathematically, probability calculations range from sublimely simple to insanely complex. The simplest forms, like the probability of getting a number from 1 to 6 when you throw a single die, can be understood pretty easily: on any one throw of the die, you have a 1 in 6 chance of getting any one particular number, assuming the die is not weighted in any way. When we’re talking about OHS risk, it’s never that easy.

The First Problem: No Data

Risk assessment can be done quantitatively, that is, using numeric data. This approach is often taken when numeric data is available, or where reasonable numeric estimates can be made. The problem with using numeric estimates in quantitative assessments is this: math lends credibility to the answer for most people. Consider these two statements:

  1. After analyzing the available information, we believe that the risk is pretty low, because it is unlikely that the reactor will melt down.
  2. After analyzing the available information, we believe that the risk of a fatality in the surrounding population is very low, because the probability that the reactor will melt down is less than 1 in 1 million.

Which of these statements sounds more ‘correct’ or more ‘authoritative’ to you?

Attaching numbers to the statement makes it sound more authoritative, even if there is no reliable data to back it up! If you are going to use quantitative estimates in a risk assessment, be sure you can back the estimates up with verifiable data. Frequently there is no numeric data, and that forces you to move from a quantitative approach to either a semi-quantitative approach, meaning that numbers are assigned to estimates, usually on a scale like 1–5 or 1–10 representing least likely to most likely, or a fully qualitative approach, meaning that the scales are only descriptive, like ‘unlikely, likely, very likely’. These kinds of assessments are much easier to make, as long as the scales are well designed with clear descriptions for each increment, because the data used in the assessment is the opinion of the assessors. Here’s an example, taken from Chris Steel’s 1990 article [1]:

Table 1: LO (Likelihood of Occurrence)

Scale Value   Description
0             Impossible, cannot happen
0.1           Almost impossible, possible in extreme circumstances
0.5           Highly unlikely, though conceivable
1             Unlikely, but could occur
2             Possible, but unusual
5             Even chance, could happen
8             Probable, not surprised
10            Likely, to be expected
15            Certain, no doubt
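
For assessors building their own tools, a scale like this is straightforward to encode. The sketch below is a minimal example of representing Table 1 as a lookup; the `lo_score` helper and its error handling are my own illustration, not part of Steel’s method:

```python
# Steel's Likelihood of Occurrence (LO) scale from Table 1,
# encoded as a lookup from the assessor's wording to a scale value.
LO_SCALE = {
    "impossible": 0,           # cannot happen
    "almost impossible": 0.1,  # possible in extreme circumstances
    "highly unlikely": 0.5,    # though conceivable
    "unlikely": 1,             # but could occur
    "possible": 2,             # but unusual
    "even chance": 5,          # could happen
    "probable": 8,             # not surprised
    "likely": 10,              # to be expected
    "certain": 15,             # no doubt
}

def lo_score(description: str) -> float:
    """Return the LO scale value for an assessor's wording (exact match only)."""
    key = description.strip().lower()
    if key not in LO_SCALE:
        raise ValueError(f"No LO value defined for {description!r}")
    return LO_SCALE[key]

print(lo_score("Even chance"))  # -> 5
```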

Some people might say that this scale is too complex, or that the descriptions are not clear enough. I know that the subtleties sometimes get lost in translation, as I discovered when trying to train a group of non-native-English-speaking engineers in the use of the scale. Linguistic challenges can be a major hurdle to overcome! Simpler scales, like the one used in CSA Z432 [2], can be easier to use, but may result in gaps that are not easily dealt with. For example:

Table 2: Avoidance [2, Table A.2]

Category A2 (Not likely): Cannot move out of the way; or inadequate reaction time; or machine speed greater than 250 mm/s.

Category A1 (Likely): Can move out of the way; or sufficient warning/reaction time; or machine speed less than 250 mm/s.
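
One kind of gap is easy to demonstrate if you encode the speed criterion from Table 2 directly: a machine moving at exactly 250 mm/s matches neither category. The function below is my own illustration, covering only the speed criterion:

```python
def avoidance_category(speed_mm_per_s: float) -> str:
    """Classify avoidance by the speed criterion in Table 2 (illustrative only).

    Table 2 says 'greater than 250 mm/s' for A2 and 'less than 250 mm/s'
    for A1, so a speed of exactly 250 mm/s satisfies neither criterion.
    """
    if speed_mm_per_s > 250:
        return "A2 (Not likely to avoid)"
    if speed_mm_per_s < 250:
        return "A1 (Likely to avoid)"
    return "Undefined: falls in the gap between A1 and A2"

print(avoidance_category(250))  # -> Undefined: falls in the gap between A1 and A2
```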

A scale like the previous one may not be specific enough, or fine enough (sometimes referred to as ‘granularity’), to be really useful. There are software packages for risk assessment available as well. One popular product, called CIRSMA, uses a scale that looks like this:

Table 3: Probability of the Hazardous Event

Parameter Selection: Possible — Easily able to avoid
Normally used to describe hazardous motions or events that:

  • Occur in plain view of the exposed person, and occur at a speed of less than 125 mm/s;
  • Can be readily foreseen or detected by the exposed person before the hazardous event occurs;
  • Are a result of the actions of the anticipated exposed person and are under the direct control of the exposed person; and
  • Arise in circumstances that can be easily and readily modified or corrected to avoid harm once a hazardous situation has materialized.

Parameter Selection: Possible — Potentially able to avoid
Normally used to describe hazardous motions or events that:

  • Occur in plain view of the exposed person, and occur at a speed of more than 125 mm/s but less than 250 mm/s;
  • Could possibly be foreseen or detected by the exposed person before the hazardous event occurs;
  • Are a result of the actions of the anticipated exposed person and are partially under the control of the exposed person; and
  • Arise in circumstances that can possibly be modified or corrected in order to avoid harm once a hazardous situation has materialized.

Parameter Selection: Unlikely — Unable to avoid
Normally used to describe hazardous motions or events that:

  • Occur either in plain view of the exposed person at a speed of more than 250 mm/s, or not in plain view of the exposed person at a speed of less than 250 mm/s;
  • Are not likely to be foreseen or detected by the exposed person before the hazardous event occurs; and
  • Are not a result of the actions of the anticipated exposed person, but could be partially under the control of the exposed person.

Parameter Selection: Impossible — Injury is unavoidable
Normally used to describe hazardous motions or events that:

  • Regardless of the location of the hazard, occur at such a speed that the exposed person would have little or no opportunity to escape harm;
  • Could not be foreseen or detected by the exposed person before the hazardous event occurs;
  • Are not a result of the actions of the anticipated exposed person and are not under the control of the exposed person; and
  • Arise in circumstances that cannot be modified or corrected to avoid harm once a hazardous situation has materialized.

A scale like this is more descriptive than the CSA scale, but less granular and a bit easier to use than the Steel table.

Probability is also influenced by the frequency of exposure to the hazard, and each of the tools mentioned above has scales for this parameter as well. I’m not going to spend any time on those scales here, but know that they are similar to the ones shown above in terms of granularity and clarity.
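
Each tool defines its own frequency-of-exposure scale and its own way of combining the parameters. Purely as an illustration of the mechanics, here is a minimal sketch that multiplies a Table 1 likelihood value by a frequency-of-exposure value; the FE numbers and the multiplicative rule are my assumptions for demonstration, not taken from Steel, CSA Z432 or CIRSMA.

```python
# Illustrative only: combining Likelihood of Occurrence (LO, Table 1)
# with a frequency-of-exposure (FE) factor. The FE values below are
# assumed for demonstration; real tools publish their own scales
# and combination rules.
FE_SCALE = {
    "annually": 0.5,
    "monthly": 1.0,
    "weekly": 1.5,
    "daily": 2.5,
    "hourly": 4.0,
    "constantly": 5.0,
}

def probability_rank(lo: float, fe: float) -> float:
    """Combine LO and FE multiplicatively (an assumed rule, not a standard)."""
    return lo * fe

# An 'even chance' hazardous event (LO = 5) with daily exposure:
print(probability_rank(5, FE_SCALE["daily"]))  # -> 12.5
```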

The Second Problem: Perception

This is the really big problem, and it’s one that even the greatest minds in risk assessment and communication have yet to solve effectively. People judge risk in all sorts of ways, and the human mind has an outstanding ability to mislead us in this area. In a recent article published in the June 2012 issue of Manufacturing Automation Magazine, Dick Morley talks about the ‘Monty Hall problem’ [3]. In this article, Morley quotes columnist Marilyn vos Savant from her ‘Ask Marilyn’ column in Parade Magazine:

“Suppose you’re on a game show and you are given the choice of three doors. Behind one door is a car; behind the others, goats. You pick a door, say, number one (but the door is not opened). And the host, who knows what’s behind the doors, opens another door, say, number three, which has a goat. He says to you, ‘Do you want to pick door number two?’ Is it to your advantage to switch your choice?”

Here is where things start to go astray. If you keep your original choice, your chance of winning the car is 1 in 3, since the car could be behind any of the three doors. If you switch after the host opens door three, your chance of winning becomes 2 in 3: the host, who knows where the car is, has eliminated a losing door for you, so switching wins whenever your first pick was wrong, which happens two times out of three. Your chances go from 33% to 67% in one move, yet most people get this wrong. Mathematically it’s easy to see, but humans tend to get emotionally distracted at times like this, and make the wrong choice. According to Morley, studies show that pigeons are actually better at this than humans! When we start to talk about risk in abstract numbers, like ‘one fatality per year per 1 million population’ or, stated another way, ‘1 × 10⁻⁶ fatalities per year’ [4], people lose track of what this could mean. We like to reassure ourselves by attaching a time frame to these things, so we might tell ourselves that, since it’s June now and no one has died, the risk is somehow half of what was stated, since half the year is gone. In fact, the risk is exactly the same today as it was on January 1, assuming nothing else has changed.
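
Because intuition is so unreliable here, a quick simulation is a useful sanity check. The sketch below (my own illustration, not from Morley’s article) plays many rounds under both strategies and reproduces the 1/3 versus 2/3 split:

```python
import random

def play(switch: bool) -> bool:
    """Play one round of Monty Hall; return True if the player wins the car."""
    doors = [0, 1, 2]
    car = random.choice(doors)
    pick = random.choice(doors)
    # The host opens a door that is neither the player's pick nor the car.
    opened = random.choice([d for d in doors if d != pick and d != car])
    if switch:
        # Switch to the one remaining unopened door.
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == car

trials = 100_000
stay = sum(play(switch=False) for _ in range(trials)) / trials
swap = sum(play(switch=True) for _ in range(trials)) / trials
print(f"stay: {stay:.3f}, switch: {swap:.3f}")  # ~0.333 vs ~0.667
```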

In a recent court case involving a workplace fatality, one expert witness developed a theory of the risk of the fatality using the Human Factors approach commonly used in the process and nuclear industries. Using estimates that had no supporting data, he came to the conclusion that the likelihood of a fatality on this particular machine was 1 × 10⁻⁸, or roughly two orders of magnitude less than the risk of being hit by lightning. In OHS, we believe that if a hazard exists, it will eventually do harm to someone, as it did in this case. We know without a doubt that a fatality has occurred. The manufacturer’s sales department estimated that there were 80–90 units of the same type in the marketplace at the time of the fatality. If we use that estimate of the number of that model of machine in the marketplace, we can calculate the risk of a fatality on that model as 1:80 to 1:90 (roughly 1.1 × 10⁻² to 1.25 × 10⁻²), significantly greater than the risk of being struck by lightning, and roughly six orders of magnitude more than estimated by the expert witness. Estimating risk based on unproven data will result in underestimation of the risk and overconfidence in the safety of the workers involved.
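
The arithmetic behind that comparison is worth making explicit. A back-of-envelope check, assuming the midpoint of the 80 to 90 machine estimate:

```python
import math

expert_estimate = 1e-8  # expert witness's estimated likelihood of a fatality
fleet_size = 85         # assumed midpoint of the 80-90 machines estimate
observed_rate = 1 / fleet_size  # one known fatality across the fleet

print(f"observed rate ~ {observed_rate:.2e}")  # ~1.18e-02
gap = math.log10(observed_rate / expert_estimate)
print(f"about {gap:.1f} orders of magnitude above the expert's estimate")  # ~6.1
```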

Communication

Once a risk assessment is completed and the appropriate risk controls implemented following the Hierarchy of Controls, the residual risk must be communicated to the people who are exposed to it. This allows them to make an informed decision about the risk: to do the task, modify the task, or not do the task at all. This is called ‘informed consent’, and it is exactly the same principle used by doctors when discussing treatment options with patients. If the risk changes for some reason, the change also needs to be communicated. Communication about risk helps us resist complacency about the risks we deal with every day, and helps avoid confusion about what the risk ‘really is’.

Risk Perception

Risk perception is an area of study that is trying to help us better understand how various kinds of risks are perceived, and perhaps how best to communicate those risks to the people who are exposed. In a report prepared at the UK’s Health and Safety Laboratory in 2005 [5], authors Williamson and Weyman discuss several ways of assessing risk perception.

One approach, described by Renn [6], attempts to chart four different aspects of risk perception in people’s thinking.

Fig. 1 — Graphing Risk Perception Factors

An example of these factors plotted on a graph is shown in Fig. 2 below. The data points plotted on the chart are developed by surveying the exposed population and then charting the frequency of their responses to the questions.

Fig. 2 — Graphing ‘Dread’

There are two factors charted on this graph. On the vertical axis, ‘Factor 2’ is the perceptibility of the risk, or how easily the risk is detected. On the horizontal axis is ‘Factor 1 — The Dread Risk’, or how much ‘dread’ we have of certain outcomes. In Fig. 3 below you can see the assignment of factors to the positive and negative directions on these axes.

Fig. 3 — Dread Factors
At this point, I can say that we are a long way from being able to use this approach effectively when considering machinery safety, but as practitioners, we need to begin to consider these approaches when we communicate risk to our customers, users and workers.

Conclusions

When you are thinking about risk, it’s important to be clear about the basis for the risk you are considering. Make sure that you are using valid, verifiable data, especially if you are calculating a numeric value to represent the probability of risk. Where numeric data isn’t available, use the semi-quantitative and qualitative scoring tools that are available to simplify the process and enable you to develop sound evaluations of the risk involved.

Need more help? Contact me!

References

[1]     C. Steel, “Risk estimation,” The Safety and Health Practitioner, pp. 20–22, June 1990.

[2]     Safeguarding of Machinery, CSA Standard Z432-1994 (R1999).

[3]     R. Morley, “Analyzing Risk: The Monty Hall problem,” Manufacturing Automation, June 2012, p. 26.

[4]     J. D. Rimington and S. A. Harbison, “The Tolerability of Risk from Nuclear Power Stations,” Health and Safety Executive, Her Majesty’s Stationery Office, London, UK, 1992.

[5]     J. Williamson and A. Weyman, “Review of the Public Perception of Risk, and Stakeholder Engagement,” Health and Safety Laboratory, The Crown, London, UK, Rep. HSL/2005/16, 2005.

[6]     O. Renn, “The Role of Risk Perception for Risk Management,” in P. E. T. Douben (Ed.), Pollution Risk Assessment and Management. Chichester: John Wiley & Sons, 1998, pp. 429–450.

Post by Doug Nix

Doug Nix is Managing Director and Principal Consultant at Compliance InSight Consulting, Inc. (http://www.complianceinsight.ca) in Kitchener, Ontario, and is Lead Author and Managing Editor of the Machinery Safety 101 blog.

Doug’s work includes teaching machinery risk assessment techniques privately and through Conestoga College Institute of Technology and Advanced Learning in Kitchener, Ontario, as well as providing technical services and training programs to clients related to risk assessment, industrial machinery safety, safety-related control system integration and reliability, laser safety and regulatory conformity.

Website: Compliance InSight Consulting Inc.
