The Probability Problem


In Occupational Health and Safety (OHS), risk is a function of the severity of injury and the probability of the injury occurring. Understanding the severity portion is usually fairly easy, although advanced technologies, like lasers for instance, take advanced hazard analysis methods to assess correctly. Probability, on the other hand, is a challenge. Mathematically, probability calculations go from sublimely simple to insanely complex. The simplest forms, like the probability of getting a number from 1 to 6 when you throw a single die, can be understood pretty easily. On any one throw of the die, you have a 1 in 6 chance of getting any one particular number, assuming the die is not weighted in any way. When we're talking about OHS risk, it's never that easy.
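The 1-in-6 claim for a fair die is easy to confirm empirically. Here is a minimal Python sketch (the function name is my own, for illustration) that estimates the probability by simulation:

```python
import random

def estimate_face_probability(face: int, trials: int = 100_000) -> float:
    """Estimate the probability of rolling a given face on a fair die."""
    hits = sum(1 for _ in range(trials) if random.randint(1, 6) == face)
    return hits / trials

# Each face converges toward the theoretical 1/6 ≈ 0.167 as trials grow.
print(estimate_face_probability(4))
```

With a weighted die, or with the messy, interacting factors of a real workplace, no such clean model exists, which is exactly the problem the rest of this article deals with.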

The First Problem: No Data

Risk assessment can be done quantitatively, that is, using numeric data. This approach is often taken when numeric data is available, or where reasonable numeric estimates can be made. The problem with using numeric estimates in quantitative assessments is this: math lends credibility to the answer for most people. Consider these two statements:

  1. After analyzing the available information, we believe that the risk is pretty low, because it is unlikely that the reactor will melt down.
  2. After analyzing the available information, we believe that the risk of a fatality in the surrounding population is very low, because the probability that the reactor will melt down is less than 1 in 1 million.

Which of these statements sounds more 'correct' or more 'authoritative' to you?

Attaching numbers to the statement makes it sound more authoritative, even if there is no reliable data to back it up! If you are going to attempt to use quantitative estimates in a risk assessment, be sure you can back the estimates up with verifiable data. Frequently there is no numeric data, and that forces you to move from a quantitative approach to a semi-quantitative approach, meaning that numbers are assigned to estimates, usually on a scale, like 1 – 5 or 1 – 10, representing least likely to most likely, or to a fully qualitative approach, meaning that the scales are only descriptive, like 'unlikely, likely, very likely'. These kinds of assessments are much easier to make, as long as the scales used are well designed with clear descriptions for each increment in the scale, because the data used in the assessment is the opinion of the assessors. Here's an example, taken from Chris Steel's 1990 article [1]:

Table 1: LO Likelihood of Occurrence

Scale Value   Description
0             Impossible, cannot happen
0.1           Almost impossible, possible in extreme circumstances
0.5           Highly unlikely, though conceivable
1             Unlikely, but could occur
2             Possible, but unusual
5             Even chance, could happen
8             Probable, not surprised
10            Likely, to be expected
15            Certain, no doubt
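A scale like Steel's lends itself to a simple lookup. The sketch below encodes the scale values and descriptions from Table 1; the snapping helper is my own illustrative addition, not part of Steel's method:

```python
# Steel's likelihood-of-occurrence (LO) scale from Table 1.
LO_SCALE = {
    0:   "Impossible, cannot happen",
    0.1: "Almost impossible, possible in extreme circumstances",
    0.5: "Highly unlikely, though conceivable",
    1:   "Unlikely, but could occur",
    2:   "Possible, but unusual",
    5:   "Even chance, could happen",
    8:   "Probable, not surprised",
    10:  "Likely, to be expected",
    15:  "Certain, no doubt",
}

def describe_lo(value: float) -> str:
    """Return the Table 1 description for an assessor's LO score,
    snapping to the nearest defined scale value (illustrative helper)."""
    nearest = min(LO_SCALE, key=lambda v: abs(v - value))
    return LO_SCALE[nearest]

print(describe_lo(7))  # snaps to 8: "Probable, not surprised"
```

The point of encoding the scale this way is that the numbers are only labels for the assessors' opinions; the arithmetic done with them later is only as good as those opinions.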

Some people might say that this scale is too complex, or that the descriptions are not clear enough. I know that the subtleties sometimes get lost in translation, as I discovered when trying to train a group of non-native-English-speaking engineers in the use of the scale. Linguistic challenges can be a major hurdle to overcome! Simpler scales, like that used in CSA Z432 [2], can be easier to use, but may result in gaps that are not easily dealt with. For example:

Table 2: Avoidance [2, Table A.2]

Category   Description   Criteria
A2         Not likely    Cannot move out of way; or inadequate reaction time; or machine speed greater than 250 mm/s.
A1         Likely        Can move out of way; or sufficient warning/reaction time; or machine speed less than 250 mm/s.
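The two CSA categories amount to a simple decision rule. The following sketch is one illustrative reading of Table 2's criteria, not an official implementation of CSA Z432; the function name and boolean inputs are my own assumptions:

```python
def avoidance_category(can_move_away: bool,
                       adequate_reaction_time: bool,
                       speed_mm_per_s: float) -> str:
    """Illustrative application of the Table 2 avoidance criteria.
    Returns 'A1' (likely to avoid) or 'A2' (not likely to avoid)."""
    # Any one of the A2 criteria is enough to select A2.
    if (not can_move_away
            or not adequate_reaction_time
            or speed_mm_per_s > 250):
        return "A2"
    return "A1"

print(avoidance_category(True, True, 100))  # A1
print(avoidance_category(True, True, 400))  # A2: speed over 250 mm/s
```

Notice the gap the binary scale creates: a situation that is marginal on all three criteria still gets forced into one of only two categories.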

A scale like the previous one may not be specific enough, or fine enough (sometimes referred to as 'granularity', or in this case 'granular enough'), to be really useful. There are software packages for risk assessment available as well. One popular product, called CIRSMA, uses a scale that looks like this:

Table 3: Probability of the Hazardous Event

Possible — Easily able to avoid
Normally used to describe hazardous motions or events that:
  • Occur in plain view of the exposed person, and occur at a speed of less than 125 mm/second;
  • Can be readily foreseen or detected by the exposed person before the hazardous event occurs;
  • Are a result of the actions of the anticipated exposed person and are under the direct control of the exposed person; and
  • Arise in circumstances that can be easily and readily modified or corrected to avoid harm once a hazardous situation has materialized.

Possible — Potentially able to avoid
Normally used to describe hazardous motions or events that:
  • Occur in plain view of the exposed person, and occur at a speed of more than 125 mm/second but less than 250 mm/second;
  • Could possibly be foreseen or detected by the exposed person before the hazardous event occurs;
  • Are a result of the actions of the anticipated exposed person and are partially under the control of the exposed person; and
  • Arise in circumstances that can possibly be modified or corrected in order to avoid harm once a hazardous situation has materialized.

Unlikely — Unable to Avoid
Normally used to describe hazardous motions/events that:
  • Occur either in plain view of the exposed person at a speed of more than 250 mm/second, or not in plain view of the exposed person at a speed of less than 250 mm/second;
  • Are not likely to be foreseen or detected by the exposed person before the hazardous event occurs; and
  • Are not a result of the actions of the anticipated exposed person but could be partially under the control of the exposed person.

Impossible — Injury is Unavoidable
Normally used to describe hazardous motions/events that:
  • Regardless of the location of the hazard, occur at such a speed that the exposed person would have little or no opportunity to escape harm;
  • Could not be foreseen or detected by the exposed person before the hazardous event occurs;
  • Are not a result of the actions of the anticipated exposed person and are not under the control of the exposed person; and
  • Arise in circumstances that cannot be modified or corrected in order to avoid harm once a hazardous situation has materialized.

A scale like this is more descriptive than the CSA scale, but less granular and a bit easier to use than the Steel table.

Probability is also influenced by the frequency of exposure to the hazard, and each of the tools mentioned above has a scale for this parameter as well. I'm not going to spend any time on those scales here, but know that they are similar to the ones displayed in terms of granularity and clarity.

The Second Problem: Perception

This is the really big problem, and it's one that even the greatest minds in risk assessment and communication have yet to solve effectively. People judge risk in all sorts of ways, and the human mind has an outstanding ability to mislead us in this area. In a recent article published in the June 2012 issue of Manufacturing Automation Magazine, Dick Morley talks about the 'Monty Hall problem' [3]. In this article, Morley quotes columnist Marilyn vos Savant from her 'Ask Marilyn' column in Parade Magazine:

"Suppose you're on a game show and you are given the choice of three doors. Behind one door is a car, behind the others, goats. You pick a door, say, number one (but the door is not opened). And the host, who knows what's behind the doors, opens another door, say, number three, which has a goat. He says to you, 'Do you want to pick door number two?' Is it to your advantage to switch your choice?"

Here is where things start to go astray. If you keep your original choice, your chance of winning the car is 1 in 3: your first pick had a 1-in-3 chance of being right, and the host's action doesn't change that. If you switch, your chance of winning becomes 2 in 3: the car is behind one of the two doors you did not pick with probability 2/3, and the host has eliminated the losing one of those two doors for you. Since you know for certain that a goat is behind door three, switching takes your chances from 33% to 67% in one move, yet most people get this wrong. Mathematically it's easy to see, but humans tend to get emotionally distracted at times like this, and make the wrong choice. According to Morley, studies show that pigeons are actually better at this than humans! When we start to talk about risk in abstract numbers, like 'one fatality per year per 1 million population' or, stated another way, '1 × 10⁻⁶ fatalities per year' [4], people lose track of what this could mean. We like to attach a time frame to these things, so we might tell ourselves that, since it's June now and no one has died, the risk is somehow half of what was stated, since half the year is gone. In fact, the risk is exactly the same today as it was on January 1, assuming nothing else has changed.
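If the 1/3-versus-2/3 result feels wrong, it is easy to verify by simulation. A short Python sketch (the names are illustrative):

```python
import random

def monty_hall(trials: int = 100_000) -> tuple[float, float]:
    """Simulate the Monty Hall game; return (P(win | stay), P(win | switch))."""
    stay_wins = switch_wins = 0
    for _ in range(trials):
        car = random.randrange(3)
        pick = random.randrange(3)
        # The host opens a door that is neither the pick nor the car.
        opened = next(d for d in range(3) if d != pick and d != car)
        # Switching means taking the one remaining closed door.
        switched = next(d for d in range(3) if d != pick and d != opened)
        stay_wins += (pick == car)
        switch_wins += (switched == car)
    return stay_wins / trials, switch_wins / trials

stay, switch = monty_hall()
print(f"stay ≈ {stay:.3f}, switch ≈ {switch:.3f}")  # ≈ 0.333 vs ≈ 0.667
```

Running this a few times makes the advantage of switching hard to argue with, which is exactly why simulation is a useful antidote to intuition in probability questions.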

In a recent court case involving a workplace fatality, one expert witness developed a theory of the risk of the fatality using the Human Factors approach commonly used in the process and nuclear industries. Using estimates that had no supporting data, he came to the conclusion that the likelihood of a fatality on this particular machine was 1 × 10⁻⁸, or roughly two orders of magnitude less than being hit by lightning. In OHS, we believe that if a hazard exists, it will eventually do harm to someone, as it did in this case. We know without a doubt that a fatality has occurred. The manufacturer's sales department estimated that there were 80 – 90 units of the same type in the marketplace at the time of the fatality. If we use that estimate of the number of that model of machine in the marketplace, we could calculate the risk of a fatality on that model as 1:80 or 1:90 (roughly 1.25 × 10⁻² or 1.1 × 10⁻²), significantly greater than the risk of being struck by lightning, and roughly six orders of magnitude more than estimated by the expert witness. Estimating risk based on unproven data will result in underestimation of the risk and overconfidence in the safety of the workers involved.
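The size of the gap between the two estimates can be checked with a little arithmetic. The figures below are the ones quoted above; the variable names are just for illustration:

```python
import math

expert_estimate = 1e-8     # expert witness: 1 × 10⁻⁸ per machine
field_estimate = 1 / 80    # one known fatality across ~80 machines in service

gap = math.log10(field_estimate / expert_estimate)
print(f"field estimate: {field_estimate:.2e}")   # 1.25e-02
print(f"orders of magnitude apart: {gap:.1f}")   # 6.1
```

A six-order-of-magnitude gap between an estimate and the field evidence is a strong signal that the estimate's input assumptions, not the field data, should be questioned.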

Communication

Once a risk assessment is completed and the appropriate risk controls implemented following the Hierarchy of Controls, the residual risk must be communicated to the people who are exposed to it. This allows them to make an informed decision about the risk: choosing to do the task, modify the task, or not do the task at all. This is called 'informed consent', and it is the same concept used by doctors when discussing treatment options with patients. If the risk changes for some reason, the change also needs to be communicated. Communication about risk helps us to resist complacency about the risks that we deal with every day, and helps to avoid confusion about what the risk 'really is'.

Risk Perception

Risk perception is an area of study that is trying to help us better understand how various kinds of risks are perceived, and perhaps how best to communicate these risks to the people who are exposed. In a report prepared at the UK's Health and Safety Laboratory in 2005 [5], authors Williamson and Weyman discuss several ways of assessing risk perception.

One approach, described by Renn [6], attempts to chart four different aspects of risk perception in people's thinking.

Fig. 1 – Graphing Risk Perception Factors

An example of these factors plotted on a graph is shown in Fig. 2 below. The data points plotted on the chart are developed by surveying the exposed population and then charting the frequency of their responses to the questions.

Fig. 2 – Graphing ‘Dread’

There are two factors charted on this graph. On the vertical axis, 'Factor 2' is the perceptibility of the risk, or how easily the risk is detected. On the horizontal axis is 'Factor 1 – The Dread Risk', or how much 'dread' we have of certain outcomes. In Fig. 3 below you can see the assignment of factors to the positive and negative directions on these axes.

Fig. 3 – Dread Factors


At this point, I can say that we are a long way from being able to use this approach effectively when considering machinery safety, but as practitioners, we need to begin to consider these approaches when we communicate risk to our customers, users and workers.

Conclusions

When you are thinking about risk, it's important to be clear about the basis for the risk you are considering. Make sure that you are using valid, verifiable data, especially if you are calculating a numeric value to represent the probability of risk. Where numeric data isn't available, use the semi-quantitative and qualitative scoring tools that are available to simplify the process and enable you to develop sound evaluations of the risk involved.

Need more help? Contact me!

References

[1]     C. Steel. "Risk estimation." The Safety and Health Practitioner, pp. 20 – 22, June, 1990.

[2]     Safeguarding of Machinery, CSA Standard Z432-1994 (R1999).

[3]     R. Morley. "Analyzing Risk: The Monty Hall problem." Manufacturing Automation, June, 2012, p. 26.

[4]     J. D. Rimington and S. A. Harbison, "The Tolerability of Risk from Nuclear Power Stations," Health and Safety Executive, Her Majesty's Stationery Office, London, UK, 1992.

[5]     J. Williamson and A. Weyman, "Review of the Public Perception of Risk, and Stakeholder Engagement," The Crown, London, UK, Rep. HSL/2005/16, 2005.

[6]     O. Renn, "The Role of Risk Perception for Risk Management," in Pollution Risk Assessment and Management, P.E.T. Douben, Ed. Chichester: John Wiley & Sons, 1998, pp. 429 – 450.


Author: Doug Nix

Doug Nix is Managing Director and Principal Consultant at Compliance InSight Consulting, Inc. (http://www.complianceinsight.ca) in Kitchener, Ontario, and is Lead Author and Managing Editor of the Machinery Safety 101 blog.

Doug's work includes teaching machinery risk assessment techniques privately and through Conestoga College Institute of Technology and Advanced Learning in Kitchener, Ontario, as well as providing technical services and training programs to clients related to risk assessment, industrial machinery safety, safety-related control system integration and reliability, laser safety and regulatory conformity.
