In Occupational Health and Safety (OHS), risk is a function of the severity of an injury and the probability of that injury occurring. The severity portion is usually relatively straightforward to understand, although advanced technologies, like lasers, require advanced hazard analysis methods to assess correctly. Probability, on the other hand, is a challenge. Mathematically, probability calculations range from sublimely simple to insanely complex. The simplest forms, like the probability of getting a particular number from 1 to 6 when you throw a single die, can be understood pretty easily: on any throw of an unweighted die, you have a 1 in 6 chance of getting any particular number. When we're talking about OHS risk, it's never that easy.
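To make the die example concrete, here is a minimal Python sketch that estimates the 1-in-6 probability by simulation. The function name and trial count are mine, not from the article:

```python
import random

def estimate_face_probability(face: int, trials: int = 1_000_000) -> float:
    """Estimate the probability of rolling a given face on a fair six-sided die."""
    hits = sum(1 for _ in range(trials) if random.randint(1, 6) == face)
    return hits / trials

# Should print a value close to 1/6 ≈ 0.1667 for an unweighted die.
print(f"P(rolling a 4) ≈ {estimate_face_probability(4):.4f}")
```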
The First Problem: No Data
Risk assessment can be done quantitatively, that is, using numeric data. This approach is often taken when numeric data is available, or where reasonable numeric estimates can be made. The problem with using numeric calculations in quantitative assessments is that, in most people's minds, math lends credibility to the answers, whether or not that credibility is deserved. Consider these two statements:
- After analyzing the available information, we believe that the risk is pretty low because the reactor is unlikely to melt down.
- After analyzing the available information, we believe that the risk of a fatality in the surrounding population is very low because the probability that the reactor will melt down is less than 1 in 1 million.
Which statement sounds more 'correct' or more 'authoritative' to you?
Attaching numbers to the statement makes it sound more authoritative, even if there is no reliable data to back it up! If you attempt to use quantitative estimates in a risk assessment, be sure you can back the estimates up with verifiable data. Frequently there is no numeric data, and that forces you to move from a quantitative approach to either a semi-quantitative method, meaning that numbers are assigned to estimates on a scale, like 1-5 or 1-10, representing least likely to most likely, or an entirely qualitative approach, meaning that the scales are purely descriptive, like 'unlikely, likely, very likely.' Because the data used in these assessments is the assessors' opinion, they are much easier to make, provided the scales are well designed, with a clear description for each increment. Here's an example taken from Chris Steel's 1990 article [1]; a sketch of how a scale like this might be represented in software follows the list:
- Impossible, cannot happen
- Almost impossible, possible in extreme circumstances
- Highly unlikely, though conceivable
- Unlikely, but could occur
- Possible, but unusual
- Even chance, could happen
- Probable, not surprised
- Likely, to be expected
- Certain, no doubt
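Here is a minimal Python sketch of how a semi-quantitative likelihood scale like Steel's might be encoded. The descriptors are from the list above, but the numeric scores are placeholders I've assigned for illustration only; they are not the values from Steel's original table:

```python
# Descriptors from Steel's likelihood scale [1]; the scores are
# illustrative placeholders, NOT the values from the original article.
LIKELIHOOD_SCALE = {
    "Impossible, cannot happen": 0,
    "Almost impossible, possible in extreme circumstances": 1,
    "Highly unlikely, though conceivable": 2,
    "Unlikely, but could occur": 3,
    "Possible, but unusual": 4,
    "Even chance, could happen": 5,
    "Probable, not surprised": 6,
    "Likely, to be expected": 7,
    "Certain, no doubt": 8,
}

def likelihood_score(descriptor: str) -> int:
    """Return the scale value for an assessor's chosen descriptor."""
    try:
        return LIKELIHOOD_SCALE[descriptor]
    except KeyError:
        raise ValueError(f"Not a descriptor on this scale: {descriptor!r}")

print(likelihood_score("Possible, but unusual"))  # -> 4
```

Note that the code can only be as good as the scale it encodes: clear, mutually exclusive descriptors matter more than the software around them.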
Some might say that this scale is too complex or that the descriptions are not clear enough. I know that the subtleties sometimes get lost in translation, as I discovered when trying to train a group of non-native-English-speaking engineers in the use of the scale. Linguistic challenges can be a major hurdle to overcome! Simpler scales, like that used in CSA Z432 [2], can be easier to use but may result in gaps that are not easily dealt with. For example:
- Cannot move out of way; or inadequate reaction time; or machine speed greater than 250 mm/s.
- Can move out of way; or sufficient warning/reaction time; or machine speed less than 250 mm/s.
A scale like the previous one may not be specific enough, or fine-grained enough (this property is sometimes referred to as 'granularity'), to be really useful. There are also software packages available for risk assessment. One now-defunct risk analysis package, CIRSMA, used a probability scale that looked like this:
[CIRSMA's four-level probability scale: each level was defined as 'Normally used to describe hazardous motions or events that…', followed by criteria specific to that level.]
A scale like this is more descriptive than the CSA scale, but less granular and a bit easier to use than the Steel scale.
Probability is also influenced by the Frequency of Exposure to the hazard, and each of the tools mentioned above has a scale for this parameter as well. I'm not going to spend any time on those scales here, but know that they are similar to the ones displayed in terms of granularity and clarity.
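To show how these parameters typically come together, here is a generic semi-quantitative risk score in Python. The multiplicative form is a common pattern in scoring tools; it is not the exact formula from Steel, CSA Z432, or CIRSMA, and the 1-5 ranges are my assumption:

```python
# A generic semi-quantitative risk score combining severity, probability,
# and frequency-of-exposure ratings. The multiplicative form and the 1-5
# ranges are illustrative assumptions, not any specific tool's formula.
def risk_score(severity: int, probability: int, exposure: int) -> int:
    """Combine 1-5 ratings for each parameter into a single relative score."""
    for name, value in (("severity", severity),
                        ("probability", probability),
                        ("exposure", exposure)):
        if not 1 <= value <= 5:
            raise ValueError(f"{name} rating must be 1-5, got {value}")
    return severity * probability * exposure

# Example: severe injury (5), unlikely (2), daily exposure (4) -> 40 of 125.
print(risk_score(5, 2, 4))
```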
The Second Problem: Perception
This is the really big problem, and it's one that even the greatest minds in risk assessment and communication have yet to solve effectively. People judge risk in all sorts of ways, and the human mind has an outstanding ability to mislead us in this area. In an article published in the June 2012 issue of Manufacturing Automation Magazine, Dick Morley discusses the 'Monty Hall problem' [3], quoting columnist Marilyn vos Savant from her 'Ask Marilyn' column in Parade Magazine:
“Suppose you’re on a game show and you are given the choice of three doors. Behind one door is a car, behind the others, goats. You pick a door, say, number one (but the door is not opened). And the host, who knows what’s behind the doors, opens another door, say, number three, which has a goat. He says to you, ‘Do you want to pick door number two?’ Is it to your advantage to switch your choice?”
Marilyn vos Savant
Here is where things start to go astray. If you keep your original choice, your chance of winning the car is 1 in 3, since the car could be behind any of the three doors, and the host opening a door does not change that. The host, who knows where the car is, will always reveal a goat, so the remaining 2-in-3 probability now rests entirely on the other closed door. By switching, your chances go from one-third to two-thirds in one move, yet most people get this wrong. Mathematically it's easy to see, but humans tend to get emotionally attached to their first choice and make the wrong decision. According to Morley, studies show that pigeons are actually better at this than humans! When we start to talk about risk in abstract numbers, like 'one fatality per year per 1 million population', or stated another way, '1 × 10⁻⁶ fatalities per year' [4], people lose track of what this could mean. We like to fool ourselves with the time frame attached to these figures, so we might tell ourselves that, since it's June now and no one has died, the risk must somehow be half of what was stated, because half the year is gone. In fact, the risk is precisely the same today as it was on January 1, assuming nothing else has changed.
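If the two-thirds result seems hard to accept, it is easy to check by simulation. Here is a minimal Python sketch of the Monty Hall game (the names and trial count are mine):

```python
import random

def monty_hall(trials: int = 100_000) -> tuple[float, float]:
    """Simulate the Monty Hall game; return win rates for staying and switching."""
    stay_wins = switch_wins = 0
    doors = [0, 1, 2]
    for _ in range(trials):
        car = random.choice(doors)
        pick = random.choice(doors)
        # The host, who knows where the car is, opens a goat door
        # that is not the contestant's pick.
        opened = random.choice([d for d in doors if d not in (pick, car)])
        # Switching means taking the one remaining closed door.
        switched = next(d for d in doors if d not in (pick, opened))
        stay_wins += (pick == car)
        switch_wins += (switched == car)
    return stay_wins / trials, switch_wins / trials

stay, switch = monty_hall()
print(f"Stay: {stay:.3f} (expect ~0.333)  Switch: {switch:.3f} (expect ~0.667)")
```

Run it and the switching strategy wins about twice as often as staying, exactly as vos Savant said.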
In a recent court case involving a workplace fatality in Michigan, one expert witness developed a theory of the risk of fatality using the Human Factors approach commonly used in the process and nuclear industries. Using estimates that had no supporting data, he came to the conclusion that the likelihood of a fatality on this particular machine was 1 × 10⁻⁸, roughly two orders of magnitude less than the likelihood of being hit by lightning. In OHS, we believe that if a hazard exists, it will eventually do harm to someone, as it did in this case. We know without a doubt that a fatality has occurred. The manufacturer's sales department estimated that there were 80-90 units of the same type of machine in the marketplace at the time of the fatality. If we accept that estimate, one known fatality across the fleet gives a rate of 1 in 80 to 1 in 90 (1.25 × 10⁻² to 1.1 × 10⁻²), significantly greater than the risk of being struck by lightning, and roughly six orders of magnitude more than the expert witness estimated. Estimating risk based on unproven data results in underestimating the risk and in overconfidence in the safety of the workers involved.
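Here is a back-of-the-envelope check of that comparison, assuming the sales department's 80-90 unit estimate and the single known fatality. This is a sketch of the arithmetic, not a formal risk calculation:

```python
import math

# One known fatality across an estimated fleet of 80-90 machines,
# versus the expert witness's 1e-8 estimate.
expert_estimate = 1e-8

for units in (80, 90):
    observed_rate = 1 / units  # one fatality across the estimated fleet
    gap = math.log10(observed_rate / expert_estimate)
    print(f"1 in {units} ≈ {observed_rate:.2e}, "
          f"about {gap:.1f} orders of magnitude above the expert's estimate")
```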
Communication
Once a risk assessment is completed and the appropriate risk controls are implemented following the Hierarchy of Controls, the residual risk must be communicated to the people who are exposed to it. This allows them to make an informed decision about the risk: to do the task, modify the task, or decline to do the task at all. This is called 'informed consent,' the same principle doctors apply when discussing treatment options with patients. If the risk changes for some reason, the change also needs to be communicated. Communicating about risk helps us resist complacency about the risks we deal with every day, and helps to avoid confusion about what the risk 'really is.'
Risk Perception
Risk perception is an area of study that is trying to help us better understand how various kinds of risks are perceived, and perhaps how best to communicate those risks to the people who are exposed. In a report prepared at the UK's Health and Safety Laboratory in 2005 [5], authors Williamson and Weyman discuss several ways of assessing risk perception.
One approach, described by Renn [6], attempts to chart four different aspects of risk perception in people’s thinking.

An example of these factors plotted on a graph is shown in Fig. 2 below. The data points plotted on the chart are developed by surveying the exposed population and then charting the frequency of their responses to the questions.
[Fig. 2: Frequency of survey responses plotted against the two risk-perception factors]
There are two factors charted on this graph. The vertical axis, 'Factor 2', is the perceptibility of the risk, or how easily the risk is detected. The horizontal axis, 'Factor 1 – The Dread Risk', is how much 'dread' we have of specific outcomes. In Fig. 3 below you can see the assignment of characteristics to the positive and negative directions on these axes; a sketch of how such a chart can be drawn follows the figure.
[Fig. 3: Characteristics assigned to the positive and negative directions of each factor]
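For readers who want to experiment with this kind of chart, here is a minimal Python/matplotlib sketch. The hazard names and coordinates are invented for illustration; they are not survey data from the HSL report:

```python
import matplotlib.pyplot as plt

# Hypothetical positions for a few hazards on the two factors;
# these coordinates are invented for illustration, not survey data.
hazards = {
    "Hand tools": (-1.5, -1.0),
    "Robot cell": (0.5, 0.8),
    "Nuclear power": (2.0, 1.5),
}

fig, ax = plt.subplots()
for name, (dread, perceptibility) in hazards.items():
    ax.scatter(dread, perceptibility)
    ax.annotate(name, (dread, perceptibility),
                textcoords="offset points", xytext=(5, 5))
ax.axhline(0, color="grey", linewidth=0.5)
ax.axvline(0, color="grey", linewidth=0.5)
ax.set_xlabel("Factor 1: Dread risk")
ax.set_ylabel("Factor 2: Perceptibility of the risk")
ax.set_title("Risk-perception map (illustrative)")
plt.show()
```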
At this point, I can say that we are a long way from being able to use this approach effectively when considering machinery safety, but as practitioners, we need to consider these approaches when we communicate risk to our customers, users and workers.
Conclusions
When you are thinking about risk, it’s essential to be clear about the basis for the risk you are considering. Make sure you use valid, verifiable data, especially if calculating a numeric value to represent the probability. Where numeric data isn’t available, use the semi-quantitative and qualitative scoring tools that are available to simplify the process and enable you to develop sound evaluations of the risk involved.
Need more help? Contact me! Doug Nix
References
[1] C. Steel, "Risk estimation," The Safety and Health Practitioner, pp. 20-22, June 1990.
[2] Safeguarding of Machinery, CSA Standard Z432, Canadian Standards Association, 1994 (R1999).
[3] R. Morley, "Analyzing Risk: The Monty Hall problem," Manufacturing Automation, p. 26, June 2012.
[4] J. D. Rimington and S. A. Harbison, The Tolerability of Risk from Nuclear Power Stations, Health and Safety Executive, Her Majesty's Stationery Office, London, UK, 1992.
[5] J. Williamson and A. Weyman, "Review of the Public Perception of Risk, and Stakeholder Engagement," Health and Safety Laboratory, Rep. HSL/2005/16, 2005.
[6] O. Renn, "The Role of Risk Perception for Risk Management," in P.E.T. Douben (Ed.), Pollution Risk Assessment and Management, Chichester: John Wiley & Sons, 1998, pp. 429-450.
© 2012 – 2022, Compliance inSight Consulting Inc.
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.