The Problem with Probability


Risk Factors

Severity

There are two key factors that need to be understood when assessing risk: Severity and Probability (or Likelihood). Sometimes the term ‘consequence’ is used instead of ‘severity’, and in the case of machinery risk assessment, they can be considered to be synonyms. Severity seems to be fairly well understood; most people can fairly easily imagine what reaching into a spinning blade might do to the hand doing the reaching. There is a problem that arises when there is an insufficient understanding of the hazard, but that’s the subject for another post.

Probability

Probability or likelihood is used to describe the chance that an injury or a hazardous situation will occur. Probability is used when numeric data is available and a value can be calculated, while likelihood is used when the assessment is subjective. The probability factor is often broken down further into three sub-factors, as shown in Figure 3 of ISO 12100 [1]: the frequency and duration of exposure to the hazard, the probability of occurrence of a hazardous event, and the possibility of avoiding or limiting the harm.

There is No Reality, only Perception…

Whether you use probability or likelihood in your assessment, there is a fundamental problem with people’s perception of these factors. People have a difficult time appreciating the meaning of probability. Probability is a key factor in determining the degree of risk from any hazard, yet when figures like “1 in 1000” or “1 × 10⁻⁵ occurrences per year” are discussed, it’s hard for people to truly grasp what these numbers mean. When probability is discussed as a rate, a figure like “1 × 10⁻⁵ occurrences per year” can make the chance of an occurrence seem inconceivably distant, and therefore less of a concern. Likewise, when more subjective scales are used it can be difficult to really understand what “likely” or “rarely” actually mean. Consequently, even in cases where the severity may be very high, the risk related to a particular hazard may be neglected if the probability is deemed low.
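One way to make a figure like “1 × 10⁻⁵ occurrences per year” more concrete is to convert it into the chance of seeing at least one occurrence over a time frame people can picture, such as a working career. The short sketch below shows the arithmetic; it assumes occurrences follow a constant-rate (Poisson) process, and the rate, the 40-year career length, and the workforce size are illustrative values only, not data from any particular assessment.

```python
import math

def chance_of_at_least_one(rate_per_year, years):
    """Chance of at least one occurrence over a period, assuming events
    arrive independently at a constant (Poisson) rate."""
    return 1.0 - math.exp(-rate_per_year * years)

# Example values only: a rate of 1 x 10^-5 occurrences per year,
# seen over one 40-year working career...
print(f"{chance_of_at_least_one(1e-5, 40):.2%}")        # ~0.04%

# ...and the same per-person rate pooled across 500 exposed workers.
print(f"{chance_of_at_least_one(1e-5 * 500, 40):.1%}")  # ~18.1%
```

On these assumptions the same rate that looks vanishingly small for one person becomes a roughly one-in-five chance across a plant-sized population, which is part of why bare rate figures are so easy to misjudge.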

To see the other side, consider people’s attitude when it comes to winning a lottery. Most people will agree that “someone will win”, and the infinitesimal probability of winning is seen as significant. The same odds given in relation to a negative risk might be seen as ‘infinitesimally small’, and therefore negligible.

For example, consider the decisions made by the Tokyo Electric Power Company (TEPCO) when it constructed the Fukushima Dai Ichi nuclear power plant. TEPCO engineers and scientists assessed the site in the 1960s and decided that a 10 meter tsunami was a realistic possibility at the site. They decided to build the reactors, turbines and backup generators 10 meters above the surrounding sea level, then located the system-critical condensers in the seaward yard of the plant at a level below 10 meters. To protect that critical equipment they built a 5.7 meter high seawall, almost 50% shorter than the predicted height of a tsunami! While I don’t know what rationale was used to support this design decision, it is clear that the plant would have taken significant damage from even a relatively mild tsunami. The 11-Mar-11 tsunami topped the highest prediction by nearly 5 meters, resulting in a Level 7 nuclear accident and decades of recovery work. TEPCO executives have repeatedly stated that the conditions leading to the accident were “inconceivable”, and yet redundancy was built into the systems for just this type of event, and some planning for tsunami effects was put into the design. Clearly, the event was neither unimaginable nor inconceivable, just underestimated.

Risk Perception

So why is it that tiny odds are seen as an acceptable risk and even a reasonable likelihood in one case, and a negligible chance in the other, particularly when the ignored case is the one that will have a significant negative outcome?
According to an article in Wikipedia [2], there are three main schools of thought when it comes to understanding risk perception: psychological, sociological and interdisciplinary. In a key early paper written in 1969 by Chauncey Starr [3], it was discovered that people would accept voluntary risks 1000 times greater than involuntary risks. Later research has challenged these findings, showing the gap between voluntary and involuntary risks to be much narrower than Starr found.
Early psychometric research by Kahneman and Tversky showed that people use a number of heuristics to evaluate information. These heuristics included:
  • Representativeness;
  • Availability;
  • Anchoring and Adjustment;
  • Asymmetry; and
  • Threshold effects.
This research showed that people tend to be risk-averse where gains are concerned, like the potential loss of savings from making risky investments, while they tend to accept risk easily when it comes to potential losses, preferring the hope of losing nothing over a certain but smaller loss. This may explain why low-probability, high-severity OHS risks are often ignored, in the hope that lesser injuries will occur rather than the maximum predicted severity.

Significant results also show that better information frequently has no effect on how risks are judged. More weight is put on risks with immediate, personal results than those seen in longer time frames. Psychometric research has shown that risk perception is highly dependent on intuition, experiential thinking, and emotions. The research identified characteristics that may be condensed into three high-order factors:

  1. the degree to which a risk is understood;
  2. the degree to which it evokes a feeling of dread; and
  3. the number of people exposed to the risk.

“Dread” describes a risk that elicits visceral feelings of impending catastrophe, terror and loss of control. The more a person dreads an activity, the higher its perceived risk and the more that person wants the risk reduced [4]. Fear is clearly a stronger motivator than any degree of information.

Considering the differing views of those studying risk perception, it’s no wonder that this is a challenging subject for safety practitioners!

Estimating Probability

Frequency and Duration

Some aspects of probability are not too difficult to estimate. Consider the Frequency or Duration of Exposure factor. At face value this can be stated as “X cycles per hour” or “Y hours per week”. Depending on the hazard, there may be more complex exposure data, like that used when considering audible noise exposure. In that case, noise is often expressed as a time-weighted average (TWA), like “83 dB(A), 8 h TWA”, meaning 83 dB(A) averaged over 8 hours.
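As a worked example of that kind of averaging, the short sketch below combines several exposure periods into an 8-hour TWA. It assumes the equal-energy (3 dB exchange rate) convention commonly used with dB(A) averaging, and the levels and durations are illustrative values only.

```python
import math

def eight_hour_twa(levels_db, durations_h, reference_h=8.0):
    """Combine several noise exposures into an 8 h time-weighted average (TWA),
    using the equal-energy (3 dB exchange rate) method."""
    # Convert each level to relative sound energy and weight it by its duration.
    energy = sum(t * 10 ** (level / 10.0) for level, t in zip(levels_db, durations_h))
    # Average the energy over the reference period and convert back to decibels.
    return 10.0 * math.log10(energy / reference_h)

# Illustrative shift: 4 h at 85 dB(A), 2 h at 80 dB(A), 2 h at 60 dB(A).
print(f"{eight_hour_twa([85, 80, 60], [4, 2, 2]):.1f} dB(A)")  # ~82.6 dB(A)
```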

Estimating the probability of a hazardous situation is usually not too tough either. This could be expressed as “15 minutes, once per day/shift” or “2 days, twice per year”.
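When statements like these need to be compared, it can help to reduce them to a common basis, such as exposure hours per year or the fraction of working time spent in the hazardous situation. The sketch below does exactly that; the 8-hour shift, 240 working days per year, and the two example exposures are assumptions for illustration only.

```python
def annual_exposure_hours(hours_per_event, events_per_year):
    """Total hours per year spent in the hazardous situation."""
    return hours_per_event * events_per_year

SHIFTS_PER_YEAR = 240   # assumed working days per year
HOURS_PER_SHIFT = 8.0   # assumed shift length

# "15 minutes, once per day/shift"
daily = annual_exposure_hours(0.25, SHIFTS_PER_YEAR)
# "2 days (two 8 h shifts), twice per year"
rare = annual_exposure_hours(2 * HOURS_PER_SHIFT, 2)

working_hours = SHIFTS_PER_YEAR * HOURS_PER_SHIFT
print(f"daily task: {daily:.0f} h/year ({daily / working_hours:.1%} of working time)")
print(f"rare task:  {rare:.0f} h/year ({rare / working_hours:.1%} of working time)")
```

On these assumptions the “rare” task still amounts to about half the exposure of the “daily” one, something that is not obvious from the original statements.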

Avoidance

Estimating the probability of avoiding an injury in any given hazardous situation is MUCH more difficult, since the speed of occurrence, the ability to perceive the hazard, the knowledge of the exposed person, their ability to react in the situation, the level of training that they have, the presence of complementary protective measures, and many other factors come into play. Depth of understanding of the hazard and the details of the hazardous situation by the risk assessors is critical to a sound assessment of the risk involved.
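In practice, many machinery risk scoring tools sidestep this difficulty by asking the assessment team to pick an avoidance score from a short ordinal scale and add it to the exposure and hazardous-event scores, in the general style of the hybrid tool described in ISO/TR 14121-2. The sketch below shows the shape of such an additive scheme; the category names, point values and scale are illustrative assumptions only, not values taken from any standard.

```python
# Illustrative additive scoring sketch (values are NOT from any standard):
# each probability sub-factor gets a small ordinal score, and their sum is
# later read against severity to place the risk in a class.

FREQUENCY = {"rare": 2, "occasional": 3, "frequent": 4, "continuous": 5}
EVENT_PROBABILITY = {"negligible": 1, "possible": 3, "likely": 4, "very likely": 5}
AVOIDANCE = {"likely": 1, "possible": 3, "impossible": 5}

def probability_class(frequency, event_probability, avoidance):
    """Sum the three probability sub-factor scores into a single class."""
    return FREQUENCY[frequency] + EVENT_PROBABILITY[event_probability] + AVOIDANCE[avoidance]

# A fast-moving hazard that is hard to perceive scores high on the avoidance
# sub-factor even when exposure is only occasional.
score = probability_class("occasional", "possible", "impossible")
print(score)  # 11, on a scale running from 4 to 15
```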

The Challenge

The challenge for safety practitioners is twofold:

  1. As practitioners, we must try to overcome our biases when conducting risk assessment work, and where we cannot overcome those biases, we must at least acknowledge them and the effects they may produce in our work; and
  2. We must try to present the risks in terms that the exposed people can understand, so that they can make a reasoned choice for their own personal safety.

I don’t suggest that this is easy, nor do I advocate “dumbing down” the information! I do believe that risk information can be presented to non-technical people in ways that let them understand the critical points.

Risk assessment techniques are becoming fundamental in all areas of design. As safety practitioners, we must be ready to conduct risk assessments using sound techniques, be aware of our biases, and be patient in communicating the results of our analysis to everyone who may be affected.

References

[1] “Safety of Machinery — General Principles for Design — Risk Assessment and Risk Reduction”, ISO 12100, Figure 3, ISO, Geneva, 2010.
[2] “Risk Perception”, Wikipedia, accessed 19/20-May-2011, http://en.wikipedia.org/wiki/Risk_perception.
[3] Chauncey Starr, “Social Benefits versus Technological Risks”, Science, Vol. 165, No. 3899, 19-Sep-1969, pp. 1232–1238.
[4] Paul Slovic, Baruch Fischhoff and Sarah Lichtenstein, “Why Study Risk Perception?”, Risk Analysis, 2(2), 1982, pp. 83–93.

Author: Doug Nix

Doug Nix is Managing Director and Principal Consultant at Compliance InSight Consulting, Inc. (http://www.complianceinsight.ca) in Kitchener, Ontario, and is Lead Author and Managing Editor of the Machinery Safety 101 blog.

Doug's work includes teaching machinery risk assessment techniques privately and through Conestoga College Institute of Technology and Advanced Learning in Kitchener, Ontario, as well as providing technical services and training programs to clients related to risk assessment, industrial machinery safety, safety-related control system integration and reliability, laser safety and regulatory conformity.
