Machinery Safety 101

The Problem with Probability

This entry is part 4 of 10 in the series Risk Assessment.

Risk Factors

When risk is analyzed, at least in the industrial sector, we usually follow a process defined in ISO 12100. This approach defines two broad parameters, severity and probability, and then further subdivides probability into sub-parameters that are helpful for analyzing machinery hazards. This post explores the difficulty with assessing the probability parameter.

If you’re interested in buying ISO 12100, here’s a link to the ISO Web site where you can get a copy. If you need to save a few dollars, you can buy the EN version from Estonian Standards. It’s technically identical and available in English as well as Estonian.


There are two key factors that need to be understood when assessing risk: Severity and Probability (or Likelihood). Sometimes the term ‘consequence’ is used instead of ‘severity’, and in the case of machinery risk assessment, they can be considered synonyms. Severity seems to be fairly well understood — most people can fairly easily imagine what reaching into a spinning blade might do to the hand doing the reaching. A problem arises when there is an insufficient understanding of the hazard, but that’s the subject of another post.


Probability or likelihood is used to describe the chance that an injury or a hazardous situation will occur. “Probability” is used when numeric data are available and a probability can be calculated, while “likelihood” is used when the assessment is subjective. The probability factor is often broken down further into three sub-factors, as seen in Figure 3 below [1]:

[Figure 3, ISO 12100: the three sub-factors of probability — frequency and duration of exposure to the hazard, probability of occurrence of a hazardous event, and possibility of avoiding or limiting the harm.]

There is No Reality, only Perception…

Whether you use probability or likelihood in your assessment, there is a fundamental problem with people’s perception of these factors. People have a difficult time appreciating the meaning of probability. Probability is a key factor in determining the degree of risk from any hazard, yet when figures like “1 in 1,000” or “1 × 10⁻⁵ occurrences per year” are discussed, it’s hard for people to truly grasp what these numbers mean. When probability is discussed as a rate, a figure like “1 × 10⁻⁵ occurrences per year” can make the chance of an occurrence seem inconceivably distant, and therefore less of a concern. Likewise, when more subjective scales are used, it can be difficult to really understand what “likely” or “rarely” actually means. Consequently, even in cases where the severity may be very high, the risk related to a particular hazard may be neglected if the probability is deemed low.
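One way to make such rates more tangible is to convert them into a cumulative chance over a time span people can relate to. The sketch below is illustrative only: the 40-year working lifetime and the assumption of independent, constant-rate years are mine, not figures from this article.

```python
# Convert an abstract annual rate into the chance of at least one
# occurrence over a longer span, assuming independent years with a
# constant rate. The 40-year career span is an assumed illustration.
def cumulative_probability(annual_rate: float, years: int) -> float:
    """Probability of at least one occurrence in `years` years."""
    return 1.0 - (1.0 - annual_rate) ** years

rate = 1e-5  # "1 x 10^-5 occurrences per year"
print(f"Over a 40-year career: {cumulative_probability(rate, 40):.3%}")
# A "1 in 100,000 per year" hazard is roughly a 1-in-2,500 career risk.
```

Framing the same number as a career-long chance often lands better with exposed workers than a per-year rate does.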

To see the other side, consider people’s attitude when it comes to winning a lottery. Most people will agree that “someone will win”, and the infinitesimal probability of winning is seen as significant. The same odds given in relation to a negative risk might be seen as ‘infinitesimally small’, and therefore negligible.

For example, consider the decisions made by the Tokyo Electric Power Company (TEPCO) when they constructed the Fukushima Dai-ichi nuclear power plant. TEPCO engineers and scientists assessed the site in the 1960s and decided that a 10-meter tsunami was a realistic possibility at the site. They decided to build the reactors, turbines and backup generators 10 meters above the surrounding sea level, then located the system-critical condensers in the seaward yard of the plant at a level below 10 meters. To protect that critical equipment they built a 5.7-meter-high seawall, almost 50% shorter than the predicted height for a tsunami! While I don’t know what rationale they used to support this design decision, it is clear that the plant would have taken significant damage from even a relatively mild tsunami. The 11-Mar-11 tsunami topped the highest prediction by nearly 5 meters, resulting in a Level 7 nuclear accident and decades of recovery work. TEPCO executives have repeatedly stated that the conditions leading to the accident were “inconceivable”, and yet redundancy was built into the systems for just this type of event, and some planning for tsunami effects was put into the design. Clearly the event was neither unimaginable nor inconceivable, just underestimated.

Risk Perception

So why is it that tiny odds are seen as an acceptable risk, and even a reasonable likelihood, in one case, and a negligible chance in the other, particularly when the ignored case is the one that will have a significant negative outcome?

According to an article in Wikipedia [2], there are three main schools of thought when it comes to understanding risk perception: psychological, sociological and interdisciplinary. In a key early paper written in 1969 by Chauncey Starr [3], it was discovered that people would accept voluntary risks 1,000 times greater than involuntary risks. Later research has challenged these findings, showing the gap between voluntary and involuntary to be much narrower than Starr found.
Early psychometric research by Kahneman and Tversky showed that people use a number of heuristics to evaluate information. These heuristics include:

  • Representativeness;
  • Availability;
  • Anchoring and adjustment;
  • Asymmetry; and
  • Threshold effects.
This research showed that people tend to be risk-averse when it comes to gains, like the potential loss of savings from making risky investments, while they tend to accept risk easily when it comes to potential losses, preferring the hope of losing nothing over a certain but smaller loss. This may explain why low-probability, high-severity OHS risks are more often ignored, in the hope that lesser injuries will occur rather than the maximum predicted severity.
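The loss-framing effect described above can be shown with a two-line expected-value check. The dollar figures are invented for illustration; only the equality of the expected values matters.

```python
# Two options with identical expected value, framed as losses.
# The dollar amounts are assumed figures for illustration only.
sure_loss = -50.0                       # option A: certain loss of $50
gamble = 0.5 * (-100.0) + 0.5 * 0.0     # option B: 50% chance of losing $100
print(sure_loss == gamble)  # True: the expected losses are identical
# Experimentally, most people still choose the gamble, preferring the
# hope of losing nothing over a certain but smaller loss.
```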

Significant results also show that better information frequently has no effect on how risks are judged. More weight is put on risks with immediate, personal results than on those seen in longer time frames. Psychometric research has shown that risk perception is highly dependent on intuition, experiential thinking, and emotions. The research identified characteristics that may be condensed into three higher-order factors:

  1. the degree to which a risk is understood;
  2. the degree to which it evokes a feeling of dread; and
  3. the number of people exposed to the risk.

“Dread” describes a risk that elicits visceral feelings of impending catastrophe, terror and loss of control. The more a person dreads an activity, the higher its perceived risk and the more that person wants the risk reduced [4]. Fear is clearly a stronger motivator than any degree of information.

Considering the differing views of those studying risk perception, it’s no wonder that this is a challenging subject for safety practitioners!

Estimating Probability

Frequency and Duration

Some aspects of probability are not too difficult to estimate. Consider the Frequency or Duration of Exposure factor. At face value, this can be stated as “X cycles per hour” or “Y hours per week”. Depending on the hazard, there may be more complex exposure data, like that used when considering audible noise exposure. In that case, noise is often expressed as a time-weighted average (TWA), like “83 dB(A), 8 h TWA”, meaning 83 dB(A) averaged over 8 hours.
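As a sketch of how such a figure is derived, an 8-hour TWA can be computed with the equal-energy rule (3 dB exchange rate, as used in ISO 9612). The shift profile below is a hypothetical example, not data from this article.

```python
import math

# 8-hour time-weighted average (TWA) of noise exposure using the
# equal-energy (3 dB exchange rate) rule. The shift profile is hypothetical.
def twa_8h(segments):
    """segments: list of (level_dBA, hours) pairs. Returns the 8 h TWA."""
    energy = sum(hours * 10 ** (level / 10) for level, hours in segments)
    return 10 * math.log10(energy / 8.0)

shift = [(85.0, 4.0), (80.0, 3.0), (75.0, 1.0)]  # hypothetical measurements
print(f"{twa_8h(shift):.1f} dB(A), 8 h TWA")  # 83.0 dB(A), 8 h TWA
```

Note that some jurisdictions (e.g. US OSHA) use a 5 dB exchange rate instead, which changes the arithmetic.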

Estimating the probability of a hazardous situation is usually not too tough either. This could be expressed as “15 minutes, once per day/shift” or “2 days, twice per year”.
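Such statements convert directly into a fraction of working time. A minimal sketch, assuming an 8-hour shift (the shift length is my assumption, not a figure from the text):

```python
# Convert "15 minutes, once per day/shift" into a fraction of the shift.
# The 8-hour shift length is an assumption for illustration.
exposure_minutes = 15.0
shift_minutes = 8 * 60
exposure_fraction = exposure_minutes / shift_minutes
print(f"{exposure_fraction:.1%} of the shift")  # 3.1% of the shift
```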


Estimating the probability of avoiding an injury in any given hazardous situation is MUCH more difficult, since the speed of occurrence, the ability to perceive the hazard, the knowledge of the exposed person, their ability to react in the situation, the level of training that they have, the presence of complementary protective measures, and many other factors come into play. The risk assessors’ depth of understanding of the hazard and the details of the hazardous situation is critical to a sound assessment of the risk involved.
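Many risk-scoring tools combine these sub-factors into a single ordinal value. The sketch below is NOT a method defined in ISO 12100; the scales, the additive combination, and the thresholds are all assumptions for illustration.

```python
# Hypothetical ordinal scoring of the probability sub-factors discussed
# above. Scales, weighting and thresholds are illustrative assumptions,
# not taken from ISO 12100.
def probability_score(exposure: int, event: int, avoidance_difficulty: int) -> int:
    """Each sub-factor scored 1 (low) to 5 (high); returns 3-15."""
    for s in (exposure, event, avoidance_difficulty):
        if not 1 <= s <= 5:
            raise ValueError("sub-factor scores must be between 1 and 5")
    return exposure + event + avoidance_difficulty

def risk_level(severity: int, probability: int) -> str:
    """severity 1-4 combined with a probability score of 3-15."""
    product = severity * probability
    if product >= 40:
        return "high"
    if product >= 20:
        return "medium"
    return "low"

score = probability_score(exposure=4, event=3, avoidance_difficulty=4)
print(risk_level(severity=4, probability=score))  # high
```

The point of such a sketch is only that the avoidance sub-factor carries as much weight as the others, which is exactly where the estimate is least reliable.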

The Challenge

The challenge for safety practitioners is twofold:

  1. As practitioners, we must try to overcome our biases when conducting risk assessment work, and where we cannot overcome those biases, we must at least acknowledge them and the effects they may produce in our work; and
  2. We must try to present the risks in terms that the exposed people can understand, so that they can make a reasoned choice for their own personal safety.

I don’t suggest that this is easy, nor do I advocate “dumbing down” the information! I do believe that risk information can be presented to non-technical people in ways that let them understand the critical points.

Risk assessment techniques are becoming fundamental in all areas of design. As safety practitioners, we must be ready to conduct risk assessments using sound techniques, be aware of our biases, and be patient in communicating the results of our analysis to everyone that may be affected.


[1] “Safety of Machinery — General Principles for Design — Risk Assessment and Risk Reduction”, ISO 12100, Figure 3, ISO, Geneva, 2010.
[2] “Risk Perception”, Wikipedia, accessed 19/20-May-2011.
[3] Chauncey Starr, “Social Benefit versus Technological Risk”, Science, Vol. 165, No. 3899 (19-Sep-1969), pp. 1232–1238.
[4] Paul Slovic, Baruch Fischhoff, Sarah Lichtenstein, “Why Study Risk Perception?”, Risk Analysis 2(2) (1982), pp. 83–93.
Series Navigation: ← The purpose of risk assessment | What did TEPCO know about Fukushima before 11-Mar-11? →