The Problem with Probability

This entry is part 3 of 8 in the series Risk Assessment

Risk Factors


There are two key factors that need to be understood when assessing risk: Severity and Probability (or Likelihood). Sometimes the term ‘consequence’ is used instead of ‘severity’, and in machinery risk assessment the two can be considered synonyms. Severity seems to be fairly well understood: most people can fairly easily imagine what reaching into a spinning blade might do to the hand doing the reaching. A problem arises when there is an insufficient understanding of the hazard, but that’s the subject for another post.


Probability or likelihood is used to describe the chance that an injury or a hazardous situation will occur. Probability is used when numeric data is available and the probability can be calculated, while likelihood is used when the assessment is subjective. The probability factor is often broken down further into three sub-factors, as seen in Figure 3 of ISO 12100 [1]: the frequency and duration of exposure to the hazard, the probability of occurrence of a hazardous event, and the possibility of avoiding or limiting the harm.
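To make the relationship between these factors concrete, here is a minimal scoring-style sketch in Python. The 1–4 scales, the additive combination of the probability sub-factors, and the function name are all illustrative assumptions of mine; ISO 12100 does not prescribe a numeric scoring scheme.

```python
# Illustrative only: a scoring-style risk estimate combining severity with
# the three probability sub-factors. The 1-4 scales and the additive
# combination are assumptions, not values taken from ISO 12100.
def risk_score(severity, exposure, event_probability, avoidability):
    """Each input is a subjective score from 1 (low) to 4 (high).

    avoidability is scored high when avoiding the harm is UNLIKELY.
    """
    return severity * (exposure + event_probability + avoidability)

# A severe hazard (4) with frequent exposure (3), a moderately likely
# hazardous event (2), and a good chance of avoidance (1):
print(risk_score(severity=4, exposure=3, event_probability=2, avoidability=1))
# → 24
```

Schemes like this are only as good as the judgment behind the scores, which is exactly the perception problem discussed next.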

There is No Reality, only Perception…

Whether you use probability or likelihood in your assessment, there is a fundamental problem with people’s perception of these factors. People have a difficult time appreciating the meaning of probability. Probability is a key factor in determining the degree of risk from any hazard, yet when figures like “1 in 1000” or “1 × 10⁻⁵ occurrences per year” are discussed, it’s hard for people to truly grasp what these numbers mean. When probability is discussed as a rate, a figure like “1 × 10⁻⁵ occurrences per year” can make the chance of an occurrence seem inconceivably distant, and therefore less of a concern. Likewise, when more subjective scales are used, it can be difficult to really understand what “likely” or “rarely” actually mean. Consequently, even in cases where the severity may be very high, the risk related to a particular hazard may be neglected if the probability is deemed low.
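One way to make a rate like “1 × 10⁻⁵ occurrences per year” more graspable is to reframe it over a population and a working lifetime. The workforce size and exposure period below are invented for illustration, not figures from any assessment:

```python
# Reframe a small annual rate in population terms (illustrative numbers only).
rate_per_person_year = 1e-5   # occurrences per exposed person per year
working_life_years = 40       # assumed exposure period per person
exposed_workers = 10_000      # assumed size of the exposed population

expected_events = rate_per_person_year * working_life_years * exposed_workers
print(round(expected_events, 6))  # → 4.0 expected occurrences
```

Expressed this way, a rate that sounds “inconceivably distant” for one person becomes several expected events across the exposed group.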

To see the other side, consider people’s attitude when it comes to winning a lottery. Most people will agree that “someone will win”, and the infinitesimal probability of winning is seen as significant. The same odds given in relation to a negative risk might be seen as ‘infinitesimally small’, and therefore negligible.

For example, consider the decisions made by the Tokyo Electric Power Company (TEPCO) when it constructed the Fukushima Daiichi nuclear power plant. TEPCO engineers and scientists assessed the site in the 1960s and decided that a 10 meter tsunami was a realistic possibility there. They decided to build the reactors, turbines and backup generators 10 meters above the surrounding sea level, then located the system-critical condensers in the seaward yard of the plant at a level below 10 meters. To protect that critical equipment they built a 5.7 meter high seawall, almost 50% shorter than the predicted tsunami height! While I don’t know what rationale was used to support this design decision, it is clear that the plant would have taken significant damage from even a relatively mild tsunami. The 11-Mar-11 tsunami topped the highest prediction by nearly 5 meters, resulting in a Level 7 nuclear accident and decades of recovery. TEPCO executives have repeatedly stated that the conditions leading to the accident were “inconceivable”, and yet redundancy was built into the systems for just this type of event, and some planning for tsunami effects was put into the design. Clearly the event was neither unimaginable nor inconceivable, just underestimated.

Risk Perception

So why is it that tiny odds are seen as an acceptable risk, or even a reasonable likelihood, in one case, and a negligible chance in the other, particularly when the ignored case is the one with a significant negative outcome?

According to an article in Wikipedia [2], there are three main schools of thought when it comes to understanding risk perception: psychological, sociological and interdisciplinary. In a key early paper written in 1969, Chauncey Starr [3] found that people would accept voluntary risks 1000 times greater than involuntary risks. Later research has challenged these findings, showing the gap between voluntary and involuntary risk acceptance to be much narrower than Starr found.
Early psychometric research by Kahneman and Tversky showed that people use a number of heuristics to evaluate information. These heuristics included:

  • Representativeness;
  • Availability;
  • Anchoring and Adjustment;
  • Asymmetry; and
  • Threshold effects.

This research showed that people tend to be risk-averse when it comes to gains, like the potential loss of savings from making risky investments, while they tend to accept risk easily when it comes to potential losses, preferring the hope of losing nothing over a certain but smaller loss. This may explain why low-probability, high-severity OHS risks are so often ignored, in the hope that lesser injuries will occur rather than the maximum predicted severity.

Significant results also show that better information frequently has no effect on how risks are judged. More weight is put on risks with immediate, personal consequences than on those seen over longer time frames. Psychometric research has shown that risk perception is highly dependent on intuition, experiential thinking, and emotion. The research identified characteristics that may be condensed into three higher-order factors:

  1. the degree to which a risk is understood;
  2. the degree to which it evokes a feeling of dread; and
  3. the number of people exposed to the risk.

“Dread” describes a risk that elicits visceral feelings of impending catastrophe, terror and loss of control. The more a person dreads an activity, the higher its perceived risk and the more that person wants the risk reduced [4]. Fear is clearly a stronger motivator than any amount of information.

Considering the differing views of those studying risk perception, it’s no wonder that this is a challenging subject for safety practitioners!

Estimating Probability

Frequency and Duration

Some aspects of probability are not too difficult to estimate. Consider the Frequency or Duration of Exposure factor. At face value this can be stated as “X cycles per hour” or “Y hours per week”. Depending on the hazard, there may be more complex exposure data, like that used when considering audible noise exposure. In that case, noise is often expressed as a time-weighted average (TWA), like “83 dB(A), 8 h TWA”, meaning 83 dB(A) averaged over 8 hours.
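As a concrete sketch of how such an average is computed, the function below uses the equal-energy (3 dB exchange rate) convention for combining noise levels; the shift profile shown is an assumed example, not data from this article:

```python
import math

# Equal-energy time-weighted average sound level, normalized to an
# 8-hour reference shift (3 dB exchange rate convention).
# segments: list of (level in dB(A), duration in hours) pairs.
def twa_dba(segments, reference_hours=8.0):
    energy = sum(hours * 10 ** (level / 10) for level, hours in segments)
    return 10 * math.log10(energy / reference_hours)

# An assumed shift: 6 h at 85 dB(A) plus 2 h at 75 dB(A).
print(round(twa_dba([(85, 6), (75, 2)]), 1))  # → 83.9
```

Note that the quieter 2-hour segment barely moves the result, because the decibel scale is logarithmic: the loud portion dominates the energy sum.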

Estimating the probability of a hazardous situation is usually not too tough either. This could be expressed as “15 minutes, once per day/shift” or “2 days, twice per year”.
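Statements like these convert readily into an annual exposure figure, which makes different hazardous situations easier to compare side by side. The number of shifts per year below is an assumption for illustration:

```python
# Convert "15 minutes, once per shift" into annual exposure hours.
minutes_per_occurrence = 15
occurrences_per_shift = 1
shifts_per_year = 250         # assumed single-shift, 5-day operation

annual_exposure_hours = (minutes_per_occurrence / 60
                         * occurrences_per_shift * shifts_per_year)
print(annual_exposure_hours)  # → 62.5
```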


Estimating the probability of avoiding an injury in any given hazardous situation is MUCH more difficult: the speed of occurrence, the ability to perceive the hazard, the knowledge of the exposed person, their ability to react in the situation, the level of training they have, the presence of complementary protective measures, and many other factors all come into play. The risk assessors’ depth of understanding of the hazard and of the details of the hazardous situation is critical to a sound assessment of the risk involved.

The Challenge

The challenge for safety practitioners is twofold:

  1. As practitioners, we must try to overcome our biases when conducting risk assessment work, and where we cannot overcome those biases, we must at least acknowledge them and the effects they may have on our work; and
  2. We must try to present risks in terms that the exposed people can understand, so that they can make a reasoned choice about their own personal safety.

I don’t suggest that this is easy, nor do I advocate “dumbing down” the information! I do believe that risk information can be presented to non-technical people in ways that let them understand the critical points.

Risk assessment techniques are becoming fundamental in all areas of design. As safety practitioners, we must be ready to conduct risk assessments using sound techniques, be aware of our biases, and be patient in communicating the results of our analysis to everyone who may be affected.


[1] “Safety of Machinery—General Principles for Design—Risk Assessment and Risk Reduction”, ISO 12100, Figure 3, ISO, Geneva, 2010.
[2] “Risk Perception”, Wikipedia, accessed 19/20-May-2011.
[3] Chauncey Starr, “Social Benefits versus Technological Risks”, Science, Vol. 165, No. 3899 (Sep. 19, 1969), pp. 1232–1238.
[4] Paul Slovic, Baruch Fischhoff and Sarah Lichtenstein, “Why Study Risk Perception?”, Risk Analysis, 2(2) (1982), pp. 83–93.

Author: Doug Nix

Doug Nix is Managing Director and Principal Consultant at Compliance InSight Consulting, Inc. in Kitchener, Ontario, and is Lead Author and Senior Editor of the Machinery Safety 101 blog. Doug's work includes teaching machinery risk assessment techniques privately and through Conestoga College Institute of Technology and Advanced Learning in Kitchener, Ontario, as well as providing technical services and training programs to clients related to risk assessment, industrial machinery safety, safety-related control system integration and reliability, laser safety and regulatory conformity. For more see Doug's LinkedIn profile.