ISO Withdraws Machinery Risk Assessment Standards

This entry is part 1 of 8 in the series Risk Assessment

ISO has withdrawn three long-standing basic machinery safety standards used internationally and in the EU and replaced them with a single combined document. If you design, build or integrate machinery for sale internationally or within the EU, this new standard needs to be on your BUY list!

ISO 14121-1 Withdrawn, along with ISO 12100-1 and -2

As of 20-Oct-2010, three standards have been replaced by the new ISO 12100:2010, Safety of machinery — General principles for design — Risk assessment and risk reduction: ISO 14121-1, Safety of machinery — Risk assessment — Part 1: Principles; ISO 12100-1, Safety of machinery — Basic concepts, general principles for design — Part 1: Basic terminology and methodology; and ISO 12100-2, Safety of machinery — Basic concepts, general principles for design — Part 2: Technical principles. The new standard blends three fundamental Type A machinery standards into one coherent whole, which means that machinery designers now have the fundamental design requirements for all machinery in one document. The only exception is ISO/TR 14121-2:2007, Safety of machinery — Risk assessment — Part 2: Practical guidance and examples of methods. This Technical Report stands as guidance for risk assessment and provides a number of examples of the different methods used to assess machinery risk.


This abstract is taken from the ISO web catalogue page for the new standard.

ISO 12100:2010 specifies basic terminology, principles and a methodology for achieving safety in the design of machinery. It specifies principles of risk assessment and risk reduction to help designers in achieving this objective. These principles are based on knowledge and experience of the design, use, incidents, accidents and risks associated with machinery. Procedures are described for identifying hazards and estimating and evaluating risks during relevant phases of the machine life cycle, and for the elimination of hazards or sufficient risk reduction. Guidance is given on the documentation and verification of the risk assessment and risk reduction process.

ISO 12100:2010 is also intended to be used as a basis for the preparation of type-B or type-C safety standards.

It does not deal with risk and/or damage to domestic animals, property or the environment.

Table of Contents

Here is the table of contents from the standard as published.



1 Scope

2 Normative references

3 Terms and definitions

4 Strategy for risk assessment and risk reduction

5 Risk assessment

5.1 General

5.2 Information for risk assessment

5.3 Determination of limits of machinery

5.3.1 General

5.3.2 Use limits

5.3.3 Space limits

5.3.4 Time limits

5.3.5 Other limits

5.4 Hazard identification

5.5 Risk estimation

5.5.1 General

5.5.2 Elements of risk

5.5.3 Aspects to be considered during risk estimation

5.6 Risk evaluation

5.6.1 General

5.6.2 Adequate risk reduction

5.6.3 Comparison of risks

6 Risk reduction

6.1 General

6.2 Inherently safe design measures

6.2.1 General

6.2.2 Consideration of geometrical factors and physical aspects

6.2.3 Taking into account general technical knowledge of machine design

6.2.4 Choice of appropriate technology

6.2.5 Applying principle of positive mechanical action

6.2.6 Provisions for stability

6.2.7 Provisions for maintainability

6.2.8 Observing ergonomic principles

6.2.9 Electrical hazards

6.2.10 Pneumatic and hydraulic hazards

6.2.11 Applying inherently safe design measures to control systems

6.2.12 Minimizing probability of failure of safety functions

6.2.13 Limiting exposure to hazards through reliability of equipment

6.2.14 Limiting exposure to hazards through mechanization or automation of loading (feeding) / unloading (removal) operations

6.2.15 Limiting exposure to hazards through location of setting and maintenance points outside danger zones

6.3 Safeguarding and complementary protective measures

6.3.1 General

6.3.2 Selection and implementation of guards and protective devices

6.3.3 Requirements for design of guards and protective devices

6.3.4 Safeguarding to reduce emissions

6.3.5 Complementary protective measures

6.4 Information for use

6.4.1 General requirements

6.4.2 Location and nature of information for use

6.4.3 Signals and warning devices

6.4.4 Markings, signs (pictograms) and written warnings

6.4.5 Accompanying documents (in particular, instruction handbook)

7 Documentation of risk assessment and risk reduction

Annex A (informative) Schematic representation of a machine

Annex B (informative) Examples of hazards, hazardous situations and hazardous events

Annex C (informative) Trilingual lookup and index of specific terms and expressions used in ISO 12100


Buying Advice

This is a significant change to these three standards. The revision to the text was substantial, at least in that the material has been reorganized into a single, coherent document. If you are basing a CE Mark on these standards, you should strongly consider purchasing the harmonized version when it becomes available at your favourite retailer. The ISO version is available now in English and French, as a hard copy or PDF document, priced at 180 CHF (Swiss Francs), or about CA$175.

As of this writing, CEN has adopted EN ISO 12100:2010, with a published "dow" (date of withdrawal) of 30-Nov-2013. The "doc" (date of cessation) will be published in a future list of harmonized standards in the Official Journal of the European Union under the Machinery Directive 2006/42/EC.

My recommendation is to BUY this standard if you are a machine builder. If you are CE marking your product you may want to wait until the harmonized edition is published; however, it is worth knowing that technical changes to the normative content of the standard are very unlikely when harmonization occurs.

How Risk Assessment Fails

Fukushima Dai Ichi Power Plant after the explosions

The events unfolding at Japan's Fukushima Dai Ichi nuclear power plant are a case study in the ways that the risk assessment process can fail or be abused. In a recently published article, Jason Clenfield itemizes decades of fraud and failures in engineering and administration that have led to the catastrophic failure of four of six reactors at the 40-year-old Fukushima plant. Clenfield's article, "Disaster Caps Faked Reports", goes on to cover similar failures across the Japanese nuclear sector.

Most people believe that the more serious the public danger, the more carefully the risks are considered in the design and execution of projects like the Fukushima plant. Clenfield's article points to failures by a number of major international businesses involved in the design and manufacture of components for these reactors that may have contributed to the catastrophe playing out in Japan. In some cases, the correct actions could have bankrupted the companies involved, so rather than risk financial failure, these failures were covered up and the workers involved rewarded for their efforts. As you will see, sometimes the degree of care that we have a right to expect is not the level of care that is used.

How does this relate to the failure and abuse of the risk assessment process? Read on!

Risk Assessment Failures

Earthquake and tsunami damage - Fukushima Dai Ichi Power Plant

The Fukushima Dai Ichi nuclear plant was constructed in the late 1960s and early 1970s, with Reactor #1 going on-line in 1971. The reactors at this facility use "active cooling", requiring electrically powered cooling pumps to run continuously to keep the core temperatures in the normal operating range. As you will have seen in recent news reports, the plant is located on the shore, drawing water directly from the Pacific Ocean.

Learn more about the Boiling Water Reactors used at Fukushima.

Read IEEE Spectrum's "24 Hours at Fukushima", a blow-by-blow account of the first 24 hours of the disaster.

Japan is located along one of the most active fault lines in the world, with plate subduction rates exceeding 90 mm/year. Earthquakes are so commonplace in this area that the Japanese people consider Japan to be the "land of earthquakes", starting earthquake safety training in kindergarten.

Japan is the country that gave us the word "tsunami", because the effects of sub-sea earthquakes often include large waves that swamp the shoreline. These waves affect all countries bordering the world's oceans, but are especially prevalent where strong earthquakes are frequent.

In this environment it would be reasonable to expect that earthquake and tsunami effects would merit the highest consideration when assessing the related risks. Remembering that risk is a function of the severity of consequence and the probability of occurrence, the risk assessed for earthquake and tsunami should have been critical: loss of cooling can result in catastrophic overheating of the reactor core, potentially leading to a core meltdown.
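
The idea that risk combines severity and probability can be sketched numerically. The 1-to-5 scales and the multiplicative scoring rule below are illustrative assumptions of my own, not something ISO 12100 prescribes, but they show why a severe consequence should keep its risk ranking high even at a modest probability:

```python
def risk_score(severity: int, probability: int) -> int:
    """Combine severity and probability ratings (each on an assumed 1-5 scale)."""
    if not (1 <= severity <= 5 and 1 <= probability <= 5):
        raise ValueError("ratings must be between 1 and 5")
    return severity * probability

# A catastrophic consequence (5) at a modest probability (2) still ranks
# above a minor consequence (2) that occurs more often (3):
print(risk_score(5, 2))  # 10
print(risk_score(2, 3))  # 6
```

Any real assessment tool will define its own scales and combination rule; the point is only that severity must not be discounted because probability looks small.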

The Fukushima Dai Ichi plant was designed to withstand 5.7 m tsunami waves, even though a 6.4 m wave had hit the shore close by 10 years before the plant went on-line. The wave generated by the recent earthquake was 7 m. Although the plant was not washed away by the tsunami, the wave created another problem.

Now consider that the reactors require constant forced cooling using electrically powered pumps. The backup generators, installed to ensure that the cooling pumps remain operational even if mains power to the plant is lost, were installed in a basement subject to flooding. When the tsunami hit the seawall and spilled over the top, the floodwaters poured into the backup generator room, knocking out the diesel backup generators. With no power to run the pumps, the reactor cores began to overheat. Although the reactors survived the earthquake and the tsunami, without power for the cooling pumps the plant was in trouble.

Learn more about the accident.

Clearly there was a failure of reason when assessing the risks related to the loss of cooling capability in these reactors. With systems that are as mission-critical as these, multiple levels of redundancy beyond a single backup system are often the minimum required.

In another plant in Japan, a section of piping carrying superheated steam from the reactor to the turbines ruptured, injuring a number of workers. The pipe had been installed when the plant was new and had never been inspected since installation, because it was left off the safety inspection checklist. This is an example of a failure that resulted from blindly following a checklist without looking at the larger picture. There can be no doubt that someone at the plant noticed that other pipe sections were inspected regularly while this particular section was skipped, yet no change in the process resulted.

Here again, the risk was not recognized even though it was clearly understood with respect to other sections of pipe in the same plant.

In another situation at a nuclear plant in Japan, drains inside the containment area of a reactor were not plugged at the end of the installation process. As a result, a small spill of radioactive water was released into the sea instead of being properly contained and cleaned up. The risk was well understood, but the control procedure for this risk was not implemented.

Finally, the Kashiwazaki Kariwa plant was constructed along a major fault line. The designers used figures for the maximum seismic acceleration that were three times lower than the accelerations the fault could create. Regulators permitted the plant to be built even though this relative weakness in the design was known.

Failure Modes

I believe that there are a number of reasons why these kinds of failures occur.

People have a difficult time appreciating the meaning of probability. Probability is a key factor in determining the degree of risk from any hazard, yet when figures like "1 in 1000" or "1 × 10⁻⁵ occurrences per year" are discussed, it's hard for people to truly grasp what these numbers mean. Likewise, when more subjective scales are used it can be difficult to really understand what "likely" or "rarely" actually mean.

Consequently, even in cases where the severity may be very high, the risk related to a particular hazard may be neglected simply because the probability seems to be low.

When probability is discussed in terms of time, a figure like "1 × 10⁻⁵ occurrences per year" can make the chance of an occurrence seem distant, and therefore less of a concern.
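
One way to make such a rate concrete is to convert it into the chance of at least one occurrence over a plant's working life. The sketch below assumes occurrences follow a Poisson process, which is a common modelling assumption rather than anything a standard requires; the 40-year horizon simply echoes the age of the Fukushima plant:

```python
import math

def prob_at_least_one(rate_per_year: float, years: float) -> float:
    """P(at least one occurrence) over a time horizon, assuming a Poisson process."""
    return 1.0 - math.exp(-rate_per_year * years)

# "1 x 10^-5 occurrences per year" sounds remote, but over a 40-year
# operating life it is roughly a 1-in-2500 chance:
p = prob_at_least_one(1e-5, 40)
print(f"{p:.2e}")  # 4.00e-04
```

Expressed per lifetime rather than per year, the same number suddenly feels much less abstract.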

Most risk assessment approaches deal with hazards singly. This is done to simplify the assessment process, but it obscures the effects that multiple or cascading failures can create. In a multiple-failure condition, several protective measures fail simultaneously from a single cause (sometimes called Common Cause Failure). In this case, backup measures may fail from the same cause, leaving no protection from the hazard.

In a cascading failure, an initial failure is followed by a series of further failures, resulting in the partial or complete loss of the protective measures, and therefore partial or complete exposure to the hazard. Reasonably foreseeable combinations of failure modes in mission-critical systems must be considered and the probability of each estimated.
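
The effect of a common cause on redundancy can be illustrated with the beta-factor model, in which a fraction beta of one channel's failures is assumed to disable both channels at once. The failure probability and beta value below are illustrative assumptions, not data from the plants discussed:

```python
# Why common cause failure undermines redundancy: a beta-factor sketch.

def redundant_pair_failure(q: float, beta: float = 0.0) -> float:
    """Failure probability of a 1-out-of-2 redundant system.

    q    -- failure probability of one channel
    beta -- fraction of failures shared by both channels (common cause)
    """
    common_part = beta * q                       # both channels lost together
    independent_part = ((1.0 - beta) * q) ** 2   # both fail independently
    return common_part + independent_part

q = 1e-3
print(redundant_pair_failure(q))            # about 1e-06: the naive, independent estimate
print(redundant_pair_failure(q, beta=0.1))  # about 1e-04: two orders of magnitude worse
```

Even a modest shared-cause fraction erases most of the benefit of duplication, which is exactly what flooding both the mains supply and the basement generators did at Fukushima.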

Combinations of hazards can produce synergies, resulting in a higher level of severity than any one of the hazards presents singly. Reasonably foreseeable combinations of hazards and their potential synergies must be identified and the risk estimated.

Oversimplification of the hazard identification and analysis processes can result in overlooking hazards or underestimating the risk.

Thinking about the Fukushima Dai Ichi plant again: the effects of the earthquake, combined with the impact of the tsunami wave, resulted in the loss of primary power to the plant, followed by the loss of backup power from the backup generators, and the subsequent partial meltdowns and explosions. This combination of earthquake and tsunami was well known, not some "unimaginable" or "unforeseeable" situation. When conducting risk assessments, all reasonably foreseeable combinations of hazards must be considered.

Abuse and neglect

The risk assessment process is subject to abuse and neglect. Risk assessment has been used by some as a means to justify exposing workers and the public to risks that should not have been permitted. Skewing the results of a risk assessment, either by underestimating the risk initially or by overestimating the effectiveness and reliability of control measures, can lead to this situation. Decisions relating to the "tolerability" or "acceptability" of risks when the severity of the potential consequences is high should be approached with great caution. In my opinion, unless you are personally willing to take the risk you are proposing to accept, it cannot be considered either tolerable or acceptable, regardless of the legal limits that may exist.

In the case of the Japanese nuclear plants, the operators have publicly admitted to falsifying inspection and repair records, some of which have resulted in accidents and fatalities.

In 1990, the US Nuclear Regulatory Commission wrote a report on the Fukushima Dai Ichi plant that predicted the exact scenario that resulted in the current crisis. These findings were shared with the Japanese authorities and the operators, but no one in a position of authority took them seriously enough to act. Relatively simple and low-cost protective measures, like increasing the height of the protective sea wall along the coastline and moving the backup generators to high ground, could have prevented a national catastrophe and the complete loss of the plant.

A Useful Tool

Despite these human failings, I believe that risk assessment is an important tool. Increasingly sophisticated technology has rendered "common sense" useless in many cases, because people do not have the expertise to have any common sense about the hazards related to these technologies.

Where hazards are well understood, they should be controlled with the simplest, most direct and effective measures available. In many cases this can be done by the people who first identify the hazard.

Where hazards are not well understood, bringing in experts with the knowledge to assess the risk and implement appropriate protective measures is the right approach.

The common aspect in all of this is the identification of hazards and the application of some sort of control measures. Risk assessment should not be neglected simply because it is sometimes difficult, or because it can be done poorly, or because the results can be neglected or ignored. We need to improve what we do with the results of these efforts, rather than neglect to do them at all.

In the meantime, the Japanese, and the world, have some cleanup to do.

The Problem with Probability

Risk Factors


There are two key factors that need to be understood when assessing risk: Severity and Probability (or Likelihood). Sometimes the term "consequence" is used instead of "severity"; in machinery risk assessment they can be considered synonyms. Severity seems to be fairly well understood: most people can fairly easily imagine what reaching into a spinning blade might do to the hand doing the reaching. A problem arises when there is an insufficient understanding of the hazard, but that's the subject for another post.


Probability or likelihood describes the chance that an injury or a hazardous situation will occur. "Probability" is used when numeric data is available and the probability can be calculated, while "likelihood" is used when the assessment is subjective. The probability factor is often broken down further into three sub-factors, as seen in Figure 3 of ISO 12100 [1].

There is No Reality, only Perception…

Whether you use probability or likelihood in your assessment, there is a fundamental problem with people's perception of these factors. People have a difficult time appreciating the meaning of probability. Probability is a key factor in determining the degree of risk from any hazard, yet when figures like "1 in 1000" or "1 × 10⁻⁵ occurrences per year" are discussed, it's hard for people to truly grasp what these numbers mean. When probability is discussed as a rate, a figure like "1 × 10⁻⁵ occurrences per year" can make the chance of an occurrence seem inconceivably distant, and therefore less of a concern. Likewise, when more subjective scales are used it can be difficult to really understand what "likely" or "rarely" actually mean. Consequently, even in cases where the severity may be very high, the risk related to a particular hazard may be neglected if the probability is deemed low.

To see the other side, consider people's attitude when it comes to winning a lottery. Most people will agree that "someone will win", and the infinitesimal probability of winning is seen as significant. The same odds given in relation to a negative outcome might be seen as "infinitesimally small", and therefore negligible.

For example, consider the decisions made by the Tokyo Electric Power Corporation (TEPCO) when they constructed the Fukushima Dai Ichi nuclear power plant. TEPCO engineers and scientists assessed the site in the 1960s and decided that a 10 meter tsunami was a realistic possibility there. They decided to build the reactors, turbines and backup generators 10 meters above the surrounding sea level, then located the system-critical condensers in the seaward yard of the plant at a level below 10 meters. To protect that critical equipment they built a 5.7 meter high seawall, almost 50% shorter than the predicted tsunami height! While I don't know what rationale was used to support this design decision, it is clear that the plant would have taken significant damage from even a relatively mild tsunami. The 11-Mar-2011 tsunami topped the highest prediction by nearly 5 meters, resulting in a Level 7 nuclear accident and decades of recovery work. TEPCO executives have repeatedly stated that the conditions leading to the accident were "inconceivable", and yet redundancy was built into the systems for just this type of event, and some planning for tsunami effects was put into the design. Clearly the event was neither unimaginable nor inconceivable, just underestimated.

Risk Perception

So why is it that tiny odds are seen as an acceptable risk, and even a reasonable likelihood, in one case, and a negligible chance in the other, particularly when the ignored case is the one that will have a significant negative outcome?

According to an article in Wikipedia [2], there are three main schools of thought when it comes to understanding risk perception: psychological, sociological and interdisciplinary. A key early paper written in 1969 by Chauncey Starr [3] found that people would accept voluntary risks 1000 times greater than involuntary risks. Later research has challenged these findings, showing the gap between voluntary and involuntary risk acceptance to be much narrower than Starr found.
Early psychometric research by Kahneman and Tversky showed that people use a number of heuristics to evaluate information. These heuristics included:
  • Representativeness;
  • Availability;
  • Anchoring and Adjustment;
  • Asymmetry; and
  • Threshold effects.
This research showed that people tend to be risk-averse with respect to gains, like the potential loss of savings from risky investments, while they tend to accept risk easily when it comes to potential losses, preferring the hope of losing nothing over a certain but smaller loss. This may explain why low-probability, high-severity OHS risks are so often ignored, in the hope that lesser injuries will occur rather than the maximum predicted severity.

Significant results also show that better information frequently has no effect on how risks are judged. More weight is put on risks with immediate, personal results than on those seen in longer time frames. Psychometric research has shown that risk perception is highly dependent on intuition, experiential thinking, and emotions. The research identified characteristics that may be condensed into three high-order factors:

  1. the degree to which a risk is understood;
  2. the degree to which it evokes a feeling of dread; and
  3. the number of people exposed to the risk.

"Dread" describes a risk that elicits visceral feelings of impending catastrophe, terror and loss of control. The more a person dreads an activity, the higher its perceived risk and the more that person wants the risk reduced [4]. Fear is clearly a stronger motivator than any amount of information.

Considering the differing views of those studying risk perception, it's no wonder that this is a challenging subject for safety practitioners!

Estimating Probability

Frequency and Duration

Some aspects of probability are not too difficult to estimate. Consider the Frequency or Duration of Exposure factor. At face value this can be stated as "X cycles per hour" or "Y hours per week". Depending on the hazard, there may be more complex exposure data, like that used when considering audible noise exposure. In that case, noise is often expressed as a time-weighted average (TWA), like "83 dB(A), 8 h TWA", meaning 83 dB(A) averaged over 8 hours.
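
As a sketch of how such a TWA can be computed, the function below applies the equal-energy rule (a 3 dB exchange rate, as used in ISO-style assessments; some jurisdictions use a different exchange rate). The exposure segments are made-up values for illustration:

```python
import math

def twa_8h(segments: list[tuple[float, float]]) -> float:
    """8-hour time-weighted average noise level, equal-energy rule.

    segments -- (hours, level_dBA) pairs making up the working day
    """
    # Convert each level to relative sound energy, weight by duration,
    # average over the 8-hour reference period, then convert back to dB.
    energy = sum(hours * 10 ** (level / 10.0) for hours, level in segments)
    return 10.0 * math.log10(energy / 8.0)

# 4 h at 85 dB(A) plus 4 h at 80 dB(A) averages to about 83.2 dB(A):
print(round(twa_8h([(4, 85.0), (4, 80.0)]), 1))  # 83.2
```

Note how the louder half of the day dominates the average: decibels are logarithmic, so the quieter segment contributes far less energy than its duration suggests.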

Estimating the probability of a hazardous situation is usually not too tough either. This could be expressed as "15 minutes, once per day / shift" or "2 days, twice per year".


Estimating the probability of avoiding an injury in any given hazardous situation is MUCH more difficult, since the speed of occurrence, the ability to perceive the hazard, the knowledge of the exposed person, their ability to react in the situation, their level of training, the presence of complementary protective measures, and many other factors come into play. The risk assessors' depth of understanding of the hazard and of the details of the hazardous situation is critical to a sound assessment of the risk involved.

The Challenge

The challenge for safety practitioners is twofold:

  1. As practitioners, we must try to overcome our biases when conducting risk assessment work, and where we cannot overcome those biases, we must at least acknowledge them and the effects they may produce in our work; and
  2. We must try to present the risks in terms that the exposed people can understand, so that they can make a reasoned choice for their own personal safety.

I don't suggest that this is easy, nor do I advocate "dumbing down" the information! I do believe that risk information can be presented to non-technical people in ways that let them understand the critical points.

Risk assessment techniques are becoming fundamental in all areas of design. As safety practitioners, we must be ready to conduct risk assessments using sound techniques, be aware of our biases, and be patient in communicating the results of our analysis to everyone that may be affected.


[1] "Safety of Machinery — General Principles for Design — Risk Assessment and Risk Reduction", ISO 12100, Figure 3, ISO, Geneva, 2010.
[2] "Risk Perception", Wikipedia, accessed 19/20-May-2011.
[3] Chauncey Starr, "Social Benefit versus Technological Risk", Science, Vol. 165, No. 3899 (19-Sep-1969), pp. 1232–1238.
[4] Paul Slovic, Baruch Fischhoff and Sarah Lichtenstein, "Why Study Risk Perception?", Risk Analysis 2(2) (1982), pp. 83–93.