Understanding Risk Assessment

When people discuss 'risk', a lot of different assumptions are made about what that means. For me, the study of risk and risk assessment techniques started in 1995. As a technologist and controls designer, I had to somehow wrap my head around the whole concept in ways I'd never considered. If you're trying to figure out risk and risk assessment, this is a good place to get started!

What is risk?

From a machinery perspective, ISO 12100:2010 defines risk as:

"combination of the probability of occurrence of harm and the severity of that harm"

Risk can have positive or negative outcomes, but when considering safety, we only consider negative risk, or events that result in negative health effects for the people exposed.

The risk relationship is illustrated in ISO 12100:2010, Figure 3:


ISO 12100:2010, Figure 3


where:

R = Risk

S = Severity of Harm

P = Probability of Occurrence of Harm

The Probability of Occurrence of Harm factor is often broken down further into three sub-factors (see the sketch after this list for one way they might be combined):

  • Probability of Exposure to the hazard
  • Probability of Occurrence of the Hazardous Event
  • Probability of Limiting or Avoiding the Harm
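
To make the relationship concrete, here is a minimal sketch (in Python) of how a scoring tool might combine these parameters. The numeric scales, the additive treatment of the probability sub-factors and the multiplicative combination with severity are illustrative assumptions, not something prescribed by ISO 12100.

```python
# Illustrative only: a hypothetical scoring model, not the ISO 12100 method.
from dataclasses import dataclass

@dataclass
class HazardScore:
    severity: int   # S: severity of harm, e.g. 1 (minor) to 4 (fatality)
    exposure: int   # probability/frequency of exposure to the hazard, e.g. 1 to 4
    event: int      # probability of occurrence of the hazardous event, e.g. 1 to 4
    avoidance: int  # probability of limiting/avoiding harm, e.g. 1 (easy) to 4 (impossible)

    def probability(self) -> int:
        # P is built from the three sub-factors; summing them is one common convention.
        return self.exposure + self.event + self.avoidance

    def risk(self) -> int:
        # R is a combination of S and P; a simple product is used here for illustration.
        return self.severity * self.probability()

# Example: serious injury, frequent exposure, likely event, hard to avoid.
print(HazardScore(severity=3, exposure=4, event=3, avoidance=3).risk())  # 30
```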

How is risk measured?

In order to estimate risk, a scoring tool is needed. There is no one 'correct' scoring tool, and there are flaws in most scales that can result in blind spots where risks may be over- or under-estimated.

At the simplest level are 'screening' tools. These tools use very simple scales like 'High, Medium, Low' or 'A, B, C'. They are often used when doing a shop-floor inspection and are intended to provide a quick method of capturing observations and giving a gut-feel assessment of the risk involved. These tools should be used as a way to identify risks that need additional, detailed assessment. To get an idea of what a good screening tool can look like, have a look at the SOBANE Déparis system.

Every scoring tool requires a scale for each risk parameter included in the tool. For instance, consider the CSA tool described in CSA Z434:

CSA Z434-03, Table 1

As you can see, each parameter (Severity, Exposure and Avoidance) has a scale, with two possible selections for each parameter.

When considering the selection of a scoring tool, it's important to take some time to really examine the scales for each factor. The scale shown above has a glaring hole in one scale. See if you can spot it, and I'll tell you what I think a bit later in this post.

There are more than 350 different scales and methodologies available for assessing risk. You can find a good review of some of them in Bruce Main's textbook "Risk Assessment: Basics and Benchmarks", available from DSE online.

A similar, although different, tool is found in Annex A of ISO 13849-1. Note that this tool is provided in an Informative Annex. This means that it is not part of the body of the standard and is NOT mandatory. In fact, this tool was provided as an example of how a user could link the output of a risk assessment tool to the Performance Levels described in the normative text (the mandatory part) of the standard.

Consider creating your own scales. There is nothing wrong with determining what characteristics (parameters) you want to include in your risk assessment, and then assigning each parameter a numeric scale that you think is suitable: 1–10, 0–5, etc. Some scales may be inverted relative to others; for example, if the Severity scale runs from 0–10, the Avoidability scale might run from 10–0 (Unavoidable to Entirely Avoidable).
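
As a sketch of what defining your own scales might look like, the snippet below declares two hypothetical parameter scales, one of them inverted, with the definition of each point recorded next to its numeric value. The parameter names, ranges and wording are assumptions for illustration only.

```python
# Hypothetical custom scales for a home-grown scoring tool (illustrative assumptions).
SEVERITY = {  # 0 = no injury ... 10 = fatality
    0: "No injury",
    3: "First aid injury",
    6: "Lost-time injury",
    10: "Fatality",
}

AVOIDABILITY = {  # inverted scale: 10 = unavoidable ... 0 = entirely avoidable
    10: "Unavoidable",
    5: "Avoidable only under ideal conditions",
    0: "Entirely avoidable",
}

def describe(scale: dict, value: int) -> str:
    """Return the documented definition at or below the chosen scale value."""
    key = max(k for k in scale if k <= value)
    return scale[key]

# Record the definitions alongside the scores as part of the assessment record.
print(describe(SEVERITY, 7))      # "Lost-time injury"
print(describe(AVOIDABILITY, 8))  # "Avoidable only under ideal conditions"
```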

Once the scales in your tool have been defined, document the definitions as part of your assessment.

Who should conduct risk assessments?

In many organizations, I find that risk assessment has been delegated to one person. This is a major mistake for a number of reasons. Risk assessment is not a solo activity for a 'guru' in a lonely office somewhere!

Risk assessment is not a lot of fun to do, and since risk assessments can get quite involved, it represents a significant amount of work to put on one person. Also, leaving it to one person means that the assessment will necessarily be biased toward what that person knows, and may miss significant hazards because the assessor doesn't know enough about a given hazard to spot it and assess it properly.

Risk assessment requires multiple viewpoints from participants with varied expertise. This includes users, designers, engineers, lawyers and those who may have specialized knowledge of a particular hazard, like a Laser Safety Officer or a Radiation Safety Officer. The varied expertise of the people involved allows the committee to balance opinions on each hazard and develop a more reasoned assessment of the risk.

I recommend that risk assessment committees never have fewer than three members. Five is frequently a good number. Once you get beyond five, it becomes increasingly difficult to obtain consensus on each hazard. Also, consider the cost: as each committee member is added to the team, the cost of the assessment escalates quickly.

Training in risk assessment is crucial to success. Ensure that the individuals involved are trained, and that at least one has some previous experience in the practice so that they may guide the committee as needed.

When should a risk assessment be conducted?


Risk Assessment in the Lifetime of a Product


Risk assessment should begin at the start of a project, whether it's the design of a product, the development of a process or service, or the design of a new building. Understanding risk is critical to the design process. Costs for changes made at the beginning of a project are minimal compared to those that will be incurred to correct problems that could have been foreseen at the start. Risk assessment should start at the concept stage and be included at each subsequent stage in the development process. The accompanying graphic illustrates this idea.

Essentially, risk assessment is never finished until the product, process or service ceases to exist.

What tools are available?

As mentioned earlier in this post, the book "Risk Assessment: Basics and Benchmarks" provides an overview of roughly 350 different scoring tools. You can search the Internet and turn up quite a few as well. The key thing with all of these systems is that you will need to develop any software-based tools yourself. Depending on your comfort with software, this might be a spreadsheet, a word processing document, a database, or some other format that works for your application.
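
For example, a home-grown tool can be as simple as a flat file that the team fills in as the assessment proceeds. The sketch below writes a minimal risk-register CSV; the column names and the scoring convention are assumptions chosen for illustration, not taken from any particular standard.

```python
# A minimal, hypothetical risk-register file; the columns are assumptions.
import csv

COLUMNS = ["hazard", "task", "severity", "exposure", "avoidance", "risk_score", "notes"]

rows = [
    # risk_score here is just severity * (exposure + avoidance), for illustration
    {"hazard": "Pinch point at conveyor infeed", "task": "Clearing jams",
     "severity": 3, "exposure": 4, "avoidance": 3,
     "risk_score": 3 * (4 + 3), "notes": "Guarding review required"},
]

with open("risk_register.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=COLUMNS)
    writer.writeheader()
    writer.writerows(rows)
```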

There are a number of risk assessment software tools available as well, including ISI's CIRSMA™ and DSE's DesignSafe. As with the scoring tools, you need to be careful when evaluating them. Some have significant blind spots that may trip you up if you are not aware of their limitations.

Remember too that the output from the software can only be as good as the input data. The old saw "Garbage In, Garbage Out" holds true with risk assessment.

Where can you get training?

There are a few places to get training. Compliance InSight Consulting provides training to corporate clients and will be launching a series of web-based training services in 2011 that will allow individual learners to get training too.

The IEEE PSES operates a Risk Assessment Technical Committee that is open to the public as well. See the RATC web site.

The Answer to the Scale Question

The Exposure Scale in the CSA tool has a gap between E1 and E2. Looking at the definitions for each choice, notice that E1 is less than once per day or shift, while E2 is more than once per hour. Exposures that occur once per hour or less, but more than once per day, cannot be scored effectively using this scale.
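
A tiny sketch makes the gap obvious: a classifier written directly from the E1 and E2 definitions has a band of exposure frequencies that satisfies neither choice. The thresholds come from the definitions above; the function itself is only an illustration.

```python
# Exposure choices written literally from the E1/E2 definitions (illustrative).
def exposure_factor(exposures_per_shift: float, hours_per_shift: float = 8.0):
    per_hour = exposures_per_shift / hours_per_shift
    if exposures_per_shift < 1:
        return "E1"  # less than once per day or shift
    if per_hour > 1:
        return "E2"  # more than once per hour
    return None      # the gap: once per hour or less, but once per shift or more

print(exposure_factor(0.5))  # E1
print(exposure_factor(12))   # E2 (12 per 8-hour shift = 1.5 per hour)
print(exposure_factor(4))    # None -> cannot be scored with this scale
```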

Also, notice the Severity scale: S1 encompasses injuries requiring not more than basic first aid. One common question I get is, "Does that include CPR*?" This question comes up because most basic first aid courses taught in Canada include CPR as part of the course. There is no clear answer for this in the standard. The S2 factor extends from injuries requiring more than basic first aid, like a broken finger for instance, all the way to a fatality. Does it make sense to group this broad range of injuries together? This definition doesn't quite match the Province of Ontario's definition of a Critical Injury found in Regulation 834 either.

All of this points to the need to carefully assess the scales that you choose before you start the process. Choosing the wrong tool can skew your results in ways that you may not be very happy about.

*Cardio-Pulmonary Resuscitation

Author: Doug Nix

Doug Nix is Managing Director and Principal Consultant at Compliance InSight Consulting, Inc. (http://www.complianceinsight.ca) in Kitchener, Ontario, and is Lead Author and Managing Editor of the Machinery Safety 101 blog. Doug's work includes teaching machinery risk assessment techniques privately and through Conestoga College Institute of Technology and Advanced Learning in Kitchener, Ontario, as well as providing technical services and training programs to clients related to risk assessment, industrial machinery safety, safety-related control system integration and reliability, laser safety and regulatory conformity. Follow me on Academia.edu.


  • Hi Doug,
    Very good article on a subject that is as far reaching as it is broad. It is also one that, for a company initially starting out on this task, is very daunting. Not only where does one start, but then where does one end? All of the standards mentioned help in this process, but in the end the answers tend to be subjective in nature and are based on the knowledge of the person or individuals involved in the assessment itself.
    At the machinery manufacturing company I worked for as the Corporate Product Safety Manager for 25 years, I had the lead Mechanical Engineer, lead Electrical Engineer, the lead Hydraulic/Pneumatic Engineer and the lead Technical Writer involved with the risk assessments for each particular job from the beginning. As each machine progressed from the design phase to the assembly and testing phases, Service Technicians and Operators were also involved as now, what was designed and manufactured, was actually put to the test. Machinery manufacturers are not necessarily "process people" and most times the machines, once in the field, are changed and operated in different fashions than what was originally designed or intended. This in itself makes the risk assessment process more daunting as one looks into the foreseeability of something adverse happening.
    There are simply times where an incident "unforeseen" to the manufacturer happens. At that point it is time to re-evaluate your risk assessment for that particular machine, or at least that segment of your particular machine. That may point out that your machine is fine from a safety or risk standpoint, but that an operational or maintenance task needs to be addressed. Again, my feeling is that with most aspects of risk assessments being "subjective" in nature, it behooves the personnel doing the assessments to be well trained and versed in the machines themselves and the tasks required to operate and maintain them. And as with anything else, once you have a few risk assessments "under your belt" they become easier to do. I also agree with some of the comments you have received already and your responses to them. I can guarantee you that to some people breaking a finger or losing a fingernail may not be very significant, whereas to someone else it may be catastrophic. "Subjectivity" rears its ugly head again.

    • Mike, thanks for the kind words!

      You are absolutely right about how daunting getting started can be. I know that's how I felt when I first heard about risk assessment. There are so many more resources available now than there were when I got started in the mid-90s. 🙂

      I think that the key is in defining the intended use and the foreseeable misuses of the product. This allows the manufacturer to deal with what they know, and prevents them from having to try to 'blue sky' every possible crazy thing that someone might try to do. I think that products in the industrial marketplace are much more subject to unanticipated modifications and misuses than in the consumer market. This is because most plants have people on staff who can make changes, sometimes major changes, to machines in the workplace. These modifications often happen with a minimum of planning, and sometimes 'on the fly', bypassing the risk assessment and safety management processes altogether. In the consumer marketplace people sometimes do odd things with products, but rarely do they make the major changes that you see in industry. The other big issue is that machinery is often kept in service for long periods of time. 20–30 years is not unheard of for heavy machinery. A few years ago I had a client ask me to do a safety review on an 1100 ton power press that was built in 1932 and was still in service in 2005! In the consumer market, few products last beyond 15 years, so having very old products still in service is much less likely to occur.

      Risk assessment is inherently subjective. Even when there is hard data available, the final decisions are usually made with a degree of subjectivity. A judgement must be made, and judgements are subjective. The big challenge is that most of the time we have no hard data. Understanding the level of uncertainty in each assessment is important and difficult. The less hard data we have, the greater the uncertainty. Consequently, the outcome of much of the risk assessment work that is done is uncertain. When unforeseen things go wrong, it's really easy to point a finger at the risk assessment team and assume that they weren't competent because they didn't foresee whatever it was. Some incidents cannot be easily foreseen because they are only possible in certain, very rare circumstances, but they will still occur.

      Risk assessment gives us a chance to head off the foreseeable, and even some of the less-easily-foreseen injuries and incidents. That alone makes it worthwhile.

  • Frank Schrever

    Great summary Doug, especially the point about having a number of affected parties involved to minimise individual bias. I am always harping on this topic in my training courses. Most people are confused about risk assessment, and no wonder! Another key point we have to get across, I think, is that risk assessment is not just risk estimation, but also requires determining whether the risk has been controlled so far as is practicable, or if other control measures are required. This implies that the risk assessor knows what is possible to minimise risk (by design, not by human behaviour). We are running a series of half-day workshops on risk assessment around Oz this year with the IICA (our equivalent of the US ISA) and I will reference your material if that is OK Doug, cheers Frank

    • Thanks Frank! I'd be pleased to have you reference my material! Drop me an email offline, or call me when it's convenient!

  • Roberta Nelson Shea

    The metric shown from CSA Z434 is one that offers the greatest simplicity, as it is essentially "yes, no", without offering shades of gray. The issue of first aid was clarified in ANSI RIA R15.06 to mean that the distinction is based on what our OSHA classifies as being first aid versus a reportable. This was done, again, for the purpose of clarity and ease. CSA Z434 is based on ANSI RIA R15.06, hence the similarity.

    Once people become more familiar with risk assessment, they feel comfortable using models with shades of gray. One can use any metric, so long as, at the end, the standard and legal requirements are fulfilled. The grand-daddy of risk assessment is a MIL standard, which is still used today. It uses a scale of 4 for severity and a scale of 5 for probability (exposure and ability to avoid combined) to come to risk scores, which are then equated with actions required and management authority requirements. For severity, the "injury" potential is listed (4 grades), as well as property damage potential, environmental damage, and reputation damage, so that it is understood that there are multiple reasons for risk to an employer: employee injury, damage costs, environmental damage, and reputation damage. Any one of these triggers a certain reaction depending on the probability. There are a number of very good books on the topic of risk assessment.

    Both the ANSI RIA R15.06 and CSA Z434 risk assessment models are being updated to correlate with ISO 13849-1.

    Roberta

    • Thanks for your comments, Roberta! It's always good to hear your thoughts, particularly with your deep involvement with the RIA 15.06 standard.

      While I can appreciate the idea that the scales were developed for simplicity of use, the gap in the Exposure scale is one that many of my clients have found to be a problem. Hazards with exposure frequencies falling between the two factors in the scale can be very difficult to score, and the gap in the scale tends to add more uncertainty to scoring, leading to a possible loss of credibility for the output of the tool. I believe that we need to eliminate these gaps to make the tool useful, and to make the application of the tool more straightforward for the novice.

      Regarding the inclusion of CPR in the severity assessment: while RIA may have been able to clarify the requirement in the US based on OSHA's definition of what constitutes a reportable injury, this is not the case in Canada. Ontario's definition of a Critical Injury is different from many of the other Provinces and Territories, and none of these deal specifically with the inclusion of CPR. In Ontario, a loss of consciousness will result in the accident being reportable (follow the link in the post to Regulation 834), but this could occur with or without the person's heart or breathing stopping. This would tend to show that cases that require CPR are NOT included in 'Basic First Aid' type injuries. Also, the loss of a single finger or toe is NOT REPORTABLE in Ontario (!!) while it is in other jurisdictions. That might indicate that this type of injury should be considered to be a 'Basic First Aid' type of severity!! I don't know about you, but if I lose a finger or a toe at work you can bet that I'll be heading to the ER, and that will make the injury reportable in any case.

      I think the question of whether an injury is reportable or not is primarily a bureaucratic one, while the issues of how to classify the severity of injury are not. I believe that the two need to be kept separate and apart. While I would like it to be as clear-cut as what you indicate it is in the USA, that is not the case here.

      Thanks again for your comments! I really appreciate hearing from my readers!
