ISO 13849–1 Analysis — Part 7: Safety-Related Software

This entry is part 7 of 9 in the series How to do a 13849–1 analysis

Safety-Related Software

Up to this point, I have been discussing the basic processes used for the design of safety-related parts of control systems. The underlying assumption is that these techniques apply to the design of hardware used for safety purposes. The remaining question focuses on the design and development of safety-related software that runs on that hardware. If you have not read the rest of this series and would like to catch up first, you can find it here.

In this discussion of safety-related software, keep in mind that I am talking about software that is intended solely to reduce risk. Some platforms are not well suited for safety software, primarily common off-the-shelf (COTS) operating systems like Windows, macOS and Linux. Generally speaking, these operating systems are too complex and too subject to unanticipated changes to be suitable for high-reliability applications. There is nothing wrong with using these systems for annunciation and monitoring functions, but the safety functions should run on more predictable platforms.

The methodology discussed in ISO 13849–1 is usable up to PLd. At the end of the Scope we find Note 4:

NOTE 4 For safety-related embedded software for components with PLr = e, see IEC 61508–3:2010, Clause 7.

As you can see, for very high-reliability systems, i.e., PLe/SIL3 or SIL4, it is necessary to move to IEC 61508. The methods discussed here are based on ISO 13849–1:2015, Clause 4.6.

Goals

There are two goals for safety-related software development activities:

  1. Avoid faults
  2. Generate readable, understandable, testable and maintainable software

Avoiding Faults

Fig. 1 [1, Fig. 6] shows the “V-model” for software development. This approach to software design incorporates both validation and verification, and when correctly implemented will result in software that meets the design specifications.

If you aren’t sure what the difference is between verification and validation, the way I remember it is this: validation asks “Are we building the right thing?”, and verification asks “Did we build the thing right?” The whole process hinges on the Safety Requirement Specification (SRS), so failing to get that part of the process right at the beginning will negatively impact both hardware and software design. The SRS is the yardstick used to decide whether you built the right thing. Without it, you have no way of knowing what you are supposed to be building.
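To make the idea concrete, a single SRS entry can be modelled as a small record. This is an illustrative sketch only: the field names below are invented for the example, and ISO 13849–1 does not prescribe this structure.

```python
from dataclasses import dataclass

# Illustrative only: these field names are invented for this sketch;
# ISO 13849-1 does not prescribe an SRS record format.
@dataclass
class SafetyFunctionSpec:
    name: str          # e.g. "Emergency stop"
    trigger: str       # the event that places a demand on the function
    safe_state: str    # the state the function must achieve
    required_pl: str   # PLr from the risk assessment, "a" through "e"

srs_entry = SafetyFunctionSpec(
    name="Emergency stop",
    trigger="E-stop button pressed",
    safe_state="Hazardous motion stopped and power removed",
    required_pl="d",
)
print(srs_entry.required_pl)  # → d
```

Validation then asks whether each record captures what the risk assessment actually demands; verification checks the implementation against those records.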

Figure 1 — Simplified V-model of software safety lifecycle

Coming in from the Safety Requirement Specification (also called the safety function specification), each step in the process is shown. The dashed lines illustrate the verification process at each step. Notice that the actual coding step is at the bottom of the V-model. Everything above the coding stage is either planning and design, or quality assurance activities.
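The left-leg/right-leg pairing of the V-model can be sketched as data. The stage names below paraphrase the figure and are not normative text from the standard.

```python
# Each design stage on the left leg of the V is verified by a test
# stage on the right leg; coding sits at the bottom of the V.
# Stage names are paraphrased for illustration.
V_MODEL = [
    ("Safety requirements specification", "Validation"),
    ("System design",                     "Integration testing"),
    ("Module design",                     "Module testing"),
]

def verification_checklist(stages):
    """Pair every design stage with the activity that verifies it."""
    return [f"{design} -> verified by {test}" for design, test in stages]

for item in verification_checklist(V_MODEL):
    print(item)
```

The point of the structure is that no design stage is complete until its matching verification activity has a defined plan.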

Other methods can also produce verified and validated software, so if you have a QA process that delivers solid results, you may not need to change it. I would recommend that you review all the stages in the V-model to ensure that your QA system has similar processes.

To make setting up safety systems simpler for designers and integrators, there are two approaches to software design that can be used.

Two Approaches to Software Design

The two approaches to consider are:

  • Preconfigured (building-block style) software
  • Fully customised software

Preconfigured Building-Block Software

The preconfigured building-block approach is typically used for configuring safety PLCs or programmable safety relays or modules. This type of software is referred to as “safety-related embedded software (SRESW)” in [1].

Pre-written function blocks are provided by the device manufacturer. Each function block has a particular role: emergency stop, safety gate input, zero-speed detection, and so on. When configuring a safety PLC or safety modules that use this approach, the designer selects the appropriate block and then configures the inputs, outputs, and any other functional characteristics that are needed. The designer has no access to the safety-related code, so apart from configuration errors, no other errors can be introduced. The function blocks are verified and validated (V & V) by the controls component manufacturer, usually with the support of an accredited certification body. The function blocks will normally have a PL associated with them, and a statement like “suitable for PLe” will be made in the function block description.
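A hypothetical model of what configuring such a block involves is sketched below. Real safety PLC tools expose similar settings, but every name here is invented for illustration; the safety logic itself stays inside the vendor’s verified block.

```python
from dataclasses import dataclass

# Hypothetical model of a vendor-supplied e-stop function block.
# Field and method names are invented for this sketch; the verified
# safety logic remains inaccessible to the machine builder.
@dataclass
class EStopBlockConfig:
    input_channels: tuple           # e.g. ("I0.0", "I0.1") for dual-channel
    output: str                     # safety output the block drives
    dual_channel: bool = True
    discrepancy_time_ms: int = 500  # how long the channels may disagree

    def configuration_errors(self):
        """Mimic the checks a configuration tool runs before compiling."""
        errors = []
        if self.dual_channel and len(self.input_channels) != 2:
            errors.append("dual-channel block needs exactly two inputs")
        if not self.output:
            errors.append("no safety output assigned")
        return errors

block = EStopBlockConfig(input_channels=("I0.0", "I0.1"), output="Q0.0")
print(block.configuration_errors())  # → []
```

Only the configuration is in the designer’s hands, which is why the residual error sources are configuration mistakes rather than coding faults.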

This approach eliminates the need for a detailed V & V of the code by the designing entity (i.e., the machine builder). However, the machine builder is still required to do a V & V of the operation of the system as they have configured it. The machine V & V includes all the usual fault-injection tests and functional tests to ensure that the system will behave as intended in the presence of a demand on the safety function or a fault condition. The faults that should be tested are those in your Fault List. If you don’t have a Fault List or don’t know what a Fault List is, see Part 8 in this series.
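A minimal sketch of what walking a Fault List in a test harness might look like is shown below. The two-channel logic and the fault names are hypothetical stand-ins for a real configured system, where the faults would come from your own fault analysis.

```python
# De-energize-to-trip: the output stays on only while both channels
# report OK, so any single fault drops the output to the safe state.
# This toy logic stands in for the real configured safety function.
def two_channel_estop(ch1_ok, ch2_ok):
    return ch1_ok and ch2_ok

# A toy Fault List; a real one comes from your fault analysis.
FAULT_LIST = [
    ("channel 1 wire break",    (False, True)),
    ("channel 2 wire break",    (True, False)),
    ("demand on both channels", (False, False)),
]

def run_fault_injection(logic, fault_list):
    """Inject each fault and record whether the output went safe (False)."""
    return {name: logic(*inputs) is False for name, inputs in fault_list}

results = run_fault_injection(two_channel_estop, FAULT_LIST)
print(all(results.values()))  # → True: every fault drives the safe state
```

On a real machine the “injection” is physical (pulled wires, shorted channels, forced sensors), but the pass criterion is the same: every listed fault must leave the system in the safe state.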

Using preconfigured building blocks achieves the first goal, fault avoidance, at least as far as the software coding is concerned. The configuration software will validate the function block configurations before compiling the software for upload to the safety controller, so most configuration errors will be caught at that stage.

This approach also facilitates the second goal, as long as the configuration software is usable and is maintained by the software vendor. The configuration software usually includes the ability to annotate the configurations with relevant details to assist with the readability and understandability of the software.

Fully Customised Software

This approach is used where a fully customised hardware platform is being used, and the safety software is designed to run on that platform. [1] refers to this type of software as “safety-related application software (SRASW).” A fully customised software application is used where a very specialised safety system is contemplated, and FPGAs or other customised hardware is being used. These systems are usually programmed using full-variability languages.

In this case, the full hardware and software V & V approach must be employed. In my opinion, ISO 13849–1 is probably not the best choice for this approach due to its simplified methods, and I would usually recommend using IEC 61508–3 as the basis for the design, verification, and validation of fully customised software.

Process Requirements

Safety-Related Embedded Software (SRESW)

[1, 4.6.2] provides a laundry list of elements that must be incorporated into the V-model processes when developing SRESW, broken down for PLa through PLd, with some additional requirements for PLc and PLd.

If you are designing SRESW for PLe, [1, 4.6.2] points you directly to IEC 61508–3, Clause 7, which covers software suitable for SIL3 applications.

Safety-Related Application Software (SRASW)

[1, 4.6.3] provides a list of requirements that must be met through the V-model process for SRASW. It allows PLa through PLe to be met by code written in LVL, and PLe applications can also be designed using FVL. Where software is developed using FVL, it can be treated in the same way as embedded software (SRESW).

A similar architectural model to that used for single-channel hardware development is used, as shown in Fig. 2 [1, Fig. 7].

Figure 2 — General architecture model of software

The complete V-model must be applied to safety-related application software, with all of the additional requirements from [1, 4.6.3] included in the process model.

Conclusions

There is a lot to safety-related software development, certainly much more than could be discussed in a blog post like this or even in a standard like ISO 13849–1. If you are contemplating developing safety-related software and you are not familiar with the techniques needed to develop this kind of high-reliability software, I suggest you get help from a qualified developer. Keep in mind that there can be significant liability attached to safety-system failures, including the deaths of people using your product. If you are developing SRASW, I would also recommend following IEC 61508–3 as the basis for the development and related QA processes.

Definitions

3.1.36 application software
software specific to the application, implemented by the machine manufacturer, and generally containing logic sequences, limits and expressions that control the appropriate inputs, outputs, calculations and decisions necessary to meet the SRP/CS requirements
3.1.34 limited variability language
LVL
type of language that provides the capability of combining predefined, application-specific library functions to implement the safety requirements specifications
Note 1 to entry: Typical examples of LVL (ladder logic, function block diagram) are given in IEC 61131–3.
Note 2 to entry: A typical example of a system using LVL: PLC. [SOURCE: IEC 61511–1:2003, 3.2.80.1.2, modified.]
3.1.35 full variability language
FVL
type of language that provides the capability of implementing a wide variety of functions and applications
EXAMPLE: C, C++, Assembler.
Note 1 to entry: A typical example of systems using FVL: embedded systems.
Note 2 to entry: In the field of machinery, FVL is found in embedded software and rarely in application software. [SOURCE: IEC 61511–1:2003, 3.2.80.1.3, modified.]
3.1.37 embedded software
firmware
system software
software that is part of the system supplied by the control manufacturer and which is not accessible for modification by the user of the machinery
Note 1 to entry: Embedded software is usually written in FVL.
Field Programmable Gate Array (FPGA)
A field-programmable gate array (FPGA) is an integrated circuit designed to be configured by a customer or a designer after manufacturing – hence “field-programmable”. The FPGA configuration is generally specified using a hardware description language (HDL), similar to that used for an application-specific integrated circuit (ASIC). [22]

Book List

Here are some books that I think you may find helpful on this journey:

[0]     B. Main, Risk Assessment: Basics and Benchmarks, 1st ed. Ann Arbor, MI USA: DSE, 2004.

[0.1]  D. Smith and K. Simpson, Safety Critical Systems Handbook. Amsterdam: Elsevier/Butterworth-Heinemann, 2011.

[0.2]  Electromagnetic Compatibility for Functional Safety, 1st ed. Stevenage, UK: The Institution of Engineering and Technology, 2008.

[0.3]  Overview of Techniques and Measures Related to EMC for Functional Safety, 1st ed. Stevenage, UK: The Institution of Engineering and Technology, 2013.

References

Note: This reference list starts in Part 1 of the series, so “missing” references may appear in other parts of the series. The complete reference list is included in the last post of the series.

[1]     Safety of machinery — Safety-related parts of control systems — Part 1: General principles for design. 3rd Edition. ISO Standard 13849–1. 2015.

[2]     Safety of machinery — Safety-related parts of control systems — Part 2: Validation. 2nd Edition. ISO Standard 13849–2. 2012.

[3]     Safety of machinery — General principles for design — Risk assessment and risk reduction. ISO Standard 12100. 2010.

[4]     Safeguarding of Machinery. 2nd Edition. CSA Standard Z432. 2004.

[5]     Risk Assessment and Risk Reduction — A Guideline to Estimate, Evaluate and Reduce Risks Associated with Machine Tools. ANSI Technical Report B11.TR3. 2000.

[6]     Safety of machinery — Emergency stop function — Principles for design. ISO Standard 13850. 2015.

[7]     Functional safety of electrical/electronic/programmable electronic safety-related systems. 7 parts. IEC Standard 61508. Edition 2. 2010.

[8]     S. Jocelyn, J. Baudoin, Y. Chinniah, and P. Charpentier, “Feasibility study and uncertainties in the validation of an existing safety-related control circuit with the ISO 13849–1:2006 design standard,” Reliab. Eng. Syst. Saf., vol. 121, pp. 104–112, Jan. 2014.

[9]     Guidance on the application of ISO 13849–1 and IEC 62061 in the design of safety-related control systems for machinery. IEC Technical Report TR 62061–1. 2010.

[10]    Safety of machinery — Functional safety of safety-related electrical, electronic and programmable electronic control systems. IEC Standard 62061. 2005.

[11]    Guidance on the application of ISO 13849–1 and IEC 62061 in the design of safety-related control systems for machinery. IEC Technical Report TR 62061–1. 2010.

[12]    D. S. G. Nix, Y. Chinniah, F. Dosio, M. Fessler, F. Eng, and F. Schrever, “Linking Risk and Reliability—Mapping the output of risk assessment tools to functional safety requirements for safety related control systems,” 2015.

[13]    Safety of machinery. Safety related parts of control systems. General principles for design. CEN Standard EN 954–1. 1996.

[14]    Functional safety of electrical/electronic/programmable electronic safety-related systems — Part 2: Requirements for electrical/electronic/programmable electronic safety-related systems. IEC Standard 61508–2. 2010.

[15]    Reliability Prediction of Electronic Equipment. Military Handbook MIL-HDBK-217F. 1991.

[16]    “IFA — Practical aids: Software-Assistent SISTEMA: Safety Integrity — Software Tool for the Evaluation of Machine Applications”, Dguv.de, 2017. [Online]. Available: http://www.dguv.de/ifa/praxishilfen/practical-solutions-machine-safety/software-sistema/index.jsp. [Accessed: 30-Jan-2017].

[17]    “failure mode”, 192–03-17, International Electrotechnical Vocabulary. IEC International Electrotechnical Commission, Geneva, 2015.

[18]    M. Gentile and A. E. Summers, “Common Cause Failure: How Do You Manage Them?,” Process Saf. Prog., vol. 25, no. 4, pp. 331–338, 2006.

[19]    Out of Control — Why control systems go wrong and how to prevent failure, 2nd ed. Richmond, Surrey, UK: HSE Health and Safety Executive, 2003.

[20]    Safeguarding of Machinery. 3rd Edition. CSA Standard Z432. 2016.

[21]    O. Reg. 851, INDUSTRIAL ESTABLISHMENTS. Ontario, Canada, 1990.

[22]    “Field-programmable gate array”, En.wikipedia.org, 2017. [Online]. Available: https://en.wikipedia.org/wiki/Field-programmable_gate_array. [Accessed: 16-Jun-2017].


Author: Doug Nix

Doug Nix is Managing Director and Principal Consultant at Compliance InSight Consulting, Inc. (http://www.complianceinsight.ca) in Kitchener, Ontario, and is Lead Author and Senior Editor of the Machinery Safety 101 blog. Doug's work includes teaching machinery risk assessment techniques privately and through Conestoga College Institute of Technology and Advanced Learning in Kitchener, Ontario, as well as providing technical services and training programs to clients related to risk assessment, industrial machinery safety, safety-related control system integration and reliability, laser safety and regulatory conformity. For more see Doug's LinkedIn profile.