ISO 13849-1 Analysis — Part 7: Safety-Related Software

This entry is part 7 of 9 in the series How to do a 13849-1 analysis.

Safety-Related Software

Up to this point, I have been discussing the basic processes used for the design of safety-related parts of control systems. The underlying assumption is that these techniques apply to the design of hardware used for safety purposes. The remaining question focuses on the design and development of safety-related software that runs on that hardware. If you have not read the rest of this series and would like to catch up first, you can find it here.

In this discussion of safety-related software, keep in mind that I am talking about software that is intended solely to reduce risk. Some platforms are not well suited for safety software, primarily common off-the-shelf (COTS) operating systems like Windows, macOS, and Linux. Generally speaking, these operating systems are too complex and subject to unanticipated changes to be suitable for high-reliability applications. There is nothing wrong with using these systems for annunciation and monitoring functions, but the safety functions should run on more predictable platforms.

The methodology discussed in ISO 13849-1 is usable up to PLd. At the end of the Scope we find Note 4:

NOTE 4 For safety-related embedded software for components with PLr = e, see IEC 61508-3:1998, Clause 7.

As you can see, for very high-reliability systems, i.e., PLe/SIL3 or SIL4, it is necessary to move to IEC 61508. The methods discussed here are based on ISO 13849-1:2015, Clause 4.6.

Goals

There are two goals for safety-related software development activities:

  1. Avoid faults
  2. Generate readable, understandable, testable and maintainable software

Avoiding Faults

Fig. 1 [1, Fig. 6] shows the "V-model" for software development. This approach to software design incorporates both validation and verification, and when correctly implemented will result in software that meets the design specifications.

If you aren’t sure what the difference is between verification and validation, here is how I remember it: validation asks "Are we building the right thing?", and verification asks "Did we build the thing right?" The whole process hinges on the Safety Requirement Specification (SRS), so failing to get that part of the process right at the beginning will negatively impact both hardware and software design. The SRS is the yardstick used to decide whether you built the right thing. Without it, you are clueless about what you are building.

Figure 1 — Simplified V-model of software safety lifecycle

Coming in from the Safety Requirement Specification (also called the safety function specification), each step in the process is shown. The dashed lines illustrate the verification process at each step. Notice that the actual coding step is at the bottom of the V-model. Everything above the coding stage is either planning and design, or quality assurance activities.

There are other methods that can produce verified and validated software, so if you have a QA process that delivers solid results, you may not need to change it. I would recommend that you review all the stages in the V-model to ensure that your QA system has similar processes.

To make setting up safety systems simpler for designers and integrators, there are two approaches to software design that can be used.

Two Approaches to Software Design

There are two approaches to software design that should be considered:

  • Preconfigured (building-block style) software
  • Fully customised software

Preconfigured Building-Block Software

The preconfigured building-block approach is typically used for configuring safety PLCs or programmable safety relays or modules. This type of software is referred to as "safety-related embedded software (SRESW)" in [1].

Pre-written function blocks are provided by the device manufacturer. Each function block has a particular role: emergency stop, safety gate input, zero-speed detection, and so on. When configuring a safety PLC or safety modules that use this approach, the designer selects the appropriate block and then configures the inputs, outputs, and any other functional characteristics that are needed. The designer has no access to the safety-related code, so apart from configuration errors, no other errors can be introduced. The function blocks are verified and validated (V & V) by the controls component manufacturer, usually with the support of an accredited certification body. The function blocks will normally have a PL associated with them, and a statement like "suitable for PLe" will be made in the function block description.
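To make the building-block idea more concrete, here is a rough sketch, written in C purely for illustration, of what an emergency-stop function block looks like from the designer's side: a handful of wired inputs and configuration parameters wrapped around vendor logic that the designer cannot edit. Every name and parameter here is invented, real blocks are configured in the vendor's LVL tool (ladder or function block diagram), and the internal logic is deliberately simplified.

```c
#include <stdbool.h>

/* Illustrative model of a vendor-supplied e-stop function block.
   The designer only wires the inputs/outputs and sets the configuration
   fields; the internal logic is the vendor's certified code. */
typedef struct {
    bool channel_a;      /* e-stop contact, channel A (wired input) */
    bool channel_b;      /* e-stop contact, channel B (wired input) */
    bool manual_reset;   /* reset pushbutton input */
    bool require_reset;  /* configuration: manual vs. automatic restart */
    bool output_enable;  /* safety output: true only when both channels are OK */
    bool fault;          /* channel discrepancy detected */
} EStopBlock;

/* The vendor's verified and validated logic (not editable by the designer).
   Real blocks also monitor discrepancy time and reset signal edges; those
   details are omitted here for brevity. */
void estop_block_update(EStopBlock *fb)
{
    bool demand = !(fb->channel_a && fb->channel_b);      /* either channel open */
    bool discrepancy = (fb->channel_a != fb->channel_b);  /* channels disagree   */

    if (discrepancy) {
        fb->fault = true;
    }

    if (demand || fb->fault) {
        fb->output_enable = false;                        /* go to the safe state */
    } else if (!fb->require_reset || fb->manual_reset) {
        fb->output_enable = true;                         /* permit restart */
    }
}
```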

This approach eliminates the need for the designing entity (i.e., the machine builder) to do a detailed V & V of the code. However, the machine builder is still required to do a V & V of the operation of the system as they have configured it. The machine V & V includes all the usual fault-injection tests and functional tests to ensure that the system will behave as intended in the presence of a demand on the safety function or a fault condition. The faults that should be tested are those in your Fault List. If you don't have a fault list or don't know what a Fault List is, see Part 8 in this series.
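As a small illustration of what a fault-injection test records, the hypothetical C sketch below drives the simplified e-stop model from the previous sketch through one Fault List entry: channel B reads stuck open (e.g., a broken wire). The assertions capture the expected result, reaching and holding the safe state, which is the kind of pass/fail criterion you would document for each fault case.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

/* Assumes the EStopBlock type and estop_block_update() from the previous
   sketch; both are invented for illustration, not vendor code. */

int main(void)
{
    EStopBlock fb = { .channel_a = true, .channel_b = true,
                      .require_reset = true, .output_enable = true };

    /* Fault injection: channel B wire breaks and reads open. */
    fb.channel_b = false;
    estop_block_update(&fb);

    /* Expected result from the Fault List entry: output de-energised
       and the fault annunciated. */
    assert(fb.output_enable == false);
    assert(fb.fault == true);

    /* A reset attempt with the fault still present must not restart the output. */
    fb.manual_reset = true;
    estop_block_update(&fb);
    assert(fb.output_enable == false);

    printf("Fault case 'channel B stuck open': PASS\n");
    return 0;
}
```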

Using preconfigured building blocks achieves the first goal, fault avoidance, at least as far as the software coding is concerned. The configuration software will validate the function block configurations before compiling the software for upload to the safety controller, so most configuration errors will be caught at that stage.

This approach also facilitates the second goal, as long as the configuration software is usable and maintained by the software vendor. The configuration software usually includes the ability to annotate the configurations with relevant details to assist with the readability and understandability of the software.

Fully Customised Software

This approach is used where a fully customised hardware platform is involved and the safety software is designed to run on that platform. [1] refers to this type of software as "safety-related application software (SRASW)." A fully customised software application is used where a very specialised safety system is contemplated, and FPGAs or other customised hardware is being used. These systems are usually programmed using full-variability languages.

In this case, the full hardware and software V & V approach must be employed. In my opinion, ISO 13849-1 is probably not the best choice for this approach because of its simplifications, and I would usually recommend using IEC 61508-3 as the basis for the design, verification, and validation of fully customised software.
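To give a flavour of fully customised (FVL) safety code, here is a short C sketch of one common defensive-programming measure: reading a dual-channel input, cross-checking the channels, and forcing the safe state on any persistent disagreement. It is only an illustration of the kind of technique IEC 61508-3 expects you to select, justify, and verify, not a drop-in implementation; the hardware-access functions and the discrepancy limit are invented for the example.

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical hardware-access functions for a dual-channel guard switch. */
extern bool read_guard_channel_a(void);
extern bool read_guard_channel_b(void);
extern void set_safety_output(bool energise);

/* Number of consecutive cycles the channels may disagree before the
   discrepancy is latched as a fault (in practice derived from the SRS). */
#define DISCREPANCY_LIMIT 5u

/* Called once per control cycle.
   Returns true only while both channels agree that the guard is closed
   and no discrepancy fault has been latched. */
bool evaluate_guard_input(void)
{
    static uint32_t discrepancy_cycles = 0u;
    static bool latched_fault = false;

    bool a = read_guard_channel_a();
    bool b = read_guard_channel_b();

    if (a != b) {
        /* Tolerate brief contact skew, then latch the fault. */
        if (++discrepancy_cycles >= DISCREPANCY_LIMIT) {
            latched_fault = true;
        }
    } else {
        discrepancy_cycles = 0u;
    }

    bool guard_closed = a && b && !latched_fault;

    /* Fail-safe: the output is energised only when everything checks out. */
    set_safety_output(guard_closed);
    return guard_closed;
}
```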

Process Requirements

Safety-Related Embedded Software (SRESW)

[1, 4.6.2] provides a laundry list of elements that must be incorporated into the V-model processes when developing SRESW, broken down by PLa through PLd, and then some additional requirements for PLc and PLd.

If you are designing SRESW for PLe, [1, 4.6.2] points you directly to IEC 61508-3, Clause 7, which covers software suitable for SIL3 applications.

Safety-Related Application Software (SRASW)

[1, 4.6.3] provides a list of requirements that must be met through the V-model process for SRASW, and allows that PLa through PLe can be met by code written in LVL and that PLe applications can also be designed using FVL. In cases where software is developed using FVL, it can be treated in the same way as embedded software (SRESW).

A similar architectural model to that used for single-channel hardware development is used, as shown in Fig. 2 [1, Fig. 7].

Figure 2 — General architecture model of software

The complete V-model must be applied to safety-related application software, with all of the additional requirements from [1, 4.6.3] included in the process model.

Conclusions

There is a lot to safety-related software development, certainly much more than can be covered in a blog post like this, or even in a standard like ISO 13849-1. If you are contemplating developing safety-related software and you are not familiar with the techniques needed to develop this kind of high-reliability software, I suggest you get help from a qualified developer. Keep in mind that there can be significant liability attached to safety system failures, including the deaths of people using your product. If you are developing SRASW, I would also recommend following IEC 61508-3 as the basis for the development and related QA processes.

Definitions

3.1.36 application software
software specific to the application, implemented by the machine manufacturer, and generally containing logic sequences, limits and expressions that control the appropriate inputs, outputs, calculations and decisions necessary to meet the SRP/CS requirements
3.1.34 limited variability language LVL
type of language that provides the capability of combining predefined, application-specific library functions to implement the safety requirements specifications
Note 1 to entry: Typical examples of LVL (ladder logic, function block diagram) are given in IEC 61131-3.
Note 2 to entry: A typical example of a system using LVL: PLC. [SOURCE: IEC 61511-1:2003, 3.2.80.1.2, modified.]
3.1.35 full variability language FVL
type of language that provides the capability of implementing a wide variety of functions and applications
EXAMPLE C, C++, Assembler.
Note 1 to entry: A typical example of systems using FVL: embedded systems.
Note 2 to entry: In the field of machinery, FVL is found in embedded software and rarely in application software. [SOURCE: IEC 61511-1:2003, 3.2.80.1.3, modified.]
3.1.37 embedded software
firmware
system software
software that is part of the system supplied by the control manufacturer and which is not accessible for modification by the user of the machinery
Note 1 to entry: Embedded software is usually written in FVL.
Field Programmable Gate Array FPGA
A field-programmable gate array (FPGA) is an integrated circuit designed to be configured by a customer or a designer after manufacturing – hence "field-programmable". The FPGA configuration is generally specified using a hardware description language (HDL), similar to that used for an application-specific integrated circuit (ASIC). [22]

Book List

Here are some books that I think you may find helpful on this journey:

[0]     B. Main, Risk Assessment: Basics and Benchmarks, 1st ed. Ann Arbor, MI USA: DSE, 2004.

[0.1]  D. Smith and K. Simpson, Safety critical systems handbook. Amsterdam: Elsevier/Butterworth-Heinemann, 2011.

[0.2]  Electromagnetic Compatibility for Functional Safety, 1st ed. Stevenage, UK: The Institution of Engineering and Technology, 2008.

[0.3]  Overview of techniques and measures related to EMC for Functional Safety, 1st ed. Stevenage, UK: The Institution of Engineering and Technology, 2013.

References

Note: This reference list starts in Part 1 of the series, so "missing" references may appear in other parts of the series. The complete reference list is included in the last post of the series.

[1]     Safety of machinery — Safety-related parts of control systems — Part 1: General principles for design. 3rd Edition. ISO Standard 13849-1. 2015.

[2]     Safety of machinery – Safety-related parts of control systems – Part 2: Validation. 2nd Edition. ISO Standard 13849-2. 2012.

[3]     Safety of machinery – General principles for design – Risk assessment and risk reduction. ISO Standard 12100. 2010.

[4]     Safeguarding of Machinery. 2nd Edition. CSA Standard Z432. 2004.

[5]     Risk Assessment and Risk Reduction – A Guideline to Estimate, Evaluate and Reduce Risks Associated with Machine Tools. ANSI Technical Report B11.TR3. 2000.

[6]     Safety of machinery – Emergency stop function – Principles for design. ISO Standard 13850. 2015.

[7]     Functional safety of electrical/electronic/programmable electronic safety-related systems. 7 parts. IEC Standard 61508. Edition 2. 2010.

[8]     S. Jocelyn, J. Baudoin, Y. Chinniah, and P. Charpentier, “Feasibility study and uncertainties in the validation of an existing safety-related control circuit with the ISO 13849-1:2006 design standard,” Reliab. Eng. Syst. Saf., vol. 121, pp. 104–112, Jan. 2014.

[9]     Guidance on the application of ISO 13849-1 and IEC 62061 in the design of safety-related control systems for machinery. IEC Technical Report TR 62061-1. 2010.

[10]    Safety of machinery – Functional safety of safety-related electrical, electronic and programmable electronic control systems. IEC Standard 62061. 2005.

[11]    Guidance on the application of ISO 13849-1 and IEC 62061 in the design of safety-related control systems for machinery. IEC Technical Report 62061-1. 2010.

[12]    D. S. G. Nix, Y. Chinniah, F. Dosio, M. Fessler, F. Eng, and F. Schrever, “Linking Risk and Reliability — Mapping the output of risk assessment tools to functional safety requirements for safety related control systems,” 2015.

[13]    Safety of machinery. Safety related parts of control systems. General principles for design. CEN Standard EN 954-1. 1996.

[14]    Functional safety of electrical/electronic/programmable electronic safety-related systems – Part 2: Requirements for electrical/electronic/programmable electronic safety-related systems. IEC Standard 61508-2. 2010.

[15]    Reliability Prediction of Electronic Equipment. Military Handbook MIL-HDBK-217F. 1991.

[16]    “IFA – Practical aids: Software-Assistent SISTEMA: Safety Integrity – Software Tool for the Evaluation of Machine Applications”, Dguv.de, 2017. [Online]. Available: http://www.dguv.de/ifa/praxishilfen/practical-solutions-machine-safety/software-sistema/index.jsp. [Accessed: 30-Jan-2017].

[17]    “failure mode”, 192-03-17, International Electrotechnical Vocabulary. IEC International Electrotechnical Commission, Geneva, 2015.

[18]    M. Gentile and A. E. Summers, “Common Cause Failure: How Do You Manage Them?,” Process Saf. Prog., vol. 25, no. 4, pp. 331–338, 2006.

[19]    Out of Control — Why control systems go wrong and how to prevent failure, 2nd ed. Richmond, Surrey, UK: HSE Health and Safety Executive, 2003.

[20]    Safeguarding of Machinery. 3rd Edition. CSA Standard Z432. 2016.

[21]    O. Reg. 851, INDUSTRIAL ESTABLISHMENTS. Ontario, Canada, 1990.

[22]    “Field-programmable gate array”, En.wikipedia.org, 2017. [Online]. Available: https://en.wikipedia.org/wiki/Field-programmable_gate_array. [Accessed: 16-Jun-2017].


Author: Doug Nix

Doug Nix is Managing Director and Principal Consultant at Compliance InSight Consulting, Inc. (http://www.complianceinsight.ca) in Kitchener, Ontario, and is Lead Author and Managing Editor of the Machinery Safety 101 blog.

Doug's work includes teaching machinery risk assessment techniques privately and through Conestoga College Institute of Technology and Advanced Learning in Kitchener, Ontario, as well as providing technical services and training programs to clients related to risk assessment, industrial machinery safety, safety-related control system integration and reliability, laser safety and regulatory conformity.
