ISO 13849-1 Analysis — Part 7: Safety-Related Software

Post updated 2019-07-24. Ed.

Safety-Related Software

Up to this point, I have been discussing the basic processes used to design safety-related parts of control systems. The underlying assumption is that these techniques apply to the design of hardware used for safety purposes. The remaining question focuses on the design and development of safety-related software that runs on that hardware. If you have not read the rest of this series and would like to catch up first, you can find it here.

In this discussion of safety-related software, remember that I am talking about software intended only to reduce risk. Some platforms are not well suited to safety software, primarily commercial off-the-shelf (COTS) operating systems like Windows, macOS and Linux. Generally speaking, these operating systems are too complex and too subject to unanticipated changes to be suitable for high-reliability applications. There is nothing wrong with using these systems for annunciation and monitoring functions, but the safety functions should run on more predictable platforms.

The methodology discussed in ISO 13849-1 is usable up to PLd. At the end of the Scope, we find Note 4:

NOTE 4 For safety-related embedded software for components with PLr = e, see IEC 61508-3:1998, Clause 7.

As you can see, for very high-reliability systems, i.e., PLe/SIL3 or SIL4, it is necessary to move to IEC 61508. The methods discussed here are based on ISO 13849-1:2015, clause 4.6.


There are two goals for safety-related software development activities:

  1. Avoid faults
  2. Generate readable, understandable, testable and maintainable software

Avoiding Faults

Fig. 1 [1, Fig. 6] shows the “V-model” for software development. This approach to software design incorporates both validation and verification. When correctly implemented, the V-model method will result in software that meets the design specifications.

If you are not sure of the difference between verification and validation, the way I remember it is this: validation asks, “Are we building the right thing?” while verification asks, “Did we build the thing right?” The whole process hinges on the Safety Requirements Specification (SRS), so failing to get that part of the process right at the beginning will negatively impact both the hardware and the software design. The SRS is the yardstick used to decide whether you built the right thing; without it, you have no way to know what you are supposed to be building.

Figure 1 — Simplified V-model of software safety lifecycle

Starting from the Safety Requirements Specification (also called the safety function specification), each step in the process is shown. The dashed lines illustrate the verification activity at each step. Notice that the actual coding step sits at the bottom of the V-model; everything above it is a planning, design, or quality assurance activity.

Other methods can be used to verify and validate software, so if you have a QA process that produces solid results, you may not need to change it. I recommend reviewing all the stages in the V-model to ensure that your QA system has similar processes.
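The dashed verification links in the V-model are often implemented in practice as a requirements-to-test traceability check: every requirement in the SRS must trace to at least one verification test, or something has slipped through. A minimal sketch of that check follows; all requirement and test identifiers are hypothetical, not taken from any standard.

```python
# Minimal requirements-to-test traceability check. Every SRS requirement
# must be covered by at least one verification test. All IDs are hypothetical.

srs_requirements = {
    "SR-01": "E-stop removes hazardous motion within the stop-time limit",
    "SR-02": "Open guard door prevents cycle start",
    "SR-03": "Reset requires a deliberate, separate action",
}

verification_tests = {
    "VT-10": ["SR-01"],           # timed stop measurement
    "VT-11": ["SR-02", "SR-03"],  # guard interlock functional test
}

def uncovered_requirements(requirements, tests):
    """Return the SRS requirement IDs that no verification test traces to."""
    covered = {req_id for traced in tests.values() for req_id in traced}
    return sorted(set(requirements) - covered)

# Every requirement above is covered, so nothing is flagged.
assert uncovered_requirements(srs_requirements, verification_tests) == []
```

The same idea scales up in commercial requirements-management tools; the point is simply that coverage is checked mechanically, not by eye.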

To make setting up safety systems simpler for designers and integrators, two software design approaches can be used.

Two Approaches to Software Design

There are two approaches to software design that should be considered:

  • Preconfigured (building-block style) software
  • Fully customized software

Preconfigured Building-Block Software

The preconfigured building-block approach is typically used for configuring safety PLCs or programmable safety relays or modules. This type of software is referred to as “safety-related embedded software (SRESW)” in [1].

Pre-written function blocks are provided by the device manufacturer. Each function block has a particular role: emergency stop, safety gate input, zero-speed detection, etc. When configuring a safety PLC or safety modules that use this approach, the designer selects the appropriate block and then configures the inputs, outputs, and other functional characteristics needed. The designer has no access to the underlying safety-related code, so the only errors the designer can introduce are configuration errors. The function blocks are verified and validated (V & V) by the controls component manufacturer, usually with the support of an accredited certification body. The function blocks will normally have a PL associated with them, and a statement like “suitable for PLe” will appear in the function block description.

This approach eliminates the need to do a detailed V & V of the code by the designing entity (i.e., the machine builder). However, the machine builder must do a V & V on the system’s operation as they have configured it. Machine V & V includes all the usual fault injection tests and functional tests to ensure that the system will behave as intended in the presence of a demand on the safety function or a fault condition. The faults that should be tested are those in your Fault List. If you don’t have a fault list or don’t know what a Fault List is, see Part 8 in this series.
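To make the fault-injection idea concrete, it can be sketched as a simulation: force a fault from the Fault List into one channel of a dual-channel input and confirm that the safety function still drives the safe state. The model below is purely illustrative; the logic and names are mine, not a real safety PLC's, and real V & V is done on the physical system.

```python
# Illustrative model of a dual-channel e-stop input. Any discrepancy
# between the two channels (e.g., one contact welded closed) is treated
# as a fault and forces the safe (stop) state. Hypothetical sketch only.

def estop_output(ch1_closed, ch2_closed):
    """Return True (run permitted) only when both channels agree and are closed."""
    if ch1_closed != ch2_closed:  # channel discrepancy -> assume a fault
        return False              # safe state
    return ch1_closed and ch2_closed

# Normal operation: both channels closed, the machine may run.
assert estop_output(True, True) is True

# Demand on the safety function: both channels open, the machine stops.
assert estop_output(False, False) is False

# Injected fault from the Fault List: channel 1 welded closed
# while channel 2 opens. The discrepancy must force the safe state.
assert estop_output(True, False) is False
```

Each entry in the Fault List becomes a test case like the last assertion: inject the fault, then confirm the output lands in the safe state.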

Using pre-configured building blocks achieves the first goal, fault avoidance, at least as far as the software coding is concerned. The configuration software will validate the function block configurations before compiling the software for upload to the safety controller so that most configuration errors will be caught at that stage.

This approach also facilitates the second goal, as long as the configuration software is usable and maintained by the software vendor. The configuration software usually includes the ability to annotate the configurations with relevant details to assist with the readability and understandability of the software.

Fully Customized Software

This approach is used where the safety software is designed to run on a fully customized hardware platform [1]. This type of software is called “safety-related application software (SRASW).” Fully customized software is used where a very specialized safety system is contemplated, often built on FPGAs or other customized hardware. These systems are usually programmed using full-variability languages.

The full hardware and software V & V approach must be employed in this case. I believe ISO 13849-1 is probably not the best choice for this approach due to its simplification, and I would usually recommend using IEC 61508-3 as the basis for the design, verification, and validation of fully customized software.

Process Requirements

Safety-Related Embedded Software (SRESW)

[1, 4.6.2] provides a laundry list of elements that must be incorporated into the V-model processes when developing SRESW, broken down by PLa through PLd, and then some additional requirements for PLc and PLd.

If you are designing SRESW for PLe, [1, 4.6.2] points you directly to IEC 61508-3, clause 7, which covers software suitable for SIL3 applications.

Safety-Related Application Software (SRASW)

Safety-Related Application Software (SRASW) can be written in either Low-Variability Language (LVL) or Full-Variability Language (FVL). LVL is often simpler to debug and validate than FVL, but FVL is more flexible. There is always a tradeoff.
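Part of that tradeoff is that defensive measures an LVL platform provides automatically must be written, and then verified, by hand in FVL code. One such technique is storing a safety-critical value together with its complement and checking consistency before every use, so a single memory corruption is detected rather than silently acted on. The sketch below illustrates the idea in Python for readability; names and values are hypothetical, and real FVL implementations would be in a language like C under the process requirements of [1] or IEC 61508-3.

```python
# Sketch of a defensive-coding technique used in full-variability-language
# safety code: keep a safety-critical value alongside its bitwise complement
# and verify the pair before use. Hypothetical illustration only.

MASK = 0xFF  # width of the stored value (8 bits in this sketch)

def store(value):
    """Store a value redundantly as (value, complement)."""
    return (value & MASK, (~value) & MASK)

def load(pair):
    """Return the stored value, or raise if the redundant copy disagrees."""
    value, complement = pair
    if (value ^ complement) != MASK:  # the pair must XOR to all-ones
        raise RuntimeError("memory corruption detected - go to safe state")
    return value

speed_limit = store(25)
assert load(speed_limit) == 25

# A single bit flip in one copy breaks the value/complement relationship,
# so load(corrupted) would raise instead of returning a corrupted limit.
corrupted = (speed_limit[0] | 0x40, speed_limit[1])
```

In a real system the error path would latch the safe state rather than raise an exception, but the detection logic is the same.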

The architecture model is similar to that used for single-channel hardware development, as shown in Fig. 2 [1, Fig. 7].

Figure 2 — General architecture model of software

The complete V-model must be applied to safety-related application software, with all of the additional requirements from [1, 4.6.3] included in the process model.


There is a lot to safety-related software development, certainly much more than could be discussed in a blog post like this or even in a standard like ISO 13849-1. If you are contemplating developing safety-related software and are not familiar with the techniques needed to develop this high-reliability software, I would suggest you get help from a qualified developer. Remember that significant liability can be attached to safety system failures, including the deaths of people using your product. If you are developing SRASW, I recommend following IEC 61508-3 as the basis for the development and related QA processes, rather than ISO 13849-1.

The final part of this series discusses fault exclusions. Designers frequently misapply fault exclusions, sometimes using them incorrectly to try to justify non-safety-rated components in a SISTEMA analysis.


Definitions

3.1.36
application software
software specific to the application, implemented by the machine manufacturer, and generally containing logic sequences, limits and expressions that control the appropriate inputs, outputs, calculations and decisions necessary to meet the SRP/CS requirements

3.1.37
embedded software
firmware
system software
software that is part of the system supplied by the control manufacturer and which is not accessible for modification by the user of the machinery
Note 1 to entry: Embedded software is usually written in FVL.

3.1.34
limited variability language
LVL
type of language that provides the capability of combining predefined, application-specific library functions to implement the safety requirements specifications
Note 1 to entry: Typical examples of LVL (ladder logic, function block diagram) are given in IEC 61131-3.
Note 2 to entry: A typical example of a system using LVL: PLC.
[SOURCE: IEC 61511-1:2003, modified.]

3.1.35
full variability language
FVL
type of language that provides the capability of implementing a wide variety of functions and applications
EXAMPLE C, C++, Assembler.
Note 1 to entry: A typical example of systems using FVL: embedded systems.
Note 2 to entry: In the field of machinery, FVL is found in embedded software and rarely in application software.
[SOURCE: IEC 61511-1:2003, modified.]

field-programmable gate array (FPGA)
A field-programmable gate array (FPGA) is an integrated circuit designed to be configured by a customer or a designer after manufacturing, hence “field-programmable”. The FPGA configuration is generally specified using a hardware description language (HDL), similar to that used for an application-specific integrated circuit (ASIC). [22]

Book List

Here are some books that I think you may find helpful on this journey:

[0]     B. Main, Risk Assessment: Basics and Benchmarks, 1st ed. Ann Arbor, MI USA: DSE, 2004.

[0.1]  D. Smith and K. Simpson, Safety critical systems handbook. Amsterdam: Elsevier/Butterworth-Heinemann, 2011.

[0.2]  Electromagnetic Compatibility for Functional Safety, 1st ed. Stevenage, UK: The Institution of Engineering and Technology, 2008.

[0.3] Overview of techniques and measures related to EMC for Functional Safety, 1st ed. Stevenage, UK: The Institution of Engineering and Technology, 2013.

[0.4] Code of Practice for Electromagnetic Resilience, 1st ed. Stevenage, UK: IET Standards TC4.3 EMC, 2017.

[0.5] Code of Practice: Competence for Safety Related Systems Practitioners, 1st ed. Stevenage, UK: The Institution of Engineering and Technology, 2016.


Note: The reference list for this series starts in Part 1, so “missing” references appear in other parts of the series. The complete reference list is included in the last post of the series.

[1]     Safety of machinery — Safety-related parts of control systems — Part 1: General principles for design, 3rd Ed. ISO 13849-1. 2015.

[2]     Safety of machinery — Safety-related parts of control systems — Part 2: Validation, 2nd Ed. ISO 13849-2. 2012.

[3]      Safety of machinery — General principles for design — Risk assessment and risk reduction, ISO 12100. 2010.

[4]     Safeguarding of Machinery, 2nd Ed. CSA Z432. 2004.

[5]     Risk Assessment and Risk Reduction — A Guideline to Estimate, Evaluate and Reduce Risks Associated with Machine Tools, ANSI Technical Report B11.TR3. 2000.

[6]    Safety of machinery — Emergency stop function — Principles for design, ISO 13850. 2015.

[7]     Functional safety of electrical/electronic/programmable electronic safety-related systems. Seven parts. IEC 61508. Ed. 2. 2010.

[8]     S. Jocelyn, J. Baudoin, Y. Chinniah, and P. Charpentier, “Feasibility study and uncertainties in the validation of an existing safety-related control circuit with the ISO 13849-1:2006 design standard,” Reliab. Eng. Syst. Saf., vol. 121, pp. 104-112, Jan. 2014.

[9]    Guidance on the application of ISO 13849-1 and IEC 62061 in the design of safety-related control systems for machinery, IEC/TR 62061-1. 2010.

[10]     Safety of machinery — Functional safety of safety-related electrical, electronic and programmable electronic control systems, IEC 62061. 2005.

[11]    Guidance on the application of ISO 13849-1 and IEC 62061 in the design of safety-related control systems for machinery, IEC/TR 62061-1. 2010.

[12]    D. S. G. Nix, Y. Chinniah, F. Dosio, M. Fessler, F. Eng, and F. Schrever, “Linking Risk and Reliability—Mapping the output of risk assessment tools to functional safety requirements for safety related control systems.” Kitchener: Compliance inSIght Consulting Inc. 2015.

[13]    Safety of machinery—Safety-related parts of control systems. General principles for design, CEN EN 954-1. 1996.

[14]   Functional safety of electrical/electronic/programmable electronic safety-related systems — Part 2: Requirements for electrical/electronic/programmable electronic safety-related systems, IEC 61508-2. 2010.

[15]     Reliability Prediction of Electronic Equipment, Military Handbook MIL-HDBK-217F. 1991.

[16]     “IFA – Practical aids: Software-Assistant SISTEMA: Safety Integrity – Software Tool for the Evaluation of Machine Applications”, 2017. [Online]. Available: [Accessed: 30-Jan-2017].

[17]      “failure mode”, 192-03-17, International Electrotechnical Vocabulary. International Electrotechnical Commission, Geneva, 2015.

[18]      M. Gentile and A. E. Summers, “Common Cause Failure: How Do You Manage Them?,” Process Saf. Prog., vol. 25, no. 4, pp. 331-338, 2006.

[19]     Out of Control—Why control systems go wrong and how to prevent failure, 2nd ed. Richmond, Surrey, UK: HSE Health and Safety Executive, 2003.

[20]     Safeguarding of Machinery, 3rd Ed. CSA Z432. 2016.

[21]     O. Reg. 851, INDUSTRIAL ESTABLISHMENTS. Ontario, Canada, 1990.

[22]     “Field-programmable gate array”, 2017. [Online]. Available: [Accessed: 16-Jun-2017].

© 2017 – 2022, Compliance inSight Consulting Inc. Creative Commons Licence
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
