- ISO 13849–1 Analysis — Part 1: Start with Risk Assessment
- ISO 13849–1 Analysis — Part 2: Safety Requirement Specification
- ISO 13849–1 Analysis — Part 3: Architectural Category Selection
- ISO 13849–1 Analysis — Part 4: MTTFD — Mean Time to Dangerous Failure
- ISO 13849–1 Analysis — Part 5: Diagnostic Coverage (DC)
- ISO 13849–1 Analysis — Part 6: CCF — Common Cause Failures
- ISO 13849–1 Analysis — Part 7: Safety-Related Software
- How to do a 13849–1 analysis: Complete Reference List
- ISO 13849–1 Analysis — Part 8: Fault Exclusion
Up to this point, I have been discussing the basic processes used for the design of safety-related parts of control systems. The underlying assumption is that these techniques apply to the design of hardware used for safety purposes. The remaining question focuses on the design and development of safety-related software that runs on that hardware. If you have not read the rest of this series and would like to catch up first, you can find it here.
In this discussion of safety-related software, keep in mind that I am talking about software that is intended only to reduce risk. Some platforms are not well suited for safety software, primarily commercial off-the-shelf (COTS) operating systems like Windows, macOS, and Linux. Generally speaking, these operating systems are too complex and too subject to unanticipated changes to be suitable for high-reliability applications. There is nothing wrong with using these systems for annunciation and monitoring functions, but the safety functions should run on more predictable platforms.
The methodology discussed in ISO 13849–1 is usable up to PLd. At the end of the Scope we find Note 4:
NOTE 4 For safety-related embedded software for components with PLr = e, see IEC 61508–3:1998, Clause 7.
As you can see, for very high-reliability systems, i.e., PLe/SIL3 or SIL4, it is necessary to move to IEC 61508. The methods discussed here are based on ISO 13849–1:2015, Chapter 4.6.
There are two goals for safety-related software development activities:
- Avoid faults
- Generate readable, understandable, testable and maintainable software
Fig. 1 [1, Fig. 6] shows the “V-model” for software development. This approach to software design incorporates both validation and verification, and when correctly implemented will result in software that meets the design specifications.
If you aren’t sure what the difference is between verification and validation, I remember it this way: validation asks “Are we building the right thing?”, and verification asks “Did we build the thing right?” The whole process hinges on the Safety Requirement Specification (SRS), so failing to get that part of the process right at the beginning will negatively impact both hardware and software design. The SRS is the yardstick used to decide whether you built the right thing. Without it, you have no way to know what you are building.
Starting from the Safety Requirement Specification (also called the safety function specification), each step in the process is shown. The dashed lines illustrate the verification activity at each step. Notice that the actual coding step is at the bottom of the V-model; everything above the coding stage is either planning and design, or quality assurance activity.
There are other methods that can be used to result in verified and validated software, so if you have a QA process that produces solid results, you may not need to change it. I would recommend that you review all the stages in the V-model to ensure that your QA system has similar processes.
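To make the verification/validation distinction concrete, here is a minimal sketch. Everything in it is invented for illustration: the `debounce` module, the 3-sample design rule, and the 500 ms stop-time requirement are hypothetical, not taken from ISO 13849-1. The point is only that verification tests a module against its design specification, while validation tests the integrated system against the SRS.

```python
# Illustrative sketch only: the module, the design rule, and the SRS
# limit below are all hypothetical examples, not from ISO 13849-1.

SRS_MAX_STOP_TIME_MS = 500          # hypothetical SRS requirement

def debounce(samples):
    """Module under test: input reads 'open' only if all samples agree."""
    return all(s == 0 for s in samples)

# Verification -- "Did we build the thing right?" A unit test checks
# the module against its design specification (3-sample agreement).
assert debounce([0, 0, 0]) is True
assert debounce([0, 1, 0]) is False

def measured_stop_time_ms():
    """Stand-in for a measurement taken on the real, integrated machine."""
    return 320   # hypothetical measured value

# Validation -- "Are we building the right thing?" An acceptance test
# checks the integrated system against the SRS requirement.
assert measured_stop_time_ms() <= SRS_MAX_STOP_TIME_MS
```

In practice the validation step is performed on the physical machine, not in code, but the yardstick is the same: the SRS.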
To make setting up safety systems simpler for designers and integrators, two approaches to software design can be used.
Two Approaches to Software Design
There are two approaches to software design that should be considered:
- Preconfigured (building-block style) software
- Fully customised software
Preconfigured Building-Block Software
The preconfigured building-block approach is typically used for configuring safety PLCs or programmable safety relays or modules. This type of software is referred to as “safety-related embedded software (SRESW)” in [1].
Pre-written function blocks are provided by the device manufacturer. Each function block has a particular role: emergency stop, safety gate input, zero-speed detection, and so on. When configuring a safety PLC or safety modules that use this approach, the designer selects the appropriate block and then configures the inputs, outputs, and any other functional characteristics that are needed. The designer has no access to the safety-related code, so apart from configuration errors, no other errors can be introduced. The function blocks are verified and validated (V & V) by the controls component manufacturer, usually with the support of an accredited certification body. The function blocks will normally have a PL associated with them, and a statement like “suitable for PLe” will be made in the function block description.
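As a rough illustration of what “configuration only” means, here is a sketch of a pre-certified block seen from the designer’s side. The `EStopBlock` class, its parameter names, and its limits are all invented for this example; real vendors expose certified blocks through their own configuration tools, and the safety logic itself stays inaccessible.

```python
# Hypothetical sketch: EStopBlock and its parameters are invented for
# illustration. The designer sets parameters; the certified safety
# logic inside the block is not accessible or modifiable.
from dataclasses import dataclass

@dataclass(frozen=True)
class EStopBlock:
    """A pre-certified function block: the designer only configures it."""
    input_a: str              # channel 1 input terminal
    input_b: str              # channel 2 input terminal
    discrepancy_time_ms: int  # max time the two channels may disagree
    auto_reset: bool = False  # manual reset as the conservative default

    def validate(self):
        """Stand-in for the vendor tool's pre-compile configuration check."""
        if self.input_a == self.input_b:
            raise ValueError("channels must use separate input terminals")
        if not 0 < self.discrepancy_time_ms <= 3000:
            raise ValueError("discrepancy time out of range")

# The designer's entire "program" is configurations like this one.
estop = EStopBlock(input_a="I0.0", input_b="I0.1", discrepancy_time_ms=500)
estop.validate()
```

The `validate` step mirrors what the vendor’s configuration software does before compiling for upload: it catches configuration errors, which are the only kind of error the designer can introduce at this level.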
This approach eliminates the need for the designing entity (i.e., the machine builder) to do a detailed V & V of the code. However, the machine builder is still required to do a V & V on the operation of the system as they have configured it. The machine V & V includes all the usual fault injection tests and functional tests to ensure that the system will behave as intended in the presence of a demand on the safety function or a fault condition. The faults that should be tested are those in your fault list. If you don’t have a fault list, or don’t know what one is, see Part 8 in this series.
Using pre-configured building blocks achieves the first goal, fault avoidance, at least as far as the software coding is concerned. The configuration software will validate the function block configurations before compiling the software for upload to the safety controller so that most configuration errors will be caught at that stage.
This approach also facilitates the second goal, as long as the configuration software is usable and maintained by the software vendor. The configuration software usually includes the ability to annotate the configurations with relevant details to assist with the readability and understandability of the software.
Fully Customised Software
This approach is used where a fully customised hardware platform is being used, and the safety software is designed to run on that platform. [1] refers to this type of software as “safety-related application software (SRASW).” A fully customised software application is used where a very specialised safety system is contemplated and FPGAs or other customised hardware is being used. These systems are usually programmed using full-variability languages.
In this case, the full hardware and software V & V approach must be employed. In my opinion, ISO 13849–1 is probably not the best choice for this approach due to its simplification, and I would usually recommend using IEC 61508–3 as the basis for the design, verification, and validation of fully customised software.
Safety-Related Embedded Software (SRESW)
[1, 4.6.2] provides a laundry list of elements that must be incorporated into the V-model processes when developing SRESW, broken down by PLa through PLd, and then some additional requirements for PLc and PLd.
If you are designing SRESW for PLe, [1, 4.6.2] points you directly to IEC 61508–3, clause 7, which covers software suitable for SIL3 applications.
Safety-Related Application Software (SRASW)
[1, 4.6.3] provides a list of requirements that must be met through the V-model process for SRASW. It allows PLa through PLe to be met by code written in LVL, and PLe applications may also be designed using FVL. Where software is developed using FVL, it can be treated in the same way as embedded software (SRESW).
A similar architectural model to that used for single-channel hardware development is used, as shown in Fig. 2 [1, Fig 7].
The complete V-model must be applied to safety-related application software, with all of the additional requirements from [1, 4.6.3] included in the process model.
There is a lot to safety-related software development, certainly much more than could be discussed in a blog post like this or even in a standard like ISO 13849–1. If you are contemplating developing safety related software and you are not familiar with the techniques needed to develop this kind of high-reliability software, I would suggest you get help from a qualified developer. Keep in mind that there can be significant liability attached to safety system failures, including the deaths of people using your product. If you are developing SRASW, I would also recommend following IEC 61508–3 as the basis for the development and related QA processes.
- 3.1.36 application software
- software specific to the application, implemented by the machine manufacturer, and generally containing logic sequences, limits and expressions that control the appropriate inputs, outputs, calculations and decisions necessary to meet the SRP/CS requirements
- 3.1.34 limited variability language LVL
- type of language that provides the capability of combining predefined, application-specific library functions to implement the safety requirements specifications
- Note 1 to entry: Typical examples of LVL (ladder logic, function block diagram) are given in IEC 61131–3.
- Note 2 to entry: A typical example of a system using LVL: PLC. [SOURCE: IEC 61511–1:2003, modified.]
- 3.1.35 full variability language FVL
- type of language that provides the capability of implementing a wide variety of functions and applications EXAMPLE C, C++, Assembler.
- Note 1 to entry: A typical example of systems using FVL: embedded systems.
- Note 2 to entry: In the field of machinery, FVL is found in embedded software and rarely in application software. [SOURCE: IEC 61511–1:2003, modified.]
- 3.1.37 embedded software
- system software
- software that is part of the system supplied by the control manufacturer and which is not accessible for modification by the user of the machinery.
- Note 1 to entry: Embedded software is usually written in FVL.
- Field Programmable Gate Array FPGA
- A field-programmable gate array (FPGA) is an integrated circuit designed to be configured by a customer or a designer after manufacturing — hence “field-programmable”. The FPGA configuration is generally specified using a hardware description language (HDL), similar to that used for an application-specific integrated circuit (ASIC).
Here are some books that I think you may find helpful on this journey:
[0.2] Electromagnetic Compatibility for Functional Safety, 1st ed. Stevenage, UK: The Institution of Engineering and Technology, 2008.
Note: This reference list starts in Part 1 of the series, so “missing” references may show in other parts of the series. Included in the last post of the series is the complete reference list.
 S. Jocelyn, J. Baudoin, Y. Chinniah, and P. Charpentier, “Feasibility study and uncertainties in the validation of an existing safety-related control circuit with the ISO 13849–1:2006 design standard,” Reliab. Eng. Syst. Saf., vol. 121, pp. 104–112, Jan. 2014.
 D. S. G. Nix, Y. Chinniah, F. Dosio, M. Fessler, F. Eng, and F. Schrever, “Linking Risk and Reliability—Mapping the output of risk assessment tools to functional safety requirements for safety related control systems,” 2015.
 Functional safety of electrical/electronic/programmable electronic safety-related systems — Part 2: Requirements for electrical/electronic/programmable electronic safety-related systems. IEC Standard 61508–2. 2010.
 “IFA — Practical aids: Software-Assistent SISTEMA: Safety Integrity — Software Tool for the Evaluation of Machine Applications”, Dguv.de, 2017. [Online]. Available: http://www.dguv.de/ifa/praxishilfen/practical-solutions-machine-safety/software-sistema/index.jsp. [Accessed: 30- Jan- 2017].
 “failure mode”, 192–03-17, International Electrotechnical Vocabulary. IEC International Electrotechnical Commission, Geneva, 2015.
 M. Gentile and A. E. Summers, “Common Cause Failure: How Do You Manage Them?,” Process Saf. Prog., vol. 25, no. 4, pp. 331–338, 2006.
 Out of Control—Why control systems go wrong and how to prevent failure, 2nd ed. Richmond, Surrey, UK: HSE Health and Safety Executive, 2003.
 “Field-programmable gate array”, En.wikipedia.org, 2017. [Online]. Available: https://en.wikipedia.org/wiki/Field-programmable_gate_array. [Accessed: 16-Jun-2017].