ISO 13849-1 Analysis — Part 6: CCF — Common Cause Failures

This entry is part 6 of 6 in the series How to do a 13849-1 analysis

What is a Common Cause Failure?

There are two similar-sounding terms that people often get confused: Common Cause Failure (CCF) and Common Mode Failure. While these two types of failures sound similar, they are different. A Common Cause Failure is a failure in a system where two or more portions of the system fail at the same time from a single common cause. An example could be a lightning strike that causes a contactor to weld and simultaneously takes out the safety relay processor that controls the contactor. Common cause failures are therefore two different manners of failure in two different components, but with a single cause.

Common Mode Failure is where two components or portions of a system fail in the same way, at the same time. For example, two interposing relays both fail with welded contacts at the same time. The failures could be caused by the same cause or from different causes, but the way the components fail is the same.

Common-cause failure includes common mode failure, since a common cause can result in a common manner of failure in identical devices used in a system.

Here are the formal definitions of these terms:

3.1.6 common cause failure CCF

failures of different items, resulting from a single event, where these failures are not consequences of each other

Note 1 to entry: Common cause failures should not be confused with common mode failures (see ISO 12100:2010, 3.36). [SOURCE: IEC 60050-191-am1:1999, 04-23.] [1]

 

3.36 common mode failures

failures of items characterized by the same fault mode

NOTE Common mode failures should not be confused with common cause failures, as the common mode failures can result from different causes. [IEV 191-04-24] [3]

The “common mode” failure definition uses the phrase “fault mode”, so let’s look at that as well:

failure mode
DEPRECATED: fault mode
manner in which failure occurs

Note 1 to entry: A failure mode may be defined by the function lost or other state transition that occurred. [IEV 192-03-17] [17]

As you can see, “fault mode” is no longer used, in favour of the more common “failure mode”, so it is possible to re-write the common-mode failure definition to read, “failures of items characterised by the same manner of failure.”

Random, Systematic and Common Cause Failures

Why do we need to care about this? There are three manners in which failures occur: random failures, systematic failures, and common cause failures. When developing safety related controls, we need to consider all three and mitigate them as much as possible.

Random failures do not follow any pattern, occurring randomly over time, and are often brought on by over-stressing the component, or from manufacturing flaws. Random failures can increase due to environmental or process-related stresses, like corrosion, EMI, normal wear-and-tear, or other over-stressing of the component or subsystem. Random failures are often mitigated through selection of high-reliability components [18].

Systematic failures include common-cause failures, and occur when some human error was not caught by procedural means. These failures are due to design, specification, operating, maintenance, and installation errors. When we look at systematic errors, we are looking for things like the training of the system designers, or the quality assurance procedures used to validate the way the system operates. Systematic failures are non-random and complex, making them difficult to analyse statistically. Systematic errors are a significant source of common-cause failures because they can affect redundant devices, and because they are often deterministic, occurring whenever a given set of circumstances exists.

Systematic failures include many types of errors, such as:

  • Manufacturing defects, e.g., software and hardware errors built into the device by the manufacturer.
  • Specification mistakes, e.g. incorrect design basis and inaccurate software specification.
  • Implementation errors, e.g., improper installation, incorrect programming, interface problems, and not following the safety manual for the devices used to realise the safety function.
  • Operation and maintenance, e.g., poor inspection, incomplete testing and improper bypassing [18].

Diverse redundancy is commonly used to mitigate systematic failures, since differences in component or subsystem design tend to create non-overlapping systematic failures, reducing the likelihood of a common error creating a common-mode failure. Errors in specification, implementation, operation and maintenance are not affected by diversity.

Figure 1 below shows the results of a small study done by the UK’s Health and Safety Executive in 1994 [19] that supports the idea that systematic failures are a significant contributor to safety system failures. The study included only 34 systems (n=34), so the results cannot be considered conclusive. However, there were some startling results. As you can see, errors in the specification of the safety functions (the Safety Requirements Specification) resulted in about 44% of the system failures in the study. Based on this small sample, systematic failures appear to be a significant source of failures.

Pie chart illustrating the proportion of failures in each phase of the life cycle of a machine, based on data taken from HSE Report HSG238.
Figure 1 – HSG 238 Primary Causes of Failure by Life Cycle Stage

Handling CCF in ISO 13849-1

Now that we understand WHAT Common-Cause Failure is, and WHY it’s important, we can talk about HOW it is handled in ISO 13849-1. Since ISO 13849-1 is intended to be a simplified functional safety standard, CCF analysis is limited to a checklist in Annex F, Table F.1. Note that Annex F is informative, meaning that it is guidance material to help you apply the standard. Since this is the case, you could use any other means suitable for assessing CCF mitigation, like those in IEC 61508, or in other standards.

Table F.1 is set up with a series of mitigation measures which are grouped together in related categories. Each group is provided with a score that can be claimed if you have implemented the mitigations in that group. ALL OF THE MEASURES in each group must be fulfilled in order to claim the points for that category. Here’s an example:

A portion of ISO 13849-1 Table F.1.
ISO 13849-1:2015, Table F.1 Excerpt

In order to claim the 20 points available for the use of separation or segregation in the system design, there must be a separation between the signal paths. Several examples of this are given for clarity.

Table F.1 lists six groups of mitigation measures. In order to claim adequate CCF mitigation, a minimum score of 65 points must be achieved. Only Category 2, 3 and 4 architectures are required to meet the CCF requirements; for those architectures, if the CCF requirement is not met you cannot claim the PL, regardless of whether the design meets the other criteria or not.
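The scoring logic can be sketched in a few lines of Python. This is only a sketch: the group names are abbreviated and the point values below are placeholders, so take the real measures and their points from [1, Table F.1] before using anything like this.

```python
# Placeholder points for the six Table F.1 groups -- substitute the actual
# values from ISO 13849-1:2015, Table F.1 before relying on this sketch.
TABLE_F1_POINTS = {
    "separation/segregation": 20,
    "diversity": 20,
    "design/application/experience": 20,
    "assessment/analysis": 5,
    "competence/training": 5,
    "environmental": 30,
}

def ccf_score(fulfilled):
    """Sum the points for groups where ALL measures in the group are met.

    Partial fulfilment of a group scores zero -- all of the measures in a
    group must be fulfilled to claim that group's points.
    """
    return sum(TABLE_F1_POINTS[group] for group in fulfilled)

def ccf_adequate(fulfilled, threshold=65):
    """ISO 13849-1 requires a minimum score of 65 points to claim
    adequate CCF mitigation."""
    return ccf_score(fulfilled) >= threshold
```

With these placeholder values, claiming separation/segregation, diversity, and the environmental group would score 70 points and pass the 65-point threshold.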

One final note on CCF: If you are trying to review an existing control system, say in an existing machine, or in a machine designed by a third party where you have no way to determine the experience and training of the designers or the capability of the company’s change management process, then you cannot adequately assess CCF [8]. This fact is recognised in CSA Z432-16 [20], chapter 8, which allows the reviewer to simply verify that the architectural requirements, exclusive of any probabilistic requirements, have been met. This is particularly useful for engineers reviewing machinery under Ontario’s Pre-Start Health and Safety requirements [21], who are frequently working with less-than-complete design documentation.

In case you missed the first part of the series, you can read it here. In the next article in this series, I’m going to review the process flow for system analysis as currently outlined in ISO 13849-1. Watch for it!

Book List

Here are some books that I think you may find helpful on this journey:

[0]     B. Main, Risk Assessment: Basics and Benchmarks, 1st ed. Ann Arbor, MI USA: DSE, 2004.

[0.1]  D. Smith and K. Simpson, Safety critical systems handbook. Amsterdam: Elsevier/Butterworth-Heinemann, 2011.

[0.2]  Electromagnetic Compatibility for Functional Safety, 1st ed. Stevenage, UK: The Institution of Engineering and Technology, 2008.

[0.3]  Overview of techniques and measures related to EMC for Functional Safety, 1st ed. Stevenage, UK: Overview of techniques and measures related to EMC for Functional Safety, 2013.

References

Note: This reference list starts in Part 1 of the series, so “missing” references may show in other parts of the series. The complete reference list is included in the last post of the series.

[1]     Safety of machinery — Safety-related parts of control systems — Part 1: General principles for design. 3rd Edition. ISO Standard 13849-1. 2015.

[2]     Safety of machinery — Safety-related parts of control systems — Part 2: Validation. 2nd Edition. ISO Standard 13849-2. 2012.

[3]      Safety of machinery — General principles for design — Risk assessment and risk reduction. ISO Standard 12100. 2010.

[8]     S. Jocelyn, J. Baudoin, Y. Chinniah, and P. Charpentier, “Feasibility study and uncertainties in the validation of an existing safety-related control circuit with the ISO 13849-1:2006 design standard,” Reliab. Eng. Syst. Saf., vol. 121, pp. 104–112, Jan. 2014.

[17]      “failure mode”, 192-03-17, International Electrotechnical Vocabulary. IEC International Electrotechnical Commission, Geneva, 2015.

[18]      M. Gentile and A. E. Summers, “Common Cause Failure: How Do You Manage Them?,” Process Saf. Prog., vol. 25, no. 4, pp. 331–338, 2006.

[19]     Out of Control—Why control systems go wrong and how to prevent failure, 2nd ed. Richmond, Surrey, UK: HSE Health and Safety Executive, 2003.

[20]     Safeguarding of Machinery. 3rd Edition. CSA Standard Z432. 2016.

[21]     O. Reg. 851, INDUSTRIAL ESTABLISHMENTS. Ontario, Canada, 1990.

ISO 13849-1 Analysis — Part 4: MTTFD – Mean Time to Dangerous Failure

This entry is part 4 of 6 in the series How to do a 13849-1 analysis

Functional safety is all about the likelihood of a safety system failing to operate when you need it. Understanding Mean Time to Dangerous Failure, or MTTFD, is critical. If you have been reading about this topic at all, you may notice that I am abbreviating Mean Time to Dangerous Failure with all capital letters. Using MTTFD is a recent change that occurred in the third edition of ISO 13849-1, published in 2015. In the first and second editions, the correct abbreviation was MTTFd. Onward!

If you missed the third instalment in this series, you can read it here.

Defining MTTFD

Let’s start by having a look at some key definitions. Looking at [1, Cl. 3], you will find:

3.1.1 safety-related part of a control system (SRP/CS)—part of a control system that responds to safety-related input signals and generates safety-related output signals

Note 1 to entry: The combined safety-related parts of a control system start at the point where the safety-related input signals are initiated (including, for example, the actuating cam and the roller of the position switch) and end at the output of the power control elements (including, for example, the main contacts of a contactor)

Note 2 to entry: If monitoring systems are used for diagnostics, they are also considered as SRP/CS.

3.1.5 dangerous failure—failure which has the potential to put the SRP/CS in a hazardous or fail-to-function state

Note 1 to entry: Whether or not the potential is realized can depend on the channel architecture of the system; in redundant systems a dangerous hardware failure is less likely to lead to the overall dangerous or fail-to-function state.

Note 2 to entry: [SOURCE: IEC 61508–4, 3.6.7, modified.]

3.1.25 mean time to dangerous failure (MTTFD)—expectation of the mean time to dangerous failure

Definition 3.1.5 is pretty helpful, but definition 3.1.25 is, well, not much of a definition. Let’s look at this another way.

Failures and Faults

Since everything can and will eventually fail to perform the way we expect it to, everything has a failure rate, because everything takes some time to fail. That time may be very short, like the first time the unit is turned on, or very long, sometimes hundreds of years. Remember that because this is a rate, it is something that occurs over time. It is also important to be clear that we are talking about failures and not faults. Reading from [1]:

3.1.3 fault—state of an item characterized by the inability to perform a required function, excluding the inability during preventive maintenance or other planned actions, or due to lack of external resources

Note 1 to entry: A fault is often the result of a failure of the item itself, but may exist without prior failure.

Note 2 to entry: In this part of ISO 13849, “fault” means random fault.
[SOURCE: IEC 60050-191:1990, 05-01.]

3.1.4 failure— termination of the ability of an item to perform a required function

Note 1 to entry: After a failure, the item has a fault.

Note 2 to entry: “Failure” is an event, as distinguished from “fault”, which is a state.

Note 3 to entry: The concept as defined does not apply to items consisting of software only.

Note 4 to entry: Failures which only affect the availability of the process under control are outside of the scope of this part of ISO 13849.
[SOURCE: IEC 60050–191:1990, 04-01.]

3.1.4 Note 2 is the important one at this point in the discussion.

Where we have multiples of something, like relays, valves, or safety systems, we have a population of identical items, each of which will eventually fail at some point. We can count those failures as they occur and tally them up, and we can graph how many failures we get in the population over time. If this is starting to sound suspiciously like statistics to you, that is because it is.

OK, so let’s look at the kinds of failures that occur in that population. Some failures will result in a “safe” state, e.g., a relay failing with all poles open, and some will fail in a potentially “dangerous” state, like a normally closed valve developing a significant leak. If we tally up all the failures that occur, and then tally the number of “safe” failures and the number of “dangerous” failures in that population, we now have some very useful information.

The different kinds of failures are signified using the lowercase Greek letter λ (lambda). We can add some subscripts to help identify what kinds of failures we are talking about. The common variable designations used are [14]:

λ = failures
λ(t) = failure rate
λs = “safe” failures
λd = “dangerous” failures
λdd = detectable “dangerous” failures
λdu = undetectable “dangerous” failures

I will be discussing some of these variables in more detail in a later part of the series when I delve into Diagnostic Coverage, so don’t worry about them too much just yet.
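To make the relationships among these rates concrete, here is a minimal numeric sketch. The rate values are invented for illustration, and the relations assume a constant failure rate, the standard assumption for the flat region of the bathtub curve; under that assumption, λd = λdd + λdu and MTTFD = 1/λd:

```python
# Invented example rates, in failures per hour (constant-rate assumption).
lambda_dd = 1.0e-7          # detectable dangerous failure rate
lambda_du = 0.5e-7          # undetectable dangerous failure rate

lambda_d = lambda_dd + lambda_du    # total dangerous failure rate
mttfd_hours = 1.0 / lambda_d        # mean time to dangerous failure, hours
mttfd_years = mttfd_hours / 8760.0  # 8760 hours in a year

# Diagnostic coverage (discussed later in the series) is the detectable
# fraction of the dangerous failure rate.
dc = lambda_dd / lambda_d
```

With these made-up numbers, the channel works out to roughly 761 years MTTFD with a diagnostic coverage of about 67%.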

Getting to MTTFD

Since we can now start to deal with the failure rate data mathematically, we can start to do some calculations about expected lifetime of a component or a system. That expected, or probable, lifetime is what definition 3.1.25 was on about, and is what we call MTTFD.

MTTFD is expressed in years, and it applies over the portion of a component’s life where the probability of failure is relatively constant. If you look at a typical failure rate curve, called a “bathtub curve” due to its resemblance to the profile of a nice soaker tub, the MTTFD applies to the flatter portion of the curve between the end of the infant mortality period and the wear-out period at the end of life. This part of the curve is the portion assumed to be included in the “mission time” for the product. ISO 13849-1 assumes the mission time for all machinery is 20 years [1, 4.5.4] and [1, Cl. 10].

Diagram of a standardized bathtub-shaped failure rate curve.
Figure 1 – Typical Bathtub Curve [15]
ISO 13849-1 provides us with guidance on how MTTFD relates to the determination of the PL in [1, Cl. 4.5.2]. MTTFD is further grouped into three bands as shown in [1, Table 4].
Table showing the bands of Mean time to dangerous failure of each channel (MTTFD)

The notes for this table are important as well. Since you can’t read the notes particularly well in the table above, I’ve reproduced them here:

NOTE 1 The choice of the MTTFD ranges of each channel is based on failure rates found in the field as state-of-the-art, forming a kind of logarithmic scale fitting to the logarithmic PL scale. An MTTFD value of each channel less than three years is not expected to be found for real SRP/CS since this would mean that after one year about 30 % of all systems on the market will fail and will need to be replaced. An MTTFD value of each channel greater than 100 years is not acceptable because SRP/CS for high risks should not depend on the reliability of components alone. To reinforce the SRP/CS against systematic and random failure, additional means such as redundancy and testing should be required. To be practicable, the number of ranges was restricted to three. The limitation of MTTFD of each channel values to a maximum of 100 years refers to the single channel of the SRP/CS which carries out the safety function. Higher MTTFD values can be used for single components (see Table D.1).

NOTE 2 The indicated borders of this table are assumed within an accuracy of 5%.
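The banding described in the notes can be expressed as a small helper function. This is a sketch only; confirm the band limits against [1, Table 4] before relying on it:

```python
def mttfd_band(mttfd_years):
    """Bin a channel MTTFD (in years) into the three bands of [1, Table 4]."""
    if 3 <= mttfd_years < 10:
        return "low"
    if 10 <= mttfd_years < 30:
        return "medium"
    if 30 <= mttfd_years <= 100:
        return "high"
    # Below 3 years is not expected for a real SRP/CS; above 100 years the
    # value claimed for a single channel is capped at 100 (see NOTE 1).
    return None
```

For example, the 236.7-year channel calculated later in this article would be capped at 100 years and claimed as “high”.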

The standard then tells us to select the MTTFD using a simple hierarchy [1, 4.5.2]:

For the estimation of MTTFD of a component, the hierarchical procedure for finding data shall be, in the order given:

a) use manufacturer’s data;
b) use methods in Annex C and Annex D;
c) choose 10 years.

Why ten years? Ten years is half of the assumed mission lifetime of 20 years. More on mission lifetime in a later post.

Looking at [1, Annex C.2], you will find the “Good Engineering Practices” method for estimating MTTFD, presuming the manufacturer has not provided you with that information. ISO 13849-2 [2] has some reference tables that provide some general MTTFD values for some kinds of components, but not every part that exists can be listed. How can we deal with parts not listed? [1, Annex C.4] provides us with a calculation method for estimating MTTFD for pneumatic, mechanical and electromechanical components.

Calculating MTTFD for pneumatic, mechanical and electromechanical components

I need to introduce you to a few more variables before we look at how to calculate MTTFD for a component.

Variables
Variable: Description
B10: number of cycles until 10% of the components fail (for pneumatic and electromechanical components)
B10D: number of cycles until 10% of the components fail dangerously (for pneumatic and electromechanical components)
T: lifetime of the component
T10D: mean time until 10% of the components fail dangerously
hop: mean operation time, in hours per day
dop: mean operation time, in days per year
tcycle: mean time between the beginning of two successive cycles of the component (e.g., switching of a valve), in seconds per cycle
s: seconds
h: hours
a: years

Knowing a few details we can calculate the MTTFD using [1, Eqn C.1]. We need to know the following parameters for the application:

  • B10D
  • hop
  • dop
  • tcycle

MTTFD = B10D / (0.1 × nop)

Calculating MTTFD – [1, Eqn. C.1]
In order to use [1, Eqn. C.1], we need to first calculate nop, using [1, Eqn. C.2]:

nop = (dop × hop × 3600 s/h) / tcycle

Calculating nop – [1, Eqn. C.2]
We may also need one more calculation, [1, Eqn. C.4]:

T10D = B10D / nop

Calculating T10D – [1, Eqn. C.4]

Example Calculation [1, C.4.3]

For a pneumatic valve, a manufacturer determines a mean value of 60 million cycles as B10D. The valve is used for two shifts each day on 220 operation days a year. The mean time between the beginning of two successive switching of the valve is estimated as 5 s. This yields the following values:

  • dop of 220 days per year;
  • hop of 16 h per day;
  • tcycle of 5 s per cycle;
  • B10D of 60 million cycles.

Doing the math, we get:

nop = (220 d/a × 16 h/d × 3600 s/h) / 5 s/cycle = 2 534 400 cycles/a
MTTFD = 60 000 000 cycles / (0.1 × 2 534 400 cycles/a) ≈ 236.7 a

Example C.4.3
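The worked example can be checked with a short script, a sketch assuming the equations as given in [1, Annex C]:

```python
def n_op(d_op, h_op, t_cycle):
    """Mean number of annual operations [1, Eqn. C.2]."""
    return (d_op * h_op * 3600.0) / t_cycle

def mttfd(b10d, n_op_per_year):
    """Mean time to dangerous failure, in years [1, Eqn. C.1]."""
    return b10d / (0.1 * n_op_per_year)

def t10d(b10d, n_op_per_year):
    """Mean time until 10% of the components fail dangerously [1, Eqn. C.4]."""
    return b10d / n_op_per_year

# Values from example C.4.3: dop = 220 d/a, hop = 16 h/d, tcycle = 5 s,
# B10D = 60 million cycles.
nop = n_op(220, 16, 5)   # 2,534,400 cycles per year
m = mttfd(60e6, nop)     # about 236.7 years per channel
t = t10d(60e6, nop)      # about 23.7 years
```

Note that although the calculated MTTFD is about 236.7 years, the value claimed for a single channel is capped at 100 years per [1, Table 4].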

So there you have it, at least for a fairly simple case. There are more examples in ISO 13849-1, and I would encourage you to work through them. You can also find a wealth of examples in a report produced by the BGIA in Germany, called the Functional safety of machine controls (BGIA Report 2/2008e) [16]. The download for the report is linked from the reference list at the end of this article. If you are a SISTEMA user, there are lots of examples in the SISTEMA Cookbooks, and there are example files available so that you can see how to assemble the systems in the software.

The next part of this series covers Diagnostic Coverage (DC), and the average DC for multiple safety functions in a system, DCavg.

In case you missed the first part of the series, you can read it here.

Book List

Here are some books that I think you may find helpful on this journey:

[0]     B. Main, Risk Assessment: Basics and Benchmarks, 1st ed. Ann Arbor, MI USA: DSE, 2004.

[0.1]  D. Smith and K. Simpson, Safety critical systems handbook. Amsterdam: Elsevier/Butterworth-Heinemann, 2011.

[0.2]  Electromagnetic Compatibility for Functional Safety, 1st ed. Stevenage, UK: The Institution of Engineering and Technology, 2008.

[0.3]  Overview of techniques and measures related to EMC for Functional Safety, 1st ed. Stevenage, UK: Overview of techniques and measures related to EMC for Functional Safety, 2013.

References

Note: This reference list starts in Part 1 of the series, so “missing” references may show in other parts of the series. Included in the last post of the series is the complete reference list.

[1]     Safety of machinery — Safety-related parts of control systems — Part 1: General principles for design. 3rd Edition. ISO Standard 13849-1. 2015.

[2]     Safety of machinery — Safety-related parts of control systems — Part 2: Validation. 2nd Edition. ISO Standard 13849-2. 2012.

[7]     Functional safety of electrical/electronic/programmable electronic safety-related systems. 7 parts. IEC Standard 61508. Second Edition. 2010.

[14]    Functional safety of electrical/electronic/programmable electronic safety-related systems – Part 4: Definitions and abbreviations. IEC Standard 61508-4. Second Edition. 2010.

[15]    “The bathtub curve and product failure behavior part 1 of 2”, Findchart.co, 2017. [Online]. Available: http://findchart.co/download.php?aHR0cDovL3d3dy53ZWlidWxsLmNvbS9ob3R3aXJlL2lzc3VlMjEvaHQyMV8xLmdpZg. [Accessed: 03- Jan- 2017].

[16]   “Functional safety of machine controls – Application of EN ISO 13849 (BGIA Report 2/2008e)”, dguv.de, 2017. [Online]. Available: http://www.dguv.de/ifa/publikationen/reports-download/bgia-reports-2007-bis-2008/bgia-report-2-2008/index-2.jsp. [Accessed: 2017-01-04].


ISO 13849 Analysis — Part 3: Architectural Category Selection

This entry is part 3 of 6 in the series How to do a 13849-1 analysis

At this point, you have completed the risk assessment, assigned required Performance Levels to each safety function, and developed the Safety Requirement Specification for each safety function. Next, you need to consider three aspects of the system design: Architectural Category, Channel Mean Time to Dangerous Failure (MTTFD), and Diagnostic Coverage (DCavg). In this part of the series, I am going to discuss selecting the architectural category for the system.

If you missed the second instalment in this series, you can read it here.

Understanding Performance Levels

To understand ISO 13849-1, it helps to know a little about where the standard originated. ISO 13849-1 is a simplified method for determining the reliability of safety-related controls for machinery. The basic ideas came from IEC 61508 [7], a seven-part standard originally published in 1998. IEC 61508 brought forward the concept of the Average Probability of Dangerous Failure per Hour, PFHD (1/h). Dangerous failures are those failures that result in non-performance of the safety function, and which cannot be detected by diagnostics. Here’s the formal definition from [1]:

3.1.5

dangerous failure
failure which has the potential to put the SRP/CS in a hazardous or fail-to-function state

Note 1 to entry: Whether or not the potential is realised can depend on the channel architecture of the system; in redundant systems a dangerous hardware failure is less likely to lead to the overall dangerous or fail-to-function state.

Note 2 to entry: [SOURCE: IEC 61508–4, 3.6.7, modified.]

The Performance Levels are simply bands of probabilities of Dangerous Failures, as shown in [1, Table 2] below.

Table 2 from ISO 13849-2:2015 showing the five Performance levels and the corresponding ranges of PFHd values.
Performance Levels as bands of PFHd ranges

The ranges shown in [1, Table 2] are approximate. If you need to see the specific limits of the bands for any reason, [1, Annex K] describes the full span of PFHD in table format.

There is another way to describe the same characteristics of a system, this one from IEC. Instead of using the PL system, IEC uses Safety Integrity Levels (SILs). [1, Table 3] shows the correspondence between PLs and SILs. Note that the correspondence is not exact. Where the calculated PFHd is close to either end of one of the PL or SIL bands, use the table in [1, Annex K] or in [9] to determine to which band(s) the performance should be assigned.

IEC produced a Technical Report [10] that provides guidance on how to use ISO 13849-1 or IEC 62061. The following table shows the relationship between PLs, PFHd and SILs.

Table showing the correspondence between the PL, PFHd, and SIL.
IEC/TR 62061-1:2010, Table 1

IEC 61508 includes SIL 4, which is not shown in [10, Table 1] because this level of performance exceeds the range of PFHD possible using ISO 13849-1 techniques. Also, you may have noticed that PLb and PLc are both within SIL1. This was done to accommodate the five architectural categories that came from EN 954-1 [12].

Why PL and not just PFHD? One of the odd things that humans do when we can calculate things is the development of what has been called “precision bias” [12]. Precision bias occurs when we can compute a number that appears very precise, e.g., 3.2 × 10^-6, which then makes us feel like we have a very precise concept of the quantity. The problem, at least in this case, is that we are dealing with probabilities and minuscule probabilities at that. Using bands, like the PLs, forces us to “bin” these apparently precise numbers into larger groups, eliminating the effects of precision bias in the evaluation of the systems. Eliminating precision bias is the same reason that IEC 61508 uses SILs – binning the calculated values helps to reduce our tendency to develop a precision bias. The reality is that we just can’t predict the behaviour of these systems with as much precision as we would like to believe.
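The binning idea is easy to illustrate in code. This is a sketch with the band limits as given in [1, Table 2]; values near a band edge should be checked against [1, Annex K]:

```python
def performance_level(pfhd):
    """Bin an average probability of dangerous failure per hour (1/h)
    into a Performance Level per ISO 13849-1 Table 2."""
    if 1e-5 <= pfhd < 1e-4:
        return "a"
    if 3e-6 <= pfhd < 1e-5:
        return "b"
    if 1e-6 <= pfhd < 3e-6:
        return "c"
    if 1e-7 <= pfhd < 1e-6:
        return "d"
    if 1e-8 <= pfhd < 1e-7:
        return "e"
    return None  # outside the range ISO 13849-1 covers
```

Both the “precise” 3.2 × 10^-6 and a much rougher 9 × 10^-6 bin to PLb, which is exactly the point of banding.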

Getting to Performance Levels: MTTFD, Architectural Category and DC

Some aspects of the system design need to be considered to arrive at a Performance Level or make a prediction about failure rates in terms of PFHd.

First is the system architecture: fundamentally, single channel or dual channel. As a side note, if your system uses more than two channels, there are workarounds in ISO 13849-1, or you can use IEC 62061 or IEC 61508, either of which handles these more complex systems more easily. Remember, ISO 13849-1 is intended for relatively simple systems.

When we get into the analysis in a later article, we will be calculating or estimating the Mean Time to Dangerous Failure, MTTFD, of each channel, and then of the entire system. MTTFD is expressed in years, unlike PFHd, which is expressed as a probability per hour (1/h). I have yet to hear why this is the case, as it seems rather confusing. However, that is current practice.

Architectural Categories

Once the required PL is known, the next step is the selection of the architectural category. The basic architectural categories were introduced initially in EN 954-1:1996 [12].  The Categories were carried forward unchanged into the first edition of ISO 13849-1 in 1999. The Categories were maintained and expanded to include additional requirements in the second and third editions in 2005 and 2015.

Since I have explored the details of the architectures in a previous series, I am not going to repeat that here. Instead, I will refer you to that series. The architectural Categories come in five flavours:

Architecture Basics (for full requirements, see [1, Cl. 6])

Category B: Single channel. Safety principle: component selection.
Basic circuit conditions are met (i.e., components are rated for the circuit voltage and current, etc.), and components are designed and built to the relevant component standards. [1, 6.2.3]

Category 1: Single channel. Safety principle: component selection.
Category B plus the use of “well-tried components” and “well-tried safety principles”. [1, 6.2.4]

Category 2: Single channel. Safety principle: system structure.
Category B plus the use of “well-tried safety principles” and periodic testing [1, 4.5.4] of the safety function by the machine control system. [1, 6.2.5]

Category 3: Dual channel. Safety principle: system structure.
Category B plus the use of “well-tried safety principles”; no single fault shall lead to the loss of the safety function. Where practicable, single faults shall be detected. [1, 6.2.6]

Category 4: Dual channel. Safety principle: system structure.
Category B plus the use of “well-tried safety principles”; no single fault shall lead to the loss of the safety function. Single faults are detected at or before the next demand on the safety system; where this is not possible, an accumulation of undetected faults shall not lead to the loss of the safety function. [1, 6.2.7]

[1, Table 10] provides a more detailed summary of the requirements than the summary table above provides.

Since the Categories cannot all achieve the same reliability, the PL and the Categories are linked as shown in [1, Fig. 5]. This diagram summarises the relationship of the three central parameters in ISO 13849-1 in one illustration.

Figure relating Architectural Category, DC avg, MTTFD and PL.
Relationship between categories, DCavg, MTTFD of each channel and PL

Starting with the PLr from the Safety Requirement Specification for the first safety function, you can use Fig. 5 to help you select the Category and other parameters necessary for the design. For example, suppose that the risk assessment indicates that an emergency stop system is needed. ISO 13850 requires that emergency stop functions provide a minimum of PLc, so using this as the basis you can look at the vertical axis in the diagram to find PLc, and then read across the figure. You will see that PLc can be achieved using Category 1, 2, or 3 architecture, each with corresponding differences in MTTFD and DCavg. For example:

  • Cat. 1, MTTFD = high and DCavg = none, or
  • Cat. 2, MTTFD = medium to high and DCavg = low to medium, or
  • Cat. 3, MTTFD = low to high and DCavg = low to medium.

As you can see, the MTTFD in the channels decreases as the diagnostic coverage increases. The design compensates for lower reliability in the components by increasing the diagnostic coverage and adding redundancy. Using [1, Fig. 5] you can pin down any of the parameters and then select the others as appropriate.

One additional point regarding Category 3 and 4: The difference between these Categories is increased Diagnostic Coverage. While Category 3 is single fault tolerant, Category 4 has additional diagnostic capabilities so that an accumulation of faults cannot lead to the loss of the safety function. This is not the same as being multiple fault tolerant; the system is still designed to operate in the presence of only a single fault, and the difference is simply enhanced diagnostic capability.

It is worth noting that ISO 13849 only recognises structures with single or dual channel configurations. If you need to develop a system with more than single redundancy (i.e., more than two channels), you can analyse each pair of channels as a dual channel architecture, or you can move to using IEC 62061 or IEC 61508, either of which permits any level of redundancy.

The next step in this process is the evaluation of the component and channel MTTFD, and then the determination of the complete system MTTFD. Part 4 of this series publishes on 13-Feb-17.

In case you missed the first part of the series, you can read it here.

Book List

Here are some books that I think you may find helpful on this journey:

[0]     B. Main, Risk Assessment: Basics and Benchmarks, 1st ed. Ann Arbor, MI USA: DSE, 2004.

[0.1]  D. Smith and K. Simpson, Safety critical systems handbook. Amsterdam: Elsevier/Butterworth-Heinemann, 2011.

[0.2]  Electromagnetic Compatibility for Functional Safety, 1st ed. Stevenage, UK: The Institution of Engineering and Technology, 2008.

[0.3]  Overview of techniques and measures related to EMC for Functional Safety, 1st ed. Stevenage, UK: Overview of techniques and measures related to EMC for Functional Safety, 2013.

References

Note: This reference list starts in Part 1 of the series, so “missing” references may show in other parts of the series. Included in the last post of the series is the complete reference list.

[1]     Safety of machinery — Safety-related parts of control systems — Part 1: General principles for design. ISO Standard 13849-1. 2015.

[7]     Functional safety of electrical/electronic/programmable electronic safety-related systems. IEC Standard 61508. 2nd Edition. Seven Parts. 2010.

[9]      Safety of machinery – Functional safety of safety-related electrical, electronic and programmable electronic control systems. IEC Standard 62061. 2005.

[10]    Guidance on the application of ISO 13849-1 and IEC 62061 in the design of safety-related control systems for machinery. IEC Technical Report 62061-1. 2010.

[11]    D. S. G. Nix, Y. Chinniah, F. Dosio, M. Fessler, F. Eng, and F. Schrever, “Linking Risk and Reliability—Mapping the output of risk assessment tools to functional safety requirements for safety related control systems,” 2015.

[12]    Safety of machinery. Safety related parts of control systems. General principles for design. CEN Standard EN 954-1. 1996.