ISO Withdraws Machinery Risk Assessment Standards

This entry is part 1 of 8 in the series Risk Assessment

ISO has withdrawn three long-standing basic machinery safety standards used internationally and in the EU and replaced them with a single combined document. If you design, build or integrate machinery for sale internationally or within the EU, this new standard needs to be on your BUY list!

ISO 14121-1 Withdrawn, along with ISO 12100-1 and -2

As of 20-Oct-2010, three standards, ISO 14121-1, Safety of Machinery – Risk Assessment – Part 1: Principles; ISO 12100-1, Safety of machinery – Basic concepts, general principles for design – Part 1: Basic terminology and methodology; and ISO 12100-2, Safety of machinery – Basic concepts, general principles for design – Part 2: Technical principles, have been replaced by the new ISO 12100:2010, Safety of machinery — General principles for design — Risk assessment and risk reduction. The new standard blends the three fundamental Type A machinery standards into one coherent whole, meaning that machinery designers now have the fundamental design requirements for all machinery in a single standard. The only exception is ISO/TR 14121-2:2007, Safety of machinery — Risk assessment — Part 2: Practical guidance and examples of methods. This Technical Report stands as guidance for risk assessment and provides a number of examples of the different methods used to assess machinery risk.


This abstract is taken from the ISO web catalog page for the new standard.

ISO 12100:2010 specifies basic terminology, principles and a methodology for achieving safety in the design of machinery. It specifies principles of risk assessment and risk reduction to help designers in achieving this objective. These principles are based on knowledge and experience of the design, use, incidents, accidents and risks associated with machinery. Procedures are described for identifying hazards and estimating and evaluating risks during relevant phases of the machine life cycle, and for the elimination of hazards or sufficient risk reduction. Guidance is given on the documentation and verification of the risk assessment and risk reduction process.

ISO 12100:2010 is also intended to be used as a basis for the preparation of type-B or type-C safety standards.

It does not deal with risk and/or damage to domestic animals, property or the environment.

Table of Contents

Here is the table of contents from the standard as published.



1 Scope

2 Normative references

3 Terms and definitions

4 Strategy for risk assessment and risk reduction

5 Risk assessment

5.1 General

5.2 Information for risk assessment

5.3 Determination of limits of machinery

5.3.1 General

5.3.2 Use limits

5.3.3 Space limits

5.3.4 Time limits

5.3.5 Other limits

5.4 Hazard identification

5.5 Risk estimation

5.5.1 General

5.5.2 Elements of risk

5.5.3 Aspects to be considered during risk estimation

5.6 Risk evaluation

5.6.1 General

5.6.2 Adequate risk reduction

5.6.3 Comparison of risks

6 Risk reduction

6.1 General

6.2 Inherently safe design measures

6.2.1 General

6.2.2 Consideration of geometrical factors and physical aspects

6.2.3 Taking into account general technical knowledge of machine design

6.2.4 Choice of appropriate technology

6.2.5 Applying principle of positive mechanical action

6.2.6 Provisions for stability

6.2.7 Provisions for maintainability

6.2.8 Observing ergonomic principles

6.2.9 Electrical hazards

6.2.10 Pneumatic and hydraulic hazards

6.2.11 Applying inherently safe design measures to control systems

6.2.12 Minimizing probability of failure of safety functions

6.2.13 Limiting exposure to hazards through reliability of equipment

6.2.14 Limiting exposure to hazards through mechanization or automation of loading (feeding) / unloading (removal) operations

6.2.15 Limiting exposure to hazards through location of setting and maintenance points outside danger zones

6.3 Safeguarding and complementary protective measures

6.3.1 General

6.3.2 Selection and implementation of guards and protective devices

6.3.3 Requirements for design of guards and protective devices

6.3.4 Safeguarding to reduce emissions

6.3.5 Complementary protective measures

6.4 Information for use

6.4.1 General requirements

6.4.2 Location and nature of information for use

6.4.3 Signals and warning devices

6.4.4 Markings, signs (pictograms) and written warnings

6.4.5 Accompanying documents (in particular – instruction handbook)

7 Documentation of risk assessment and risk reduction

Annex A (informative) Schematic representation of a machine

Annex B (informative) Examples of hazards, hazardous situations and hazardous events

Annex C (informative) Trilingual lookup and index of specific terms and expressions used in ISO 12100


Buying Advice

This is a significant change to these three standards. The text has been substantially revised, at least in the sense that the material has been re-organized into a single, coherent document. If you are basing a CE Mark on these standards, you should strongly consider purchasing the harmonized version when it becomes available from your favourite retailer. The ISO version is available now in English and French as a hard copy or PDF document, priced at 180 CHF (Swiss Francs), or about CA$175.

As of this writing, CEN has adopted EN ISO 12100:2010, with a published “dow” (date of withdrawal of the superseded standards) of 30-Nov-2013. The “doc” (date of cessation of presumption of conformity) will be published in a future list of harmonized standards in the Official Journal of the European Union under the Machinery Directive, 2006/42/EC.

My recommendation is to BUY this standard if you are a machine builder. If you are CE marking your product, you may want to wait until the harmonized edition is published; however, it is worth knowing that technical changes to the normative content of the standard are very unlikely when harmonization occurs.

How Risk Assessment Fails

This entry is part 2 of 8 in the series Risk Assessment

The events unfolding at Japan’s Fukushima Dai Ichi nuclear power plant are a case study in the ways that the risk assessment process can fail or be abused. In a recent article, ‘Disaster Caps Faked Reports’, Jason Clenfield itemizes decades of fraud and failures in engineering and administration that led to the catastrophic failure of four of the six reactors at the 40-year-old Fukushima plant. Clenfield’s article goes on to cover similar failures across the Japanese nuclear sector.

Most people believe that the more serious the public danger, the more carefully the risks are considered in the design and execution of projects like the Fukushima plant. Clenfield’s article points to failures by a number of major international businesses involved in the design and manufacture of components for these reactors that may have contributed to the catastrophe playing out in Japan. In some cases, the correct actions could have bankrupted the companies involved, so rather than risk financial failure, these failures were covered up and the workers involved rewarded for their efforts. As you will see, sometimes the degree of care that we have a right to expect is not the level of care that is used.

How does this relate to the failure and abuse of the risk assessment process? Read on!

Risk Assessment Failures

The Fukushima Dai Ichi nuclear plant was constructed in the late 1960s and early 1970s, with Reactor #1 going on-line in 1971. The reactors at this facility use ‘active cooling’, requiring electrically powered cooling pumps to run continuously to keep core temperatures in the normal operating range. As recent news reports have shown, the plant is located on the shore, drawing cooling water directly from the Pacific Ocean.

Learn more about Boiling Water Reactors used at Fukushima.

Read IEEE Spectrum’s “24 Hours at Fukushima”, a blow-by-blow account of the first 24 hours of the disaster.

Japan is located along one of the most active fault lines in the world, with plate subduction rates exceeding 90 mm/year. Earthquakes are so commonplace in this area that the Japanese people consider Japan to be the ‘land of earthquakes’, starting earthquake safety training in kindergarten.

Japan is the country that gave us the word ‘tsunami’, because the effects of sub-sea earthquakes there often include large waves that swamp the shoreline. These waves affect all countries bordering the world’s oceans, but are especially prevalent where strong earthquakes are frequent.

In this environment it would be reasonable to expect that earthquake and tsunami effects would merit the highest priority when assessing the related risks. Remembering that risk is a function of severity of consequence and probability, the risk assessed for earthquake and tsunami should have been rated critical: loss of cooling can result in catastrophic overheating of the reactor core, potentially leading to a core meltdown.

The Fukushima Dai Ichi plant was designed to withstand 5.7 m tsunami waves, even though a 6.4 m wave had hit the shore close by 10 years before the plant went on-line. The wave generated by the recent earthquake was 7 m. Although the plant was not washed away by the tsunami, the wave created another problem.

Now consider that the reactors require constant forced cooling using electrically powered pumps. The backup generators, intended to keep the cooling pumps running even if mains power to the plant is lost, were installed in a basement subject to flooding. When the tsunami hit the seawall and spilled over the top, the floodwaters poured into the generator room, knocking out the diesel backup generators. With no power to run the pumps, the cooling system stopped and the reactor cores began to overheat. Although the reactors survived the earthquake and the tsunami, the plant was in serious trouble.

Learn more about the accident.

Clearly there was a failure of reason when assessing the risks related to the loss of cooling capability in these reactors. For systems as mission-critical as these, multiple levels of redundancy beyond a single backup system are often the minimum required.

In another plant in Japan, a section of piping carrying superheated steam from the reactor to the turbines ruptured, injuring a number of workers. The pipe had been installed when the plant was new and had never been inspected since, because it was left off the safety inspection checklist. This is an example of a failure that resulted from blindly following a checklist without looking at the larger picture. Someone at the plant must have noticed that other pipe sections were inspected regularly while this particular section was skipped, yet no change in the process resulted.

Here again, the risk was not recognized even though it was clearly understood with respect to other sections of pipe in the same plant.

In another situation at a nuclear plant in Japan, drains inside the containment area of a reactor were not plugged at the end of the installation process. As a result, a small spill of radioactive water was released into the sea instead of being properly contained and cleaned up. The risk was well understood, but the control procedure for this risk was not implemented.

Finally, the Kashiwazaki Kariwa plant was constructed along a major fault line. The designers used figures for the maximum seismic acceleration that were three times lower than the accelerations that could be created by the fault. Regulators permitted the plant to be built even though the relative weakness of the design was known.

Failure Modes

I believe that there are a number of reasons why these kinds of failures occur.

People have a difficult time appreciating the meaning of probability. Probability is a key factor in determining the degree of risk from any hazard, yet when figures like ‘1 in 1000’ or ‘1 × 10⁻⁵ occurrences per year’ are discussed, it’s hard for people to truly grasp what these numbers mean. Likewise, when more subjective scales are used it can be difficult to really understand what ‘likely’ or ‘rarely’ actually mean.

Consequently, even in cases where the severity may be very high, the risk related to a particular hazard may be neglected because the probability seems low.

When probability is discussed in terms of time, a figure like ‘1 × 10⁻⁵ occurrences per year’ can make the chance of an occurrence seem distant, and therefore less of a concern.
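One way to make such a rate more tangible is to convert it into the probability of at least one occurrence over a facility’s working life. The sketch below is a minimal illustration, assuming a constant rate and independence from year to year (a simplification; the figures are for illustration only):

```python
# Illustrative only: converting a small annual occurrence rate into the
# probability of at least one occurrence over a facility's life.
# Assumes occurrences are independent from year to year.

def lifetime_probability(annual_rate: float, years: int) -> float:
    """P(at least one occurrence in `years`) given a constant annual rate."""
    return 1.0 - (1.0 - annual_rate) ** years

# "1 x 10^-5 occurrences per year" sounds remote...
p_40 = lifetime_probability(1e-5, 40)   # ...over a 40-year plant life
print(f"40-year probability: {p_40:.2%}")  # about 0.04%
```

Even over 40 years the number stays small, which is exactly why a per-year rate alone tells a reader very little without the severity side of the risk equation alongside it.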

Most risk assessment approaches deal with hazards singly. This simplifies the assessment process, but it can miss the effects of multiple or cascading failures. In a multiple-failure condition, several protective measures fail simultaneously from a single cause (sometimes called common cause failure). In this case, back-up measures may fail from the same cause, leaving no protection from the hazard.

In a cascading failure, an initial failure triggers a series of further failures, leading to the partial or complete loss of the protective measures and, with it, partial or complete exposure to the hazard. Reasonably foreseeable combinations of failure modes in mission-critical systems must be considered and the probability of each estimated.
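The effect of common cause failure on redundancy can be illustrated with a simple beta-factor sketch, in which a fraction of each channel’s failures is assumed to strike all channels at once. All figures here are hypothetical, chosen only to show the scale of the effect:

```python
# Hypothetical sketch of why common cause failure undermines redundancy.
# The beta-factor model splits each channel's failure probability into an
# independent part and a common-cause part shared by all channels.

def redundant_failure_prob(p_channel: float, n_channels: int, beta: float) -> float:
    """P(all channels fail) with a fraction `beta` of failures common cause."""
    p_common = beta * p_channel             # hits every channel at once
    p_indep = (1.0 - beta) * p_channel      # each channel fails independently
    return p_common + (1.0 - p_common) * p_indep ** n_channels

p = 1e-3  # failure probability of one cooling channel (illustrative)

ideal = redundant_failure_prob(p, n_channels=2, beta=0.0)  # truly independent
real = redundant_failure_prob(p, n_channels=2, beta=0.1)   # 10% common cause

print(f"independent channels: {ideal:.1e}")  # ~1e-6
print(f"with common cause:    {real:.1e}")   # ~1e-4, about 100x worse
```

With even a modest common-cause fraction, the duplicated channel buys far less protection than the naive ‘multiply the probabilities’ arithmetic suggests, which is precisely what a flooded basement full of backup generators demonstrates.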

Combinations of hazards can be synergistic, producing a higher level of severity than any one of the hazards taken singly. Reasonably foreseeable combinations of hazards and their potential synergies must be identified and the resulting risk estimated.

Oversimplification of the hazard identification and analysis processes can result in overlooking hazards or underestimating the risk.

Thinking about the Fukushima Dai Ichi plant again: the earthquake’s effects on the plant, combined with the impact of the tsunami wave, resulted in the loss of primary power, then the loss of backup power from the generators, and finally the partial meltdowns and explosions at the plant. The combination of earthquake and tsunami was well known, not some ‘unimaginable’ or ‘unforeseeable’ situation. When conducting risk assessments, all reasonably foreseeable combinations of hazards must be considered.

Abuse and neglect

The risk assessment process is subject to abuse and neglect. Risk assessment has been used by some as a means to justify exposing workers and the public to risks that should never have been permitted. Skewing the results of the risk assessment, either by underestimating the risk initially or by overestimating the effectiveness and reliability of control measures, can lead to this situation. Decisions about the ‘tolerability’ or ‘acceptability’ of risks should be approached with great caution when the severity of the potential consequences is high. In my opinion, unless you are personally willing to take the risk you propose to accept, it cannot be considered either tolerable or acceptable, regardless of the legal limits that may exist.

In the case of the Japanese nuclear plants, the operators have publicly admitted to falsifying inspection and repair records, some of which have resulted in accidents and fatalities.

In 1990, the US Nuclear Regulatory Commission wrote a report on the Fukushima Dai Ichi plant that predicted the exact scenario that resulted in the current crisis. These findings were shared with the Japanese authorities and the operators, but no one in a position of authority took the findings seriously enough to do anything. Relatively simple and low-cost protective measures, like increasing the height of the protective sea wall along the coastline and moving the backup generators to high ground could have prevented a national catastrophe and the complete loss of the plant.

A Useful Tool

Despite these human failings, I believe that risk assessment is an important tool. Increasingly sophisticated technology has rendered ‘common sense’ useless in many cases, because people lack the expertise needed to develop any common sense about the hazards these technologies present.

Where hazards are well understood, they should be controlled with the simplest, most direct and effective measures available. In many cases this can be done by the people who first identify the hazard.

Where hazards are not well understood, bringing in experts with the knowledge to assess the risk and implement appropriate protective measures is the right approach.

The common aspect in all of this is the identification of hazards and the application of some sort of control measures. Risk assessment should not be neglected simply because it is sometimes difficult, or it can be done poorly, or the results neglected or ignored. We need to improve what we do with the results of these efforts, rather than neglect to do them at all.

In the meantime, the Japanese, and the world, have some cleanup to do.

The Problem with Probability

This entry is part 3 of 8 in the series Risk Assessment

Risk Factors


There are two key factors that must be understood when assessing risk: Severity and Probability (or Likelihood). Sometimes the term ‘consequence’ is used instead of ‘severity’; in the context of machinery risk assessment, the two can be considered synonyms. Severity seems to be fairly well understood: most people can easily imagine what reaching into a spinning blade might do to the hand doing the reaching. A problem arises when the hazard itself is insufficiently understood, but that’s a subject for another post.


Probability or likelihood is used to describe the chance that an injury or a hazardous situation will occur. Probability is used when numeric data are available and the value can be calculated, while likelihood is used when the assessment is subjective. The probability factor is often broken down into three sub-factors, as shown in Figure 3 of the standard [1]: the frequency and duration of exposure to the hazard, the probability of occurrence of a hazardous event, and the possibility of avoiding or limiting the harm.

There is No Reality, only Perception…

Whether you use probability or likelihood in your assessment, there is a fundamental problem with people’s perception of these factors. People have a difficult time appreciating the meaning of probability. Probability is a key factor in determining the degree of risk from any hazard, yet when figures like “1 in 1000” or “1 × 10⁻⁵ occurrences per year” are discussed, it’s hard for people to truly grasp what these numbers mean. When probability is discussed as a rate, a figure like “1 × 10⁻⁵ occurrences per year” can make the chance of an occurrence seem inconceivably distant, and therefore less of a concern. Likewise, when more subjective scales are used it can be difficult to really understand what “likely” or “rarely” actually mean. Consequently, even in cases where the severity may be very high, the risk related to a particular hazard may be neglected if the probability is deemed low.

To see the other side, consider people’s attitude toward winning a lottery. Most people will agree that “someone will win”, and the infinitesimal probability of winning is seen as significant. The same odds given in relation to a negative outcome might be seen as ‘infinitesimally small’, and therefore negligible.

For example, consider the decisions made by the Tokyo Electric Power Company (TEPCO) when constructing the Fukushima Dai Ichi nuclear power plant. TEPCO engineers and scientists assessed the site in the 1960s and decided that a 10 meter tsunami was a realistic possibility. They built the reactors, turbines and backup generators 10 meters above the surrounding sea level, then located the system-critical condensers in the seaward yard of the plant below the 10 meter level. To protect that critical equipment they built a 5.7 meter high seawall, almost 50% shorter than the predicted tsunami height! While I don’t know what rationale was used to support this design decision, it is clear that the plant would have taken significant damage from even a relatively mild tsunami. The 11-Mar-2011 tsunami topped the highest prediction by nearly 5 meters, resulting in a Level 7 nuclear accident and decades of recovery work. TEPCO executives have repeatedly stated that the conditions leading to the accident were “inconceivable”, yet redundancy was built into the systems for just this type of event, and some planning for tsunami effects went into the design. Clearly the event was neither unimaginable nor inconceivable, just underestimated.

Risk Perception

So why is it that tiny odds are seen as an acceptable risk and even a reasonable likelihood in one case, and a negligible chance in the other, particularly when the ignored case is the one that will have a significant negative outcome?
According to an article in Wikipedia [2], there are three main schools of thought on risk perception: psychological, sociological and interdisciplinary. In a key early paper written in 1969, Chauncey Starr [3] found that people would accept voluntary risks roughly 1000 times greater than involuntary risks. Later research has challenged these findings, showing the gap between voluntary and involuntary risk acceptance to be much narrower than Starr found.
Early psychometric research by Kahneman and Tversky showed that people use a number of heuristics to evaluate information, including:
  • Representativeness;
  • Availability;
  • Anchoring and Adjustment;
  • Asymmetry; and
  • Threshold effects.
This research showed that people tend to be risk-averse where gains are concerned, like the potential loss of savings from risky investments, while they accept risk readily where losses are concerned, preferring the hope of losing nothing over a certain but smaller loss. This may explain why low-probability, high-severity OHS risks are so often ignored: people hope that lesser injuries will occur rather than the maximum predicted severity.

Significant results also show that better information frequently has no effect on how risks are judged. More weight is put on risks with immediate, personal results than those seen in longer time frames. Psychometric research has shown that risk perception is highly dependent on intuition, experiential thinking, and emotions. The research identified characteristics that may be condensed into three high order factors:

  1. the degree to which a risk is understood;
  2. the degree to which it evokes a feeling of dread; and
  3. the number of people exposed to the risk.

“Dread” describes a risk that elicits visceral feelings of impending catastrophe, terror and loss of control. The more a person dreads an activity, the higher its perceived risk and the more that person wants the risk reduced [4]. Fear is clearly a stronger motivator than any degree of information.

Considering the differing views of those studying risk perception, it’s no wonder that this is a challenging subject for safety practitioners!

Estimating Probability

Frequency and Duration

Some aspects of probability are not too difficult to estimate. Consider the frequency or duration of exposure factor. At face value this can be stated as “X cycles per hour” or “Y hours per week”. Depending on the hazard, there may be more complex exposure data, like that used for audible noise exposure. In that case, noise exposure is often expressed as a time-weighted average (TWA), like “83 dB(A), 8 h TWA”, meaning 83 dB(A) averaged over 8 hours.
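As a concrete illustration, an 8-hour TWA can be computed from measured shift segments using the equal-energy rule (3 dB exchange rate) used in ISO-based noise assessment. The shift profile below is a made-up example, not data from any real workplace:

```python
# Illustrative 8-hour time-weighted-average noise level using the
# equal-energy (3 dB exchange rate) rule. Segment levels and durations
# are hypothetical figures for demonstration only.

import math

def twa_8h(segments: list[tuple[float, float]]) -> float:
    """8 h TWA in dB(A) from (level_dBA, hours) segments, equal-energy rule."""
    # Sum the acoustic energy contribution of each segment, then
    # normalize to an 8-hour reference duration.
    energy = sum(hours * 10 ** (level / 10) for level, hours in segments)
    return 10 * math.log10(energy / 8.0)

# e.g. 4 h at 85 dB(A), 2 h at 80 dB(A), 2 h of quieter work at 70 dB(A)
shift = [(85.0, 4.0), (80.0, 2.0), (70.0, 2.0)]
print(f"{twa_8h(shift):.1f} dB(A), 8 h TWA")  # 82.7 dB(A), 8 h TWA
```

Note how the loudest segment dominates the result: because the scale is logarithmic, halving the time at the highest level does far more for the TWA than eliminating the quiet segments entirely.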

Estimating the probability of a hazardous situation is usually not too tough either. This could be expressed as “15 minutes, once per day / shift” or “2 days, twice per year”.


Estimating the probability of avoiding an injury in any given hazardous situation is MUCH more difficult, since the speed of occurrence, the ability to perceive the hazard, the knowledge of the exposed person, their ability to react in the situation, the level of training that they have, the presence of complementary protective measures, and many other factors come into play. Depth of understanding of the hazard and the details of the hazardous situation by the risk assessors is critical to a sound assessment of the risk involved.
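The sub-factors discussed above (exposure, occurrence of a hazardous event, and possibility of avoidance) are often combined with severity into a single risk score. The sketch below is a hypothetical scoring scheme; the scales and the additive combination are illustrative only and are not prescribed by ISO 12100 or any other standard:

```python
# Hypothetical risk-scoring sketch combining severity with the three
# probability sub-factors. Scales and weights are illustrative only.

from dataclasses import dataclass

@dataclass
class RiskEstimate:
    severity: int    # 1 (minor) .. 4 (fatal)
    exposure: int    # 1 (rare) .. 4 (continuous) - frequency/duration
    occurrence: int  # 1 (unlikely) .. 4 (likely) - hazardous event
    avoidance: int   # 1 (easily avoided) .. 4 (impossible to avoid)

    def likelihood(self) -> int:
        # Probability sub-factors combined (illustrative additive scheme).
        return self.exposure + self.occurrence + self.avoidance

    def score(self) -> int:
        return self.severity * self.likelihood()

# A spinning blade reached into several times per shift:
blade = RiskEstimate(severity=4, exposure=3, occurrence=2, avoidance=3)
print(blade.score())  # 4 * (3 + 2 + 3) = 32
```

The point of such a scheme is not the particular numbers but the discipline: each sub-factor must be estimated and defended separately, which is exactly where the avoidance factor, the hardest to judge, tends to get glossed over.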

The Challenge

The challenge for safety practitioners is twofold:

  1. As practitioners, we must try to overcome our biases when conducting risk assessment work, and where we cannot overcome those biases, we must at least acknowledge them and the effects they may produce in our work; and
  2. We must try to present the risks in terms that the exposed people can understand, so that they can make a reasoned choice for their own personal safety.

I don’t suggest that this is easy, nor do I advocate “dumbing down” the information! I do believe that risk information can be presented to non-technical people in ways that let them grasp the critical points.

Risk assessment techniques are becoming fundamental in all areas of design. As safety practitioners, we must be ready to conduct risk assessments using sound techniques, be aware of our biases and be patient in communicating the results of our analysis to everyone that may be affected.


[1] “Safety of Machinery—General Principles for Design—Risk Assessment and Risk Reduction”, ISO 12100, Figure 3, ISO, Geneva, 2010.
[2] “Risk Perception”, Wikipedia, accessed 19/20-May-2011.
[3] Chauncey Starr, “Social Benefits versus Technological Risks”, Science, Vol. 165, No. 3899 (19-Sep-1969), pp. 1232–1238.
[4] Paul Slovic, Baruch Fischhoff, Sarah Lichtenstein, “Why Study Risk Perception?”, Risk Analysis 2(2) (1982), pp. 83–93.