
Methods and tools used in Criticality Analysis in Industrial Systems

Abstract
In systems and equipment, the level of maintenance provided is directly related to their
criticality, which is determined by criteria such as operating environment, safety, environmental
impact and failure rate, among others. This article discusses the basic concepts of criticality
analysis of industrial systems, presenting and comparing the following methods used in
criticality analysis: Risk Priority Number (RPN), criticality matrix, ABC classification, GUT
matrix, FMEA/FMECA and RCM, highlighting their application processes, advantages and disadvantages.

Introduction
The goal of designing, operating and maintaining an industrial system is to satisfy the needs
of a client, to a given standard of quality, while optimizing its production capacity.
However, with increased service life and operating time, failures can affect the system
components, which makes a maintenance policy essential to restore the performance of
the components to a desired level (Mohideen et al., 2011).
Nguyen et al. (2013) emphasize maintenance as critical in ensuring the performance,
reliability and service life of industrial equipment; however, in many cases it is not
viable to execute all necessary maintenance actions, whether because of resource
constraints, time or the complexity of the processes. In such cases, an analysis becomes
necessary to prioritize the critical equipment of the process, considering its technical
characteristics, interactions between systems, maintenance data, parameter variations
and operational context.
Criticality analysis aims at identifying, for a given period of time, the impact of the
unavailability of equipment on a production process; thus, managing the criticality of
all components of an industrial process is vital to define maintenance performance,
establishing the policies and actions to be taken and distributing resources effectively
(THOMAIDIS & PISTIKOPOULOS, 2004).
The literature presents several tools to determine the criticality of equipment, such as
the Risk Priority Number (RPN), Failure Mode, Effects and Criticality Analysis
(FMECA) and the Fault Tree. In essence, these models can provide a quantitative approach,
based on failure rates, the rates of failure effects and other maintenance rates (MIL-1629,
1980; IEC 60812, 2006), or qualitative approaches, based on specific operational criteria
and the experience of the professionals involved in the analysis (Moubray, 1997).
The objective of this paper is to present the main tools and methods traditionally applied
in the criticality analysis of industrial systems, describe their application processes,
characteristics, advantages and disadvantages, and identify the likely scenarios for each
application.
Criticality in Maintenance
Hijes and Cartagena (2006) emphasize the importance of maintenance in restoring the lost
reliability of the system, and observe that the higher the criticality of the equipment, the
greater the maintenance focus it should receive, with criticality analysis as the starting
point to prioritize the level of maintenance required for each system and the allocation of
maintenance resources.
Moss and Woodhouse (1999) note that criticality has different interpretations depending on
the context and purpose of the analysis, defining it as the attribute that expresses the
importance of the function of a device in the process in which it is inserted, under safety,
quality and environmental aspects, among others. For Aven (2009), criticality expresses
how crucial the equipment is in its operational context, where its failure or poor
performance has consequences such as personal and environmental accidents and economic
and production impacts, the criticality of the equipment being directly proportional to
those impacts.

Criticality analysis is a technique that identifies and classifies effects and potential
events based on their impact and relevance to the process, with applications in risk
studies, reliability design and operating plants, besides being a requirement in
environmental and safety management systems (Smith & Hawkins, 2004).
Criticality can be evaluated quantitatively, by obtaining a criticality number from failure
rates, failure mode rates and failure effect rates, that is, numerical data with known and
trusted values, according to the documents MIL-STD-1629A and IEC 60812, where methods and
formulas for this approach are presented. Qualitative evaluation is used when no data about
the failures are available, making it necessary to classify the criticality subjectively,
based on the tacit knowledge of the analysis team; it is commonly applied in projects or
during the commissioning of facilities, but as the system goes into operation, data
collection and the use of quantitative methods are recommended
(MIL-1629, 1980; IEC 60812, 2006).
Criteria for evaluating criticality
Siqueira (2009) observes that in most industrial plants there is no adequate selection of
the parameters that affect the criticality of equipment, the assessment being based only on
the experience and tacit knowledge of the technician responsible for the analysis. According
to Horenbeek and Pintelon (2010), technical information alone is not sufficient to
determine the criticality of equipment, and they suggest adding other criteria such as: the
interfunctional relationship between equipment and process; potential for failure; financial
impacts; environmental policies; safety; economic aspects; quality; as well as criteria
specific to each industry segment.
Criteria for Safety and Environment
Society's concern with environmental and safety aspects is ever greater, and economic and
technological advances are no longer tolerated to the detriment of those aspects.
Overlooking their importance may even damage the company's image in the community where it
operates. Mobley (2008) highlights safety as one of the most important aspects of
contemporary industrial management.
Moubray (1997) notes that a piece of equipment is considered critical from the standpoint of
safety or the environment when failures generated directly or indirectly by it pose a risk
to the lives of employees or of the community in which the company operates, or violate an
environmental law or standard.
Economic / financial criteria
Any industrial undertaking is subject to financial impacts; even those whose main
objective is not the generation of profit (military, health care) may suffer impacts from
economic fluctuations. The costs involved in industrial activities are classified into
(KARDEC & NASCIF, 2009):
Cost of production: resulting from loss of production or product quality, caused by
failure or loss of performance of equipment and installations;
Direct costs: resources needed to maintain the function of the equipment, such as
preventive and predictive maintenance, repairs and general maintenance activities; and
Indirect costs: stemming from the administrative needs of production and
maintenance, such as management, supervision and design, among others.
Moubray (1997) emphasizes the features present in equipment with economic impacts:
(i) ability to alter production; (ii) impact on product quality; (iii) customer complaints
(external/internal); (iv) effect on process efficiency; and (v) excessive consumption of
resources (electricity, water, raw materials).
Production criteria and quality
Equipment and systems whose failures have the ability to affect production, product
quality or the process are highly relevant for managers in the criticality analysis,
primarily because they financially impact the organization.
According to Ribeiro (2010), critical equipment within the production context has the
following characteristics: (i) presents frequent breakdowns; (ii) has no spares; (iii)
impacts delivery of the output or reduces production capacity; (iv) affects the quality of
the product or process; (v) provokes damage to equipment or to the process; and (vi)
exhibits intermittent failures.
Criteria for Maintenance
Availability may be defined as the probability that an equipment or system is available,
under specified conditions, when required or for a given period of time (DHILLON, 2006;
BS EN 13306, 2001). To know the availability of the equipment, it is necessary to identify
the impact that the absence of its function will have on production (MOBLEY, 2008).

The literature presents various ways to calculate availability, adapted to the author's
view and the context of application; however, the most widespread form is shown in
Equation 1 (Smith, 2001):
D = MTBF / (MTBF + MDT)          (Equation 1)

Where:
MTBF (Mean Time Between Failures) represents the mean time between failures, defined as the
ratio between the available machine time and the number of corrective actions;
MDT (Mean Down Time) expresses the average unavailability time of the equipment or system,
defined as the sum of all the time needed to maintain and restore the system to a desired
operating level.

Some applications consider the use of the MTTR indicator (Mean Time To Repair), which
represents the average time required to recover the asset after a failure. The MTTR and MDT
indicators address a similar context, as both indicate a period of system unavailability;
however, the MTTR calculation does not consider setup time and production adjustments,
which are computed in the MDT (FILHO, 2008).
Despite their simple concept, these indicators are vital for maintenance planning, because
even if an asset presents a small number of failures, with excellent reliability, a high
MTTR will drastically reduce its availability.
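As a minimal illustrative sketch (the MTBF, MDT and operating figures below are assumptions, not data from the article), Equation 1 can be evaluated directly once MTBF and MDT are estimated from maintenance records:

# Availability sketch based on Equation 1 (D = MTBF / (MTBF + MDT)).
# Figures are illustrative assumptions only.

def mtbf(operating_time_h: float, corrective_actions: int) -> float:
    """Mean Time Between Failures: available time divided by the number of corrective actions."""
    return operating_time_h / corrective_actions

def availability(mtbf_h: float, mdt_h: float) -> float:
    """Availability D = MTBF / (MTBF + MDT)."""
    return mtbf_h / (mtbf_h + mdt_h)

if __name__ == "__main__":
    pump_mtbf = mtbf(operating_time_h=8000.0, corrective_actions=10)  # 800 h between failures
    pump_mdt = 16.0  # average downtime per stop, including setup and production adjustments
    print(f"Availability: {availability(pump_mtbf, pump_mdt):.3f}")  # ~0.980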

Reliability is the probability that an equipment or system fulfills its function, within
defined performance standards, for a given time interval (DHILLON, 2006).
Gutiérrez (2005) emphasizes the existence of four characteristics that define the
reliability of an asset, as follows:
(i) probability - the relation between the number of favorable and possible cases for a
period of time;
(ii) required or satisfactory function - the operating limit below which the performance of
the system or equipment no longer meets the specified level of its function;
(iii) period of time - the random variable that defines reliability, referring to the
operating time or life cycle; and
(iv) operating conditions - the conditions under which the equipment or system will perform
its function, such as location, environmental conditions, state of the raw material,
among others.
Equation 2 presents the formula for calculating reliability:

R(t) = e^(-λt)          (Equation 2)

Where:
R(t) - reliability of the equipment for a given time t;
e - base of the natural (Napierian) logarithms (2.718);
λ - failure rate, defined by the ratio between the number of failures and the number of
hours in operation;
t - expected time of operation.
IEC (2006) indicates that the frequency with which the asset fails influences its
criticality within the process in question. Campbell and Jardine (2001) observe that the
frequency of failure provides comparative information about the asset, because it indicates
the potential for equipment failure.
To estimate the frequency of equipment failures, one can use its failure rate or, in the
absence of this information, service technicians and equipment specialists can determine
the frequency of failures through their experience, production data and maintenance history
and records, using Equation 3 (CAMPBELL & JARDINE, 2001):

BF = NB / (TT - DT - NT)          (Equation 3)

Where:
BF: frequency of failure (breakdown frequency);
NB: number of failures (number of breakdowns);
TT: total time of operation;
DT: equipment stopping time (downtime);
NT: time without operation (non-utilized time).
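A short sketch of Equations 2 and 3 follows; the breakdown counts and hours are illustrative assumptions, not data from the article:

import math

def reliability(failure_rate_per_h: float, t_h: float) -> float:
    """R(t) = exp(-lambda * t), Equation 2."""
    return math.exp(-failure_rate_per_h * t_h)

def breakdown_frequency(nb: int, tt_h: float, dt_h: float, nt_h: float) -> float:
    """BF = NB / (TT - DT - NT), Equation 3: failures per hour of effective operation."""
    return nb / (tt_h - dt_h - nt_h)

if __name__ == "__main__":
    bf = breakdown_frequency(nb=4, tt_h=8760.0, dt_h=120.0, nt_h=640.0)  # 4 / 8000 = 0.0005
    print(f"Failure frequency: {bf:.5f} failures/h")
    print(f"R(720 h): {reliability(bf, 720.0):.3f}")  # probability of surviving one month of operation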

Table 1 presents the main criteria used in the evaluation of the criticality of industrial
equipment and systems:

Criteria: Safety and environment
Assessment requirements: threat to the lives of employees; risk to the health of employees;
threat to the community; infraction of environmental standards and laws.
Authors: Mobley et al. (2008); Siqueira (2009); Moubray (1997)

Criteria: Economic-financial
Assessment requirements: costs of production; direct and indirect costs; ability to change
production; impact on the efficiency of the process; excessive consumption of resources;
costs of maintenance procedures; costs of parts and spares.
Authors: Moubray (1997); Siqueira (2009); Mobley et al. (2008)

Criteria: Production and quality
Assessment requirements: change in the productive system; customer complaints (internal and
external); impact on product quality; equipment without spares; "bottleneck" equipment in
production; damage caused to the equipment itself or to neighboring equipment.
Authors: Ribeiro (2010); Helmann (2008); Siqueira (2009)

Criteria: Availability, reliability and frequency of failures
Assessment requirements: mean time between failures (MTBF); mean down time (MDT); failure
rate; reliability; frequency of failure.
Authors: Campbell & Jardine (2001); Dhillon (2006); Smith & Hinchcliffe (2004)

Source: Author (2013)

Table 1 - Criteria and parameters used in Criticality Analysis


Selection of Critical Equipment
There are several methods and tools for criticality analysis and the selection of critical
systems, both available in the literature and adapted or created for the specific needs of
an industrial plant, whether as stand-alone analysis tools or incorporated within
maintenance and quality philosophies (RCM, FMEA and FMECA). Helmann (2008) and Smith (2009)
observe that most companies use empirical methods in the evaluation of criticality, based on
the experience of managers and maintenance technicians, which, despite serving as a
reference in the preparation of activities and resources, do not offer a complete assessment
that contemplates the different aspects and scenarios needed for an overview of the system,
including areas such as safety, environment, production, quality and other relevant
departments. The methods used for criticality analysis in industrial systems are presented
below.
3.1 ABC Classification
The Japan Institute of Plant Maintenance (1995) recommends the use of ABC classification as
a tool to evaluate the criticality of a machine or system within an industrial process,
through the use of the decision flowchart shown in Figure 2.
In the flow, the system is evaluated against the criteria chosen by those responsible for
the analysis, through questions that guide the evaluation of the system, and at the end it
is classified into one of three classes (A, B or C).
After the analysis, maintenance will be oriented for each system or equipment based on its
classification, as follows (JIPM, 1995):
Class A: equipment highly critical to the process, central to a preventive policy
comprising: predictive and preventive maintenance, failure analysis by maintenance and
operation, improvement teams focused on failure reduction, and application of RCM or FMECA
methodologies.
Class B: equipment important for the process, for which the application of any of the
following techniques is acceptable: preventive or predictive maintenance, improvement
teams, and failure analysis by maintenance.
Class C: equipment with low impact on the process, with the following maintenance policies:
corrective maintenance, predictive and/or preventive maintenance on functional equipment,
and failure monitoring to prevent recurrences.
A simplified sketch of this decision flow is given after Figure 2.
Figure 2 - ABC classification (decision criteria and flow)
Source: JIPM (1995)
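The sketch below illustrates, in simplified form, how such a decision flow can be encoded; the questions and their order are assumptions for illustration and do not reproduce the exact JIPM flowchart:

# Minimal sketch of a decisional flow in the spirit of the JIPM ABC classification (Figure 2).
# The questions and their order are illustrative assumptions.

def classify_abc(safety_or_environmental_risk: bool,
                 stops_production: bool,
                 has_standby_or_spare: bool,
                 frequent_failures: bool) -> str:
    if safety_or_environmental_risk:
        return "A"  # highest criticality: preventive/predictive policy, RCM or FMECA
    if stops_production and not has_standby_or_spare:
        return "A"
    if stops_production or frequent_failures:
        return "B"  # important: preventive or predictive maintenance acceptable
    return "C"      # low impact: corrective / run-to-failure acceptable

print(classify_abc(False, True, False, True))    # -> "A"
print(classify_abc(False, False, False, False))  # -> "C"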
3.2 GUT Matrix
The GUT matrix is a quality tool used to prioritize problems, taking into consideration the
parameters severity (gravity), urgency and tendency (GUT). It was developed to guide
decision making on complex problems from the perspective of different decision makers.
Helmann (2008) points out that this tool can be adapted to evaluate the criticality of
equipment, considering:

Severity: factor related to the possible effects that may arise in the medium and/or long
term in the event of a failure, and its impact on the process, employees and results;
Urgency: directly related to the time available for solving the fault;
Tendency: related to the possibility of a problem worsening or diminishing.
For each factor, weights are attributed on a qualitative scale from 1 to 5, according to
the degree of impact of the equipment on each of the parameters; the criticality level of
the equipment is then determined by multiplying the factors (severity, urgency and
tendency) (HELMANN, 2008).
Table 2 shows an example of a GUT matrix used to evaluate the criticality of equipment.
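A minimal sketch of the GUT scoring described above is shown below; the equipment names and ratings are illustrative assumptions:

# Sketch of GUT scoring as described by Helmann (2008): each factor rated 1-5
# and the criticality given by S x U x T. Names and ratings are illustrative.

def gut_score(severity: int, urgency: int, tendency: int) -> int:
    for factor in (severity, urgency, tendency):
        if not 1 <= factor <= 5:
            raise ValueError("GUT factors must be rated on a 1-5 scale")
    return severity * urgency * tendency

equipment = {
    "boiler feed pump": (5, 4, 4),
    "workshop compressor": (2, 2, 3),
}
ranked = sorted(equipment.items(), key=lambda kv: gut_score(*kv[1]), reverse=True)
for name, (s, u, t) in ranked:
    print(f"{name}: GUT = {gut_score(s, u, t)}")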

3.3 FMEA/FMECA

Originating from the US military, FMECA - Failure Mode, Effects and Criticality Analysis -
is a reliability tool disseminated and applied in various industries: MIL-STD-1629
(Department of Defense), SAE-J1739 and SAE-ARP5580 (automotive industry), and IEC 60812 and
STUK-YTO-TR190 (electronics industry).
FMECA is composed of two distinct analyses: FMEA - Failure Mode and Effects Analysis - and
Criticality Analysis (CA). The FMEA observes failure modes and their effects, and the CA
prioritizes each failure mode according to its level of importance, using parameters such
as the failure rate and the severity of the fault effect (IEC, 2006).
The FMEA can be described as a sequence of logical steps, starting with an analysis of the
components or subsystems of the equipment (lower level), identifying the potential failure
modes and their failure mechanisms, and then propagating their effects to higher levels of
the system (IEC 60812, 2006; MOBLEY, 2008). The analysis of the system can be performed in
ascending order (bottom-up), when it starts with the identification of the failure modes at
the lower system level, tracing their effects at higher levels up to the main function of
the equipment. Another form of the analysis is the descending one (top-down), with an
analysis of the potential and functional failures affecting the equipment and an
investigation of the causes of these failures at lower equipment levels (subsystems and
components).
The result of applying FMECA is greater knowledge and understanding of the critical points
of a system (failure modes), providing a database for building a reliability model and an
auxiliary tool for choosing the maintenance activities that mitigate or eliminate these
failure modes. Another result is that the definition of maintenance tasks is based on
knowledge of equipment failures and their causes, in order to identify the maintenance
actions that can prevent, reduce or eliminate the onset of a fault, making FMEA/FMECA vital
in system reliability processes (SMITH & HINCHCLIFFE, 2004).
The different versions of FMECA present a similar application flow: to perform a FMECA
analysis, the first step is to perform an FMEA, which is used as the database for the
criticality analysis (CA).
Dhillon (2006) proposes the following application flow (a minimal worksheet sketch is given
after the list):
Understand the function of the chosen system, its mode of operation, and the subsystems,
components and parts involved;
Establish the depth of the analysis in terms of the hierarchical level of the system;
Identify each item to be analyzed (for example, subsystem, module or part);
Identify all possible failure modes for each component in question;
Determine the effect of the failure of each item for each failure mode;
Determine the effect of the faults in the local system context, in auxiliary systems and at
higher levels of the system;
Identify potential causes for the failure modes of each component;
List the methods, procedures and tools for detecting the possible failures;
Determine the severity of each failure mode;
Estimate the frequency or probability of occurrence of the failure mode in a given period.
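A minimal worksheet sketch following this flow is shown below; the field names and the example record are illustrative assumptions rather than a layout mandated by any standard:

# Minimal sketch of a FMECA worksheet line following the flow above: one record per
# failure mode, carrying the severity/frequency fields used later by the criticality
# analysis (CA). Field names and the sample record are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class FailureModeRecord:
    item: str              # component or subsystem analyzed
    function: str          # function of the item
    failure_mode: str      # how the item can fail
    local_effect: str      # effect at component level
    system_effect: str     # effect propagated to the system level
    cause: str             # potential cause of the failure mode
    detection_method: str  # procedure or tool able to reveal the failure
    severity: int          # severity category (e.g. 1-5)
    occurrence: int        # frequency/probability level (e.g. 1-5)

record = FailureModeRecord(
    item="lube oil pump", function="circulate lubrication oil",
    failure_mode="no flow", local_effect="loss of oil pressure",
    system_effect="compressor trips on low oil pressure",
    cause="coupling failure", detection_method="pressure switch alarm",
    severity=4, occurrence=2,
)
print(record.item, "-", record.failure_mode, "- S x F =", record.severity * record.occurrence)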

Classification of failure effects
According to Carazas (2011), to classify the effects of a failure, a severity level must be
used, which aims to provide a qualitative assessment of the effect of the component failure
mode on the entire system.
The severity level should be set at the end of the analysis of failure modes, still within
the FMEA, in order to identify the failure modes that have no effect on the system or have
negligible effects (IEC, 2006; HEADQUARTERS, 2006).
McDermott et al. (2009) describe the severity level as an estimate of the impact of the
effects on the system in the event of a failure mode. However, it should be noted that each
failure mode may have different effects, and each effect can have different impacts
depending on how it is analyzed.

3.4 Risk Priority Number
The Risk Priority Number (RPN) is a tool that analyzes the risks present in potential
failures, focusing on the prioritization of maintenance activities (JIAN-MING et al., 2011).

According to IEC 60300 (2006), risk can be defined as the probability of an event occurring
combined with its effects on the process. Hokstad and Steiro (2006) define risk as the
possible occurrence of all unwanted events and conditions.
The RPN can be evaluated using Equation 4 or, including the detection level, Equation 5
(IEC, 2006):

RPN = S × F          (Equation 4)
RPN = S × F × D          (Equation 5)

In the equations, (S) denotes the severity of the fault, (F) the frequency of the failure
and (D) the detectability.
According to Turan et al. (2011) and Horenbeek et al. (2010), the severity should be
established considering all process areas (safety, environment, quality, production, among
others). The literature presents different severity scales, which vary according to the
version of FMECA used, the level of analysis and the available resources. Siqueira (2009)
uses five categories to classify the severity levels, associating them with safety,
environmental and operational criteria, as presented in Table 3.

Table 3 - Severity levels

Category   Severity        Value   Environmental damage   Personal damage   Economic damage
I          Catastrophic    5       Big                    Fatal             Fatal
II         Critical        4       Significant            Serious           Serious
III        Marginal        3       Light                  Light             Light
IV         Minimum         2       Acceptable             Insignificant     Insignificant
V          Insignificant   1       Insignificant          Inexistent        Inexistent

Source: Siqueira (2009, p. 101)

The severity classification comprises the following categories (MIL, 1980; MOUBRAY, 1997):
Catastrophic: failures with the potential to cause death or major damage to the environment
and the system, causing loss of the main function;
Critical: failures with the potential to cause serious injury or severe damage to the
environment, and which completely undermine the system;
Marginal: failures resulting in minor injuries and small damage to the environment or
system, or damage that does not generate malfunctions;
Minimum: failures that cause damage to safety, the environment or the system, but below the
legally established maximum levels;
Insignificant: failures whose effect is insufficient to generate an accident or damage to
the environment or the system.
The level of frequency or probability of each failure mode must be determined in order to
properly evaluate the criticality of the effect or failure mode (IEC, 2006). Determining
the frequency of failures in a system requires data on the failure rates of the system
components and on the operating conditions under which it performs its function.
In the absence of data on the failure rate of the components, the frequency can be
estimated using the experience of the experts involved in the analysis, combined with the
history of similar equipment in the process. Thus, the selected criteria may be adjusted as
needed for a specific application (NASA, 2006; HEADQUARTERS, 2006).
Table 4 shows an example of frequency levels for a failure mode.

Table 4 - Failure frequency levels

Level        Failure Rate      Description
Very High    1/10, 1/20        Very high failure rate; failure occurs continuously
High         1/50, 1/100       High failure rate; failure occurs frequently
Moderate     1/200, 1/500      Moderate failure rate; failure occurs occasionally
Occasional   1/1000, 1/2000    Occasional failure rate; failure reasonably expected
Low          1/5000            Low failure rate; failure occurs exceptionally
Remote       1/10000           Remote likelihood of occurrence; failure seldom expected to occur

Source: Adapted by the author from Headquarters (2006, p. 4-17)

Detection Level
This level measures the difficulty of detecting the fault, through an assessment of the
available detection methods and their applicability to each failure or failure mode
analyzed. A failure that cannot be detected during operation receives a high value on the
scale, because its detection probability is low or nonexistent. Conversely, a failure mode
that has a reliable detection technique according to the P-F curve (potential failure to
functional failure) will have a high possibility of detection, represented by the lower
value of the scale (HUADONG & ZHIGANG, 2011; MCDERMOTT et al., 2009).
A detection classification example is shown in Table 5.
Table 5 - Detection levels

Level   Detection possibility   Description
1       High                    Failure detectable by simple operating procedures
2       Moderate                Functional inspection needed to detect
3       Remote                  Functional test needed to detect
4       Low                     Failure only detectable by loss of function
5       Almost impossible       Failure completely hidden

Source: Siqueira (2009, p. 99)
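The sketch below ties the severity, frequency and detection scales together through Equation 5; the failure modes, ratings and the prioritization cut-off are illustrative assumptions:

# Sketch combining the three scales above into a Risk Priority Number,
# RPN = S x F x D (Equation 5). The cut-off used to flag items is an assumption.

def rpn(severity: int, frequency: int, detection: int) -> int:
    return severity * frequency * detection

failure_modes = {
    "seal leakage": (3, 4, 2),
    "bearing seizure": (4, 2, 4),
    "sensor drift": (2, 3, 5),
}
for name, (s, f, d) in sorted(failure_modes.items(), key=lambda kv: rpn(*kv[1]), reverse=True):
    value = rpn(s, f, d)
    flag = "prioritize" if value >= 30 else "monitor"
    print(f"{name}: RPN = {value} ({flag})")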

3.5 Criticality Matrix


The criticality matrix is a visual tool for identifying and comparing the failure modes of
all the components of the subsystem, system or equipment in question, used to evaluate the
relationship between the probability and the severity of each failure mode (IEC, 2006).
Figure 1 shows an example of a criticality matrix, where criticality is determined by the
combination of the severity and frequency values. In this matrix, the failure modes near
the corner of highest severity and frequency are considered the most critical, requiring
prioritization of the maintenance actions (KIM et al., 2009).

Figure 1 - Criticality matrix
Source: Kim et al. (2009)
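A minimal sketch of such a matrix lookup is shown below; the 1-5 scales and the class boundaries are illustrative assumptions, not taken from Kim et al. (2009):

# Sketch of a severity x frequency criticality matrix in the spirit of Figure 1.
# The class boundaries (low / medium / high) are illustrative assumptions.

def matrix_class(severity: int, frequency: int) -> str:
    """Map a (severity, frequency) pair on 1-5 scales to a criticality class."""
    if severity >= 4 and frequency >= 4:
        return "high"    # near the critical corner of the matrix: prioritize maintenance actions
    if severity * frequency >= 8:
        return "medium"
    return "low"

for fm, (s, f) in {"gear wear": (3, 2), "shaft fracture": (5, 4)}.items():
    print(f"{fm}: {matrix_class(s, f)}")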

3.6 Reliability Centered Maintenance


Reliability Centered Maintenance (RCM) is a maintenance approach, originating from the
aircraft industry and the US military in the late 1960s, that prioritizes maintenance
actions on systems and equipment where reliability is critical, with a focus on aspects such
as performance, safety, environment and finance (MOUBRAY, 1997; WANG & HWANG, 2004).
Igba et al. (2013) affirm that the objective of RCM is to preserve the most important
function of the equipment or system, ensuring the necessary reliability and availability at
the lowest possible cost, with an efficient maintenance strategy that reduces and/or
eliminates the effects and consequences of a failure based on the needs of the production
process, rather than on the equipment or component in isolation, as in a traditional
maintenance plan.
However, Rausand (1998) points out that RCM neither increases nor improves system
reliability, but only ensures the realization of the reliability that is inherent to the
system, balancing costs and benefits to obtain an optimized program of preventive
maintenance (PM).
The literature presents several implementation processes for RCM - Moubray (1997), NAVSEA
(2007), Smith & Hinchcliffe (2004) - which vary according to the application context, the
type of analysis, the models used as a basis and the maturity of the analysis team, among
other factors.
The typical steps, accepted in most application models, are (MOUBRAY, 1997; SMITH &
HINCHCLIFFE, 2004):
- Step 1: Identification of System Functions
The purpose of this step is to determine all the functions performed by the system and
subsystems, the operational context and the performance standard for each function. The
actions present at this stage are (MOUBRAY, 1997): (i) set the level of analysis; (ii)
select the systems; (iii) collect information and identify the systems; and (iv) identify
the system functions.
- Step 2: Analysis of Failure Modes and Effects
Once the system functions are defined, the second phase seeks to determine how the system
may stop performing each function, establishing actions to prevent, reduce or detect early
the loss of function. The fundamental points of this stage are to focus the analysis on the
absence of the equipment function and to understand that failures are more than a single,
simple statement of loss of function, because most functions have two or more loss
conditions, not all of which are equally important. The documentation and analysis of
failures in RCM can be accomplished by means of the tools: (i) Failure Mode and Effects
Analysis - FMEA; and (ii) Failure Mode, Effects and Criticality Analysis - FMECA.
- Step 3: Selection of Significant Functions
Prioritize, with the aid of a decision flow, the functions that should be preserved in the
system, evaluating them through the nature of their impact on the process, using as
criteria: (i) operational safety and environment; (ii) operation of the system; and (iii)
economic aspects (NAVSEA, 2007).
Other relevant factors in evaluating a function consist in verifying whether its functional
failure is evident during the equipment's operating process (IEC, 2006), or whether a
preventive maintenance activity already exists for it (SMITH, 1993).
- Step 4: Selection of Applicable Activities
Establish the technical and practical requirements to determine the maintenance actions and
methods to be used. Smith (1993) defines the focus of a preventive maintenance program as:
(i) preventing or reducing the occurrence of failures; (ii) detecting the onset of a fault;
(iii) discovering hidden failures; and (iv) identifying when preventive actions are not
possible due to limitations and system technical specifications. In this context, four
categories of maintenance activities can be mentioned:
Time-directed activities;
Condition-directed activities;
Failure-finding activities;
Run-to-failure (post-failure) activities.
- Step 5: Evaluation of Activity Effectiveness
Evaluate the effectiveness of the results and the technical/economic feasibility of their
application, in view of the economic resources available and criteria such as:
The use and cost of the necessary physical resources;
Unavailability of operation for execution of the task;
Effectiveness of the operations;
Execution interval;
Technical applicability and feasibility of the task.
- Step 6: Selection of Applicable and Effective Activities
Select the maintenance activities according to their applicability and effectiveness in
preserving the functions selected in the previous step, so that they eliminate or reduce
the failures under analysis, safely and under appropriate economic and operational criteria.
Moubray (1997), based on the applicability and effectiveness criteria presented, suggests
the following order of priority in the selection of maintenance activities: (i) predictive
(on-condition) inspection; (ii) preventive restoration; (iii) preventive replacement; (iv)
failure-finding; and (v) default activities.
- Step 7: Setting the Frequency of Activities
The frequency at which a preventive maintenance task is performed is the hardest part of an
RCM analysis. A detailed analysis must be based on a full understanding of how physical
processes and materials change over time and how these changes affect the failure modes;
basically, the analysis works statistically with the failure rates of the components and
their variation over time (SMITH, 1993).
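The sketch below illustrates, in simplified form, the task-selection logic of Steps 4 to 6; the decision questions are condensed assumptions and do not reproduce the full RCM decision diagram:

# Minimal sketch of RCM task selection: it walks a simplified version of the Moubray
# priority order (predictive inspection, preventive restoration/replacement,
# failure-finding, default action). The decision questions are illustrative assumptions.

def select_task(failure_evident: bool,
                condition_monitorable: bool,
                age_related: bool,
                technically_feasible: bool,
                cost_effective: bool) -> str:
    if not (technically_feasible and cost_effective):
        return "default action (run-to-failure or redesign)"
    if condition_monitorable:
        return "predictive (on-condition) inspection"
    if age_related:
        return "preventive restoration or replacement"
    if not failure_evident:
        return "failure-finding task (hidden failure)"
    return "default action (run-to-failure or redesign)"

print(select_task(failure_evident=True, condition_monitorable=True,
                  age_related=False, technically_feasible=True, cost_effective=True))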
Leverette (2006) divides the application of RCM into four macro stages, highlighting the
analysis processes, tools and possible relationships present in the implementation process,
as illustrated in Figure 3.

Figure 3 - RCM implementation process
Source: Leverette (2006)

Cheng et al. (2008) summarize the results of each stage as follows:
Identification of significant functions: identify the critical functions of items - systems,
subsystems and components - whose absence results in economic, financial and operational
impacts and in risks to the environment and safety;
Analysis of failure modes and effects (FMEA): identify the functional failures at every
level of the system or process in question, the faults present at each level, their failure
modes, effects and causes, tracing the effects of each failure at all levels of the system;
RCM decision logic: after identifying the causes of the functional failures, applicable
maintenance tasks and their periodicity are selected through the tools existing in the
methodology;
Combine, develop and update the preventive policy: update the maintenance policy based on
the result of the RCM application, introducing new techniques and methodologies to optimize
the post-implementation results of RCM.

3.7 Comparisons between the presented methods


The choice of method to be used in criticality analysis will depend on factors such as team's
experience with the tool; level of depth of analysis; type of approach to be used (qualitative or
quantitative); available data; parameters and criteria to be used, among others.

Siqueira (2009) points out that quantitative tools such as number of risk, failure rate,
need a reliable database on the equipment and experience of the analyst in the tool,
where often a mathematical analysis is required. However, quantitative models have a
deficiency in common: do not consider the inherent characteristics of each case,
interactions between them, and specific operational criteria, such as: economic, safety
and environment (TENG & HO, 2000; TOMAIDIS & PISTIKOPOULOS, 2004).
Barendes et al. (2012) note that qualitative techniques (FMECA, RCM and ABC
classification) are subject to value analysis and experience of the analyst, should not be
indicated in plant and equipment at the beginning of its life. Oldenhof et al. (2013)
indicated several limitations FMECA / FMEA relative to other quantitative models
primarily emphasizing the fragility of the RPN calculation to use qualitative scales.
Siqueira (2009) and Mobley (2008) point out that, although widespread in other areas, tools
such as the GUT matrix always need to be adapted to the maintenance area, and are not fully
effective mainly because they do not address its specific needs.
Fore & Msipha (2010) point to RCM as a robust methodology that reduces costs and unnecessary
maintenance tasks; however, they also note the high initial cost of training staff in the
methodology and the fact that much of its gain is perceived only at the operational level.
For Tavares (2012), one of the weak points of RCM lies in the enormous amount of information
and data required to apply the methodology, in addition to the great bureaucratization of
the process.

4. Conclusion
Based on the review and comparison of the techniques carried out in this work, the
following conclusions can be drawn:

A large number of techniques and tools are used in the criticality analysis of industrial
systems;
Despite having similar objectives, the methods present distinct characteristics. Qualitative
techniques, such as FMEA, the criticality matrix and the GUT matrix, are highly dependent on
the experience of the specialists responsible for the analysis in order to succeed, whereas
quantitative techniques, such as RCM, RPN and failure rate, require a reliable database and
history for their application;
Techniques such as FMEA, FMECA and RCM improve and provide a more comprehensive view of the
process or system after their application;
All the tools, whether quantitative or qualitative, require failure data and knowledge of
the process under analysis, the quality of the analyses being highly dependent on the
quality of the data used;
None of the tools offers the possibility of including criteria specific to a given process;
Equipment criticality involves the evaluation of subjective aspects in some cases and
objective aspects in others, making the context of the analysis more complex, starting from
the premise that the result of the analysis should come from the preferences of a group of
specialists (technicians, managers and operators) and is subject to several criteria to be
weighed.

In this context, for future work, we suggest the study of a methodology for criticality
analysis in industrial systems that, while working qualitatively, allows the inclusion of
the vision and analysis of the equipment specialists, contemplates quantitative criteria
such as failure rates, costs and maintenance indices, and allows the inclusion of additional
criteria specific to each industrial segment, providing an overall view and greater
reliability of the analyzed data.

References

CAMPBELL, JOHN D.; JARDINE, ANDREW K. S. Maintenance Excellence:


Optimizing Equipment Life-Cycle Decisions. 1 ed. New York: Marcel Dekker Inc.:
2001.
AVEN, TERJE. Identification of safety and security critical systems and activities.
Reliability Engineering & System Safety, v. 94, n. 2, p. 404-411, feb. 2009.
DHILLON, B. S. Maintainability, maintenance and reliability for Engineers. 1. ed.
New York: CRC Press, 2006.
HELMANN, KURTT S. Uma sistemática para determinação da criticidade de equipamentos em
processos industriais baseada na abordagem multicritério. 95f. Dissertação (Mestrado) -
Programa de Pós-Graduação em Engenharia de Produção, Universidade Tecnológica Federal do
Paraná. Ponta Grossa, 2010.

HIJES, FÉLIX C. G. L.; CARTAGENA, JOSÉ J. R. Maintenance strategy based on a multicriterion
classification of equipments. Reliability Engineering & System Safety, v. 91, n. 4,
p. 444-451, apr. 2006.
HOKSTAD, PER & STEIRO, TRYGVE. Overall strategy for risk evaluation and priority setting of
risk regulations. Reliability Engineering and System Safety, n. 9, p. 100-111, 2006.
INTERNATIONAL ELECTROTECHNICAL COMMISSION. IEC 60812: Analysis techniques for system
reliability - procedure for failure mode and effects analysis (FMEA). Switzerland, 2006.
JAPAN INSTITUTE FOR PLANT MAINTENANCE (JIPM). 600 Forms Manual.
Japan, 1995.
JIAN-MING, CAI; ET AL. The Risk Priority Number methodology for distribution
priority of emergency logistics after earthquake disasters. Management Science and
Industrial Engineering (MSIE), 2011 International Conference on, p.560-562, 8-11
Jan. 2011.
KIM, J. H.; JEONG, H. Y. & PARK, J. S. Development of the FMECA Process and
Analysis Methodology for Railroad Systems. International Journal of Automotive
Technology. Montreal, v. 10, n. 6, p. 753-759, 2009.
LEVERETTE, J. C. An Introduction to the US Naval Air System Command RCM Process and
Integrated Reliability Centered Maintenance Software. In: RCM 2006 - The Reliability
Centred Maintenance Managers Forum, 2006. Anais...: p. 22-29.
MILITARY STANDARD. MIL-1629. Procedures for Performing a Failure Mode, Effects and
Criticality Analysis. US DEPARTMENT OF DEFENSE. Washington, DC, 1980.
MOBLEY, K.; HIGGINS, L. R.; WIKOFF, D. Maintenance Engineering Handbook. 7. ed. New York:
McGraw-Hill, 2008.
MOHIDEEN, P. B. AHAMED; RAMACHANDRAN, M. & NARASIMMALU, RAJAM RAMASAMY. Construction plant
breakdown criticality analysis - part 1: UAE perspective. Benchmarking: An International
Journal, v. 18, n. 4, p. 472-489, 2011.
MOSS, T. R. & WOODHOUSE, J. Criticality analysis revisited. Quality and
Reliability Engineering International, v. 15, n. 2, p. 117-121, mar. 1999.
MOUBRAY, J. Reliability-centered maintenance: second edition. 2. ed. New York:
Industrial Press Inc., 1997.
NGUYEN, T. P. KHANH; YEUNG, THOMAS G.; CASTANIER, BRUNO. Optimal maintenance and replacement
decisions under technological change with consideration of spare parts inventories.
International Journal of Production Economics, v. 143, n. 2, p. 472-477, jun. 2013.
RIBEIRO, GIOVANI C. A importância dos critérios de sustentabilidade na definição da
criticidade dos equipamentos analisados sob a ótica de RCM2. Revista Comisión de Integración
Energética Regional (CIER), n. 55, p. 3-10, jun. 2010.
SIQUEIRA, Y. P. D. S. Manutenção centrada na confiabilidade: manual de implantação. 1. ed.
(reimpressão). Rio de Janeiro: Qualitymark, 2009.

SMITH, A. M.; HINCHCLIFFE, G. R. RCM: gateway to world class maintenance. 2. ed. Burlington:
Elsevier Butterworth-Heinemann, v. 1, 2004.
THOMAIDIS, THOMAS V.; PISTIKOPOULOS, STRATOS. Criticality Analysis of Process Systems. In:
Reliability and Maintainability, 2004 Annual Symposium - RAMS, p. 451-458, Jan. 2004.
WANG, CHENG-HUA & HWANG, SHEUE-LING. A stochastic maintenance management model with recovery
factor. Journal of Quality in Maintenance Engineering, Bingley (UK), v. 10, n. 2,
p. 154-164, apr.-jun. 2004.
SMITH, RICKY; HAWKINS, BRUCE. Lean maintenance: reduce costs, improve quality, and increase
market share. 1. ed. Burlington, MA: Elsevier Butterworth-Heinemann, 2004.
IGBA, JOEL; ALEMZADEH, KAZEM; ANYANWU-EBO, IKE; GIBBONS, PAUL & FRIIS, JOHN. A Systems
Approach Towards Reliability-Centred Maintenance (RCM) of Wind Turbines. Procedia Computer
Science, v. 16, p. 814-823, 2013.
CHENG, ZHONGHUA; JIA, XISHENG; GAO, PING; WU, SU & WANG,
JIANZHAO. A framework for intelligent reliability centered maintenance analysis.
Reliability Engineering & System Safety, v. 93, n. 6, p. 806-814, jun. 2008.
TENG, SHENG-HSIEN & HO, SHIN-YANN. Failure mode and effects analysis: An
integrated approach for product design and process control. International Journal of
Quality & Reliability Management, v. 13, n. 5, p.8 26, 2000.
BARENDS, D. M.; OLDENHOF, M. T.; VREDENBREGT, M. J.; NAUTA, M. J. Risk analysis of
analytical validations by probabilistic modification of FMEA. Journal of Pharmaceutical and
Biomedical Analysis, v. 64-65, p. 82-86, may-jun. 2012.
OLDENHOF, M. T.; VAN LEEUWEN, J. F.; NAUTA, M. J.; DE KASTE, D.; ODEKERKEN-ROMBOUTS,
Y. M. C. F.; VREDENBREGT, M. J.; WEDA, M.; BARENDS, D. M. Consistency of FMEA used in the
validation of analytical procedures. Journal of Pharmaceutical and Biomedical Analysis,
v. 54, n. 3, p. 592-595, feb. 2011.
RAUSAND, MARVIN. Reliability Centered Maintenance. Reliability Engineering and
System Safety, v. 60, n. 2, p. 121-132, may. 1998.
FORE, S., MSIPHA, A. Preventive Maintenance using Reliability Centred
Maintenance (RCM): A case study of a ferrochrome manufacturing company. South
African Journal of Industrial Engineering, v. 21, p. 207-23, 2010.
TAVARES, HELDER D. F. Aplicação da Metodologia RCM nos Planos de Manutenção de Sistemas de
Proteção, Comando e Controlo. 111f. Dissertação (Mestrado) - Mestrado Integrado em
Engenharia Eletrotécnica e de Computadores, Faculdade de Engenharia da Universidade do
Porto. Porto, 2012.
NAVSEA. Reliability-Centered Maintenance (RCM) Handbook. S9081-AB-GIB-010.
Naval Sea Systems Command. USA, 2007.
