
Design and Verification of Safety Critical Systems

Mihir Shete

Department of Electrical Electronics and Instrumentation, BITS-Pilani


AH7-202 BITS-Pilani Goa Campus, Zuarinagar, Goa-403726
f2006241@bit-goa.ac.in
Abstract—This paper deals with the design and verification of safety critical systems. It documents the need for formal methods in developing software for safety critical systems, and also deals with various aspects of static and dynamic analysis of safety critical systems.

Keywords: static analysis, dynamic testing, formal methods, symbolic execution

I. INTRODUCTION

A safety critical system is a system in which human safety depends on the correct operation of the system. This paper focuses on various testing techniques for the software of a safety critical system. However, it must be duly noted that the safety of a system must be considered with respect to the complete system, including the hardware, the software, and the operators and users.
II. DEPENDABLE SYSTEMS

Safety is not a feature that can be added to a system after the design is complete: safety must be designed into a system, and dangers must be designed out. We must therefore be able to design safety critical systems that are dependable. Many terms are associated with dependability, and considerable international effort has gone into standardizing them. The accepted definition of the overall concept is:
A. Dependability
Dependability is that property of the system which allows
reliance to be justifiably placed on the service it delivers.
The life of a system is perceived by its users as an alternation
between proper and improper service. Delivery of proper
service is normally termed correctness. Therefore a correct
system is not necessarily a dependable system. Dependability
has other measures such as safety, reliability and availability.
1) Safety: Safety is a measure of the continuous delivery of service free from occurrences of catastrophic failures.
2) Reliability: Reliability is a measure of the continuous delivery of proper service (where service is delivered according to specified conditions), or equivalently of the time to failure.
3) Availability: Availability is a measure of proper service
with respect to the alternation of proper and improper service.
III. FORMAL METHODS

The key notion of dependability is that reliance must be justifiable. This means that we need testable requirements and specifications, which are refined into a system using rigorous or formal development techniques, as well as credible analytical and experimental evidence demonstrating that the system satisfies the requirements. Formal methods are used to address these concerns.
Formal methods have been a topic of research for many years, yet most software is still produced by a chaotic process, owing to a lack of skilled manpower and, mainly, to the cost involved in employing formal methods.
Safety critical systems need to follow formal methods rigorously in order to meet their specifications and prevent accidents.
A. Industrial scale examples of use
1) Aviation: An early example of the application of formal methods to real-life systems was the SIFT project, which probably represents the most substantial US experiment in the safety critical sector. SIFT was an aircraft control computer commissioned by NASA. The safety requirements proposed by NASA and the FAA for this purpose were extremely stringent, and formal methods were therefore used in the SIFT project to meet them.
2) Railway Systems: In 1988, GEC Alsthom, MATRA Transport and RATP started working on a computerized signaling system to control RER commuter trains in Paris, with the objective of increasing traffic by 25% while maintaining the safety levels of the conventional system. The objective was achieved in 1989, and researchers believe that the system is safer as a result of the formal methods and verification exercise.
3) Medical Systems: A number of medical instruments have life critical functionality and require a high degree of dependability, which in turn requires the use of formal methods. In addition, the stringent requirements suggested by the AAMI for medical equipment, and the strict tests a medical device must pass in order to be certified fit for use, require that formal methods be followed in its development.
Formal methods are a necessity in safety critical systems because such systems have to follow strict standards for safety; these standards are described in the next section.
IV. INTEGRITY LEVELS AND STANDARDS FOR SAFETY CRITICAL SYSTEMS

The concept of "safety critical" is not absolute; failure of some systems will not impact safety, failure of other systems could occasionally result in minor injuries, and failure of some systems could lead to disasters. The level of safety integrity required varies from none through to a very high level of integrity. Standards for safety critical software have now standardized on a scale of five discrete levels of safety integrity, with an integrity level of 4 being "very high", down to a level of 0 for a system which is not safety related. The term "safety related" is used to collectively refer to integrity levels 1 to 4. Further analysis will assign an integrity level to each component of a system, including the software.
TABLE I
RELEVANT STANDARDS

ISO9001/EN29001/BS5750 part 1, "Quality Systems: Model for Quality Assurance in Design/Development, Production, Installation and Servicing."
    The recommended quality system standard for software with a safety integrity level of 0, and an essential prerequisite for higher integrity levels.

IEC1508, "Functional Safety: Safety Related Systems."
    A general standard, which sets the scene for most other safety related software standards.

EN50128, "Railway Applications: Software for Railway Control & Protection Systems."
    A standard for the railway industry.

IEC880, "Software for Computers in the Safety Systems of Nuclear Power Stations."
    A standard for the nuclear industry.

RTCA/DO178B, "Software Considerations in Airborne Systems and Equipment Certification."
    A standard for avionics and airborne systems.

MISRA, "Development Guidelines for Vehicle Based Software."
    Issued by the Motor Industry Software Reliability Association for automotive software.

Defence Standard 00-56, "Safety Management Considerations for Defence Systems Containing Programmable Electronics."
    A standard for the defence industry.

Defence Standard 00-55, "The Procurement of Safety Critical Software in Defence Equipment."
    A detailed software standard for safety critical defence equipment.

Differing constraints are placed on the methods and techniques used through each stage of the development lifecycle, depending on the required level of safety integrity. For example, formal mathematical methods are "highly recommended" by most standards at integrity level 4, but are not required at integrity level 0 or 1. The required integrity level can consequently have a major impact on development costs, making it important not to assign an unnecessarily high integrity level to a system or any component of a system.
This is not just limited to deliverable software. The integrity of
software development tools, test software and other support
software may also have an impact on safety.

V. SPECIFICATION AND DESIGN OF SAFETY CRITICAL SYSTEMS

The rule of thumb when designing a safety critical system is to keep it as simple as possible, because simpler systems are easier to debug and are more reliable. When designing safety critical systems with high integrity levels it is advisable not to use interrupts and to have the system perform a single task, but this is not always possible.
When designing a safety critical system we can either attempt to create a perfect system, which cannot go wrong because there are no faults in it, or aim for a perfect system while accepting that mistakes will creep in, and include error detection and recovery methods to prevent the errors that occur from resulting in a safety hazard.
The first of these approaches can work well for small systems,
which are sufficiently compact for formal mathematical
methods to be used in the specification and design, and for
formal mathematical proof of design correctness to be
established. However, formal mathematical specification of larger and more complex systems is difficult, with human error in the specification or proof becoming a significant problem. Hence, for such systems, we adopt the second policy of using error detection and recovery methods.
Software at higher integrity levels is more expensive to
develop. A cost conscious designer may consequently try to
isolate functionality requiring a high integrity level from other
less important functions. Such an approach is essential if it is intended to use off-the-shelf software, such as a database manager, because the vast majority of off-the-shelf software products can only be attributed an integrity level of 0. A
designer can only have different integrity levels for different
parts of a system or its software if it can be proven that lower
integrity parts cannot violate the integrity of higher integrity
parts or the overall system. Consequently, split levels of
integrity are usually only viable when used between
processors in a system, and not within the software executing
on a single processor.
Whatever philosophy for the specification and design of safety
related software a designer adopts, basic quality management
principles apply, as required by the ISO9001 standard. These
include defined procedures for each stage of the lifecycle,
records and documentation, configuration management and
quality assurance.
VI. VERIFICATION OF SAFETY CRITICAL SYSTEMS

Verification is the most important and most expensive group of activities in the development of safety critical systems, with verification activities being associated with each stage of the development lifecycle. The actual activities are basically the same as a conscientious developer would apply to any other software development, but with a more formal and rigorous approach. An added complication is that independent verification is usually required. The means by which this is achieved depends upon the integrity level and the user or certification body. Independent verification can vary from independent witnessing of tests, participation at reviews and audit of the developer's verification, to fully independent execution of all verification activities. The degree of independence can vary from a separate team within the overall project structure, to a verification team supplied by a completely independent company, who may never even meet the development team.
While testing a safety critical system at any stage, care must be taken to ensure that faults in the verification tools do not give an incorrect "pass" result to any verification activity when the result should have been a "fail". Some verification tools have been developed with this requirement in mind, using safety critical standards during the development of the tool itself. To meet this requirement, proprietary application-specific safety verification tools are also sometimes developed by a third party.
An ISO9001-accredited quality management system is taken as the baseline for developing safety critical systems; reviews become more formal, including techniques such as detailed walkthroughs of even the lowest level of design. The scope of reviews is extended to include safety criteria. If formal mathematical methods have been used during specification and design, formal mathematical proof becomes a verification activity.
Static analysis is the analysis of program source code before it is executed. If static analysis is conducted at all for non-safety-critical software, it is usually limited to checking coding standards and gathering metrics. For safety critical software, more complex static analysis techniques, such as control flow analysis, data flow analysis, and checking the compliance of source code with a formal mathematical specification, can be applied.
Dynamic testing is the mainstay of verification, extending from the testing of individual units of code in isolation from the rest of the software, through various levels of integration, to system testing. For safety critical software, dynamic test results are only valid in the target environment. One can, however, develop dynamic tests in the convenience of a host environment, and then repeat the fully developed tests in the target environment.
A. Static Analysis
Static analysis is the term used for any method that analyzes
the properties of a program without executing it.
Source code inspection by humans is also a form of static analysis, but we will concentrate on automatic methods. Examples of errors static analysis can detect are variables that are read before being initialized, out-of-bounds access to arrays, integer overflow, division by zero, and dereferencing pointers that contain no data. Additionally, static analysis may detect anomalies such as variables not being used, ineffective assignments, and unreachable code.
The objectives of static analyzers are:
- To fully handle a general purpose programming language (such as Ada or C, including the standard libraries);
- To require no user-provided specification/annotation (except maybe light ones such as, e.g., ranges of input variables or stubs for library functions whose source is not available);
- To be precise enough to provide interesting information for most programs (e.g. information pertinent to static program manipulation or debugging);
- To be efficient enough to handle very large programs (from a cost of a few megabytes and minutes to gigabytes and hours when dealing with hundreds of thousands of source code lines).

The above objectives are very general, and analyzers designed to meet them are called general purpose static analyzers. Even for a simple set of specifications, such analyzers take a long time to execute; moreover, they can raise false alarms due to the inherent approximations wired into the analyzer. These drawbacks lead us to the idea of special purpose static program analyzers. Their objectives are:
- To handle a restricted family of programs (usually not using the full complexity of modern programming languages);
- To handle a restricted class of general-purpose specifications (without user intervention except, maybe, light ones such as, e.g., ranges of input or volatile variables);
- To be precise enough to eliminate all false alarms (maybe through a redesign or, better, a convenient parameterization of the analyzer by a trained end user who does not need to be a static analysis or Abstract Interpretation specialist);
- To be efficient enough to handle very large programs (a cost of a few megabytes and minutes for hundreds of thousands of source code lines).
B. Dynamic Testing
Dynamic testing involves execution of the code in a testing
environment; symbolic execution is a popular method for
dynamic testing.
The main idea behind symbolic execution is to use symbols instead of numbers as input values, and to represent the values of program variables with symbolic expressions. As a result, the output values computed by a program are expressed as functions of the program input symbols. To describe the state of a symbolically executed program, the instruction counter and the set of pairs <variable, value> that usually define the state of a numeric execution are not sufficient. As an example, let us consider the program shown in Figure 1, in which each statement is labeled for the sake of clarity.
L1: int example(int data) {
L2:   int out;
L3:   if (data > 0)
L4:     out = data;
L5:   else
L6:     out = -data;
L7:   return(out);
L8: }

Figure 1: The sample program Ex1.

Let the symbol δ represent the value of the input parameter data. When executing the conditional statement, the value of data does not allow one to choose along which branch the computation has to continue. Hence, one has to assume either δ > 0, to execute the then branch, or δ ≤ 0, to execute the else branch. Such assumptions are represented by means of a first order predicate referred to as the path condition (PC). Thus, the state of a symbolic execution is represented by a triple <IP, PC, V>, where
- IP is the instruction pointer, referring to the statement to execute next;
- PC is the path condition;
- V is the set of pairs <variable, expression>.
Symbolic execution is a very sophisticated process and requires considerable expertise; for detailed theory on symbolic execution, refer to [4].
Symbolic execution is not popular in industry for the following reasons:
- The inadequacy of existing symbolic executors in dealing with dynamic data structures, loops, etc.;
- The necessity of coupling symbolic executors with expression simplifiers and/or theorem provers, which, in general, are difficult to use and require ad hoc skills;
- The (alleged) low cost effectiveness of introducing any formal technique (and symbolic execution in particular) into a real life software development process.
Even though the above statements are true, symbolic execution can be used if we restrict ourselves to subsets of the C language that avoid linguistic features like dynamic memory allocation.
C. Static Testing vs. Dynamic Testing
Static testing is verification of the code done before it is compiled and executed. Code analysis by humans is therefore also a type of static analysis, but for large firmware we use automatic methods. Since static testing finds bugs before the code is compiled, at a very early stage of development, it makes the firmware easier and cheaper to debug. Static testing can also find ANSI and other standard violations, which are either impossible or very difficult to locate during dynamic testing. Whether or not our code follows our coding standards can only be checked by static testing.
Dynamic testing, in contrast, can be done only after the code is compiled and linked. It may involve running many test cases, and the overall time required to perform dynamic testing grows rapidly with the size of the firmware; moreover, only the parts of the firmware that are actually executed can be tested. Typically, nearly half of the bugs detected by dynamic testing could have been detected earlier by static analysis.
Static analysis is therefore more effective in terms of time and cost than dynamic testing, but in extremely critical applications both static and dynamic testing are performed to ensure the safety of the system.

VII. CONCLUSIONS
Human wellbeing depends on the correct working of safety critical systems used in many fields, and hence these systems must strictly adhere to the specified standards. Safety critical software must therefore be written and verified using formal methods, which allow its correctness to be proven mathematically.
ACKNOWLEDGEMENTS
This paper was produced in partial fulfillment of the course Embedded Systems Design, and I would like to thank Dr. Anupama K. R., instructor-in-charge of the course, for her help and guidance.
REFERENCES
[1] Jonathan Bowen, Victoria Stavridou, "Safety-Critical Systems, Formal Methods and Standards," Software Engineering Journal.
[2] "An Introduction to Safety Critical Systems," IPL Information Processing, UK.
[3] Bruno Blanchet, Patrick Cousot, Radhia Cousot, Jérôme Feret, Laurent Mauborgne, Antoine Miné, David Monniaux, Xavier Rival, "Design and Implementation of a Special-Purpose Static Program Analyzer for Safety-Critical Real-Time Embedded Software," CNRS & École normale supérieure, 75005 Paris, France, and CNRS & École polytechnique, 91128 Palaiseau cedex, France.
[4] Alberto Coen-Porisini, Giovanni Denaro, Carlo Ghezzi, Mauro Pezzè, "Using Symbolic Execution for Verifying Safety-Critical Systems," Dipartimento di Ingegneria dell'Innovazione, Università di Lecce, via per Monteroni, I-73100 Lecce, Italy.
[5] Sarfraz Khurshid, Corina S. Pasareanu, Willem Visser, "Generalized Symbolic Execution for Model Checking and Testing," MIT Laboratory for Computer Science, Cambridge, MA 02139, USA; Kestrel Technology LLC; RIACS/USRA, NASA Ames Research Center, Moffett Field, CA 94035, USA.
[6] "Static and Dynamic Testing Compared," PR:QA White Paper Series: WP1.1.
[7] Nancy G. Leveson, Stephen S. Cha, "Safety Verification of Ada Programs Using Software Fault Trees," University of California at Irvine.
[8] Robyn R. Lutz, "Analyzing Software Requirements Errors in Safety-Critical, Embedded Systems," Jet Propulsion Laboratory, California Institute of Technology, Pasadena, CA 91109.
