ACRuDA Project
DG VII RTD Programme
RA-96-SC.231
Deliverable D3:
The proposed assessment and certification
methodology for digital architectures
Document.ID: WP2/D3/V2
Status: Proposed
Confidentiality: Public
Distribution: All
Document Abstract:
Deliverable D3 presents the results of the tasks achieved in the ACRuDA work packages. This work consists mainly of the
description of an assessment and certification schema, of assessment and certification procedures, and of assessment
criteria.
J.L. DUFOUR (MATRA T.I., France), Ph. GABRIEL (MATRA T.I., France)
3.11.2010 12:59
WCS_AP v.03 Test report
Project Sponsor
This report reflects work partially funded by the Commission of the European Communities (CEC) under Framework IV,
in the area of the Specific RTD programme, project ACRuDA "Assessment and Certification Rules for Digital
Architectures".
CONTENT
Foreword for the HTML version of this document
1. INTRODUCTION
1.1. Objectives
1.2. Structure of the document
2. BACKGROUND
2.1. Introduction
2.2. The certification procedure
2.2.1. Licensing
2.2.2. Certification
2.2.3. The certificate
2.2.4. The certification body
2.2.5. Conclusion
3. MAIN CONCEPTS
3.1. Product, Safety Requirements Specification.
3.2. Assessment process
3.3. Certification process
4. PRODUCT DEVELOPMENT PROCESS
4.1. Introduction
4.2. Life cycle of a product
4.2.1. Presentation
4.2.2. Paper study
4.2.3. Model
4.2.4. Prototype
4.2.5. Pre Production
4.2.6. Production
4.3. Development Life Cycle and documentation
4.3.1. Development Life Cycle
4.4. Safety life cycle and documentation
4.4.1. Safety life cycle
4.4.2. Safety Plan
4.4.3. Safety case
4.4.3.1. Introduction
4.4.3.2. General
4.4.3.3. Purpose of the safety case
4.4.3.4. Content of the product safety case
4.5. Quality Assurance provisions
4.5.1. Relationship of Quality Assurance to other Plans
5. ASSESSMENT AND CERTIFICATION PROCESS
5.1. Introduction
6.6.1. Contents
6.6.2. Basic criteria
6.7. Validation and off line testing
6.7.1. Contents
6.7.2. Basic criteria
6.8. Fault and failure analyses
6.8.1. Contents
6.8.2. Basic criteria
6.9. Operation, Maintenance and Support
6.9.1. Contents
6.9.2. Basic criteria
6.10. Software Assessment Criteria
6.10.1. Software integrity level
6.10.2. Life cycle issues and documentation
6.11. Hardware Assessment Criteria
6.11.1. Life cycle issues and documentation
7. TERMINOLOGY
7.1. Introduction
7.2. Terminology, Definitions and Abbreviations
8. REFERENCES
8.1. European Council Directives
8.2. European Technical Specifications
8.3. Standards
8.4. ACRuDA Project
8.5. CASCADE Project
8.6. ERTMS project
8.7. Information Technology domain
8.8. Others
9. ANNEX I: STRUCTURE OF A SAFETY PLAN
10. ANNEX II: STRUCTURE OF A PRODUCT SAFETY CASE
11. ANNEX III: STRUCTURE OF AN ASSESSMENT PLAN
12. ANNEX IV: STRUCTURE OF A TECHNICAL ASSESSMENT REPORT
13. ANNEX V: STRUCTURE OF THE CERTIFICATION REPORT
14. ANNEX VI: STRUCTURE OF A CERTIFICATE
List of FIGURES
Figure 1: Product and Safety requirements specification
Figure 2: Assessment process
Figure 3: Certification process
Figure 4: Development life cycle
Figure 5: Relationships between categories and aspects
Figure 6: Equipment/Generic Product life cycle
Figure 7: Safety life cycle
Figure 8: Structure of Plans
Figure 9: Assessment and certification schema
Figure 10: Assessment inputs
Apart from that, only four typing errors in the original text have been corrected:
1. Chapter 4.5 contained a sub-chapter that was erroneously numbered 4.5.1.3 and not mentioned in the table of contents. In this text,
the number has been corrected to 4.5.1 and the corresponding reference added to the table of contents.
2. The identification of the document referenced in [ACR02] in chapter 8.4 was erroneously typed as "24 February 97. Reference:
ACRuDA/INRETS/MK-PM/WP1/D1/97.13/V2" (identical to the identification of [ACR01]). It has been corrected to "29
September 97. Reference: ACRuDA/INRETS/PM-MK/WP1/D2/97.39/V3".
3. Annex II, Chapter 3, section "Safety Plan", erroneously referred to chapter 3.4.2 of this document. The reference has been
corrected to 4.4.2.
4. Annex III, Chapter 5, erroneously referred to chapter 3.4.3 of this document. The reference has been corrected to 4.4.3.
No other corrections have been made. Any other differences between the wording of the original document and this text
are unintentional and have gone unnoticed. Please inform me if you discover any.
Note: If you print this text, some of the images may not appear completely on the printed page. To get a complete print-out of such an
image, right-click it and store it on your hard disk. (This works with both Netscape and Explorer). Then print the image separately (e.g.
by viewing the file with your browser and then printing it!). You may have to adjust the margin settings in your page setup.
1. Introduction
The ACRuDA project aims to develop a methodology for the safety assessment of safety-critical digital architectures.
This methodology has to comply with the different requirements of the end-users and the suppliers. In priority
order:
it has to be the minimum activity of the assessor needed to gain confidence in the safety of the architecture,
it has to use the best practices in assessment so that the end-users maintain or gain confidence in the
automation supported by safety-critical digital architectures,
it has to be unambiguous so that it cannot be misinterpreted by different assessors; this contributes
to the harmonisation of the European market,
it has to be cost-effective so that the effort is well proportioned between the different assessment
activities and does not create gaps or deviations.
The minimum activity of the assessor depends on the complexity of the architecture and on the development
process. In all cases, the activity consists of more than just conformity checking. The assessor has to perform
safety studies and expert analyses to complete the assessment with an evaluation of the effectiveness of the
protections, principles and specific mechanisms developed for the architecture, and to gain confidence in its
safety under the proposed conditions of use.
The best practices in assessment in the European countries have been studied in [ACR02]. The ACRuDA project has
used results from the CASCADE project, European standards ([CEN01], [CEN02] and [CEN03]), Directives
([DIN01], [DIN02] and [DIN03]) and the experience of the different ACRuDA partners. This has been
formalised as high-level assessment criteria. The set of criteria obtained is a basic set: it cannot be applied
in this form, and the assessors must refine it into a set of detailed criteria that can be applied to assess a
digital architecture. This chapter has been updated once, after the results of the ACRuDA case studies, and it
must be updated regularly as practice evolves in Europe.
The principal aim of this document is to define the framework for the assessment method. The objective is to
ensure that the safety digital architecture meets its safety requirements according to relevant standards and best
practice, as well as any specific safety requirements contained in contractual or technical specifications for
the equipment. The certification of safety-critical products or systems can involve many different
bodies, for example the sponsor, the supplier, the assessors and the notified body. It is therefore essential to
establish a harmonised, mutually agreed approach to the assessment of products which takes into account the
needs of each partner.
1.1. Objectives
This document provides information on the way a process or product is to be assessed by a third party. The
foundation of the assessors' work is standards, which are "codes of practice". The philosophy of assessment
of safety digital architectures in the railway sector is based on both the product and the process. This document is
principally aimed at assessors who need to perform an assessment of a safety-critical digital architecture
based on the high-level criteria. Each criterion defines a high-level requirement that the item under assessment
must fulfil. This is the top-level document to be used during an assessment, and from it the assessors will
develop the detailed criteria necessary to assess the specific architecture.
2. Background
2.1. Introduction
Articles 129b to 129d of the amended EC Treaty established the intention of introducing trans-European
networks in the areas of transport, telecommunication and energy infrastructures. The need for inter-operability
and technical standardisation is stated.
The European Commission has expressed the urgent need for an effective, integrated transport system providing
a high degree of inter-operability between rail, air, road and water transport systems. This is referred to as
cross-modal transport. To achieve this, there must be efficient cross-border and cross-modal operations between
all transport systems. The European Commission has recognised the need for an effective railway as part of this
European transport system.
Furthermore, the European Commission is now, on behalf of the European Council, developing the necessary
legislation, mainly in the form of Council Directives. This legislation will become part of Member State
legislation.
For the railway, the following directives, standards, and technical specifications must be considered:
[DIN03] refers to the concept of an approval body which certifies that the product or system satisfies the
essential requirements making it fit for use. Safety is the first of the essential requirements that the Council
describes in Annex III of [DIN01].
2.2.1. Licensing
[DIN01] and [DIN03] are examples of international legislation that member states are required to implement into
their own national legislation. This will result in all member states having similar laws defining
responsibilities and authorities with respect to various kinds of public transport.
In turn, regulations will be issued to implement such laws, and these will define who is authorised to grant a
license (the licensing authority) to operators of transport systems or parts thereof. This licensing authority
may be a department within the government, or an external organisation.
The licensing authority will set up the rules to be applied in granting a license. The key requirement is
that it must be demonstrated that a given system or product is safe to use in its intended application.
As many countries have already privatised transportation, the owner of a public transportation system may not
necessarily be the operator of that system. For the purposes of licensing, this reflects mainly an issue of liability
and does not affect the actual licensing procedure.
The owner/operator will order anything from individual constituents all the way up to a complete transportation
system from one or more suppliers, who themselves will subcontract out to sub-suppliers etc.
In order to get permission to use the system he has ordered, the owner/operator must convince the licensing
authority that it fulfils the requirements that the licensing authority has defined. This evidence must be provided
by an unbiased, independent body: the assessor. The assessor must be accepted by the licensing authority.
It is important here to remember that the owner/operator does not define the requirements: that is done by the
licensing authority. The owner/operator will, however, identify which requirements he wants an assessment for,
depending on the product or system being assessed and the intended application.
2.2.2. Certification
To avoid repeating the assessment process each time an existing product or system is deployed, it is
desirable to perform some kind of generic assessment and to document the results in a form that is acceptable to
all licensing authorities. This is the fundamental concept of certification!
Thus certification requires an unbiased, qualified assessment of the generic properties of a system or product.
The term generic is significant: the actual properties of a given object depend on the way it is deployed.
Therefore, the assessment can only evaluate those properties that are common to all reasonably expectable
environments and deployments. However, for complex systems, embedded components can certainly be
certified for the context of the system they are embedded in.
This means that individual products can be certified for use in specific assemblies, which can be certified for use
in specific subsystems that are certified for use in specific systems. For software, this is equivalent to certifying
specific modules for use in specific programmes in specific programme systems.
[DIN01] defines in Article 2 the notified bodies as "the bodies which are responsible for assessing the ...
suitability for use ... or for appraising ... verification ..." and in annex VI refers to the "certificate from the
notified body ...". In other words, assessment and certification are to be performed by the notified bodies.
The liability of the notified body is not clearly defined in the directive, but it is stated that the notified
body must take out civil liability insurance, unless this liability is covered by the State on the basis of
national law, or unless the controls are carried out directly by the member state.
From the above it is clear that the validity of a certification is heavily dependent on the context of the object
being certified. It is therefore of paramount importance that the certificate clearly indicates the limits and
conditions of validity.
The details of the assessment performed are to be contained in a certification report. This report must be
explicitly identified on the certificate as an integral part of it. It must clearly state the conditions and
limitations of the assessment and, in particular, identify the requirements against which the assessment has been
performed.
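The certificate structure implied above can be sketched as a simple record: the limits and conditions of validity and the reference to the certification report are mandatory, and the requirements assessed are listed explicitly. This is a minimal illustrative sketch; the field names are assumptions, not taken from any annex of this document.

```python
from dataclasses import dataclass, field

@dataclass
class Certificate:
    """Minimal sketch of a certificate record (field names are illustrative)."""
    product_id: str
    conditions_of_validity: list        # limits and conditions of validity (mandatory)
    certification_report_ref: str       # the report is an integral part of the certificate
    requirements_assessed: list = field(default_factory=list)

# Example: a certificate is only meaningful together with its report reference
# and its stated conditions of validity.
cert = Certificate(
    product_id="ARCH-1",
    conditions_of_validity=["generic railway use only", "SIL 4 context"],
    certification_report_ref="CR-001",
)
```

The design point is that a certificate without its report reference or validity conditions cannot be constructed, mirroring the requirement that these elements always accompany the certificate.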
Annex VI of the above mentioned Council directive [DIN01] defines the "Contents of the EC declaration" as
being:
As in the case of assessments performed for the purposes of obtaining a license from the licensing authority,
certification must be performed by a person or body that is recognised and accepted by the licensing authority.
In other words, the certification body must be certified!
It was pointed out earlier that assessment and certification are to be performed by the "notified body". Article
20 of Council directive [DIN01] states:
"2. Member States shall apply the criteria provided for in Annex VII for the assessment of the bodies to be
notified. Bodies meeting the assessment criteria provided for in the relevant European standards shall be
deemed to meet the said criteria."
where Annex VII identifies the minimum criteria which must be taken into account by the member states when
notifying bodies.
These minimum criteria are very generic and refer to the independence and impartiality of the notified body's
staff (they must not be involved in the design, development, manufacture, construction, marketing, maintenance or
operation of the products and systems they assess). The technical qualification of the assessors must be
"adequate". Thus, the licensing authority must define the detailed criteria that a certification body shall fulfil,
just as it defines the criteria for certification. By harmonising the criteria for certification bodies and
certification across borders, we will achieve a situation where certification bodies and certificates throughout
Europe are recognised by all licensing authorities, as indicated in Article 20 (5) of Council directive [DIN01].
2.2.5. Conclusion
The three directives [DIN01], [DIN02] and [DIN03] are the basis for the definition of a new European model for
assessment and certification in the railway field, but they give neither the detailed procedures for the
assessment/certification of a product or system nor the detailed criteria for its assessment.
Safety digital architectures can be parts of sub-systems. One objective of ACRuDA is the definition of an
assessment/certification procedure and assessment criteria for safety digital architectures.
3. MAIN CONCEPTS
3.1. Product, Safety Requirements Specification.
The [CEN03] standard (chapter 6.5.1, page 45) considers three different categories of programmable electronic
systems:
generic product (independent of application): a generic product can be re-used for different independent
applications,
generic application (for a class of applications): a generic application can be re-used for different
applications of the same class/type with common functions,
specific application (for a specific application): a specific application is used for only one particular
installation.
The deliverable [ACR01] of the ACRuDA project gives a definition of a safety digital architecture, and describes
the differences between a basic architecture and an application architecture. ACRuDA deals with the basic
architecture. The basic architecture includes hardware and software, is railway-generic and can be used in
different railway applications.
The definition of ACRuDA's basic architecture is similar to the definition of a generic product in [CEN03]. In
this document, the term "product" is equivalent to "generic product".
[CEN03] also gives a definition of a product: "a collection of elements, interconnected to form a system,
sub-system, or item of equipment, in a manner which meets the specified requirements".
A product will be included in different systems, sub-systems or items of equipment. The supplier of the product
can only make general and theoretical hypotheses about the operational environment. It is necessary for the end
users and suppliers of those systems, sub-systems or equipment to verify that the hypotheses about the
environment of the products used are consistent with the real environment.
A product can be composed of several different components. From the safety point of view, some of the
components do not influence safety, while others contribute to it; the latter are called safety-critical
components.
Before the beginning of the assessment of a product, it is necessary to give two precise descriptions:
The basic definition of the safety requirements specification is taken from [CEN03] (see Annex A, sub-chapter
A.2, page 52). This definition has been refined by the addition of new items.
For the purpose of assessing a product, the safety requirements specification should contain:
the safety functional requirements, i.e. the safety functions that the product is required to carry out (from
[CEN03]),
the safety requirements: a safety integrity level (SIL) and/or a numerical safety target derived from a higher
level. A digital architecture must be considered within the context of the overall railway system. This safety
requirement comes from an allocation of safety given at a higher level (in general the system level) and
apportioned to the different sub-levels (the sub-system and equipment levels, and finally the architecture
level, which is a sub-level of the equipment level) (from [CEN03], but modified for the ACRuDA project),
the applicable standards and rules (new item, not defined in [CEN03]),
the type of application considered (Interlocking, ATP, ATC, etc.) (new item, not defined in [CEN03]),
a description of the environment of the product (new item, not defined in [CEN03]).
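The five items listed above can be sketched as a simple data structure, which makes the required contents of a safety requirements specification explicit. This is an illustrative sketch, not a normative schema; the field names and types are assumptions.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class SafetyRequirementsSpec:
    """Sketch of the SRS contents listed in the text (names are illustrative)."""
    safety_functions: list                       # safety functional requirements ([CEN03])
    sil: int                                     # safety integrity level, apportioned from system level
    numerical_target: Optional[float] = None     # optional numerical safety target from a higher level
    applicable_standards: list = field(default_factory=list)  # new item, not in [CEN03]
    application_type: str = ""                   # e.g. "Interlocking", "ATP", "ATC"
    environment_description: str = ""            # description of the product environment

# Example: a minimal SRS for a SIL 4 generic product.
srs = SafetyRequirementsSpec(safety_functions=["enforce safe braking"], sil=4)
```

Making the SIL and safety functions mandatory while leaving the other items defaulted mirrors the text: the first two items come from [CEN03], the last three are ACRuDA additions.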
Figure 1 shows the process to obtain the definition of the product and the safety requirement specification:
[CEN01], [CEN02] and [CEN03] define safety integrity and safety integrity levels. Safety integrity is the
probability of a safety-critical system satisfactorily performing the required safety functions under all the
stated conditions within a stated period of time. The safety integrity level (SIL) of a safety requirements
specification is one of four possible discrete levels for specifying the safety integrity requirements of the
safety functions to be allocated to the safety-related products/systems. Safety integrity level 4 is the highest
and safety integrity level 0 the lowest. The ACRuDA project only considered SIL 4 products.
In this document, processes, procedures and criteria are built for SIL 4. For SILs lower than 4, the requirements
will be lower, but the basic procedures will still be applicable.
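The relationship between a numerical safety target and a SIL can be sketched as follows. The document defines only the ordering (SIL 4 highest, SIL 0 lowest); the tolerable-hazard-rate bands below are illustrative assumptions, not values taken from this document, and in practice must come from the applicable standard.

```python
# Assumed THR bands (per hour) per SIL -- illustrative only, not normative.
ASSUMED_THR_BANDS = {
    4: (1e-9, 1e-8),
    3: (1e-8, 1e-7),
    2: (1e-7, 1e-6),
    1: (1e-6, 1e-5),
}

def sil_for_thr(thr_per_hour: float) -> int:
    """Return the (assumed) SIL whose band contains the given tolerable hazard rate."""
    if thr_per_hour < 1e-9:
        return 4  # assumption: more demanding targets still require at least SIL 4 processes
    for sil, (lo, hi) in sorted(ASSUMED_THR_BANDS.items(), reverse=True):
        if lo <= thr_per_hour < hi:
            return sil
    return 0  # less demanding than any band: no quantified integrity requirement
```

For example, a target of 5e-9 per hour falls in the assumed SIL 4 band, which is the only case the ACRuDA criteria address directly.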
The assessment process is described in Figure 2. Confidence in safety is obtained by examination of the
product, by examination of all its representations and by understanding its development process. An
assessment is composed of a preliminary analysis of the product, observations, theoretical studies, and
experimentation. The assessment of the product is based on the ACRuDA safety case (see also chapter 4.4.3).
The main organisations involved in the assessment process are the sponsor, the supplier, the notified body and
the assessor(s). The supplier builds and sells the product and is also responsible for establishing the safety
case of the product. The sponsor requests and finances the assessment of the product; in most cases, the end
user of the product or its supplier is the sponsor. The notified body assesses the product, with internal and/or
external assessors, on the basis of the safety case. At the end of the assessment, the notified body certifies
the product (or not) on the basis of the assessment results. In this document the term "assessors" covers both
internal and external assessors.
There are two main concepts in the assessment: the assessment of conformity and the assessment of
effectiveness.
The assessment of conformity is the assessment of the implementation of a product: it assesses the
degree to which a given real product corresponds to its description.
The assessment of effectiveness is the assessment of the safety functions, mechanisms and measures
chosen to satisfy the safety requirements specification of a product. It focuses on the pertinence of the
functions, mechanisms and measures; their cohesion (whether all the functions, mechanisms and measures operate
well together); the consequences of risks of hazardous failure (from the construction and operational points of
view); the ease of use (installation, adaptation to the real environment, maintenance, etc.); the safety plan
(tools, methods, organisation of the supplier); and the search for remaining hazardous scenarios.
The apportionment of work between the conformity and effectiveness assessments of a product depends upon the
definition and description of the safety requirements specification and the product. If the safety requirements
specification and the product are well described, defined and detailed, then the main part of the assessment
will involve conformity and a smaller part will involve effectiveness.
The assessors need well-defined criteria and procedures to assess a product. The safety of products is
achieved, for the main part, by procedural measures such as organisation controls, staff controls, staff
training, and so on; but technical controls are also necessary. The assessment criteria must therefore be both
procedure-based and product-based.
These criteria will cover the aspects of effectiveness and conformity. The assessment criteria must list the
assessment inputs necessary to carry out the assessment. The supplier is responsible for the delivery of the
assessment inputs and must verify that all the assessment inputs given to the assessors satisfy the requirements
on content and structure, and that all the assessment inputs provide, or help to establish, the proof of
safety.
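The supplier-side verification described above can be sketched as a simple checklist pass over the assessment inputs before delivery. The field names (`content_ok`, `structure_ok`) are illustrative assumptions; the real requirements on content and structure come from the assessment criteria themselves.

```python
def undelivered_or_incomplete(inputs: list) -> list:
    """Return the names of assessment inputs failing the basic content/structure checks.

    Each input is a dict with illustrative keys: "name", "content_ok", "structure_ok".
    """
    return [
        item["name"]
        for item in inputs
        if not (item.get("content_ok") and item.get("structure_ok"))
    ]

# Example: one input is structurally incomplete and must be reworked
# before delivery to the assessors.
items = [
    {"name": "Safety Case", "content_ok": True, "structure_ok": True},
    {"name": "Test Report", "content_ok": True, "structure_ok": False},
]
```

The point of the sketch is that the check happens on the supplier's side, before the assessors see the inputs, as the text requires.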
The ideal situation is to begin the assessment at the same time as the development of the product. It is necessary
for the assessors to have a good understanding of the product, but the assessors must stay independent and must
not influence the development of the product (sources [ITS01], [ITS02]).
It is the responsibility of the supplier to prove the safety of its product, with all proofs of safety contained
in the safety case. The role of the assessors is to assess the methods used by the manufacturer for testing and
safety analyses, and the results of those tests and safety analyses. When necessary, the assessors can carry out
further independent tests and safety analyses to verify the supplier's results or to complete a proof element,
demonstration or safety analysis. It is recommended that tests and safety analyses are performed using methods
Besides the supplier and the assessor, further bodies are involved in the certification process: the European
Union (EU), the authority of a member state of the EU, and the accreditation body. The European Union is the body
that accepts or refuses the notified bodies appointed by the authority of a member state. The European Union
is also in charge of defining a common European policy for assessment and certification. The accreditation
body is responsible for issuing accreditation certificates ([EN01] to [EN07] standards) to the assessors and to
the notified bodies. This accreditation is not explicitly required in the EU directives [DIN01], [DIN02] and
[DIN03]; becoming a notified body could be regarded as an accreditation in its own right.
A product is developed during a development process and is assessed against well-defined criteria during an
assessment process. A certification presumes the validity of the assessment and confirms that an assessment
was performed. The next stage is the integration of the product into a system, or the installation of the product
in its real environment: this is the approval stage. The approval process is the means to confirm that the use of
a product in a particular environment for a particular goal is acceptable. In safe operational exploitation, a
product is used according to specified procedures. Changes made to the environment of the product can involve
modifications of the product, which can influence the development process (sources [ITS01], [ITS02]). A new
assessment will be necessary if changes are made to the product (it can be only a simplified "delta
assessment"). The definition of approval procedures and operational exploitation procedures is outside the
scope of the ACRuDA project.
Certification is based on the results of the assessment. It is a formal declaration that confirms the results of the
assessment and the fact that the assessment criteria were correctly applied and satisfied. All the bodies involved
in the certification and assessment process must be recognised and competent for their role in the certification
and assessment process. The certificate is issued by the notified body (sources [ITS01], [ITS02]).
The certification process requires impartiality and independence of the assessment. The liability of the
different bodies involved in the assessment and certification process is not clearly defined and will depend on
national rules and laws.
The assessment criteria will be most effective if all countries have a common, harmonised certification
process and apply the same set of criteria. The notified body in each country is responsible for the application
of the criteria. It should be noted that these criteria are not defined only for assessors and notified bodies:
they are also useful to suppliers, and it would be very inefficient for a supplier to develop and build a product
without considering the assessment criteria (doing so can lead to an unsuccessful assessment).
Summary of the main concepts developed in this chapter, which form the basis of the document:
Some development life cycles are proposed in the European railway standards [CEN01], [CEN02] and [CEN03] and
in the [IEC01] standard.
concept
system definition and application condition
risk analysis
system requirements
apportionment of system requirements
design and implementation
manufacture
installation
system validation (including safety acceptance and commissioning)
system acceptance
operation and maintenance
decommissioning and disposal
performance monitoring
modification and retrofit
[CEN02] (page 53) defines a life cycle for the development of software and, to a small extent, a system life cycle:
software maintenance
software planning
[IEC01] defines:
The life cycles proposed in [CEN01] and [CEN03] are essentially for the development of systems and do not
give details on the life cycle of electronic equipment. The life cycle proposed in [CEN02] is essentially for the
development of software, with a few aspects of the development of systems (hardware/software integration).
The life cycle proposed in [IEC01] is specifically focused on safety and does not give details on the different
phases of the development life cycle. The life cycles proposed in chapters 4.3.1 and 4.4.1 are a synthesis of
the life cycles described in all the above-mentioned standards, with some additional contributions from the
ACRuDA project members.
4.2.1. Presentation
A digital architecture can be considered as a product with a life cycle composed of several stages:
paper study
model
prototype
pre-production
production
The definition of the product is refined at each stage as necessary. The life cycle for the product should be
defined as clearly as possible.
Each stage (paper study, model, prototype, pre-production, production, etc.) should be clearly defined. These
stages may vary in duration and generally overlap in practice.
A new architecture may be an upgrade of an existing architecture. Consequently, the life cycle must give
consideration to existing designs.
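The stage sequence above can be sketched as an ordered list with a simple transition rule. The stage names follow the document; the rule itself (forward progress by one stage, plus iteration back to any earlier stage, reflecting the overlap and iteration mentioned above) is an assumption for illustration.

```python
# Ordered product life-cycle stages, as listed in the text.
STAGES = ["paper study", "model", "prototype", "pre-production", "production"]

def is_valid_transition(current: str, nxt: str) -> bool:
    """Allow forward progress by one stage, or iteration back to an earlier stage.

    This transition rule is an illustrative assumption, not part of the document.
    """
    i, j = STAGES.index(current), STAGES.index(nxt)
    return j == i + 1 or j < i
```

For instance, going back from prototype to model (to rework the design after testing) is a valid iteration, while skipping from model straight to production is not.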
4.2.3. Model
The aim of this stage is to define a model. To achieve an accurate model it is important to perform a user
requirements analysis. A preliminary specification and a high-level design must be produced to determine what
functionality must be realised by the model and to demonstrate the feasibility of the product. It is essential to
produce documentation (though not necessarily complete) and to control all changes formally, as there are a
great number of iterations during the design, realisation and test phases to refine the model.
4.2.4. Prototype
In this stage, the specification and design depend not only on the user requirements analysis but also on the
results of the analyses of the model. The prototype takes into account (unlike the model) all the functionality of
the definitive product, but in a provisional form. Testing the prototype in real or simulated environments is a
very important stage in the development of the product. All procedures and results must be formally documented.
4.2.6. Production
In this stage, the specification and design activities are essentially complete, with changes made only to
correct errors or when end users' demands lead to modifications or evolution of the product. Testing is less
rigorous than for the pre-production product and uses sampling, depending on the quantity of the product and
the quality assurance level requirements.
Chapter 4.2 defines the stages of the life of a product. In each of these stages, the supplier should follow a
defined development life cycle.
The development life cycle is normally represented as a "V". The descending branch of the V is a succession of
analysis and design activities; the ascending branch consists of testing activities.
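The V-shape described above pairs each descending-branch phase with the ascending-branch phase that verifies it. The phase names below are generic assumptions for illustration, not taken verbatim from the ACRuDA life cycle figures.

```python
# Each (design phase, testing phase) pair mirrors one "rung" of the V:
# phases on the descending branch are verified by phases on the ascending branch.
V_MODEL_PAIRS = [
    ("requirements specification", "validation testing"),
    ("architecture design",        "integration testing"),
    ("detailed design",            "unit testing"),
]

def verification_phase(design_phase: str) -> str:
    """Return the ascending-branch phase verifying a descending-branch phase."""
    return dict(V_MODEL_PAIRS)[design_phase]
```

The pairing makes explicit why the two branches have matching depth: every analysis or design activity on the way down has a corresponding testing activity on the way up.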
Abbreviations:
- Arch : Architecture
The [CEN03] standard (chapter 6.5.1, page 45) considers three different categories of programmable electronic
systems:
generic product: (independent of application) A generic product can be re-used for different independent
applications,
generic application: (for a class of application) A generic application can be re-used for different
class/type applications with common function,
specific application: (for a specific application) A specific application is used for only one particular
installation.
Figure 5 shows the different categories and the different aspects defined in [CEN03]:
The scope of the ACRuDA project is the generic product. It is necessary to define a precise
development life cycle for the product. A product generally comprises hardware and software, so there are two
life cycles: one for the software and one for the hardware, with interactions between the two (for
example, software may be modified because of problems with the design of the hardware).
An equipment life cycle is shown in Figure 6. The software life cycle is taken from [CEN02].
Abbreviation:
- Hw : Hardware
- Sw : Software
The Safety Plan defines the safety requirements for each phase of the system life cycle and covers the whole life
cycle. The Safety Plan is produced by the supplier.
The Safety Plan is defined as follows in [CEN01]: « a documented set of time scheduled activities, resources and
events serving to implement the organisational structure, responsibilities, procedures, activities, capabilities and
resources that together ensure that an item will satisfy given safety requirements relevant to a given contract or
project ».
In addition, [CEN03] specifies the development of the plan thus: "A Safety Plan shall be drawn up at the start of
the life cycle. This plan shall identify the safety management structure, safety-related activities and approval
milestones throughout the life cycle and shall include the requirements for review of the Safety Plan at
appropriate intervals. The Safety Plan shall be updated and reviewed if subsequent alterations or additions are
made to the original system/sub-system/equipment. If any such change is made, the effect on safety shall be
assessed, starting at the appropriate point in the life cycle." The Safety Plan forms part of the evidence
required to demonstrate safety management.
The Safety Plan identifies the safety management structure, safety-related activities and approval milestones
throughout the life cycle, and includes the requirements for its own review at appropriate intervals.
The Safety Plan shall define all management and technical activities during the whole safety life cycle which are
necessary to ensure that the safety-related products and external risk reduction facilities achieve and maintain
the required functional safety. A Safety Plan for a product, such as a Digital Architecture, is not mentioned
explicitly in the relevant standards, but a Product Safety Plan can be derived from the System Safety Plan
provided by the standards.
The Safety Plan also outlines the methods and techniques to be used to develop, validate and verify the
digital architecture against the safety requirements. ANNEX VIII summarises the techniques and tools
prescribed by relevant standards and proposed in various current practices.
The Safety Plan shall be implemented and functional safety internal audits initiated as required. All those
involved in implementing the Safety Plan shall be informed of responsibilities assigned to them under the plan.
4.4.3.1. Introduction
This chapter describes the issues to be considered in developing the Safety Case for a product, and addresses the
basic structure for the Safety Case.
Safety is defined as freedom from unacceptable risk of harm, where risk is defined as the probable rate of
occurrence of a hazard causing harm multiplied by the degree of severity of the harm. In general, the aim of a
Safety Case is to provide the evidence to demonstrate that, in all aspects of specified operation, the risk of
harm is reduced to the lowest practicable level. This is achieved by demonstrating that:
The supplier is ideally placed to develop the Safety Case and in practice it is generally his responsibility.
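The risk definition above (probable rate of occurrence of a hazard multiplied by the severity of the resulting harm) can be illustrated with a small numeric sketch. The function name and all numeric values below are invented for illustration; the document does not prescribe any particular units or scale.

```python
# Worked illustration of the risk definition above: risk is the probable
# rate of occurrence of a hazard causing harm multiplied by the degree of
# severity of that harm. Values and units are invented for illustration.
def risk(rate_per_year: float, severity: float) -> float:
    return rate_per_year * severity

# Example: a hazard occurring 1e-4 times per year with a severity weight of 10
print(risk(1e-4, 10))
```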
4.4.3.2. General
There are fundamental safety requirements which apply to all safety critical products. These are described
below.
Any credible fault within any part of a safety critical product can be a potential source of a hazard. In
developing a product, the bounds of its operation must be defined in terms of the application(s) in which
the supplier envisages the product will be utilised. While the supplier may not know the exact nature of the
hazards relating to any particular application, he should use his expertise of existing applications and his
railway experience to identify a set of hazards common to the applications envisaged. The supplier must eliminate
or mitigate these wherever possible. The completeness of any such analyses undertaken by the supplier must be
demonstrated.
All credible failures in a product must be assumed to be hazardous. Consequently, every effort must be made
either to eliminate the hazard, or to mitigate it and its potential consequences to within acceptable and
practicable limits, or evidence must be provided that the failures are not hazardous.
A safety critical product must perform vital operations including fault detection in a reliable and timely manner.
The safety case of a product should provide evidence that it meets the above requirements. It should be
demonstrated that the SIL of the product is commensurate with that of the applications for which it has been
developed. Alternatively, this demonstration may be performed for specific functions rather than the product as
a whole.
The Safety Case must demonstrate to the satisfaction of the notified body, the operator, the user of the product
and the suppliers themselves, that these requirements have been satisfied.
It may be preferable to develop and issue the safety case in stages. These stages may be linked to life cycle
phases, delivery milestones or design reviews. The advantage of this approach is that it gives visibility of the
development of the safety case to the stakeholders in the product, and thus provides opportunity for early
comment.
The proposed contents of the safety case at each stage, and an outline safety case describing the proposed
format and contents should be issued at a very early stage in the project.
In general, a Safety Case must provide a clear, comprehensive, convincing and defensible argument, supported
by calculation, procedure and management, that a product will provide a framework within which a
design may be realised and implemented, and will be acceptably safe throughout its life.
The safety case for a product will assist in the safe implementation of an application, using the product, and will
provide a major contribution to the application safety case.
In turn, the safety case developed for the application will be built into the safety cases of higher level systems,
and finally, into the overall railway system Safety Case. This will contain, or reference, Hazard/Error Logs,
design decisions, a history of development and use, and concluding safety arguments for all the components of
the system, including the safety critical products. In total this will provide the safety argument for the railway.
As well as aspects of product integration in the application, maintenance must be described in detail, defining
what maintenance is required, by whom and where it will be performed, the training and spares required, and any
maintenance aids needed, to ensure that the level of safety of the product is upheld.
A requirement of [CEN03] is that operational safety of a railway system must be monitored to ensure that the
safety features of the design remain valid during use. This should include the monitoring of safety-related
performance and comparison of this to the performance predicted in design analyses, assessment of failures and
accidents to establish actual or potential failure trends, and identifying from these, changes required to improve
safety performance. This is clearly the responsibility of the end user. However, it is this requirement which
makes it essential that the owner or duty-holder, responsible for the operation of the railway, has access to
sources of any information needed to enable the assessment of safety performance and to propose and
implement any changes needed. Clearly, therefore, support may be required from any of the suppliers and it is
essential that the appropriate commercial agreements are in place to enable the necessary modifications to be
made and any revisions to the safety arguments developed.
For details of the requirements for the Safety Case for software programmed into, or used directly for
developing, safety critical elements of the architecture, refer to the CASCADE Generalised Assessment Method
[GAM01]. [GAM01] should also be used to assess any software development tools or utilities which contribute
directly to the integrity level of any aspect of the architecture.
Contents
High-level documentation
Safety Management Documentation
Safety Objective
Description of the Architecture
Functional Elements
Safety Studies
Ownership & Responsibilities- Operation, Evolution, Modification
User Support
Conclusion - Safety Argument Summary
The Quality Assurance Plan is applicable to all the supplier's activities. These are described in the management
plans that have been prepared to carry out these activities. The measures defined by the Quality Assurance Plan
are, consequently, complementary to the management plans and can be detailed in all other plans derived from them.
The Quality Assurance programme is written based on the approach defined in the management plans. These plans
are based on the development process described in chapter 4.3 of this document.
EN 29001 requires that the supplier sets up an organisation capable of assuring the design and production
quality of the product.
To achieve this, the supplier will ensure that his plans are consistent with the safety and reliability
requirements of the product and that these plans are correctly implemented.
1. The organisation documentation: this should describe the organisational structure under which the product
has been developed, and define the roles, responsibilities and reporting structure of personnel involved in
management, development, safety, maintainability, reliability and user support. The named organisation
chart showing the persons working on the project shall be kept available.
2. The development plan: this plan defines the development of the product in terms of development stages
and establishes the criteria for demonstration and acceptance that each stage has been completed. This is
a "living" document which must reflect not only the original plan, but also the actual life cycle of the
development that took place.
3. The Quality Plan: this plan is the basic quality guideline for the project. The Quality Plan defines the
quality requirements that will be applied to all aspects of the work in developing the product. This will include
the Quality Management System (QMS) used on the project, together with a traceable path to enable
demonstration that the QMS is in accordance with EN 29001 and related standards. Separate Software and
Hardware Quality Plans may be prepared in addition to the Quality Plan. The producers (internal entities
or subcontractors) use these plans to prepare their own plans and include specific provisions in their
activities.
4. The Safety Plan: the Safety Plan defines the way in which the safety of the product is to be assured.
Details of the techniques and processes to be used, at what stage they are to be used, and how the findings of
each analysis are to be addressed as part of the development process shall be described.
5. Configuration Management Plan: this document describes the principles and processes by which the
product under consideration has been controlled throughout its life cycle, from conception through
detailed specification, design, build and validation. The Configuration Management Plan should detail the
timing of design reviews, configuration baselines, status reporting mechanisms and procedures for
deviation from prescribed processes. This document is vital since traceability is a central requirement of a
Safety Case, and rigorous traceability is only truly achievable when all evidence is from configured
sources.
6. Verification & Validation (V&V) Plan: this document defines the objectives and approach to be adopted
in demonstrating that the requirements described in the requirements specification documentation, and the
safety criteria drawn from the various safety analyses, have been met. Procedures for, and evidence of,
traceability of specific requirements to particular test elements of V&V activities shall be briefly
described, and appropriate detailed documentation should be referenced.
The main objective of an assessment is to gain confidence that the product meets its safety
requirements specification. The final objective is to obtain certification of the product. The certification is
based on the results of the assessment.
The main concepts for an assessment are repeatability, reproducibility and impartiality ([ITS01], [ITS02]).
An assessment is repeatable if repeating the assessment of the same product, against the same safety
requirements specification, by the same assessor gives the same overall verdict as the first assessment.
An assessment is reproducible if repeating the assessment of the same product, against the same safety
requirements specification, by another assessor gives the same overall verdict as the first assessment. An
assessment is impartial if it is free from bias towards achieving any particular result (sources [ITS01], [ITS02]).
An assessment can be concurrent or consecutive. If the assessment is done after the development of the
product, it is consecutive; if it is done in parallel with the development of the product, it is concurrent.
For a consecutive assessment, all the assessment inputs (documentation, hardware, software, etc.) are available
at the beginning of the assessment. For a concurrent assessment, the assessment inputs become available as the
development progresses (sources [ITS01], [ITS02]). The recommended approach is the concurrent assessment, as it
allows problems to be resolved at an early stage. Where existing products are utilised, a consecutive assessment
is the only solution. For new designs, all assessments should be concurrent (sources [ITS01], [ITS02]).
The assessment criteria describe the elements of proof necessary for the assessment. The information on the
product must be as clear and complete as possible, and the assessors should have a good understanding of the
product, particularly the safety requirements specification. An assessment is based on preliminary analysis,
observation, theory and experimentation.
The preliminary analysis of the product is very important. The assessment requires inputs from the supplier.
These inputs should include a description of the product and a set of requirement specifications which provide a
sufficient level of detail to undertake the assessment.
A product is composed of components, and each of these may itself be composed of lower-level components. It is
essential that the requirements at product level are broken down to all components and that each low-level
requirement is traceable to a top-level requirement. Safety requirements should be clearly identified separately
from other requirements.
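The traceability requirement above can be sketched as a simple check that every component-level requirement names a product-level parent. All identifiers and the data layout below are invented for illustration; a real traceability database would be far richer.

```python
# Hypothetical sketch of the traceability check described above: every
# low-level (component) requirement must trace back to a top-level (product)
# requirement. Identifiers and data are invented for illustration.
top_level = {"SR-1", "SR-2"}          # product-level safety requirements

component_reqs = {                    # low-level requirement -> claimed parent
    "CPU-REQ-1": "SR-1",
    "CPU-REQ-2": "SR-2",
    "IO-REQ-1":  "SR-1",
}

# Any requirement whose parent is not a known top-level requirement is a gap.
untraceable = [req for req, parent in component_reqs.items()
               if parent not in top_level]
assert not untraceable, f"requirements without a top-level parent: {untraceable}"
print("all component requirements are traceable")
```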
An assessment is successful if all the assessment criteria are satisfied. For each criterion assessed, there are
three possible outcomes.
By the end of the assessment, every criterion with a "to be confirmed" verdict must become a "success" or
"fail" verdict.
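The per-criterion verdicts and the closing rule above can be sketched as a small aggregation function. This is an illustrative sketch only; the function name and dictionary representation are assumptions, and the aggregation rule simply restates the text: no criterion may remain "to be confirmed", and the assessment succeeds only when every criterion ends as "success".

```python
# Minimal sketch of verdict aggregation, assuming the three per-criterion
# outcomes named in the text: "success", "fail" and "to be confirmed".
def overall_verdict(criteria: dict) -> str:
    # Unresolved criteria block any conclusion: "to be confirmed" verdicts
    # must first be resolved to "success" or "fail".
    if any(v == "to be confirmed" for v in criteria.values()):
        raise ValueError("unresolved criteria: assessment cannot conclude")
    # The assessment succeeds only if every criterion is satisfied.
    return "success" if all(v == "success" for v in criteria.values()) else "fail"

print(overall_verdict({"C1": "success", "C2": "success"}))  # success
print(overall_verdict({"C1": "success", "C2": "fail"}))     # fail
```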
Figure 9 shows the overall assessment and certification schema with the bodies involved, the roles of the bodies
and the data exchanged between the bodies.
The EU gives an identification number to each notified body and publishes all the information on the notified
body in the Official Journal of the European Communities. The EU keeps the official list of notified bodies
(sources [DIN01], [DIN02]).
The EU is in charge of defining the directives (for example: Interoperability Directive for the European Railway
High Speed System), the standards (for example: [CEN01], [CEN02], [CEN03], [EN01] to [EN07]).
The European Union may be assisted by a committee made up of representatives of each country and
chaired by the European Union. Where it exists, this committee is in charge of defining the European policy for
assessment and certification. It defines the procedures, methods, rules and criteria for assessment and
certification for all the countries (sources [DIN01], [DIN02]).
The authority appoints notified bodies. In order to maintain the notification, the authority must regularly
monitor the competence and the independence of the notified bodies (sources [DIN01], [DIN02]).
Details of the notified bodies are published in the Official Journal of the European Communities (sources
[DIN01], [DIN02]).
The authority establishes the national accreditation body (the accreditation body must function in conformance
with the EN 45ACC and EN 45ASS project standards).
The notified body is governed by the laws of the state member who notifies it (source [DIN01]).
The notified body can be accredited ([EN01] standard). Otherwise, the member state must justify to the
European Union the competence (against these standards) of the notified body (source [DIN02]).
The employees of the notified body must be independent (remuneration not proportional to the number of
assessments performed) and are bound by professional secrecy (industrial property) (source [DIN01]).
The notified body must take out civil liability insurance (source [DIN01]).
The notified body leads the assessment and defines the procedures and the means to fulfil the assessment of
the product. The notified body may use external assessors to perform some parts of the assessment work.
Upon satisfactory assessment results, the notified body will issue a certification report and a certificate for the
product.
The notified body is responsible for the assessment technical report (source [DIN01]).
The notified body (sources [DIN01], [DIN02]) maintains, and publishes the list of:
The sponsor is the person or body who requests the assessment to show that the product meets the safety
requirements specification. The sponsor orders the assessment (he requests and finances it). The
sponsor is responsible for the appropriate utilisation of the certification report and the certificate (sources
[ITS03], [ITS04]).
The sponsor may choose a notified body from any European country (sources [DIN01]).
The assessors are bodies of proven integrity, independence (notably financial) and technical competence. They
must be independent of the design, manufacture, marketing, maintenance and operation of the product (sources
[DIN01], [DIN02]).
The employees of the assessors must be independent (i.e. remuneration must not be based on the number of
certificates issued) and are subject to confidentiality requirements (sources [DIN01], [DIN02]).
The supplier demonstrates the safety of the product according to the defined level (in the case of ACRuDA, the
level is SIL4). The evidence and argument that the product is safe is contained in the safety case.
The sponsor must give a precise description of the product and define the safety requirements specification and
the boundaries of the assessment. When the sponsor is not the supplier, the participation of the supplier is
strongly recommended.
The sponsor contacts a notified body and requests an assessment of the product (sources [ITS03],
[ITS04]).
The notified body consults the assessors (internal and/or external). A preliminary analysis of the
product description and the safety requirements specification must be performed by the notified body and the
assessors, to check the completeness and coherence of the two descriptions. On the basis of the product
description and safety requirements specification, the notified body and the assessors carry out an assessment
feasibility study. The possible results of this study are:
a) The result of the feasibility study is positive: the notified body and the assessors define an assessment plan.
The structure of the assessment plan is given in ANNEX III. This plan contains a detailed assessment work plan
with a detailed assessment inputs delivery plan. The assessment plan is submitted for approval to the sponsor
and the supplier. The sponsor and the supplier (if the sponsor is not the supplier) prepare an assessment case
which contains:
The assessment case is transmitted to the notified body. The notified body can make remarks and comments on
the assessment case (in particular, it can focus on points which could pose problems for the delivery of the
certificate).
The notified body draws up a contract for the assessment (based on the assessment case). The contract is signed
by the notified body and the sponsor. The assessment is registered by the notified body in its list of assessments
in progress.
The assessment can be divided into one or more work packages. A single assessor can be in charge of all the work
packages. If external assessors (subcontractors) are required, contracts are signed between the notified body and
the subcontractors before the beginning of the assessment. These contracts must define the assessment work, the
assessment inputs delivery plan and the financial terms.
The assessment plan can be annexed to the contract and can have draft status. The plan can be modified during
the assessment (because of changes to documentation or tools, additional information, etc.).
b) The result of the feasibility study is negative: the notified body asks the sponsor and the supplier to make
the necessary changes.
The notified body (with internal and/or external assessors) executes the assessment tasks defined in the
assessment plan. For each task, reports are produced regularly to monitor the progress of the assessment tasks.
The assessment inputs are analysed for conformance with the criteria. The assessors must verify the product
against the criteria. During the assessment, the notified body and the assessors must seek to understand the
product which they investigate. In particular, they must seek to determine whether the product can behave in
any way contrary to the safety requirements specification; in other words, they seek to discover potential risks
of hazardous failure. It is recommended that the assessors build models and perform experiments and observations
on the product. All the assessment criteria must be verified.
During the assessment, problems can be detected: non-delivery of an assessment input, refusal to correct a
design, etc. These problems are called anomalies and must be given specific treatment. They
must be analysed to determine their consequences on the assessment. It is necessary to take these
anomalies into account as early as possible in the assessment process. A special procedure to treat the anomalies
must be defined, and all the bodies involved in the assessment must be informed. In general, there are two
categories of anomalies: minor and major. Minor anomalies can be easily corrected and are registered in the
assessment report. Major anomalies are recorded in anomaly reports. An anomaly report can contain:
Each anomaly report is examined and validated by the notified body and the assessors. These reports are sent to
the sponsor and to the supplier. All the anomalies must be treated. The supplier can dispute an anomaly, but he
must have good arguments to convince the notified body and the assessors. The sponsor can also dispute an
anomaly if he judges that its treatment can have important consequences on the assessment. In all cases, all
anomalies must be resolved by the end of the assessment, and the decision to close an anomaly must be taken
with the agreement of all the partners involved in the assessment.
At the end of each assessment task, an assessment report containing the results of the assessment work is
written. Each assessor writes an assessment report for the notified body. Each assessment report is examined
and internally approved by the notified body. The assessment reports can contain confidential information, and
their distribution must be controlled. These assessment reports are sent for approval to the sponsor and supplier.
Confidentiality clauses are applied. If the supplier wants to protect its industrial knowledge, the assessment
reports are sent only to the supplier, but the sponsor is informed of the delivery of the reports. The assessment
reports contain:
Where the work packages are apportioned among several assessors, it is necessary to synthesise all the
assessment reports into a final technical assessment report. The notified body is responsible for the
constitution of the technical assessment report. The technical assessment report contains a description of all the
work achieved by the assessors, all the results of the assessment, and the conclusion of the assessment.
Restrictions on the use of the product may be mentioned in the report. All the references of the
technical assessment report must be available.
When the technical assessment report is internally approved by the notified body, it is sent for approval to the
supplier. The technical assessment report can contain confidential information, and its distribution must be
controlled. Confidentiality clauses are applied. If the supplier wants to protect its industrial knowledge, the
technical assessment report is sent only to the supplier, but the sponsor is informed of its delivery. The
distribution of the technical assessment report to entities not involved in the assessment is subject to the
approval of the sponsor and the supplier.
An evaluation rating can be regarded as the assignment of a pass/fail verdict. A pass verdict is assigned if all
criteria are satisfied and, in particular, no risk of hazardous failure has been found. A fail verdict is assigned
if any error is found and not corrected, or if a risk of hazardous failure is found.
This is the final step. The technical assessment report, containing the results of the assessment, is approved by
the notified body and the supplier. Confidentiality clauses will be applied. On the basis of the technical
assessment report, the notified body summarises the conclusion in the certification report. The certification
report is a public document; an end user of the product may have access only to this document. Consequently,
the certification report must contain all the observations, measures and recommendations necessary for the safe
use of the product.
When the certification report is approved by the notified body and the sponsor, the notified body delivers a
certificate. The certificate is signed by the sponsor and by the notified body. The product is added to the list of
certified products. The certification report and the certificate are published in official national and European
documents.
The structure of the technical assessment report is given in ANNEX IV, the structure of the certification report
in ANNEX V and the structure of the certificate in ANNEX VI.
An assessment is a complex process which demands considerable time, resources and money, depending
upon the complexity of the product and the integrity level. The certification report and the certificate for a
product are valid only for the assessed version and configurations of the product. To limit the amount of work
required for an assessment, it can be worthwhile, where possible, to re-use assessment results from a
previous product assessment. There are two cases where the re-use of assessment results can be applied (sources
[ITS02], [ITS04]):
In the first case, a new version of the product is concerned. The supplier must identify, by a clear and precise
analysis, and describe in a report, the modifications and their consequences on the safety of
the product. The supplier or the sponsor must submit this report to the notified body. The notified body analyses
the report and decides whether or not it is necessary to re-assess the product. A re-assessment is identical to the
assessment described in chapters 5.1 to 5.8, except that some results of the previous assessment of the
product can be re-used. If the re-assessment is successful, the certification report is written and the certificate
is delivered by the notified body. If no re-assessment is needed, the notified body extends the certificate to the
new version of the product (sources [ITS02], [ITS04]).
In the second case, the situation is different because the product is new but uses some assessed/certified
components. The sponsor requests the assessment of this new product. This assessment is considered a totally
new assessment, as described in chapters 5.1 to 5.8. As noted above, some parts of the product have been
assessed/certified in a previous assessment, and it is possible to re-use some results of these previous
assessments during the assessment of the new product. The assessors must carefully verify that the
assessed/certified components are correctly used in the composed product (in particular: verification of the
interfaces, and verification that the use of the assessed/certified components cannot degrade the safety of the
composed product). If the assessment is successful, the certification report is written and the certificate is
delivered by the notified body (sources [ITS02], [ITS04]).
In all cases, all the partners involved in the assessment and certification process must be careful in the re-use
of previous assessment results or with products composed of assessed/certified components.
This report must cover all the methods, techniques and tools used during the assessment, and the lessons
and benefits identified by the assessors.
This report must also give the assessors' opinion and judgement of the methods, tools and
techniques used by the supplier for the development of the product.
This report can contain some confidential information, but some of the results (evolution of the criteria, new
methods, etc.) can be published to the overall community of assessors. The objective is to improve the global
quality of assessment in the European Community.
The certificate is valid for the assessed version and configuration of the product. The safety of the product may
reasonably be assumed for correct use of the product in accordance with the recommendations of use contained
in the certification report.
The certification report and the certificate are the property of the notified body. The reproduction and
publication of the two documents are authorised only if they are reproduced in full.
The notified body can withdraw the certificate (for example if it is discovered that the data supplied during the
assessment were not exact).
A certification report structure is proposed in ANNEX V and the certificate structure is proposed in ANNEX VI
of this document.
The assessors are not concerned with the relationship between sponsor and supplier. It is recommended to
define, before the beginning of the assessment, a complete list of the assessment inputs with their dates of delivery.
The following points should be defined:
the medium and the format of the assessment inputs (computer medium, tape, paper, etc.),
the schedule for the delivery of the assessment inputs,
the number of copies of each assessment input to be delivered,
the policy for provisional assessment inputs,
the development environment,
access to the development site.
During the assessment, the assessors will have access to confidential information (protection of industrial
property). All the bodies involved in the assessment process must have the assurance that all this information
will remain confidential. This influences many aspects of the assessment process (reception, management,
storage, and restitution of the assessment inputs).
- EN 29001 series
- [EN01] to [EN07] standards
- International standards
- National standards
- Standards on safety analyses methods
The assessment phase is an essential phase in the life cycle of the product (see the [CEN01] life cycle). The aim
of the assessment is to give the final users confidence in the safe use of the product. In this context, a quality
system for the assessment activity is highly recommended. This quality system has to be described in a Quality
Handbook.
Hereafter are the ACRuDA recommendations for the content of the Quality Handbook of the assessor.
The referential used by the assessor includes the best practices and the applicable norms, standards and
regulations. The domain of railway safety architectures is subject to numerous regulations and norms.
The assessor should therefore clearly specify, in the Quality Handbook, the norms that will be checked in his
assessment.
The Quality Handbook should explain how the assessor makes sure that he always has an up-to-date referential.
The risk of using an outdated referential is to produce a certification which will not be recognised by the other
European partners.
The Quality Handbook should explain which provisions are taken to identify and qualify the validity domain of
the methods used for assessment.
These methods should be coherent with the referential of the assessor. The assessor should make sure that
these methods have been previously tested in safety applications. The assessor should use methods that have
been defined by national or international standards.
The "training" chapter of the handbook should explain how the assessor is able to use the methods.
It is recommended to use methods which lead to objective results as far as possible. This is the best assurance for
objective and reproducible results. It is recommended to have a wide panel of methods available for the
assessment, so that the verification is strengthened by a diversification of the studies and points of view taken by
the assessor.
For the tools used for measurement and test, it is recommended to apply the requirements of [EN01] on
equipment.
The automation of tests and analyses is recommended as far as possible, to guarantee the reproducibility of the
evaluation and to limit the human error factor.
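Such an automated campaign can be sketched as follows. This is a minimal illustration, not taken from the source: the test names, checks and fixed-seed scheme are hypothetical, and the point is only that driving the whole campaign from one script with a fixed random seed makes every run reproduce the same stimuli and the same verdicts.

```python
import random

# Hypothetical automated test campaign: each test is a named, deterministic
# check, and the whole campaign is driven by one script so that a re-run
# reproduces exactly the same stimuli and the same verdicts.
def run_campaign(tests, seed=42):
    rng = random.Random(seed)           # fixed seed: reproducible stimuli
    results = {}
    for name, check in tests:
        stimulus = rng.randint(0, 255)  # same sequence on every run
        results[name] = check(stimulus)
    return results

# Two toy checks standing in for real test procedures.
tests = [
    ("range_check", lambda x: 0 <= x <= 255),
    ("stimulus_defined", lambda x: isinstance(x, int)),
]

first = run_campaign(tests)
second = run_campaign(tests)
print(first == second)  # True: the evaluation is reproducible
```

Re-running the script yields identical verdicts, which is exactly the guarantee of reproduction the text asks the automation to provide.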
The risk of not applying the quality requirements to tools is to obtain invalid measurements.
A procedure about the safety audit activity should be referenced in the quality handbook. Some quality
requirements for audit traceability must be ensured: an audit plan should be written, a list of documents
reviewed should be produced.
The Quality Handbook should explain the configuration management applied to the assessment.
The general policy for the configuration management of the product under assessment (versions of the software,
versions of the components of the architecture) should be presented too.
This is a very important requirement for the assessor, because the risks are major:
to assess a version of the product which is not the version in operation,
to have gaps in the evaluation of the development process,
to lose the traceability of the assessment.
Furthermore, in railways the life cycle of a product can last 30 years. This requires the development of a
configuration system that keeps the safety documentation (including the assessment reports) valid over this
period. The Quality Handbook should state whether the assessor provides a service such as recording the
different assessment reports made on a product.
The assessment activity will produce assessment reports and anomaly reports. We suggest referring to the
[EN01] requirements on reports. The closure of safety anomalies should be subject to strict conditions. The
anomalies that remain open should be presented in the final report and should then imply restrictions on the
certification or use of the architecture.
The risk of poor traceability of the reports and anomalies is to forget some important points and restrictions of
the certification.
The legitimacy of the assessor is mainly founded on this characteristic. This issue should therefore be especially
detailed in the Quality Handbook. The competence and knowledge of the assessors should be closely related
to the technology, processes and methods used by the developer in railways. The "training" chapter should
explain how the competencies and knowledge are kept up to date.
The use of experts in the assessment team should be organised too. Their activity should be subject to an
assessment project review with the rest of the team, and their independence should be demonstrated with respect to the
methods and product evaluated. Furthermore, it is good to keep some experts outside the assessment process as a
potential resource in case of conflict.
The risk of a lack of competencies is to be unable to perform the investigations correctly or to judge the
technical criteria. The risk of a lack of knowledge in the specialised domains is to overlook a problem, or to
reject a new technology even though it could bring some improvement to the safety process.
The responsibility of the assessors for their results is important, and it is recommended to have an organisation
that deals with this specificity.
The risk of a bad organisation is to lose time and money, and to put unnecessary stress on the members of the
assessment.
The legitimacy of the assessor is mainly founded on this characteristic. It is recommended to satisfy the
impartiality requirements of [EN01] and of the Interoperability Directive.
The risks of a lack of independence of judgement are to hide some anomalies, to reject the product of a
competitor, or to accept weak justifications of the developer's process.
A minimum set of procedures in the quality system should describe the protections that reduce the vulnerability
of the information given by the sponsor and produced by the assessor. The level of security reached should be
defined.
The assessor should define a procedure to make sure that the content of its reports is not altered or changed with
respect to the information needed by the notified body and the final user.
A subcontractor should comply with the same quality requirements as the notified body itself. Furthermore,
a reception procedure should be defined to accept the work of the subcontractor.
The relationship between the assessor and other partners should be presented in the Quality Handbook.
Software assessment and certification is basically assessment and certification of design. Therefore, the methods
and tools that are used in the design process must be assessed in order to determine if they lead to the desired
quality. Here, quality encompasses error avoidance, error correction and error tolerance.
Error avoidance is clearly an activity that must be performed during the design process. It can be facilitated by
the use of recognised design and control principles. Error correction is used here to mean an activity performed
by the software (at run-time) to correct recognisable errors before they have any effect. Error tolerance is the
ability of the software to function correctly even if certain boundary conditions are not fulfilled.
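As an illustration of run-time error correction (not taken from the source): in safety architectures, correction of recognisable errors before they have any effect is often realised by redundancy and voting. A minimal two-out-of-three majority voter might look like this sketch:

```python
def vote_2oo3(a, b, c):
    """Return the majority value of three redundant channels.

    A single erroneous channel is out-voted, so the error is corrected at
    run-time before it has any effect; if all three channels disagree, no
    safe majority exists and the system must fall back to a safe state.
    """
    if a == b or a == c:
        return a
    if b == c:
        return b
    raise RuntimeError("no majority: fall back to safe state")

print(vote_2oo3(10, 10, 10))  # 10
print(vote_2oo3(10, 99, 10))  # 10 (channel b's error is masked)
```

The two-out-of-three scheme here is only one classic example; the text's notions of error correction and tolerance cover many other mechanisms.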
Verification is the process of demonstrating that the software truly fulfils the specified requirements, validation
is the process of demonstrating that the requirements were correct. For hierarchically structured software,
validation of requirements at a lower level consists of demonstrating that they correspond to at least part of a
requirement at the next higher level. Then, only the top level requirements must be validated against the safety
and reliability requirements of the encompassing system.
If all requirements at all lower levels can be validated against requirements at a higher level, then verifying the
requirements at the bottom will very often also constitute a verification of the higher levels.
This must of course be confirmed, and such confirmation is of course part of the assessment and certification
process. But when that can be done, the subsequent validation of the top level requirements becomes the
remainder of the assessment and certification process. And that is where the encompassing hardware and its
operational context must be considered.
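The hierarchical argument above can be sketched with a toy requirement tree. The identifiers and the parent/child structure are hypothetical; the check only illustrates the rule that every lower-level requirement must validate against some requirement at the next higher level, so that only the top level remains to be validated against the encompassing system.

```python
# Hypothetical requirement hierarchy: each lower-level requirement names the
# higher-level requirement it (partly) realises. The top-level requirement
# has no parent and is validated against the system safety targets instead.
requirements = {
    "SYS-1": None,       # top level
    "SW-1":  "SYS-1",
    "SW-2":  "SYS-1",
    "MOD-1": "SW-1",
    "MOD-2": "SW-2",
}

def unvalidated(reqs):
    """Return lower-level requirements whose declared parent does not exist."""
    return [r for r, parent in reqs.items()
            if parent is not None and parent not in reqs]

print(unvalidated(requirements))  # [] : every lower level traces upward
```

An assessor confirming this property for a real product would of course also examine the content of each link, not merely its existence, as the text notes.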
5.10.4. Interfaces
At the beginning of this section it was pointed out that software is always embedded in a more complex
environment and that the interaction between the software and its environment determines the quality of the
software product. This interaction is defined through the interfaces between the software and its environment.
Thus, the correct definition and implementation of interfaces must be confirmed. Confirming the correct
definition of the interfaces is a part of the validation task, confirming their correct implementation is part of
the verification task.
The supplier of the architecture shall provide all the evidence required to demonstrate compliance with the
detailed criteria. The evidence should be organised in accordance with a Safety Case Structure (see chapter
4.4.3), and shall be readily available for audit, walk-through, review and detailed examination.
The assessment should be based on the judgement resulting from the verification of a set of criteria on the
following properties:
The assessment should be focused, mainly, on the conformity and effectiveness of the techniques and measures.
The assessment of Effectiveness is a judgement about the abstraction of the product, the safety principles or the
methods. Effectiveness characterises how effective the techniques and measures are in identifying and
eliminating or mitigating the hazards.
Effectiveness includes:
suitability of the safety principles and mechanisms, standards, safety functions, methods and tools used
to construct a safe product,
cohesion of the set of safety principles and safety critical functions,
cohesion of the set of tasks described in the safety plan.
Conformity deals with the completeness of the implementation and the accuracy of the representation of
the specification. Conformity characterises how accurately the techniques and measures are implemented and
how well they are explained in the supplied documentation. Typical conformity questions are:
does the implementation contain all the requirements that are stated in the specification?
does the implementation contain no more than the requirements stated in the specification?
is the implementation an accurate representation of the specification?
are the methods planned in the safety plan actually used and applied?
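The first two conformity questions amount to a set comparison between the specified and the implemented requirements. A sketch, with hypothetical requirement identifiers:

```python
def conformity_gaps(specified, implemented):
    """Compare specified vs implemented requirement identifiers.

    'missing' answers: does the implementation contain all requirements?
    'extra'   answers: does it contain no more than the specification?
    """
    specified, implemented = set(specified), set(implemented)
    return {"missing": sorted(specified - implemented),
            "extra":   sorted(implemented - specified)}

spec = ["REQ-1", "REQ-2", "REQ-3"]   # hypothetical identifiers
impl = ["REQ-1", "REQ-3", "REQ-9"]
print(conformity_gaps(spec, impl))
# {'missing': ['REQ-2'], 'extra': ['REQ-9']}
```

The third and fourth questions (accuracy of representation, use of the planned methods) require human judgement and cannot be reduced to such a mechanical check.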
Where necessary, a single criterion can be broken down into several lower-level criteria in order to make the
assessment. Each criterion shall be applied according to current best practice and experience. In addition, the
assessor shall assess the design of the architecture independently, for example by carrying out, as a minimum, an
independent hazard analysis.
The assessor shall provide an assessment report which should summarise the approach, findings, criteria and
provide detailed reasons why the elements of the architecture passed or failed the criteria.
The assessor shall make judgements about the evidence presented by the supplier. The assessment criteria must
cover all the techniques and procedures used by the developer to achieve the integrity of the architecture.
According to [CEN01], the means to achieve railway dependability relates to controlling the factors which
influence dependability throughout the life of the system. Effective control requires the establishment of
mechanisms and procedures to defend against sources of error being introduced during the realisation and
maintenance of the system. Such defences need to take account of both random and systematic failure.
The means used to achieve dependability are based on the concept of taking precautions to minimise the
possibility of failure occurring as a result of an error during the realisation phases. Precaution is a combination
of prevention and protection means.
The strategy to achieve dependability for the system, including the use of prevention and/or protection means,
shall be justified in the safety case.
By defining a management process based on a life cycle, [CEN01] elaborates the means to ensure dependability
through minimising the effects of errors and by controlling the factors influencing railway dependability (see
section 6 of the standard). Methods, tools and techniques appropriate to engineering dependable systems are
presented in other CENELEC standards, [CEN02] and [CEN01] and in IEC standard [IEC01].
A general overview of the manner in which methods and techniques are used to support dependability
engineering and management is given in [CEN01] (chapter 5.3.7, figure 12).
The following documentary evidence is a condition (required by the standards [CEN03] and [IEC01]) for the
safety acceptance of the safety-related electronic system.
These documents, included in a structured safety justification document (the Safety Case), have to present the
methods and techniques used to develop the system and to ensure its safety. Examples of methods and techniques
to be used for the validation of safety digital architectures are given in the standards.
This chapter contains the basic criteria which are expected to provide the infrastructure and rules for
understanding an assessment of a safety critical digital architecture. These assessment criteria have been derived
from the state of the art and the standards [CEN01], [CEN02], [CEN03] and [IEC01]. They provide the basis
for the development of detailed criteria for the individual architectures.
The safety management assessment will examine all technical and management activities, during the whole
architecture life-cycle, to ensure that the safety-related systems and external risk reduction facilities allow the
required functional safety to be attained.
Competence of staff, departments or other groups involved in safety management activities will also form part
of this assessment.
The aim of this element of the assessment is to assess the capability of the organisation to administer safety
procedures. It has to ensure that the responsibilities of the staff and their competence and training requirements
are clearly specified, and that this process is being implemented.
The structure and the content of the safety plan shall be examined to check whether they conform to the
ACRuDA Safety Plan Requirements.
The safety case structure and content shall be checked for conformance to the ACRuDA Safety Case Structure
(see chapter 4.4.3).
The basic criteria are presented in the form of process and product properties. They state the requirements for
the life-cycle processes and products and each requirement is devised to address a specific set of hazards. These
requirements will, in general, be satisfied by using the relevant techniques and measures recommended by the
safety critical standards. Therefore, with each set of basic criteria, a table of relevant techniques and measures is
attached. These tables also identify the objects to which they apply.
The effectiveness with which these techniques control the hazards or cover the faults depends on various
factors, such as the frequency of application, the accuracy of fault detection and the timeliness of fault negation.
The effectiveness therefore depends on the degree of sophistication used to implement the measure. For
example, the effectiveness of a coding technique may well depend on the size of the code word: the bigger the
code word, the more effective the implementation.
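The code-word-size remark can be illustrated numerically: for random corruption, an n-bit check word leaves a fraction of roughly 2^-n of errors undetected, so a larger check word is a more effective implementation of the same technique. The sketch below uses the low n bits of CRC-32 as a stand-in check word; this is an illustration under that assumption, not a technique prescribed by the source.

```python
import random
import zlib

def undetected_fraction(check_bits, trials=20000, seed=0):
    """Estimate the fraction of random corruptions an n-bit check word misses."""
    rng = random.Random(seed)
    mask = (1 << check_bits) - 1        # keep only the low n bits of the CRC
    undetected = 0
    for _ in range(trials):
        msg = bytes(rng.getrandbits(8) for _ in range(8))
        bad = bytes(rng.getrandbits(8) for _ in range(8))
        if bad != msg and (zlib.crc32(msg) & mask) == (zlib.crc32(bad) & mask):
            undetected += 1             # corruption slipped past the check
    return undetected / trials

for bits in (4, 8, 16):
    print(bits, undetected_fraction(bits))  # roughly 2 ** -bits
```

Running this shows the undetected fraction falling from about 1/16 at 4 bits towards near zero at 16 bits, which is the sense in which effectiveness grows with the size of the code word.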
6.4.1. Contents
The structure and activities of the product life cycle shall provide a systematic approach to the development,
production, support and maintenance of the product.
The activities required to identify, control or eliminate hazards at each life cycle phase shall be described. A
structured plan of these activities constitutes the safety plan.
1. The life cycle plans shall cover all development phases and describe the processes used to ensure the
quality, reliability, maintainability and integrity of the products.
2. The life cycle plans shall identify all the resources to be used and their essential qualities, such as the
designers and their competence, tools and their reliability, validation teams and their independence.
3. Each development phase shall precisely specify:
the inputs, information and resources required to carry out the activity,
a summary of the processes,
its successful termination conditions,
its outputs.
4. All development activities shall be covered by an appropriate safety plan. The safety plan should have the
approval of the supplier's project manager and the supplier's internal independent safety organisation.
5. Personnel and responsibilities
Personnel involved in safety activities should be suitably qualified.
6.5. Requirements
A systematic approach to requirements development is essential to ensure high integrity.
6.5.1. Contents
The functionality and integrity, reliability and performance requirements of the architecture should be specified.
The desired features, such as protection against some specific component faults or target time for fault
detection, are regarded as an integral part of requirements.
1. The approach for establishing and identifying detailed requirements shall be described. This should
include procedures for:
derivation of the safety targets from the top level architecture,
decomposition of system level requirements into lower level requirements specifications,
verifying the consistency of requirements,
tracing their relationships to the design objects, components and code,
providing traceability to test specifications to enable testing of each requirement to validate the
system,
mechanisms to ensure that changes to requirements are fully controlled.
2. The safety critical digital architecture shall meet the SIL 4 requirements as prescribed by the standards
[CEN03] and [IEC01].
3. The safety requirements shall consider human factors issues: reliability of the operators, information
overloading, operator errors, etc.
4. The techniques and measures, equivalent to those recommended by the standards, [CEN03] and [IEC01]
shall be used for the requirements. Variance from the recommendations of these standards should be fully
described and justified. The techniques and measures from the standards which are applicable to life-cycle
processes and products are listed in Table 2.
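The traceability procedures listed above can be illustrated with a toy traceability matrix. The identifiers are hypothetical; the check enforces only the simplest rule, that every requirement traces to at least one test specification.

```python
# Hypothetical traceability matrix: requirement identifier -> test
# specifications that exercise it.
trace = {
    "SRS-001": ["TST-010", "TST-011"],
    "SRS-002": ["TST-012"],
    "SRS-003": [],                     # not yet covered by any test
}

# Requirements lacking a test specification violate the traceability rule.
untested = [req for req, tests in trace.items() if not tests]
print(untested)  # ['SRS-003']
```

A real traceability analysis would extend the same structure downwards to design objects, components and code, and would also track changes to requirements, as the procedures above require.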
6.6. Design
Digital architectures are designed to reduce random and systematic credible faults to an acceptable level by
using appropriate techniques and measures.
6.6.1. Contents
The design describes all the elements of the architecture, their interrelationships and interfaces, and their role in
fulfilling the requirements.
The techniques and measures used to achieve the design goal are also explained.
1. The procedures used to derive the design from the requirements and to verify the design against the
requirements shall be described.
2. The safety critical digital architecture shall provide the following functionality:
implementation of requirements derived from mitigation or elimination of hazard identified for the
range of perceived applications of the architecture,
execution of application programs,
collection of inputs and delivery of outputs,
6.7.1. Contents
A well-structured validation and test plan is required. This plan shall describe all the activities, from test
environment set-up and test scenario selection to test execution and analysis of the test results. It also describes
the test organisation, test processes and test documentation. The test specifications, acceptance criteria and test
results form an essential part of the evidence of safety.
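One possible shape for such test documentation, kept as an illustration only (the fields and example entries are hypothetical): each record ties a test specification, its acceptance criterion and the observed result together, so the log can serve directly as safety evidence.

```python
from dataclasses import dataclass

# Hypothetical structure for one entry of the test documentation.
@dataclass
class TestRecord:
    test_id: str
    scenario: str
    acceptance: str   # criterion the observed result is judged against
    observed: str
    passed: bool

log = [
    TestRecord("T-01", "power-up", "safe state within 100 ms", "82 ms", True),
    TestRecord("T-02", "sensor loss", "output forced to stop", "stop", True),
]
print(all(r.passed for r in log))  # True: the campaign meets its criteria
```

Keeping the acceptance criterion beside the observed result makes each verdict reviewable on its own, which supports the audit and walk-through activities mentioned earlier.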
6.8.1. Contents
1. The analyses should be carried out as detailed in the safety plan. Their application, procedures and scope
should be explained.
2. The main findings of the analyses should be available for examination.
1. The analyses shall be planned and performed in a timely manner, so that their findings are effectively used
in the development process.
2. Review and incorporation of the findings of the analyses shall be part of a formal implementation process.
3. The analyses shall identify all credible failure modes and estimate their criticality and frequency of
occurrence.
4. The types of failures considered shall be specified. They shall cover, as far as possible, all static and
intermittent failures, combinations of failure modes, hazardous and safe failures, and latent and undisclosed
failure modes.
5. The results and findings of the analyses shall be integrated in the safety case of the architecture; they shall
form the core of the safety argument, as evidence to support functional and technical safety.
6. The faults arising from the following sources shall be considered:
hardware and software and their interactions,
environmental factors, e.g. EMC,
network elements and data and field buses,
operators and operating conditions,
critical operations including start up and close-down.
7. The techniques and measures, equivalent to those recommended by the standards, [CEN03] and [IEC01]
shall be used for fault and failure analysis. Variance from the recommendations of these standards should
be fully described and justified. The techniques and measures from the standards which are applicable to
life-cycle processes and products are listed in Table 5.
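The criticality estimation asked for in criterion 3 is often tabulated in FMEA style. A sketch, with entirely hypothetical failure modes and 1-4 severity/frequency scales, showing how the modes can be ranked by criticality for treatment:

```python
# Sketch of a failure-mode table: each credible failure mode gets an
# estimated severity and frequency (hypothetical 1-4 scales); criticality
# is their product and is used to rank the modes.
failure_modes = [
    ("relay welded closed",  4, 2),   # (mode, severity, frequency)
    ("watchdog not started", 4, 1),
    ("EMC burst on bus",     2, 3),
]

ranked = sorted(failure_modes, key=lambda m: m[1] * m[2], reverse=True)
for mode, sev, freq in ranked:
    print(f"{mode}: criticality {sev * freq}")
```

The scales and the multiplicative criticality measure are one common convention, not a requirement of the cited standards; the standards leave the choice of scales to the analysis method selected.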
The objective of the overall operation and maintenance is to operate and maintain the safety architecture, its
control system and the total combination of safety-related systems and external risk reduction facilities such that
the designed functional safety is maintained.
6.9.1. Contents
User manual, maintenance manual, upgrade and new-release procedures, FRACAS (Failure Reporting, Analysis and Corrective Action System), user support.
1. There shall be a maintenance plan for the product, which should include collection of field data. Inspections
and off-line tests shall be performed at regular intervals.
2. A support service plan shall specify support organisation, its responsibilities and policies. The support
procedure shall explain the mechanisms used for fault reporting and incorporating new releases.
3. Safety operation procedures, inspection and maintenance procedures shall be formulated and defined in a
way that ensures safety and minimises operator errors. All relevant issues from the hazard and safety
analyses shall be addressed.
4. The digital architecture components shall be kept as simple as possible, to reflect the limits of human
capacity. Appropriate metrics may be used to assess the relative complexity of these components.
5. Data-driven systems (including parametric or configurable systems) shall be protected against possible
errors arising from entry of incorrect data.
6. The control devices and means of surveillance shall be such that additional hazards due to operator error
are remote.
7. There shall be a well-specified procedure for collecting and analysing the product's history-of-use data.
8. The techniques and measures, equivalent to those recommended by the standards [CEN03] and [IEC01],
shall be used for operation, maintenance and support. Variance from the recommendations of these
standards should be fully described and justified. The techniques and measures from the standards which
are applicable to life-cycle processes and products are listed in Table 6.
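Criterion 5 above, protection of data-driven systems against entry of incorrect data, can be illustrated by a small sketch. The parameter names, limits and the CRC scheme are hypothetical; the idea shown is simply that configuration data carries both per-parameter range checks and a whole-record check word, so that mistyped or corrupted data is rejected before use.

```python
import zlib

# Hypothetical allowed ranges for two configuration parameters.
LIMITS = {"max_speed_kmh": (0, 300), "brake_delay_ms": (10, 500)}

def load_params(params, crc):
    """Accept a configuration record only if its CRC and ranges check out."""
    if zlib.crc32(repr(sorted(params.items())).encode()) != crc:
        raise ValueError("configuration record corrupted")
    for name, value in params.items():
        lo, hi = LIMITS[name]
        if not lo <= value <= hi:
            raise ValueError(f"{name}={value} outside [{lo}, {hi}]")
    return params

good = {"max_speed_kmh": 160, "brake_delay_ms": 120}
crc = zlib.crc32(repr(sorted(good.items())).encode())
print(load_params(good, crc) == good)  # True: record accepted
```

A record with a value outside its range, or with a stale check word, raises an error instead of silently configuring the system, which is the protection the criterion asks for.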
1. Software planning. The supplier shall produce the following documents and the assessor shall perform a
judgement on their contents:
Software Configuration Management Plan
Software development plan
Software quality assurance plan
Software validation plan
Software maintenance plan
Software/hardware integration plan
Software integration plan
Software verification plan
2. Software Requirements. The supplier shall produce the following documents and the assessor shall
perform a judgement on their contents:
Software requirements specification
Software requirements test specification
Software requirements verification report
3. Software Design: The supplier shall produce the following documents and the assessor shall perform a
judgement on their contents:
Software architecture specification
Software design specification
Software design test specification
Software integration specification
Software architecture and design verification report
4. Software Module Design. The supplier shall produce the following documents and the assessor shall
perform a judgement on their contents:
Software module design specification
Software module test specification
Software module verification report
5. Code. The supplier shall produce the following documents and the assessor shall perform a judgement on
their contents:
Software source code and supporting documentation
Software source code verification report
6. Module Testing. The supplier shall produce the following documents and the assessor shall perform a
judgement on their contents:
Software module test report
7. Software Integration. The supplier shall produce the following documents and the assessor shall perform a
judgement on their contents:
Software integration report
8. Software/Hardware Integration. The supplier shall produce the following documents and the assessor shall
perform a judgement on their contents:
Software/hardware integration report
9. Software Validation. The supplier shall produce the following documents and the assessor shall perform a
judgement on their contents:
Software validation report
10. Software Assessment. The supplier shall produce the following documents and the assessor shall perform
a judgement on their contents:
Software assessment report
11. Software Maintenance. The supplier shall produce the following documents and the assessor shall perform
a judgement on their contents:
Formal Methods:
HOL
LOTOS
OBJ
temporal logic
VDM
Z
B
Semi-Formal Methods:
Logic/function block diagrams
Sequence diagrams
Data flow diagrams
Finite state machines/state transition diagrams
Temporal Petri nets
Decision/truth tables
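One of the semi-formal methods listed above, the finite state machine, can be sketched as an explicit transition table. The interlock example below is hypothetical; what makes the model useful for assessment is that every allowed state/event pair is enumerated, so undefined transitions are rejected rather than silently accepted.

```python
# Transition table for a hypothetical interlock modelled as a finite state
# machine: (current state, event) -> next state.
TRANSITIONS = {
    ("locked",   "request"): "pending",
    ("pending",  "grant"):   "released",
    ("pending",  "deny"):    "locked",
    ("released", "timeout"): "locked",
}

def step(state, event):
    try:
        return TRANSITIONS[(state, event)]
    except KeyError:
        raise ValueError(f"event '{event}' not allowed in state '{state}'")

state = "locked"
for event in ("request", "grant", "timeout"):
    state = step(state, event)
print(state)  # back to 'locked'
```

Because the table is finite and explicit, properties such as "every state is reachable" or "no event leaves the safe states" can be checked mechanically, which is what distinguishes this semi-formal notation from free prose.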
Statement list
Subset of C with coding standards
Cause-Consequence Diagrams
Event Tree Analysis
Software Error Effect Analysis
Common Cause Failure Analysis
Markov Model
Reliability Block Diagram
Field Trial Before Commissioning
7. TERMINOLOGY
7.1. Introduction
This document is a list of working definitions for terms used in the ACRuDA project related to safety critical
applications. The following principles have been used in selecting and forming these definitions:
1. Existing definitions in accepted documents (Standards for example) should be used where possible. In
these cases, the source document of definitions is indicated,
2. Where no satisfactory existing definition can be agreed, a new term or phrase should be coined rather
than using an existing one in a new or non-standard way. This will reduce confusion,
3. If different definitions exist for the same term, the different definitions are presented (each definition is
preceded by a number between brackets) and a definition in relation to the ACRuDA project will be agreed.
Accreditation:
(1) Procedure by which the technical competence and the impartiality of a testing laboratory is
recognised. Source: [ITS01].
(2) Formal recognition of the competence of a laboratory to carry out certain tests or certain established
types of tests. Source: [EN01].
4. Accreditation Body: Body which manages a laboratory accreditation system and pronounces
accreditations. Source: [EN01].
5. Accreditation Criteria (for a laboratory): Set of requirements defined and applied by an accreditation
body, which a testing laboratory must satisfy in order to be accredited. Source: [EN01].
6. Accreditation System: System with its own procedures and management rules for carrying out laboratory
accreditation. Source: [EN01].
7. Accredited Laboratory: Testing laboratory that has been accredited. Source: [EN01].
8. Apportionment: A process, whereby the RAMS elements for a system, are sub-divided between the
various items which comprise the system to provide individual targets. Source: [CEN01]
9. Approval: The status given to any product by the requisite Authority when the product has fulfilled a set
of predetermined conditions. Source: [CEN01]
10. Assurance of Conformity: Procedure resulting in a statement giving confidence that a product, process or
service fulfils specified requirements
11. Assessment:
(1) The undertaking of an investigation in order to arrive at a judgement based on evidence, of the
suitability of a product. Source: [CEN01].
(2) The process of analysis to determine whether the design authority and the validator have achieved a
product that meets the specified requirements and to form a judgement as to whether the product is fit for
its intended purpose. Source: [CEN03].
12. Assessment inputs: In the ACRuDA project, the assessment inputs are all the data necessary to achieve
the assessment (hardware, software, documentation, tools, standards, etc.).
13. Assessment Repeatability: the repetition of the assessment of the same product, with the same safety
requirements specification, evaluated by the same assessor, must give the same overall verdict as the first
assessment. Source: [ITS01].
14. Assessment Reproducibility: the assessment of the same product, with the same safety requirements
specification, evaluated by another assessor, must give the same overall verdict as the first assessor.
Source: [ITS01].
15. Assessor:
19. Behaviour: The description of any sequence of states and transitions likely to exist in one system.
20. Certification:
(1) Formal declaration which confirms the results of an assessment and the fact that the assessment
criteria were correctly applied. Source: [ITS01].
(2) Action by a third party, demonstrating that the specific sample tested is in conformity with a specific
standard or other normative document.
21. Certification Body or Notified Body: impartial and independent body which carries out certifications.
Source: [ITS01].
22. Certification System: A system that has its own rules of procedures and management for carrying out
certification of conformity. Source: [ITS01].
23. Coding: This is the work of translating the results of the detail design into a program using a given
programming language - one of the phases of the software life cycle.
24. Cohesion: The degree to which measures taken, interact with and depend on each other.
25. Commercial off-the-shelf (COTS) software: Software defined by market-driven need, commercially
available and whose fitness for purpose has been demonstrated by a broad spectrum of commercial users.
Source: [CEN02].
26. Common cause failure: A failure which is the result of an event (or events) which, because of
dependencies, causes a coincidence of failure states of components in two or more separate channels of a
redundancy system, leading to the defined system failing to perform its required function. Source: [CEN01].
27. Common Mode Failure: Failure of apparently independent components or communication links due to
an initiating event which affects them all. Source: [IDS01].
28. Common Mode Fault: Fault common to items which are intended to be independent. Source: [CEN03].
29. Compliance: A demonstration that a characteristic or property of a product satisfies the stated
requirements. Source: [CEN01].
30. Component:
(1) A part of a product that has been determined to be a basic unit or building block. A component may be
simple or complex. Source: [CEN03].
31. Configuration: The structuring and interconnection of the hardware and software of a system for its
intended application. Source: [CEN03].
32. Configuration management: A discipline applying technical and administrative direction and
surveillance to identify and document the functional and physical characteristics of a configuration item,
control change to those characteristics, record and report change processing and implementation status
and verify compliance with specified requirements. Source: [CEN01].
33. Conformity: The degree to which a given real product corresponds to its description.
34. Conformance Testing: Testing whose purpose is checking whether the system satisfies its specification.
Source: [LAP01].
35. Control Flow Analysis: Analysis of the sequence of execution in a computer program. This analysis can
show unreachable code, dynamic halts or false entry points. Source: [IDS01].
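As an illustrative sketch (this document contains no code of its own), the following Python fragment shows the kind of property a control flow analysis can detect: it walks a program's syntax tree and flags statements that follow an unconditional return, i.e. unreachable code. The `find_unreachable` helper and the sample program are invented for illustration.

```python
import ast

SOURCE = """
def check(speed_limit, speed):
    if speed > speed_limit:
        return "overspeed"
        alarm()  # unreachable: follows an unconditional return
    return "ok"
"""

def find_unreachable(source):
    """Flag statements that follow a Return or Raise in the same block."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        body = getattr(node, "body", None)
        if not isinstance(body, list):
            continue
        for i, stmt in enumerate(body[:-1]):
            if isinstance(stmt, (ast.Return, ast.Raise)):
                findings.append(body[i + 1].lineno)
    return findings

print(find_unreachable(SOURCE))  # line numbers of unreachable statements
```

Real static analysers perform the same walk over a full control flow graph, covering loops and exception paths as well.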
36. Coverage: Measure of the representativeness of the situations to which a system is submitted during its
validation compared to the actual situations it will be confronted with during its operational life. Source:
[LAP01].
37. Criterion: A standard by which a correct judgement may be formed.
38. Criticality (system): Level of safety integrity of function or component. Source: [IDS01].
39. Defensive Programming: Writing programs which detect erroneous input and output values and control
flow. Such programs prevent propagation of errors and recover by software where possible. Source:
[IDS01].
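A minimal sketch of defensive programming as defined above, assuming a hypothetical speed-setting function with invented plausibility limits:

```python
def set_target_speed(speed_kmh):
    """Defensively validate the input before acting on it."""
    # Type check: reject non-numeric input outright.
    if not isinstance(speed_kmh, (int, float)):
        raise TypeError("speed must be numeric")
    # Plausibility envelope (the 0-300 km/h limits are an assumption).
    if not 0 <= speed_kmh <= 300:
        raise ValueError("speed outside plausible range")
    return round(speed_kmh)

print(set_target_speed(120.4))  # 120
```

By rejecting erroneous values at the boundary, the function prevents a corrupted input from propagating further into the computation.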
40. Dependability: Trustworthiness of a computer system such that reliance can justifiably be placed on the
service it delivers. Source: [LAP01].
41. Dependent failure: The failure of a set of events, the probability of which cannot be expressed as the
simple product of the unconditional probabilities of the individual events. Source: [CEN01].
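The distinction can be made concrete with a small simulation. In the sketch below, two nominally redundant channels share a common-cause event; the probabilities and the Monte Carlo approach are illustrative assumptions, not taken from the cited standards.

```python
import random

random.seed(1)

P_CC = 0.01    # common-cause event, e.g. loss of a shared power supply
P_IND = 0.001  # independent failure probability of each channel

def trial():
    """One observation of a two-channel redundant system."""
    cc = random.random() < P_CC
    a = cc or random.random() < P_IND   # channel A fails
    b = cc or random.random() < P_IND   # channel B fails
    return a and b                      # both channels down together

N = 200_000
p_both = sum(trial() for _ in range(N)) / N
p_single = P_CC + (1 - P_CC) * P_IND    # failure probability of one channel

# For independent events p_both would be p_single ** 2 (about 1.2e-4);
# the common cause pushes the observed joint probability close to P_CC.
print(p_both, p_single ** 2)
</antml>```

The joint failure probability is dominated by the common cause, which is why a simple product of unconditional probabilities understates the risk of dependent failures.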
42. Design: The pre-build exercise of defining elements and their interconnection such that the product will
meet its specified requirements. Source: [CEN03].
43. Detection (error): The action of identifying that a system state is erroneous. Source: [LAP01].
44. Deterministic Testing: Form of testing where the test patterns are predetermined by a selective choice.
Source: [LAP01].
45. Development Environment: set of organisational measures, procedures and standards which must be
94. Product:
(1) Set of software and/or hardware which performs a function, designed and used or included in multiple
systems. Source: [ITS01].
(2) A collection of elements, interconnected to form a system, sub-system, or item of an equipment, in a
manner which meets the specified requirements. Source: [CEN03].
95. Proof Obligations: The requirement to prove a theorem to demonstrate the correctness of a development
step. Source: [IDS01].
96. Prototype: A rapidly produced program which is used to validate (part of) a specification. Source:
[IDS01].
97. Quality: A user perception of the attributes of a product. Source: [CEN03].
98. RAMS: An acronym meaning a combination of Reliability, Availability, Maintainability and Safety.
Source: [CEN01].
99. Random Faults: An occurrence of a fault based on probability theory and previous performance. Source:
[CEN03].
100. Random Hardware Failure: Failures, occurring at random times, which result from a variety of
degradation mechanisms in the hardware. Source: [CEN01].
109. Risk:
(1) The combination of the frequency, or probability, and the consequence of the hazardous event.
Sources: [CEN02], [CEN03], [IDS02].
(2) The probable rate of occurrence of a hazard causing harm and the degree of severity of the harm.
Source: [CEN01].
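As a purely illustrative sketch of definition (1), the following combines a frequency category and a severity category into a risk class. The categories, scoring and thresholds are invented for illustration and are not the matrices of the cited CENELEC standards.

```python
# All categories, scores and thresholds below are illustrative assumptions.
FREQUENCY = ["improbable", "remote", "occasional", "probable", "frequent"]
SEVERITY = ["insignificant", "marginal", "critical", "catastrophic"]

def risk_class(frequency, severity):
    """Combine a frequency and a severity category into a risk class."""
    score = FREQUENCY.index(frequency) + SEVERITY.index(severity)
    if score >= 6:
        return "intolerable"
    if score >= 4:
        return "undesirable"
    if score >= 2:
        return "tolerable"
    return "negligible"

print(risk_class("remote", "catastrophic"))   # undesirable
print(risk_class("frequent", "critical"))     # intolerable
```

A rare but catastrophic hazard and a frequent but marginal one can land in the same class: it is the combination, not either factor alone, that defines the risk.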
110. Risk Analysis: Analysis allowing identification of critical points and safety criteria in system elements.
This can be made at the different stages of building a system.
111. Safe State: A condition which continues to preserve safety. Source: [CEN03]
112. Safety: Freedom from unacceptable level of risk. Sources: [CEN01], [CEN02], [CEN03].
113. Safety Case: The documented demonstration that the product complies with the specified safety
requirements. Sources: [CEN01], [CEN03]
114. Safety Critical: Carries direct responsibility for safety. Source: [CEN03].
115. Safety Critical Software: Software used to implement a safety critical function. Source: [IDS01].
116. Safety Integrity:
(1) The likelihood of a system satisfactorily performing the required safety function under all the stated
conditions within a stated period of time. Sources: [CEN01].
(2) The likelihood of a safety related system achieving its required safety features under all the stated
conditions within a stated operational environment and within a stated period of time. Source: [CEN03].
117. Safety Integrity Level:
(1) One of four possible discrete levels for specifying the safety integrity requirements of the safety
functions to be allocated to the safety related products/systems. Safety integrity level 4 has the highest
level of safety integrity and safety integrity level 1, the lowest. Sources: [CEN01].
(2) A number which indicates the required degree of confidence that a system will meet its specified
safety features. Source: [CEN03].
118. Safety Involved: Carries indirect responsibility for safety. Source: [CEN03].
119. Safety Plan:
(1) A documented set of time scheduled activities, resources and events, serving to implement the
organisational structure, responsibility, procedures, activities, capabilities and resources that together
ensure that an item will satisfy given safety requirements relevant to a given contract or project. Source:
[CEN01].
(2) The implemented details of how the safety requirements of the project will be achieved. Source:
[CEN03].
120. Safety Process: The series of procedures that are followed to ensure that the safety requirements of a
product are identified, analysed and fulfilled. Source: [CEN03].
121. Safety Related: Carries responsibility for safety. Source: [CEN03].
122. Safety Related Software: Software which carries responsibility for safety. Source: [CEN02].
123. Safety Requirements: the requirements of the safety functions that have to be performed by the safety
related products/systems comprising safety functional requirements and safety integrity requirements.
Source: [CEN01].
124. Safety Requirements Specification: Specification of the safety requirements necessary for a product,
which is the basis for the assessment. The safety requirements specification must specify: the safety
integrity target, the risks, the standards and rules to apply, the safety functions, the type of application
considered (interlocking, ATP, ATC, etc.), the different configurations, and assumptions on the
environment of the product.
125. Security: Dependability with respect to the prevention of unauthorised access and/or handling of
information. Source: [LAP01].
126. Semantic Analysis: Checking the relationship between input and output for every semantically possible
path through a program, or part of a program. It can reveal semantically possible paths of which the
programmer was unaware, and coding errors. Source: [IDS01].
127. Severity (failure): Grade of the failure consequences upon the system environment. Source: [IDS01].
128. Software: Intellectual creation comprising the programs, procedures, rules and any associated
documentation pertaining to the operation of a data processing system. Source: [CEN02].
129. Software assessment: The process of product evaluation either by an official regulatory body or an
independent third party to establish that it complies with all necessary requirements, regulations and
standards.
130. Software Errors Effects Analysis (SEEA): Analysis intended to determine, inductively and similarly to
FMECA, the nature and criticality of consequences of software failures.
131. Sponsor: person or body who requests an assessment of a product. Source: [ITS01].
132. Static Code Analysis: Using mathematical techniques to analyse a program and reveal its structure. It
does not need execution of the program, but verifies the program against the specification. Techniques
include control flow, data use, information flow and semantic analysis. Source: [IDS01].
133. Static Verification: Verification conducted without exercising the system. Source: [LAP01].
134. Statistical Testing: Form of testing where the test patterns are selected according to a defined probability
distribution on the input domain. Source: [LAP01].
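A hedged sketch of statistical testing as just defined: test inputs are drawn from an assumed operational profile rather than enumerated deterministically. The profile, the toy system under test and the limits are all invented for illustration.

```python
import random

random.seed(42)

def operational_profile():
    """Draw one test input from an assumed operational distribution:
    mostly normal service speeds, occasionally boundary or invalid values."""
    if random.random() < 0.9:
        return random.uniform(0.0, 160.0)
    return random.choice([-1.0, 0.0, 160.0, 161.0, 1e9])

def under_test(speed):
    """Toy system under test: demand braking outside the valid range."""
    return "brake" if not 0.0 <= speed <= 160.0 else "ok"

results = [under_test(operational_profile()) for _ in range(1000)]
print(results.count("brake"), "braking demands out of", len(results))
```

Because inputs follow the operational distribution, the observed failure rate of such a campaign can be related statistically to the rate expected in service, which deterministic test selection does not provide.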
135. Structural Testing: Form of testing where the testing inputs are selected according to criteria relating to
the system's structure.
136. Sub-system: A portion of a system which fulfils a specialised function. Source: [CEN03].
137. Supplier: in the ACRuDA project, the supplier is the person or body who builds and sells a product or a
system. In the case of a product, the requirement, design, and validation phases are achieved by the
supplier. In the case of a system, the requirements can be defined by the end user.
138. System:
(1) A set of sub-systems or elements which interact according to a design. Source: [CEN03].
(2) A Specific installation with a particular goal and a particular operational environment. Source:
[ITS01].
139. Systematic Failure: Failures due to errors in any safety life cycle activity, within any phase, which
cause the system to fail under some particular combination of inputs or some particular environmental
condition. Source: [CEN01].
140. Systematic Faults: An inherent fault in the specification, design, construction, installation, operation or
maintenance of a system, sub-system, or equipment. Source: [CEN03].
141. Testing: Dynamic verification performed with valued inputs. Source: [EN01].
142. Testing Laboratory: Laboratory which carries out tests. Source: [EN01].
143. Traceability: The ability to trace the history, application or location of an item or activity, or similar
items or activities, by means of recorded identifications. Source: [NFX01].
144. Validation:
(1) Confirmation by examination and provision of objective evidence that the particular requirements for
a specific intended use have been fulfilled. Source: [CEN01].
(2) The activity of demonstration, by test and analysis that the product meets in all respects its specified
requirements. Source: [CEN03].
145. Validator: The person or agent appointed to carry out validation. Source: [CEN01], [CEN02], [CEN03].
146. Verification:
(1) Confirmation by examination and provision of objective evidence that the specified requirements have
been fulfilled. Source: [CEN01].
(2) The activity of determination by analysis and test that the output of each phase of the life cycle fulfils
the requirements of the previous phase. Source: [CEN02], [CEN03].
147. Verifier: The person or agent appointed to carry out verification. Source: [CEN01], [CEN02], [CEN03].
8. References
8.1. European Council Directives
[DIN01] Council directive 96/48/EC: « Interoperability of the European High speed train network », 23 July
1996.
[DIN02] Council directive 93/465/EC: « Modules related to the different phases of assessment procedures of
conformity and rules of affixing and using CE mark, intended to be used in the technical harmonisation
directives », 22 July 1993.
[DIN03] Council directive 90/531/EEC: « Procurement procedures of entities operating in the water, energy,
transport and telecommunications sectors », 17 September 1990.
8.3. Standards
[CEN01] prEN 50126 - CENELEC: « Railway Applications: The specification and demonstration of
dependability, reliability, availability, maintainability and safety (RAMS) », June 1997.
[CEN02] prEN 50128 - CENELEC: « Railway Applications: Software for Railway Control and Protection
System », January 1997.
[CEN03] prENV 50129 - CENELEC: « Railway Applications: Safety Related Electronic Systems for Signalling
», Version 1.0 - January 1997.
[EN01] EN 45001: « General criteria for the operation of testing laboratories », 1989.
[EN02] EN 45002: « General criteria for the assessment of testing laboratories », 1989.
[EN03] EN 45003: « General criteria for the accreditation body of testing laboratories », 1989.
[EN04] EN 45011: « General criteria for certification bodies operating product certification », 1989.
[EN05] EN 45012: « General criteria for certification bodies operating certification of quality system », 1989.
[EN06] EN 45013: « General criteria for certification bodies operating certification of personnel », 1989.
[EN07] EN 45014: « General criteria for declaration of conformity by the suppliers », 1989.
[IDS02] Interim Defence Standard 00-56 (Draft): « Hazard analysis and safety classification of the computer
and Programmable Electronic System elements of Defence Equipment », Issue 1, 5 April 1991.
[ACR02] ACRuDA Project: « State of the Art - Method Synthesis », 29 September 1997. Reference:
ACRuDA/INRETS/PM-MK/WP1/D2/97.39/V3
Part 1: Rules,
Part 2: Guidelines,
Part 3: Examples.
[ITS02] ITSEM: « Information Technology Security Evaluation Manual », Version 1.0 - September 1993.
8.8. Others
[AQC01] « Assurance Qualité en Conception » (« Quality Assurance in Design »). Marc Reynier. Collection
MASSON.
[LAP01] J.C. LAPRIE: « Dependability: Basic Concepts and Terminology », Springer-Verlag, Wien, New York,
1992.
This section provides an overview of the plan and includes any necessary background material
Aims and Objectives
Scope of the plan
The policy and strategy for achieving safety
This section will provide the policy and strategy for achieving safety, together with the means for
evaluating its achievement, and the means by which this is communicated within the organisation to
ensure a culture of safe working.
Assumptions and Constraints
This part of the plan will detail any assumptions being made in connection with the product development
together with the constraints under which the safety construction is to be conducted.
Interfaces with other related programs and plans
Applicable standards
The main design characteristics of the product and its main applications are described in this section
The main selected safety measures and techniques
The main selected measures and techniques used to meet the safety requirements should be listed in the
Safety Plan
The roles, responsibilities, competencies and relationships of the bodies undertaking tasks within the life
cycle will be described in this section, identifying the persons, departments, organisations or other
units responsible for carrying out and reviewing each of the safety life-cycle phases. The relationships
between these bodies will also be described.
Qualification and training of the staff
The procedures for ensuring that all staff involved in safety life-cycle activities are competent to carry out
the activities for which they are accountable (competence of persons).
This section outlines the safety life cycle and safety activities to be undertaken within the life cycle along
with any dependencies
The safety analysis, engineering, verification and validation
The analyses and validations to be applied during the life cycle, should be clearly identified and should
take into account the processes for ensuring an appropriate degree of personnel independence in tasks and
necessary safety reviews, to demonstrate compliance of the management process with the Safety Plan
This section presents the list of main documents to be produced, the milestones, and the mechanism for
documentation review and acceptance.
Requirements for periodic safety validation and safety review
This section contains the planned safety reviews throughout the life cycle, appropriate to the safety
relevance of the element under consideration, including any personnel independence requirements. These
reviews shall be implemented, reviewed and maintained throughout the life cycle of the system.
The Safety Plan should include a safety case plan, which identifies the intended structure and principal
components of the final safety case.
The mechanism to prepare the Safety Case
The procedures for the preparation and scheduling (drafts, number of versions, etc.) of the safety case
should be described in the Safety Plan.
The procedure for maintaining accurate documentation on potential hazards, safety related systems and
external risk reduction facilities.
Each of the components of the Safety Plan shall be formally reviewed by the organisations concerned and
agreement gained on the contents.
Chapter 1 - Contents
The section should contain a description of how the safety case has been constructed and how it will
demonstrate that the product meets the safety requirements for its intended purpose.
This list of documents comprises all the documents which form the safety case or support the safety
arguments contained in the safety case. The structure of this list must be consistent with the
structure of the safety case. If the safety case is self-contained, this section will not be necessary.
Safety Process
This section shall describe the process (plan, execution and evidence) through which the required
level of safety has been met.
Glossary
A comprehensive glossary should be included to provide a clear definition of all technical terms
used.
Organisation Documentation
This should describe the organisational structure under which the product has been developed, and
define the roles, responsibilities and reporting structure of personnel involved in management, quality,
development, safety, maintainability, reliability and user support.
Development Plan
This defines the development of the product in terms of development stages and establishes the criteria
for demonstration and acceptance that each stage has been completed. This is a "living" document which
must reflect not only the original plan, but also the actual life cycle of the development that took place.
Quality Plan
The Quality Plan defines the quality requirements that will be applied to all aspects of the work in
developing the product. This will include the Quality Management System (QMS) used on the project
together with a traceable path to enable demonstration that the QMS is in accordance with EN 29001 and
related standards.
Safety Plan
The Safety Plan defines the way in which the safety of the product is to be assured. Details of techniques
and processes to be used, at what stage they are to be used and how the findings of each analysis are to be
addressed as part of the development process shall be described.
Refer to chapter 4.4.2 and the ANNEX I of this document, for a detailed description of the Safety Plan
Structure. A clear description of the safety case structure should also be included within the Safety Plan.
Those elements of the V & V Plan which relate directly to safety requirements may be addressed or
referenced in this document.
Verification and Validation Plan
This document defines the objective and approach to be adopted in demonstrating that the requirements
described in the Requirement specification documentation and safety criteria drawn from the various
safety analyses have been met. Procedures for, and evidence of, traceability of specific requirements to
particular test elements of V&V activities shall be briefly described, and appropriate detailed
documentation should be referenced. This document should address the V & V of all requirements
including those relating to safety which may have been covered in the Safety Plan.
Configuration Management Plan
This document describes the principles and processes by which the build standard of, and changes to, the
product under consideration has been controlled throughout its lifecycle, from conception through detailed
specification, design, build and validation. The Configuration Management Plan should detail the timing of
design reviews, configuration baselines, status reporting mechanisms and procedures for deviation from
prescribed processes. This document is vital since traceability is a central requirement of a Safety Case
and rigorous traceability is only truly achievable when all evidence is from configured sources.
Despite the rigorous application of management processes, there will be a number of occasions in any
development activity when deviation from strict requirements will be unavoidable in order to maintain
control of overall schedule and cost. For example, the design process in a particular development may
dictate that all safety analyses are completed before a design is committed to production build. Events
may be such that the delays caused in adhering strictly to this requirement could seriously impact the
programme. To deal with this situation, a formal mechanism which manages deviation from procedures or
requirements should be put in place. This will detail the mechanism for recording the assessment,
acceptance and resolution or disposition of the deviation. This deviation procedure should be described
together with a reference to the list of deviations raised during the development together with their
resolution.
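One possible shape for such a deviation record, sketched as an illustrative data structure (all field names and values are assumptions, not prescribed by this methodology):

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Deviation:
    """One entry in a deviation log; field names are illustrative."""
    identifier: str
    requirement: str      # the procedure or requirement deviated from
    justification: str    # why the deviation was unavoidable
    assessment: str       # recorded impact assessment
    disposition: str = "open"   # open / accepted / resolved
    raised: date = field(default_factory=date.today)

log = [
    Deviation(
        identifier="DEV-001",
        requirement="all safety analyses complete before production build",
        justification="schedule pressure on the prototype build",
        assessment="no safety impact: analyses completed before validation",
    )
]
log[0].disposition = "accepted"   # formally accepted after review
print(log[0].identifier, log[0].disposition)
```

Keeping every deviation, its assessment and its final disposition in one configured log is what lets the safety case later reference the complete list of deviations with their resolutions.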
Clearly, because of the complex functionality of the product and its development process, each plan
described above is likely to comprise a number of individual documents, which should be controlled as
part of the overall configuration management system (as should all documentation). The structure of any
such plan should be described within the Safety Case.
Evidence should be included to demonstrate that the supplier has fully assessed that the use of the
assessed/certified element in the product is entirely within the functional and safety related application
conditions specified for that element.
1. Product designs are the intellectual property of the suppliers, and any critical details which are proprietary
to the supplier may not be included.
2. A self-contained safety case would be extremely bulky for a complex product, making distribution and
availability difficult. Changes would be harder to incorporate.
3. A Safety Case of this nature would be difficult to produce as the lifecycle of the product itself develops,
and would increase the tendency to produce the Safety Case at the end of the lifecycle.
4. It is not clear how the traceability of the data contained in the safety case could be shown with no
external referencing.
The second viewpoint is that the Safety Case is a reference document describing the documentation relating to
the product which is needed to demonstrate the safety of that product.
A high level description of the product, and the scope and limitations of the safety analyses, shall be given
in this section. The preliminary hazard list, identification of the system boundary with respect to hazards
and classification of those hazards in terms of risk and consequence should also be presented.
Safety Objective
The safety objectives must be clearly stated and a description included of the method for allocating the
safety objectives from the top level product requirements to lower levels of the product. Reference should
be made to documentation detailing functional and safety requirements for all elements of the product,
e.g. between and within hardware and software.
Description of the Architecture of the Product
The structure, functionality, operation, interface, and the environment envelope of the product shall be
described. This shall include a list of applications for which the product was considered or envisaged by
the designers. For each application, the element of configurability/adaptability must be considered and
described.
Functional Elements
Modes of operation including restricted and degraded operational modes shall be described together
with the failure modes of each of these.
Evidence must be provided that demonstrates all functional requirements are met and that the
design process to achieve the specified functionality is such that the "services" the product
provides, meet Safety Integrity Level 4 (SIL4). Consideration of alternative designs or approaches
should be presented together with the rationale for the approach implemented.
Correct and full operation at the limits of "normal conditions" must be demonstrated.
The safety properties (or principles) such as redundancy, diversity, error detection, self test,
information redundancy, shall be described together with the safety rationale for the design concept
and the underlying assumptions about what is "safe". Reference shall be made to documentation
containing quantitative analysis which supports the product design approach.
The effects of faults due to hardware or software must be comprehensively analysed, and evidence
that the effects have been addressed in the design should be presented. Established analyses such as
HAZOP, HAZID, Hazard Analysis, Fault Tree, FMECAs, shall be described and used as part of the
Safety Assurance lifecycle which shall be described in the Safety Plan documentation. A
description of how the results of these analyses were fed back to the design should be given.
Susceptibility of the product. The suppliers must demonstrate that the product functions correctly
within the range of specified external influences, e.g. temperature, vibration, humidity,
electromagnetic radiation (EMC), power supply variation, contaminants, etc. They must also
demonstrate that for conditions outside the defined specification for normal operation, the product
will default to a defined, safe state and upon return to normal operating conditions, the product will
behave in a safe predictable manner.
Influences upon susceptibility of other equipment. The supplier must define the characteristics of
the product which could influence the operation of the system, application or other equipment e.g.
thermal radiation, electromagnetic emission, etc. Evidence of testing to confirm that these are as
defined should be provided.
This section of the safety case should contain a description of the safeguards in the design which
protect the safety properties of the product from compromise during the implementation of the
product. For example, a product may have provision to accept site specific software. Reference
should be made to evidence that demonstrates that the safety functionality of the product is
unaffected (ring-fenced) when used within its defined range of application. Any mechanism to
indicate that the ring-fence has been breached should be given in this section.
Verification and validation reports shall be included or referenced. V&V activities must demonstrate that
at every level, each requirement within the product has been tested and that there is traceability of
evidence from the highest level requirement through to final testing. The key objective of this activity is to
ensure that there is clear evidence that the product conforms with every requirement.
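The traceability obligation described above can be sketched as a simple coverage check from requirement identifiers to the test cases that claim to cover them; all identifiers below are hypothetical.

```python
# Hypothetical identifiers: requirements and the tests claiming to cover them.
requirements = {"SR-001", "SR-002", "SR-003"}
test_coverage = {
    "TC-10": {"SR-001"},
    "TC-11": {"SR-001", "SR-003"},
}

# Union of everything any test traces back to.
covered = set().union(*test_coverage.values())

# Requirements with no traceable test evidence must be flagged.
untested = sorted(requirements - covered)
print(untested)
```

In practice this check is run at every level of the requirement hierarchy, so that the chain of evidence from the highest level requirement down to final testing has no gaps.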
Components (i.e. COTS items: resistors, ICs, ASICs, capacitors, gaskets, etc.) are very "fuzzy" items in that they
have a vast range of parameters which specify their performance. Many of these parameters are a function of
the fabrication process rather than the design and so can vary from supplier to supplier. These parameters need
to be considered by the designer prior to inclusion in the design. Further, suppliers of complex components such
as microprocessors will release only very limited data about the parametric performance of their devices due to
commercial sensitivity. In all cases, availability must be a major consideration in component selection and
multiple sourced components from suppliers with a track record of long term product support should be
considered.
Evidence must be included to show that a formal parts and materials selection process has been established and
followed. The process should include detailed criteria for selection of components and assessment of suitability
by parties other than the design team. Policy and approach taken to minimise the effect of obsolescence should
also be discussed.
enable any modification of the element to be undertaken and the revised safety case prepared.
Ideally, the most straightforward situation is when all product documentation is available to the operator,
who is then equipped to make safety assessments of modifications at all levels of the system. In practice,
documentation relating to significant elements of a product will not be available because of commercial
confidentiality issues. In this situation, the responsibility for updating the safety case may reside with the supplier.
Modification may be required many years after initial delivery and, because any changes to the product could
affect safety, it is essential that all documentation is available. The safety case should, therefore, contain a
discussion on how issues of availability of documentation for design changes after hand-over, have been
addressed. This should include commercial aspects such as copyright, design authority, non-disclosure
agreements and intellectual property rights together with model agreements for dealing with these issues. As part
of this, there should be a clearly defined process to address the correction of errors which may be discovered
after hand-over of the product safety case.
The safety case should include a model programme covering technical support to the user, repair capability,
repair times and calibration of support equipment supplied, including any test equipment and support tools
(hardware and software). The model should also consider the needs of in-service support which in addition to
the above, will cover issues such as spares holdings, spares manufacturing capability including ownership of
support equipment and retention of repair facilities.
The nature and level of support provided by the supplier will be the subject of commercial discussion between
supplier and user. However, while support issues may not directly impact the safety related functioning of the
product, the lack of a clear strategy and of adequate contractual agreements for user support will almost
certainly result in significant difficulties in obtaining appropriate documentation and expert technical help. This
in turn can lead to poorly effected modifications and flawed safety arguments. Ultimately, this could affect the
safety of the system. It is, therefore, essential that a structure for post hand-over support is developed and
presented as part of the safety case.
Chapter 1 - Introduction
Context
A description of the context of the assessment is presented in this section. This description must contain:
Objectives
The assessment process should be considered as a project. The management of this process should therefore follow
the project management requirements.
Special provisions may be made in the quality process: a specific project organisation of the staff, specific
confidentiality agreements and configuration management requirements can be identified in this chapter in addition
to the provisions of the Quality Handbook.
It is recommended to define a specific assessment steering committee. The members should be identified by
name.
In the case of a simultaneous assessment, the relations between the development phases of the architecture and the
assessment activity must be described carefully in this chapter. It is recommended that the assessment starts
during the specification phase of the architecture. Special care must be taken with the modification process in
the development: the assessor has to define how modifications made during development affect his
investigations.
In the case of a consecutive assessment, this chapter must identify in the schedule the review meetings with the
supplier and the supplier's activities needed to meet the assessment requirements.
The estimate includes a schedule of the activity. This schedule must describe the duration of each activity and
the effort, in man-months, allocated to each activity.
It is recommended to identify by name the assessors who will work on each activity
(including subcontractors).
Underestimating times and costs risks lowering the overall quality of the assessment (e.g.
non-exhaustive safety analyses).
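As an illustration only (not part of the ACRuDA methodology itself), the schedule and effort bookkeeping described above can be sketched as a small table of activities; all activity names, durations and figures below are hypothetical:

```python
# Illustrative sketch: recording assessment activities with their duration,
# effort in man-months, and named assessors. All values are hypothetical.

activities = [
    # (activity, duration in weeks, effort in man-months, named assessors)
    ("Examine product definition",        4, 1.0, ["Assessor A"]),
    ("Examine safety requirements spec.", 6, 2.0, ["Assessor A", "Assessor B"]),
    ("Preliminary architecture analysis", 8, 3.5, ["Assessor B"]),
]

# Total effort allocated across the assessment, for the estimate section.
total_effort = sum(effort for _, _, effort, _ in activities)

print(f"Total effort: {total_effort} man-months")
for name, weeks, effort, staff in activities:
    print(f"  {name}: {weeks} weeks, {effort} man-months ({', '.join(staff)})")
```

Keeping the per-activity effort explicit makes it easy to see when a cut in the budget would fall on a specific safety analysis, which is exactly the underestimation risk noted above.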
The roles, responsibilities, competencies and relationships of all the assessors are given in this chapter.
This chapter also gives the procedure for ensuring that all staff involved in the safety activities are competent to carry out the activities for
which they are accountable.
This section shall contain a summary of the operational role and the functions of the product.
History of development
This section shall present the development phases of the product (with tools, standards, methods, techniques,
etc.).
This section shall present the high-level architecture of the product: the separation between safety and non-safety
components, and the apportionment of the safety functions between hardware and software. A description of the
hardware and software must be given, in particular of the components involved in safety.
If external assessors take part in the assessment process, a procedure for collecting all the reports and composing the final
report must be defined. This work is the responsibility of the notified body.
The minimum documentation supplied is that identified in the safety case (chapter 4.4.3 of the
present guide).
Several assessment activities can be defined. All these activities are based on the criteria.
Preliminary activities
examine the definition of the product to be assessed: the assessor should evaluate the relevance of the
product definition from the point of view of safety. For example, the assessor may consider that some
elements should be integrated into the definition of the product to be assessed because they have a potential impact
on the safety of the whole architecture.
examine the safety requirements specification (safety integrity target, standards, risks, etc.): the
assessment plan must explain whether the allocation of the safety requirements is taken as an input to the
assessment or whether this activity will be examined by the assessor.
preliminary analysis of the architecture: a system approach is recommended for the assessment of
complex architectures, that is, first a global understanding of the architecture through a global safety
study, and then a detailed understanding of each component.
verification of the set of criteria
Where relevant, the assessor should prefer to evaluate the supplier's results using an evaluation method
different from the one used by the supplier.
The report should comply with the [EN01] standard (chapter 5.4.3 of the standard).
Chapter 6 - Conclusions
In the conclusion of the assessment plan, the assessor should give a justification that the process defined above
covers all the risks identified in the safety requirements specification. This justification should be based only on
the work of the supplier examined by the assessor and on the assessor's own results.
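The coverage justification described above is essentially a traceability argument: every risk identified in the safety requirements specification must be addressed by at least one assessment activity. A minimal sketch of such a check, with purely hypothetical risk identifiers and activity names:

```python
# Illustrative sketch of a risk-coverage (traceability) check: every risk
# from the safety requirements specification must be examined by at least
# one assessment activity. All identifiers below are hypothetical.

identified_risks = {"R1", "R2", "R3"}

# Mapping from assessment activity to the risks it examines.
activity_coverage = {
    "Examine product definition":        {"R1"},
    "Preliminary architecture analysis": {"R2", "R3"},
}

covered = set().union(*activity_coverage.values())
uncovered = identified_risks - covered

if uncovered:
    print("Uncovered risks:", sorted(uncovered))
else:
    print("All identified risks are covered by the assessment process.")
```

An empty `uncovered` set corresponds to the justification the assessor must provide; any remainder would have to be argued explicitly in the conclusion.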
The main conclusion of the assessment states whether the criteria are verified and whether there is any risk of
hazardous failure. The recommendations can recall that the results of the assessment are valid only for a particular
version of the product configured in a certain way.
This annex must identify all the acronyms and terms used in the technical assessment report.
Chapter 1 - Introduction
Context
A description of the context of the assessment is presented in this section. This description must contain:
Objectives
This section must specify all the assessment tasks covered by the technical assessment report. In general, it
covers the whole assessment. If this is not the case, justification must be given.
Organisation
Chapter 2 - Summary
This chapter is the basis of all information on the results of the assessment published by the assessor. This
summary must therefore not contain confidential information on the product (commercial, technical, etc.).
History of development
This section must present the development phases of the product (with tools, standards, methods, techniques,
etc.).
This section must present the high-level architecture of the product: the separation between safety and non-safety
components, and the apportionment of the safety functions between hardware and software. A description of the
hardware and software must be given, in particular of the components involved in safety.
Description of hardware
This section gives details on all the hardware components necessary for the assessment.
Description of software
This section gives details on all the software components necessary for the assessment.
A good understanding of the content of the safety requirements specification is necessary to understand the
assessment report.
This chapter either refers to the safety requirements specification or restates it in full.
Chapter 5 - Assessment
History of the assessment
This section is structured like chapter 3. It must present the assessment process and its main stages:
as expected and defined in the assessment plan, at the beginning of the assessment,
as actually reached during the assessment.
These main stages can be: meetings, deliveries, end of technical work, etc.
Assessment procedure
A summary of the assessment plan must be presented in this section: the assessors' tasks defined in the
assessment plan and the work packages achieved. All differences between the work proposed in the
assessment plan and the work actually achieved must be recorded and justified.
A summary of the conformity of the assessment inputs actually delivered with those expected must be given. All
differences between the delivered assessment inputs and the expected assessment inputs must be recorded and
justified.
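The comparison of expected versus delivered assessment inputs is a simple set difference in both directions; a minimal sketch, with hypothetical document names:

```python
# Illustrative: recording differences between the assessment inputs expected
# (as listed in the assessment plan) and those actually delivered.
# All document names below are hypothetical.

expected  = {"Safety Plan", "Safety Case", "Test Report", "Quality Plan"}
delivered = {"Safety Plan", "Safety Case", "Test Report", "Validation Report"}

missing = sorted(expected - delivered)   # expected but not delivered
extra   = sorted(delivered - expected)   # delivered but not expected

print("Missing inputs:", missing)
print("Unexpected inputs:", extra)
```

Both lists must be empty for full conformity; any non-empty list is a difference to be recorded and justified in this section.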
This section must identify, clearly and precisely, all the components of the assessed product and all the
hypotheses made about the components not assessed.
Each sub-section of the chapter must contain the name of the assessor and the reference of the work package.
All the criteria must be covered.
The main conclusion states whether the criteria are verified and whether there is any risk of hazardous failure. The recommendations can contain suggestions addressed to the sponsor
and the supplier, and recall that the results of the assessment are valid only for a particular
version of the product configured in a certain way.
Chapter 1 - Introduction
Objectives
References
This section defines the list of all the references used in the certification report.
Chapter 2 - Results
Conclusion
the precise identification of the product (with the identification number, version number, etc.),
the conclusions in terms of remaining risks/scenarios,
the recommendations for use.
This section must contain a detailed description of the operational role, the components and the functions of the
product.
Description of hardware
Description of software
This section gives details on all the software components assessed.
This section gives details on all the documentation associated with the product. The minimum is the user
documentation.
Chapter 4 - Assessment
Technical assessment report
The reference(s) of the technical assessment report.
Recommendations can be made on the configuration and safe use of the product, notably by describing
procedural, technical and organisational measures. Some recommendations may result in a restriction on the use of the
product.
Chapter 6 - Certification
This chapter describes the scope of the certificate.