Common Mistakes in Method Validation and How to Avoid Them – Part 1: Specificity
The validation of analytical methods is undoubtedly a difficult and complex
task. Unfortunately, this means that mistakes are all too common. This
series of articles presents some examples for discussion, relating to the
method performance characteristics listed in the current ICH guidance, ICH
Q2(R1), namely: specificity; robustness; accuracy; precision; linearity;
range; quantitation limit; and detection limit.

In this first instalment we will consider some mistakes associated with
‘Specificity’. This characteristic is evaluated for both qualitative and
quantitative methods but the aim is different for each. For qualitative
methods, the aim is to demonstrate that the method can provide the correct
information, e.g., an identification method. For quantitative methods, the
aim is to demonstrate that the final result generated by the method is not
affected by any of the potential interferences associated with the method.

Generally, I find that mistakes relating to specificity arise from a basic lack of
understanding about what is required to demonstrate that the method is
satisfactory. I have selected the following three examples because they are
ones that I regularly encounter when advising people on method validation,
during both consultancy and training courses.

1. Not setting appropriate acceptance criteria

2. Not investigating all the potential interferences

3. Not considering potential changes that could occur in the sample/method being tested

Mistake 1: Not setting appropriate acceptance criteria

When the results of a validation study don’t comply with the acceptance
criteria defined in the protocol, then either the method is not suitable for its
intended use, or the acceptance criteria set in the protocol were
inappropriate. I am often asked for help on how to explain why it’s okay that
results did not meet the acceptance criteria, and not just for specificity. The
usual reason for this problem is that generic acceptance criteria were used,
typically predefined in an SOP, and no evaluation of their suitability for the
method being validated was performed.

Example 1: An identification method by FTIR, which was based on a
percentage match with standard spectra in a database, was being validated.
The validation failed because the acceptance criterion for the percentage
match was set at 98% and the match in the validation study was always in the
region of 97%. On investigation it was determined that the percentage match
of 98% had no scientific justification; it was just what had been used before.
No investigation of the method had been performed prior to the validation.
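Match algorithms differ between instruments and library-search packages. As a
minimal sketch, assuming a simple correlation-based hit quality index (a common
but by no means universal approach; real instrument software may use a different
algorithm), a percentage match might be computed along these lines:

```python
# Minimal sketch of a correlation-based spectral match score.
# Real FTIR library-search software may use a different algorithm.
import numpy as np

def percent_match(sample: np.ndarray, reference: np.ndarray) -> float:
    """Correlation of two absorbance spectra recorded on a common
    wavenumber axis, expressed as a percentage."""
    r = np.corrcoef(sample, reference)[0, 1]
    return 100.0 * r

# Illustrative values only; real spectra contain thousands of points.
reference = np.array([0.10, 0.45, 0.80, 0.30, 0.05])
sample = np.array([0.12, 0.44, 0.78, 0.33, 0.06])
print(f"Match: {percent_match(sample, reference):.1f}%")
```

Whatever the algorithm, the point stands: the threshold should be set from the
distribution of match scores observed for genuine samples during development,
not inherited unexamined from a previous method.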

Example 2: A chromatographic impurities method was being validated. The
method validation SOP defined that impurity peaks should have a resolution
of 1.5 and thus an acceptance criterion of 1.5 was set in the validation
protocol. During the validation study, one of the impurity peaks had a
resolution of 1.4. On review of the method development information, it was
found that the resolution of this peak was always around 1.4 and the
chromatography had been considered acceptable, but this information had
not made it into the validation protocol.
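For context, the resolution between two adjacent chromatographic peaks is
conventionally calculated from their retention times and baseline peak widths
(the tangent method used in the pharmacopoeias):

```latex
% Resolution between adjacent peaks 1 and 2, where t_R is the
% retention time and w the baseline (tangent) peak width:
R_s = \frac{2\,(t_{R,2} - t_{R,1})}{w_1 + w_2}
```

A value of 1.4 rather than 1.5 therefore represents only a marginally smaller
separation, which is exactly why a generic criterion needs to be checked against
what the method was shown to deliver during development.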

TIP: Review all the acceptance criteria defined in the validation protocol
against what is known about the method. Assess whether the criteria are
reasonable, in terms of the method capability and what is considered
acceptable. The use of generic acceptance criteria can be a very useful
strategy as long as they are applied in a scientific manner, by assessing what
is known about the actual method being validated.

Mistake 2: Not investigating all the potential interferences

In order to demonstrate that the final result generated by the method is not
affected by potential interferences, it is essential that all the potential
interferences are considered. This can sometimes be difficult for complex
sample matrices so it is important to identify the constituents of the sample
matrix as fully as possible. Additionally, it is easy to overlook other sources of
interference that may be introduced as part of the method, such as solvents,
buffers, derivatisation reagents, etc.

TIP: Carry out a thorough review of all potential interferences when designing
the validation protocol, particularly if the sample matrix is complex in nature,
or if the sample preparation involves the use of multiple reagents.

Mistake 3: Not considering potential changes that could occur in the sample being tested

The potential interferences that are present in a sample matrix can change
due to changes in the sample composition. The most common example of
this situation is probably sample degradation. In situations where a method
will be used for samples of different ages, such as in a stability programme, it
is essential that this is taken into account during validation and that it is
demonstrated that the method can be used for any sample which may
require analysis.

This means that for some methods, particularly those which are considered
to be stability indicating, the specificity section of the validation protocol
should include experiments to gather evidence to prove that the method
may be successfully used for stability analysis. For methods which analyse the
degradation products it would be expected that forced degradation studies
were performed during method development to allow the creation of a
method that can separate all the components of interest. For other methods
this may not have been necessary in method development but a forced
degradation study may now be required as part of method validation to
demonstrate that the method is stability indicating.

TIP: Consider the long term use of a method when designing the validation
protocol. What samples will be tested and are there any anticipated changes
that could occur to the samples that would affect the potential interferences
for the method? If the method is to be used for stability testing, are there any
additional requirements, such as a degradation study?

Part 2: Robustness
In the previous instalment I wrote about some common mistakes associated
with ‘Specificity’. This time I’ll take a look at ‘Robustness’. The common
mistakes that I have selected for discussion are:

1. Investigating robustness during method validation

2. Not investigating the right robustness factors

3. Not doing anything with the robustness results

The purpose of a robustness study is to find out as much as possible about
potential issues with a new analytical method and thus how it will perform in
routine use. Usually, we deliberately make changes in the method
parameters to see if the method can still generate valid data. If it can, it
implies that in routine use small variations will not cause problems. The ICH
guideline provides the following definition: “The robustness of an analytical
procedure is a measure of its capacity to remain unaffected by small, but
deliberate variations in method parameters and provides an indication of its
reliability during normal usage.”

There is another aspect to robustness that doesn’t fit neatly under this
definition: the performance of consumable items used in the method, such
as chromatography columns. The performance of the column may vary when
different batches of the same column packing are used.
Although column manufacturers aim for batch to batch reproducibility, most
practitioners of HPLC will have come across at least one example of this
problem. Another issue is the ageing of the column: column performance
generally decreases with age and at some stage the column will have to be
discarded. Strictly speaking, these column challenges would actually come
under the heading of intermediate precision, following the ICH guideline, but
it makes much more sense to investigate them during method development
as part of robustness.

The method validation guidelines from both ICH and FDA stress the
importance of robustness and describe it as a method development activity,
but they do not define whether it needs to be performed under a protocol
with predefined acceptance criteria. Since the use of a protocol is the typical
approach in most pharma companies, this brings me to my first common
mistake associated with robustness.

Mistake 1: Investigating robustness during method validation

What I mean by this is that the robustness investigation is performed during
the method validation, i.e. the outcome of the investigation is not known in
advance. I do not mean the approach where the robustness has already been
fully investigated and then it is included as a section in the validation
protocol for the sole purpose of generating evidence which can be included
in the validation report.

If robustness is investigated during validation for the first time, the risk is that
the method may not be robust. Any modifications to improve robustness
may invalidate other validation experiments since they are no longer
representative of the final method. It will of course depend on what
modifications have to be made. As FDA suggests… “During early stages of
method development, the robustness of methods should be evaluated
because this characteristic can help you decide which method you will submit
for approval.”

TIP: If for some reason robustness hasn’t been thoroughly evaluated in
method development, then investigate it prior to execution of the validation
protocol using a specific robustness protocol. If any robustness issues are
identified, these can be resolved prior to the validation. The nature of the
robustness problems will determine whether the resolution is just a more
careful use of words in the written method or if method parameters need to
be updated.

Mistake 2: Not investigating the right robustness factors

If you choose the wrong factors you may conclude that the method is robust
when it isn’t. Typically what happens then is that there are a lot of
unexpected problems when the method is transferred to another laboratory,
and since transfer is a very common occurrence in pharma, this can be very
expensive to resolve.

When choosing robustness factors it is tempting to read through the method
and select all the numerical parameters associated with instrumentation. For
example, when assessing HPLC methods there is a tendency to only look at
the parameters of the instrument without consideration of the other parts of
the method, such as the sample preparation. Unfortunately, sample
preparation is an area where robustness problems often occur. Detailed
knowledge of how the method works is required to identify the most
probable robustness factors.

TIP: The most important factors for robustness are often those which were
adjusted in method development. Review all the steps in the method to
choose robustness factors and use a subject matter expert to help if
necessary.
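
Once the factors have been chosen, laying out the deliberate variations
systematically helps keep the study manageable. A minimal sketch, assuming an
HPLC method with illustrative factors and ranges (not taken from any real
method):

```python
# Enumerate a two-level full factorial of deliberate variations
# around the nominal method conditions. Factor names and ranges
# here are illustrative assumptions only.
from itertools import product

factors = {
    "mobile phase pH": (2.9, 3.1),        # nominal 3.0 +/- 0.1
    "column temperature (C)": (28, 32),   # nominal 30 +/- 2
    "flow rate (mL/min)": (0.9, 1.1),     # nominal 1.0 +/- 0.1
}

# 2^3 = 8 runs; with more factors, a fractional factorial or
# Plackett-Burman design keeps the number of runs practical.
for run, levels in enumerate(product(*factors.values()), start=1):
    print(f"Run {run}: {dict(zip(factors, levels))}")
```

Sample preparation factors (extraction time, sonication, filter type) can be
screened in exactly the same way and, as discussed above, are often where the
real problems hide.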

Mistake 3: Not doing anything with the robustness results

The reason for investigating robustness is to gain knowledge about the
method and to ensure that it can be kept under control during routine use.
Very often robustness data is presented without any comments in the
validation report and is not shared with the analysts using the method. This
tick-box approach may be in compliance with regulatory guidance but it is
not making the most of the scientific data available. The discussion of the
method robustness in the validation report should be a very useful resource
when the method needs to be transferred to another laboratory and will
assist in the risk assessment for the transfer.

TIP: Review the robustness data thoroughly when it is available and ensure
that there is a meaningful discussion of its significance in the validation
report.

Part 3: Accuracy
In previous articles I wrote about some common mistakes associated with
‘Specificity’ and 'Robustness'. This time I’ll take a look at ‘Accuracy’. The
common mistakes that I have selected for discussion are:

1. Not evaluating accuracy in the presence of the sample matrix components

2. Performing replicate measurements instead of replicate preparations

3. Setting inappropriate acceptance criteria

The definition of accuracy given in the ICH guideline is as follows: ‘The
accuracy of an analytical procedure expresses the closeness of agreement
between the value which is accepted either as a conventional true value or
an accepted reference value and the value found.’ This closeness of
agreement is determined in accuracy experiments and expressed as a
difference, referred to as the bias of the method. The acceptance criterion
for accuracy defines how large the bias is allowed to be while the method is
still considered suitable for its intended purpose.
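
In practice the bias is usually expressed as a percentage, calculated from the
mean recovery against the known (true) value. A minimal sketch with
illustrative numbers:

```python
# Estimating method bias from accuracy (recovery) data.
# The values below are illustrative, not from any real study.
true_value = 100.0  # known content of the pseudo-sample (% label claim)
measured = [99.1, 98.8, 99.4, 99.0, 99.3, 98.9, 99.2, 99.1, 99.0]  # 9 determinations

mean_measured = sum(measured) / len(measured)
recovery = 100.0 * mean_measured / true_value  # mean recovery, %
bias = recovery - 100.0                        # bias, % (negative = under-recovery)

print(f"Mean recovery: {recovery:.1f}%, bias: {bias:+.1f}%")
```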

The term accuracy has also been defined by ISO to be a combination of
systematic errors (bias) and random errors (precision), and there is a note
about this in the USP method validation chapter, <1225>: ‘A note on
terminology: The definition of accuracy in <1225> and ICH Q2 corresponds to
unbiasedness only. In the International Vocabulary of Metrology (VIM) and
documents of the International Organization for Standardization (ISO),
“accuracy” has a different meaning. In ISO, accuracy combines the concepts of
unbiasedness (termed “trueness”) and precision.’

From the point of view of performing validation, the distinction matters
little: we usually calculate both bias and precision from the experimental
data generated in accuracy experiments. Personally, I prefer the ISO
definition of accuracy.

Mistake 1: Not evaluating accuracy in the presence of the sample matrix components

Since the purpose of the accuracy experiments is to evaluate the bias of the
method, the experiments that are performed need to include all the
potential sources of that bias. This means that the samples which are
prepared should be as close as possible to the real thing. If the sample matrix
prepared for the accuracy experiments is not representative of the real
sample matrix then a source of bias can easily be missed or underestimated.

TIP: The samples created for accuracy experiments should be made to be as
close as possible to the samples which will be tested by the method. Ideally
these ‘pseudo-samples’ will be identical to real samples except that the
amount of the component of interest (the true value) is known. This can be
very difficult for some types of sample matrix, particularly solids where the
component of interest is present at low amounts (e.g., impurities
determination).

For impurities analysis, it may be necessary to prepare the accuracy samples
by using spiking solutions to introduce known amounts of material into the
sample matrix. Although this carries the risk of ignoring the potential bias
resulting from the extraction of the impurity present as a solid into a
solution, there isn’t really a workable alternative.

Mistake 2: Performing replicate measurements instead of replicate preparations

Performing replicate preparations of accuracy ‘pseudo-samples’ allows a
better evaluation of which differences in the data are due to the bias and
which are due to variability of the method, the precision. The ICH guidance
advises a minimum of 9 determinations over a minimum of 3 concentration
levels (e.g., 3 replicates at each level), and these should be separate
preparations. For solids, this could be 9 separate weighings into 9 separate
volumetric flasks, as per the method.

However, the preparation does depend on the nature of the sample matrix
and the practicality of controlling the known value for the component of
interest. As discussed above, sometimes in the case of impurities methods,
solutions may be required for practical reasons even though the sample
matrix exists as a solid. In this case 9 separate weighings do not offer more
representative ‘pseudo-samples’ and thus a single stock solution for the
impurity would probably be a better choice.

TIP: Assess the sample matrix and try to prepare separate replicates when
possible so that the data produced is as representative as possible and
includes typical sources of variability.

Mistake 3: Setting inappropriate acceptance criteria

As mentioned previously, the acceptance criterion for accuracy is based on
how much bias you will allow in the results from the method. It is obviously
better not to have any bias in a method but there is always a certain amount
of potential bias associated with the combination of the sample matrix, the
level of the components of interest in the sample, and the instrumentation
used for the measurement. For the method to be capable, the bias needs to
be small relative to the specification limits for the result. For example, if a
drug substance specification requires that there must be between 99 and
101 %w/w of the drug present, then a method which has a bias of 2% is not
going to be acceptable.
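
The arithmetic is easy to check. A minimal sketch using the numbers from the
example above:

```python
# Can a method with a 2% bias pass a batch that is truly on target?
spec_low, spec_high = 99.0, 101.0  # specification, %w/w
true_content = 100.0               # batch exactly on target
bias = 2.0                         # method bias, % (relative)

reported = true_content * (1 + bias / 100.0)  # reported result
verdict = "PASS" if spec_low <= reported <= spec_high else "FAIL"
print(f"Reported: {reported:.1f} %w/w -> {verdict}")  # 102.0 %w/w -> FAIL
```

A perfectly good batch would fail the specification purely because of the
method, so the accuracy acceptance criterion must sit well inside what the
specification can tolerate.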

TIP: Make sure that the acceptance criteria set for accuracy in method
validation are compatible with the requirements for the method, and in
particular, the specification for the test.
