Generally, I find that mistakes relating to specificity arise from a basic lack of
understanding about what is required to demonstrate that the method is
satisfactory. I have selected the following three examples because I encounter them regularly when advising people on method validation, during both consultancy and training courses.
Mistake 1: Using generic acceptance criteria without assessing their suitability
When the results of a validation study don’t comply with the acceptance criteria defined in the protocol, then either the method is not suitable for its intended use, or the acceptance criteria set in the protocol were inappropriate. I am often asked for help on how to explain why it’s okay that
results did not meet the acceptance criteria, and not just for specificity. The
usual reason for this problem is that generic acceptance criteria were used,
typically predefined in an SOP, and no evaluation of their suitability to the
method being validated was performed.
TIP: Review all the acceptance criteria defined in the validation protocol
against what is known about the method. Assess whether the criteria are
reasonable, in terms of the method capability and what is considered
acceptable. The use of generic acceptance criteria can be a very useful strategy, as long as they are applied in a scientific manner by assessing what is known about the actual method being validated.
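As a minimal sketch of this kind of suitability check (the criterion value and development data below are entirely hypothetical), one can compare a generic acceptance criterion against what is already known about the method's capability before committing to it in the protocol:

```python
# Hypothetical sketch: compare a generic SOP acceptance criterion against
# replicate data already gathered during method development.
from statistics import mean, stdev

# Replicate results from method development (hypothetical values, % label claim)
dev_results = [99.2, 100.1, 98.7, 99.6, 100.4, 99.0]

generic_rsd_limit = 1.0  # hypothetical generic criterion: RSD <= 1.0%

rsd = 100 * stdev(dev_results) / mean(dev_results)
print(f"Development RSD: {rsd:.2f}%")

if rsd > generic_rsd_limit:
    print("Generic criterion may be unrealistic for this method - "
          "justify a method-specific limit before validation.")
else:
    print("Generic criterion looks achievable for this method.")
```

The point of the sketch is only the comparison step: the generic limit is evaluated against real method capability, rather than adopted unquestioned.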
Mistake 2: Not considering all potential sources of interference
In order to demonstrate that the final result generated by the method is not
affected by potential interferences, it is essential that all the potential
interferences are considered. This can sometimes be difficult for complex
sample matrices so it is important to identify the constituents of the sample
matrix as fully as possible. Additionally, it is easy to overlook other sources of
interferences that may be introduced as part of the method such as solvents,
buffers, derivatisation reagents, etc.
TIP: Carry out a thorough review of all potential interferences when designing
the validation protocol, particularly if the sample matrix is complex in nature,
or if the sample preparation involves the use of multiple reagents.
Mistake 3: Not considering potential changes that could occur in the sample
being tested
The potential interferences that are present in a sample matrix can change
due to changes in the sample composition. The most common example of
this situation is probably sample degradation. In situations where a method
will be used for samples of different ages, such as in a stability programme, it
is essential that this is taken into account during validation and that the method is demonstrated to be suitable for any sample that may require analysis.
This means that for some methods, particularly those which are considered
to be stability indicating, the specificity section of the validation protocol
should include experiments to gather evidence to prove that the method
may be successfully used for stability analysis. For methods which analyse the
degradation products it would be expected that forced degradation studies
were performed during method development to allow the creation of a
method that can separate all the components of interest. For other methods
this may not have been necessary in method development but a forced
degradation study may now be required as part of method validation to
demonstrate that the method is stability indicating.
TIP: Consider the long term use of a method when designing the validation
protocol. What samples will be tested and are there any anticipated changes
that could occur to the samples that would affect the potential interferences
for the method? If the method is to be used for stability testing, are there any
additional requirements, such as a degradation study?
Part 2: Robustness
In the previous instalment I wrote about some common mistakes associated
with ‘Specificity’. This time I’ll take a look at ‘Robustness’ and the common mistakes that I have selected for discussion.
There is another aspect to robustness that doesn’t neatly fit under this
definition which applies to the performance of consumable items in the
method, such as chromatography columns. The performance of the column
when different batches of the same column packing are used may vary.
Although column manufacturers aim for batch-to-batch reproducibility, most practitioners of HPLC will have come across at least one example of this problem. Another issue is column ageing: column performance generally decreases with age, and at some stage the column will have to be discarded. Strictly speaking, these column challenges would actually come
under the heading of intermediate precision, following the ICH guideline, but
it makes much more sense to investigate them during method development
as part of robustness.
The method validation guidelines from both ICH and FDA describe robustness as a method development activity, but they do not state whether it needs to be performed under a protocol with predefined acceptance criteria. Since the use of a protocol is the typical approach in most pharma companies, this brings me to the first common mistake associated with robustness.
If robustness is investigated during validation for the first time, the risk is that
the method may not be robust. Any modifications to improve robustness
may invalidate other validation experiments since they are no longer
representative of the final method. It will of course depend on what
modifications have to be made. As FDA suggests… “During early stages of
method development, the robustness of methods should be evaluated
because this characteristic can help you decide which method you will submit
for approval.”
If you choose the wrong factors you may conclude that the method is robust
when it isn’t. Typically what happens then is that there are a lot of
unexpected problems when the method is transferred to another laboratory,
and since transfer is a very common occurrence in pharma, this can be very
expensive to resolve.
TIP: Review the robustness data thoroughly when it is available and ensure
that there is a meaningful discussion of its significance in the validation
report.
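One common way to organise a robustness investigation (an approach often used in practice, not something prescribed by the guidelines quoted above) is a two-level screening design, in which each factor's effect on a response such as resolution is estimated from the runs at its high and low settings. A minimal sketch, with hypothetical factors and response values:

```python
# Hypothetical sketch: estimating factor effects from a two-level robustness
# screen. A full factorial is used here for simplicity; Plackett-Burman
# designs are common when there are many factors.
from itertools import product

factors = ["pH", "temperature", "flow_rate"]  # hypothetical factors

# One run per high(+1)/low(-1) combination; hypothetical resolution values
# for the critical peak pair, in run order.
responses = iter([2.1, 2.0, 1.6, 1.5, 2.2, 2.1, 1.7, 1.6])
runs = [(dict(zip(factors, levels)), next(responses))
        for levels in product([-1, +1], repeat=len(factors))]

# Effect of a factor = mean response at high level - mean at low level
effects = {}
for f in factors:
    hi = [r for lv, r in runs if lv[f] == +1]
    lo = [r for lv, r in runs if lv[f] == -1]
    effects[f] = sum(hi) / len(hi) - sum(lo) / len(lo)
    print(f"{f}: effect on resolution = {effects[f]:+.2f}")
```

In this invented data set, temperature shows a much larger effect than the other factors; had temperature not been chosen as a factor, the screen would have wrongly suggested the method was robust.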
Part 3: Accuracy
In previous articles I wrote about some common mistakes associated with
‘Specificity’ and ‘Robustness’. This time I’ll take a look at ‘Accuracy’ and the common mistakes that I have selected for discussion.
Since the purpose of the accuracy experiments is to evaluate the bias of the
method, the experiments that are performed need to include all the
potential sources of that bias. This means that the samples which are
prepared should be as close as possible to the real thing. If the sample matrix
prepared for the accuracy experiments is not representative of the real
sample matrix then a source of bias can easily be missed or underestimated.
However, the preparation does depend on the nature of the sample matrix
and the practicality of controlling the known value for the component of
interest. As discussed above, sometimes in the case of impurities methods,
solutions may be required for practical reasons even though the sample
matrix exists as a solid. In this case, nine separate weighings do not offer more representative ‘pseudo-samples’, and a single stock solution of the impurity would probably be a better choice.
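The bias itself is usually expressed as percent recovery against the known (spiked) amount. A minimal sketch of that calculation, with hypothetical spike data and a hypothetical acceptance criterion:

```python
# Hypothetical sketch: percent recovery for spiked accuracy samples
# at three concentration levels, three replicates each.
from statistics import mean

# (amount added, amount found) pairs - hypothetical values in mg
spikes = [(50.0, 49.6), (50.0, 50.3), (50.0, 49.8),
          (100.0, 99.1), (100.0, 100.6), (100.0, 99.4),
          (150.0, 148.8), (150.0, 150.9), (150.0, 149.2)]

recoveries = [100 * found / added for added, found in spikes]
mean_recovery = mean(recoveries)
bias = mean_recovery - 100

print(f"Mean recovery: {mean_recovery:.1f}%  (bias {bias:+.1f}%)")

# Hypothetical acceptance criterion: mean recovery within 98-102%
assert 98.0 <= mean_recovery <= 102.0
```

The final assertion stands in for the protocol's acceptance criterion; as discussed above, that limit should be chosen to suit the method and its specification, not copied from a generic SOP.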
TIP: Assess the sample matrix and prepare separate replicates where practical, so that the data produced is as representative as possible and includes typical sources of variability.
TIP: Make sure that the acceptance criteria set for accuracy in method
validation are compatible with the requirements for the method, and in
particular, the specification for the test.