FINAL REPORT
JULY 2017
6 FINDINGS
6.1 Summary
6.2 Analysis: PQS Effectiveness
6.2.1 Service Level Delivery (OTIF) as Surrogate for PQS Effectiveness
6.2.2 Inventory - Stability Matrix (ISM)
6.2.3 Moderating Effects
6.2.4 Impact of C-categories on PQS Effectiveness
6.2.5 Impact of Performance Metrics on PQS Effectiveness
6.2.6 CAPA Effectiveness and PQS Effectiveness
6.3 Analysis: PQS Effectiveness and Efficiency
6.3.1 Linkage between PQS Effectiveness and Efficiency
6.3.2 Linkage between PQS Effectiveness and Efficiency with peer-group split
6.4 Analysis: Customer Complaint Rate
6.4.1 Linkage between Customer Complaint Rate and PQS Effectiveness
6.4.2 Linkage between Customer Complaint Rate and PQS Effectiveness for DS/DP Split
6.4.3 Linkage between Customer Complaint Rate and Rejected Batches moderated by Operational Stability
6.5 Analysis: Cultural Excellence
6.5.1 Linkage between Quality Maturity and Quality Behavior
6.5.2 Top-10 Quality Maturity Attributes that drive Quality Behavior
6.5.3 Cultural Excellence as the foundation for PQS Effectiveness
6.5.4 Linkage between St.Gallen OPEX Enablers and Operational Stability
6.6 Limitations of the Data Analysis
The FDA Quality Metrics initiative has emerged directly from the FDA Science and Innovation Act (FDASIA) (US Congress, 2012) and aims to provide both industry and regulators with better insight into the current state of quality across the global pharmaceutical manufacturing sector that serves the American public's healthcare needs.

As part of this initiative the FDA awarded a research grant to the University of St.Gallen to help establish the scientific base for relevant performance metrics which might be useful in predicting risks of quality failures or drug shortages. An important factor in the academic collaboration for this research was the availability of the St.Gallen Pharmaceutical OPEX Benchmarking database, consisting of key performance indicator and enabler data related to more than 330 pharmaceutical manufacturing sites.

This report provides an account of the research activities, initial data analysis undertaken and key findings arising from the first year of this research program. The research has now progressed into year two, and an outline of future research activities planned can be found in Chapter 8.

The report is structured to provide Background and Research Design in Chapters 2 and 3 respectively. A key body of work is then introduced in Chapter 4, outlining the design and development of a holistic, system-based approach to performance management, namely the Pharmaceutical Production System Model (PPSM). The Analysis Approach is explained in Chapter 5, with all of the detailed analysis and Findings provided comprehensively in Chapter 6. These detailed analyses are further supported and referenced with additional materials provided in numbered appendices. The implications of the research for the current FDA Metrics Initiative are discussed in Chapter 7, while Chapter 8 provides the conclusions and future outlook.

The main findings arising from the research conducted by the University of St.Gallen in close collaboration with the FDA Quality Metrics Team are summarized below.
» Operational Stability high performing sites have a significantly lower level of Customer Complaints and a significantly lower level of Rejected Batches compared to Operational Stability low performing sites.
» Fostering Quality Maturity will have a positive impact on the Quality Behavior at a firm, leading to superior Cultural Excellence and subsequently providing the foundation of PQS Excellence.

Implications for FDA Quality Metrics Program
» Improve the understanding of the actual performance of the company's production system in general, and the reported FDA metrics in particular.
» Lot Acceptance Rate and Customer Complaint Rate are reasonable measures to assess Operational Stability and PQS Effectiveness and should remain part of the Quality Metrics Program.
» The level of detail of FDA suggested quality metrics definitions is appropriate given the limited number of metrics requested.
1. One of the suggested metrics from FDA's revised draft guidance, Invalidated Out-of-Specification (OOS), could not be tested based on existing St.Gallen OPEX data. However, as Invalidated OOS is part of the recently launched St.Gallen OPEX Benchmarking in QC Labs, such an analysis will be conducted in year 2 in the context of the Pharmaceutical Production System Model (PPSM).
2. As this metric is the only performance indicator in the entire database that covers time, quantity and quality from a customer perspective, no other metrics have been considered and tested as surrogates.
3. Operational Stability is an average of multiple variables. The impact of single metrics is assessed in chapter 5.2.5.
4. Doing this bears some complexity: first, risk has to be operationalized, and then a certain amount of data is needed to find relations between the metric values and the risk exposure. As FDA intends to do the analysis only in combination with other data they already have available, other patterns may arise that serve the aim of identifying the respective risks.
5. This conclusion has not been derived from data analysis but from theory and from the study of sources like the Drug Shortages report by the International Society for Pharmaceutical Engineering [ISPE] and The Pew Charitable Trusts [PEW] (2017).
[Figure residue: sample splits of the St.Gallen OPEX Benchmarking database by site category (Mixed, API, Solids & Semi-Solids, Liquids & Sterile Liquids, Unassigned; counts summing to 336 sites) and elements of the St.Gallen OPEX model: the pillars TPM, TQM and JIT supporting Operational Performance, with enablers such as Effective Technology Usage, Cross-functional Product Development, Supplier Quality Management, Housekeeping, Planning Adherence and Layout Optimization.]
8. Key performance indicators (KPIs) are a set of quantifiable measures that a company uses to gauge its performance over time. These metrics are used to
determine a company’s progress in achieving its strategic and operational goals, and also to compare a company’s finances and performance against other
businesses within its industry.
9. Enablers are production principles (methods & tools but also observable behaviour). The values show the degree of implementation based on a self-assessment on a 5-point Likert scale.
10. Structural factors provide background information on the site, such as size and FTEs, technology and product program. Structural factors allow meaningful peer groups to be built for comparisons ("compare apples with apples").
11. See footnote 8 (definition of key performance indicators).
[Figure: Pharmaceutical Production System Model (PPSM) overview. Result System: PQS Excellence [S], built from PQS Effectiveness [S] and PQS Efficiency [S]; related metrics: Lot Acceptance Rate (1 - Rejected Batches), Invalidated OOS, CAPA Effectiveness. Enabling System: Cultural Excellence [S]. Structural Factors. Model elements labelled A, B, D, E; annotations Speed and Reliability.]

PQS EXCELLENCE: score built from PQS Effectiveness & PQS Efficiency

PQS EFFECTIVENESS:
» Service Level Delivery (OTIF)
» Customer Complaint Rate

PQS EFFICIENCY:
» Maintenance Cost/Total Cost
» Quality Cost/Total Cost
» Cost for Preventive Maintenance/Total Cost
» FTE QC/Total FTE
» FTE QA/Total FTE
» Inventory

OPERATIONAL STABILITY:
» Unplanned Maintenance
» OEE (average)
» Rejected batches
» Deviation
» Yield
» Scrap rate
» Release time (formerly DQ)
» Deviation closure time (formerly DQ)

SUPPLIER RELIABILITY:
» Service level supplier (OTIF)
» Complaint rate supplier

LAB QUALITY & ROBUSTNESS:
» Analytical Right First Time
» Lab Investigations
» Invalidated OOS
» Total OOS
» Lab Deviation Events
» Recurring Deviation
» CAPAs Overdue
» Customer Complaints Requiring Investigation
» Product Re-Tests due to Complaints
» Routine Product Re-tests
» Annual Product Quality Reviews (APQR)
» APQR On Time Rate
» Stability Reports
» Audits
» Nr. of observations from internal audit

ENGAGEMENT METRICS:
» Suggestions (Quantity)
» Suggestions (Quality)
» Employee turnover
» Sick leave
» Training
» Level of qualification
» Level of safety (Incidents)

CULTURAL EXCELLENCE: QUALITY MATURITY:
» Preventive maintenance [3]
» Housekeeping [2]
» Process management [6]
» Cross functional product development [3]
» Customer involvement [2]
» Supplier quality management [5]
» Set-up time reduction [1]

CULTURAL EXCELLENCE: QUALITY BEHAVIOR:
» Preventive maintenance [4]
» Housekeeping [1]
» Process management [1]
» Cross functional product development [1]

CAPA SYSTEM:
» Number of CAPAs
» Number of critical overdue CAPAs
» Number of non-critical overdue CAPAs
12. See footnote 9 (definition of Enablers).
13. Effective Management System.
14. Please find the detailed assignment of Enablers to the categories Quality Maturity and Quality Behavior in the Appendix.
15. Assigned to both categories.
16. The term 'relative value' indicates that not the absolute metric values have been considered but their relative position within the sample. For example, the lowest absolute value for Sick Leave in the sample is considered the best value in the sample (see Table 1) and therefore went as 100% into the calculation of the Engagement Metrics Score, while the highest value for Sick Leave went as 0% into the calculation. For Number of Suggestions, the highest absolute value was assigned 100% and the lowest absolute value 0% (see Table 1). Therefore, the site with the highest (closest to 100%) aggregated Engagement Metrics Score is the best site of the sample in this category. The transformation from absolute to relative values has been done with the Excel function 'percentile rank'.

Table 1: Engagement Metrics of the PPSM (metric and unit; the 'better if' direction arrows were lost in extraction)
» Suggestions (Quality) - currency unit
» Level of qualification - %
» Employee turnover - %
» Level of safety (Incidents) - number per month
» Sick leave - %
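The relative-value transformation described in footnote 16 can be sketched as follows. This is a minimal Python analogue of Excel's percentile-rank function (ties are handled simply, without Excel's interpolation), with made-up sample values for illustration:

```python
def percent_rank(sample, x):
    """Share of the other sample values lying below x (0.0 = lowest, 1.0 = highest)."""
    return sum(v < x for v in sample) / (len(sample) - 1)

def relative_values(sample, better_if_low=False):
    """Map absolute metric values to their relative position in the sample
    (0% = worst, 100% = best). For 'better if low' metrics such as Sick Leave,
    the direction is flipped so the lowest absolute value scores 100%."""
    ranks = [percent_rank(sample, v) for v in sample]
    return [1.0 - r if better_if_low else r for r in ranks]

sick_leave = [2.0, 3.5, 1.0, 5.0]   # % per site, better if low (illustrative)
suggestions = [10, 40, 25, 5]       # per site, better if high (illustrative)
print(relative_values(sick_leave, better_if_low=True))
print(relative_values(suggestions))
```

With this convention the site with the lowest sick-leave value receives 100% and the highest 0%, matching the footnote; aggregating such relative values per metric yields the Engagement Metrics Score.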
[Table residue: assignment of St.Gallen OPEX enabler items to Quality Behavior and Quality Maturity (category / sub-category / items assigned / item IDs). Recoverable rows - Quality Behavior: Basic Elements, Standardization and simplification, 3/6, H01-H03; Quality Maturity: TQM, Supplier quality management, 5/7, E20-E22, E24, E26; Basic Elements, Visual management, 4/4, H07-H10. Further residue: "Average of ...", Number of CAPAs: 14, Number of recalls: 14.]
Overall Equipment Effectiveness: measurement of equipment stability and availability / maintenance effectiveness.

[Table residue: Operational Stability Score components with units - Unplanned Maintenance (%), OEE, Rejected Batches (%), Scrap Rate, Release Time; the 'better if' direction arrows were lost in extraction.]

Table 9: Calculation of Lab Quality & Robustness Score (average of the relative values of the following metrics; metric - unit):
» Lab Investigations/1'000 Tests - No./1'000 Tests
» Invalidated OOS/100'000 Tests - No./100'000 Tests
» Total OOS/100'000 Tests - No./100'000 Tests
» Lab Deviation Events/1'000 Tests - No./1'000 Tests
» Recurring Deviation - %
» CAPAs Overdue - %
» Customer Complaints req. Investigation/100'000 Tests - No./100'000 Tests
» Product Re-Tests due to Complaints - %
» Routine Product Re-Tests - No.
» Annual Product Quality Reviews (APQR)/Products tested - No./Product
» APQR On Time Rate - %
» Stability Batches/Stability Reports - No./Report
» Batches/Audits - No./Audit

[Table residue: PQS Efficiency Score (average of relative values; metric - unit) - Cost for Preventive Maintenance/Total Cost (%), FTE QC/Total FTE (%); further rows lost in extraction.]
Tool: t-Test
Description: A t-test compares two groups to determine whether their means for a specific variable are equal. It thus identifies whether there is a significant difference between the means and which group has the higher value. (Abramowitz & Weinberg, 2008; Huizingh, 2007)
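The tool described above can be sketched in code. This minimal example uses Welch's two-sample t-test, a common variant that does not assume equal variances (the report does not state which variant its Excel tool uses), and the OTIF figures are made up for illustration:

```python
from statistics import mean, variance

def welch_t(a, b):
    """Welch's two-sample t statistic and approximate degrees of freedom.
    Robust when the two groups have unequal variances or sizes."""
    na, nb = len(a), len(b)
    va, vb = variance(a), variance(b)          # sample variances (n-1 denominator)
    se2 = va / na + vb / nb                    # squared standard error of the mean difference
    t = (mean(a) - mean(b)) / se2 ** 0.5
    # Welch-Satterthwaite approximation for the degrees of freedom
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df

hp = [0.95, 0.97, 0.93, 0.96]   # e.g. OTIF of high performers (illustrative)
lp = [0.80, 0.85, 0.78, 0.83]   # e.g. OTIF of low performers (illustrative)
t, df = welch_t(hp, lp)
print(t, df)
```

The p-value would then be obtained from the t-distribution with `df` degrees of freedom; in practice a library routine such as a statistics package's two-sample t-test would return it directly.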
6.2.3 Moderating Effects (summary of analyses):

Analysis I - Inventory vs. OTIF:
» High levels of inventory may compensate for stability issues.
» A high level of inventory reduces the negative impact of Rejected Batches on the Service Level Delivery level.

Analysis II - Rejected Batches vs. OTIF:
» Sites with low operational stability show significantly higher levels of Rejected Batches.
» Sites with high levels of Rejected Batches and low inventory show a comparably low level of Service Level Delivery.
» Sites with high levels of Rejected Batches and high inventory have a similar level of Service Level Delivery as sites with few Rejected Batches.
» Inventory mitigates the negative effect of high levels of Rejected Batches on the Service Level Delivery (OTIF).

Analysis III - Rejected Batches vs. Customer Complaint Rate:
» Sites with low stability and low inventory show a weak performance for both metrics, Rejected Batches and Customer Complaint Rate.
» Sites with low stability and high inventory also show higher levels of Rejected Batches than the high-stability groups, but demonstrate a Customer Complaint Rate similar to the high-stability groups.
» Inventory mitigates the impact of low operational stability and a high level of Rejected Batches on the Customer Complaint Rate.

Customer Complaint Rate and PQS Effectiveness for DS/DP split:
» A higher Customer Complaint Rate is accompanied by a low aggregated PQS Effectiveness Score.
» For drug substance sites this relationship is stronger than for drug product sites.

Customer Complaint Rate and Rejected Batches moderated by Operational Stability:
» Operational Stability High Performers have both a low level of Customer Complaints and a low level of Rejected Batches.

6.5 Cultural Excellence:

Quality Maturity and Quality Behavior:
» High Quality Maturity is accompanied by a high degree of Quality Behavior.

Top-10 Quality Maturity Attributes driving Quality Behavior:
» A special focus of the Top-10 Maturity Attributes includes the use of standardization, visualization and best-practice sharing.

Cultural Excellence as the foundation for PQS Effectiveness:
» PQS Effectiveness High Performers have a significantly higher implementation level of Cultural Excellence compared to PQS Effectiveness Low Performers.

St.Gallen OPEX Enablers and Operational Stability:
» For most St.Gallen OPEX Enabler (sub)categories the Operational Stability High Performers have a significantly higher level of implementation than the Operational Stability Low Performers.
» The category Total Quality Management (TQM), however, does not show a significantly different implementation level for the two peer groups (only the same or a slightly better implementation level).
GROUP STATISTICS
Table 14: Differences of mean of (aggregated) PQS Effectiveness (Score) for OTIF HP and OTIF LP
Table 15: T-test for equality of means of (aggregated) PQS Effectiveness (Score) between OTIF HP and OTIF LP
17. Definition of Service Level Delivery (OTIF): perfect order fulfillment - the percentage of orders shipped from a site to its customer on time (+/- 1 day of the agreed shipment day), in the right quantity (+/- 3% of the agreed quantity) and in the right quality.
18. OTIF will be used as a synonym for Service Level Delivery (OTIF)
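The OTIF criterion in footnote 17 can be sketched per order. Field names and figures below are illustrative assumptions, not definitions from the report:

```python
def order_in_full_on_time(days_late, qty_shipped, qty_agreed, right_quality=True):
    """Footnote 17's criterion for a single perfect order: shipped within
    +/- 1 day of the agreed date, +/- 3% of the agreed quantity, right quality."""
    on_time = abs(days_late) <= 1
    in_full = abs(qty_shipped - qty_agreed) <= 0.03 * qty_agreed
    return on_time and in_full and right_quality

def otif_rate(orders):
    """Service Level Delivery (OTIF) = share of perfect orders."""
    return sum(order_in_full_on_time(*o) for o in orders) / len(orders)

# (days_late, qty_shipped, qty_agreed, right_quality) per order - illustrative
orders = [(0, 100, 100, True), (2, 100, 100, True), (1, 97, 100, True)]
print(otif_rate(orders))   # 2 of 3 orders meet the criterion
```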
Figure 6 visualizes the concept of splitting the sample into four groups along the dimensions Stability and Inventory. The 2 x 2 matrix is referred to as the Inventory-Stability Matrix (ISM).

The dimension Stability is operationalized by the Operational Stability Score (see section 4.5). Sites that have an over-median value for the OS Score are categorized in Group 1 or 2.

The dimension Inventory is operationalized by the metric Days on Hand (DOH)19. Sites that have an over-median value (30 days) for DOH are categorized in Group 2 or 4.

Sample:
The concept leads to four distinct groups, drawn from the overall sample, as shown in Table 16.

Note on the sample used: the basic sample comprises all 336 sites available from the St.Gallen OPEX Benchmarking database at the start of the research project. In order to assign any given site to one of the four ISM groups, values for Days on Hand and the OS Score are needed based on the criteria given above. In total, 204 sites have been assigned to the ISM groups as shown in Table 16.

Implementation:
The result of implementing the ISM concept in MS Excel is shown in Figure 7. The larger point per group indicates the average value of all sites within that group. The large blue point, for instance, represents the average Stability level (OS Score) and the average Inventory level, measured by the absolute number of Days on Hand (DOH_abs), for Group 4. The first figure per average point indicates the value according to the x-axis, the second figure the value on the y-axis.

Besides generating diagrams, such as Figure 9, the Excel tool provides a detailed overview of the average, the median value, the values of the 0.75 and 0.25 percentiles, as well as the rank within the four ISM groups for the selected metric (see Table 17, left side).

A second table provides an overview of the differences of the means between the four groups. The difference between Group I and Group J is defined as the absolute difference of Group I's mean and Group J's mean, divided by the mean of the overall sample:

Diff = |Mean(Group i) - Mean(Group j)| / Mean(Overall Sample)

Along with calculating the difference as defined above, Excel offers the functionality to calculate a t-test of two samples. If the difference between two groups is highlighted in green (see Table 17, right side), the groups' mean values show a significant difference (p-value below 0.05).

6.2.2.3 Results:

6.2.2.3.1 Service Level Delivery and Level of Inventory

The first analysis assesses the relationship between the level of Inventory and the Service Level Delivery (OTIF) level.

Figure 8 shows that Group 3 has the lowest level of Service Level Delivery. According to Table 17, the average value of Service Level Delivery of Group 3 is significantly lower (p-value < 0.05) than the average value of the other groups. Group 4 also has a low stability but, in contrast to Group 3, a higher level of safety stock. Table 17 shows that the performance of Group 4 regarding Service Level Delivery is significantly better than the performance of Group 3. This indicates that having a high level of inventory compensates for a lack of operational stability concerning the availability to provide drugs on time and in the right quality. The second-best performance is achieved by Group 2. Interestingly, the very best performance regarding Service Level Delivery is achieved by Group 1, even with low inventory. This can be explained by the very high level of operational stability of this group (OS Score (Group 1) = 66% > OS Score (Group 2) = 64%).

Summary: A high level of operational stability (OS Score) is the major lever to achieve high levels of Service Level Delivery. While high levels of inventory may compensate for stability issues, the inherent risks introduced by low stability present a threat to the organization's ability to consistently meet market demand, on time, in full. The combination of low stability and low inventory, as represented by Group 3, results in a lower capability to deliver in time.

The following two analyses will focus on the metrics proposed by the revised FDA Draft Guidance (FDA, 2016): firstly, the Lot Acceptance Rate and secondly, the Customer Complaint Rate.

6.2.2.3.2 Service Level Delivery and Lot Acceptance Rate

This analysis assesses the position of the four groups with respect to the FDA-proposed metric Lot Acceptance Rate, represented by the metric Rejected Batches, and the PQS Effectiveness surrogate Service Level Delivery (OTIF).

Figure 9 shows that Group 3 and Group 4 have the highest level of Rejected Batches. Both groups are characterized by a low level of stability. In comparison, Groups 1 and 2, which have a higher level of operational stability, reveal a significantly20 lower level of Rejected Batches. Figure 8 does not support the assumption that a high level of Rejected Batches is directly linked with a low level of Service Level Delivery, as Group 4, which has a higher level of Rejected Batches, still achieves OTIF values very similar to the high-stability groups. However, the weak performance of Group 3 indicates a strong link between Rejected Batches and poorer Service Level Delivery when no inventory is available. This analysis demonstrates evidence of the buffering or masking effect of inventory on poor performance.

6.2.2.3.3 Customer Complaint Rate and Lot Acceptance Rate

The third analysis assesses the position of the four groups regarding the two metrics Rejected Batches and Customer Complaint Rate.

In the course of the project, discussions came up on the question whether or not the metrics Rejected Batches and Customer Complaint Rate are redundant and, as a consequence, whether collecting and reporting both metrics provides little additional value compared to asking for only one of them.

Figure 10 shows (as also seen in Figure 9) a significantly higher level of Rejected Batches for the low-stability Groups 3 and 4, regardless of inventory. However, the performance of these two groups differs in the comparison regarding the level of customer complaints. Group 4 actually shows a similar level of customer complaints as Groups 1 and 2, even though its level of Rejected Batches is more than double that of the high-stability groups. Within the high-stability groups, even though Group 2 experiences a higher level of Rejected Batches
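The median-split grouping rule and the normalized difference of means described above can be sketched as follows. The site records, field names (`os_score`, `doh`, `otif`) and figures are illustrative assumptions, not data from the report:

```python
from statistics import mean, median

def assign_ism_groups(sites):
    """Assign sites to the four ISM groups by median splits:
    Group 1: high stability, low inventory   Group 2: high stability, high inventory
    Group 3: low stability,  low inventory   Group 4: low stability,  high inventory
    Each site is a dict with an Operational Stability score and Days on Hand."""
    os_med = median(s["os_score"] for s in sites)
    doh_med = median(s["doh"] for s in sites)
    groups = {1: [], 2: [], 3: [], 4: []}
    for s in sites:
        high_stab = s["os_score"] > os_med
        high_inv = s["doh"] > doh_med
        g = 1 if high_stab and not high_inv else 2 if high_stab else 4 if high_inv else 3
        groups[g].append(s)
    return groups

def normalized_mean_diff(group_i, group_j, overall, metric):
    """Diff = |Mean(Group i) - Mean(Group j)| / Mean(Overall Sample)."""
    mi = mean(s[metric] for s in group_i)
    mj = mean(s[metric] for s in group_j)
    return abs(mi - mj) / mean(s[metric] for s in overall)

# Illustrative four-site sample, one site per quadrant
sites = [{"os_score": 0.7, "doh": 10, "otif": 0.95},
         {"os_score": 0.8, "doh": 50, "otif": 0.90},
         {"os_score": 0.3, "doh": 5,  "otif": 0.60},
         {"os_score": 0.2, "doh": 60, "otif": 0.85}]
groups = assign_ism_groups(sites)
print(normalized_mean_diff(groups[1], groups[3], sites, "otif"))
```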
19. Days on Hand (DOH): average inventory less write-downs, multiplied by 365 and divided by the Cost of Goods Sold.
20. Difference in mean between Group 2 and Group 4; the p-value of the t-test is 0.00 < 0.05. Cf. Appendix 1.1: Questions and Definitions from St.Gallen OPEX Report - Structural Factors.
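Footnote 19's formula can be written out as a small sketch; the figures are illustrative, not taken from the database:

```python
def days_on_hand(avg_inventory, write_downs, cogs):
    """Footnote 19: DOH = (average inventory less write-downs) * 365 / Cost of Goods Sold.
    All monetary arguments in the same currency unit."""
    return (avg_inventory - write_downs) * 365 / cogs

# Illustrative values: a site holding just over the 30-day median threshold
print(days_on_hand(avg_inventory=12.0, write_downs=2.0, cogs=120.0))  # ~30.4 days
```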
[Table 16: Average stability and inventory of the four ISM groups - columns: Group, N, average stability [%], average inventory [days]. Recoverable row - Group 4: N = 48, low stability (40%), high inventory (127 days); the remaining rows were lost in extraction.]

[Figure 6: The 2 x 2 Inventory-Stability Matrix - x-axis: Inventory (low to high), y-axis: Stability (low to high); quadrants: Group 3 (low stability, low inventory), Group 4 (low stability, high inventory), Groups 1 and 2 (high stability); annotation: "check for interaction effects".]
21. Cf. Appendix 1.1: Questions and Definitions from St.Gallen OPEX Report – Structural Factors
Figure 13: Effect of selected Production Strategy on Relationship Rejected Batches vs. Service Level Delivery

[Figure 14 residue: scatter plot of Service Level Delivery (OTIF) against the Operational Stability Score (both on 0 to 1 scales), with fitted line y = 0.77 + 0.21x.]
Figure 14: Effect of selected Production Strategy on Relationship Operational Stability vs. Service Level Delivery
6.2.6 CAPA Effectiveness and PQS Effectiveness

Table 21: Results MLR Impact of KPIs on OTIF (* significant at the 0.05 level; table rows include Release Time, Number of Recalls and Deviation Closure Time)

Figure 15: Plot: Number of non-critical overdue CAPAs vs. Service Level Delivery (OTIF)

6.3.1 Linkage between PQS Effectiveness and Efficiency
Figure 16: Scatter plot between agg. PQS Effectiveness and PQS Efficiency

[Figure 17 residue: scatter plot of PQS Efficiency against the aggregated PQS Effectiveness Score (both on 0 to 1 scales), split by the Stability (OS)/Inventory peer groups "High Stability, Low Inventory" vs. "Rest"; fitted lines y = 0.46 + 0.1x and y = 0.03 + 0.85x.]
Figure 17: Scatter plot between agg. PQS Effectiveness and PQS Efficiency with peer-group
Table 23: Group statistics showing the mean difference between CCR HP and LP
Table 24: Independent Samples Test showing the significance of the statistical t-Test
6.4.2.4 Implications
This analysis shows that for the selected peer groups the same relationship was identified; only the degree of determination differed. Further investigation into the various drug product types will be needed to better understand these differences (e.g. between chemical and biotech sites) and will form part of the future research scope of the research team.
Figure 18: Scatter plot for Customer Complaint Rate and the aggregated PQS Effectiveness
[Figure 18 residue: axis label Customer Complaint Rate (absolute).]

[Figure 20 residue: scatter plot of Quality Behavior (y-axis) against Quality Maturity (x-axis), fitted line y = 0.19 + 0.77x, R² linear = 0.664.]
Figure 20: Linkage between Quality Maturity and Quality Behavior - PDA results (left) and St. Gallen results (right)
6.5.1.3 Results
The adjusted R² of 0.66 means that 66% of the variation of the Quality Behavior Score can be explained by the Quality Maturity Score. This supports and extends the finding of the PDA study in 2014, which showed a degree of determination of 34% (Patel et al., 2015).
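The degree-of-determination calculation behind this result can be sketched for the single-predictor case. The score pairs below are illustrative, not St.Gallen data:

```python
from statistics import mean

def ols_r2(x, y):
    """Least-squares line y = a + b*x with R^2 and adjusted R^2 for one predictor,
    as used to relate the Quality Maturity and Quality Behavior scores."""
    n = len(x)
    mx, my = mean(x), mean(y)
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) \
        / sum((xi - mx) ** 2 for xi in x)
    a = my - b * mx
    ss_res = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    r2 = 1 - ss_res / ss_tot
    adj_r2 = 1 - (1 - r2) * (n - 1) / (n - 2)   # p = 1 predictor
    return a, b, r2, adj_r2

maturity = [0.4, 0.6, 0.8, 1.0]     # illustrative Quality Maturity scores
behavior = [0.5, 0.65, 0.8, 0.95]   # illustrative Quality Behavior scores
print(ols_r2(maturity, behavior))
```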
24. Appendix 1.3: Questions and Definitions from St.Gallen OPEX Report – Enabler
Table 26: Independent Sample Test showing the significance of the statistical t-Test
Figure 21: Significant differences of the implementation level of Enabler Categories and Sub-Categories of the St.Gallen OPEX Benchmarking
6.5.2.4 Implications
A special focus of these Top-10 Maturity Attributes lies in the fields of standardization, visualization and best-practice sharing. These attributes support a high level of Quality Behavior at the pharmaceutical manufacturing sites of the St.Gallen OPEX Benchmarking database.

6.5.3 Cultural Excellence as the foundation for PQS Effectiveness

6.5.3.1 Motivation and Objectives
The importance of Cultural Excellence and Quality Culture has been widely discussed in the industry in recent times and is generally understood on a qualitative basis. The research team aimed to analyze whether this qualitative understanding of the role of Cultural Excellence can be confirmed quantitatively.

The researchers' objective was to identify whether there is a significant impact of Cultural Excellence on PQS Effectiveness at pharmaceutical manufacturing sites.

6.5.3.4 Implications
The result of this analysis shows that a high degree of PQS Effectiveness is accompanied by a high level of Cultural Excellence evidenced at the site. Taking into account that there are other influencing factors for achieving a high PQS Effectiveness (e.g. Operational Stability, Supplier Reliability), the results also show a significant relationship between these two categories of the PPSM and the level of Cultural Excellence at a site.

As a consequence, the widely discussed and understood importance of Cultural Excellence and Quality Culture for the effectiveness of a site's PQS can be statistically confirmed with the data of the St.Gallen OPEX Benchmarking database.
28. The concern has been raised that the St.Gallen data set might not be representative, as only well-performing sites would choose to share data voluntarily. In the last decade, however, companies with different levels of maturity and performance, ranging from very poor to world-class performance, have participated in the benchmarking. Therefore, we do not believe that the database is biased in favor of well-performing companies.
[Table 27 residue - comparison of the FDA Quality Metrics with the St.Gallen PPSM approach. Scope of performance analysis: three single performance metrics (FDA) vs. the overall production system (PPSM). Potential to derive risks: identifying critical values, depending on capability (FDA) vs. the possibility for overall system analysis (pattern recognition), which helps to determine risks ("calibration") (PPSM).]
Table 27: Comparison FDA Quality Metrics with St.Gallen PPSM Approach
Appendix 2.1: Questions and Definitions from St.Gallen OPEX Report – Structural Factors
Table 28: Appendix: Structural Factors from St.Gallen OPEX Questionnaire
UID Questions
Corporate Level
A01 How many production sites does your company have?
A02 What was your total sales in the last year?
Please fill in the cost structure of your company as a percentage of sales.
A03 R&D
A04 Manufacturing costs
A05 General & administration costs
A06 Sales & marketing costs
A07 Net profit
Compared to your competitors, indicate the development of your company on the following dimensions within the last 3
years.
A09 Market share
A10 Sales growth
A11 Return on sales
A12 Launches of new promising products
A13 Share price
Company type - Please indicate your company type ( yes/ no). Multiple answers are possible!
A14 Pharmaceutical company with R&D
A15 Generics manufacturer
A16 Contract manufacturer
A17 Biotechnology
A18 Miscellaneous (If the stated types do not reflect your business please provide us the information)
Site role - If your site is part of a manufacturing network, does the site have a specific role within this network? Multiple
answers are possible!
A19 We have a manufacturing network. (yes/no)
If yes – Rate the following metrics from “No competence” to “High competence”
A20 Launch site
A21 Special technology
A22 Special capacity size
A23 High packaging and production flexibility
A24 Access and entrance to markets
A25 Close to regional technology clusters
A26 Follow-the-customer
A27 Low cost site
A28 Securing of raw material sources
A29 Development site
A30 Back-up site (redundancy/ capacity)
A31 No special site role (yes/no)
[Table: Enabler items with assignment to Quality Culture Behavior, Quality Culture Maturity or None; the per-item 'x' column marks were lost in extraction.]

UID Enabler

Preventive maintenance
D01 We have a formal program for maintaining our machines and equipment.
D02 Maintenance plans and checklists are posted closely to our machines and maintenance jobs are documented.
D03 We emphasize good maintenance as a strategy for increasing quality and planning for compliance.
D04 All potential bottleneck machines are identified and supplied with additional spare parts.
D05 We continuously optimize our maintenance program based on a dedicated failure analysis.
D06 Our maintenance department focuses on assisting machine operators perform their own preventive maintenance.
D07 Our machine operators are actively involved into the decision making process when we decide to buy new machines.
D08 Our machines are mainly maintained internally. We try to avoid external maintenance service as far as possible.

Technology assessment and usage
D09 Our plant is situated at the leading edge of new technology in our industry.
D10 We are constantly screening the market for new production technology and assess new technology concerning its technical and financial benefit.
D11 We are using new technology very effectively.
D12 We rely on vendors for all of our equipment.
D13 Part of our equipment is protected by firm's patents.
[Figure 27 residue - correlation table of Compliance Metrics vs. Performance Metrics: Pearson correlations, two-tailed significances and N (9 to 14) for rows including (non-critical) overdue CAPAs, number of observations of a health authority inspection, number of observations per internal audit, and number of recalls; the performance-metric column headers were lost in extraction. Notes: * correlation is significant at the 0.05 level (2-tailed); ** correlation is significant at the 0.01 level (2-tailed); b. cannot be computed because at least one of the variables is constant.]
Figure 27: Appendix: Correlation table Compliance Metrics and Performance Metrics
Figure 28: Appendix: Implementation Level of Quality Behavior and Maturity for OTIF HP vs. OTIF LP
Figure 29: Appendix: Quality Behavior and Maturity for OTIF HP vs. OTIF LP t-Test Output
Figure 30: Appendix: Engagement Metrics Score for OTIF HP vs. OTIF LP