Critical appraisal
(Making Reading More Worthwhile)
What is Critical Appraisal?
1. Critical appraisal = quality assessment
2. The process of weighing up evidence to see how useful it is in decision making
3. A process of assessing the validity, reliability and usefulness of evidence
4. Critical appraisal is about considering, evaluating and interpreting information in a systematic and objective way
What is Evidence?
People disagree on what constitutes
evidence
Evidence - what is generally regarded as a
scientific fact
Evidence - a combination of information
obtained from 3 sources: research, clinical
experience, and client preferences (Kitson,
Harvey, & McCormack, 1998)
EVIDENCE-INFORMED MEDICINE
Sources of Evidence
Primary sources
Based on experiments and published
research
Secondary sources
Systematic reviews
Clinical guidelines
Journals of secondary publication e.g.
Evidence Based Medicine
5S Pyramid of Evidence Resources
Levels of evidence
1. Systematic reviews of RCTs and high-quality RCTs
2. Systematic reviews of cohort studies, lower-quality RCTs, outcomes research
3. Systematic reviews of case-control studies, case-control studies
4. Case series
5. Expert opinion
See http://www.cebm.net/levels_of_evidence.asp for full
descriptions
Types of Evidence
Question types and the best evidence for each:
Health care interventions (treatment, prevention): quantitative (systematic review of RCTs, or an RCT)
Harm or etiology
Prognosis
Diagnosis or assessment: quantitative (comparison to a gold standard)
Economics: quantitative (cost-effectiveness study)
Meaning: qualitative (case study, ethnography, grounded theory, phenomenological approach)
VALIDITY
INTERNAL
Is the study designed in such a way that I
can trust the findings?
EXTERNAL
Is the study designed in such a way that I
can generalize the findings?
RELIABILITY
If the study was conducted again,
would the results be the same?
Usually interpreted as the accuracy
of measurement.
IMPORTANCE
What was the effect size
or magnitude of effect?
Clinical vs. statistical
significance.
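The clinical-vs-statistical distinction can be made concrete: with a large enough sample, even a trivially small effect reaches statistical significance. A sketch using made-up group means, a shared SD, and large assumed samples:

```python
import math
from statistics import NormalDist

# Illustrative assumptions: tiny mean difference, very large groups
mean_a, mean_b, sd, n = 100.0, 100.5, 10.0, 10_000

# Cohen's d: effect size (difference in SD units); < 0.2 is "trivial"
d = (mean_b - mean_a) / sd

# Two-sample z statistic and two-sided p value
z = (mean_b - mean_a) / (sd * math.sqrt(2 / n))
p = 2 * (1 - NormalDist().cdf(abs(z)))

print(f"Cohen's d = {d:.2f} (trivial effect), p = {p:.4g} (statistically significant)")
```

Here the p value is far below 0.05, yet the effect (half a point on a 10-SD-unit scale) may be clinically meaningless; this is why appraisal asks about magnitude of effect, not just p values.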
Evaluate performance
Implement changes in clinical practice
1. Title
2. Authors
3. Abstract: structured? Informative? Abbreviations?
4. Introduction: length? Relevant references? Target population?
5. Methods:
   Design
   Inclusion criteria
   Exclusion criteria
   Sample size, sampling method
   Randomization technique
   Intervention: masking?
   Outcome measurement: blinding?
   Analysis
6. Results
7. Discussion:
   General
   Strengths and weaknesses
   Conclusions
8. References:
   Vancouver style
   Consistent
9. Acknowledgments
10. Ethics approval
11. Conflict of interest
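The "randomization technique" item in the Methods checklist can be illustrated with a common approach, permuted-block randomization, which keeps the two arms balanced throughout recruitment. A minimal sketch; the block size, seed, and function name are assumptions for illustration:

```python
import random

def block_randomize(n_subjects, block_size=4, seed=42):
    """Permuted-block randomization: within every block, equal numbers
    of subjects go to arm A and arm B, in a shuffled order.
    (Illustrative sketch; block size and seed are arbitrary choices.)"""
    rng = random.Random(seed)
    allocation = []
    while len(allocation) < n_subjects:
        block = ["A", "B"] * (block_size // 2)  # equal A/B per block
        rng.shuffle(block)                       # random order within block
        allocation.extend(block)
    return allocation[:n_subjects]

alloc = block_randomize(12)
print(alloc)  # 12 allocations, balanced 2:2 within every block of 4
```

An appraiser would check that the paper names such a technique (simple, blocked, stratified, etc.) and that allocation was concealed from recruiters.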
What to assess?
(in study of cause-effect relationship)
A. General description
Type of design
Target population, source population,
sample
Sampling method
Dependent and independent variables
Main results?
What to assess?
(in study of cause-effect relationship)
B. Internal validity, non-causal relationship
Influence of bias
Influence of chance
Influence of confounders
Bias
What is a bias? A process that tends to produce results that depart systematically from the true values.
Types of bias
1. Sample (subject selection) biases, which may result
in the subjects in the sample being unrepresentative
of the population which you are interested in
2. Measurement (detection) biases, which include
issues related to how the outcome of interest was
measured
3. Intervention (performance) biases, which involve how
the treatment itself was carried out.
What to assess?
(in study of cause-effect relationship)
C. Internal validity, causal relationship
Temporality (cause precedes effect)
Strength of association (large difference, RR, OR, etc) or
small p value or narrow confidence interval
Biological gradient (dose dependence)
Consistency among studies (different populations or designs)
Specificity (certain factor results in certain effect)
Coherence (does not conflict with current knowledge)
Biological plausibility: can be explained with current
knowledge (at least in part)
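The "strength of association" criterion above rests on measures such as the relative risk (RR), the odds ratio (OR), and their confidence intervals. A minimal sketch of the standard formulas, using a made-up 2×2 table:

```python
import math

# Illustrative 2x2 table (made-up counts):
#               outcome   no outcome
# exposed         a=30       b=70
# unexposed       c=10       d=90
a, b, c, d = 30, 70, 10, 90

rr = (a / (a + b)) / (c / (c + d))   # relative risk: ratio of risks
or_ = (a * d) / (b * c)              # odds ratio: cross-product ratio

# 95% CI for RR via the usual large-sample formula on the log scale
se_ln_rr = math.sqrt(1/a - 1/(a + b) + 1/c - 1/(c + d))
ci_rr = (math.exp(math.log(rr) - 1.96 * se_ln_rr),
         math.exp(math.log(rr) + 1.96 * se_ln_rr))

print(f"RR = {rr:.2f}, 95% CI {ci_rr[0]:.2f} to {ci_rr[1]:.2f}, OR = {or_:.2f}")
```

Here the whole confidence interval lies above 1, so the association is both large and statistically significant; a wide interval straddling 1 would weaken the causal argument.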
What to assess?
(in study of cause-effect relationship)
D. External validity
Applicable to study subjects
Applicable to source population
Applicable to target population
11 items
1. What is the research question?
2. What is the study type?
3. What are the outcome factors and how are they measured?
4. What are the study factors and how are they measured?
5. What important confounders are considered?
6. What are the sampling frame and sampling method?
7. In an experimental study, how were the subjects assigned to groups? In a longitudinal study, how many reached final follow-up? In a case-control study, are the controls appropriate? (Etc.)
8. Are statistical tests considered?
9. Are the results clinically/socially significant?
10. Is the study ethical?
11. What conclusions did the authors reach about the study
question?
Appraisal Tools
Tools from the Critical Appraisal Skills
Programme (CASP)
Systematic Reviews
Randomised Controlled Trials
Qualitative Research Studies
Cohort Studies
Case-Control Studies
Diagnostic Test Studies
Economic Evaluation Studies
Matching question type to appraisal tool:
Risk factors / prognosis: cohort studies
Diagnosis: diagnostic test studies
Qualitative questions (interviews, observations, etc.): qualitative research studies
CRITICAL APPRAISAL
Methods: valid?
Results: important?
Discussion: applicable?
Types of evaluations
Efficacy
The treatment does more good than harm when offered to those who adhere to the treatment recommendations.
Can it work under ideal conditions?
Types of evaluations
Effectiveness
The treatment does more good than harm in those to whom it is offered, under ordinary (clinical) circumstances.
Does it work in the real world?
Study purpose
Application to occupational therapy
Study design
Bias
Sampling issues / sample size / drop-outs
Outcome measurement (reliability, validity)
Intervention description & implementation
Results: statistical & clinical significance
Implications