
LEVELS OF EVIDENCE

Rating System for the Hierarchy of Evidence: Quantitative Questions


Level I: Evidence from a systematic review of all relevant randomized controlled trials (RCTs), or evidence-based clinical practice guidelines based on systematic reviews of RCTs
Level II: Evidence obtained from at least one well-designed randomized controlled trial (RCT)
Level III: Evidence obtained from well-designed controlled trials without randomization (quasi-experimental studies)
Level IV: Evidence from well-designed case-control and cohort studies
Level V: Evidence from systematic reviews of descriptive and qualitative studies
Level VI: Evidence from a single descriptive or qualitative study
Level VII: Evidence from the opinion of authorities and/or reports of expert committees
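
Where these levels are applied in practice - eg, tagging references in an evidence table - the hierarchy can be captured as a simple lookup. The Python sketch below is purely illustrative; the design labels and function name are this guide's own shorthand, not part of the rating system:

    # Illustrative only: map a study design label to its level in the
    # quantitative hierarchy above (Level I = strongest, VII = weakest).
    EVIDENCE_LEVELS = {
        "systematic review of rcts": "I",
        "randomized controlled trial": "II",
        "controlled trial without randomization (quasi-experimental)": "III",
        "case-control or cohort study": "IV",
        "systematic review of descriptive/qualitative studies": "V",
        "single descriptive or qualitative study": "VI",
        "expert opinion or expert committee report": "VII",
    }

    def evidence_level(design: str) -> str:
        """Return the hierarchy level for a known design label, or 'unrated'."""
        return EVIDENCE_LEVELS.get(design.strip().lower(), "unrated")

    print(evidence_level("Randomized controlled trial"))  # -> II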

STEPS IN ANALYZING A RESEARCH ARTICLE


When choosing sources it is important for you to evaluate each one to ensure that you have
the best quality source for your project. Here are common categories and questions for you
to consider:
ABSTRACT
Does the first sentence contain a clear statement of the purpose of the article (without starting "The purpose of this article is to...")?
Is the test population briefly described?
Does it conclude with a statement of the experiment's conclusions?
INTRODUCTION
Does it properly introduce the subject?
Does it clearly state the purpose of what is to follow?
Does it briefly state why this report is different from previous publications?
METHODS AND MATERIALS
Is the test population clearly stated? Is it appropriate for the experiment? Should it be larger or more comprehensive?
Is the control population clearly stated? Are all variables controlled? Should it be larger or more comprehensive?
Are methods clearly described or referenced so the experiment could be repeated?
Are materials clearly described and, when appropriate, manufacturers footnoted?
Are all statements and descriptions concerning the design of test and control populations, and materials and methods, included in this section?
RESULTS
Are results for all parts of the experimental design provided?

Are they clearly presented, with supporting statistical analyses and/or charts and graphs when appropriate?
Are results straightforwardly presented without a discussion of why they occurred?
Are all statistical analyses appropriate for the situation and accurately performed?
DISCUSSION
Are all results discussed?
Are all conclusions based on sufficient data?
Are appropriate previous studies integrated into the discussion section?
SOURCE:
Northern Arizona University, http://jan.ucc.nau.edu/pe/exs514web/How2Evalarticles.htm
WHERE TO FIND THE EVIDENCE?
Systematic research review
Where are they found? Cochrane Library, PubMed, Joanna Briggs Institute
Clinical practice guidelines
Where are they found? Many places! Don't forget resources like MDConsult.
National Guideline Clearinghouse (NGC), http://www.guideline.gov, or choose "Guideline" or "Practice Guideline" within the Publication Type limit in PubMed or CINAHL (see the example search below).
Current Practice Guidelines in Primary Care (AccessMedicine)
This handy guide draws information from many sources of the latest guidelines for
preventive services, screening methods, and treatment approaches commonly encountered
in the outpatient setting.
ClinicalKey also has a number of guidelines: https://www-clinicalkey-com.ezproxy.library.wisc.edu/#!/browse/guidelines
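
As a concrete illustration of the Publication Type limit mentioned above (the topic term here is arbitrary), a PubMed search such as

    hypertension AND "practice guideline"[Publication Type]

restricts results to records indexed as practice guidelines; CINAHL offers a similar Publication Type limiter on its search screen.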
Original research articles
Where are they found? CINAHL, MEDLINE, Proquest Nursing & Allied Health, PsycINFO,
PubMed
SOURCE:
http://researchguides.ebling.library.wisc.edu/c.php?g=293229&p=1953406

Different Levels of Evidence


Practising evidence-based medicine encourages clinicians to integrate valid and useful evidence with clinical expertise and each patient's unique features, so that the evidence can be applied to the treatment of individual patients.[1] There are five main steps to practising evidence-based medicine:[1]

Identify knowledge gaps and formulate a clear clinical question.

Search the literature to identify relevant articles.

Critically appraise the articles for quality and the usefulness of results; always question
whether the available evidence is valid, important and applicable to the individual patient.

Implement clinically useful findings into practice.

Evaluate performance using audit.

Healthcare professionals must always apply their general medical knowledge and clinical judgement, not only in assessing the importance of recommendations but also in applying them, since recommendations may not be appropriate in all circumstances. The following questions should be asked when deciding on the applicability of evidence to patients:[2]

Is my patient so different from those in the study that results cannot be applied?

Is the treatment feasible in my setting?

What are my patient's likely benefits and harms from the therapy?

How will my patient's values influence the decision?

Finding the evidence

When looking for appropriate evidence:

Search for available guidelines - eg, National Institute for Health and Care
Excellence (NICE), Health Information Resources, professional bodies (eg, a relevant
specialist site such as the Royal College of Obstetricians and Gynaecologists (RCOG)).

If no guidelines are available, search for systematic reviews - eg, the Cochrane database.

If no systematic reviews are available, look for primary research - eg, PubMed.

If no research is available, consider general internet searching (eg, Google), or discuss with a local specialist (at this level, beware poor-quality information from the internet or personal bias from even the most respected specialist).

The National Library for Health provides access to a range of medical search sites, including PubMed, Medline, EMBASE, Bandolier, the University of York's Centre for Reviews and Dissemination and the Cochrane database.

National guidelines and guidance sites include NICE and the Scottish Intercollegiate
Guidelines Network (SIGN). Guidance on many topics is also available at the website for NICE
Clinical Knowledge Summaries (NICE CKS) - formerly 'PRODIGY'.

Critical appraisal of medical research[3]

Initial questions

The topic and conclusions: consider whether the message is important and believable, and
whether it fits with existing knowledge and opinion (always look for other research, reviews and
guidelines on the same topic).

Consider whether there are any obvious problems with the research and whether the research has been ethical.

Consider whether the objectives are clear and the precise nature of the hypothesis being
considered.

Funding: drug companies might seek to publish studies that show their product in a favourable light, but ignore negative studies.

Conflict of interest: consider whether the authenticity of the research can be relied upon.

Type of study
In general, the hierarchy of studies for obtaining evidence is:

Systematic reviews of randomised controlled trials (RCTs).

RCTs.

Controlled observational studies - cohort and case control studies.

Uncontrolled observational studies - case reports.

However, the hierarchy is dependent on the issue being researched. The Centre for Evidence-Based
Medicine (CEBM) has recently published a table to identify the different levels of evidence for
different types of questions (eg, prognosis, treatment benefits), including: [4]

For issues of therapy or treatment, the highest possible level of evidence is a systematic
review or meta-analysis of RCTs or an individual RCT.

For issues of prognosis, the highest possible level of evidence is a systematic review of inception cohort studies.

Expert opinion must not be confused with personal experience (sometimes called eminence-based medicine). Expert opinion is the lowest level of acceptable evidence but, in the absence of research evidence, may be the best guide available.

RCTs:

RCTs, especially those with double-blind placebo controls, are regarded as the gold
standard of clinical research.

These studies work very well for certain interventions - eg, drug trials - but designing them is much more difficult for other interventions, where the control has to be, for example, sham acupuncture or sham manipulation.

Longitudinal or cohort studies:

A group of people is followed over many years to ascertain how variables such as
smoking habits, exercise, occupation and geography may affect outcome.

Prospective studies are more highly rated than retrospective ones, although the
former obviously take many years to perform. Retrospective studies are more likely to
produce bias.

Meta-analysis:

The more data are pooled, the more valid the results but possibly the less relevant
they become to individual patients.[5] Meta-analysis can therefore be a useful tool but it
has some important limitations.

A meta-analysis takes perhaps 10 trials of 100 patients and combines the results as if they came from a single trial of 1,000 patients.

Although this technique rates highly, the methodology may not be identical in all studies, and further errors may arise from publication bias. A good meta-analysis should include funnel plotting with trim and fill to assess the completeness of publication.[6]

A large, well-conducted trial is, therefore, far more valuable than a meta-analysis.
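
To make the pooling idea concrete, the sketch below combines a few hypothetical trial results using a simple inverse-variance (fixed-effect) weighted average. The numbers are invented for illustration and the snippet is not a substitute for proper meta-analytic software, which would also examine heterogeneity and publication bias:

    # Hypothetical example: fixed-effect (inverse-variance) pooling of
    # treatment-effect estimates from several small trials.
    effects = [0.30, 0.10, 0.25, -0.05, 0.20]    # invented effect estimates
    std_errors = [0.15, 0.12, 0.18, 0.20, 0.14]  # invented standard errors

    weights = [1 / se ** 2 for se in std_errors]  # precision = 1 / variance
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    pooled_se = (1 / sum(weights)) ** 0.5         # standard error of the pooled estimate

    print(f"Pooled effect: {pooled:.3f} (SE {pooled_se:.3f})")
    # The pooled standard error is smaller than any single trial's, which is
    # why combining ten trials of 100 patients behaves roughly like one trial
    # of 1,000 patients - provided the trials are genuinely comparable.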

Method

Selection of subjects is very important; some diseases are difficult to define - eg, irritable bowel syndrome, chronic fatigue syndrome, fibromyalgia. For many diseases there is huge variation in severity - eg, asthma. If subjects have been paid for taking part in the study, bias is possible.

Questionnaires: assess the design of the questionnaires, whether they were piloted, whether
the interviewers were properly trained and the interviews standardised.

Recall bias may be important and minor events may easily be forgotten. The timing of the questionnaire may also matter, especially for seasonal illnesses such as hay fever.

Setting and subjects:

The study population should be clearly defined, as should whether the whole
population or a subset has been studied. Consider whether the sample size seems big
enough, whether the duration of the study was long enough for the outcome measure to
occur and whether there is any possible selection bias - eg, only patients treated in
hospital have been selected.

Assess whether the control group was well matched and whether any exclusion
criteria were valid.

Consider the relevance of any patients who have dropped out of the study, the
reasons for dropping out and the relevance for the results and conclusions of the
research.

Outcome measures: should be clearly defined, relevant to the objectives, reliable and
reproducible, valid and consistent.


Results

Consider how convincing the results are, whether the statistics (eg, P value, confidence
limits) are appropriate and impressive, and whether there are any possible alternative
explanations for the results.

Type of outcome: the results of a trial may be relatively simple to express in terms of numbers dying or surviving, or may be much harder to quantify. The quality-adjusted life year (QALY) index may be used for such parameters as pain, incontinence and disability (a short worked example follows this list).[7]

The results should be clearly and objectively presented in sufficient detail (eg, age or gender
breakdown of results). Consider whether there was an adequate response rate in a
questionnaire study (ideally above 70%) and whether the numbers in any study add up.

Identify the rate of loss of follow-up during the study and how non-responders have been
dealt with - eg, whether they have been considered as treatment failures or included separately
in the analysis.

Assess whether the results are clinically relevant and whether the conclusions are supported
by the results of the research study.
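
As a worked illustration of the QALY index mentioned above (the durations and utility weights below are invented), each period of life is weighted by a 0-1 score for health-related quality of life:

    # Hypothetical QALY arithmetic: years lived in each health state are
    # multiplied by a utility weight (1 = perfect health, 0 = death).
    health_states = [
        (2.0, 0.9),  # 2 years in near-perfect health
        (3.0, 0.6),  # 3 years with moderate pain or disability
    ]
    qalys = sum(years * utility for years, utility in health_states)
    print(qalys)  # 2.0*0.9 + 3.0*0.6 = 3.6 QALYs over 5 years of life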

Conclusions

Check whether the conclusions relate to the stated aims and objectives of the study, and whether any generalisations made from a study carried out in one population have been applied inappropriately to a different type of population.

Consider the possibility of any confounding variables - eg, age, social class, ethnicity, smoking, disease duration, comorbidity. Multiple regression analysis or strict matching of controls reduces this problem (see the regression sketch after this list).

Bias may take many forms - eg, observer bias such as non-blinding or trying to ensure a patient receives the drug rather than placebo, or contamination, where the intervention group passes information to the control group in health education intervention studies.

Annual and seasonal factors in the variation of disease may be important, especially for
respiratory infections, rhinitis and asthma.
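
As a sketch of how multiple regression can adjust a treatment-outcome association for confounders such as age and smoking (the file name, column names and the pandas/statsmodels dependencies are assumptions for illustration, not taken from the source):

    # Minimal sketch: adjust a treatment-outcome association for confounders
    # with multiple regression (assumes pandas and statsmodels are installed).
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("study_data.csv")  # hypothetical dataset with these columns
    model = smf.ols("outcome ~ treatment + age + smoking + disease_duration",
                    data=df).fit()
    print(model.summary())  # the treatment coefficient is now adjusted for
                            # the listed confounders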

Discussion

The discussion should include whether the initial objectives have been met, whether the
hypothesis has been proved or disproved, whether the data have been interpreted correctly
and the conclusions justified.

The discussion should include all the results of the study and not just those that have
supported the initial hypothesis.

Hierarchical systems for levels of evidence used in recommendations and guidelines[8]

A variety of grading systems for evidence and recommendations is currently in use. The
system used is usually defined at the beginning of any guidelines publication.

The hierarchy of evidence and the recommendation gradings relate to the strength of the
literature and not necessarily to clinical importance.[9]

Grading of evidence

Ia: systematic review or meta-analysis of RCTs.

Ib: at least one RCT.

IIa: at least one well-designed controlled study without randomisation.

IIb: at least one well-designed quasi-experimental study, such as a cohort study.

III: well-designed non-experimental descriptive studies, such as comparative studies, correlation studies, case-control studies and case series.

IV: expert committee reports, opinions and/or clinical experience of respected authorities.

Grading of recommendations

A: based on hierarchy I evidence.

B: based on hierarchy II evidence or extrapolated from hierarchy I evidence.

C: based on hierarchy III evidence or extrapolated from hierarchy I or II evidence.

D: directly based on hierarchy IV evidence or extrapolated from hierarchy I, II or III evidence.
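
Read together, the two lists above amount to a mapping from the highest category of supporting evidence to a recommendation grade. The Python sketch below is illustrative only and simply restates that mapping:

    # Illustrative mapping from the highest category of supporting evidence
    # (Ia, Ib, IIa, IIb, III, IV) to a recommendation grade (A-D).
    def recommendation_grade(evidence_category: str) -> str:
        category = evidence_category.strip().upper()
        if category in ("IA", "IB"):
            return "A"  # hierarchy I evidence
        if category in ("IIA", "IIB"):
            return "B"  # hierarchy II evidence (or extrapolated from I)
        if category == "III":
            return "C"  # hierarchy III evidence (or extrapolated from I or II)
        if category == "IV":
            return "D"  # expert opinion, or extrapolated from I, II or III
        raise ValueError(f"Unknown evidence category: {evidence_category}")

    print(recommendation_grade("Ib"))  # -> A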

A simpler system of ABC is recommended by the US Government Agency for Health Care Policy
and Research (AHCPR):

A: requires at least one RCT as part of the body of evidence.

B: requires availability of well-conducted clinical studies but no RCTs in the body of evidence.

C: requires evidence from expert committee reports or opinions and/or clinical experience of
respected authorities. Indicates absence of directly applicable studies of good quality.

Guideline Recommendation and Evidence Grading (GREG)


In an attempt to improve the way recommendations and evidence statements are graded, the GREG
grading system has been used:

Evidence grade:

I (High): the described effect is plausible, precisely quantified and not vulnerable to
bias.

II (Intermediate): the described effect is plausible but is not quantified precisely or may be vulnerable to bias.

III (Low): concerns about plausibility or vulnerability to bias severely limit the value of
the effect being described and quantified.

Recommendation grade:

A (Recommendation): there is robust evidence to recommend a pattern of care.

B (Provisional recommendation): on balance of evidence, a pattern of care is recommended with caution.

C (Consensus opinion): evidence being inadequate, a pattern of care is recommended by consensus.


Further reading & references

Centre for Evidence-Based Medicine

Journals and Databases; NICE Evidence Search

National Institute for Health and Care Excellence (NICE)

The Cochrane Collaboration

Scottish Intercollegiate Guidelines Network (SIGN)

Centre for Reviews and Dissemination; University of York

PubMed

Clinical Knowledge Summaries; NICE

Song F, Parekh S, Hooper L, et al; Dissemination and publication of research findings: an updated review of related biases. Health Technol Assess. 2010 Feb;14(8):iii, ix-xi, 1-193. doi: 10.3310/hta14080.

1. Straus SE, Sackett DL; Using research findings in clinical practice. BMJ. 1998 Aug 1;317(7154):339-42.
2. Straus SE, Sackett DL; Applying evidence to the individual patient. Ann Oncol. 1999 Jan;10(1):29-32.
3. Counsell C; Formulating questions and locating primary studies for inclusion in systematic reviews. Ann Intern Med. 1997 Sep 1;127(5):380-7.
4. Levels of Evidence; Centre for Evidence-Based Medicine, June 2010.
5. Tonelli MR; The limits of evidence-based medicine. Respir Care. 2001 Dec;46(12):1435-40; discussion 1440-1.
6. Sterne JA, Egger M, Smith GD; Systematic reviews in health care: Investigating and dealing with publication and other biases in meta-analysis. BMJ. 2001 Jul 14;323(7304):101-5.
7. Johannesson M; QALYs, HYEs and individual preferences--a graphical illustration. Soc Sci Med. 1994 Dec;39(12):1623-32.
8. Eccles M, Mason J; How to develop cost-conscious guidelines. Health Technol Assess. 2001;5(16):1-69.
9. Burns PB, Rohrich RJ, Chung KC; The levels of evidence and their role in evidence-based medicine. Plast Reconstr Surg. 2011 Jul;128(1):305-10. doi: 10.1097/PRS.0b013e318219c171.

SOURCE:
http://patient.info/doctor/different-levels-of-evidence
