
Periodontology 2000, Vol. 58, 2012, 112–120
Printed in Singapore. All rights reserved
© 2012 John Wiley & Sons A/S

PERIODONTOLOGY 2000

Analytic epidemiology and periodontal diseases

BRENDA HEATON & THOMAS DIETRICH

A primary aim of epidemiological research is to identify the causes of disease. Analytic study designs are strategies developed to measure disease causation according to fundamental principles of causation and causal theory. Two key principles of causation remain at the forefront in critical discussions of analytic designs. One is the temporal sequence of the exposure and the disease outcome, and a second is the elimination of alternative explanations (e.g. bias and confounding). In addition to issues of validity, the importance of the precision of effect estimates has also received attention.

Previous work has reviewed the analytic epidemiological literature and highlighted issues in the study of periodontitis etiology other than those related to study design (2). We review analytic study designs for periodontal epidemiological research within the context of validity. Specifically, our aim is to re-frame the perspective on the strength of evidence in reviews of the literature, as well as to promote proper design and conduct of periodontal studies as they relate to validity. We provide an overview of the design and conduct of cohort studies, including randomized designs, case–control and cross-sectional studies, while also drawing specifically upon the periodontal research literature.

Theoretical framework for analytic designs
The goal of any epidemiological study is to accurately and precisely estimate the effect of a factor on development of a given outcome. This estimation is arrived at through a process of study design, conduct and analysis. The aim of this multi-step process is to produce a causal contrast by comparing outcomes across levels of the causal component of interest. The causal contrast that is aimed for through formal study is best illustrated by the counterfactual ideal, and is common to both randomized and non-randomized studies. The counterfactual ideal, discussed in detail elsewhere in this volume, is an ideal that emphasizes the properties required for a valid causal contrast at the outset of any study (27), namely that the unexposed group is comparable to the exposed group in every other way except for the causal component (the exposure) under study. Therefore, the manner in which we design epidemiological studies to estimate causal effects should mimic the counterfactual ideal. The counterfactual approach, which is best illustrated by the potential outcomes model, provides a unifying framework for designing, analyzing and interpreting epidemiological studies, and assists in understanding the concerns related to validity (46).

The notion of designing a study with a valid causal contrast is readily accepted by many investigators because of their grounding in the paradigm of randomized controlled trials (6). Randomization can be used as an exposure assignment mechanism that assists in achieving a valid causal contrast by producing comparability of groups. The strength of this tool produces a level of comfort in terms of validity that at times results in restriction of causal inference to the paradigm of randomized trials by many investigators (68). Although this paradigm is helpful in understanding the basic framework of causal contrasts under the counterfactual ideal, it does not serve as a supreme model for analytic epidemiology (52). Rubin's (67) work in the 1970s, which extended the counterfactual approach (potential outcomes model) to allow understanding of causal effects beyond randomization and randomization-based inference, initiated development of the unifying principles described here. This extension emphasizes that randomization is only one type of exposure assignment mechanism among many that adhere to the same principles of causal contrasts. All analytic designs should therefore show careful consideration of the assignment mechanism leading to an individual's exposure status, a process that is paramount to designing analytic strategies (68). To paraphrase Maldonado and Greenland (48), two prominent epidemiologists, because all analytic designs should estimate causal contrasts, different analytic designs can be viewed simply as different ways of (1) choosing a target population that corresponds to the study question, (2) choosing a substitute population for the counterfactual experience, and (3) sampling subjects from the target and substitute populations to balance trade-offs among bias, precision, study costs and study time. This perspective is particularly important for appreciation of the case–control design.

Understanding causal contrasts under the counterfactual framework is important to our criticisms of analytic epidemiology as it pertains to validity, particularly confounding and selection bias. The use of randomization of exposure assignment is only applicable in experimental designs; confounding is therefore an additionally important consideration in non-randomized studies. Approaches to identifying confounders and evaluating confounding are best understood under the counterfactual framework, which facilitates definitions of confounding that are reflective of the analytic design (28). Generally, an association measure estimating the causal contrast is confounded (or biased due to confounding) if it does not equal the causal contrast in the target population because of imperfect substitution (non-comparability) for the counterfactual experience (48). In case–control studies, this is manifested as selection bias. Finally, application of this framework aids in verification of assumptions required to statistically evaluate causal effects (28, 61).
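To make the idea of imperfect substitution concrete, the following minimal sketch (our illustration, with hypothetical potential outcomes; not taken from any study cited here) contrasts the true causal risk ratio among the exposed with the association obtained when a non-comparable unexposed group substitutes for the counterfactual experience.

```python
# Minimal sketch (our illustration, hypothetical values): confounding as
# imperfect substitution for the counterfactual experience.

# Each record lists the actual exposure E and both potential outcomes:
# y1 = outcome if exposed, y0 = outcome if unexposed.
pop = [
    {"E": 1, "y1": 1, "y0": 1},   # high-risk, exposed
    {"E": 1, "y1": 1, "y0": 1},   # high-risk, exposed
    {"E": 1, "y1": 1, "y0": 0},   # exposure causes the outcome
    {"E": 0, "y1": 1, "y0": 1},   # high-risk, unexposed
    {"E": 0, "y1": 0, "y0": 0},   # low-risk, unexposed
    {"E": 0, "y1": 0, "y0": 0},   # low-risk, unexposed
]

def risk(records, outcome_key):
    return sum(r[outcome_key] for r in records) / len(records)

exposed = [r for r in pop if r["E"] == 1]
unexposed = [r for r in pop if r["E"] == 0]

# Counterfactual (causal) contrast: risk among the exposed versus the risk
# the same people would have had without exposure.
causal_rr = risk(exposed, "y1") / risk(exposed, "y0")

# Observed contrast: the unexposed group substitutes for that counterfactual.
observed_rr = risk(exposed, "y1") / risk(unexposed, "y0")

print(f"causal risk ratio:   {causal_rr:.2f}")    # 1.50
print(f"observed risk ratio: {observed_rr:.2f}")  # 3.00 -> confounded
```

Because the substitute group contains fewer high-risk individuals than the exposed group, the observed risk ratio (3.0) overstates the causal risk ratio (1.5); this discrepancy is exactly what the counterfactual definition of confounding captures.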

Cohort studies
Design, conduct and analysis
The classic cohort study is introduced first in order to illustrate that all analytic designs discussed here can be thought of as nested within an underlying cohort from which design principles emanate (51). There are important conceptual ties between randomized and non-randomized cohorts and between non-randomized cohorts and case–control studies.

Cohort studies are studies in which the incidence of disease outcomes is measured and compared across two or more populations, typically a cohort of those individuals exposed to the factor under study and a cohort of those who are not. If the assignment mechanism for exposure is not random, the reference cohort can be thought of as a confounded substitute for the counterfactual experience. Cohort studies are inherently prospective, in that exposure levels of individuals in the cohort are measured at a time prior to any disease outcomes of interest. Individuals are followed over time, and person-time is accrued until development (incidence) of the outcome, death or loss to follow-up. The accumulation of person-time allows direct calculation of the incidence rate, the most preferred measure of disease frequency for studying causal effects. The source population enumerated in a cohort study is most often formed on the basis of some common characteristic or experience. For general cohorts, individuals are separated according to their level of the exposure variable at baseline and often at subsequent intervals over a specified follow-up period as well (time-varying exposure).

In a cohort study with regular follow-up, an investigator has enormous flexibility with respect to analytic approaches. The added flexibility comes in large part from the accumulation of time rather than individuals in the denominator of the rate. By accumulation of person-time, individuals contribute time at risk to various categories of an exposure or confounder over time. Adequate analysis may require the consideration of additional analytic options for effect estimation, as assumptions necessary for conventional approaches are often not met in this scenario (23, 60).
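The calculation itself is simple; the sketch below (hypothetical follow-up data of our own construction) accumulates events and person-time in an exposed and an unexposed cohort and contrasts the resulting incidence rates.

```python
# Sketch with hypothetical follow-up data: accumulating events and person-time
# to obtain incidence rates and a rate ratio for two cohorts.

# Each tuple: (years of follow-up contributed, 1 if the outcome occurred, else 0).
exposed_followup = [(2.0, 1), (5.0, 0), (1.5, 1), (4.0, 0), (3.0, 1)]
unexposed_followup = [(5.0, 0), (4.5, 1), (5.0, 0), (2.0, 0), (5.0, 0)]

def incidence_rate(followup):
    events = sum(event for _, event in followup)
    person_years = sum(time for time, _ in followup)  # denominator is time, not people
    return events / person_years

rate_exposed = incidence_rate(exposed_followup)      # 3 events / 15.5 person-years
rate_unexposed = incidence_rate(unexposed_followup)  # 1 event / 21.5 person-years

print(f"incidence rate, exposed:   {rate_exposed:.3f} per person-year")
print(f"incidence rate, unexposed: {rate_unexposed:.3f} per person-year")
print(f"rate ratio: {rate_exposed / rate_unexposed:.2f}")
```

Note that the denominator is time at risk rather than a count of people, which is what allows individuals to contribute person-time to different exposure or confounder categories as their status changes.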
The cohort design allows study of myriad health effects that stem from a single exposure. For this reason, the cohort design has proven particularly useful for the study of systemic effects of periodontal disease. For example, possible systemic effects of periodontal disease have recently been evaluated in the Veterans Administration Dental Longitudinal Study (36), including outcomes relating to stroke, coronary heart disease and cognition (8, 35, 37). Cohort studies of periodontal disease etiology are primarily limited to the evaluation of smoking, diabetic and bacterial exposures (2).

Validity
General, non-randomized cohorts are often established without regard to a specific causal contrast. Examples include the Health Professionals Follow-up Study (31), the Dental Longitudinal Study (36), the Black Women's Health Study (63) and the Nurses' Health Study (1), to name but a few. Therefore, each evaluation of a particular causal contrast that uses the cohort as the source population must consider how well the unexposed experience can stand in for the exposed experience. This consideration guides assessments of validity, particularly confounding: specifically, whether the factors leading to an individual's exposure status (i.e. the assignment mechanism) have been measured and can be enumerated so that objective inferences for causal effects can be made, similar to randomized studies (19, 27). In doing so, one can evaluate the extent to which these factors are balanced across exposure levels and identify possible approaches to achieving balance prior to any evaluation of outcomes. This endeavor becomes particularly worthwhile when studying multiple outcomes of one exposure.
Possible approaches to achieving balance prior to any evaluation of an outcome include utilization of sub-classification (6), propensity scores (62) or instrumental variables (16, 21, 34), to name but a few, some of which have been applied in an oral health context (72). A more commonly known and undertaken approach is matching. Matching unexposed individuals to exposed individuals within categories of a potential confounder does not eliminate confounding by that factor as many would expect, but instead works to increase the statistical efficiency of confounder control in the analysis. Furthermore, with time-varying exposures, competing risks and loss to follow-up, matching at baseline becomes largely inefficient in cohort studies, as the original match does not extend throughout the person-time available for analysis (25). The use of matching in any analytic design should be carefully considered. Inadequate justification for matching can lead to bias and losses in efficiency (overmatching) (42).
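As a simple illustration of checking balance before any outcome is examined, the sketch below (hypothetical data, our own construction) forms a propensity score for a single binary confounder and compares the confounder distribution across exposure groups, overall and within propensity-score strata. With one binary covariate the propensity score reduces to the exposure prevalence within each covariate level; a logistic regression model would be the usual generalization.

```python
# Sketch (hypothetical data): sub-classification on a propensity score as one way
# to check and achieve covariate balance before any outcomes are examined.
from collections import defaultdict

# Records: smoking status (confounder) and exposure of interest (1 = exposed).
subjects = [
    {"smoker": 1, "exposed": 1}, {"smoker": 1, "exposed": 1}, {"smoker": 1, "exposed": 0},
    {"smoker": 1, "exposed": 1}, {"smoker": 0, "exposed": 0}, {"smoker": 0, "exposed": 0},
    {"smoker": 0, "exposed": 1}, {"smoker": 0, "exposed": 0}, {"smoker": 0, "exposed": 0},
]

# With a single binary covariate, the propensity score is simply the probability
# of exposure within each covariate level.
by_level = defaultdict(list)
for s in subjects:
    by_level[s["smoker"]].append(s["exposed"])
propensity = {level: sum(e) / len(e) for level, e in by_level.items()}
for s in subjects:
    s["ps"] = propensity[s["smoker"]]

def smoker_prevalence(records):
    return sum(r["smoker"] for r in records) / len(records)

exposed = [s for s in subjects if s["exposed"] == 1]
unexposed = [s for s in subjects if s["exposed"] == 0]

# Balance check: imbalance is visible overall but not within propensity strata.
print("overall smoker prevalence:", smoker_prevalence(exposed), "vs", smoker_prevalence(unexposed))
for ps in sorted(set(s["ps"] for s in subjects)):
    stratum_e = [s for s in exposed if s["ps"] == ps]
    stratum_u = [s for s in unexposed if s["ps"] == ps]
    print(f"stratum ps={ps:.2f}:", smoker_prevalence(stratum_e), "vs", smoker_prevalence(stratum_u))
```

The overall comparison shows imbalance, whereas the within-stratum comparisons do not; outcomes are not touched at any point, which is precisely the appeal of designing the comparison before analyzing it.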
Aside from confounding, the inherent prospective nature of cohort studies prevents many systematic errors that favor exposure or disease. Therefore, selection into the cohort is rarely biased, and errors in the collection and measurement of exposure information are, on average, expected to be non-differential. Some care must be taken to avoid differential errors in disease classification, but they are often not difficult to avoid. Of primary concern in a study with longitudinal follow-up is the potential for biased loss to follow-up (18). This is of particular concern when substantial loss to follow-up occurs prior to ascertainment of the outcome. When this happens, methods should be used to estimate the possible bias (22).


As classic analytic approaches can only account for measured confounding and random errors, concern often remains regarding the possible presence of bias in the effect estimate due to misclassification, selection bias or unmeasured confounding. Additional analytic approaches have been developed to assess the possible influence of remaining bias on the effect estimate (22). Among them is bias analysis, more commonly known as sensitivity analysis (20, 24, 45). This quantitative assessment greatly assists in discussion of the possible influence of bias on the effect estimate by determining what unobserved covariates or other biases would have to be like in order to alter conclusions drawn from the effect estimate. Given the potential for misclassification of periodontal disease, this tool becomes particularly useful. Dietrich and Garcia (7) performed one of the few applications of basic sensitivity analysis in the periodontal literature in order to evaluate the extent of possible bias due to misclassification of periodontal disease status in studies of oral–systemic disease associations.
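A basic version of such an analysis can be sketched in a few lines. Assuming non-differential misclassification of periodontitis status with a given sensitivity and specificity (all numbers below are hypothetical), the observed counts are back-corrected and the risk ratio recomputed over a range of assumed classification accuracies.

```python
# Sketch of a simple quantitative bias (sensitivity) analysis for non-differential
# misclassification of disease status. All counts, and the assumed sensitivity and
# specificity of periodontitis classification, are hypothetical.

def corrected_cases(observed_cases, n_total, sensitivity, specificity):
    """Back-correct an observed case count for imperfect classification."""
    return (observed_cases - (1 - specificity) * n_total) / (sensitivity + specificity - 1)

# Observed (possibly misclassified) data: two cohorts of 1000 subjects each.
observed = {"exposed": {"cases": 180, "n": 1000}, "unexposed": {"cases": 120, "n": 1000}}

observed_rr = ((observed["exposed"]["cases"] / observed["exposed"]["n"])
               / (observed["unexposed"]["cases"] / observed["unexposed"]["n"]))
print(f"observed risk ratio: {observed_rr:.2f}")  # 1.50

# Re-estimate the risk ratio over a range of assumed classification accuracies.
for sensitivity, specificity in [(0.95, 0.98), (0.90, 0.95), (0.80, 0.90)]:
    risks = {}
    for group, counts in observed.items():
        true_cases = corrected_cases(counts["cases"], counts["n"], sensitivity, specificity)
        risks[group] = true_cases / counts["n"]
    print(f"Se={sensitivity:.2f}, Sp={specificity:.2f} -> "
          f"corrected risk ratio: {risks['exposed'] / risks['unexposed']:.2f}")
```

In this example imperfect specificity dilutes the observed association, so the corrected estimates move away from the null; the point of the exercise is to ask how severe the misclassification would have to be to alter the conclusions drawn from the study.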
Given the possibility of additional design approaches in non-randomized studies, as well as use of quantitative approaches to analyzing the extent of bias, non-randomized cohort studies should not generally be considered inferior to studies employing randomization. Overall, properly conducted cohort studies can promote confidence in accurate estimation of the causal contrast under study, whether randomization is employed or not. Furthermore, this should, by extension, promote confidence in the case–control design, which is always theoretically nested within a cohort, whether enumerated or not (51, 79).

Randomized cohort studies

Pre-eminent among analytic study designs has been the randomized trial, a design approach which most closely resembles the counterfactual ideal through use of randomization of exposure. Randomization increases the probability that the unexposed experience can stand in for the exposed experience had they been unexposed (no confounding) (28). Randomized trials can be thought of as a special type of cohort study as they share several design features, most notably prospective follow-up with accumulation of person-time for the direct calculation of incidence rates. However, these studies have limited application and can only be utilized when the causal contrast of interest can be randomized.

The feature of randomization provides benefit from a design viewpoint beyond that of general cohort studies. Most basically, randomization lends meaning to the inferential statistics used to evaluate random error in the effect estimate derived from a study (19). Additionally, in large studies, randomization addresses the issue of both measured and unmeasured confounding at the design stage by creating a balance of those factors across exposure groups. The balance that randomization provides can also add greater precision for estimating small effects.
Although randomized studies enjoy the added benefit of an expected balance of covariates at study baseline, they still suffer from the same vulnerabilities to bias as a non-randomized cohort design. Whether a randomized study can lead to more valid causal inferences than non-randomized studies depends on study size, the maintenance of randomization through compliance, blinding and minimized loss to follow-up/patient drop-out (33). The ability of randomization to create a balanced distribution of covariates across exposure groups is dependent on study size. Cluster randomized or community trials, for example, do not reap the same benefit from randomization that a large, individually randomized trial would. Additionally, over longitudinal follow-up, randomized studies are subject to non-compliance with the study protocol. Analytic approaches that maintain initial randomization (intention to treat) may address the issue of confounding, but effect estimates will still be influenced in the face of non-compliance (exposure misclassification), albeit most likely towards the null value of the effect measure. However, if non-compliance is biased, the possible direction of the bias will have to be evaluated. Disease or outcome classification may also be biased if subjects or investigators become unblinded over the study period. Lastly, despite compliance, biased loss to follow-up may occur over the study period.
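The dependence on study size is easy to demonstrate by simulation. The sketch below (our illustration, not from the article) repeatedly randomizes cohorts of increasing size and reports the average chance imbalance in a binary baseline covariate between arms.

```python
# Simulation sketch (our illustration): the covariate balance produced by
# randomization depends on study size.
import random

random.seed(2012)

def covariate_imbalance(n_subjects, covariate_prevalence=0.4):
    """Randomize n subjects 1:1 and return the absolute between-arm difference
    in the prevalence of a binary baseline covariate (e.g. smoking)."""
    arms = {"treatment": [], "control": []}
    for _ in range(n_subjects):
        has_covariate = 1 if random.random() < covariate_prevalence else 0
        arm = "treatment" if random.random() < 0.5 else "control"
        arms[arm].append(has_covariate)
    prevalences = [sum(v) / max(len(v), 1) for v in arms.values()]
    return abs(prevalences[0] - prevalences[1])

for n in (20, 200, 2000, 20000):
    # Average over repeated hypothetical trials to show the typical imbalance.
    mean_imbalance = sum(covariate_imbalance(n) for _ in range(200)) / 200
    print(f"n = {n:>5}: average covariate imbalance = {mean_imbalance:.3f}")
```

The imbalance shrinks roughly with the square root of the sample size, which is why small or cluster-randomized trials cannot rely on randomization alone to remove confounding.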

Case–control studies
Case–control studies are studies in which a source population is identified, and within which all cases of disease (preferably incident) are identified. Subsequently, members of the same source population are sampled in a representative manner to provide an estimate of the exposure distribution (controls). A sound understanding of the case–control design reinforces conceptualization of the case–control study as simply a more efficient version of a cohort study performed in the same source population (50, 51, 56, 64).
Of all analytic designs, case–control studies are probably the most misunderstood (64). Foremost among such misconceptions is the view that case–control studies are inferior to cohort studies in terms of producing valid effect estimates (75). This and other common misconceptions are probably the result of what Feinstein terms the 'trohoc' fallacy: the fallacious thinking that case–control studies are simply cohort studies performed backwards, and that they are therefore inherently retrospective (11). Extensions of this problem include over-emphasized vulnerability to biased exposure information and subject selection. However, these vulnerabilities are an issue of study conduct, not study type, and can occur equally in cohort and case–control studies.
In fact, case–control studies can be performed in such a way that they produce effect estimates that are no different than those obtained from a cohort study conducted within the same study base (40, 55), regardless of the rarity of disease (29, 30). Indeed, a case–control study could be thought of as a cohort study that is simply missing some exposure data at random (78). A well-conducted case–control study is subject to the same limitations as the underlying cohort within which it is nested, whether the cohort has been enumerated or not. For example, if the underlying cohort or study base is inherently prospective (e.g. exposure data are derived prior to or independent of the outcome), a case–control study performed within that source would also be inherently prospective. Additionally, if disease is misclassified in the underlying cohort through diagnostic bias or some other means, cases selected from that study base will also be biased. Careful consideration of this potential source of bias is particularly important in clinic- or hospital-based case–control studies (3). Approaches to minimizing confounding in a case–control study are the same as for a cohort study, except for the formal use of randomization. To reiterate, matching controls to cases does not prevent confounding, but improves the statistical efficiency of confounder control (10, 32, 77). As in cohort studies, unjustified matching can lead to losses in study efficiency (26, 66, 77).
In addition to the vulnerabilities for biased exposure and disease information that are common to both cohort and case–control studies, concerns relating to validity in a case–control study primarily hinge on selection of controls. Principles of valid control selection have received widespread attention in the methodology literature (12, 50, 57, 59, 65, 71, 79, 80). It is important to recall that, with any analytic study, we aim to measure the effect of an exposure on disease by comparing the disease incidence among those with the exposure level or condition of interest to that among those with the referent level or condition. The sole purpose of controls is to represent the exposure distribution in the study base containing the cases. To do so, they must be selected in such a way that they are fully representative of that population. The sampling of controls is responsible for the gains in efficiency over a cohort study. Understanding that case–control studies are always theoretically nested within a cohort is useful in avoiding bias in control selection. This remains true regardless of the source of controls, i.e. population-based, hospital-based, etc. (80). Furthermore, this understanding assists in determining what exposure distribution the control group represents, and, by extension, what the observed effect estimate will approximate (e.g. the risk or rate ratio) (40, 55).
Assuming that the underlying cohort is fixed and cases represent incident disease, if controls are selected from among all subjects at baseline or the at-risk population, they will represent the distribution of exposure among the total population at risk at study baseline (denominator of the risk), and the observed odds ratio will therefore approximate the risk ratio with no need for a rare disease assumption (29). This particular approach is sometimes referred to as a case–cohort design. If controls are selected from among the underlying cohort at the time a case develops (risk-set/density sampling), the controls will represent the distribution of exposed person-time (denominator of the rate). In this case, the observed odds ratio will therefore estimate the rate ratio (15). Note that in both of the previous scenarios, individuals may be selected as controls who later develop disease during follow-up. If controls are selected from among non-diseased subjects at the theoretical end of follow-up, the observed estimate of effect will be the true odds ratio. It is only under this scenario that the rare disease assumption becomes meaningful for approximation of the risk ratio. If cases of disease are selected from existing cases (prevalent), the observed odds ratio will be the prevalence odds ratio.
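These correspondences can be verified by simulation. The sketch below (an entirely hypothetical fixed cohort with a deliberately common outcome, our own construction) samples controls under the case–cohort scheme and under cumulative sampling of non-cases, and compares the resulting odds ratios with the cohort's risk ratio.

```python
# Simulation sketch (hypothetical fixed cohort, common outcome): what the odds
# ratio estimates under two of the control-sampling schemes described above.
import random

random.seed(58)

RISK_UNEXPOSED, RISK_EXPOSED = 0.10, 0.25
cohort = []
for _ in range(200_000):
    exposed = random.random() < 0.5
    risk = RISK_EXPOSED if exposed else RISK_UNEXPOSED
    cohort.append({"exposed": exposed, "case": random.random() < risk})

def odds_ratio(cases, controls):
    a = sum(p["exposed"] for p in cases)
    b = len(cases) - a
    c = sum(p["exposed"] for p in controls)
    d = len(controls) - c
    return (a * d) / (b * c)

cases = [p for p in cohort if p["case"]]
risk_exposed = sum(p["case"] for p in cohort if p["exposed"]) / sum(p["exposed"] for p in cohort)
risk_unexposed = (sum(p["case"] for p in cohort if not p["exposed"])
                  / sum(not p["exposed"] for p in cohort))

# Case-cohort sampling: controls drawn from the full cohort at baseline.
baseline_controls = random.sample(cohort, len(cases))
# Cumulative sampling: controls drawn from non-cases at the end of follow-up.
noncase_controls = random.sample([p for p in cohort if not p["case"]], len(cases))

print(f"cohort risk ratio:                   {risk_exposed / risk_unexposed:.2f}")
print(f"OR, controls sampled at baseline:    {odds_ratio(cases, baseline_controls):.2f}")
print(f"OR, controls sampled from non-cases: {odds_ratio(cases, noncase_controls):.2f}")
```

With risks of 25% versus 10%, controls sampled at baseline return approximately the risk ratio (about 2.5), whereas controls drawn from non-cases return approximately the disease odds ratio (about 3.0); the two converge only when disease is rare.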
Although case–control studies and cohort studies are similar from a validity standpoint, they are often in direct opposition from a utility standpoint. Case–control studies may be the only useful alternative to a cohort study when diseases are rare, latency periods are long, and/or a study base has not previously been enumerated. Similarly, cohort studies are the only practical approach when the exposure of interest is sufficiently rare. By extension, cohort studies are particularly useful for studying multiple outcomes of a single exposure and case–control studies are particularly useful for studying multiple exposures of a single outcome.

Common problems with study conduct

Despite the wealth of literature to the contrary, classic misconceptions about the case–control design have been perpetuated in the periodontal literature (39). The continued incorrect use of case/control terminology and inappropriate conduct of studies perpetuate this problem. The most basic and minimally consequential problem is the use of case/control terminology to refer to exposed and unexposed subjects. This mistake is often committed when an investigator is most familiar with studies performed in a controlled environment and is therefore accustomed to the term 'control' referring to unexposed subjects (9). To briefly illustrate, Campus et al. (5) recently conducted a study of the effect of diabetes on periodontal disease status. The study consisted of selecting diabetic 'case' and non-diabetic 'control' subjects to serve as the exposed and unexposed populations. Periodontal disease status was then determined for the enrolled diabetic and non-diabetic subjects. The study investigators inappropriately termed their study a case–control study despite the fact that case–control design principles were not employed. Similar errors are evident in a more recent study of metabolic control and oral health (4).
The bulk of purported case–control studies of periodontal disease etiology evaluate gene polymorphisms. A convenience sample of studies designed to evaluate the effect of pro-inflammatory cytokines (e.g. interleukin-1) on periodontitis is used to highlight four additional common mistakes in the conduct of case–control studies. It should be noted, however, that case–control studies of genetic exposures have some additional unique considerations (38). A comprehensive review of the literature on this topic was recently published (44). All studies described here evaluated prevalent periodontal disease, and therefore the estimate of effect is the prevalence odds ratio.
Healthy controls
As stated previously, the 'trohoc' fallacy refers to the fallacious view that case–control studies are simply cohort studies performed backwards, hence the name. When investigators focus on the selection of 'healthy' controls, the underlying fallacy is the unjustified reasoning that if cases are diseased, controls should be healthy. The emphasis on 'healthy' controls violates principles of valid control selection and validity is compromised (57). Studies using this approach identified a control group who were deemed healthy by several systemic criteria that were not applied to the case group (17, 49, 58, 70, 73). In doing so, the principle that controls should be comparable to the population that gave rise to the cases was violated.
Case/control criteria
The counterfactual basis for study design should remind investigators that the causal contrast of interest is always that of exposed individuals versus unexposed individuals. Great care should be taken to ensure that the comparison of interest is between individuals who are highly exposed and those who are truly unexposed, a task that is most often undertaken in cohort studies. Another manifestation of the trohoc fallacy occurs when investigators are instead concerned with the contrast between cases and controls (56). Instead of the control criterion simply being non-cases of periodontal disease (in other words, individuals not meeting the case definition), more stringent criteria were placed on controls in several studies to ensure that they were truly non-diseased or periodontally healthy (43, 49, 53, 54, 70, 76, 81, 82). Great detail was often provided regarding control criteria when, in reality, the criteria for selection of controls should simply be that they (1) did not meet the case criterion, and (2) were members of the same population as the cases, i.e. subject to the same inclusion and exclusion criteria at study outset.
Comparing exposure frequencies among cases and controls
In addition to backwards thinking in the conduct of a case–control study, the trohoc fallacy can also manifest in the analysis of a case–control study. Instead of comparing case/control ratios across levels of the exposure, several studies compared exposure frequencies among cases and controls (17, 43, 53, 54, 58, 73, 76, 81). While the test of statistical significance is unharmed, the effect on the odds ratio in this situation is unpredictable. More importantly, the effect of exposure on disease occurrence is not assessed.
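For reference, the appropriate analysis compares the case/control ratio across levels of the exposure, as in the short sketch below (hypothetical counts); the ratio of these ratios is the odds ratio for the exposure-disease association.

```python
# Sketch (hypothetical counts): the appropriate analysis compares the
# case/control ratio across exposure levels; their ratio is the odds ratio.

table = {
    "exposed": {"cases": 60, "controls": 40},
    "unexposed": {"cases": 90, "controls": 110},
}

case_control_ratio = {level: cell["cases"] / cell["controls"] for level, cell in table.items()}
odds_ratio = case_control_ratio["exposed"] / case_control_ratio["unexposed"]

print(f"case/control ratio, exposed:   {case_control_ratio['exposed']:.2f}")    # 1.50
print(f"case/control ratio, unexposed: {case_control_ratio['unexposed']:.2f}")  # 0.82
print(f"odds ratio: {odds_ratio:.2f}")                                          # 1.83
```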
Incorrect use of case/control terminology in a cross-sectional design
Finally, many investigators confuse the case–control design with a cross-sectional design because of inappropriate use of case/control terminology (41, 47, 82). In most studies, a convenient cross-section of a clinic population was identified, subjects were then classified as periodontal disease cases and non-cases, and DNA samples were collected for exposure ascertainment (IL-1). The case/control terminology was incorrectly applied to cases and non-cases.
In contrast to those studies described above, a sample of three recent studies on this topic represents adequately conducted case–control studies that also highlight the utility of the case–control design (13, 14, 69). All three studies were conducted in the same source population: participants in the National Survey of Adult Oral Health in Australia (74). All identified cases from the underlying source were recruited for participation. One control for every case was then randomly selected from among the remaining non-cases and recruited for participation in the study. Samples for all enrolled subjects were evaluated for genetic markers. The conduct of these studies also highlights the utility of case–control studies. First, the same case–control study population was used for all three studies to evaluate several biomarkers and even a behavioral exposure. Furthermore, analysis of gingival crevicular fluid samples is expensive, and analysis of the whole source population for several exposures would be financially prohibitive.

Cross-sectional studies
From a validity standpoint, cross-sectional studies are often inferior to the cohort and case–control designs due to issues of temporality and dependence on prevalent cases. However, depending on their conduct, case–control studies and cross-sectional studies may not differ much in terms of validity, particularly if case–control studies use prevalent cases and the temporal association between exposure and disease can be established. Cross-sectional studies, however, can never distinguish between incidence and natural history of disease, regardless of variations on the design. Further, cross-sectional studies cannot provide effect estimates that approximate the risk or rate ratio, despite misconceptions perpetuated in the literature (39).
Although the cross-sectional design may be considered inferior from a validity standpoint due to its inherent limitations for establishing temporality and dependence on prevalent disease, the strength of the evidence obtained from cross-sectional studies has to be considered in context. In most cases, cross-sectional studies cannot distinguish between temporal cause and effect. However, when causal variables under study are immutable characteristics, such as genetic determinants of disease, the cross-sectional design can be a powerful alternative to other designs to generate hypotheses that can later be tested using designs that can distinguish between the incidence and natural history of disease. Cross-sectional studies have been utilized to a great extent to study possible predictors of periodontal disease status, such as gender, race/ethnicity and genetic polymorphisms, where temporal sequence is not an issue (2).

References
1. Belanger CF, Hennekens CH, Rosner B, Speizer FE. The nurses' health study. Am J Nurs 1978: 78: 1039–1040.
2. Borrell LN, Papapanou PN. Analytical epidemiology of periodontitis. J Clin Periodontol 2005: 6: 132–158.
3. Brenner H, Savitz DA. The effects of sensitivity and specificity of case selection on validity, sample size, precision, and power in hospital-based case–control studies. Am J Epidemiol 1990: 132: 181–192.
4. Busato IMS, Bittencourt MS, Machado MAN, Gregio AMT, Azevedo-Alanis LR. Association between metabolic control and oral health in adolescents with type 1 diabetes mellitus. Oral Surg Oral Med Oral Pathol Oral Radiol Endod 2010: 109: e51–e56.
5. Campus G, Salem A, Uzzau S, Baldoni E, Tonolo G. Diabetes and periodontal disease: a case–control study. J Periodontol 2005: 76: 418–425.
6. Cochran WG, Chambers SP. The planning of observational studies of human populations. J R Statist Soc A 1965: 128: 234–266.
7. Dietrich T, Garcia R. Associations between periodontal disease and systemic disease: evaluating the strength of the evidence. J Periodontol 2005: 76: 2175–2184.
8. Dietrich T, Jimenez M, Krall Kaye EA, Vokonas PS, Garcia RI. Age-dependent associations between chronic periodontitis/edentulism and risk of coronary heart disease. Circulation 2008: 117: 1668–1674.
9. Doll R, Hill AB. A study of the aetiology of carcinoma of the lung. BMJ 1952: 2: 1271–1286.
10. Dupont WD. Power calculations for matched case–control studies. Biometrics 1988: 44: 1157–1168.
11. Feinstein AR. Clinical biostatistics. XX. The epidemiologic trohoc, the ablative risk ratio, and retrospective research. Clin Pharmacol Ther 1973: 14: 291–307.
12. Feinstein AR. The case–control study: valid selection of subjects. J Chronic Dis 1985: 38: 551–552.
13. Fitzsimmons TR, Sanders AE, Bartold PM, Slade GD. Local and systemic biomarkers in gingival crevicular fluid increase odds of periodontitis. J Clin Periodontol 2010: 37: 30–36.
14. Fitzsimmons TR, Sanders AE, Slade GD, Bartold PM. Biomarkers of periodontal inflammation in the Australian adult population. Aust Dent J 2009: 54: 115–122.
15. Flanders WD, Greenland S. Analytic methods for two-stage case–control studies and other stratified designs. Stat Med 1991: 10: 739–747.
16. Glymour MM, Greenland S. Instrumental variables. In: Rothman KJ, Greenland S, Lash TL, editors. Modern epidemiology. Philadelphia: Lippincott Williams & Wilkins, 2008: 202–204.
17. Gore EA, Sanders JJ, Pandey JP, Palesch Y, Galbraith GM. Interleukin-1β +3953 allele 2: association with disease status in adult periodontitis. J Clin Periodontol 1998: 25: 781–785.
18. Greenland S. Response and follow-up bias in cohort studies. Am J Epidemiol 1977: 106: 184–187.
19. Greenland S. Randomization, statistics, and causal inference. Epidemiology 1990: 1: 421–429.
20. Greenland S. Basic methods for sensitivity analysis of biases. Int J Epidemiol 1996: 25: 1107–1116.
21. Greenland S. An introduction to instrumental variables for epidemiologists. Int J Epidemiol 2000: 29: 722–729.
22. Greenland S. Sensitivity analysis, Monte Carlo risk analysis, and Bayesian uncertainty assessment. Risk Anal 2001: 21: 579–583.
23. Greenland S. Modeling longitudinal data. In: Rothman KJ, Greenland S, Lash TL, editors. Modern epidemiology. Philadelphia: Lippincott Williams & Wilkins, 2008: 451–455.
24. Greenland S, Lash TL. Bias analysis. In: Rothman KJ, Greenland S, Lash TL, editors. Modern epidemiology. Philadelphia: Lippincott Williams & Wilkins, 2008: 345–380.
25. Greenland S, Morgenstern H. Matching and efficiency in cohort studies. Am J Epidemiol 1990: 131: 151–159.
26. Greenland S, Morgenstern H, Thomas DC. Considerations in determining matching criteria and stratum sizes for case–control studies. Int J Epidemiol 1981: 10: 389–392.
27. Greenland S, Robins JM. Identifiability, exchangeability, and epidemiological confounding. Int J Epidemiol 1986: 15: 413–419.
28. Greenland S, Robins JM, Pearl J. Confounding and collapsibility in causal inference. Stat Sci 1999: 14: 29–46.
29. Greenland S, Thomas DC. On the need for the rare disease assumption in case–control studies. Am J Epidemiol 1982: 116: 547–553.
30. Greenland S, Thomas DC, Morgenstern H. The rare-disease assumption revisited. A critique of estimators of relative risk for case–control studies. Am J Epidemiol 1986: 124: 869–883.
31. Grobbee DE, Rimm EB, Giovannucci E, Colditz G, Stampfer M, Willett W. Coffee, caffeine, and cardiovascular disease in men. N Engl J Med 1990: 323: 1026–1032.
32. Hennessy S, Bilker WB, Berlin JA, Strom BL. Factors influencing the optimal control-to-case ratio in matched case–control studies. Am J Epidemiol 1999: 149: 195–197.
33. Hernan MA. A definition of causal effect for epidemiological research. J Epidemiol Community Health 2004: 58: 265–271.
34. Hernan MA, Robins JM. Instruments for causal inference: an epidemiologist's dream? Epidemiology 2006: 17: 360–372.
35. Jimenez M, Krall EA, Garcia RI, Vokonas PS, Dietrich T. Periodontitis and incidence of cerebrovascular disease in men. Ann Neurol 2009: 66: 505–512.
36. Kapur KK. The veterans administration longitudinal study of oral health. Methodology and preliminary findings. Int J Aging Hum Dev 1972: 3: 125–137.

37. Kaye EK, Valencia A, Baba N, Spiro A III, Dietrich T, Garcia RI. Tooth loss and periodontal disease predict poor cognitive function in older men. J Am Geriatr Soc 2010: 58: 713–718.
38. Khoury MJ, Millikan R, Gwinn M. Genetic and molecular epidemiology. In: Rothman KJ, Greenland S, Lash TL, editors. Modern epidemiology. Philadelphia: Lippincott Williams & Wilkins, 2008: 564–579.
39. Kingman A, Albandar JM. Methodological aspects of epidemiological studies of periodontal diseases. Periodontol 2000 2002: 29: 11–30.
40. Knol MJ, Vandenbroucke JP, Scott P, Egger M. What do case–control studies estimate? Survey of methods and assumptions in published case–control research. Am J Epidemiol 2008: 168: 1073–1081.
41. Kornman KS, Crane A, Wang HY, di Giovine FS, Newman MG, Pirk FW, Wilson TG Jr, Higginbottom FL, Duff GW. The interleukin-1 genotype as a severity factor in adult periodontal disease. J Clin Periodontol 1997: 24: 72–77.
42. Kupper LL, Karon JM, Kleinbaum DG, Morgenstern H, Lewis DK. Matching in epidemiologic studies: validity and efficiency considerations. Biometrics 1981: 37: 271–291.
43. Laine ML, Farre MA, Gonzalez G, van Dijk LJ, Ham AJ, Winkel EG, Crusius JB, Vandenbroucke JP, van Winkelhoff AJ, Pena AS. Polymorphisms of the interleukin-1 gene family, oral microbial pathogens, and smoking in adult periodontitis. J Dent Res 2001: 80: 1695–1699.
44. Laine ML, Loos BG, Crielaard W. Gene polymorphisms in chronic periodontitis. Int J Dent 2010: 2010: 324719.
45. Lash TL, Fox MP, Fink AK. Applying quantitative bias analysis to epidemiologic data. New York: Springer, 2009.
46. Little RJ, Rubin DB. Causal effects in clinical and epidemiological studies via potential outcomes: concepts and analytical approaches. Annu Rev Public Health 2000: 21: 121–145.
47. Lopez NJ, Jara L, Valenzuela CY. Association of interleukin-1 polymorphisms with periodontal disease. J Periodontol 2005: 76: 234–243.
48. Maldonado G, Greenland S. Estimating causal effects. Int J Epidemiol 2002: 31: 422–429.
49. McDevitt MJ, Wang HY, Knobelman C, Newman MG, di Giovine FS, Timms J, Duff GW, Kornman KS. Interleukin-1 genetic association with periodontitis in clinical practice. J Periodontol 2000: 71: 156–163.
50. Miettinen OS. The case–control study: valid selection of subjects. J Chronic Dis 1985: 38: 543–548.
51. Miettinen OS. Theoretical epidemiology. New York: Wiley, 1985.
52. Miettinen OS. The clinical trial as a paradigm for epidemiologic research. J Clin Epidemiol 1989: 42: 491–496.
53. Papapanou PN, Neiderud AM, Sandros J, Dahlen G. Interleukin-1 gene polymorphism and periodontal status. A case–control study. J Clin Periodontol 2001: 28: 389–396.
54. Parkhill JM, Hennig BJ, Chapple IL, Heasman PA, Taylor JJ. Association of interleukin-1 gene polymorphisms with early-onset periodontitis. J Clin Periodontol 2000: 27: 682–689.
55. Pearce N. What does the odds ratio estimate in a case–control study? Int J Epidemiol 1993: 22: 1189–1192.
56. Poole C. Exposure opportunity in case–control studies. Am J Epidemiol 1986: 123: 352–358.

57. Poole C. Controls who experienced hypothetical causal intermediates should not be excluded from case–control studies. Am J Epidemiol 1999: 150: 547–551.
58. Quappe L, Jara L, Lopez NJ. Association of interleukin-1 polymorphisms with aggressive periodontitis. J Periodontol 2004: 75: 1509–1515.
59. Robins J, Pike M. The validity of case–control studies with nonrandom selection of controls. Epidemiology 1990: 1: 273–284.
60. Robins JM. Causal inference from complex longitudinal data. In: Berkane M, editor. Latent variable modeling and applications to causality. New York: Springer Verlag, 1997: 69–117.
61. Robins JM. Data, design, and background knowledge in etiologic inference. Epidemiology 2001: 12: 313–320.
62. Rosenbaum PR, Rubin DB. The central role of the propensity score in observational studies for causal effects. Biometrika 1983: 70: 41–55.
63. Rosenberg L, Adams-Campbell L, Palmer JR. The black women's health study: a follow-up study for causes and preventions of illness. J Am Med Womens Assoc 1995: 50: 56–58.
64. Rothman KJ, Greenland S, Lash TL (editors). Case–control studies. Modern epidemiology. Philadelphia: Lippincott Williams & Wilkins, 2008: 111–127.
65. Rothman KJ, Greenland S, Lash TL (editors). Control selection. Modern epidemiology. Philadelphia: Lippincott Williams & Wilkins, 2008: 115–116.
66. Rothman KJ, Greenland S, Lash TL (editors). Overmatching. Modern epidemiology. Philadelphia: Lippincott Williams & Wilkins, 2008: 179–181.
67. Rubin DB. Estimating causal effects of treatments in randomized and nonrandomized studies. J Educ Psychol 1974: 66: 688–701.
68. Rubin DB. For objective causal inference, design trumps analysis. Ann Appl Stat 2008: 2: 808–840.
69. Sanders AE, Slade GD, Fitzsimmons TR, Bartold PM. Physical activity, inflammatory biomarkers in gingival crevicular fluid and periodontitis. J Clin Periodontol 2009: 36: 388–395.
70. Scapoli C, Borzani I, Guarnelli ME, Mamolini E, Annunziata M, Guida L, Trombelli L. IL-1 gene cluster is not linked to aggressive periodontitis. J Dent Res 2010: 89: 457–461.
71. Schlesselman JJ. Valid selection of subjects in case–control studies. J Chronic Dis 1985: 38: 549–550.
72. Shelton BJ, Gilbert GH, Lu Z, Bradshaw P, Chavers LS, Howard G. Comparing longitudinal binary outcomes in an observational oral health study. Stat Med 2003: 22: 2057–2070.
73. Shete AR, Joseph R, Vijayan NN, Srinivas L, Banerjee M. Association of single nucleotide gene polymorphism at interleukin-1β +3954, -511, and -31 in chronic periodontitis and aggressive periodontitis in Dravidian ethnicity. J Periodontol 2010: 81: 62–69.
74. Slade GD, Spencer AJ, Roberts-Thomson KF. Australia's dental generations: the National Survey of Adult Oral Health 2004–2006. Canberra: Australian Institute of Health and Welfare, 2007.
75. Straus SE, Glasziou P, Richardson WS, Haynes RB. Evidence-based medicine: how to practice and teach it. Philadelphia: Churchill-Livingstone, 2005.

76. Tai H, Endo M, Shimada Y, Gou E, Orima K, Kobayashi T, Yamazaki K, Yoshie H. Association of interleukin-1 receptor antagonist gene polymorphisms with early onset periodontitis in Japanese. J Clin Periodontol 2002: 29: 882–888.
77. Thomas DC, Greenland S. The relative efficiencies of matched and independent sample designs for case–control studies. J Chronic Dis 1983: 36: 685–697.
78. Wacholder S. The case–control study as data missing by design: estimating risk differences. Epidemiology 1995: 7: 144–150.
79. Wacholder S, McLaughlin JK, Silverman DT, Mandel JS. Selection of controls in case–control studies. I. Principles. Am J Epidemiol 1992: 135: 1019–1028.
80. Wacholder S, Silverman DT, McLaughlin JK, Mandel JS. Selection of controls in case–control studies. II. Types of controls. Am J Epidemiol 1992: 135: 1029–1041.
81. Wagner J, Kaminski WE, Aslanidis C, Moder D, Hiller KA, Christgau M, Schmitz G, Schmalz G. Prevalence of OPG and IL-1 gene polymorphisms in chronic periodontitis. J Clin Periodontol 2007: 34: 823–827.
82. Yücel OO, Berker E, Gariboglu S, Otlu H. Interleukin-11, interleukin-1β, interleukin-12 and the pathogenesis of inflammatory periodontal diseases. J Clin Periodontol 2008: 35: 365–370.

