
VOTING RECORDS AND VALIDATED

VOTING STUDIES

CAROL A. CASSEL
University of Alabama

This research note examines whether people who overreport voting, poor-quality voting records, or both account for the different results obtained from self-reported and validated voter turnout research.
Presser, Traugott, and Traugott (1990) raised the possibility that findings of
overreporting bias in self-reported turnout studies may be artifacts of validation
error. They showed that inferior voting records may deter validators from finding
evidence of actual voting in central cities, the South, and African American
communities. This is a critical concern for political scientists because some
studies have found that overreporters bias the effects of the same or overlapping
variables. Abramson and Claggett (1984, 1986, 1989, 1991) found that overreporters alter the effect of race on turnout. Bernstein, Chadha, and Montjoy
(2001) found that overreporters alter the effect of race, residence in the Deep
South, Hispanic ethnicity, the interaction of region and race, and minority
concentration. Cassel (2002, 2003) found that overreporters alter the effect of
race, southern residence, and Hispanic ethnicity.1 Past research still leaves
several questions unanswered. Do nonvoters who falsely claim to vote distort
what we might otherwise predict about the effect of African American race,
southern residence, Hispanic ethnicity, and related variables? Or do inferior voting
records in some voting districts cause validators to undercount African American,
southern, or Hispanic voters? What is the magnitude of validation error?

Previous Research
Presser, Traugott, and Traugott (1990) and Abramson and Claggett (1990,
1992) reach different conclusions about whether poor-quality voting records
may bias validated turnout estimates. Presser and his colleagues examined

I am indebted to Michael Traugott for helpful comments on an earlier version of this note.
Address correspondence to the author; e-mail: ccassel@tenhoor.as.ua.edu.
1. Cassel (2002) found that overreporters bias the effect of Hispanic ethnicity in midterm, but not
presidential, elections.
Public Opinion Quarterly, Vol. 68, No. 1, pp. 102–108. © American Association for Public Opinion Research 2004; all rights reserved. DOI: 10.1093/poq/nfh007

record quality indicators available in the 1988 National Election Study (NES)
election administration study and found that African Americans, southerners,
and central city residents tend to live where record management is poorest.2
(The study reported here looks at the same record quality variables, identified in the “Analysis” section below.) Furthermore, they show that validators are
least likely to confirm self-reports of voting in these communities. For example,
validators confirmed 90 percent of self-reports of voting among African
American registrants who lived where record quality and access is highest, but
only 52 percent of self-reports among African American registrants who lived
where record quality and access is lowest (calculated from Presser, Traugott,
and Traugott 1990, table 6). Presser and his colleagues conclude that validated
voting studies overestimate misreporting, and the well-known finding that
African Americans overreport voting more than whites overreport voting
might be at least in part an artifact of poor-quality voting records. However,
they also advised that political scientists need further, multivariate tests to
determine the relative effects of the quality of voting records and the respondents on validated turnout.
By contrast, Abramson and Claggett (1990, 1992) found little difference between the voting records of African American and white communities in an examination
of record quality variables in the 1986 and 1988 NES election administration
studies. They concluded that voting records do not distort racial differences in
validated turnout. However, the record quality variables Abramson and Claggett
examine are not the same variables that Presser and his colleagues find important, so the former’s study does not directly counter the research claims of the latter.3 Yet a third, indirect test of record quality supports Abramson and
Claggett’s conclusion. Using 1980–1988 NES data, Bernstein, Chadha, and
Montjoy (2001, n. 12) compared validated turnout models that include and
exclude people classified as overreporters because validators could not find their
voting records.4 Results from the two models are approximately the same.

Analysis
This research extends Presser, Traugott, and Traugott’s (1990) study with
multivariate tests of 1988 NES data to determine whether poor-quality and

2. African Americans and southerners are twice as likely, and central city residents are three
times as likely, as others to live where voting record quality and access are lowest (calculated
from Presser, Traugott, and Traugott 1990, table 6).
3. Abramson and Claggett (1990) examine validators’ subjective assessments of record quality,
whether voting offices have procedures to see if people live at the address where they are registered,
whether master files contain the names of all people who are registered, whether master files are
computerized, whether one needs to know the precinct to find a voting record, and whether
one does not need to know the precinct or could locate a person’s voting record from his or her
registration record.
4. In the NES validation procedure, field staff will misclassify actual voters as overreporters if they fail to find the voting records of validated registrants or the registration records of self-reported voters.

inaccessible voting records bias the effects of race, region, and related variables on validated turnout (i.e., to determine the relative effects of the voting records and the characteristics of respondents on validated turnout).5 NES interviewers questioned local government officials about the nature of registration and voting
records in election administration studies in both 1986 and 1988, but only the
larger 1988 study contains the necessary variables. The multivariate tests also
make it possible to estimate the magnitude of validation error.
To determine the effects of voting records on validated turnout research,
this study compares results from validated turnout models that exclude and
include Presser, Traugott, and Traugott’s (1990) record quality indicators. The
models contain the demographic variables whose effects may be biased by
faulty voting records—race, and southern and central city residence—and
interrelated socioeconomic control variables. These independent variables are
standard explanations of turnout in voting participation research (Conway
1999; Rosenstone and Hansen 1993; Teixeira 1992; Verba and Nie 1972;
Verba, Schlozman, and Brady 1995; Wolfinger and Rosenstone 1980).
The record-keeping variables are the number of offices at which one may register, office workload, and a record quality and access index. Whether there is more than one registration office is V1222 in the 1988 NES data set. Office workload is
the number of voters per precinct, or V1215/V1217 (natural logarithm). The
record quality and access index is a 3-point measure of complications in voting
records, combining whether election officials merge the registration records
with vote information (V1133, no = 1, yes = 0); whether all voting records are
available (V1145, no = 1, yes = 0); whether the validator needs the exact
address to locate an individual (V1176, yes = 1, no = 0); whether officials had
updated all 1988 election records (V1179, no = 1, yes = 0); and whether the
validator may handle records that are not computerized (V1174 and V1229,
no = 1, yes = 0).6 Validated vote is V1147 (1 = 11; 0 = 21, 22, 24, 31, 32;
missing = 0, 12, 13, 23, 33). The correlations between validated turnout and
the number of offices to register, voters per precinct, and record quality and
access index are −.07, −.06, and −.08, respectively. All p-values are < .01.
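The index construction described above can be sketched in code. This is a minimal illustration, assuming a simple dict of yes/no answers keyed by the NES variable names cited in the text; the dict interface and example values are hypothetical, not the NES file format.

```python
import math

def record_quality_index(resp):
    """Record quality and access index: count of record-keeping
    complications, with two and three complications pooled (index = 2)."""
    complications = sum([
        resp["V1133"] == "no",        # registration and vote records not merged
        resp["V1145"] == "no",        # not all voting records available
        resp["V1176"] == "yes",       # exact address needed to locate a person
        resp["V1179"] == "no",        # 1988 election records not all updated
        resp["V1174_V1229"] == "no",  # validator cannot handle non-computerized records
    ])
    return min(complications, 2)

def office_workload(voters, precincts):
    """Office workload: natural log of voters per precinct (V1215/V1217)."""
    return math.log(voters / precincts)

# A district with no record-keeping complications scores 0 on the index:
best = {"V1133": "yes", "V1145": "yes", "V1176": "no",
        "V1179": "yes", "V1174_V1229": "yes"}
print(record_quality_index(best))  # 0
```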
The socioeconomic and demographic predictors of turnout examined here
are education, age, and length of residence in the community in years; family
income in ordinal categories; dummy variables measuring whether people are
married, southern residents, female, homeowners, Hispanic, or central city

5. The data analyzed here are from the 1988 National Election Study. Neither the principal inves-
tigators (Warren E. Miller and the National Election Studies) nor the suppliers of the data (Sapiro,
Rosenstone, Miller, and the National Election Studies 1998) bear any responsibility for the analysis
or interpretation.
6. Approximately 56 percent of respondents live in areas with no complications in voting record
keeping (the index of record quality and access = 0), about 35 percent live where there is one
complication (the index = 1), and 8.4 and .1 percent live where there are two and three complications, respectively (the index = 2 in both cases).

residents; and two dummy variables—African American and “other”—measuring race. “White” is the omitted race category; “other” includes Asian, Native American, and other races. Age and length of residence are natural logarithms to
correct for nonlinear relationships with turnout.
Table 1 presents the logistic regression predictions of turnout from models
that exclude and include the three record-keeping variables. In the right-hand
equation the effects of the number of registration offices and office workload
are not significant. The effect of the third record-keeping variable, the voting
record quality and access index, is significant and moderately large: we expect
a 9 percentage point difference in turnout when changing from the lowest to
highest value of the index. Yet adding the voting record index and other
record-keeping variables to table 1’s left-hand equation does not notably
change the coefficients, significance levels, or effect sizes of the other variables.
All differences between the effects estimated in the two equations fall within 95 percent confidence intervals.
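The effect sizes in the “% Prob” columns of table 1 (note a) come from moving one predictor from its lowest to its highest value while holding the others fixed. A minimal sketch of that computation follows; the record quality index coefficient (−.28) is from table 1, but the intercept and means used here are made up for illustration, since the text does not report them.

```python
import math

def predicted_turnout(intercept, coefs, values):
    """Logistic prediction: 1 / (1 + exp(-(a + Xb)))."""
    xb = intercept + sum(coefs[k] * values[k] for k in coefs)
    return 1.0 / (1.0 + math.exp(-xb))

def effect_size(intercept, coefs, means, var, lo, hi):
    """Change in expected turnout when `var` moves from `lo` to `hi`,
    holding all other predictors at their means (table 1, note a)."""
    at_lo = dict(means, **{var: lo})
    at_hi = dict(means, **{var: hi})
    return (predicted_turnout(intercept, coefs, at_hi)
            - predicted_turnout(intercept, coefs, at_lo))

# Coefficient from table 1; intercept and mean are hypothetical placeholders.
coefs = {"quality_index": -0.28}
means = {"quality_index": 0.5}
delta = effect_size(0.7, coefs, means, "quality_index", lo=0, hi=2)
# delta is negative: worse records lower predicted validated turnout.
```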
Why does controlling for the record-keeping variables—particularly the
voting record quality and access index—make so little difference for the
effects on validated turnout of race, southern and central city residence, and
the other independent variables? Further analysis shows that the correlations
between the voting record quality and access index and race, southern
residence, and central city residence are .12, .11, and .23, respectively. All
p-values are <.01. However, central city residence does not affect turnout
(table 1), so the strongest relationship, between record keeping and central city
residence, is not important. In fact, because turnout in central cities may be
explained by the characteristics of individual residents, U.S. voting participation
studies generally do not include a size of place variable (Conway 1999;
Rosenstone and Hansen 1993; Teixeira 1992; Verba, Schlozman, and Brady 1995; Wolfinger and Rosenstone 1980). The weak correlations between the voting records index and both race and region indicate that voting records explain only about 1 percent of the variance in each, which explains why these relationships are unimportant as well. African Americans and southerners may be twice as likely as others to live where validators have more difficulty matching self-reports of voting with actual voting records, yet only 16 percent of African
Americans and 13 percent of southerners (and 8.5 percent of all Americans)
live in these districts (Presser, Traugott, and Traugott 1990).7
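The shared-variance arithmetic behind the “about 1 percent” figure is simply the square of each reported correlation:

```python
# Correlations of the record quality and access index with each
# characteristic, as reported above; r-squared is the shared variance.
for name, r in [("race", 0.12), ("southern residence", 0.11), ("central city", 0.23)]:
    print(f"{name}: r = {r:.2f}, shared variance = {r * r:.1%}")
# race ≈ 1.4%, southern residence ≈ 1.2%, central city ≈ 5.3%
```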
Finally, to estimate the magnitude of validation error, we present an analysis
that assumes that no actual voters would be misclassified as overreporters if
all voting records were of the highest quality. This assumption overlooks random error from misspelled names, inexperienced validators, registration in
different counties, or other unmeasured factors. However, we assume the

7. Presser, Traugott, and Traugott (1990) show that “match” rates, or validators’ ability to confirm self-reports of voting, are similar in voting districts with zero or one complication in voting
records; but match rates decline 9 percentage points in districts with two or more complications
(the voting record quality and access index = 2).
Table 1. Estimated Effect of NES Misclassification of Voters as Overreporters on Predictions of Validated Turnout, 1988 (Logistic Regression)

                               Excluding record keeping      Including record keeping
Independent Variable           Coef      SE     % Prob(a)    Coef      SE     % Prob(a)
SES and demographic
  Education                     .11**    .03    29            .11**    .03    29
  Income                        .002     .02     1            .01      .02     0
  Age (log)                     .51*     .23    14            .50*     .23    13
  Married                       .34*     .18     6            .33*     .19     5
  South                        −.95**    .17    17           −.97**    .20    19
  Female                       −.12      .16     2           −.12      .16     2
  African American             −.71**    .24    13           −.70**    .25    14
  Other race                   −.83*     .38    16           −.89**    .38    18
  Hispanic                     −.53      .37    10           −.47      .37     8
  Homeowner                     .50**    .19     8            .49**    .19     8
  Years of residence (log)     −.01      .07     0            .003     .07     0
  Central city                 −.17      .20     3           −.12      .56     2
Record keeping
  Registration offices                                       −.11      .20     2
  Office workload                                             .07      .12    15
  Record quality and access                                  −.28*     .13     9
Constant                      −2.16*    1.06                 −2.45*   1.29
N(b)                           1,085                          1,085
Pseudo-R²(c)                     .10                            .11

a. Change in expected turnout produced by a change from the lowest to the highest value of a predictor; for a dichotomy, by a change from zero to one.
b. Weighted by the number of politically eligible adults in the household.
c. χ²/(χ² + n).
* p < .05 (one-tailed). ** p < .01 (one-tailed).

underestimation of actual voters from these additional factors is small. For example, Traugott, Traugott, and Presser (1992) report that validators’ prior
experience made no difference in finding voting records; and Traugott (1989)
reports that validators check many possible misspellings, although some
voters may be registered in different counties. Here, we estimate the share of actual voters misclassified as overreporters as the difference between two validated turnout predictions from the right-hand equation in table 1: one with all independent variables set at their mean values, and one after also setting the record-keeping index to reflect the highest record quality and accessibility. This method indicates that the NES
misclassified 2 percent of respondents as overreporters. This suggests that in
1988, 7.1 (not 9.1) percent of respondents were overreporters; and 63.1 (not
61.1) percent were actual voters. The low 2 percent estimate of the NES
misclassification of voters as overreporters may be explained by the fact that
more than 90 percent of Americans live in areas with little or no problem in
voting record quality or accessibility.
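The estimation method just described can be sketched as follows. The index coefficient (−.28) is from table 1 and the index mean is implied by footnote 6, but the baseline linear predictor is a made-up placeholder, since the variable means needed to reproduce the published 2 percent figure are not reported in the text.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

B_INDEX = -0.28     # record quality and access coefficient (table 1)
MEAN_INDEX = 0.52   # approx. index mean implied by footnote 6
BASELINE_XB = 0.45  # hypothetical linear predictor with all variables at means

p_at_means = sigmoid(BASELINE_XB)
# Setting the index to 0 (highest quality and access) removes its
# (negative) contribution to the linear predictor:
p_best_records = sigmoid(BASELINE_XB - B_INDEX * MEAN_INDEX)
# The gap is the estimated share of actual voters misclassified
# as overreporters because of record-keeping problems.
misclassified = p_best_records - p_at_means
```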

Conclusion
The measurement of whether people did or did not vote is critical to political
science. This research note helps to clarify our understanding of whether NES
validated turnout predictions are an accurate standard for assessing self-reported turnout research. Validation error from poor-quality or inaccessible
voting records—located particularly in central city, African American, and
southern communities—does not bias the effects of related turnout predictors.
The small 2 percentage point estimate of validation error from voting records
helps explain why the NES validated voting data are an accurate standard for
assessing electoral participation research. Validators confirm the fewest self-reports of voting where record quality is poorest, but only small minorities of
potential voters—including African Americans and southerners—live in such
communities.

References
Abramson, Paul R., and William Claggett. 1984. “Race-Related Differences in Self-Reported and
Validated Turnout.” Journal of Politics 46 (August): 719–39.
———. 1986. “Race-Related Differences in Self-Reported and Validated Turnout in 1984.”
Journal of Politics 48 (May): 412–22.
———. 1989. “Race-Related Differences in Self-Reported and Validated Turnout in 1986.”
Journal of Politics 51 (May): 397–408.
———. 1991. “Racial Differences in Self-Reported and Validated Turnout in the 1988 Presidential Election.” Journal of Politics 53 (February): 186–97.
———. 1992. “The Quality of Record Keeping and Racial Differences in Validated Turnout.”
Journal of Politics 54 (August): 871–80.
Bernstein, Robert, Anita Chadha, and Robert Montjoy. 2001. “Overreporting Voting: Why It
Happens and Why It Matters.” Public Opinion Quarterly 65:22–44.

Cassel, Carol A. 2002. “Hispanic Turnout: Estimates from Validated Voting Data.” Political
Research Quarterly 55 (June): 391–408.
———. 2003. “Overreporting and Electoral Participation Research.” American Politics Research
31 (January): 81–92.
Conway, M. Margaret. 1999. Political Participation in the United States. 3d ed. Washington, DC:
Congressional Quarterly Press.
Presser, Stanley, Michael W. Traugott, and Santa Traugott. 1990. “Vote ‘Over’ Reporting in
Surveys: The Records or the Respondents?” Technical Report no. 39. Ann Arbor, MI: National
Election Studies.
Rosenstone, Steven J., and John Mark Hansen. 1993. Mobilization, Participation, and Democracy
in America. New York: Macmillan.
Teixeira, Ruy. 1992. The Disappearing American Voter. Washington, DC: Brookings.
Traugott, Michael W., Santa Traugott, and Stanley Presser. 1992. “Revalidation of Self-Reported
Vote.” Technical Report no. 42. Ann Arbor, MI: National Election Studies.
Traugott, Santa. 1989. “Validating the Self-Reported Vote: 1964–1988.” Technical Report no. 34.
Ann Arbor, MI: National Election Studies.
Verba, Sidney, and Norman H. Nie. 1972. Participation in America. New York: Harper and Row.
Verba, Sidney, Kay Lehman Schlozman, and Henry Brady. 1995. Voice and Equality: Civic
Voluntarism in American Politics. Cambridge, MA: Harvard University Press.
Wolfinger, Raymond E., and Steven J. Rosenstone. 1980. Who Votes? New Haven, CT: Yale
University Press.
