Journalism Practice

ISSN: 1751-2786 (Print) 1751-2794 (Online) Journal homepage: https://www.tandfonline.com/loi/rjop20

How did Americans Really Think About the Apple/FBI Dispute? A Mixed-method Study

Angela M. Lee & Ori Tenenboim

To cite this article: Angela M. Lee & Ori Tenenboim (2019): How did Americans Really
Think About the Apple/FBI Dispute? A Mixed-method Study, Journalism Practice, DOI:
10.1080/17512786.2019.1623709

To link to this article: https://doi.org/10.1080/17512786.2019.1623709

Published online: 03 Jun 2019.


How did Americans Really Think About the Apple/FBI Dispute? A Mixed-method Study
Angela M. Lee (a) and Ori Tenenboim (b)
(a) School of Arts, Technology, and Emerging Communication, University of Texas, Dallas, TX, USA; (b) School of Journalism, University of Texas, Austin, TX, USA

ABSTRACT
Second-level agenda-setting suggests that news media influence how we think. As a case study examining the nature and effects of mainstream news media’s coverage of the 2015 Apple/FBI dispute about data privacy versus national security, this study found via content analysis that a majority of articles covering the dispute (73.7%) made the same potentially misleading claim about how the American public feels about the dispute. Nearly half (45.6%) of those articles made public opinion claims without offering empirical evidence, and almost all articles (97.4%) that cited the Pew survey appeared to have inadvertently created an unsubstantiated social reality. Then, this study found in a subsequent experiment that, consistent with impersonal influence, the above-mentioned news portrayals significantly affected the participants’ view on Americans’ collective opinion towards the Apple/FBI dispute. The long-term effect of this journalistic oversight is notable. Theoretical implications and practical recommendations for future science communication in the news are discussed.

KEYWORDS
Science communication; news coverage; polls; Pew survey; content analysis; experiment; public opinion; Apple; FBI

The role and effectiveness of mass media in communicating science to the public have
gained increasing significance in social science research in the past two decades, with
an emphasis on differentiating among media (e.g., quality versus popular press) and
genre (e.g., news versus drama series) (Hansen 2009). By science communication, this
study refers to the ways in which mainstream news media portray social trends in mass
opinion and general social climate through their use of public opinion polls in news
stories. Specifically, this mixed-method study draws on second-level agenda-setting and
impersonal influence in examining the nature and impact of mainstream news media’s
reporting of public opinion poll data.
Using the Apple/FBI encryption dispute from 2015 as a case study, integrating a content
analysis and online experiment, this mixed-method study (1) examines how Western main-
stream news media portrayed public opinion surrounding the dispute based on a large-
scale survey released by the Pew Research Center. This part of the inquiry draws on
second-level agenda-setting theory, which posits that the news media influence how
we think about an issue through attribute salience (e.g., how frequently the news media talk about attributes of an issue in a certain way) (McCombs 2014). This study also
(2) investigates, using an online between-subject experiment, whether the way news
media used the Pew survey to portray public opinion influenced the audiences’ perception
of public opinion on the issue. Such investigation extends impersonal influence, which
posits that the ways in which news media report on public opinion polls influence
public opinion (Mutz 1998, 1992b). Based on its mixed-method findings, this study
addresses the potentially adverse effects of how a majority of Western news media
reported on the Apple/FBI dispute, and offers some practical recommendations on how
journalists may report public opinion data.

Case Study: The 2015 Apple/FBI Dispute


On December 2, 2015, Syed Rizwan Farook and Tashfeen Malik, a married couple, opened
fire at a San Bernardino County Department of Public Health training event and Christmas
party in California. The mass shooting killed 14 people and injured 22 others. The San Ber-
nardino attack was the deadliest terrorist attack on American soil since 9/11 (Chang 2015).
Both attackers were killed by the police four hours after the mass shooting. While the
attackers destroyed their personal phones ahead of time, Farook’s work phone, an
Apple iPhone 5C, was recovered intact, albeit protected by a four-digit password and set to erase all of its data after ten failed password attempts. Claiming that it was
unable to break the phone’s security protection, the Federal Bureau of Investigation
(FBI) requested that Apple create a new version of its iOS operating system that would
give the FBI backdoor access to the iPhone 5C. Apple declined the FBI’s request, citing
its company policy to never undermine the security features of its own products.
The FBI then appealed to Sherri Pym, a U.S. magistrate judge, who on February 16, 2016 issued a court order mandating that Apple carry out the FBI’s original request. In response to the
court order, Tim Cook, Apple CEO, released “A message to our customers” centering on the
importance of protecting digital security. Specifically, Cook argued that while the govern-
ment said they would only use the backdoor software for this particular case, once the
software is created, it would become the “equivalent of a master key, capable of
opening hundreds of millions of locks — from restaurants and banks to stores and
homes” (Cook 2016, para. 14). Cook concluded that the creation of the backdoor software
would seriously compromise data privacy and security on all Apple devices.
The dispute ended on March 28, 2016 when the FBI withdrew the request, stating that it had been able to unlock the phone with a third party’s help. Nonetheless, the Apple/FBI dispute
carries important legal and social implications in that it sets a precedent for how the
United States deals with data security and privacy in the increasingly digitized modern world.
The Apple/FBI dispute may not constitute a media event in the traditional sense (Dayan
and Katz 1994) (i.e., it is not about official ceremonies or diplomatic initiatives); however,
data privacy is of increasing national and global importance and has drawn emerging
scholarly (e.g., Chen, Quan-Haase, and Park 2018; Sun, Strang, and Pambel 2018) and trans-
national attention. For example, Data Privacy Day (January 28 annually) is an extension of Europe’s Data Protection Day, observed in Western countries to commemorate the 1981 signing of “the first legally binding, international treaty dealing with privacy and data protection” (National Cyber Security Alliance 2018, 1). The dispute may be contextualized
through Couldry’s conceptualization of mediated centers and media rituals; the former
refers to the news media’s role as our central access-point to understand the social world (Couldry 2005, 2003), and the latter to “formalized actions organized around key media-
related categories and boundaries” (Couldry 2003, 29) that facilitate social cohesion and
reinforce social norms and orders (Martin 2008; Madianou 2005).
Moreover, not only does the Apple/FBI dispute set a legal precedent in the United
States for how federal courts handle the dilemma between data privacy and national
security, but it also sets a media precedent for how the press narrates American public
opinion towards the dilemma immediately after the deadliest terrorist attack on American
soil since 9/11 (Chang 2015).
The Apple/FBI dispute may also be conceptualized as a contemporary real-world
example of a moral dilemma. Moral dilemmas are situations where there is a need for the
decision maker to choose between at least two equally compelling albeit incompatible
alternatives (Merrill 1996). In other words, moral dilemmas are not simply about “right
versus wrong” scenarios. Rather, they encompass situations where each decision entails
a different set of morally relevant pros and cons (Hauser 2006). In this case, the Apple/
FBI dispute highlights the choice between national security and data privacy. As such,
findings from this study may be generalizable to our collective understanding of the
role and impact of the press in reporting on moral dilemmas with the use of poll data.

Pew Survey Regarding the Apple/FBI Dispute


In the midst of the standoff between the FBI and Apple, the Pew Research Center released
a survey report (N = 1002) on February 22, 2016 titled, “More Support for Justice Depart-
ment Than for Apple in Dispute Over Unlocking iPhone.” The survey was conducted Feb-
ruary 18–21, 2016. Unlike many of its other much longer survey reports that have
significantly more survey questions and topline report pages (e.g., the 2010 biennial
news consumption survey topline has over 100 questions and is 43 pages long), Pew’s
Apple/FBI survey topline, which is accessible on the upper-right-hand corner of the
page, contains only three questions and fits on a single PDF page: (1) whether the respon-
dents have heard about the federal court ordering Apple to help the FBI unlock an iPhone
used by one of the suspects in the San Bernardino terrorist attacks, (2) whether they think
Apple should unlock the iPhone or not, and (3) whether they consider themselves a
Republican, Democrat, or independent.
The brevity of the topline questionnaire makes it an ideal case study, and the assump-
tion being made in this study is that journalists who made public opinion claims using the
Pew survey had read the one-page document in its entirety.

Second-Level Agenda-Setting
The news media can play a key role in shaping the pictures in our heads (Lippmann 1998).
Therefore, to understand how people perceive a public issue, it is important to identify
how the news media portray this issue. Agenda-setting is a central theory in mass com-
munication that maps the contribution of the media in shaping the public’s pictures, posit-
ing that salience is transmitted from the media agenda to the public agenda (McCombs
2014; McCombs and Shaw 1972). At the first-level of agenda-setting, the media transmit
to the public the salience of objects – e.g., issues, events, and people. At the second-
level, the media transmit attribute salience, where an attribute can be understood as the
full range of properties and traits that characterize an object. The third-level addresses the
transmission of salience of interrelationships among objects and/or attributes (Guo 2016).
This study focuses on the second level of agenda-setting to make the case that how the
news media talked about the Apple/FBI dispute and the Pew survey regarding this dispute
could matter. Research has shown a high degree of correspondence between media descrip-
tions and public descriptions of political candidates and public issues (McCombs, Lopez-Escobar, and Llamas 2000; Takeshita and Mikami 1995; Weaver et al. 1981). For
example, Weaver and his colleagues (1981) found high cross-lagged correlations between
the description of the U.S. presidential election candidates in the Chicago Tribune and the
subsequent descriptions by Illinois voters. Content analysis was used to identify the media
agenda, and public opinion polls were used to determine the public agenda. Other scholars
(e.g., Kiousis, Bantimaroudis, and Ban 1999) conducted laboratory experiments, providing
evidence of causality: the media attribute agenda affected the public attribute agenda.
In the first part of our study, we identify through content analysis the frequency of attri-
butes in news articles pertaining to the Apple/FBI dispute and the public. These attributes
include mentions of public opinion on the dispute, the Pew survey and its findings, and
other polls. As Matthews, Pickup, and Cutler (2012) pointed out, “While the causes and
consequences of poll reporting have been studied comprehensively, less systematic
work has been conducted on how the media report on polls” (p. 129). The present study
examines how the media reported on polls, addressing the following research questions:
RQ1: To what extent did news articles on the Apple/FBI dispute make claims about the public’s
perception of the dispute?

RQ2: How was the Pew report discussed in the news media surrounding the Apple/FBI
dispute?

RQ3: How was the first survey question from Pew’s report, which asks “Whether [Pew respon-
dents] have heard about the federal court ordering Apple to help the FBI unlock an iPhone
used by one of the suspects in the San Bernardino terrorist attacks”, reported by the news
media that covered the Pew survey on the Apple/FBI dispute?

RQ4: How was the second survey question from Pew’s report, which asks “Whether [Pew
respondents] think Apple should unlock the iPhone or not?”, reported by the news media
that covered the Pew survey on the Apple/FBI dispute?

RQ5: To what degree did the news media report on other public polls surrounding the Apple/
FBI dispute?

To understand the “dynamics of science in society,” it is important to study both media agendas and the media’s influence on public understanding and opinion (Hansen 2009,
118). The present study does both. In the second part of the study, we examine the
media’s influence on public opinion.

Public Opinion Polls in the News


Public opinion polls have become commonplace in news reporting, as the press has
become a leading contractor and disseminator of public opinion polls (Brettschneider
2008, 481). Some applaud such integration and call for more journalists to routinely
include public opinion research in news coverage (Meyer 2002). Others have argued
that those who conduct the polls should report the poll results themselves as a “public
opinion research correspondent,” for journalists are not always trained to accurately rep-
resent social scientific research (Noelle-Neumann 1980).
The effects of the news media’s presentation of public opinion on news consumers in terms of decision-making (Bhatti and Pedersen 2016; Boudreau and McCubbins 2010), perception of what is important (McCombs 2014), and false consensus (Mutz 1992b; Sonck and Loosveldt 2010) are well documented, and some scholars suggest the effects may be substantial (Mutz 1998). For example, concerns in the political science realm on this topic include the extent to which exit polls and early media predictions of election outcomes from the East Coast affect how people on the West Coast vote (e.g., whether people go out to vote or stay home) (Mutz 1992b), and other
studies have examined best practices in news reporting of survey results (Bhatti and
Pedersen 2016).
Similarly, the American Association for Public Opinion Research (AAPOR) has published a checklist (2009) of the items that survey researchers should disclose to meet the minimum disclosure requirement of the AAPOR Code of Professional Ethics and Practice, which includes elements such as survey sponsor, sample size, and type of sample.
Supportive of standardization of survey reports, Sonck and Loosveldt remarked, “Polls
that are publicly reported should conform to methodological survey standards, in order
to reflect the true opinions of the general public, on which people could accurately
base their perceptions of the general opinion climate” (2010, 250). After all, news media
play a crucial role in educating the public on not only how to make sense of what
“other people” are thinking, but also how to make sense of science (Hansen 2009).

Impersonal Influence
Rather than focusing on direct persuasive influences of mass media (e.g., agenda-setting or
priming theories), impersonal influence is concerned with “the capacity for presentation of
collective opinion or experience [in media] to trigger social influence processes” (Mutz 1998,
4). Its focus is on how people think or behave as a result of their mediated perception of how
the public thinks or behaves (Mutz 1994; Mutz and Soss 1997), which is consistent with the
focus in other theoretical frameworks, such as the bandwagon effect, the spiral of silence,
and the cognitive response model (Marsh 1985; Glynn, Hayes, and Shanahan 1997;
Noelle-Neumann 1974; Mutz 1997). In Mutz’s words, “Media are tremendously influential
in telling people what others are thinking about and experiencing. These perceptions, in
return, have important consequences for the political behavior of mass publics and political
elites as well” (Mutz 1998, 5), since what is reported in the news media may be interpreted as
what is accepted or expected in society regardless of its empirical validity (e.g., Asch 1956;
McCombs 2014; Mutz 1998; Nolan et al. 2008; Katz 1981). In their study of the effect of polls
on decision-making, Boudreau and McCubbins (2010, 522) found that polls strongly
influence decision-making, and that “the direction of the majority (regardless of whether it
is correct or incorrect) exerts an enormous influence on subjects’ decisions, while the size
of the majority does not” (emphasis in original). Other studies have found that while
people are generally influenced by perceived public consensus, the strength of such
effects is greater when people are either more personally committed to the issue (Kaplowitz
et al. 1983), or lack information on perceived normative consensus (Kassin 1979).
To test the impersonal influence of polls, experimental studies tend to use published
poll information as the independent variable and perceived public opinions as dependent
variables (e.g., Mutz 1992b; Sonck and Loosveldt 2010; Mutz 1992a, 1998). The effect of
polls on public opinion is contingent on a number of things. For example, several
studies have found poll-effects to be moderated by opinion strength, political interest,
and perception of poll influence on one’s own opinion (Gunther 1998; Hardmeier 2008;
Zaller 1992).
A number of cognitive and psychological processes have also been examined as poss-
ible explanations of poll-effects (e.g., pluralistic ignorance, spiral of silence, false consensus
effects, cognitive response theories, and consensus heuristics, etc.) (Mutz 1992b; Sonck
and Loosveldt 2010). While the antecedents and consequences of poll-effects are contin-
gent on a number of factors, there is a collective understanding that “people could use
information cues from a message, for example, by perceiving a social consensus as the
‘correctness’ of an opinion, and hence by assuming that many people holding a similar
opinion cannot be wrong” (Sonck and Loosveldt 2010, 236). Especially if published poll results are interpreted as the “majority’s opinion,” they may influence people’s own opinions due to conformity (Asch 1956) or other factors.
While impersonal influence of public opinion polls is established in the literature, what
remains unexplored is whether the perception of how knowledgeable poll participants are
—with regard to the specific issue being surveyed—moderates impersonal influence.
Specifically, the experimental portion of this study seeks to address this question: Will
information about how knowledgeable the Pew survey respondents are of the Apple/
FBI dispute affect people’s perception of public opinion surrounding the dispute?
Given that impersonal influence often occurs when individuals treat public opinion
polls as an indicator of what “most other people” (e.g., the public) think, it is reasonable
to assume that the more knowledgeable the Pew survey respondents appear, the more
likely that the audiences assume the survey finding represents public opinions on the
issue. To test this, we propose the following two hypotheses, whose purpose is to tease
out nuanced impacts of how different portrayals of participant knowledge, and the lack
thereof, moderate the effects of impersonal influence.
H1: News audiences are significantly more likely to perceive survey findings as representative
of public opinion if they were told that most respondents had heard “a lot or a little” about the
issue in question prior to taking the survey than if they were told that most had heard “a little
or nothing at all” about it.

H2: News audiences are significantly more likely to perceive survey findings as representative
of public opinion if they were not given any information on whether the survey participants
had heard about the issue in question than if they were told that most had heard “a little or
nothing at all” about it.

Additionally, while H1 and H2 assume that “% of Pew participants’ having heard of the
dispute” prior to taking the survey influences people’s perception of how knowledgeable
the Pew respondents are, the next hypothesis puts this assumption to the test:
H3: Those who were told that most survey respondents had heard “a lot or a little” about the
issue at hand are most likely to assume that the respondents are knowledgeable of the issue,
compared with (a) those who received no information on whether the respondents had heard
about the issue, and (b) those who were told that most respondents had heard “a little or
nothing at all” about the issue. The latter (b) are the least likely to assume that the respondents
are knowledgeable of the issue.

Furthermore, since impersonal influence (Mutz 1992b, 1998; Sonck and Loosveldt 2010) posits that public poll and survey findings affect only people’s percep-
tion of public opinions rather than personal opinions, the following null hypothesis is
proposed:
H4: Those in the three experimental groups will not differ significantly on whether they per-
sonally think Apple should unlock the iPhone for the FBI.

Method
Study 1: Content Analysis
To examine how the press talks about large-scale survey findings, study 1 used the Apple/
FBI encryption dispute as a case study. For data collection, this study used LexisNexis Aca-
demic and collected articles from all available English-language newspapers worldwide using the
Boolean search phrase “Apple AND FBI AND Pew OR poll.” This search phrase was arrived at after a number of trials and was found to maximize precision (e.g., it excluded
news stories on Apple or FBI that have nothing to do with the issue at hand). The Pew
survey report was published on February 22, 2016, and the dispute ended on March 28,
2016 when the FBI announced that it unlocked the iPhone with the assistance of a third
party. To cast a wider net for news coverage of the Apple/FBI encryption dispute, this study
collected all published news stories between February 21 and April 21, 2016. Overall, 179
news articles were collected. After removing unfit articles from the dataset (e.g., stories
that included the key words but do not relate to the dispute, or law reviews instead of
news articles), 152 news articles remained, and 75% of the data were randomly
sampled for analysis (N = 114). The articles analyzed are from 49 different outlets. Two
thirds, or 76, of the articles are from 26 American outlets, 16 articles from U.K. outlets, 8
from Canadian outlets, 6 from Australian outlets, 2 from Irish outlets, and 6 from other
countries – China, India, Malta, Saudi Arabia, Turkey, and United Arab Emirates (an
article from each country). The outlets have divergent political leanings, including
outlets that can be considered liberal (e.g., The New York Times, and Los Angeles Times),
conservative (e.g., The Wall Street Journal, and The Florida Times-Union), and more centrist
(such as USA Today); as well as outlets that do not endorse political candidates (such as The
Christian Science Monitor).
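As a rough illustration of the corpus-construction step described above, the Python sketch below mimics the date filtering and 75% random sampling. It is not the authors’ actual script: the record structure, outlet name, and seed are assumptions introduced only to make the sketch runnable.

```python
import random
from datetime import date

# Hypothetical records standing in for the screened LexisNexis results;
# in the study, 152 relevant articles remained after manual screening.
articles = [
    {"outlet": "Example Daily", "published": date(2016, 2, 23), "text": "..."},
    # ... remaining screened articles ...
]

# Keep only stories published within the collection window described above.
window = [a for a in articles
          if date(2016, 2, 21) <= a["published"] <= date(2016, 4, 21)]

# Randomly sample 75% of the screened corpus for coding (152 -> 114 in the study).
random.seed(42)  # arbitrary seed, used here only for reproducibility
coding_sample = random.sample(window, round(len(window) * 0.75))
```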

Codebook
See Table A1.

Pretests
Two researchers from two large research universities in the Southwestern U.S. were trained for the pretest, and two sets of pretests were conducted. Fifteen percent of the sample was randomly selected for each pretest. For statistical rigor, Krippendorff’s alpha was calculated for all variables that require subjective coding (i.e., excluding objective variables such as publication date). Krippendorff’s alpha is a conservative reliability test that adjusts for chance agreement. As Table A2 (see Appendix) suggests, all of the variables are above the conventional standard of .80 (Krippendorff 2004).
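For readers who wish to run this kind of reliability check themselves, a minimal Python sketch using the open-source krippendorff package is shown below. The package choice is an assumption (the authors do not state which software they used), and the coder judgments are fabricated purely to illustrate the calculation; the study’s actual coefficients appear in Table A2.

```python
# pip install krippendorff numpy
import numpy as np
import krippendorff

# Each row is one coder's judgments for the same set of articles on a single
# binary variable (1 = yes, 0 = no), e.g., "does the article mention the Pew
# survey?". These values are invented for illustration only.
coder_a = [1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 1, 0, 0, 1, 1, 1]
coder_b = [1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 1, 0, 0, 0, 1, 1, 1]

reliability_data = np.array([coder_a, coder_b])

# Nominal-level alpha is appropriate for categorical yes/no codes.
alpha = krippendorff.alpha(reliability_data=reliability_data,
                           level_of_measurement="nominal")
print(f"Krippendorff's alpha: {alpha:.2f}")  # values >= .80 meet the conventional standard
```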

Study 2: Between-Subject Experiment


To test the ways in which press coverage of Pew’s survey on the Apple/FBI dispute affects
audience reception, two articles culled from Study 1 were used to create a between-
subject experiment (N = 672). The experiment was conducted on Amazon.com’s Mechan-
ical Turk (MTurk), an opt-in digital tool that provides a self-selected group of participants
for small tasks in exchange for payment. Participants were paid $1 each. They were
between 18 and 77 years old, and the average age is 39 (SD = 12.78). A little over half
(53%) are women, and the largest share of the sample are college graduates (43.4%), followed by 28.4% with some college education. Most of them (93%) have a smartphone: 54% of the sample own something other than an iPhone and 39% own an iPhone.

Independent variable
To test whether knowing about the Pew respondents’ familiarity with the Apple/FBI
dispute affects the ways in which people interpret and are influenced by its survey
findings, three conditions were created for this between-subject test. The control group
(A) echoes the ways in which a majority of the news stories examined in study 1
covered the dispute; it does not acknowledge whether the Pew respondents had heard
of the Apple/FBI dispute prior to answering the survey. The first experimental group (B)
reflects a minority of the news stories’ portrayal, as found in Study 1, by noting that
“most Pew survey respondents have heard of the dispute – 75% told Pew they’d heard
either a lot (39%) or a little (36%)” of the dispute. The second experimental group (C)
tests the effect of an alternative portrayal of participant knowledge. Specifically, this con-
dition specified that “most Pew survey respondents have not heard of the dispute – 60%
told Pew they’d heard either nothing at all (24%) or a little (36%)” of the dispute. The stat-
istics in groups B and C reflect different ways of slicing Pew’s topline questionnaire on par-
ticipant knowledge.
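The two experimental framings simply collapse the same Pew topline distribution in opposite directions, as the short calculation below makes explicit (the percentages are those reported in this article; the snippet is illustrative only).

```python
# Pew topline for "how much have you heard about the court order?" (in %),
# as reported in this article.
heard = {"a lot": 39, "a little": 36, "nothing at all": 24, "don't know/refused": 1}

# Condition B collapses the distribution to frame respondents as informed.
informed_framing = heard["a lot"] + heard["a little"]             # 75: "a lot or a little"

# Condition C collapses the same numbers in the other direction.
uninformed_framing = heard["a little"] + heard["nothing at all"]  # 60: "a little or nothing at all"

print(informed_framing, uninformed_framing)  # 75 60
```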

Manipulation check
One-way analysis of variance was conducted, and the results revealed that the manipu-
lation was successful, F(2, 668) = 238.84, p < .001. Specifically, those in group B were
most likely to agree that the Pew respondents had heard about the Apple/FBI dispute, fol-
lowed by group A and group C (see Table A3 in Appendix).

Dependent Variables
Perceived participant knowledge
Experimental participants’ perception of how knowledgeable Pew participants were of the Apple/FBI dispute prior to participating in the survey is measured by an index of three Likert-scale items (α = .89): “A majority of those surveyed by Pew were knowledgeable of the Apple/FBI dispute before voicing their opinions,” “I believe that a majority of those surveyed by Pew understood what the dispute was about before giving their opinions,” and “I am confident that a majority of those surveyed by Pew knew what they were talking about” (1 = strongly disagree; 5 = strongly agree).

Impersonal influence
Two measures are used to assess impersonal influence (i.e., a perceptual disconnect between one’s perception of public opinion and one’s own opinion). Adopting the same wording used in the Pew survey, one’s perception of public opinion is measured by the statement “A majority of Americans think Apple should have unlocked the iPhone,” whereas one’s own opinion is measured by “I think Apple should have unlocked the iPhone” (1 = strongly disagree; 5 = strongly agree).
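A minimal sketch of how such a summed three-item index and its internal consistency (Cronbach’s alpha) can be computed is shown below. The response matrix is fabricated for illustration, and the helper implements the standard textbook formula; it is not the authors’ code.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Standard Cronbach's alpha for an (n_respondents x n_items) matrix."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Fabricated 5-point Likert responses to the three perceived-knowledge items.
responses = np.array([
    [4, 4, 5],
    [3, 3, 3],
    [5, 4, 4],
    [2, 2, 3],
    [4, 5, 4],
])

alpha = cronbach_alpha(responses)        # internal consistency of the 3-item index
knowledge_index = responses.sum(axis=1)  # summed index used as the dependent variable
print(round(alpha, 2), knowledge_index)
```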

Results
Study 1: Content Analysis
The first research question (RQ1) asked about the extent to which news articles on the
Apple/FBI dispute made claims about the public’s perception of the dispute. The vast
majority of the articles (73.7%) made such claims. For example: “Americans are deeply
divided about the legal struggle between the government and one of the nation’s most
iconic companies” (Shear, Sanger, and Benner 2017, para. 16), and “… the public appears to be siding with the FBI” (Carollo 2016, para. 6).
The second research question (RQ2) asked how the Pew survey was discussed in the
news media surrounding the Apple/FBI dispute. The survey was mentioned in 43% of
the examined articles. Examples: “ … polling by Pew found that the public are on the
side of the FBI” (Thielman 2016, para. 6) and “ … a Pew Research Center poll released
Monday found that 51 percent of respondents felt Apple should unlock the iPhone,
with only 38 percent saying it should not unlock the phone to ensure the security of its
other users’ information … ” (May 2016, para. 10).
The third research question (RQ3) asked how the first survey question from Pew’s report
was reported by the news media that covered the Pew survey on the Apple/FBI dispute. That
survey question asks “Whether [Pew respondents] have heard about the federal court order-
ing Apple to help the FBI unlock an iPhone used by one of the suspects in the San Bernar-
dino terrorist attacks.” Very few articles (2.6%) mentioned findings from that question.
The fourth research question (RQ4) asked how the second survey question from Pew’s
report was reported by the news media that covered the Pew survey. That question asks
“Whether [Pew respondents] think Apple should unlock the iPhone or not.” Findings from
that question were mentioned in 42.1% of the examined articles.
Finally, the fifth research question (RQ5) asked about the degree to which the news
media reported on other public polls surrounding the Apple/FBI dispute. Other polls,
such as Wall Street Journal/NBC News and Reuters/Ipsos polls, were mentioned in about
a fifth of the articles (21.9%).1

Study 2: Online Experiment


H1 hypothesized that news audiences are significantly more likely to perceive survey findings as representative of public opinion if they were told that most respondents had heard “a lot or a little” about the issue in question prior to taking the survey than if they were told that most had heard “a little or nothing at all” about it. An independent-samples t-test was conducted, and H1 is supported, t(435) = 1.82, p < .05, M = 3.96, SD = .96 versus M = 3.78, SD = 1.07.
H2 hypothesized that news audiences are significantly more likely to perceive survey findings as representative of public opinion if they were not given any information on whether the survey participants had heard about the issue in question than if they were told that most had heard “a little or nothing at all” about it. An independent-samples t-test was conducted, and H2 is supported, t(455) = 3.26, p < .001, M = 4.10, SD = .09 versus M = 3.78, SD = 1.07.
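For readers unfamiliar with these statistics, the sketch below shows how an independent-samples t-test of this kind can be run with SciPy. The data are simulated from the reported means and standard deviations; the group sizes (215 + 222 = 437, consistent with t(435)) and the use of a directional test are assumptions, since the raw experimental data are not public.

```python
# pip install numpy scipy   (SciPy >= 1.6 for the `alternative` argument)
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated 5-point ratings of "the survey findings represent public opinion."
group_b = rng.normal(loc=3.96, scale=0.96, size=215).clip(1, 5)  # heard "a lot or a little"
group_c = rng.normal(loc=3.78, scale=1.07, size=222).clip(1, 5)  # heard "a little or nothing"

# Directional independent-samples t-test: group B expected to score higher than C.
t_stat, p_value = stats.ttest_ind(group_b, group_c, alternative="greater")
print(f"t = {t_stat:.2f}, one-tailed p = {p_value:.3f}")
```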
H3 hypothesized that those who were told that most survey respondents had heard “a
lot or a little” about the issue at hand are most likely to assume that the respondents are
knowledgeable of the issue at hand, followed by those who received no such information,
and those who were told that most respondents had heard “a little or nothing at all.” One-
way analysis of variance was conducted, and H3 is supported, F(2, 668) = 139.42, p < .001.
Specifically, pairwise comparisons with Bonferroni correction among the three experimental groups revealed that all group differences are statistically significant (see Table A3 in the Appendix).
H4, a null hypothesis that stems from impersonal influence, predicts no difference
among the three experimental groups regarding experimental participants’ personal
view towards the Apple/FBI dispute. One-way analysis of variance was conducted, and
no statistical significance was observed, F(2,668) = 1.46, p = .26. H4 is supported.
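The omnibus tests and Bonferroni-corrected pairwise comparisons reported for the manipulation check, H3, and H4 follow the same general recipe, sketched below with SciPy on data simulated from the means and standard deviations in Table A3. The group sizes (summing to N = 671) are assumptions, and the sketch is illustrative rather than a reproduction of the authors’ analysis.

```python
# pip install numpy scipy
from itertools import combinations

import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Simulated perceived-knowledge index scores for the three conditions.
groups = {
    "A (no information)": rng.normal(10.38, 2.99, 234),
    "B (heard a lot/a little)": rng.normal(11.06, 2.93, 215),
    "C (heard little/nothing)": rng.normal(6.71, 2.93, 222),
}

# Omnibus one-way ANOVA across the three conditions.
f_stat, p_value = stats.f_oneway(*groups.values())
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")

# Pairwise comparisons with a Bonferroni correction for the three tests.
n_tests = 3
for (name_1, g1), (name_2, g2) in combinations(groups.items(), 2):
    t, p = stats.ttest_ind(g1, g2)
    print(f"{name_1} vs {name_2}: Bonferroni-corrected p = {min(p * n_tests, 1.0):.4f}")
```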

Discussion
Findings from this study carry both theoretical and practical implications. On the theoreti-
cal side, this study advances our understanding of impersonal influence by finding that the
perceived knowledge of Pew survey respondents significantly affects the extent to which
average news audiences believe the poll represents public opinion, but has no effect on
their personal opinions. In particular, the more knowledgeable the survey participants
appear, the more likely that average news audiences assume that the poll represents
public opinion. Future studies are encouraged to explore just how knowledgeable most
people expect survey respondents to be on issues being surveyed, as this may enrich
our theoretical understanding of the moderating effect of perceived knowledge on imper-
sonal influence.
On the practical side, by integrating findings from the content analysis and online exper-
iment, this study found that a significant portion of the examined news articles on the
Apple/FBI dispute misinformed the readers regarding the American public’s view on the
Apple/FBI dispute, and that this misrepresentation significantly affected the news readers’ perception of public opinion. This is disconcerting, because it suggests that a majority of Western mainstream news media might have inadvertently created an unsubstantiated social reality where “the American public supports unlocking the iPhone.” In turn, this uncorroborated, socially constructed reality may alter the direction of the broader national debate on data privacy in the future if the industry-wide mistake is left uncorrected.
Specifically, most news articles (73.7%) made claims about the American public’s view
on the Apple/FBI dispute, which carries second-level agenda-setting implications
(McCombs 2014), although nearly half (45.6%) did not include any poll result to substanti-
ate such normative claims.2 In other words, even if more diligent news readers wanted to
fact-check the validity of the public opinion claims made in most news articles on the
Apple/FBI dispute, such as “ … the public appears to be siding with the FBI” (Carollo
2016, para. 6), they would not even know where to begin, because a significant portion
of these news articles did not provide any polling information.
When it comes to news articles that incorporated Pew’s survey findings in their discussion
of public opinion on the issue, nearly all of the articles (97.4%) did not report on how knowledgeable the Pew survey respondents were of the Apple/FBI dispute, even though such information appeared on the same page as the public opinion question that all these news articles reported on. This is troubling because, as the experiment showed, this piece of
omitted information significantly influenced average news readers’ perception of how
much “the public” supported the FBI’s request to unlock the iPhone in this dispute.
In other words, with regard to the ways in which mainstream news media portrayed the public’s opinion on the Apple/FBI dispute, which were predominantly less than ideal, a majority of Western news media appear to have inadvertently manufactured a false sense of collective social consent that prioritized national security over data privacy, a consensus that is unsubstantiated at best by the Pew survey the examined articles relied on. Additionally, not only did the news media appear to exert false impersonal influence on their readers regarding the Apple/FBI dispute, but the fact that all these news articles have now been indexed and archived on the Web means such misleading characterizations of the American public’s view on the dispute may persist indefinitely.
Among the very few articles (2.6%) that did report on whether the Pew respondents
were knowledgeable of the dispute prior to taking the survey, all of them collapsed
Pew’s findings in a way that suggested that the Pew respondents were more knowledge-
able of the dispute than otherwise. Specifically, whereas the Pew topline questionnaire
reported that “39% had heard a lot, 36% a little, and 24% not at all” about the Apple/
FBI dispute, all of these articles portrayed the findings as “75% had heard ‘a lot’ or ‘a
little’” rather than “60% had heard ‘a little’ or ‘not at all’” about the dispute. The subsequent
experiment further found that different portrayals of the Pew respondents’ knowledge of
the Apple/FBI dispute significantly affected the news readers’ perception of public opinion
on the dispute upon reading the news article. This is understandable, as it does not make
much intuitive sense for journalists to report on a poll finding where a majority of the
respondents were unfamiliar with the issue at hand. Nonetheless, from a journalistic per-
spective, particularly since the news media have the power to shape the way the audi-
ences see and interpret the world around them (Lippmann 1998; McCombs 2014; Mutz
1998), this needs to be addressed. Moreover, perhaps the more urgent question that
ought to be asked is, how knowledgeable should average survey respondents be before
such survey or poll findings make it into the news?
Future studies are encouraged to conduct qualitative research with authors of these
news reports to have a richer understanding of different factors that might have contrib-
uted to the ways in which these news stories are produced, as well as conduct further
empirical studies assessing the real public sentiment (e.g., opinions from those who
are knowledgeable of the issue being surveyed) on the Apple/FBI dispute before we
arrive at a normative understanding of how the American people really think about
the dispute. Aside from the empirical need to accurately and ethically assess public
climate regarding this dispute, mainstream news organizations are also encouraged to
address this issue systematically. Specifically, those that published mischaracterizing
articles on the Apple/FBI dispute are encouraged to issue corrections. After all, given
the important role mainstream news media continue to play in shaping how the audi-
ences think about what is important to society (McCombs 2014) and how society
thinks (Lippmann 1998; Mutz 1998), their power in shaping how both the current and
future generations think about our collective attitude towards socio-political issues
cannot be ignored.
For those looking for specific guidelines on how to accurately and meaningfully cover
polls, a number of reputable sources offer free resources on the Web (e.g., AAPOR 2018;
Kelly 2015; Krueger 2016; Ordway 2018). Just as a few of the guidelines have suggested the importance of covering the “don’t knows” (e.g., when respondents haven’t made up their minds yet), findings from this case study suggest that one new item should be added to existing guidelines on how to cover poll numbers in the news: Are the poll participants reasonably knowledgeable of the issue being surveyed? Particularly given that, as this study’s experimental findings suggest, how news articles talk about poll results affects audiences’ perception of public opinion, journalists should be careful not to present an incomplete picture of what the public thinks.
In an ideal world, public opinion surveys should always ask about the respondents’ general understanding of the issue being surveyed, and journalists should always report on such findings for contextual purposes. However, in reality, it may well be the case that not all public opinion surveys ask such questions, in which case the best journalists can do is to raise questions regarding the survey participants’ potential knowledge of the issue being surveyed.
Moving beyond the Apple/FBI dispute, future studies are also encouraged to examine
whether the same pattern—both in terms of whether and how mainstream media utilize
polls to make public opinion claims, and the subsequent influence of their media narrative
on audience perception—is generalizable to other news events. Since this case study
focuses strictly on the Apple/FBI dispute, readers are encouraged to interpret the
findings with caution. Specifically, it should be noted that although findings from this
study may be generalized to examine the role and impact of journalism in other moral
dilemmas, this mixed-method study is ultimately limited in its generalizability in that
the experimental findings are strictly based on the exact ways in which the Apple/FBI
dispute was reported in the news. Nonetheless, this mixed-method approach may serve
as a stepping stone for our collective understanding of the causal impact that news report-
ing has on audiences’ perception of public opinion.
How did the American public really think about the Apple/FBI dispute? Did most Americans, as most news articles reported, side with the FBI? Based on this study’s findings, the
only answer for certain is: We don’t know, yet.

Notes
1. Over half (54.5%) of all news articles on the Apple/FBI dispute included at least one poll.
2. Among the 73.7% of news articles that made claims about public opinion, 27.4% did not include poll results of any kind.

Acknowledgement
The authors would like to thank Dean Anne Balsamo for funding this study through the faculty
research grant, as well as M Tsai and anonymous reviewers for their helpful feedback on this study.

Disclosure Statement
No potential conflict of interest was reported by the authors.

References
AAPOR. 2018. “Journalist Cheat Sheet to Understanding Polls.” https://twitter.com/AAPOR/status/
996456680861859845.
American Association for Public Opinion Research. 2009. “Survey Disclosure Checklist.” May 13, 2009.
http://www.aapor.org/Standards-Ethics/AAPOR-Code-of-Ethics/Survey-Disclosure-Checklist.aspx.
Asch, Solomon. 1956. “Studies of Independence and Conformity: A Minority of One Against a
Unanimous Majority.” Psychological Monographs: General and Applied 70 (January), doi:10.1037/
h0093718.
Bhatti, Yosef, and Rasmus Tue Pedersen. 2016. “News Reporting of Opinion Polls: Journalism and
Statistical Noise.” International Journal of Public Opinion Research 28 (1): 129–141. doi:10.1093/
ijpor/edv008.
Boudreau, Cheryl, and Mathew D. McCubbins. 2010. “The Blind Leading the Blind: Who Gets Polling
Information and Does It Improve Decisions?” The Journal of Politics 72 (2): 513–527. doi:10.1017/
S0022381609990946.
Brettschneider, Frank. 2008. “The News Media’s Use of Opinion Polls.” In The SAGE Handbook of Public
Opinion Research, edited by Wolfgang Donsbach, and Michael W. Traugott, 479–486. Thousand
Oaks, California: SAGE Publications Ltd.
Carollo, Malena. 2016. “Privacy Advocates Plan Nationwide Rallies to Back Apple in IPhone Case.”
Christian Science Monitor, February 23, 2016. https://www.csmonitor.com/World/Passcode/2016/
0223/Privacy-advocates-plan-nationwide-rallies-to-back-Apple-in-iPhone-case.
Chang, Cindy. 2015. “San Bernardino Shootings Cast a Somber Tone over Muslim Conference in
Chino.” Los Angeles Times, December 26, 2015. http://www.latimes.com/local/california/la-me-
1227-islamic-convention-20151227-story.html.
Chen, Wenhong, Anabel Quan-Haase, and Yong Jin Park. 2018. “Privacy and Data Management: The
User and Producer Perspectives.” American Behavioral Scientist July, 0002764218791287. doi:10.
1177/0002764218791287.
Cook, Tim. 2016. “A Message to Our Customers.” Apple. February 16, 2016. http://www.apple.com/
customer-letter/.
Couldry, Nick. 2003. Media Rituals: A Critical Approach. New York, NY: Routledge.
Couldry, Nick. 2005. “Transvaluing Media Studies: Or, Beyond the Myth of the Mediated Center.” In
Media and Culture Theory, edited by James Curran, and David Morley, 177–194. Abingdon, U.K.:
Routledge.
Dayan, Daniel, and Elihu Katz. 1994. Media Events: The Live Broadcasting of History. Revised Edition.
Cambridge, Mass.: Harvard University Press.
Glynn, Carroll J., Andrew F. Hayes, and James Shanahan. 1997. “Perceived Support for One’s Opinions
and Willingness to Speak out: A Meta-Analysis of Survey Studies on the ‘Spiral of Silence.’.” Public
Opinion Quarterly 61 (3): 452–463.
Gunther, Albert C. 1998. “The Persuasive Press Inference: Effects of Mass Media on Perceived Public
Opinion.” Communication Research 25 (5): 486–504. doi:10.1177/009365098025005002.
Guo, Lei. 2016. “A Theoretical Explication of the Network Agenda Setting Model: Current Status and
Future Directions.” In The Power of Information Networks: New Directions for Agenda Setting, edited
by Lei Guo, and Maxwell McCombs, 3–18. New York, NY: Routledge.
Hansen, Anders. 2009. “Science, Communication and Media.” In Investigating Science Communication
in the Information Age, edited by Richard Holliman, Elizabeth Whitelegg, Eileen Scanlon, Sam
Smidt, and Jeff Thomas, 106–127. New York, NY, USA: Oxford University Press, Inc.
Hardmeier, Sibylle. 2008. “The Effects of Published Polls on Citizens.” In The SAGE Handbook of Public
Opinion, edited by Wolfgang Donsbach, and Michael W. Traugott, 504–514. Thousand Oaks,
California: SAGE Publications Ltd.
Hauser, Marc. 2006. Moral Minds: How Nature Designed Our Universal Sense of Right and Wrong. 1st ed.
New York, NY: Ecco.
Kaplowitz, Stan A., Edward L. Fink, Dave D’Alessio, and G. Blake Armstrong. 1983. “Anonymity,
Strength of Attitude, and the Influence of Public Opinion Polls.” Human Communication
Research 10 (1): 5–25. doi:10.1111/j.1468-2958.1983.tb00002.x.
Kassin, Saul. 1979. “Consensus Information, Prediction, and Causal Attribution; A Review of the
Literature and Issues.” Journal of Personality and Social Psychology 37 (11): 1966–1981. doi:10.
1037/0022-3514.37.11.1966.
Katz, Elihu. 1981. “Publicity and Pluralistic Ignorance: Notes on ‘The Spiral of Silence.’.” In Public
Opinion and Social Change: For Elisabeth Noelle-Neumann, edited by Horst Baier, Marthias
Kepplinger, and Kurt Reumann, 22–38. Wiesbaden, Germany: Westdeutscher Verlag.
Kelly, Nora, and Brian Resnick. 2015. “The Pollsters’ Guide to Reporting on Polls.” The Atlantic. June 16, 2015. https://www.theatlantic.com/politics/archive/2015/06/the-pollsters-guide-to-reporting-on-polls/448410/.
Kiousis, Spiro, Philemon Bantimaroudis, and Hyun Ban. 1999. “Candidate Image Attributes:
Experiments on the Substantive Dimension of Second Level Agenda Setting.” Communication
Research 26 (4): 414–428. doi:10.1177/009365099026004003.
Krippendorff, Klaus. 2004. “Reliability in Content Analysis: Some Common Misconceptions and
Recommendations.” Human Communication Research 30 (3): 411–433. doi:10.1111/j.1468-2958.
2004.tb00738.x.
Krueger, Vicki. 2016. “5 Guidelines for Writing about Poll Numbers.” Poynter. October 18, 2016.
https://www.poynter.org/news/5-guidelines-writing-about-poll-numbers.
Lippmann, Walter. 1998. Public Opinion. 2nd ed. New Brunswick, NJ: Transaction Publishers.
Madianou, Mirca. 2005. Mediating the Nation: News, Audiences and the Politics of Identity. New York,
NY: Routledge.
Marsh, Catherine. 1985. “Back on the Bandwagon: The Effect of Opinion Polls on Public Opinion.”
British Journal of Political Science 15 (1): 51–74. doi:10.1017/S0007123400004063.
Martin, Vivian B. 2008. “Attending the News: A Grounded Theory About a Daily Regimen.” Journalism:
Theory, Practice & Criticism 9 (1): 76–94. doi:10.1177/1464884907084341.
Matthews, J. Scott, Mark Pickup, and Fred Cutler. 2012. “The Mediated Horserace: Campaign Polls and Poll
Reporting.” Canadian Journal of Political Science 45 (2): 261–287. doi:10.1017/S0008423912000327.
May, Patrick. 2016. “Public Debate Intensifies around Apple CEO Tim Cook’s Refusal to Help the FBI
Unlock a Terrorist’s IPhone.” The Mercury News (blog). February 22, 2016. https://www.
mercurynews.com/2016/02/22/public-debate-intensifies-around-apple-ceo-tim-cooks-refusal-to-
help-the-fbi-unlock-a-terrorists-iphone/.
McCombs, Maxwell, Esteban Lopez-Escobar, and Juan Pablo Llamas. 2000. “Setting the Agenda of
Attributes in the 1996 Spanish General Election.” Journal of Communication 50 (2): 77–92. doi:10.
1111/j.1460-2466.2000.tb02842.x.
McCombs, Maxwell. 2014. Setting the Agenda: Mass Media and Public Opinion. 2nd ed. Cambridge, UK:
Polity Press.
McCombs, Maxwell, and Donald Shaw. 1972. “The Agenda-Setting Function of Mass Media.” Public
Opinion Quarterly 36 (2): 176–187.
Merrill, John C. 1996. “Overview: Foundations for Media Ethics.” In Controversies in Media Ethics,
edited by David Gordon, John M. Kittross, and Carol Reuss, 1–25. White Plains, New York:
Longman Publishers USA.
Meyer, Philip. 2002. Precision Journalism: A Reporter’s Introduction to Social Science Methods. 4th ed.
Lanham, MD: Rowman & Littlefield Publishers.
Mutz, Diana. 1992a. “Mass Media and the Depoliticization of Personal Experience.” American Journal
of Political Science 36 (2): 483–508. doi:10.2307/2111487.
Mutz, Diana. 1992b. “Impersonal Influence: Effects of Representations of Public Opinion on Political
Attitudes.” Political Behavior 14 (2): 89–122. doi:10.1007/BF00992237.
Mutz, Diana. 1994. “The Political Effects of Perceptions of Mass Opinion.” In Research in Micropolitics:
New Directions in Political Psychology, edited by Michael X Delli Carpini, Leonie Huddy, and Robert
Shapiro, 143–167. Greenwich, CT: JAI Press.
Mutz, Diana. 1997. “Mechanisms of Momentum: Does Thinking Make It So?” The Journal of Politics 59
(1): 104–125. doi:10.2307/2998217.
Mutz, Diana. 1998. Impersonal Influence: How Perceptions of Mass Collectives Affect Political Attitudes.
Cambridge, England: Cambridge University Press.
Mutz, Diana, and Joe Soss. 1997. “Reading Public Opinion: The Influence of News Coverage on
Perceptions of Public Sentiment.” Public Opinion Quarterly 61 (3): 431–451.
National Cyber Security Alliance. 2018. “About Data Privacy Day.” https://staysafeonline.org/wp-
content/uploads/2018/03/2018-DPD-Report.pdf.
Noelle-Neumann, Elisabeth. 1980. “The Public Opinion Research Correspondent.” Public Opinion
Quarterly 44 (4): 585–597. doi:10.1086/268626.
Noelle-Neumann, Elisabeth. 1974. “The Spiral of Silence a Theory of Public Opinion.” Journal of
Communication 24 (2): 43–51. doi:10.1111/j.1460-2466.1974.tb00367.x.
Nolan, Jessica M., P. Wesley Schultz, Robert B. Cialdini, Noah J. Goldstein, and Vladas Griskevicius.
2008. “Normative Social Influence Is Underdetected.” Personality & Social Psychology Bulletin 34
(7): 913–923. doi:10.1177/0146167208316691.
Ordway, Denise-Marie. 2018. “11 Questions Journalists Should Ask about Public Opinion Polls.”
Journalist’s Resource of the Shorenstein Center on Media, Politics and Public Policy (blog). June 14,
2018. https://journalistsresource.org/tip-sheets/reporting/public-opinion-polls-tips-journalists.
Shear, Michael D., David E. Sanger, and Katie Benner. 2017. “In the Apple Case, a Debate over Data
Hits Home.” The New York Times, December 21, 2017, sec. Technology. https://www.nytimes.com/
2016/03/14/technology/in-the-apple-case-a-debate-over-data-hits-home.html.
Sonck, N., and G. Loosveldt. 2010. “Impact of Poll Results on Personal Opinions and Perceptions of
Collective Opinion.” International Journal of Public Opinion Research 22 (2): 230–255. doi:10.
1093/ijpor/edp045.
Sun, Zhaohao, Kenneth David Strang, and Francisca Pambel. 2018. “Privacy and Security in the Big
Data Paradigm.” Journal of Computer Information Systems 0 (0): 1–10. doi:10.1080/08874417.
2017.1418631.
Takeshita, Toshio, and Shunji Mikami. 1995. “How Did Mass Media Influence the Voters’ Choice in the
1993 General Election in Japan? A Study of Agenda Setting.” Keio Communication Review 17: 27–
41.
Thielman, Sam. 2016. “Apple’s Battle with the FBI: Who’s Supporting Them – and Who’s Not?” The
Guardian, February 29, 2016, sec. Technology. https://www.theguardian.com/technology/2016/
feb/29/apple-fbi-encryption-battle-supporters-technology-politics.
Weaver, David H., Doris Graber, Maxwell E. McCombs, and Chaim Eyal. 1981. Media Agenda-Setting in
a Presidential Election: Issues, Images, and Interest. Westport, CT: Greenwood.
Zaller, John. 1992. The Nature and Origins of Mass Opinion. Cambridge, England: Cambridge
University Press.

Appendix A
Table A1. Codebook from content analysis.

1. Public Opinion
   Description: Does the article draw conclusions about public opinion on the case?
   Categories: 1 = Yes (e.g., the article mentioned that the “public” or “Americans” thinks X, Y, or Z about the issue; in other words, the article used survey results to assume it represents public opinion); 0 = No (e.g., the article refers to the dispute but does not comment on public opinion of the case).

2. Pew Survey
   Description: Does the article mention the Pew survey fielded February 18–21, 2016, either by mentioning Pew or explicitly referring to its findings? [e.g., that “51% said Apple should unlock the iPhone,” “38% said it should not unlock the iPhone,” or “11% don’t know/refused”; or that “39% have heard ‘a lot’ about the Apple/FBI case, 36% ‘a little,’ 24% ‘nothing at all,’ 1% ‘don’t know/refused’”]
   Categories: 1 = Yes; 0 = No.

3. Knowledge Question From Pew
   Description: If yes on “2” – does the article mention findings from the 1st question: “Have you heard about a federal court ordering Apple to help the FBI unlock an iPhone used by one of the suspects in the San Bernardino terrorist attacks?” [Findings show that “39% have heard ‘a lot,’ 36% ‘a little,’ 24% ‘nothing at all,’ 1% ‘don’t know/refused’”]
   Categories: 1 = Yes; 0 = No.

4. Opinion Question From Pew
   Description: If yes on “2” – does it talk about findings from the 2nd question: “Do you think Apple should give FBI access to the iPhone in their ongoing investigation into the San Bernardino attacks?”
   Categories: 1 = Yes; 0 = No.

5. Other Polls
   Description: Does the article cite results from other polls? [e.g., Reuters/Ipsos, Wall Street Journal/NBC News survey, or IBD/TIPP poll relating to the Apple/FBI case]
   Categories: 1 = Yes; 0 = No.

Table A2. Reliability test for content analysis.


Variables Krippendorff’s Alpha
Public Opinions 1
Pew Survey .83
Issue Knowledge 1
Issue Opinion .87
Other Polls 1

Table A3. One-way analysis of variance, means, standard deviations, and mean differences among the three experimental conditions.

Manipulation Check: F(2, 668) = 238.84***
   Condition A: M = 3.64 (SD = .97); mean difference vs. B = −.59***; vs. C = 1.56***
   Condition B: M = 4.23 (SD = .97); mean difference vs. C = 2.16***
   Condition C: M = 2.08 (SD = .08)

H3: Perceived knowledge of Pew survey participants: F(2, 668) = 139.42***
   Condition A: M = 10.38 (SD = 2.99); mean difference vs. B = −.68*; vs. C = 3.67***
   Condition B: M = 11.06 (SD = 2.93); mean difference vs. C = 4.35***
   Condition C: M = 6.71 (SD = 2.93)

H4: Own perspective (null hypothesis): F(2, 668) = 1.46, p = .26
   Condition A: M = 2.98 (SD = 1.55); mean difference vs. B = .05; vs. C = .22
   Condition B: M = 2.93 (SD = 1.46); mean difference vs. C = .17
   Condition C: M = 2.76 (SD = 1.43)

Note: *p < .05, **p < .01, ***p < .001. N = 671. Bonferroni correction was incorporated to correct for multiple comparisons. Condition A is the control group where respondents’ knowledge of the dispute before giving their opinions is not discussed; Condition B states that “most have heard of the dispute before giving their opinions”; and Condition C states that “most have not heard of the dispute before giving their opinions.” Independent-sample t-tests for H1 and H2 are reported in-text.
