The ‘Social’ Part of Social Desirability: How Social Networks Influence the Survey Response∗
Matthew T. Pietryka | [email protected]
F. Langley | [email protected]
Florida State University | Department of Political Science
January 3, 2019
Survey researchers have dedicated increasing attention toward understanding how social networks influence individual-level outcomes like attitudes, political participation, and other forms of collective action. This work implicitly assumes that errors in individuals’ self-reported attitudes and behaviors are unrelated to the composition of their social networks. We evaluate this assumption, developing a theory explaining how social networks influence the survey response by shaping the social desirability of various behaviors and attitudes. We apply our theory to the study of political participation, examining evidence from three observational datasets and an experiment conducted on a national sample. We demonstrate that non-voting respondents’ tendency to falsely report having voted is driven by political participation levels among their close friends and family. We show that this tendency can artificially inflate estimates of social influence. This study therefore suggests that survey researchers must account for social influence on the survey response to avoid biasing their conclusions.
∗We thank the FSU political science department for funding the CCES module; Brad Gomez for coordinating and administering the module; Anand Sokhey for sharing data from his CCES module; Jessica Parsons for research assistance; and Stephen Goggin, Philip Paolino, and Stefan Wojcik for thoughtful suggestions. This research also benefited from feedback during presentations at the University of British Columbia, the 2017 American Political Science Association annual meeting, and the 2017 Midwest Political Science Association annual meeting.
Scholars have increasingly portrayed political participation as a social, rather than an individualistic, act. This work suggests voter turnout depends on social influence arising
from one’s political discussion network (Knoke 1990; Straits 1990; Mutz 2002; Huckfeldt,
Mendez and Osborn 2004; Nir 2005; McClurg 2006; Jang 2009; Klofstad, Sokhey and
McClurg 2013; Nir 2011), voluntary group associations (Sallach, Babchuk and Booth 1972;
Uhlaner 1989), local neighborhood context (Olsen 1972; Eulau and Rothenberg 1986;
Huckfeldt and Sprague 1995; Timpone 1998; Oliver 1996), and family structure (Glaser
1959; Wolfinger and Wolfinger 2008). These social forces seem to influence other forms of
political participation (Alford and Scoble 1968; Huckfeldt 1979; Pollock 1982; Lake and
Huckfeldt 1998; Tam Cho 1999; Kim, Wyatt and Katz 1999; McLeod, Scheufele and Moy
1999; Scheufele 2000; Scheufele et al. 2004; Eveland and Hively 2009; Klofstad, Sokhey
and McClurg 2013), persisting across a variety of countries and contexts (Fitton 1973; Zipp
and Smith 1979; Harell 2009).
Like most political behavior research, studies of social influence tend to rely on survey
data. Unfortunately, surveys may present overlooked problems for this body of work. As
scholars are widely aware, survey reports are often biased by social desirability; respondents
falsely report holding attitudes and behaviors that others will view favorably while obscuring
their attitudes and actions that others might deem distasteful. Yet no research examines
how individuals’ immediate social networks contribute to social desirability bias in survey
reports. This problem can be mitigated by relying on validated outcomes, but existing work
typically does not. Indeed, all of the studies cited above examine self-reported participation.
By overlooking the social network’s influence on social desirability bias, previous survey
research may have suggested incorrect conclusions about the magnitude of social influence.
When measures that predict biased reporting are used as explanatory variables in models
of self-reported attitudes and behaviors, the estimated relationship will often be biased
(Ansolabehere and Hersh 2012; Bernstein, Chadha and Montjoy 2001). If social pressure
encourages biased responses, apparent social influence on self-reported behavior may instead
reflect only social influence on the survey response. Such a confound would present a serious
problem because much of the current literature on social influence in political behavior
relies on self-reported outcomes.1

1. A skeptical reader may argue that the relationship between social networks and social desirability is too obvious to merit attention. Yet the widespread publication of social influence studies relying on self-reported data suggests this relationship and its implications have not been obvious to prominent social influence scholars, journal editors, or reviewers.
We therefore explore the extent to which self-reports may bias our understanding of
social influence in political behavior. We focus on electoral turnout as our outcome variable
because scholars, journalists, and the public at large widely recognize its essential role
in democratic governance. And unlike most survey measures, self-reported turnout can
be checked against government voter files, allowing us to examine individual variation
in misreporting. Across three surveys administered in different times and locations, we
demonstrate that overreports—cases where non-voters report having voted—are more
common among respondents whose associates participate at greater rates. We then show
that this pattern can lead to biased estimates of social influence when the socially-desirable
outcome variable is self-reported. Finally, we present an experiment conducted with a
large, national sample, demonstrating a causal link between the salience of associates’
participation and one’s own tendency to overreport. This experiment evaluates several
possible mechanisms by which social influence might arise. Our research thus provides
insight for elections scholars about the determinants of participation. For survey researchers,
it reveals unexplored sources of social desirability bias. And for network scholars, it provides
insight into the mechanisms driving social influence.
Social desirability encourages biased survey responses
When responding to surveys, people often attempt to present themselves in a manner they
feel would be most favorable to others. This impression management strategy (Goffman
1959) encourages survey respondents to misreport their true beliefs and actions that others
might deem distasteful. Such dissembling is most common on items measuring behaviors
and beliefs that are widely practiced or commonly abhorred. For instance, Ansolabehere and
Hersh (2012) demonstrate that survey respondents misreport most frequently on items like
turnout and voter registration where a socially desirable option is available. But respondents
rarely misreport on items like racial identification, where no unambiguous societal norm
exists (Ansolabehere and Hersh 2012). Of course, tastes vary; not everyone agrees on which
practices are desirable and which should be avoided. Individuals who fail to recognize or
endorse a survey option’s social desirability should feel little need to misreport.
Why do some people feel compelled to misreport socially-undesirable traits while others
lack this compulsion? Extant explanations focus on respondents’ personal attributes, demonstrating that overreporting increases with respondents’ education, income, and political
interest (Ansolabehere and Hersh 2012; Silver, Anderson and Abramson 1986). These same
characteristics also predict actual participation, suggesting that people who participate most
frequently also feel the strongest pressure to overreport (Bernstein, Chadha and Montjoy
2001). Following this logic, the strong relationship between social networks and turnout
(e.g., Kenny 1992; Leighley 1990; McClurg 2003; Nickerson 2008; Rolfe 2012) implies an
analogous relationship between social networks and turnout misreports. Though work has
yet to examine this relationship directly, contextual analysis provides suggestive evidence.
People overreport voting more frequently in electoral contexts that tend to have greater
aggregate participation (Karp and Brockington 2005). This pattern suggests that the social
desirability of turnout may depend on societal norms. Unfortunately, no work on social
desirability bias examines the mechanisms by which individuals internalize these norms.
A rich history in social influence suggests that individuals look to their close friends and
family members to determine the normative appeal of actions and beliefs (Duncan, Haller
and Portes 1968; Erickson 1988; Festinger, Schachter and Back 1950; Homans 1961). Yet
this intimate social group, which scholars often label the core network, has received little
attention in theories or empirical analyses of survey misreports.2 This inattention presents a
problem because so much research on social influence relies on self-reported data.

2. Two studies address this issue indirectly, though their substantive interests lie elsewhere. Fowler (2005, 287) suggests, but does not test, the possibility that social networks influence turnout overreports. And Bond et al. (2012) explore a related premise on p. 4 of their online supplementary information: whether perceptions of network turnout encourage false reports of voting in social media. The analysis suggests only a weak effect. While this study may suggest the network plays only a marginal role in misreporting, we expect a stronger relationship for survey misreports. Brenner (2012) argues that turnout overreporting is driven by respondents’ social identification as voters rather than nonvoters. In Bond et al.’s study, Facebook users chose whether to provide a message to their friends stating that they voted; there was no option to advertise that they did not vote. Thus Facebook users may have felt less pressure to dissemble because dismissing the message without advertising their participation did not identify them as “nonvoters.”
Previous studies of social influence may be biased
If the social network affects the survey response, we must reevaluate studies of social
influence that rely on self-reports. As Bernstein, Chadha and Montjoy (2001, 22) explain,
“using reported votes in place of validated votes substantially distorts standard multivariate
explanations of voting, increasing the apparent importance of independent variables that
are related in the same direction to both overreporting and voting and sharply decreasing
the apparent importance of independent variables related in opposing directions to those
two variables.” If social pressure increases the social desirability of identifying in surveys
as a voter, previous studies of voting may have overstated the effect of individuals’ social
networks. And, indeed, most existing research has relied on self-reports (e.g., Huckfeldt,
Johnson and Sprague 2004; Klofstad, Sokhey and McClurg 2013; McClurg 2006).
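The logic of this distortion is easy to see in a toy simulation. The sketch below (in R; all quantities are invented for illustration and mirror no dataset in this paper) generates turnout that is independent of the network, lets nonvoters’ misreporting probability rise with network participation, and shows that a model fit to self-reports recovers a spurious “social influence” coefficient while a model fit to true turnout does not:

```r
# Toy simulation: no true network effect on turnout, but nonvoters'
# probability of falsely reporting a vote rises with network participation.
set.seed(1)
n <- 5000
network   <- runif(n)                       # share of one's network that votes
voted     <- rbinom(n, 1, 0.5)              # true turnout, independent of network
misreport <- rbinom(n, 1, 0.4 * network)    # nonvoters' overreport propensity
reported  <- ifelse(voted == 1, 1, misreport)

# Self-reports manufacture apparent social influence; true turnout shows none.
coef(glm(reported ~ network, family = binomial))["network"]  # clearly positive
coef(glm(voted    ~ network, family = binomial))["network"]  # approximately zero
```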
This problem may not only impact studies of social influence, but also behavior studies
that have ignored the social network entirely. By failing to control for the network, previous
work may have misestimated the impact of other voting predictors that are correlated with
social networks. The inattention to the social basis of social desirability will be inconsequential if wealth, education, and other individual attributes commonly used as controls
account fully for variation in the social pressure generated by network members. If rich,
educated people comprise the majority of individuals with participatory networks, then
previous studies of turnout overreports may have captured this potential influence by proxy.
And, by controlling for these characteristics, studies of social influence in turnout can purge
the bias created by self-reports. If these individual attributes provide insufficient proxies
for network-induced social pressure, however, the social influence estimates are likely to
be biased. To help researchers evaluate the potential that these problems arise, we need a
theory that can explain how social networks influence the survey response.
A social theory of the survey response
The social network can influence individual attitudes and beliefs through two mechanisms
(Deutsch and Gerard 1955). First, informational influence occurs when the network provides
new information that changes an individual’s beliefs. Second, the network may change
an individual’s behavior by creating social pressure to conform, even in the absence of
informational influence (Carlson and Settle 2016). Social pressure arises from one’s desire
to win peers’ approval or avoid condemnation.
In the context of turnout misreports, the network may create informational influence by
changing people’s beliefs about broad, societal norms. For instance, someone might use an
availability heuristic (Tversky and Kahneman 1973), leading them to infer that more people
participate nationally if they are surrounded by people wearing “I voted” stickers than they
would if surrounded by people lacking these stickers. Or more localized informational effects
arise if they infer that many of their peers vote, regardless of the societal norm.
Alternatively, the network may influence turnout misreports through social pressure,
altering an individual’s perception of the social rewards of voting or the social sanctions for
abstaining (Anoll 2018). To influence survey reports, this pressure must operate even in
anonymous survey settings. Drawing from Goffman (1967), Scheff (1988) argues that social
pressure arises because people anticipate the pride or shame they would feel if their beliefs
or actions were made public. And thus they perpetually feel this social pressure—even in
situations where their peers are absent.3

3. This perspective provides a plausible explanation for the persistence of turnout overreporting even in self-administered, online surveys (Ansolabehere and Hersh 2012).
Both mechanisms suggest a common expectation: People with more participatory net-
works should be more likely to overreport turnout. If informational influence drives this
process, people with more participatory networks should infer a stronger societal norm
toward participation. If social pressure drives this process, people with more participatory
networks should envision these social rewards or sanctions more intensely. In either case,
the social desirability arises from people’s mental reconstructions of their networks.
For these mechanisms to operate, people’s perceptions of their network need not be
accurate. Indeed, work on social influence in participation suggests that when perceptions of
network characteristics diverge from reality, those perceptions are often more consequential
than their networks’ actual characteristics (Huckfeldt 2007; Ryan 2011). Like many political
perceptions, people’s mental reconstructions of their networks should be governed by recent
interactions and cues in the immediate context (Zaller and Feldman 1992), which may lead
people to over- or under-estimate their associates’ participation. On one hand, political
discussion is typically initiated by the most politically engaged (Huckfeldt and Mendez 2008).
When this group is salient, people may overestimate the frequency of political participation
in their network, increasing the social desirability of reporting turnout. On the other hand,
a growing segment of the public perceives politics as distasteful and systematically conceals
their political involvement from their peers (Klar and Krupnikov 2016). When this group is
salient, people may underestimate the frequency of political participation in their network,
decreasing the social desirability of reporting turnout.
To summarize, we posit that people rely on a mental reconstruction of their social network
to ascertain the social desirability of actions or beliefs. Just as people leaving their house
might bring along their phone, keys, purse, or wallet, they also bring a mental image of their
network. This mental image, we argue, governs the social desirability of particular responses
to survey questions. Our empirical analysis therefore begins with observational analysis of
three datasets, examining the relationship between the core network and turnout misreports.
Our theory suggests two mechanisms by which the network may influence misreports. This
mental image might create social desirability bias through informational influence, altering
individuals’ awareness of societal norms. Or it may create social desirability bias through
social pressure by altering people’s perceptions about the social rewards of voting. After
the observational analysis, we therefore present an experiment designed to evaluate these
mechanisms and the causal logic underpinning our theory.
Observational study
For empirical evidence, we begin by asking two questions to assess the potential for networks
to bias the conclusions drawn from observational survey data. First, do people with more
participatory networks overreport voting at greater rates? Second, do models relying on
self-reported turnout suggest greater social influence than would models relying on validated
turnout? As we show below, the answer to both questions is yes.
Observational data
To address these questions, we need individual-level measures of self-reported turnout,
validated turnout, and turnout within respondents’ immediate social networks. In survey
research, social networks are typically measured with a name generator battery (see e.g.,
Laumann 1973; Marsden 1987), which asks respondents to identify the names or initials of
their closest friends, family, or other associates. Name generators remain rare on political
surveys, despite increasing scholarly attention to social networks. And most surveys with
name generators—like the 2000 American National Election Study—offer only self-reported
turnout measures. Only two preexisting studies fit our needs: The 1984 South Bend Study
and the 2010 Williamsburg Study. To expand the temporal and geographic scope of our
study, and to examine our theory on a nationally-representative sample, we added a name
generator battery to the 2016 Cooperative Congressional Election Study (CCES).
The South Bend Study: Huckfeldt and Sprague (2006) combined a three-wave panel survey
with validated records of respondents’ participation in elections. These data have been
used in many prominent studies of social influence (e.g., Huckfeldt and Sprague 1995;
Kenny 1992; Klofstad, McClurg and Rolfe 2009; Mutz and Mondak 1997). The survey’s
first wave contacted approximately 1,500 respondents within the South Bend metropolitan
area and the second and third waves of the study attempted to recontact these individuals,
offsetting panel attrition with new respondents. We restrict our analysis to the 1,510
individuals who responded to the third wave, which included both the turnout measure and
name generator.4

4. Self-reported turnout was measured with the item, “In talking to people about the election, we find that a lot of people weren’t able to vote for one reason or another. How about you? Did you vote this fall, or did something keep you from voting?” [with response options: “Yes, voted” / “No, did not vote”]. The name generator focused on up to three people with whom the main respondent discussed politics: “Can you give me the FIRST names of the three people you talked with most about the events of the past election year? These people might be from your family, from work, from the neighborhood, from church, from some other organization you belong to, or they might be from somewhere else.”
Respondents were randomly selected within 16 South Bend neighborhoods. Thus, with
over 90 respondents per neighborhood, the sample is intended to be representative within
neighborhoods, but not of South Bend as a whole.5 Huckfeldt and Sprague accessed the
Indiana voter file to provide a validated turnout measure for these respondents.

5. In all South Bend analyses, we cluster the standard errors on neighborhoods to account for this sampling strategy.
The Williamsburg Study: This multiwave panel, collected by Miller et al. (2015), surveys
William & Mary students around the time of the May 2010 Williamsburg municipal elections.
The survey targeted all 5,726 students on the college master email list. We restrict our
analysis to the 1,735 students older than 18 years who responded to both the first and
third waves. The first wave, collected from late February to early March, included measures
of respondents’ social networks, demographics, and political attitudes.6 The third wave,
collected immediately following the election, included the self-reported turnout measure.7
The data also provide validated turnout measures from the Williamsburg voter file.

6. The name generator focused on up to five friends among the students at William & Mary: “One of the purposes of this survey is to examine the flow of political information on campus between people who know each other. We are particularly interested in knowing whether people who are friends have similar opinions and thoughts about politics. To help us answer this question, we are asking you to give first and last names of up to five of your closest friends who attend William and Mary.”

7. The survey text read, “Did you vote in the May 4th city council election?” [“Yes”/“No”]. The survey only asks the turnout question to respondents who reported that they were registered to vote in the City of Williamsburg. In the analyses we present below, respondents who reported that they were not registered are treated as non-voters. Restricting the sample to registered voters yields analogous conclusions.
CCES: Our new data come from a name generator battery included on two modules of
the 2016 CCES pre-election wave.8 A post-election wave collected self-reported turnout,
yielding 1,512 valid responses from across the US.9

8. The name generator collected up to three names with the prompt, “From time to time, people discuss government, elections, and politics with other people. Who are the people with whom you discuss these matters? These people might or might not be relatives. Can you think of anyone? Please enter their first name in the box below.”

9. We draw the turnout and control variables from the CCES common content data posted to https://doi.org/10.7910/DVN/GDF6Z0 on 2018-02-10. The post-election turnout item read, “Which of the following statements best describes you?” [respondents are coded as voters if they chose, “I definitely voted in the General Election on November 8.”].
Together, these three datasets offer useful variation in both measurement and context,
as summarized in Table 1. The South Bend Study and the CCES both collected up to three names, targeting associates with whom respondents discussed politics. The Williamsburg Study collected up to five names, targeting close friends. Though political discussion networks differ
from friendship networks, they often overlap because people tend to discuss politics most
frequently with those with whom they discuss other important matters: their significant others,
close friends, and family (Huckfeldt, Johnson and Sprague 2004; Klofstad, McClurg and Rolfe
2009). The South Bend Study included both the name generator and turnout measure on
the same post-election wave. The other two datasets collected the name generator before the
election and turnout after, establishing temporal precedence. The South Bend Study and the
CCES each rely on respondents’ perceptions of network participation while the Williamsburg
Study provides validated turnout for network members.10 The South Bend Study relied on
face-to-face interviews while the other two studies were self-administered online. Since
self-administered online studies may reduce social desirability bias (Holbrook and Krosnick
2010), these two datasets provide hard tests for the theory. The South Bend and Williamsburg studies focus on specific communities while the CCES draws respondents
from across the country. Finally, the Williamsburg Study focuses on a low-turnout local
election while the remaining two studies focus on high-turnout national elections. Despite
these broad differences, each dataset suggests a heretofore overlooked social component of
turnout overreports, as we show below.

Table 1: Comparing the observational datasets

                                     South Bend            Williamsburg  CCES
Year                                 1984                  2010          2016
Validated turnout                    Yes                   Yes           Yes
Self-reported turnout                Yes                   Yes           Yes
Name generator                       Yes                   Yes           Yes
Maximum number of names elicited     3                     5             3
Type of network                      Political discussion  Friendship    Political discussion
Network measured prior to turnout    No                    Yes           Yes
Network participation                Perceived             Validated     Perceived
Survey mode                          Face-to-face          Online        Online
Sampling frame                       Local                 Local         National
Electoral context                    Presidential          Municipal     Presidential
N - total                            1,510                 1,735         1,512
N - validated voters                 1,010                 400           960
N - nonvoters                        500                   1,335         552

10. The South Bend PIs did not validate the participation of discussants identified by the main respondents, and the identities of these discussants have been anonymized, preventing a new validation effort. The South Bend study PIs also attempted to interview some of the discussants identified by the main respondents. In principle, we could instead rely on discussants’ self-reported turnout. Unfortunately, this “snowball” sample was much smaller than the main respondent sample. Main respondents provided turnout reports of 4,153 discussants while only 891 of these discussants provided self-reports. Given the similarity between main respondents’ perceptions of discussant characteristics and discussants’ own self-reported characteristics (Huckfeldt 2001), we rely on the larger sample. The Williamsburg Study provides validated turnout measures for both respondents and their friends. To avoid simultaneity bias, we use the friends’ validated turnout in the 2009 VA gubernatorial election. The CCES name-generator battery collected only first names. Asking only for first names increases response rates but prevents a validation effort.
Observational results
Using these data, the analysis proceeds in two stages. First, we examine overreporting
by restricting the sample to nonvoters and regressing self-reported turnout on network
participation. We define nonvoters as respondents who were not identified as voters in
the public voter file for the focal election.11 Second, relying on the entire sample, we
estimate turnout models using alternately self-reported and validated turnout as the outcome
measures. This second analysis allows us to assess whether these two measures of turnout
lead to differing conclusions about the strength of association between network participation
and respondent voting.

11. Like previous work, we do not examine underreporting because many nonvoters falsely report voting, but voters seldom report abstaining (e.g., Ansolabehere and Hersh 2012; Silver, Anderson and Abramson 1986).
In all models, we control for common predictors of overreporting and turnout. These
measures are drawn from Ansolabehere and Hersh (2012), a recent comprehensive study
of turnout overreporting on surveys.12 We also include a variable indicating whether the
respondent failed to identify any associates with the name generator battery. We do not
exclude these individuals from analysis because respondents failing to name discussion part-
ners are nonetheless typically embedded in meaningful political communication networks
(Eveland, Hutchens and Morey 2013). Details about item wording and variable values for
these and other items can be found in section A of the online Supporting Information (SI);
summary statistics can be found in SI B. To avoid bias emerging from listwise deletion, we
use 50 imputations of the explanatory variables.13

12. For the Williamsburg analyses, we must deviate from the Ansolabehere and Hersh (2012) controls in several ways. Since student samples provide little variation in respondents’ age, education, income, or marital status, the survey omitted these items. It also omitted a measure of respondents’ church attendance. We proxy for age and education with indicators of respondents’ academic class standing. We proxy for income with an item asking about respondents’ socioeconomic status.

13. The practice of dropping all cases with even a single missing value, commonly referred to as listwise deletion, reduces the model’s degrees of freedom unnecessarily. Further, it biases the coefficient estimates anytime people with missing responses differ systematically from those with complete responses. Multiple imputation addresses both problems, preserving all valid responses and reducing the bias arising from differences between observed and missing cases (for details, see Rubin 2009). Though many imputation methods exist, we rely on conditional multiple imputation because it tends to perform better than alternative approaches on the categorical and ordinal variables commonly included in surveys (Kropko et al. 2014). We generated the imputations using the mice 2.30 package (Buuren and Groothuis-Oudshoorn 2011) in R 3.3.3 (R Core Team 2017). For details about the imputations, see SI C.
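Footnote 13’s imputation-and-pooling workflow can be sketched in a few lines of R. This is an illustrative sketch rather than our replication code; the variable names (reported_vote, network_participation, and the controls) are hypothetical stand-ins for the items described above:

```r
# Sketch of the multiple-imputation workflow (hypothetical variable names).
library(mice)

# Restrict to validated nonvoters for the overreporting models, then create
# m = 50 imputed copies of the explanatory variables.
nonvoters <- subset(survey_data, validated_vote == 0)
imp <- mice(nonvoters, m = 50, seed = 20161108)

# Fit the logistic regression of self-reported turnout on network
# participation within each imputed dataset.
fits <- with(imp, glm(reported_vote ~ network_participation + education +
                        income + political_interest,
                      family = binomial(link = "logit")))

# Pool the 50 sets of estimates using Rubin's rules.
summary(pool(fits))
```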
Network participation predicts overreporting
Table 2 presents logistic regressions of self-reported turnout among validated nonvoters. The
first model in each dataset presents the bivariate relationship between overreporting and
Network Participation—the proportion of the main respondent’s discussants identified
as voters. The second model introduces the controls. Since the sample is restricted to
nonvoters, positive coefficients associated with network participation indicate people with
more participatory networks are more likely to overreport turnout.

Table 2: Nonvoters embedded in more participatory networks are more likely to report that they voted.

                                           Bivariate                  Controls
                                           Estimate (SE)    p value   Estimate (SE)    p value
South Bend
  Intercept                                0.17 (0.249)     .505      -3.25 (0.705)    < .001
  Network Participation (as a proportion)  0.73 (0.345)     .036      1.05 (0.445)     .019
  Controls?                                No                         Yes
  AIC                                      630.29                     473.04
  N - Observations                         500                        500
  N - Imputations                          50                         50
Williamsburg
  Intercept                                -2.17 (0.107)    < .001    -3.56 (0.469)    < .001
  Network Participation (as a proportion)  2.29 (0.523)     < .001    2.08 (0.538)     < .001
  Controls?                                No                         Yes
  AIC                                      1008.46                    984.47
  N - Observations                         1335                       1335
  N - Imputations                          50                         50
CCES
  Intercept                                0.59 (0.512)     .247      -2.18 (0.962)    .023
  Network Participation (as a proportion)  1.20 (0.585)     .040      1.64 (0.724)     .024
  Controls?                                No                         Yes
  AIC                                      487.17                     409.23
  N - Observations                         552                        552
  N - Imputations                          50                         50

Note: This table displays coefficients from logistic regressions of turnout on network participation. The models include only individuals who are nonvoters according to the public voter files. In each dataset, the first model displays the bivariate relationship and the second model displays the relationship after conditioning on controls for gender, marital status, residential mobility, church attendance, political interest, party identification strength, ideology strength, education, income, race/ethnicity, age, and whether the respondent failed to identify a discussant using the name generator (SI D reports the complete results, including the coefficients from these control variables). The standard errors in the South Bend models are corrected for clustering at the neighborhood level. The CCES estimates rely on post-election weights.
In all three datasets, the first model suggests that nonvoters with more participatory
networks are more likely to report that they voted. The substantive impact of these results
can be seen by using the estimates to generate predicted probabilities. In South Bend,
nonvoters who believed all of their discussants voted are 17 percentage points more likely to
report turnout than nonvoters who believed none of their discussants voted. The analogous
difference is 43 percentage points in Williamsburg and 26 points in the CCES.14
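The predicted-probability differences reported here follow the standard recipe: hold the other covariates at a fixed profile, set network participation to its minimum and maximum, and difference the fitted probabilities. A minimal sketch, again with hypothetical variable and object names (fit is a single fitted glm; pooled mice fits require an extra step):

```r
# Predicted probability of reporting turnout in a fully-abstaining (0) versus
# fully-participatory (1) network, other covariates fixed at a baseline
# profile (cf. footnote 15). Variable names are hypothetical.
profile <- data.frame(education = 1, income = 1, political_interest = 1)

p_low  <- predict(fit, newdata = transform(profile, network_participation = 0),
                  type = "response")
p_high <- predict(fit, newdata = transform(profile, network_participation = 1),
                  type = "response")
p_high - p_low  # the percentage-point differences quoted in the text
```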
The second model for each dataset introduces the controls. In South Bend and Williams-
burg, the difference between nonvoters in participatory and non-participatory networks
is weaker after conditioning on the other predictors, but remains substantively and statistically significant. After conditioning, the South Bend model predicts that nonvoters in fully-participatory networks will be eight percentage points more likely to report voting than nonvoters in fully-abstaining networks.15 The Williamsburg model predicts a 20
percentage-point difference. Compared to the first model, the second CCES model predicts a
substantively larger difference between people in fully participatory or fully non-participatory
networks—34 percentage points.

14. The CCES provides two ways to measure validated turnout, which differ based on how respondents who are not matched to a voter file are handled. Here, we follow convention by treating these unmatched respondents as nonvoters. Berent, Krosnick and Lupia (2016) argue that this approach introduces too much error because respondents may go unmatched due to poor record keeping rather than abstention. To address this concern, an alternative approach is to exclude unmatched respondents from analysis, identifying respondents as nonvoters only if they were explicitly reported as such in the voter file. In SI D, we reestimate our models with this alternative specification, yielding almost identical results.

15. To reduce ceiling effects, the reported estimates are based on covariate profiles where ordinal covariates are set to their minimum values and dummy-coded variables are set to their baseline values.
In all three cases, a substantively important relationship remains after introducing the
controls, suggesting that the literature on overreporting has missed the social aspect of
social desirability bias. The controls in our models have been used in this literature to
identify citizens who feel the strongest social pressure to participate—and hence social
pressure to report participating even when they have not. Nonetheless, a strong relationship
between overreporting and network participation persists in the presence of these controls.
By comparison, education provides the most consistent predictor of overreporting in past
work (Ansolabehere and Hersh 2012, 441) and our results reaffirm that pattern (see SI D
for our complete model results). In our data, people in the top half of education tend to
overreport at rates 10-20 percentage points greater than the least educated—differences
that are similar in magnitude to those associated with a minimal to maximal shift in network
participation. Thus, prior work has been ignoring substantial variation in individuals’
propensity to overreport turnout. These analyses suggest that common controls serve as insufficient proxies for the network and therefore cannot account for this pattern. Controlling for this bias will be easier
if researchers explicitly incorporate network participation into their models.
Self-Reported turnout can bias estimates of social influence
Since network participation predicts overreporting—even after controlling for commonly-
used explanatory variables—it has the potential to bias estimates of social influence. To
determine whether this potential is realized, Table 3 reports models regressing individual-
level turnout on network participation. Unlike the previous results, these models are not
restricted to nonvoters. The first model for each dataset relies on self-reported turnout as the
outcome variable. The second model uses validated turnout.

Table 3: Models relying on self-reported turnout suggest greater social influence than do models relying on validated turnout.

                                           Self-Reported turnout      Validated turnout
                                           Estimate (SE)    p value   Estimate (SE)    p value
South Bend
  Intercept                                -2.05 (0.458)    < .001    -0.82 (0.418)    .048
  Network Participation (as a proportion)  0.73 (0.397)     .066      0.05 (0.234)     .827
  Controls?                                Yes                        Yes
  AIC                                      834.75                     1694.83
  N - Observations                         1510                       1510
  N - Imputations                          50                         50
Williamsburg
  Intercept                                -1.18 (0.273)    < .001    -1.07 (0.299)    < .001
  Network Participation (as a proportion)  2.26 (0.382)     < .001    1.91 (0.381)     < .001
  Controls?                                Yes                        Yes
  AIC                                      2123.64                    1804.75
  N - Observations                         1735                       1735
  N - Imputations                          50                         50
CCES
  Intercept                                -0.47 (0.870)    .592      -0.58 (0.747)    .442
  Network Participation (as a proportion)  1.13 (0.545)     .038      -0.28 (0.445)    .530
  Controls?                                Yes                        Yes
  AIC                                      590.78                     1661.27
  N - Observations                         1512                       1512
  N - Imputations                          50                         50

Note: This table displays coefficients from logistic regressions of turnout on network participation. In each dataset, the first model measures turnout with self-reports and the second model uses validated records. All models control for gender, marital status, residential mobility, church attendance, political interest, party identification strength, ideology strength, education, income, race/ethnicity, age, and whether the respondent failed to identify a discussant using the name generator (SI D reports the complete results, including the coefficients from these variables). The standard errors in the South Bend models are corrected for clustering at the neighborhood level. The CCES estimates rely on post-election weights.
The table suggests that using self-reported turnout rather than validated turnout can lead
to distinct conclusions about the extent of social influence in voting. In all three models based
on self-reported turnout, the coefficient associated with network participation is positive and
significant. In two of the three models based on validated turnout, these coefficients are close
to zero and lack statistical significance. In South Bend, the model predicts that people in
fully-participatory networks report turnout at rates 12 percentage points greater than those
with fully-abstaining networks. The corresponding difference for validated turnout is only
one percentage point. In the CCES, the self-reported model suggests a 26 percentage-point
difference, compared to the validated model’s predicted -6 point difference, which runs in the opposite direction from what one would expect and lacks statistical significance.
Williamsburg serves as the exception because the self-reported and validated models
suggest the same conclusion. Compared to respondents with no participation in their
networks, those with fully-participatory networks are 51 percentage-points more likely to
report turnout and 44 percentage points more likely to actually turn out.
The different substantive conclusions supported by the models indicate that researchers may be misled if they rely only on self-reports. Using the South Bend and CCES data,
researchers without access to validated turnout may conclude that individuals’ turnout
decisions depend on their networks; with validated turnout, they would be unable to reject
the null of no influence.16 These disparate results across the three datasets suggest that researchers cannot predict ex ante the extent to which relying on self-reported turnout will bias their conclusions.

16. The inability to reject the null in the South Bend and CCES studies does not provide strong evidence for an absence of social influence (see Rainey 2014). Though the validated coefficients are smaller than the self-reported coefficients, they are not estimated with enough precision to rule out important substantive effects.
Experimental study
The analyses above establish the relationship between networks and overreporting, but leave
several questions unanswered. First, do more participatory networks cause overreporting or
is the relationship spurious, caused perhaps by aspects of the broader context that encourage
people to overreport and encourage their networks to participate? For instance, groups of
friends may all be targeted by the same geographically-focused mobilization drive, increasing
their chances of voting and the pressure they feel to report voting on surveys. Second, if
networks cause overreporting, through what mechanisms do they operate? As we explain
above, social influence may arise from informational effects or social pressure. We therefore
designed an experiment that allows us to explore these questions.
Experimental design
An ideal experiment would compare overreporting among individuals who have been
randomly assigned to more or less participatory discussion networks. Random assignment
is infeasible here, however, since real-world discussion networks tend to include individuals’
closest friends and family (Huckfeldt, Mendez and Osborn 2004). Nonetheless, random
assignment provides a means to exogenously influence a central mechanism in our theory,
which posits that respondents carry with them a mental representation of their network
which conditions the social desirability of various responses. We can therefore examine this
mechanism by exogenously influencing their mental representation of the network, rather
than influencing the network itself.
Network participation prime
We embedded the experiment within a standard name-generator battery, which we included
on a post-election module of the 2016 CCES, yielding a national sample of 841 respondents.
We influence respondents’ perceptions of their networks by priming them to think of more
or less participatory associates. The network turnout prime thus asked respondents to think
of someone, “who you believe [voted / did not vote] in the General Election on November
8? What is that person’s first name?”. Our theory suggests respondents will overreport
at greater rates if they are primed to think of an associate who voted (we refer to these
respondents as the associate voted group, N = 439) rather than an associate who did
not vote (the associate abstained group, N = 402). Our key outcome is therefore the
respondent’s self-reported turnout, which we asked shortly after the name generator.17

17. Some people may have been unable to think of a voter or non-voter in their network. Others gave obviously false names like “Hillary Clinton.” Since we cannot be certain respondents imagined a real associate, our results represent intention-to-treat (ITT) effects, providing a more conservative test of our theory.
If individuals in the associate voted group report higher levels of turnout, we can con-
clude that these individuals are overreporting at greater rates. This conclusion is warranted
because the experiment occurred after the election and therefore cannot have influenced
whether respondents actually voted. Moreover, by randomizing the treatment, we can be
confident that no differences exist between treatments in their baseline propensity to vote
or misreport voting.18 The observational analyses above assume that the validated measures
are free of error, an assumption that Berent, Krosnick and Lupia (2016) call into question.
Since a validation effort is not necessary for the experimental analysis, the design allows us
to test our theory without this restrictive assumption. To see whether any observed effects
depended on the type of relationship the name generator elicited, we asked respondents to
name either a close associate (“close friend or family member”), a less intimate associate
(“coworker or casual acquaintance”), or someone they identified with (“person like you”).
All three wordings yielded similar effects, as shown in SI F.

18. We present balance statistics for pre-treatment demographic variables in SI section E. Only small differences emerge between treatment groups. In SI E, we nonetheless replicate our analyses after controlling for the covariates that show even moderate imbalance. These models suggest the same substantive conclusions as those reported below.
National turnout cue
As we explain above, networks may influence misreports through information effects or
social pressure. Information plays a role if their network provides information about how
typical people behave, helping them make inferences about societal norms. If the network
influences overreporting through this mechanism, there must be a causal link between higher
perceptions of national turnout levels and greater overreporting. To provide causal leverage
on this relationship, we randomly embedded one of three cues about national participation
levels immediately prior to the name generator. One group received no cue (no cue group,
N = 293). The second group received a cue stating that, “Nationwide, approximately 37%
of eligible voters cast votes in the 2014 U.S. elections” (low national turnout group, N
= 268). The third group received a cue stating “Nationwide, approximately 59% of eligible
voters cast votes in the 2012 U.S. elections” (high national turnout group, N = 264).19

19. These turnout estimates come from McDonald (2016).
The national turnout cue was assigned independently from the name-generator text.
Thus, a subject in the associate voted and low national turnout groups received
the following prompt:
Lots of people vote in federal elections, but many others do not. Nationwide,
approximately 37% of eligible voters cast votes in the 2014 U.S. elections. Think
about a specific [close friend or family member / coworker or casual acquaintance
/ person like you] who you believe voted in the General Election on November
8? What is that person’s first name?
Dependent variables
This design allows us to compare the relative impact of interpersonal relationships and
societal norms on misreports by comparing the effect of the network turnout prime to the
effect of the national turnout cue.
Further, we included several questions to help us distinguish information effects from
social pressure. If social influence occurs through information, the network treatment
must change respondents’ perceptions about national turnout or turnout within their own
networks. We therefore examine whether the treatments changed these perceptions, asking
about turnout nationally (“Thinking about all eligible voters in the U.S., what percentage
of these people do you think voted in the General Election on November 8?”) and within
respondents’ own networks (“Thinking about all the people you talk with about matters that
are important to you, what percentage of these people do you think voted in the General
Election on November 8?”).20 Asking about perceptions of national turnout also serves
as a manipulation check for the national turnout cue. If social influence occurs through
social pressure, respondents should anticipate greater social sanctions if a peer were to
learn that they did not vote. We therefore asked, “Thinking about [FIRST NAME], how
disappointed would they be if they learned that you did not vote in the election this year?”.
If participation levels in one’s immediate social network increase social pressure, then
individuals in the associate voted group should anticipate greater disappointment than
those in the associate abstained group.

20. We randomized the order of these two questions.
Experimental results
Figure 1 displays the results.21 In the figure and the text below, we report p-values from difference-in-means tests, estimated via OLS with heteroskedasticity-robust standard errors, but we find substantively identical results with randomization inference (see Keele, McConnaughy and White 2012).

21. The survey allowed respondents to skip questions and thus two people did not respond to the turnout report and 10-16 others did not respond to the other dependent variables. SI G documents this non-response and provides sensitivity analysis demonstrating that the conclusions below are robust to any possible assumptions about the true values of the missing variables.
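For readers who want the mechanics, a sketch of both tests in R follows; reported_vote and associate_voted are hypothetical names for the 0/1 outcome and treatment indicator:

```r
# Difference in means via OLS with heteroskedasticity-robust standard errors.
library(sandwich)
library(lmtest)

fit <- lm(reported_vote ~ associate_voted, data = experiment)
coeftest(fit, vcov = vcovHC(fit, type = "HC2"))

# Randomization inference: permute the treatment labels and compare the
# observed difference in means to the permutation distribution.
obs_diff <- with(experiment,
                 mean(reported_vote[associate_voted == 1]) -
                 mean(reported_vote[associate_voted == 0]))
perm_diff <- replicate(10000, {
  relabeled <- sample(experiment$associate_voted)
  mean(experiment$reported_vote[relabeled == 1]) -
    mean(experiment$reported_vote[relabeled == 0])
})
mean(abs(perm_diff) >= abs(obs_diff))  # two-sided permutation p-value
```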
Figure 1A shows the effect of the network participation prime on the proportion of
respondents who reported voting in the 2016 election.22 Both groups reported high levels
of turnout as is typical in surveys, but this tendency was particularly strong for respondents
primed to think about an associate who voted. Almost 90% of these associate voted respondents reported turnout, compared to 82% of the associate abstained respondents,
who were primed to think of an associate who did not vote (p < .01). To put this effect in
perspective, this difference is almost as large as the twelve percentage point difference in
overreporting between weak partisans and pure independents shown in the CCES column
of the observational analysis (Table D1 in SI D). And it is roughly half as large as the twenty
percentage point difference between the most and least educated individuals.23

22. Applying survey weights can bias estimates of treatment effects and we therefore present unweighted results, as recommended by Franco et al. (2017).

23. To verify that these results arise from overreporting, rather than actual differences in turnout, we have replicated this analysis after excluding validated voters from the sample. Within this subset (N = 367), subjects primed to think of a voter were ten percentage points more likely to report turnout than those primed to think of a non-voter (p = 0.051). Given the concerns over the accuracy of government records (Berent, Krosnick and Lupia 2016), our primary analysis focuses on the full sample. Reassuringly, there is only a one percentage point treatment effect for the validated voters in the sample (p = 0.34), suggesting the effect is not driven by the voters in the sample.
Figure 1B shows that the cue about national turnout levels had little effect on self-reported
turnout. If perceptions of societal norms cause overreporting, we would expect people
primed with the lower turnout estimate to be less likely to report voting. Instead, the low
national turnout group reported voting at a greater rate than the other groups, though
these differences are small in magnitude (p = .23 for the low national turnout group; p = .61 for the high national turnout group).
Figure 1C plots the mechanisms through which the network participation prime may
affect reported turnout. The two leftmost panels examine information effects, plotting the
treatment effect on perceptions of national turnout (left panel) and turnout within their
own networks (middle panel). These panels suggest the network participation prime had
little effect on these perceptions. Therefore, in this experiment the network’s effect on
overreporting does not appear to arise by changing respondents’ perceptions of participation
either nationally or within their own networks. Rather, the operative mechanism appears to
be social pressure, as shown in the far right panel, which plots how disappointed the associate
would be to learn that the respondent did not vote. The plot shows that individuals expect
non-voting associates to feel little disappointment, averaging only 0.6 on the four-point scale that ranged from 0 = “Not at all disappointed” to 3 = “Very disappointed.” In contrast, respondents whose associate voted reported an average of 1.2—double that of respondents whose associate did not vote (p < .001).
Figure 1D provides analogous plots for the national turnout cue. Though the national
turnout cue had no effect on overreporting, it succeeded in altering respondents’ perceptions
of national participation levels, as shown in the left panel. Respondents who received no cue
believed 60% of eligible voters turned out in 2016, on average, compared to 55% among
the low national turnout group (p < .001). Thus, priming the low turnout in the
2014 midterm elections reduced respondents’ estimates of aggregate participation in the
2016 presidential election. Those in the high national turnout group estimated a 58%
national turnout rate, which was similar to those receiving no cue (p = .21) and greater than
the low national turnout group (p = .01). As the last two panels show, this treatment
had no apparent effects on respondents’ perceptions of turnout within their own networks
nor perceived social pressure.
In summary, the experiment suggests that respondents’ immediate social networks
influence overreports. In contrast, perceptions of societal norms about participation do not.
In this case, the network tends to operate through social pressure rather than information.
The network treatment had no systematic effect on perceptions of turnout either nationally
or within respondents’ own networks. Instead, its primary effect was on expected levels of
disappointment.
Our results suggest that the apparent effect of societal norms on overreporting (Karp and
Brockington 2005) may instead reflect an aggregation of more localized social influence.
This conclusion aligns with recent evidence suggesting that people feel pressure to conform
to the behavior of their ingroup peers, but not outgroup members (Suhay 2015). And
perhaps as a result, the social rewards of voting depend heavily on the local context (Anoll
2018). This conclusion requires further research, however, because the high turnout rate in
the sample may imply a ceiling effect, suppressing the effect of the national turnout cue.
Nonetheless, turnout reports in the experiment are consistent with typical survey estimates
in U.S. presidential elections. Thus, this study examines effects using variation that reflects
common real-world conditions. Still, we hope future work explores whether perceptions of
national turnout matter more in low-turnout elections.
Figure 1: Experimental treatment effects on self-reported turnout (Panels A & B), and perceptions of national turnout, network turnout, and social pressure to vote (Panels C & D).

(A) The effect of the network participation prime on self-reported turnout. People are more likely to report having voted if they are reminded of an associate who voted, rather than one who abstained.

(B) The effect of the national turnout cue on self-reported turnout. Knowledge about national turnout rates has little effect on the likelihood that people report having voted.

(C) The effect of the network participation prime on perceptions of national turnout, network turnout, and social pressure to vote. The network treatment increases a respondent’s perceived social pressure.

(D) The effect of the national turnout cue on perceptions of national turnout, network turnout, and social pressure to vote. The national treatment affects a respondent’s expectation about national turnout.

Note: The plots show means and 95% confidence intervals for each experimental group. The outcome in panels A and B is self-reported turnout. The outcomes in panels C and D reflect a respondent’s belief about the percentage of eligible voters who cast votes, the percentage of their discussion network who cast votes, and the level of disappointment their associate would feel if they learned that the respondent did not vote. To the right of each estimate, difference-in-means p-values compare the focal estimate to the reference category, which is the associate abstained group for the network turnout prime and the no cue group for the national turnout cue.
Conclusion
The results provide evidence demonstrating how the social network can magnify or reduce the social desirability bias of turnout self-reports. The experiment suggests that individuals feel greater social pressure to vote and, subsequently, overreport turnout at a greater rate when they are asked to think about acquaintances who themselves voted (Figure 1A). These results indicate that individuals with more participatory networks should be more prone toward misreporting because they will tend to perceive greater social pressure (Figure 1C).
Using observational data from three different elections, we find support for that expectation. Individuals with more participatory networks are more likely to falsely report voting
(Table 2). This relationship creates the potential to bias estimates of social influence in
voting. Researchers using self-reported turnout would find evidence of a substantively large and statistically significant relationship between network participation and individual-level
turnout. Researchers using validated turnout would find such evidence only in one of our
three datasets (Table 3).
The social desirability of turnout arising in participatory networks presents a serious confound for studies of social influence relying on self-reports. For most attitudinal outcomes
and many behavioral outcomes, validated measures remain unavailable and thus the bias of
the estimates cannot be checked. If analyses using self-reported measures always overesti-
mated the magnitude of social influence relative to analyses using estimates from validated
measures, researchers could correct for this bias by adjusting the self-report estimates to-
ward zero. Unfortunately the evidence presented above suggests that such a procedure may
instead lead to underestimates of social influence in some cases. This correction would be
warranted in the South Bend and CCES studies, but in Williamsburg, it would underestimate
the apparent social influence observed in the validated voting model. When validation is in-
feasible, we therefore recommend that scholars subject their estimates to sensitivity analysis
to help readers understand the range of estimates they would recover after correcting for
plausible levels of bias (see e.g., VanderWeele 2011).
While the analyses reported above focus on turnout, our results suggest that similar
problems will arise when researchers study other outcomes that may be biased by social
desirability. For instance, many scholars are interested in understanding how individuals’
social networks influence their partisan identification and candidate preferences. But for
each of these outcomes, social desirability has powerful effects on reported values (Klar and
Krupnikov 2016). Even studies that are not interested in social influence must address this
problem. Since these outcomes can rarely be validated, the challenge for scholars will be to
remove the bias generated by social desirability. As Table 2 suggests, individual covariates
serve as insufficient proxies for the social network. Therefore, removing the bias associated
with social desirability will be easier if researchers can directly measure and control for the
network.
This paper adds to the growing literature exploring the confounds to causal inference that
emerge in observational analyses of social influence. Such analyses often cannot distinguish
cases of social influence from cases where social interaction arises because of shared interests.
Nor can they disentangle social influence from the various contextual pressures that may
produce similar attitudes or behaviors within the network, such as mobilization drives or
media coverage. In light of these many challenges, can observational data provide any insight
into social influence? We believe they can, particularly as a complement to experimental
designs, which overcome these confounds but often lack external validity. To eliminate
threats to internal validity, many experimental studies of social influence occur in artificial
laboratory settings (Ahn, Huckfeldt and Ryan 2014; Carlson and Settle 2016; Pietryka
2016; Ryan 2011) or focus on online interaction with newly generated social ties (Klar and
Shmargad 2017; Carlson 2018). Abstract experimental environments provide well-identified
causal estimates, but these precise treatment effects often lack a specific real-world analogue.
In light of these many challenges to both internal and external validity, some scholars may
be tempted to abandon the study of social influence altogether. Rather than run from
these challenges, we believe scholars should explore their theories with multiple methods,
leveraging the advantages of each. At the same time, reviewers must recognize that no
single dataset, regression, or experimental design can address all of these challenges at once.
Evaluating theories of social influence requires both experimental work demonstrating a
clear causal mechanism and observational work demonstrating a substantively significant
real-world relationship.
References
Ahn, T. K., Robert Huckfeldt and John Barry Ryan. 2014. Experts, Activists, and Interdependent
Citizens: Are Electorates Self-Educating? New York, NY: Cambridge University Press.
Alford, Robert R. and Harry M. Scoble. 1968. “Sources of Local Political Involvement.”
American Political Science Review 62(4):1192–1206.
Anoll, Allison P. 2018. “What Makes a Good Neighbor? Race, Place, and Norms of Political
Participation.” American Political Science Review 112(3).
Ansolabehere, Stephen and Eitan Hersh. 2012. “Validation: What Big Data Reveal About
Survey Misreporting and the Real Electorate.” Political Analysis 20(4):437–459.
Berent, Matthew K., Jon A. Krosnick and Arthur Lupia. 2016. “Measuring Voter Registration
and Turnout in Surveys: Do Official Government Records Yield More Accurate Assessments?”
Public Opinion Quarterly 80(3):597–621.
Bernstein, Robert, Anita Chadha and Robert Montjoy. 2001. “Overreporting Voting: Why It
Happens and Why It Matters.” Public Opinion Quarterly 65(1):22–44.
Bond, Robert M., Christopher J. Fariss, Jason J. Jones, Adam D. I. Kramer, Cameron Marlow,
Jaime E. Settle and James H. Fowler. 2012. “A 61-million-person experiment in social
influence and political mobilization.” Nature 489(7415):295–298.
Brenner, Philip S. 2012. “Overreporting of voting participation as a function of identity.” The
Social Science Journal 49(4):421–429.
Buuren, Stef van and Karin Groothuis-Oudshoorn. 2011. “mice: Multivariate Imputation by
Chained Equations in R.” Journal of Statistical Software 45(3):1–67.
Carlson, Taylor N. 2018. “Modeling Political Information Transmission as a Game of Tele-
phone.” The Journal of Politics 80(1):348–352.
Carlson, Taylor N. and Jaime E. Settle. 2016. “Political Chameleons: An Exploration of
Conformity in Political Discussions.” Political Behavior pp. 1–43.
Deutsch, Morton and Harold B. Gerard. 1955. “A study of normative and informational social
influences upon individual judgment.” The Journal of Abnormal and Social Psychology
51(3):629–636.
Duncan, Otis Dudley, Archibald O. Haller and Alejandro Portes. 1968. “Peer Influences on
Aspirations: A Reinterpretation.” American Journal of Sociology 74(2):119–137.
Erickson, Bonnie H. 1988. The Relational Basis of Attitudes. In Social Structures: A Network
Approach, ed. Barry Wellman and S. D. Berkowitz. New York, NY: Cambridge University
Press pp. 99–121.
Eulau, Heinz and Lawrence Rothenberg. 1986. “Life space and social networks as political
contexts.” Political Behavior 8(2):130–157.
Eveland, William P and Myiah Hutchens Hively. 2009. “Political Discussion Frequency,
Network Size, and “Heterogeneity” of Discussion as Predictors of Political Knowledge and
Participation.” Journal of Communication 59(2):205–224.
Eveland, William P., Myiah J. Hutchens and Alyssa C. Morey. 2013. “Political Network Size
and Its Antecedents and Consequences.” Political Communication 30(3):371–394.
Festinger, Leon, Stanley Schachter and Kurt W. Back. 1950. Social Pressures in Informal
Groups: A Study of Human Factors in Housing. Stanford University Press.
Fitton, Martin. 1973. “Neighbourhood and Voting: a Sociometric Examination.” British
Journal of Political Science 3(4):445–472.
Fowler, James H. 2005. Turnout in a Small World. In The Social Logic of Politics: Personal
Networks as Contexts for Political Behavior, ed. Alan S. Zuckerman. Philadelphia: Temple
University Press pp. 269–287.
Fowler, James H., Michael T. Heaney, David W. Nickerson, John F. Padgett and Betsy Sinclair.
2011. “Causality in Political Networks.” American Politics Research 39(2):437–480.
Franco, Annie, Neil Malhotra, Gabor Simonovits and L. J. Zigerell. 2017. “Developing
Standards for Post-Hoc Weighting in Population-Based Survey Experiments.” Journal of
Experimental Political Science 4(2):161–172.
Glaser, William A. 1959. “The Family and Voting Turnout.” Public Opinion Quarterly
23(4):563–570.
Goffman, Erving. 1959. The Presentation of Self in Everyday Life. United States: Anchor
Books.
Goffman, Erving. 1967. Interaction Ritual: Essays on Face-to-Face Behavior. Garden City,
NY: Doubleday.
Harell, Allison. 2009. “Equal Participation but Separate Paths?: Women’s Social Capital and
Turnout.” Journal of Women, Politics & Policy 30(1):1–22.
Holbrook, Allyson L. and Jon A. Krosnick. 2010. “Social desirability bias in voter turnout
reports: Tests using the item count technique.” Public Opinion Quarterly 74(1):37–67.
Homans, George Caspar. 1961. Social behavior: its elementary forms. New York, NY: Harcourt,
Brace, and World, Inc.
Huckfeldt, R. Robert. 1979. “Political Participation and the Neighborhood Social Context.”
American Journal of Political Science 23(3):579.
Huckfeldt, R. Robert, Paul E. Johnson and John D. Sprague. 2004. Political disagreement:
the survival of diverse opinions within communication networks. New York, NY: Cambridge
University Press.
Huckfeldt, Robert. 2001. “The Social Communication of Political Expertise.” American
Journal of Political Science 45(2):425–438.
Huckfeldt, Robert. 2007. “Unanimity, Discord, and the Communication of Public Opinion.”
American Journal of Political Science 51(4):978–995.
Huckfeldt, Robert and Jeanette Morehouse Mendez. 2008. “Moths, Flames, and Political
Engagement: Managing Disagreement Within Communication Networks.” The Journal of
Politics 70(1):83–96.
Huckfeldt, Robert, Jeanette Morehouse Mendez and Tracy Osborn. 2004. “Disagreement,
Ambivalence, and Engagement: The Political Consequences of Heterogeneous Networks.”
Political Psychology 25(1):65–95.
Huckfeldt, Robert and John Sprague. 1995. Citizens, Politics, and Social Communication:
Information and Influence in an Election Campaign. New York: Cambridge University Press.
Huckfeldt, Robert and John Sprague. 2006. Presidential Election Campaign Study, 1984:
South Bend, Indiana. Technical report, Inter-university Consortium for Political and Social
Research [distributor], Ann Arbor, MI. https://doi.org/10.3886/ICPSR06522.v1.
Jang, Seung-Jin. 2009. “Are Diverse Political Networks Always Bad for Participatory Democ-
racy? Indifference, Alienation, and Political Disagreements.” American Politics Research
37(5):879–898.
Karp, Jeffrey A. and David Brockington. 2005. “Social Desirability and Response Validity:
A Comparative Analysis of Overreporting Voter Turnout in Five Countries.” Journal of
Politics 67(3):825–840.
Keele, Luke, Corrine McConnaughy and Ismail White. 2012. “Strengthening the Experi-
menter’s Toolbox: Statistical Estimation of Internal Validity.” American Journal of Political
Science 56(2):484–499.
Kenny, Christopher B. 1992. “Political Participation and Effects from the Social Environment.”
American Journal of Political Science 36(1):259.
Kim, Joohan, Robert O. Wyatt and Elihu Katz. 1999. “News, Talk, Opinion, Participation:
The Part Played by Conversation in Deliberative Democracy.” Political Communication
16(4):361–385.
Klar, Samara and Yanna Krupnikov. 2016. Independent Politics: How American Disdain for
Parties Leads to Political Inaction. New York, NY: Cambridge University Press.
Klar, Samara and Yotam Shmargad. 2017. “The Effect of Network Structure on Preference
Formation.” The Journal of Politics 79(2):717–721.
Klofstad, Casey A., Anand Edward Sokhey and Scott D. McClurg. 2013. “Disagreeing about
Disagreement: How Conflict in Social Networks Affects Political Behavior.” American
Journal of Political Science 57(1):120–134.
Klofstad, Casey A, Scott D McClurg and Meredith Rolfe. 2009. “Measurement of Political
Discussion Networks: A Comparison of Two “Name Generator” Procedures.” Public Opinion
Quarterly 73(3):462–483.
Knoke, David. 1990. “Networks of Political Action: Toward Theory Construction.” Social
Forces 68(4):1041–1063.
Kropko, Jonathan, Ben Goodrich, Andrew Gelman and Jennifer Hill. 2014. “Multiple
Imputation for Continuous and Categorical Data: Comparing Joint Multivariate Normal
and Conditional Approaches.” Political Analysis 22(4):497–519.
Lake, Ronald La Due and Robert Huckfeldt. 1998. “Social Capital, Social Networks, and
Political Participation.” Political Psychology 19(3):567–584.
Laumann, Edward O. 1973. Bonds of pluralism: The form and substance of urban social
networks. Wiley-Interscience.
Leighley, Jan E. 1990. “Social Interaction and Contextual Influences On Political Participa-
tion.” American Politics Quarterly 18(4):459–475.
Marsden, P. V. 1987. “Core discussion networks of Americans.” American Sociological Review
52(1):122–131.
McClurg, Scott D. 2003. “Social Networks and Political Participation: The Role of Social
Interaction in Explaining Political Participation.” Political Research Quarterly 56(4):449–464.
McClurg, Scott D. 2006. “The Electoral Relevance of Political Talk: Examining Disagreement
and Expertise Effects in Social Networks on Political Participation.” American Journal of
Political Science 50(3):737–754.
McDonald, Michael P. 2016. “Voter Turnout.”. http://www.electproject.org/home/voter-
turnout/voter-turnout-data. Date accessed: 2016-06-15.
McLeod, Jack M., Dietram A. Scheufele and Patricia Moy. 1999. “Community, Communication,
and Participation: The Role of Mass Media and Interpersonal Discussion in Local Political
Participation.” Political Communication 16(3):315–336.
Miller, Patrick R., Piotr S. Bobkowski, Daniel Maliniak and Ronald B. Rapoport. 2015.
“Talking Politics on Facebook: Network Centrality and Political Discussion Practices in
Social Media.” Political Research Quarterly 68(2):377–391.
Mutz, Diana C. 2002. “The Consequences of Cross-Cutting Networks for Political Participa-
tion.” American Journal of Political Science 46(4):838–855.
Mutz, Diana C. and Jeffery J. Mondak. 1997. “Dimensions of Sociotropic Behavior: Group-
Based Judgements of Fairness and Well-Being.” American Journal of Political Science
41:284–308.
Nickerson, David W. 2008. “Is Voting Contagious? Evidence from Two Field Experiments.”
American Political Science Review 102(1):49–57.
Nir, Lilach. 2005. “Ambivalent Social Networks and Their Consequences for Participation.”
International Journal of Public Opinion Research 17(4):422–442.
Nir, Lilach. 2011. “Disagreement and Opposition in Social Networks: Does Disagreement
Discourage Turnout?” Political Studies 59(3):674–692.
Oliver, J. Eric. 1996. “The Effects of Eligibility Restrictions and Party Activity on Absentee
Voting and Overall Turnout.” American Journal of Political Science 40(2):498–513.
Olsen, Marvin E. 1972. “Social Participation and Voting Turnout: A Multivariate Analysis.”
American Sociological Review 37(3):317–333.
Pietryka, Matthew T. 2016. “Accuracy Motivations, Predispositions, and Social Information
in Political Discussion Networks.” Political Psychology 37(3):367–386.
Pollock, Philip H. 1982. “Organizations as Agents of Mobilization: How Does Group Activity
Affect Political Participation?” American Journal of Political Science 26(3):485–503.
R Core Team. 2017. “R: A Language and Environment for Statistical Computing.”.
https://www.R-project.org/.
Rainey, Carlisle. 2014. “Arguing for a Negligible Effect.” American Journal of Political Science
58(4):1083–1091.
Rolfe, Meredith. 2012. Voter Turnout: A Social Theory of Political Participation. New York,
NY: Cambridge University Press.
Rubin, Donald B. 2009. Multiple Imputation for Nonresponse in Surveys. John Wiley & Sons.
Ryan, John Barry. 2011. “Social Networks as a Shortcut to Correct Voting.” American Journal
of Political Science 55(4):753–766.
Sallach, David. L, Nicholas Babchuk and Alan Booth. 1972. “Social Involvement and Political
Activity: Another View.” Social Science Quarterly 52(4):879–892.
Scheff, Thomas J. 1988. “Shame and conformity: The deference-emotion system.” American
Sociological Review 53(3):395–406.
Scheufele, Dietram A. 2000. “Talk or conversation? Dimensions of interpersonal discussion
and their implications for participatory democracy.” Journalism and Mass Communication
Quarterly 77(4):727–743.
Scheufele, Dietram A., Matthew C. Nisbet, Dominique Brossard and Erik C. Nisbet. 2004.
“Social Structure and Citizenship: Examining the Impacts of Social Setting, Network Het-
erogeneity, and Informational Variables on Political Participation.” Political Communication
21(3):315–338.
Silver, Brian D., Barbara A. Anderson and Paul R. Abramson. 1986. “Who Overreports
Voting?” American Political Science Review 80:613–624.
Straits, Bruce C. 1990. “The Social Context of Voter Turnout.” Public Opinion Quarterly
54(1):64–73.
Suhay, Elizabeth. 2015. “Explaining group influence: The role of identity and emotion in
political conformity and polarization.” Political Behavior 37(1):221–251.
Tam Cho, Wendy K. 1999. “Naturalization, Socialization, Participation: Immigrants and
(Non-) Voting.” The Journal of Politics 61(4):1140–1155.
Timpone, Richard J. 1998. “Ties That Bind: Measurement, Demographics, and Social
Connectedness.” Political Behavior 20(1):53–77.
Tversky, Amos and Daniel Kahneman. 1973. “Availability: A heuristic for judging frequency
and probability.” Cognitive Psychology 5(2):207–232.
Uhlaner, Carole J. 1989. “Rational Turnout - the Neglected Role of Groups.” American Journal
of Political Science 33:390–422.
VanderWeele, Tyler J. 2011. “Sensitivity Analysis for Contagion Effects in Social Networks.”
Sociological Methods & Research 40(2):240–255.
Wolfinger, Nicholas H. and Raymond E. Wolfinger. 2008. “Family Structure and Voter
Turnout.” Social Forces 86(4):1513–1528.
Zaller, John and Stanley Feldman. 1992. “A Simple Theory of the Survey Response: Answering
Questions and Revealing Preferences.” American Journal of Political Science 36:579–616.
Zipp, John F. and Joel Smith. 1979. “The Structure of Electoral Political Participation.”
American Journal of Sociology 85(1):167–177.
Online Supporting Information
The ‘Social’ Part of Social Desirability: How Social Networks Influence the Survey Response
Matthew T. Pietryka | [email protected]
F. Langley | [email protected]
Florida State University | Department of Political Science
January 3, 2019
SI A Variable definitions
SI B Summary statistics
SI C Multiple imputation details
SI D Full observational model results
SI E Balance statistics for the experimental study
SI F Treatment effects by the type of relationship elicited
SI G Attrition during the experiment
A. Variable definitions
We describe below how we constructed each variable we use in our analysis. The South Bend
study included some items in all three waves and others only the first time the respondent
was interviewed. For items included in all waves, we use the most recent wave before the
election. Below, we list the relevant items we use from each wave. The item names include
a letter indicating their wave followed by a unique numeric identifier, as described in the
codebook available at https://doi.org/10.3886/ICPSR06522.v1. Likewise, we list
CCES variable names from the ’CCES Guide 2016.pdf’ (dated August 16, 2017) available at
https://doi.org/10.7910/DVN/GDF6Z0. For the Williamsburg data, we describe the
items here since no public codebook exists.
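To make the wave rule concrete, here is a minimal sketch in R (not the authors' replication code; the data frame south_bend and the lower-case column names are hypothetical) of taking the most recent pre-election response, using the church attendance items listed below (C211; B224; A133):

    library(dplyr)

    # Prefer the wave-C response, then wave B, then wave A; coalesce()
    # returns the first non-missing value across the three columns
    south_bend <- south_bend %>%
      mutate(church_attendance = coalesce(c211, b224, a133))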
Turnout - Self-Reported: 0 = Did not report voting; 1 = Reported voting
South Bend source: C27: “In talking to people about the election, we find that a lot
of people weren’t able to vote for one reason or another. How about you? Did
you vote this fall, or did something keep you from voting?”
Williamsburg source: Wave C: Respondents were first asked “Are you currently regis-
tered to vote in the City of Williamsburg?”. Those answering yes were then asked
“Did you vote in the May 4th city council election?”. We code respondents as
voters if they answered yes to the first question and non-voters if they answered
no to either question.
CCES source: (Common content) CC16_401
Turnout - Validated: 0 = Not in file as voter; 1 = In file as voter
South Bend source: (Indiana voter file) vtpg84.
Williamsburg source: Williamsburg voter file.
CCES source: (Common content) CL_E2016GVM
Network Participation: ranges from 0 (none of named discussants are voters) to 1 (all
named discussants are voters)
South Bend source: The number of named discussants the respondent perceived as
voters divided by the number of named discussants. Unlike the other two surveys,
the South Bend Study provided a ‘don’t know’ option for this question. We treat
‘don’t know’ responses as perceived non-voters because people rarely can confirm
someone is a non-voter. The results are almost identical when treating ‘don’t
know’ responses as missing and imputing them. The name generator text was:
“Can you give me the FIRST names of the three people you talked with most about
the events of the past election year? These people might be from your family,
from work, from the neighborhood, from church, from some other organization
you belong to, or they might be from somewhere else. All I need are the first
names.” Perceptions of discussant turnout come from (C149-C151).
Williamsburg source: The number of named friends casting validated votes in the
2009 VA gubernatorial election divided by the total number of named friends.
Friends are identified with the name generator (Wave A): “One of the purposes
of this survey is to examine the flow of political information on campus between
people who know each other. We are particularly interested in knowing whether
people who are friends have similar opinions and thoughts about politics. To
help us answer this question, we are asking you to give first and last names of
up to five of your closest friends who attend William and Mary. Your responses
will remain completely confidential, and the friends listed below will never know
you named them in the survey.”
CCES source: (Module) The number of named discussants the respondent perceived
as voters divided by the number of named discussants. The name generator text
was: “From time to time, people discuss government, elections and politics with
other people. I’d like to ask you about the people with whom you discuss these
matters. These people might or might not be relatives. Can you think of anyone?
Please enter their first name in the box below. Is there anyone else you talk with
about these matters? Please enter their first name in the box below. If you cannot
think of anyone, please hit next.” Perceptions of discussant turnout come from
the question, “Do you think the person/people you named will vote for Clinton,
Trump, some other candidate, or will not vote?”
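As a concrete illustration of this coding rule, a minimal sketch in R (not the authors' code; the function name and response labels are ours):

    # Share of named discussants perceived as voters; 'don't know'
    # responses are coded as perceived non-voters, following the rule
    # described above for the South Bend data
    network_participation <- function(perceptions) {
      # perceptions: character vector such as c("voter", "dont_know"),
      # with NA for discussant slots the respondent left empty
      named <- perceptions[!is.na(perceptions)]
      if (length(named) == 0) return(NA_real_)  # no discussants named
      sum(named == "voter") / length(named)
    }

    network_participation(c("voter", "dont_know", "voter"))  # returns 2/3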
Education: Amount of education, divided into quartiles.
South Bend source: C178; B202; A102
Williamsburg source: Derived from the Wave A item: “When do you expect to grad-
uate from William & Mary?”
CCES source: (Common content) educ. Before creating quartiles, we collapse “Some
college” and “2-year” responses into a single category.
Income: Family income, divided into quintiles.
South Bend source: C232; B249; A142
Williamsburg source: Derived from the Wave A item: “How would you describe your
family’s economic status?”
CCES source: (Common content) faminc. Before creating the quintiles, all values
at or above $150,000 were collapsed into a single category because the response
options are not mutually exclusive (e.g., some are coded as “$150,000 or more”
while others are coded as “$200,000 - $249,999”).
Race/Ethnicity: White, non-Hispanic; Black; Other
South Bend source: C217; B230; A139
Williamsburg source: Derived from the Wave A item: “What term best describes
your race?”
CCES source: (Common content) race
Marital Status: 0 = Widowed, divorced, separated, or never married; 1 = Married
South Bend source: C177; B201; A101
Williamsburg source: NA (not included in the survey)
CCES source: (Common content) marstat
Church Attendance: Frequency of church attendance
South Bend source: C211; B224; A133
Williamsburg source: NA (not included in the survey)
CCES source: (Common content) pew_churatd
Age: Age in years, divided into quintiles
South Bend source: 1984 - C212/B225/A134
Williamsburg source: NA (not included in the survey)
CCES source: (Common content) 2016 - birthyr
Gender: Female or male
South Bend source: crsex; brsex; rsex. These items are interviewer assessments
rather than self-reports.
Williamsburg source: Wave A: “Are you: []Female []Male”
CCES source: (Common content) gender
Political Interest: Interest in politics, government, or public affairs
South Bend source: B136; A64
Williamsburg source: Wave A: “In general how interested are you in national poli-
tics?”
CCES source: (Common content) newsint
Party ID Strength: The absolute value of a seven-point party ID scale with pure
independents centered at zero.
South Bend source: The seven-point scale is built through branching items B237-
B239 or A73-A75.
Williamsburg source: Constructed from the Wave A item: “How would you describe
your party affiliation?”
CCES source: (Common content) pid7
Ideology Strength: The absolute value of the left-right ideology scale with moderates
centered at zero.
South Bend source: The seven-point scale is built through branching items B233-B235
or A65-A67.
Williamsburg source: The absolute value of a seven-point ideology scale with mod-
erates centered at zero. Ideology is constructed from the Wave A item: “How
would you rate yourself on a scale of 1 to 7, where 1 is very liberal and 7 is very
conservative?”
CCES source: (Common content) ideo5.
Recent Mover: 0 = Has not moved in last two years; 1 = Has moved in last two years
South Bend source: C213; B226; A135
Williamsburg source: NA (not included in the survey)
CCES source: (Common content) CC16_361
No Network: 0 = Identified one or more discussants with the name generator; 1 = Identi-
fied no discussants.
B. Summary statistics
This section provides summary statistics for the variables defined in SI-A.
• Statistics for the South Bend study are presented in Table B1.
• Statistics for the Williamsburg study are presented in Table B2.
• Statistics for the 2016 CCES are presented in Table B3.
Table B1: Summary statistics for the South Bend data
Variable                 Value(s)                                Mean   Std. Dev.
Network Participation    [0-1]                                   0.93   0.18
Age                      Quintile 1                              0.2    –
                         Quintile 2                              0.2    –
                         Quintile 3                              0.2    –
                         Quintile 4                              0.2    –
                         Quintile 5                              0.2    –
                         Missing                                 0.01   –
Church Attendance        Never                                   0.08   –
                         Seldom                                  0.23   –
                         A few times a year                      0.12   –
                         Once/twice a month                      0.13   –
                         Once a week                             0.4    –
                         Missing                                 0.05   –
Education                Quartile 1                              0.25   –
                         Quartile 2                              0.25   –
                         Quartile 3                              0.25   –
                         Quartile 4                              0.25   –
                         Missing                                 0      –
Gender                   Male                                    0.45   –
                         Female                                  0.55   –
Ideology Strength        Moderate                                0.01   –
                         Moderate, leans liberal/conservative    0.25   –
                         Weak liberal/conservative               0.25   –
                         Strong liberal/conservative             0.19   –
                         Missing                                 0.3    –
Income                   Quintile 1                              0.18   –
                         Quintile 2                              0.18   –
                         Quintile 3                              0.18   –
                         Quintile 4                              0.18   –
                         Quintile 5                              0.18   –
                         Missing                                 0.09   –
Political Interest       0 (least interested)                    0.05   –
                         1                                       0.12   –
                         2                                       0.26   –
                         3 (most interested)                     0.41   –
                         Missing                                 0.17   –
Marital Status           Not married                             0.25   –
                         Married                                 0.75   –
                         Missing                                 0      –
Party ID Strength        Independent                             0.08   –
                         Lean D/R                                0.22   –
                         Not very strong D/R                     0.22   –
                         Strong D/R                              0.29   –
                         Missing                                 0.19   –
Race/Ethnicity           White, Non-Hispanic                     0.93   –
                         Black                                   0.04   –
                         Other                                   0.02   –
                         Missing                                 0.01   –
Recent mover             No                                      0.98   –
                         Yes                                     0.02   –
                         Missing                                 0      –
Self-reported Turnout    No                                      0.11   –
                         Yes                                     0.89   –
Validated Turnout        No                                      0.33   –
                         Yes                                     0.67   –
Table B2: Summary statistics for the Williamsburg data
Variable                 Value(s)                                Mean   Std. Dev.
Network Participation    [0-1]                                   0.42   0.43
Age                      Missing                                 1      –
Church Attendance        Missing                                 1      –
Education                Quartile 1                              0.25   –
                         Quartile 2                              0.25   –
                         Quartile 3                              0.25   –
                         Quartile 4                              0.25   –
                         Missing                                 0.01   –
Gender                   Male                                    0.38   –
                         Female                                  0.61   –
                         Missing                                 0.01   –
Ideology Strength        Moderate                                0.17   –
                         Moderate, leans liberal/conservative    0.32   –
                         Weak liberal/conservative               0.27   –
                         Strong liberal/conservative             0.11   –
                         Missing                                 0.13   –
Income                   Quintile 1                              0.18   –
                         Quintile 2                              0.18   –
                         Quintile 3                              0.18   –
                         Quintile 4                              0.18   –
                         Quintile 5                              0.18   –
                         Missing                                 0.08   –
Political Interest       0 (least interested)                    0.02   –
                         1                                       0.13   –
                         2                                       0.45   –
                         3 (most interested)                     0.34   –
                         Missing                                 0.06   –
Marital Status           Missing                                 1      –
Party ID Strength        Independent                             0.09   –
                         Lean D/R                                0.28   –
                         Not very strong D/R                     0.27   –
                         Strong D/R                              0.22   –
                         Missing                                 0.14   –
Race/Ethnicity           White, Non-Hispanic                     0.75   –
                         Black                                   0.04   –
                         Hispanic                                0.04   –
                         Other                                   0.1    –
                         Missing                                 0.08   –
Recent mover             Missing                                 1      –
Self-reported Turnout    No                                      0.67   –
                         Yes                                     0.33   –
Validated Turnout        No                                      0.77   –
                         Yes                                     0.23   –
Table B3: Summary statistics for the CCES data
Variable                 Value(s)                                Mean   Std. Dev.
Network Participation    [0-1]                                   0.91   0.25
Age                      Quintile 1                              0.2    –
                         Quintile 2                              0.2    –
                         Quintile 3                              0.2    –
                         Quintile 4                              0.2    –
                         Quintile 5                              0.2    –
Church Attendance        Never                                   0.27   –
                         Seldom                                  0.22   –
                         A few times a year                      0.13   –
                         Once/twice a month                      0.08   –
                         Once a week                             0.2    –
                         More than once a week                   0.08   –
                         Missing                                 0.01   –
Education                Quartile 1                              0.25   –
                         Quartile 2                              0.25   –
                         Quartile 3                              0.25   –
                         Quartile 4                              0.25   –
Gender                   Male                                    0.46   –
                         Female                                  0.54   –
Ideology Strength        Moderate                                0.35   –
                         Liberal/conservative                    0.43   –
                         Very liberal/conservative               0.17   –
                         Missing                                 0.05   –
Income                   Quintile 1                              0.18   –
                         Quintile 2                              0.18   –
                         Quintile 3                              0.18   –
                         Quintile 4                              0.18   –
                         Quintile 5                              0.18   –
                         Missing                                 0.11   –
Political Interest       0 (least interested)                    0.04   –
                         1                                       0.12   –
                         2                                       0.28   –
                         3 (most interested)                     0.55   –
                         Missing                                 0.01   –
Marital Status           Not married                             0.44   –
                         Married                                 0.56   –
Party ID Strength        Independent                             0.16   –
                         Lean D/R                                0.19   –
                         Not very strong D/R                     0.24   –
                         Strong D/R                              0.39   –
                         Missing                                 0.02   –
Race/Ethnicity           White, Non-Hispanic                     0.74   –
                         Black                                   0.1    –
                         Hispanic                                0.07   –
                         Other                                   0.1    –
Recent mover             No                                      0.79   –
                         Yes                                     0.21   –
                         Missing                                 0      –
Self-reported Turnout    No                                      0.08   –
                         Yes                                     0.92   –
Validated Turnout        No                                      0.37   –
                         Yes                                     0.63   –
C. Multiple imputation details
Rather than assuming that responses are missing completely at random, we use a multiple
imputation procedure to model the missing data, treating the cases as missing at random,
conditional on the predictors in the imputation model. This approach therefore becomes
more useful as the imputation model improves. For imputation models, prediction is the
criterion of interest rather than causal identification, so adding variables that improve
prediction is useful regardless of whether (or how) they are causally related to the imputed
variable (King et al., 2001). We therefore include as predictors all other variables from
our models. To minimize functional form assumptions, we use untransformed variables in
the imputation models. After the imputations, we discretize age, education, and income
and transform party identification and ideology to measures of partisan and ideological
extremity.
Since non-response is strongly related to media consumption (Berinsky, 2007; Keeter
et al., 2006), we also include measures of media use as additional predictors in the imputation
models. The South Bend study and the CCES both provide two items that measure how
often respondents watch and read the news.1 Unfortunately, the Williamsburg survey lacked
analogous items. The CCES common content also included two five-item batteries that
measure media use and social media use.2 We include each scale item as a separate predictor.
Finally, we include neighborhood fixed effects as predictors in the South Bend imputations
and state fixed effects for the CCES imputations.3
1The South Bend measures come from items C6 and C7. The CCES items are indicators constructed from common content item CC16_300b.
2The media use items are CC16_300_1–CC16_300_5. The social media use items are CC16d_300_1–CC16d_300_5.
3In South Bend, the neighborhood identifiers are items adnhd, bdnhd, cadnhd. We use the most recent non-missing value. In the CCES, we use the post-election common content item, inputstate_post.
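For readers who wish to reproduce this workflow, a minimal sketch in R using the mice package follows; the data frame and model variables are hypothetical placeholders rather than the actual replication code:

    library(mice)

    # Impute with chained equations; m = 50 matches the number of
    # imputations reported in Tables D1 and D2
    imp <- mice(survey_data, m = 50, seed = 1984)

    # Fit the substantive model on each completed dataset, then combine
    # the 50 sets of estimates with Rubin's rules
    fits <- with(imp, glm(overreport ~ network_participation + education + income,
                          family = binomial))
    summary(pool(fits))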
D. Full observational model results
To conserve space, we report only the coefficients associated with network participation in
the main text. The tables below display the complete results from those logistic regression
models. In addition, the last columns in each table provide an alternative specification of
validated turnout. As we explain in the main text, validated voting measures typically treat
unmatched respondents as nonvoters, but Berent, Krosnick and Lupia (2016) argue that
this approach introduces too much error. To address this concern, we also report results
from models where we exclude unmatched respondents from analysis, ensuring that all
observations have been validated.
Table D1: Nonvoters embedded in more participatory networks are more likely to report that they voted.
Columns: (1)-(2) South Bend; (3)-(4) Williamsburg; (5)-(6) CCES; (7)-(8) CCES - Matched.
Cell entries are logistic regression coefficients with standard errors in parentheses;
– marks a variable omitted from that model.

Intercept                                  0.17 (0.25) | −3.25*** (0.70) | −2.17*** (0.11) | −3.56*** (0.47) | 0.59 (0.51) | −2.18** (0.96) | −0.55 (0.60) | −3.54** (1.77)
Network Participation (as a proportion)    0.73** (0.35) | 1.05** (0.45) | 2.29*** (0.52) | 2.08*** (0.54) | 1.20** (0.58) | 1.64** (0.72) | 1.92*** (0.74) | 2.43** (1.03)
Church Attendance Frequency                – | 0.23*** (0.08) | – | – | – | 0.20 (0.13) | – | 0.03 (0.18)
Political Interest                         – | 0.82*** (0.14) | – | 0.32** (0.14) | – | 0.48** (0.21) | – | 0.80** (0.36)
Party ID Strength                          – | 0.27* (0.14) | – | 0.04 (0.12) | – | 0.54*** (0.16) | – | 0.38 (0.29)
Ideology Strength                          – | −0.33* (0.18) | – | 0.03 (0.11) | – | −0.52* (0.30) | – | −0.60 (0.58)
Education                                  – | 0.06 (0.12) | – | 0.35*** (0.08) | – | 0.51** (0.20) | – | 0.62* (0.34)
Income                                     – | 0.18 (0.13) | – | −0.02 (0.07) | – | 0.10 (0.16) | – | 0.22 (0.26)
Indicators (0 = No; 1 = Yes)
  No network                               – | −0.57 (0.49) | – | 0.04 (0.19) | – | −0.05 (0.77) | – | 3.42* (1.96)
  Female                                   – | −0.19 (0.23) | – | −0.49*** (0.18) | – | −0.35 (0.41) | – | −0.52 (0.66)
  Married                                  – | −0.23 (0.30) | – | – | – | 0.11 (0.40) | – | 0.56 (0.64)
  Recent Mover                             – | 0.44 (0.75) | – | – | – | −1.26*** (0.42) | – | −2.34*** (0.69)
Race/Ethnicity Dummies (ref. = White, non-Hispanic)
  Black                                    – | 1.43*** (0.51) | – | −0.11 (0.49) | – | −0.98 (0.71) | – | −1.69 (1.20)
  Hispanic                                 – | – | – | −0.44 (0.56) | – | 0.98 (0.89) | – | −2.03* (1.04)
  Other                                    – | −0.82 (1.02) | – | 0.02 (0.28) | – | 1.23** (0.62) | – | −0.33 (0.81)
Age Dummies (ref. = Quintile 1)
  Age - Quintile 2                         – | −0.00 (0.36) | – | – | – | −0.40 (0.48) | – | 0.66 (0.87)
  Age - Quintile 3                         – | 0.80 (0.54) | – | – | – | −0.09 (0.59) | – | −0.65 (0.92)
  Age - Quintile 4                         – | 0.65 (0.53) | – | – | – | −1.11 (0.74) | – | −1.73 (1.10)
  Age - Quintile 5                         – | 2.39*** (0.47) | – | – | – | −0.43 (0.75) | – | −0.90 (1.46)

AIC                                        630.29 | 473.04 | 1008.46 | 984.47 | 487.17 | 409.23 | 227.70 | 196.69
N - Observations                           500 | 500 | 1335 | 1335 | 552 | 552 | 183 | 183
N - Imputations                            50 | 50 | 50 | 50 | 50 | 50 | 50 | 50
***p < 0.01, **p < 0.05, *p < 0.1

Note: This table displays coefficients from logistic regression models in which overreporting is regressed on network participation. Standard errors are provided in parentheses. The errors in the South Bend models are corrected for clustering at the neighborhood level. The CCES estimates rely on post-election weights.
Table D2: Models relying on self-reported turnout suggest greater social influence than do models relying on validated turnout.

Columns: within each study (South Bend, Williamsburg, CCES, CCES - Matched), the first
column takes self-reported turnout and the second takes validated turnout as the dependent
variable. Cell entries are logistic regression coefficients with standard errors in
parentheses; – marks a variable omitted from that model.

Intercept                                  −2.05*** (0.46) | −0.82** (0.42) | −1.18*** (0.27) | −1.07*** (0.30) | −0.47 (0.87) | −0.58 (0.75) | 0.33 (0.94) | −0.33 (0.91)
Network Participation (as a proportion)    0.73* (0.40) | 0.05 (0.23) | 2.26*** (0.38) | 1.91*** (0.38) | 1.13** (0.54) | −0.28 (0.45) | 1.69*** (0.61) | −0.11 (0.56)
Church Attendance Frequency                0.29*** (0.07) | 0.13*** (0.03) | – | – | 0.18 (0.14) | 0.02 (0.06) | 0.15 (0.18) | 0.07 (0.09)
Political Interest                         0.59*** (0.10) | 0.05 (0.08) | 0.23*** (0.08) | 0.16* (0.09) | 0.63*** (0.21) | 0.49*** (0.15) | 0.72*** (0.25) | 0.55*** (0.20)
Party ID Strength                          0.44*** (0.13) | 0.21*** (0.08) | 0.05 (0.08) | 0.05 (0.08) | 0.43*** (0.16) | 0.00 (0.09) | 0.37* (0.21) | −0.00 (0.13)
Ideology Strength                          −0.34*** (0.12) | −0.11 (0.08) | −0.03 (0.07) | −0.07 (0.08) | −0.33 (0.28) | 0.04 (0.15) | −0.76** (0.36) | 0.14 (0.23)
Education                                  0.23** (0.10) | 0.19** (0.09) | −0.04 (0.05) | −0.25*** (0.05) | 0.29 (0.20) | −0.15* (0.08) | 0.47** (0.23) | 0.04 (0.12)
Income                                     0.18* (0.11) | 0.02 (0.05) | 0.03 (0.04) | 0.06 (0.04) | 0.08 (0.17) | 0.04 (0.07) | −0.08 (0.18) | 0.02 (0.11)
Indicators (0 = No; 1 = Yes)
  No network                               −0.73 (0.44) | −0.44* (0.23) | −0.37*** (0.13) | −0.58*** (0.15) | −0.29 (0.66) | −0.27 (0.36) | 2.13** (0.87) | −0.25 (0.54)
  Female                                   −0.03 (0.18) | 0.02 (0.17) | −0.31*** (0.11) | −0.15 (0.12) | −0.42 (0.40) | 0.13 (0.18) | −0.34 (0.52) | 0.25 (0.27)
  Married                                  0.03 (0.22) | 0.23* (0.13) | – | – | 0.13 (0.35) | 0.03 (0.19) | 0.30 (0.36) | −0.02 (0.28)
  Recent Mover                             −0.21 (0.62) | −0.77*** (0.21) | – | – | −0.81** (0.33) | 0.31 (0.22) | −0.89* (0.46) | 0.38 (0.31)
Race/Ethnicity Dummies (ref. = White, non-Hispanic)
  Black                                    0.05 (0.32) | −1.03*** (0.31) | −0.54 (0.34) | −0.78* (0.42) | −0.84 (0.79) | 0.09 (0.35) | −1.21 (1.01) | −0.10 (0.53)
  Hispanic                                 – | – | −0.13 (0.28) | −0.01 (0.30) | 0.60 (0.87) | −0.51 (0.35) | −0.36 (0.96) | 0.04 (0.52)
  Other                                    −1.20** (0.59) | −0.86** (0.42) | −0.28 (0.19) | −0.44** (0.22) | 0.78 (0.54) | −0.41 (0.32) | 0.25 (0.66) | −0.19 (0.49)
Age Dummies (ref. = Quintile 1)
  Age - Quintile 2                         0.49* (0.30) | 0.61*** (0.20) | – | – | −0.71 (0.45) | −0.12 (0.25) | −1.53** (0.71) | −0.44 (0.35)
  Age - Quintile 3                         1.20*** (0.40) | 0.87*** (0.18) | – | – | −0.26 (0.55) | 0.32 (0.27) | −0.79 (0.85) | 0.50 (0.41)
  Age - Quintile 4                         1.32*** (0.41) | 1.03*** (0.23) | – | – | −0.71 (0.83) | 0.61** (0.29) | −1.80* (0.94) | 0.39 (0.48)
  Age - Quintile 5                         1.56*** (0.33) | −0.76*** (0.18) | – | – | −0.05 (0.66) | 0.84*** (0.31) | −1.14 (0.86) | 0.41 (0.43)

AIC                                        834.75 | 1694.83 | 2123.64 | 1804.75 | 590.78 | 1661.27 | 383.35 | 998.63
N - Observations                           1510 | 1510 | 1735 | 1735 | 1512 | 1512 | 1143 | 1143
N - Imputations                            50 | 50 | 50 | 50 | 50 | 50 | 50 | 50
***p < 0.01, **p < 0.05, *p < 0.1

Note: This table displays coefficients from logistic regression models in which self-reported or validated turnout is regressed on network participation. Standard errors are provided in parentheses. The errors in the South Bend models are corrected for clustering at the neighborhood level. The CCES estimates rely on post-election weights.
E. Balance statistics for the experimental study
Tables E1-E3 present balance statistics for each randomly assigned variable in the CCES
experiment. In the tables, the means reflect proportions of respondents with each value of
the pre-treatment covariates examined in the observational study. The p-values come from
regressions of treatment assignment on the covariates. For the binary network participation
prime, shown in Table E1, the p-values come from logistic regression. For the trichotomous
treatments shown in Table E2 (the national treatment) and Table E3 (the type of relationship
the name generator elicited), the p-values come from multinomial logistic regressions. In
all three models, missing covariate values are treated as distinct categories. In the tables,
the covariates are ordered by the minimum p-value.
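As a rough sketch of how such balance checks can be run in R (hypothetical object and variable names; not the authors' code), using glm for the binary prime and nnet::multinom for the trichotomous treatments:

    library(nnet)

    # Binary network participation prime: logistic regression of treatment
    # assignment on pre-treatment covariates
    summary(glm(prime_voter ~ income + age + pid_strength,
                data = cces, family = binomial))

    # Trichotomous national turnout cue: multinomial logistic regression;
    # summary() reports coefficients and SEs, so compute p-values by hand
    fit <- multinom(national_cue ~ income + age + pid_strength, data = cces)
    z <- summary(fit)$coefficients / summary(fit)$standard.errors
    2 * pnorm(abs(z), lower.tail = FALSE)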
In general, the covariates appear balanced across treatments, but Table E1 suggests re-
spondents in the associate voted group differ somewhat from the associate abstained
respondents in terms of their income, age, and partisan strength. We therefore estimated
models controlling for these covariates. Since these covariates include missing values, we rely
on the imputed data from the observational study and combine the estimates using Rubin’s
rules. After including controls, respondents in the associate voted group were 4 percent-
age points more likely to report voting than respondents in the associate abstained
group (95% CI = [0.4,8.3]; p = 0.03). And, with these controls, the associate voted
group still tended to feel greater social pressure. Compared to the associate abstained
group, these respondents expected their associate to be 0.61 more disappointed on the
three-point scale (95% CI = [0.49,0.72]; p < 0.001).
Table E2 suggests some imbalance across values of the national turnout cue for church
attendance, income, education, and age. Much like the network participation prime effects,
the effect of the national turnout cue on perceptions of national turnout remains after
controlling for these covariates. Conditional on these controls, respondents’ estimates of
Table E1: Balance statistics for the network participation prime
                                                   Mean
Covariate            Covariate Value               1. Non-Voter   2. Voter   p-value
Income               Quintile 1                    0.2            0.15       baseline
                     Quintile 2                    0.17           0.18       0.1
                     Quintile 3                    0.14           0.21       0
                     Quintile 4                    0.19           0.16       0.62
                     Quintile 5                    0.18           0.18       0.17
                     Missing                       0.1            0.11       0.36
Age                  Quintile 1                    0.19           0.21       baseline
                     Quintile 2                    0.24           0.17       0.01
                     Quintile 3                    0.23           0.18       0.04
                     Quintile 4                    0.19           0.21       0.63
                     Quintile 5                    0.16           0.24       0.39
Party ID Strength    Independent                   0.13           0.17       baseline
                     Lean D/R                      0.19           0.17       0.06
                     Not very strong D/R           0.23           0.23       0.19
                     Strong D/R                    0.39           0.41       0.23
                     Missing                       0.05           0.03       0.02
Church Attendance    Never                         0.28           0.27       baseline
                     Seldom                        0.22           0.22       0.81
                     A few times a year            0.11           0.15       0.2
                     Once/twice a month            0.08           0.09       0.69
                     Once a week                   0.22           0.21       0.61
                     More than once a week         0.09           0.06       0.17
                     Missing                       0.01           0          0.73
Ideology Strength    Moderate                      0.31           0.34       baseline
                     Liberal/conservative          0.45           0.41       0.19
                     Very liberal/conservative     0.17           0.18       0.84
                     Missing                       0.07           0.07       0.85
Political Interest   0 (least interested)          0.05           0.05       baseline
                     1                             0.14           0.1        0.22
                     2                             0.27           0.26       0.68
                     3 (most interested)           0.53           0.57       0.8
                     Missing                       0.01           0.01       0.99
Gender               Male                          0.44           0.43       baseline
                     Female                        0.56           0.57       0.43
Race/Ethnicity       White, Non-Hispanic           0.73           0.74       baseline
                     Black                         0.1            0.09       0.62
                     Hispanic                      0.06           0.07       0.72
                     Other                         0.11           0.1        0.76
Marital Status       Not married                   0.48           0.48       baseline
                     Married                       0.52           0.52       0.71
Education            Quartile 1                    0.25           0.25       baseline
                     Quartile 2                    0.24           0.26       0.77
                     Quartile 3                    0.26           0.24       0.75
                     Quartile 4                    0.25           0.25       0.89
Recent mover         No                            0.77           0.79       baseline
                     Yes                           0.23           0.21       0.88
Table E2: Balance statistics for the national turnout cue
                                                   Mean                                   p-value
Covariate            Covariate Value               1. No cue   2. 37% cue   3. 59% cue    1 v 2     1 v 3
Church Attendance    Never                         0.26        0.27         0.3           baseline  baseline
                     Seldom                        0.21        0.22         0.22          0.77      0.64
                     A few times a year            0.13        0.12         0.13          0.61      0.43
                     Once/twice a month            0.1         0.07         0.09          0.09      0.32
                     Once a week                   0.23        0.24         0.17          0.54      0.06
                     More than once a week         0.07        0.08         0.07          0.95      0.56
                     Missing                       0           0.01         0.01          0         0
Income               Quintile 1                    0.17        0.14         0.23          baseline  baseline
                     Quintile 2                    0.18        0.17         0.19          0.77      0.33
                     Quintile 3                    0.19        0.2          0.14          0.44      0.03
                     Quintile 4                    0.2         0.18         0.16          0.79      0.01
                     Quintile 5                    0.16        0.18         0.19          0.7       0.19
                     Missing                       0.1         0.14         0.09          0.16      0.1
Education            Quartile 1                    0.27        0.26         0.23          baseline  baseline
                     Quartile 2                    0.27        0.23         0.24          0.97      0.43
                     Quartile 3                    0.23        0.24         0.28          0.47      0.02
                     Quartile 4                    0.24        0.27         0.25          0.24      0.07
Age                  Quintile 1                    0.2         0.18         0.23          baseline  baseline
                     Quintile 2                    0.2         0.2          0.2           0.83      0.34
                     Quintile 3                    0.17        0.22         0.21          0.27      0.88
                     Quintile 4                    0.23        0.21         0.15          0.97      0.02
                     Quintile 5                    0.2         0.19         0.21          1         0.8
Marital Status       Not married                   0.52        0.42         0.49          baseline  baseline
                     Married                       0.48        0.58         0.51          0.03      0.14
Political Interest   0 (least interested)          0.05        0.04         0.07          baseline  baseline
                     1                             0.13        0.1          0.12          0.59      0.72
                     2                             0.25        0.3          0.25          0.09      0.92
                     3 (most interested)           0.56        0.54         0.55          0.25      0.99
                     Missing                       0.01        0.02         0.01          0.23      0.67
Ideology Strength    Moderate                      0.32        0.29         0.37          baseline  baseline
                     Liberal/conservative          0.44        0.44         0.39          0.62      0.14
                     Very liberal/conservative     0.18        0.2          0.15          0.36      0.22
                     Missing                       0.06        0.07         0.09          0.14      0.29
Gender               Male                          0.46        0.45         0.41          baseline  baseline
                     Female                        0.54        0.55         0.59          0.66      0.19
Race/Ethnicity       White, Non-Hispanic           0.73        0.76         0.71          baseline  baseline
                     Black                         0.1         0.08         0.1           0.83      0.72
                     Hispanic                      0.06        0.06         0.07          0.74      0.72
                     Other                         0.11        0.09         0.12          0.26      0.82
Party ID Strength    Independent                   0.14        0.15         0.16          baseline  baseline
                     Lean D/R                      0.18        0.17         0.2           0.86      0.86
                     Not very strong D/R           0.23        0.25         0.21          0.49      0.56
                     Strong D/R                    0.42        0.4          0.4           0.88      0.86
                     Missing                       0.04        0.04         0.04          0.81      0.31
Recent mover         No                            0.77        0.78         0.79          baseline  baseline
                     Yes                           0.23        0.22         0.21          0.72      0.35
the national turnout rate were five percentage points lower if they received the 37% cue
compared to no cue (95% CI = [−7.8,−1.8]; p = 0.002); and three percentage points lower
if they received the 37% cue compared to the 59% cue (95% CI = [−6.5,−0.5]; p = 0.02).
Likewise, the national turnout cue still has little impact on turnout once these controls are
added. Conditional on these controls, respondents with the 37% cue were one percentage
point more likely to report voting than those who received no cue (95% CI = [−3.4, 5.1]; p
= 0.70) and four percentage points more likely to report voting than those who received
the 59% cue (95% CI = [−0.4,9.3]; p = 0.07). Thus, consistent with the main text, the
estimated conditional effects are in the opposite direction as we would expect if perceptions
of national turnout levels increased overreporting.
Table E3: Balance statistics for the type of relationship elicited
                                                   Mean                                                         p-value
Covariate            Covariate Value               1. Friend/Family   2. Coworker/Acquaintance   3. Person like you   1 v 2     1 v 3
Church Attendance    Never                         0.27               0.3                        0.26                 baseline  baseline
                     Seldom                        0.21               0.23                       0.21                 0.95      0.86
                     A few times a year            0.11               0.13                       0.15                 0.63      0.11
                     Once/twice a month            0.08               0.07                       0.1                  0.59      0.3
                     Once a week                   0.22               0.19                       0.23                 0.18      0.92
                     More than once a week         0.1                0.07                       0.04                 0.11      0.03
                     Missing                       0.01               0.01                       0                    0.77      0.68
Marital Status       Not married                   0.5                0.49                       0.43                 baseline  baseline
                     Married                       0.5                0.51                       0.57                 0.48      0.05
Gender               Male                          0.47               0.43                       0.41                 baseline  baseline
                     Female                        0.53               0.57                       0.59                 0.34      0.06
Race/Ethnicity       White, Non-Hispanic           0.72               0.7                        0.77                 baseline  baseline
                     Black                         0.1                0.11                       0.07                 0.26      0.69
                     Hispanic                      0.09               0.04                       0.06                 0.09      0.4
                     Other                         0.09               0.14                       0.09                 0.15      0.98
Income               Quintile 1                    0.2                0.17                       0.17                 baseline  baseline
                     Quintile 2                    0.16               0.19                       0.18                 0.23      0.36
                     Quintile 3                    0.18               0.16                       0.19                 0.88      0.87
                     Quintile 4                    0.14               0.19                       0.2                  0.15      0.22
                     Quintile 5                    0.2                0.18                       0.16                 0.84      0.51
                     Missing                       0.11               0.12                       0.1                  0.34      0.92
Education            Quartile 1                    0.26               0.23                       0.27                 baseline  baseline
                     Quartile 2                    0.23               0.28                       0.24                 0.22      0.88
                     Quartile 3                    0.26               0.23                       0.26                 0.9       0.69
                     Quartile 4                    0.25               0.27                       0.23                 0.43      0.6
Party ID Strength    Independent                   0.15               0.14                       0.15                 baseline  baseline
                     Lean D/R                      0.17               0.2                        0.18                 0.24      0.75
                     Not very strong D/R           0.24               0.22                       0.23                 0.86      0.95
                     Strong D/R                    0.41               0.4                        0.4                  0.74      0.8
                     Missing                       0.04               0.04                       0.04                 0.8       0.54
Ideology Strength    Moderate                      0.35               0.32                       0.31                 baseline  baseline
                     Liberal/conservative          0.41               0.43                       0.44                 0.39      0.29
                     Very liberal/conservative     0.18               0.17                       0.18                 0.92      0.65
                     Missing                       0.06               0.08                       0.08                 0.27      0.31
Political Interest   0 (least interested)          0.05               0.05                       0.06                 baseline  baseline
                     1                             0.12               0.11                       0.12                 0.75      0.71
                     2                             0.28               0.28                       0.24                 0.67      0.47
                     3 (most interested)           0.54               0.55                       0.57                 0.7       0.95
                     Missing                       0.02               0.01                       0.01                 0.9       0.43
Age                  Quintile 1                    0.2                0.2                        0.2                  baseline  baseline
                     Quintile 2                    0.21               0.2                        0.19                 0.82      0.61
                     Quintile 3                    0.22               0.2                        0.19                 0.83      0.56
                     Quintile 4                    0.18               0.19                       0.23                 0.92      0.52
                     Quintile 5                    0.2                0.21                       0.2                  0.99      0.58
Recent mover         No                            0.78               0.79                       0.78                 baseline  baseline
                     Yes                           0.22               0.21                       0.22                 0.55      0.98
F. Treatment effects by the type of relationship elicited
Figure F1: The effect of the network participation prime remains consistent across types of relationships
[Figure F1 displays a dot-and-whisker plot of the marginal effect of the network participation prime on self-reported turnout (horizontal axis, 0.0 to 0.2), estimated separately for the “Friend or family”, “Coworker or casual acquaintance”, and “Someone like you” groups and for all three groups combined.]
Note: In the experiment, the name generator asked respondents to think of a voter or non-voter from one of the following three randomly-assigned groups: a “close friend or family member”, a “coworker or casual acquaintance”, or a “person like you”. This figure displays the estimated effect of the network participation prime for each type of relationship as well as the estimate when all three groups are combined. The dots and lines represent estimates and 95% confidence intervals from difference-in-means tests. These values represent the expected change in an individual's probability of reporting that they voted when asked to think of an associate who voted rather than an associate who abstained. The point estimates are substantively similar in each group, though the precision of the estimates is lower within each group than when combined due to reduced power. The data come from the 2016 post-election CCES module.
G. Attrition during the experiment
Of the 841 post-treatment respondents in the CCES module, 39 provided no response to one
or more of the dependent variables used in the experimental study. Tables G1-G3 examine
how this attrition was distributed across the randomly assigned variables. As we demonstrate
below, the effects reported in the main text are not sensitive to this attrition.
Attrition was greatest for responses to the question asking how disappointed the
associate elicited by the name generator would be to learn the respondent did not vote.
Table G1 suggests that the attrition was greater among respondents asked to name a non-
voter (n = 11) than among respondents asked to name a voter (n = 5). Given this difference,
we can investigate how non-response might bias the effect of the network participation prime
on perceived social pressure. The estimates in the main text omit the missing responses,
which relies on the strong assumption that the observations are missing completely at
random. Under this assumption, the treatment effect is .62 (95% CI = [.51, .73]; p < 0.001).
That is, respondents primed to think of an associate who voted expected this associate to be
about .62 more disappointed on the three-point scale than respondents primed to think of a
non-voter. To provide a lower bound on the estimate, we can instead impute the minimum
value of disappointment for all missing responses assigned to think of a voter. Likewise,
we can impute the maximum value of disappointment for all missing responses assigned
to think of a non-voter. These extreme assumptions bias the estimate toward the null, but
nonetheless suggest a substantively similar treatment effect of .57 (95% CI = [.46, .68];
p < 0.001). To estimate the upper bound due to missing values, we can instead impute
the maximum value of disappointment for the associate voted group and the minimum
value for the associate abstained group. Again this strong assumption leads to the
same substantive conclusion with a treatment effect of .64 (95% CI = [.54, .75]; p < 0.001).
Table G1: Number of cases missing (and percent missing) for each dependent variable across values of the network participation prime

                                        Treatment Value
DV                                      Non-Voter    Voter
Disappointment                          11 (2.7%)    5 (1.1%)
National Turnout Estimate               6 (1.5%)     4 (0.9%)
Network Turnout Estimate                9 (2.2%)     2 (0.5%)
Self-Reported Turnout                   1 (0.2%)     1 (0.2%)
Table G2: Number of cases missing (and percent missing) for each dependent variable across values of the national turnout cue

                                        Treatment Value
DV                                      No cue      37% cue     59% cue
Disappointment                          4 (1.3%)    6 (2.2%)    6 (2.2%)
National Turnout Estimate               4 (1.3%)    3 (1.1%)    3 (1.1%)
Network Turnout Estimate                3 (1%)      5 (1.8%)    3 (1.1%)
Self-Reported Turnout                   0 (0%)      1 (0.4%)    1 (0.4%)
Table G3: Number of cases missing (and percent missing) for each dependent variable by the type of relationship elicited

                                        Treatment Value
DV                                      Friend/Family   Coworker/Acquaintance   Person like you
Disappointment                          5 (1.8%)        4 (1.5%)                7 (2.4%)
National Turnout Estimate               3 (1.1%)        3 (1.1%)                4 (1.4%)
Network Turnout Estimate                4 (1.4%)        1 (0.4%)                6 (2%)
Self-Reported Turnout                   0 (0%)          0 (0%)                  2 (0.7%)
The network turnout estimate also has greater non-response among the associate
abstained group. As we show in the main text, omitting missing cases leads to a near-zero
treatment effect equal to -0.8 on the 101-point scale (95% CI = [−3.2,1.6]; p = 0.51).
Again, even the most extreme assumptions about the true values of these cases leads to the
same conclusion. If we impute these cases to create an upper bound on the effect as above,
the estimated effect is 0.4 (95% CI = [−2.1, 3.0]; p = 0.73). And if we impute the cases to
estimate the lower bound, the estimated effect is -2.0 (95% CI = [−4.5,0.5]; p = 0.12).
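The bounding exercise itself is straightforward to reproduce. A minimal sketch in R (the data frame d with columns prime and disappointment is hypothetical; not the authors' code):

    # Impute extreme values for missing outcomes in each treatment arm and
    # recompute the difference in means between the two prime conditions
    bound <- function(d, lower = TRUE) {
      y <- d$disappointment                      # three-point scale
      miss <- is.na(y)
      if (lower) {                               # bias the effect toward zero
        y[miss & d$prime == "voter"]    <- min(y, na.rm = TRUE)
        y[miss & d$prime == "nonvoter"] <- max(y, na.rm = TRUE)
      } else {                                   # bias the effect away from zero
        y[miss & d$prime == "voter"]    <- max(y, na.rm = TRUE)
        y[miss & d$prime == "nonvoter"] <- min(y, na.rm = TRUE)
      }
      mean(y[d$prime == "voter"]) - mean(y[d$prime == "nonvoter"])
    }

    bound(d, lower = TRUE)   # lower-bound estimate
    bound(d, lower = FALSE)  # upper-bound estimate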
Supporting Information References
Berent, Matthew K., Jon A. Krosnick and Arthur Lupia. 2016. “Measuring Voter Registration and Turnout in Surveys: Do Official Government Records Yield More Accurate Assessments?” Public Opinion Quarterly 80(3):597–621.

Berinsky, Adam J. 2007. Survey Non-Response. In The SAGE Handbook of Public Opinion Research, ed. Wolfgang Donsbach and Michael W. Traugott. Thousand Oaks, CA: SAGE pp. 309–321.

Keeter, Scott, Courtney Kennedy, Michael Dimock, Jonathan Best and Peyton Craighill. 2006. “Gauging the Impact of Growing Nonresponse on Estimates from a National RDD Telephone Survey.” The Public Opinion Quarterly 70(5):759–779.

King, Gary, James Honaker, Anne Joseph and Kenneth Scheve. 2001. “Analyzing Incomplete Political Science Data: An Alternative Algorithm for Multiple Imputation.” American Political Science Review 95(1):49–69.