Verifiable Communication on Networks∗
German Gieczewski†
April 2020
Abstract
This paper models the diffusion of verifiable information on a network pop-
ulated by biased agents. Some agents, who are exogenously informed, choose
whether to inform their neighbors. Informing a neighbor affects her behavior, but
also enables her to inform others. Agents cannot lie; they can, however, feign
ignorance. The model yields three main conclusions. First, unless a large set
of agents is initially informed, learning is incomplete. Second, moderate signals
are more likely to propagate than extreme signals. Third, when agents are
forward-looking, concerns about informational cascades lead to an endogenous
division of the population into like-minded groups that do not communicate
with each other.
JEL codes: D83, D85
Keywords: Verifiable information; Networks; Social learning; Strategic commu-
nication; Information cascades
1 Introduction
The Internet has given us unprecedented access to endless sources of information,
and has also made it easier to share this information with others through social net-
∗I am thankful to Daron Acemoglu, Glenn Ellison, Drew Fudenberg, Robert Gibbons, Catherine Hafer, Matías Iaryczower, Mihai Manea, Jacopo Perego, Juuso Toikka, Robert Townsend, Alex Wolitzky, Muhamet Yildiz and seminar participants at MIT for helpful comments and suggestions. Declarations of interest: none.
†Department of Politics, Princeton University. Address: 311 Fisher Hall, Princeton NJ 08540, USA. Phone number: 6172330642. E-mail: [email protected].
works. Nevertheless, wild differences of opinion persist around politically or emotion-
ally charged topics, even those for which definitive evidence-based answers are avail-
able. For example, significant minorities of Americans believe that climate change is
not real or not man-made (Leiserowitz et al., 2019; Howe et al., 2015; Funk et al.,
2015); that vaccines (Moore, 2015; Omer et al., 2012) or bio-engineered foods (Funk
et al., 2015) are dangerous; or that Barack Obama was not born in the United States
(Morales, 2011; Jardina and Traugott, 2019). These and similar examples bear out
two stylized facts. First, beliefs about charged issues are partly driven by political
affiliation or other “deep” preferences, with self-identified liberals and conservatives
often having divergent views (Dimock et al., 2015). Second, misconceptions about
a substantive issue are accompanied by misconceptions about the existing body of
evidence: the number of climate change skeptics is similar to the number of people
who believe that scientific opinion is divided on the issue (Funk et al., 2015). Thus,
while biases may play a role, failures to converge to a common truth appear to be tied
to breakdowns in the transmission of information. If such breakdowns are possible
even in these stark examples, the hope of reaching a consensus is even weaker when
it comes to politically charged debates that have no clear-cut answer—for instance,
the effects of minimum wage increases, healthcare reform, or gun control.
In this paper, I propose motivated communication as a cause of persistent dis-
agreement in social discourse, and aim to characterize the extent to which it can
limit the diffusion of information. I define motivated communication as the act of
strategically using valid evidence to support one’s positions, even if those positions
were chosen for self-serving reasons. For instance, a climate change skeptic may cite
scientific studies—perhaps outliers—to support her claims, but her position may be
driven by the fact that she works for an oil company.
More concretely, this paper studies a model of information diffusion underpinned
by the following broad assumptions, motivated by our setting: there is a large popu-
lation of agents connected by a social network. Information is verifiable, but initially
only held by some. Agents have differing biases (which, however, do not prevent them
from discussing and sharing objective evidence if they wish to do so). When deciding
what information to share with their acquaintances, they are driven by the desire to
bring the views of others into alignment with their own.
Here is an example that closely fits the model. The citizens of a country in
political turmoil share news on an online social network. Citizens differ in their
ulterior motives: some are connected with the ruling party or are favored by its
policies, while others are routinely mistreated. In addition, citizens may have hard
information about the quality of the regime (e.g., a link to a video showing the
president taking bribes) acquired directly from Nature—say, from reading a news
website—or from other agents. Citizens have random opportunities to talk to others,
and decide what links to share; a link that is shared can, in turn, be passed on by
the recipient. Citizens want to bias others towards their own views, either for public
or private reasons—for example, to topple the government, or so that they can spend
time with their friends at a rally.
Since information consists of links to a verifiable source, agents cannot lie, but
their biases tempt them to lie by omission, that is, to only share evidence that bolsters
their position. For example, government supporters will avoid mentioning corruption
scandals even if they know the allegations to be true. This self-censorship stifles
the free diffusion of information. Moreover, in the face of this behavior, the beliefs
of biased agents who remain ignorant will end up reinforcing their bias: a dissident
will think her position vindicated if she meets a government supporter who fails to
produce evidence supporting the government.
The model yields three main insights. The first one concerns the extent of in-
formation diffusion—that is, how close the population comes to learning the state
of the world. I show that full diffusion is in general possible only when the set of
initially informed agents is diverse enough. More precisely, if the true state of the
world is high, then the truth spreads to all agents only if some very high-bias agents
are initially informed, and vice versa.
The second concerns the kind of information that proliferates. I show that moder-
ate signals—roughly speaking, signals close to the mean of the distribution—are more
likely to spread. More precisely, if the state of the world is symmetrically distributed
around 0, with support [−1, 1], there is an interval I = (−ε, ε) ⊂ [−1, 1] such that, if
the state of the world is in I, then everyone will learn this as long as any one agent
is initially informed. On the other hand, extreme signals outside I—paradoxically,
the ones that carry more valuable information—will remain less known. The logic
behind this result is that extreme signals are only communicated in a predictable
direction, which stifles their propagation: for instance, high signals are only shared
to lower-bias agents to increase their beliefs, but low-bias agents are unlikely to pass
them on further. On the other hand, moderate signals can cycle around the popula-
tion by traveling in both directions. This implies that disinformation is more likely
to persist in disputes that have conclusive answers entirely favoring one side. It also
means that the existence of moderate evidence promotes the diffusion of information; conversely, if all news is extreme, or if even balanced news can always be divided into small, good-or-bad nuggets to be shared selectively, communication becomes more difficult.
The third insight concerns the structure of information diffusion when learning is
incomplete. I show that, when the network is densely connected, failures of informa-
tion diffusion are driven by concerns about informational cascades. These concerns
lead to an endogenous division of the population into like-minded groups, or segments,
that communicate with each other only in one direction. The intuition here is that,
in a dense network—that is, a network in which every agent has a number of friends,
some with similar biases—myopic incentives would drive agents to share their infor-
mation with at least some neighbors in such a way that everyone would eventually
become informed. However, forward-looking agents understand the implications of
such behavior and, in equilibrium, choose not to inform neighbors who are close in
bias but who would then share information with the wrong group. Hence, in dense
networks, higher sophistication leads to less information transmission. In sparse net-
works, the effect of sophistication can go either way: forward-looking agents may fear
informational cascades, but they may also seek alliances of convenience, that is, they
may be willing to use a disliked neighbor as a conduit to reach a like-minded player
further down the network.
This paper is closely connected to the literature on strategic communication on
networks. Similarly to the present paper, the prototypical model in this literature
(Galeotti et al., 2013) considers biased agents on a network who choose whether to
communicate; each agent has a different bias and wants everyone’s actions to match
her own state-contingent bliss point. However, the most common variant of this
framework (Galeotti et al., 2013; Hagenbach and Koessler, 2010; Dewan et al., 2015;
Dewan and Squintani, 2018) assumes that communication is non-verifiable (cheap
talk) and that there is a single round of communication, that is, agents are not al-
lowed to pass on signals that other players have shared with them. In contrast, in
this paper, agents exchange hard evidence and multiple rounds of communication are
possible. As a result, more information is transmitted in my setting: while cheap
talk is informative only when the difference between agents’ biases is small enough
(Crawford and Sobel, 1982), verifiable communication gives rise to one-sided reve-
lation strategies, whereby agents reveal convenient facts and hide the rest. More
importantly, the informational cascades that are at the heart of my results are, by
design, absent from the aforementioned papers.
Within this literature, the two papers closest to this one are Squintani (2019) and
Bloch et al. (2018). They both consider verifiable messages that can be re-shared,
but differ from this paper in other respects. Squintani (2019) aims to characterize
optimal and/or pairwise stable networks, and finds conditions under which these
networks are either a star or an ordered line. In contrast, I take the network as given
and characterize the equilibrium for a large class of networks. Another difference is
that, in Squintani (2019), there is a single decision-maker whose beliefs other agents
are concerned with; this assumption limits the complexity of informational cascades,
as there is no trade-off between informing wanted and unwanted agents. On the other
hand, in my model each agent cares about the beliefs of all other agents.
In Bloch et al. (2018), there are two types of players: unbiased types that want
to match the state, and biased types whose bliss point is independent of the state.
In contrast, I consider agents of differing biases who care equally about the state.
The main difference, however, lies in the communication protocol. In Bloch et al.
(2018), agents can share information with either every neighbor, or none of them;
and information-sharing strategies are static (that is, players cannot adjust their
messages depending on when they become informed, or who has shared information
with them). In contrast, I assume agents can share information with only some of
their neighbors, and their strategies are allowed to depend on the history (and they
typically do).
This paper is also related more broadly to a literature that advances alternative
explanations for the persistence of disagreement. For example, agents may interpret
information differently (Acemoglu et al., 2015); they may be unable to separate oth-
ers’ priors from their information (Sethi and Yildiz, 2012); or the acquisition and
transmission of information may be costly (Calvo-Armengol et al., 2015; Perego and
Yuksel, 2016).
Finally, in the two-player case, the communication game in this paper is similar to
existing models of communication with verifiable information (Milgrom and Roberts,
1986; Glazer and Rubinstein, 2006). A common result in this literature (Milgrom and
Roberts, 1986; Hagenbach et al., 2014) is that unraveling leads to full revelation. In
this paper, however, there is only partial revelation because the amount of information
held by the sender is uncertain, which allows a sender with “bad” information to
fake ignorance without paying too high of a reputational cost. The way in which I
model verifiable communication differs more markedly from the literature on Bayesian
persuasion (Gentzkow and Kamenica, 2011), in which the sender is allowed to commit
to a revelation strategy before learning the state. In my setting, this assumption
would be unnatural, as the sender necessarily knows her available information before
meeting the next receiver.
The paper proceeds as follows. Section 2 presents the model. Section 3 solves the
case of myopic agents as a benchmark. Section 4 analyzes the case of forward-looking
agents. Section 5 is a conclusion. Appendix A contains additional analysis of the
model in Section 4. Appendix B contains all the proofs.
2 The Model
Preliminaries
There is a set N = {1, . . . , n} of players connected by an undirected network G ⊆ N × N. Each agent i has a type bi ∈ R denoting her bias. Without loss of generality, we assume b1 ≤ . . . ≤ bn. We denote the set of i's neighbors by Ni.
There is a state of the world θ that is fixed over time, and distributed according
to a c.d.f. H with support contained in [−1, 1] and mean 0. At the beginning of the
game, Nature draws θ and shows it to a subset S of players with probability γ(S),
where γ : 2^N → [0, 1] is a probability distribution. We denote by γ0 = 1 − γ(∅) the probability that Nature informs a non-empty set of players, and assume γ0 ∈ (0, 1). The assumption that γ0 < 1 guarantees that full unraveling in the style of
Milgrom and Roberts (1986) cannot obtain, as the players do not infer too much
from being uninformed. It can be taken as reflecting a world in which opportunities
to communicate strategically with neighbors do not arrive too often and are thus
somewhat unexpected.
Actions
After Nature informs the players in S, a game is played over discrete and infinite
time: t = 0, 1, . . .. In each period t, agent i chooses an action ait. i's utility function is

Ui(θ, ai, a−i) = −∑_{t=0}^{∞} δ^t [ ∑_{j∈N} (ajt − θ − bi)² ].
This functional form, which is common in the literature, ensures that i’s optimal
action at time t is the expected value of θ + bi given her information; and, moreover,
that i wants to induce each player j ≠ i to play according to i's own bias, that is, i
wants to induce ajt = θ + bi, whereas j would choose ajt = θ + bj if fully informed.1
The discount factor 0 < δ < 1 is the same for all agents.
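To see the claimed optimum concretely: minimizing the per-period loss E[(a − θ − bi)²] over a yields a* = E[θ] + bi. The following is a quick numerical sketch; the uniform prior and the bias value 0.4 are arbitrary illustrative choices, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Arbitrary illustration: theta ~ U[-1, 1], agent bias b_i = 0.4.
theta_draws = rng.uniform(-1.0, 1.0, size=200_000)
b_i = 0.4

# The per-period loss E[(a - theta - b_i)^2] is minimized at a = E[theta] + b_i.
grid = np.linspace(-1.5, 1.5, 601)
losses = [float(np.mean((a - theta_draws - b_i) ** 2)) for a in grid]
a_star = grid[int(np.argmin(losses))]
print(a_star)  # close to E[theta] + b_i = 0.4
```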
Communication
In each period t, each informed agent i has a probability α/|Ni| of meeting each of her neighbors j ∈ Ni, and a probability 1 − α of having no meetings, where α ∈ (0, 1).2
When i meets j, i decides whether to share θ or share nothing, potentially as a
function of her private history: m(θ, i, j, t, hti) = 1 or 0, where 1 denotes a message
that reveals θ and 0 denotes no message. If j receives θ she learns θ as well as the
fact that i sent the message at time t, but if i shares nothing, j does not know that a
meeting has occurred. In particular, meetings are one-way (when i has an opportunity
to talk to j, j does not necessarily have an opportunity to talk to i). We also allow i
to meet j even if j is herself already informed.
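The meeting process can be simulated directly. The sketch below assumes the per-neighbor meeting probability is α/|Ni|, so that together with the no-meeting probability 1 − α the outcomes form a single categorical draw; the function name and interface are illustrative, not from the paper.

```python
import random

def sample_meeting(i, neighbors, alpha, rng=random):
    """Draw informed agent i's (at most one) one-way meeting this period.

    Assumption for this sketch: i meets each neighbor j with probability
    alpha/|N_i| and has no meeting with probability 1 - alpha.
    """
    if not neighbors:
        return None
    p_each = alpha / len(neighbors)
    outcomes = list(neighbors) + [None]          # None = no meeting
    weights = [p_each] * len(neighbors) + [1 - alpha]
    return rng.choices(outcomes, weights=weights, k=1)[0]

# Example: with alpha = 0.3 and two neighbors, each is met w.p. 0.15.
rng = random.Random(1)
draws = [sample_meeting(1, [2, 3], 0.3, rng) for _ in range(20_000)]
```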
Intuitively, we can imagine i having a one-way meeting with j to mean that i has
“logged on” to the social network and considered messaging j, but j is unaware of
this unless he gets a message. In other settings (for example, face-to-face meetings)
it would be more natural to assume that meetings are two-way. The only difference
this would introduce is that j’s posterior would jump when he meets another player
i who sends no message, but would stay constant otherwise. In contrast, with one-
way meetings, posteriors evolve independently of when meetings happen (since j’s
posterior under ignorance responds to the expected number of meetings in which
she’s been the receiver), which simplifies the analysis.
Finally, we assume that actions ait are not observable by other players and pay-
offs are not observable, so that j can only learn from i through i’s messages. The
1The assumption that i's own action and the actions of other players have the same weight in i's utility function is immaterial to the results. Heterogeneous weights over the actions of other players can be incorporated into the model without complications.
2Allowing the probabilities of meetings to vary across links does not substantially alter the results.
ramifications of learning from other players’ behavior (Banerjee, 1992) or outcomes
(Wolitzky, 2018) have been studied elsewhere; they are orthogonal to the point of
this paper.
Timing
The game proceeds as follows. Before period 0, Nature determines the value of θ and
informs a set of players S. Then, at each time t ≥ 0, random meetings are decided;
communication takes place; and players choose their actions ait.
Equilibrium Concept
Our solution concept is sequential equilibrium. We denote a general strategy profile
by σ = (a(θ, i, t),m(θ, i, j, t, hti)), where a(θ, i, t) denotes the action chosen by agent i
at time t, given her observation of θ ∈ [−1, 1]∪{∅} (θ = ∅ denotes that i is uninformed
at time t);3 and m(θ, i, j, t, hti) is the probability that i shares her information with
agent j, conditional on her observation of the state θ and her history hti, with the
restriction that m(∅, ·) ≡ 0, i.e., no message is sent if i is uninformed.4
We denote the beliefs held by an agent of type i at a history hti by µ(i, t, hti). Since
we will often be interested mostly in i’s beliefs about θ, rather than about the entire
history of play, we also define p(θ, i, t) as the probability that player i is informed at
the end of period t, if the true state is θ. (In general p(θ, i, t) depends on θ because
the players’ communication strategies condition on the state.) We denote the limit
of p(θ, i, t) as t→∞ by p(θ, i). In addition, we denote i’s expectation about θ under
ignorance by θ(i, t), i.e.,

θ(i, t) = [∫_{−1}^{1} θ (1 − p(θ, i, t)) dH(θ)] / [∫_{−1}^{1} (1 − p(θ, i, t)) dH(θ)],

and we denote its limit as t → ∞ by θ(i).5
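For concreteness, θ(i, t) can be computed from this formula by discretizing H. The sketch below uses a uniform prior and a made-up informedness profile p(θ, i, t) (the assumption that i would have learned high states with probability 0.5 is purely illustrative).

```python
import numpy as np

def mean_posterior_under_ignorance(theta_grid, p, density):
    """theta(i, t): i's expectation of theta conditional on being uninformed,
    i.e. the ratio of the integral of theta*(1 - p) dH to that of (1 - p) dH,
    approximated on a grid."""
    w = (1.0 - p) * density
    return float(np.sum(theta_grid * w) / np.sum(w))

# Illustrative inputs (not from the paper): theta ~ U[-1, 1], and suppose
# player i would have learned high states with probability 0.5 by time t.
grid = np.linspace(-1.0, 1.0, 4001)
p = np.where(grid > 0.0, 0.5, 0.0)
h = np.full_like(grid, 0.5)                     # U[-1, 1] density

theta_bar = mean_posterior_under_ignorance(grid, p, h)
print(theta_bar)  # negative: staying uninformed signals a low state
```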
3In principle ait could depend on hti, but it will depend only on θ, i and t in any equilibrium, since ait is not observed by anyone else, and (θ, i, t) is a sufficient statistic for i's posterior.
4Unlike ait, mit may depend on hti, since hti may contain information about who else is informed.
5p(θ, i) is well-defined since p(θ, i, t) ≤ 1 is weakly increasing in t. θ(i) is pinned down by Bayes' rule due to the assumption that γ0 < 1, which guarantees that player i remains uninformed with positive probability (at least 1 − γ0).
3 Benchmark: Myopic Preferences
As a prelude to analyzing the full model, this Section solves the case of myopic
preferences (δ = 0) as a benchmark. The equilibrium analysis is much simpler in this
special case: under myopic preferences, when i meets j, i cares only about j’s current
action, as opposed to the current and future actions of all other players. Hence, i
shares θ with j at time t if and only if doing so increases Eit(−(θ + bi − ajt)²|θ, hti).

There are two reasons for studying the myopic case. Firstly, the assumption of
myopic preferences is descriptively accurate in some situations. Agents may care
about the private effects of actions with public consequences: for example, if i and j
are parents with children in the same school, i may want j to vaccinate his children so
that i’s children are safe while they are near j’s. Similarly, in the context of a protest, i
may want to change j’s mind so that j will attend a rally with i, and care more about
the consumption value of the rally than its political consequences. Alternatively,
agents may be truly myopic and fail to anticipate the long-term consequences of
their messages; this is a real concern in such a complex model. Finally, in practice,
communication is not always instrumental—instead, agents may be simply driven by
a desire to “win arguments”.
Secondly, the myopic case serves as an instructive bridge between my setting and
the extant literature on strategic communication on networks, most of which assumes
a single round of communication. Indeed, in a model with myopic preferences and
multi-round communication, informational cascades are possible but the players do
not take them into account—rather, they behave as if the current round of commu-
nication were the last.
Proposition 1 characterizes equilibrium behavior in the case of myopic preferences.
Proposition 1 (Myopic Equilibrium). If δ = 0, γ0 < 1 and α is low enough, there
is a unique equilibrium. In it,
(i) ait = θ + bi if i is informed at time t, and ait = θ(i, t) + bi otherwise.
(ii) If i is informed, m(θ, i, j, t) = 1 iff θ > θ(j, t) and 2(bj − bi) < θ − θ(j, t), or
θ < θ(j, t) and 2(bi − bj) < θ(j, t)− θ.
Part (i) follows directly from the definition of the players’ utility functions: each
agent i optimally matches her action ait to the state of the world (or her expectation
of it, if uninformed) plus her bias. Part (ii) reflects the following intuition: if i can
talk to j at time t, she faces a choice between revealing the true θ and letting j retain
his posterior under ignorance, which has mean θ(j, t). If bi > bj, i will want to reveal θ
precisely when doing so increases j’s posterior. But θ may also be shared when it has
the opposite effect if |bi− bj| is small relative to |θ− θ(j, t)|: in that case, i’s incentive
to bias j is trumped by her desire to prevent him from choosing an action badly
mismatched with the state of the world. Finally, taking α to be low enough serves
purely to guarantee equilibrium uniqueness, by ensuring that each player’s mean
posterior under ignorance θ(i, t) is not too sensitive to the communication strategies
of other players in period t.
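The sharing rule of Proposition 1(ii) can be transcribed directly as a predicate. Here theta_bar_j stands for θ(j, t), and the numerical examples below are illustrative values, not the paper's.

```python
def shares(theta, b_i, b_j, theta_bar_j):
    """Myopic sharing rule of Proposition 1(ii): informed i reveals theta to j
    iff the shift it induces in j's action, net of the bias gap, helps i."""
    if theta > theta_bar_j:
        return 2 * (b_j - b_i) < theta - theta_bar_j
    if theta < theta_bar_j:
        return 2 * (b_i - b_j) < theta_bar_j - theta
    return False  # revealing would not move j's mean posterior

# A higher-bias sender gladly reveals a high signal to a lower-bias receiver...
print(shares(theta=0.8, b_i=1.0, b_j=0.0, theta_bar_j=0.0))   # True
# ...but a low-bias sender hides it from a much higher-bias receiver.
print(shares(theta=0.8, b_i=0.0, b_j=1.0, theta_bar_j=0.0))   # False
```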
Corollary 1 characterizes the extent of information transmission resulting from
these equilibrium strategies, in two instructive cases. We say the population N has
small gaps if bi+1 − bi < (1 − γ0)/(2 − γ0) for all i (consecutive agents have similar biases), and large gaps if bi+1 − bi > 1 for all i (consecutive agents have very different biases).
Then
Figure 1: p(θ, i, t) on the equilibrium path for n = 4, i = 2, θ ∼ U[−1, 1], δ = 0, γ0 = 0.75, α = 0.3, and large gaps
[Figure: p(θ, 2, t) for successive t, converging to p(θ, 2, ∞), plotted against θ ∈ [−1, 1], with vertical markers at θ(1) and θ(4).]
Corollary 1. Assume (N, G) is a complete network and that γ({i}) = γ0/n for all i, i.e., Nature informs a single randomly chosen player.

If N has small gaps, and the state is binary (P(θ = −1) = P(θ = 1) = 0.5), then there is full diffusion: p(θ, i) = γ0 for all θ, i.
Figure 2: Posterior beliefs for i = 1, 2, 3, 4
(a) p(θ, i, ∞): [Figure: limiting learning probabilities p(θ, j, ∞) for j = 1, . . . , 4, plotted against θ, with markers at θ(1) and θ(4).]
(b) θ(i, t): [Figure: posteriors under ignorance θ(j, t) over time t, converging to the limits θ(1), . . . , θ(4).]
If N has large gaps, then:

(i) There is full diffusion for moderate states: p(θ, j) = γ0 for θ ∈ (θ(1), θ(n)).

(ii) There is partial diffusion for extreme states: p(θ, j) = (j/n)γ0 for θ < θ(1), and p(θ, j) = ((n − j)/n)γ0 for θ > θ(n).
The intuition behind Corollary 1 is as follows. If the gaps between consecutive
players are small enough, then for every player i her closest neighbors—in either
direction—are close enough that she would like to inform them no matter what the
state is. But the receivers will also find close neighbors to talk to, and so on. Thus
information spreads through the whole network.
In contrast, in the case of large gaps, information is only shared in a self-serving
fashion: i > j only informs j in order to increase her posterior belief, and vice versa.
As a result, both partial and full diffusion are possible, depending on the state. Figure
1 illustrates this in a 4-player example. The graph shows player 2’s probability of
learning the state over time. This probability is always increasing, but converges to
different values depending on θ. If θ is low, only player 1 will tell her; if θ is high,
only players 3 and 4 will tell her; if it is intermediate, she will be told no matter who
is initially informed.
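The limiting diffusion pattern can be computed as reachability under the myopic rule, holding the posteriors under ignorance fixed at their limits. The biases and θ(j) values below are made up for illustration (large gaps, with θ(j) increasing in j); they are not the paper's calibration.

```python
def eventually_informed(seed, theta, biases, theta_bar, edges):
    """Players who eventually learn theta under the myopic sharing rule of
    Proposition 1(ii), holding each player's posterior under ignorance fixed
    at an assumed long-run value theta_bar[j] (an approximation)."""
    def shares(i, j):
        d = theta - theta_bar[j]
        if d > 0:
            return 2 * (biases[j] - biases[i]) < d
        if d < 0:
            return 2 * (biases[i] - biases[j]) < -d
        return False

    informed, frontier = {seed}, [seed]
    while frontier:
        i = frontier.pop()
        for j in edges[i]:
            if j not in informed and shares(i, j):
                informed.add(j)
                frontier.append(j)
    return informed

# Illustrative large-gaps example (made-up numbers): complete network on
# 4 players, biases one unit apart, limiting posteriors increasing in j.
biases = {1: 0.0, 2: 1.0, 3: 2.0, 4: 3.0}
theta_bar = {1: -0.3, 2: -0.1, 3: 0.1, 4: 0.3}
edges = {i: [j for j in biases if j != i] for i in biases}

# A high state seeded at player 2 travels only downward:
print(sorted(eventually_informed(2, 0.9, biases, theta_bar, edges)))  # [1, 2]
```

A moderate state seeded at the same player reaches everyone, matching the full-diffusion case of Corollary 1(i).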
Figure 3: Myopic transmission with large gaps
[Figure: three diagrams of players 1–4 showing which players become informed when θ < θ(1), θ ∈ (θ(1), θ(n)), and θ > θ(n).]
The transmission of moderate information in the case of large gaps is enabled by
the following fact: over time, the posteriors under ignorance of extreme types come to
reinforce their biases. Formally, θ(i, t) is increasing in i; increasing in t for high-bias
players; and decreasing in t for low-bias players, as shown in Figure 2b. The reason is
that a low-bias player, for instance, expects others to try and bias her upwards, and
so expects to learn the state when it is high (Figure 2a). If she is not informed, she
will guess that θ is low and that is why no one told her.
In particular, this means that n’s posterior under ignorance in the long run is
higher than 1’s: θ(n) > θ(1). When θ lies between these two values, every player
wants to share θ with both 1 and n, in order to moderate their actions. And once 1
and n are both informed, they inform everyone else: for any player i between them,
either 1 or n will be willing to tell i. This is represented diagrammatically in Figure 3.
Finally, it is worth noting that the variability of posterior beliefs increases with
information abundance: the higher γ0 is, the more uninformed players can infer from
their lack of information, which prompts other players to share more values of θ. As
a result, full diffusion is attained in the limit as γ0 → 1, due to an argument closely
related to the classic unraveling result under verifiable communication (Milgrom and
Roberts, 1986). This is the subject of Corollary 2.
Corollary 2. Assume (N, G) is a complete graph. Then, as γ0 → 0, p(·, i, ∞) → 0 in the sup norm ‖·‖∞ and θ(i) → 0 for all i. Conversely, if the c.d.f. of θ has full support in [−1, 1], then as γ0 → 1, p(·, j, ∞) → 1 in the L1 norm ‖·‖1.
4 Forward-Looking Players
Informational Cascades
We now consider the case of forward-looking agents who care about the future actions
of the entire population. Two new considerations appear, both related to informa-
tional cascades. First, an agent i may want to inform j, even if this negatively affects
i’s payoff from j’s action, so that j can then inform a third player k. Second, i may
want to hide information from j, even when it improves j’s action, to prevent j from
talking to k.
These incentives are illustrated in Figures 4 and 5. In Figure 4, we assume that
(b1, b2, b3) = (0, 1, 1.6), and γ({2}) = γ0 > 0 is small (in other words, information is
scarce, which guarantees that posteriors under ignorance are close to 0; and only 2
may be initially informed). Suppose θ = 1 is realized and 2 is informed. Clearly, if 3
becomes informed, he will inform 1. While 2 has a myopic incentive not to inform 3,
she does want to inform 1, and the latter incentive dominates. Hence, 2 tells 3.
Figure 4: Informational cascades encourage communication
[Figure: line network 2 — 3 — 1.]
In Figure 5, we assume that (b1, b2, b3) = (0, 0.4, 0.8); γ({1}) = γ0 is small; and
θ = 1 is realized. Then agent 1 has a slight incentive to inform 2, and 2 has a slight
incentive to inform 3, but 1 is strongly opposed to 3 becoming informed. Hence, if 1
were myopic, she would talk to 2; but, being far-sighted, she chooses not to.
Figure 5: Informational cascades discourage communication
[Figure: line network 1 — 2 — 3.]
These examples show that informational cascades may result in either more or less
communication. However, there is an asymmetry between the two. In Figure 4, the
result depends on 2 not being linked to 1 (as otherwise 2 could talk to 1 directly and
bypass 3); in Figure 5, the result is robust to adding links. This insight applies more
broadly. In a sparse network, the first motive can dominate, as in order to reach far-
away agents a player may be forced to use undesirable neighbors as intermediaries. On
the other hand, in a dense network, the first motive vanishes (as direct communication
is possible) but the second remains, hence discouraging communication.
In arbitrary networks, the set of equilibria of the communication game with
forward-looking players may be very complex. For instance, there may be multiple
equilibria that are Pareto-ordered; multiple equilibria that are not Pareto-ordered;
equilibria in which two informed players want to be the first to send a message (pre-
emption), or the last (war of attrition); equilibria in which a player’s communication
strategy depends, even in the long run, on who she has been informed by; and fi-
nally, there may not be a pure strategy equilibrium. A detailed discussion of these
possibilities is found in Appendix A.
What I show next is that, on a large class of networks, much of this complexity
can be ruled out. Indeed, a reasonable condition on the network structure (well-connectedness), combined with assumptions about the communication protocol (mutual observability and an activity rule), guarantees the existence of a natural equilibrium with several appealing properties. Briefly, this equilibrium is coalition-proof
in a certain sense; it is robust to small changes in the network structure; and the
extent of information diffusion it induces is uniquely determined and admits a simple
description.
Communication in Well-Connected Networks
We begin with some terminology. Given an interval I = [a, b], an interval (NI , GI)
of the graph (N, G) is the induced subgraph with vertex set NI = {i ∈ N : bi ∈ I}. Given a network (N, G) and a value of γ0, we say (N, G) is well-connected if all its intervals of size (1 − γ0)/(2 − γ0) or higher are connected.
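Well-connectedness is easy to check mechanically for a finite network. In the sketch below, players are indexed 0, . . . , n − 1 in order of (distinct) biases, and a contiguous run of players must induce a connected subgraph whenever some bias interval of length at least (1 − γ0)/(2 − γ0) picks out exactly that run; the function and encoding are illustrative, not from the paper.

```python
import math

def is_well_connected(biases, edges, gamma0):
    """Check that every bias interval of length >= (1 - gamma0)/(2 - gamma0)
    induces a connected subgraph. `biases` is sorted; `edges` maps each
    player index to its neighbor list."""
    T = (1 - gamma0) / (2 - gamma0)
    n = len(biases)

    def connected(nodes):
        nodes = set(nodes)
        start = next(iter(nodes))
        seen, stack = {start}, [start]
        while stack:
            u = stack.pop()
            for v in edges.get(u, ()):
                if v in nodes and v not in seen:
                    seen.add(v)
                    stack.append(v)
        return seen == nodes

    for a in range(n):
        for b in range(a, n):
            lo = biases[a - 1] if a > 0 else -math.inf
            hi = biases[b + 1] if b + 1 < n else math.inf
            # The run a..b is the node set of some interval of length >= T
            # exactly when there is room to stretch the interval that far
            # without capturing the adjacent players.
            if hi - lo > T and not connected(range(a, b + 1)):
                return False
    return True
```

For example, with γ0 = 0.5 (so the threshold is 1/3) a four-player line network on biases 0, 0.4, 0.8, 1.2 is well-connected, while deleting the middle link breaks the condition.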
Next, we define a class of strategy profiles that the equilibrium of the game will
belong to. For ease of exposition, assume a binary state of the world, with P (θ =
−1) = P (θ = 1) = 0.5.
A sequence of bias thresholds b1 < b2 < . . . < bl in [b1, bn] partitions the population N into segments M1, M2, . . . , Ml+1, where Mi = N[bi−1, bi).6 A strategy profile σ is segmented if there are two such threshold sequences b1 < . . . < bk and b1 < . . . < bk, defining respective partitions M1, . . . , Mk+1 and M1, . . . , Mk+1, such that:
(i) if θ = −1 and min(S) ∈ M i, then all the players in M i and higher segments
6Implicitly we have taken b0 = −∞, bl+1 =∞.
eventually become informed w.p. 1, and all the players in lower segments remain
uninformed.
(ii) if θ = 1 and max(S) ∈ M j, then all the players in M j and lower segments
eventually become informed w.p. 1, and all the players in higher segments
remain uninformed.
The essence of a segmented strategy profile is that the players are split into groups
according to their biases. Groups always share information internally, but information
only travels between groups in a single direction: when the state is high, higher-bias
groups only talk to lower groups, and the opposite happens if the state is low. This
is illustrated in Figure 6. There, if θ = 1 and 6 is initially informed, information
spreads in 6’s segment and the one below it, but not the one above it.
Figure 6: Outcome of a segmented strategy profile with S = {6} and θ = 1
[Figure: players 1–10 partitioned into three segments M1, M2, M3; information from player 6 spreads within her own segment and the one below it, but not the one above it.]
We call a segmented strategy profile natural if, for θ = 1 and each i ∈ M̄l, i
prefers that all the members of M̄l above her be informed (compared to none of them
being informed), and for any higher segment M̄l′ (l′ > l) prefers that no members of
M̄l′ be informed (compared to all being informed); plus the analogous condition for
θ = −1.
We now make two minor additions to the game defined in Section 2. We say a
communication game with mutual observability is one in which, at all times, informed
players automatically observe the set of other informed players in addition to knowing
θ;7 and a communication game with an activity rule is one in which, if all the informed
7 Note that, even without the explicit assumption of mutual observability, players still make
players have had at least K opportunities to communicate with each of their neighbors
and have declined to inform any new players, all meetings stop forever.
Proposition 2 provides a characterization of the equilibria of the communication
game with forward-looking players, mutual observability and an activity rule.
Proposition 2. Suppose that (N,G) is well-connected. Then generically,8 if γ0 is
low enough and δ is high enough,
(i) every equilibrium is natural segmented, and all equilibria have the same segments
M̲1, . . . , M̲k+1 and M̄1, . . . , M̄k+1.
(ii) The threshold sequences satisfy b̲i+1 − b̲i ≥ (1 − γ0)/(2 − γ0) and
b̄i+1 − b̄i ≥ (1 − γ0)/(2 − γ0) for all i.
In addition, if the biases b1, . . . , bn are roughly uniformly distributed in [b1, bn],
then b̲ ≈ (b1 + 1, b1 + 2, . . .) and b̄ ≈ (. . . , bn − 2, bn − 1).9
(iii) Holding b1, . . . , bn constant, any other well-connected network (N,G′) has a nat-
ural segmented equilibrium with the same segments as (N,G).
All natural segmented equilibria partition the population in the same way, and as
a result they are all approximately payoff-equivalent. Moreover, the partition they
induce can be constructed by means of the following algorithm. Suppose that θ = 1
and n is informed. Clearly n wants to tell everyone; if the network has enough links,
everyone becomes informed in the long run. Now suppose that n−1 is informed. n−1
wants to inform all the lower-bias players, and also wants to inform n if bn − bn−1 is
small enough. Hence everyone becomes informed.
If we look successively at lower-bias players, eventually some n′ < n is reached that
would rather not tell n. However, n′ understands that telling anyone in {n′ + 1, . . . , n}
results in everyone learning, so the choice is whether or not to tell the whole group.
Since n′ is still willing to tell most members of the group, everyone learns. Eventually,
we reach some n′′ < n′ that would rather not tell {n′′ + 1, . . . , n}. The set {n′′ + 1, . . . , n}
then becomes the highest segment. Now n′′ − 1 is happy to inform n′′ because she
knows that information will not flow upwards from n′′, so the process restarts.
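The scan just described can be written as a short routine using the net flow payoff ∆U(1, i, j) = −(bj − bi)² + (bj − bi − 1)² from the proof of Proposition 2 in Appendix B (valid for γ0 ≈ 0). The routine and the midpoint placement of each threshold are our illustrative choices:

```python
def high_state_thresholds(biases):
    """Top-down scan for the theta = 1 segment boundaries (gamma0 ~ 0).
    Walking down from the highest-bias player, keep extending the current top
    group while each newly scanned player l still gains, on net, from informing
    everyone above her in the group; when the net gain first turns negative,
    place a segment boundary there and restart the scan below it."""
    def dU(bi, bj):          # i's net flow payoff from j learning theta = 1
        return -(bj - bi) ** 2 + (bj - bi - 1) ** 2
    b = sorted(biases)
    cuts = []
    top = len(b)             # exclusive upper end of the region being scanned
    l = top - 2
    while l >= 0:
        if sum(dU(b[l], b[j]) for j in range(l + 1, top)) < 0:
            cuts.append((b[l] + b[l + 1]) / 2)  # a generic point in (b_l, b_{l+1})
            top = l + 1
            l = top - 2
        else:
            l -= 1
    return sorted(cuts)
```

For instance, with biases (0, 0.4, 0.8, 1.2, 1.6) the scan closes off {1.2, 1.6} as the top segment and then {0.4, 0.8}, leaving the bottom player alone.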
inferences about the set of informed players: e.g., if i informs j, j knows i is informed. Moreover, informed players could share additional information through the timing of superfluous messages, i.e., they could have a "Morse code" that re-sends θ at specific times to indicate information about who else is informed, or manipulates the timing of the first message to the same effect.
8 The genericity is in the set of biases b1, . . . , bn: the statement holds except for a set of bias n-tuples of measure zero.
9 A rigorous statement is given in the Appendix.
These equilibria have two other important properties. First, they are coalition-
proof, in the following sense. Say all the informed players are members of a segment
Ml. By construction, these players have a common interest in informing all the lower
segments and none of the upper segments. In equilibrium, this is exactly what they
do. Second, as stated in part (iii) of the Proposition, the extent of information
diffusion is not sensitive to details of the network structure—it depends only on the
distribution of biases and the network being well-connected.
Finally, at this point we can draw a comparison between Propositions 2 and 1. The
clear takeaway is that agent sophistication limits information transmission: indeed,
although a well-connected network must have “small gaps” as defined in Section 3, full
diffusion does not obtain here if the distribution of biases is wide enough that there
are multiple segments in equilibrium. Instead, the results in Proposition 2 hew much
closer to the “large gaps” case, even when the ideological gaps between neighbors are
in fact small.
Figure 7: A pathological non-segmented equilibrium
[Figure shows seven agents, labeled 1–7, with a dividing line separating {1, 2, 3, 4} from {5, 6, 7}.]
Under some additional conditions, the existence of a natural segmented equilib-
rium can also be guaranteed in the communication game without mutual observability
or an activity rule (see Proposition 5 in Appendix A). However, other equilibria may
arise that are not segmented or not natural. An example of how this might come
about is given in Figure 7. Assume that γ({2, 3}) > 0 is small and γ(S) = 0 otherwise,
and θ = 1. The dividing red line denotes that {5, 6, 7} communicate with each
other, but {1, 2, 3, 4} would rather not share information with them. That is, in a
natural segmented equilibrium, {1, 2, 3, 4} and {5, 6, 7} are separate segments.
Without mutual observability or an activity rule, there may be a non-natural
equilibrium where 3 talks to 5 and 2 talks to 7, that is, {1, 2, . . . , 7} forms a single
segment. The reason is that, if 2 expects 3 to share information with the upper
segment, then 2’s own message to 7 has no marginal impact on long-run information
diffusion, and vice versa. Instead, such messages only have a short-term effect, which
may be positive: for instance, 3 expects that 7 will become informed through 2 sooner
than through the path 3 → 5 → 6 → 7, so talking to 5 mainly amounts to informing
{5, 6} a bit sooner, which may be desirable. Similarly, if 2 has not yet been able to
inform 1, she may inform 7 to speed up transmission to 1.
There may also be a non-segmented equilibrium if we instead assume γ({4}) > 0
in the same example. The reason is that, counterintuitively, if 4 is the only informed
player it may be optimal to inform nobody. Indeed, although a priori 4 should be
more willing to share information than 2 and 3, she knows that informing either of
them will trigger a continuation that is bad for all of them. Finally, note that these
equilibria are not ex post coalition-proof: {1, 2, 3, 4} would all prefer to move to a
natural segmented equilibrium, but cannot necessarily do so. In a sense, then, they
constitute a coordination failure.
Such pathological equilibria rely on the players’ uncertainty about who else is
informed, or about who other informed players will inform. Mutual observability
removes the first kind of uncertainty, while an activity rule removes the second by
allowing the players to “lock in” a desirable information distribution by staying silent
for a number of periods.
Diffusion of Moderate Signals
For simplicity, Section 4 has so far focused on the case of a binary state. Here
we briefly discuss the case of a continuous state, and recover results regarding the
diffusion of moderate signals reminiscent of those in Section 3.
Assume now that the c.d.f. of θ admits a density h symmetric around 0 and with
full support in [−1, 1], and that the distribution of biases is also symmetric.10
Proposition 3. Let (N,G) be a complete network. Then, for γ0 low enough and δ
high enough, the game has a natural segmented equilibrium described by thresholds
10 For brevity, we limit our discussion to the communication game with mutual observability and an activity rule.
b1(θ) < . . . < bkθ(θ) and associated segments M1(θ), . . . , Mkθ+1(θ), such that:
(i) For θ < 0, i becomes informed iff she is in min(S)’s segment or higher.
(ii) For θ > 0, i becomes informed iff she is in max(S)’s segment or lower.
(iii) There is ε > 0 such that for θ ∈ (−ε, ε) there is only one segment, i.e., everyone
becomes informed.
The intuition behind this equilibrium structure is as follows. First, as in Propo-
sition 1, the players’ posteriors under ignorance reinforce their biases, so that θ(1) <
. . . < θ(n). When θ > θ(n), players always want to inform lower-bias neighbors,
and only higher-bias neighbors who are close enough; this leads to the same general
results as in Proposition 5 when θ = 1. The outcome when θ < θ(1) is analogous.
However, values of θ lying in [θ(1), θ(n)] create a different set of incentives. Suppose
θ(i) < θ < θ(i + 1) for some i. Then both i and i + 1 want to inform everyone,
as the actions of higher players will be lowered after learning θ, while the actions of
lower players will be increased. Consider now i − 1, who wants to inform everyone
except for i; and i + 2, who wants to inform everyone except for i + 1. Since i and
i + 1 talk to each other, both i − 1 and i + 2 have effectively only two options: tell
both i and i + 1 or neither. However, depending on whether sharing θ increases or
decreases the average of i and i + 1’s actions, it will always be optimal for at least
one of {i − 1, i + 2} to inform them. Hence, if any player in {i − 1, i, i + 1, i + 2} is
informed, everyone becomes informed.
This argument can be iterated: given an interval of players who inform everyone,
at least one of the adjacent players on either end will be willing to inform that set, and
thus everyone. The argument ends when we run out of players on one side, pinning
down b1(θ) or bkθ(θ). After that, the remaining players have monotonic incentives, so
their behavior is as in Proposition 5. However, for θ close to zero, the set of players
who inform everyone turns out to be the entire population. That is, just as in the
myopic setting, moderate states of the world are communicated to everyone.
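A sketch of this expansion argument, taking the long-run posteriors under ignorance as given inputs. We simplify by having an adjacent player join the chain iff her total payoff from everyone learning θ is positive, which is the relevant comparison once informing any member of the block informs everyone; all names are ours:

```python
def inform_everyone_block(theta, biases, posteriors):
    """Iterated expansion for an interior state theta_bar(1) < theta < theta_bar(n).
    `posteriors[m]` is player m's mean posterior under ignorance (increasing in m).
    Start from the adjacent pair straddling theta, who both want to inform
    everyone; a player adjacent to the current block effectively chooses between
    triggering full disclosure and staying silent, so she joins the chain iff her
    total net payoff from everyone else learning theta is positive."""
    n = len(biases)
    def total_gain(k):       # k's summed differential payoff from full disclosure
        return sum((theta - posteriors[m])
                   * ((theta - posteriors[m]) - 2 * (biases[m] - biases[k]))
                   for m in range(n) if m != k)
    i = max(m for m in range(n) if posteriors[m] < theta)
    lo, hi = i, i + 1
    while True:
        if lo > 0 and total_gain(lo - 1) > 0:
            lo -= 1
        elif hi < n - 1 and total_gain(hi + 1) > 0:
            hi += 1
        else:
            return set(range(lo, hi + 1))
```

With three players of biases (−1, 0, 1), posteriors (−0.1, 0, 0.1), and the moderate state θ = 0.05, the block expands to the whole population, as in part (iii) of Proposition 3.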
Well-Connectedness in Random Graphs
The condition of well-connectedness is satisfied only by networks with a relatively high
number of links. However, it is not too restrictive, and is plausibly satisfied by real
social networks. Here is a back-of-the-envelope calculation supporting this statement.
Take an Erdos-Renyi random graph G(n, p),11 with n equal to 320 million. Assume
γ0 is small, biases are uniformly distributed and bn − b1 = 4 (so that, per part (ii) of
Proposition 2, there are 4 segments in equilibrium). Then, if we assume an average
degree of np = 405, at least 99% of the resulting networks are well-connected (see
Proposition 7 in Appendix B). This bound is not tight, so in fact even a somewhat
lower average degree would be sufficient.
That aside, an average degree of 405 is approximately in line with online social
networks: for example, the average Facebook user in the U.S. has 338 friends (Smith,
2014). In addition, well-connectedness can be satisfied for much lower average degrees
if the network displays homophily (McPherson et al., 2001), that is, links between
similar agents are disproportionately common. Indeed, a high global average degree
is not required for well-connectedness—rather, it suffices for the average degree,
restricted to ideologically similar players, to be relatively high. To illustrate, in our
example, a person whose bias is in the 50th percentile must have an expected 76
friends with biases between the 41st and 59th percentiles.
Equilibrium Analysis in Trees
Although our previous results all rely on the network having a relatively high number
of links, the other extreme—that is, a network that is sparse enough—is also tractable.
Namely, if the network is a tree (that is, it has no cycles) and Nature always informs
at most one player, then the equilibrium of the game can be characterized as the
solution to a recursive algorithm.
Proposition 4. If the state of the world is binary (P (θ = −1) = P (θ = 1) = 0.5),
(N,G) is a tree, γ(S) > 0 only for singleton S, and γ0 is low enough, then generically
there is a unique sequential equilibrium.
Briefly, the argument is as follows. If the tree is made up of two players, and one
player is informed, her decision to inform the other player can be characterized as in
Proposition 1 (there are no informational cascades). If the network is a connected
line of three players, i1 ↔ i2 ↔ i3, and i1 is initially informed, then the problem faced
by i2, conditional on being informed by i1, reduces to the two-player case. Then i1
simply decides whether she prefers to inform {i2} vs. not, or {i2, i3} vs. neither,
11That is, a graph with n nodes in which any two nodes are linked with probability p.
depending on i2’s anticipated behavior in the continuation. The same logic extends
to a line of any length. If the tree is not a line, players with multiple neighbors will
have the option of choosing which neighbors—and hence which branches—to inform,
but these decisions can be made separately.12
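For a realized state θ = 1 and γ0 ≈ 0 (so posteriors under ignorance are approximately 0, and i's net gain from j learning is θ(θ − 2(bj − bi))), the recursion can be sketched as follows; the tree encoding and function names are ours:

```python
def diffusion_sets(tree, biases, theta, root):
    """Equilibrium diffusion on a tree with a single initially informed
    player `root`, for a realized binary state (gamma0 ~ 0). spread(i, parent)
    returns the set of players who eventually learn theta once i does: i informs
    a neighbor's whole branch iff her summed net flow payoff over the set of
    players who would then learn, anticipating their behavior, is positive."""
    def gain(i, j):          # i's net payoff from j learning, posteriors ~ 0
        return theta * (theta - 2 * (biases[j] - biases[i]))
    def spread(i, parent):
        out = {i}
        for c in tree[i]:
            if c == parent:
                continue
            branch = spread(c, i)        # who learns if i informs c
            if sum(gain(i, m) for m in branch) > 0:
                out |= branch
        return out
    return spread(root, None)
```

In the second test case below, the root's neighbor is undesirable to inform on her own, but serves as an intermediary to a player the root does want informed, so the root talks anyway.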
We finish with two observations. First, the assumption that there is a single source
of information is crucial: if two players are initially informed, multiple equilibria can
arise (see Appendix A). Second, even though well-connected networks and trees
both admit tractable (and, under some conditions, unique) equilibria, the resulting
patterns of information transmission are very different. Whereas in a
well-connected graph the players break up into segments based on their biases, in a
tree communication is highly constrained by the network structure, and as a result
undesirable agents are often used as intermediaries.
5 Conclusions
In this paper I have studied a model of multi-player motivated communication. The
analysis pinpoints in what cases information transmission is possible despite the play-
ers’ biases, and what signals are more likely to propagate when there is only partial
transmission.
Myopic agents end up informing everyone if every player in the network has like-
minded neighbors (small gaps), but not if biases are very disparate (large gaps).
However, forward-looking agents endogenously limit their communication with even
like-minded neighbors who would then share information with the wrong group. As a
result, even in the case of small gaps, forward-looking agents break up into segments
that restrict communication with each other. In both settings, whenever full learning
does not obtain, moderate signals are more likely to be transmitted than extreme
ones.
The model also sheds light on the interaction between the network structure and
the strategic consequences of informational cascades. In sparse networks, forward-
looking agents may communicate more than myopic ones, because they understand
that undesirable neighbors may be used as intermediaries. But in well-connected
networks, forward-looking agents communicate less, because the main effect of infor-
mational cascades is to allow information to fall into the wrong hands.
12Note also that mutual observability and an activity rule make no difference in this case.
The model can be extended in several directions. One concerns the communication
protocol: in many settings players choose whether to share information publicly or not
at all, and cannot personalize their messages (for example, on Twitter). Embedding
this assumption in the model may lead to more transmission if the network displays
homophily and agents are myopic, as, in this case, they would be willing to share
information with their mostly-similar neighbors, without being able to hide it from
their more biased neighbors. But it may also lead to less transmission if agents are
forward-looking, as they would be fearful of information spilling into the wrong group,
and would potentially have to censor all their communication to avoid it.
Another potential direction for future work concerns endogenous network for-
mation and information acquisition. Indeed, we have assumed throughout that the
players are active as senders but passive as receivers: they cannot choose which of
their neighbors to listen to, nor make costly investments to form links or increase
their probability of being informed by Nature. Giving players more control over in-
formation sources may not lead to higher information diffusion; instead, if players
care enough about influencing others, they may choose to spend most of their time
listening to like-minded agents, who are the most likely to share useful ammunition
for arguing with other groups.
Finally, this paper focuses on the case where information is all-or-nothing, that is,
an informative message fully reveals the state. In reality, agents may be exposed to
multiple partially informative signals. If anything, a model incorporating this would
tend to feature lower information transmission in equilibrium: if players are exposed
to many signals, so that each one has a small impact on beliefs, there would never
be a motive to share a high signal with a high-bias player in order to correct a large
mistake in beliefs.
References
Acemoglu, Daron, Victor Chernozhukov, and Muhamet Yildiz, “Fragility
of Asymptotic Agreement under Bayesian Learning,” Theoretical Economics, 2015.
Forthcoming.
Banerjee, Abhijit V, “A Simple Model of Herd Behavior,” The Quarterly Journal
of Economics, 1992, 107 (3), 797–817.
Bloch, Francis, Gabrielle Demange, and Rachel Kranton, “Rumors and Social
Networks,” International Economic Review, 2018, 59 (2), 421–448.
Calvo-Armengol, Antoni, Joan De Martı, and Andrea Prat, “Communication
and Influence,” Theoretical Economics, 2015, 10 (2), 649–690.
Crawford, Vincent P and Joel Sobel, “Strategic Information Transmission,”
Econometrica, 1982, pp. 1431–1451.
Dewan, Torun and Francesco Squintani, “Leadership with Trustworthy Asso-
ciates,” American Political Science Review, 2018, 112 (4), 844–859.
, Andrea Galeotti, Christian Ghiglino, and Francesco Squintani, “Infor-
mation Aggregation and Optimal Structure of the Executive,” American Journal
of Political Science, 2015, 59 (2), 475–494.
Dimock, Michael, Jocelyn Kiley, Scott Keeter, Carroll Doherty,
and Alec Tyson, “Beyond Red vs. Blue: The Political Typology,”
Pew Research Center, 2015. http://www.people-press.org/files/2014/06/
6-26-14-Political-Typology-release1.pdf.
Erdos, Paul and Alfred Renyi, “On the Evolution of Random Graphs,” Publ.
Math. Inst. Hung. Acad. Sci, 1960, 5 (1), 17–60.
Funk, C, L Rainie, and D Page, “Public and Scientists’ Views on Science and So-
ciety,” Pew Research Center, 2015. http://www.pewinternet.org/files/2015/
01/PI_ScienceandSociety_Report_012915.pdf.
Galeotti, Andrea, Christian Ghiglino, and Francesco Squintani, “Strategic
Information Transmission Networks,” Journal of Economic Theory, 2013, 148 (5),
1751–1769.
Gentzkow, Matthew and Emir Kamenica, “Bayesian Persuasion,” American
Economic Review, 2011, 101 (6), 2590–2615.
Glazer, Jacob and Ariel Rubinstein, “A Study in the Pragmatics of Persuasion:
a Game Theoretical Approach,” Theoretical Economics, 2006, 1 (4), 395–410.
Hagenbach, Jeanne and Frederic Koessler, “Strategic Communication Net-
works,” The Review of Economic Studies, 2010, 77 (3), 1072–1099.
, , and Eduardo Perez-Richet, “Certifiable Pre-Play Communication: Full
Disclosure,” Econometrica, 2014, 82 (3), 1093–1131.
Howe, Peter D, Matto Mildenberger, Jennifer R Marlon, and Anthony
Leiserowitz, “Geographic variation in opinions on climate change at state and
local scales in the USA,” Nature Climate Change, 2015, 5 (6), 596–603.
Jardina, Ashley and Michael Traugott, “The Genesis of the Birther Rumor: Par-
tisanship, Racial Attitudes, and Political Knowledge,” Journal of Race, Ethnicity
and Politics, 2019, 4 (1), 60–80.
Leiserowitz, A., E. Maibach, S. Rosenthal, J. Kotcher, P. Bergquist,
M. Ballew, M. Goldberg, and A. Gustafson, “Climate change in the Amer-
ican mind: April 2019,” Yale Project on Climate Change Communication (Yale
University and George Mason University, New Haven, CT), 2019.
McPherson, Miller, Lynn Smith-Lovin, and James M Cook, “Birds of
a Feather: Homophily in Social Networks,” Annual Review of Sociology, 2001,
pp. 415–444.
Milgrom, Paul and John Roberts, “Relying on the Information of Interested
Parties,” The RAND Journal of Economics, 1986, 17 (1), 18–32.
Moore, Peter, “Young Americans most worried about vaccines,” Re-
trieved on 9/6/2015, 2015. https://today.yougov.com/news/2015/01/30/
young-americans-worried-vaccines/.
Morales, Lymari, “Obama’s Birth Certificate Convinces Some, but Not All,
Skeptics,” Retrieved on 9/6/2015, 2011. http://www.gallup.com/poll/147530/
obama-birth-certificate-convinces-not-skeptics.aspx.
Omer, Saad B, Jennifer L Richards, Michelle Ward, and Robert A Bed-
narczyk, “Vaccination Policies and Rates of Exemption from Immunization, 2005-
2011,” New England Journal of Medicine, 2012, 367 (12), 1170–1171. PMID:
22992099.
Perego, Jacopo and Sevgi Yuksel, “Searching for Information and the Diffusion
of Knowledge,” Working paper, 2016.
Sethi, Rajiv and Muhamet Yildiz, “Public Disagreement,” American Economic
Journal: Microeconomics, 2012, 4 (3), 57–95.
Smith, Aaron, “6 New Facts about Facebook,” Pew Research Center. Retrieved
on 11/6/2015, 2014. http://www.pewresearch.org/fact-tank/2014/02/03/
6-new-facts-about-facebook/.
Squintani, Francesco, “Information Transmission in Political Networks,” Working
paper, 2019.
Wolitzky, Alexander, “Learning from Others’ Outcomes,” American Economic
Review, 2018, 108 (10), 2763–2801.
A Other Equilibria with Forward-Looking Players
In arbitrary (not well-connected) networks, and without the assumptions of mutual
observability and an activity rule, concerns about informational cascades can support
a variety of complex or pathological equilibria. We illustrate the possibilities in several
examples. In all of these, we assume that γ0 > 0 is small and that θ = 1 is realized.
Figure 8: Multiple Pareto-ordered equilibria
(a) Messages are substitutes
(b) Messages are complements
[Each panel shows a five-agent network, with agents labeled 1–5.]
First, Figures 8a and 8b show games with multiple equilibria that are Pareto-
ordered, in the sense that the informed agents agree on the best equilibrium but
cannot necessarily coordinate on it. In Figure 8a, take (b1, b2, b3, b4, b5) = (0, 1, 1, b, b +
ε), with 3/2 < b < 2; γ({2, 3}) small and γ(S) = 0 otherwise; and ε > 0 small. Then 4
and 5 talk to 1 if they are informed, while 1 talks to no one. Both 2 and 3 want to
inform 1, but not 4 or 5. However, it is worth using one and only one of them as an
intermediary. Hence, there is an equilibrium where 2 talks to 4 while 3 does not talk
to 5, and another where the opposite happens. Although both 2 and 3 would rather
inform 4, if the “wrong” equilibrium is being played there is no way for 2 to inform
4 and tell 3 that informing 5 is no longer necessary.
In Figure 8b, (b1, b2, b3, b4, b5) = (1, 1, b, b, b + 0.4), where 1 < b < 3/2; γ({2, 3}) is
small, and γ(S) = 0 otherwise. (This example requires a directed network, where 5
can receive but not share information, but more complex examples can deliver similar
results in an undirected network.) Here 3 and 4 will talk to 5 if informed. 1 and 2
are willing to inform 3 and 4, but this comes at the cost of informing 5. It is then
optimal to either inform no one or everyone; and one of these options (depending
on the value of b) is preferred by both 1 and 2. However, for a range of values of b,
both are equilibrium outcomes. Indeed, if no one is informed in equilibrium, then a
deviation by 1 can inform {3, 5}, but not 4: a one-player deviation does not capture
the full value of switching to the informative equilibrium. Conversely, if everyone is
informed in equilibrium, then a deviation by 1 leaves 3 uninformed while having no
effect on 4 or 5, which is never profitable.
Figure 9: Multiple unordered equilibria
[Figure shows a nine-agent network, with agents labeled 1–9.]
Figure 9 shows why there may be multiple equilibria that are not Pareto-ordered
for the informed players. Suppose that γ({2, 4}) is small, and γ(S) = 0 otherwise.
As in Figure 8a, 2 and 4 want to use either 8 or 9 as an intermediary to share
information with 1. Hence, for appropriate bias vectors b, there are two pure-strategy
equilibria. However, the choice of intermediary now results in other players learning,
such as {3, 7} or {5, 6}; since 2 and 4 in general differ in their preferences over which
group to inform, examples can be constructed where 2 would rather use 9 as an
intermediary, and 4 would rather use 8, or vice versa. Note that, if 2 and 4 could
observe each other’s actions, the first case would become a game of preemption (each
player races to be the first to communicate) while the second would become a war of
attrition (each player waits for the other to use her intermediary).
Figure 10: History-dependent strategies
[Figure shows a five-agent network, with agents labeled 1–5.]
Figure 10 presents a case where the equilibrium must condition on the players’
histories in an important way. Assume that (b1, b2, b3, b4, b5) = (0, 1, 1.6, 3, 3), and
γ({4}) = γ({5}) = 0.1 and γ(S) = 0 otherwise. Clearly 1 never talks, and 2 never
talks to 4 or 5. 3 and 4 always inform 1 if they are informed, and 4 and 5 always
inform 2. 3 would talk to 2 but never can (since 2 is 3’s only potential source) and
3 talks to 1. The question then is: should 2 talk to 3? The answer depends on how
2 became informed. If 2 was informed by 5, he should inform 3, but not if he was
informed by 4.
The reason is that 2 wants to inform 1, but not 3. If 5 is the informed player,
informing 3 amounts to also informing 1: 3 is used as an intermediary. However, if 4
is informed, then 1 would become informed no matter what 2 does.
This example can be altered to produce another where players want to manipulate
each other. Suppose that γ({4}) = 0.001, γ({5}) = 0.1, γ({4, 5}) = 0.009 and
γ(S) = 0 otherwise. Then the analysis proceeds as before, with one caveat: now
4 should not talk to 2. Indeed, 4 would rather risk a 10% chance that 2 will go
uninformed (if 5 is uninformed) to ensure that, with a 90% chance, 2 will be informed
by 5 and hence, thinking that 4 is likely uninformed, she will inform 3.
Figure 11: Non-existence of pure-strategy equilibrium
[Figure shows a five-agent network, with agents labeled 1–5.]
Finally, Figure 11 shows that a pure-strategy equilibrium may not exist. Suppose that
(b1, b2, b3, b4, b5) = (0, 0.45, 0.9, 2, 2.6); γ({1, 4}) is small and γ(S) = 0 otherwise.
(This example also assumes a directed network in which 3 is only a receiver.) Then 2
and 5 always inform 3. 1 wants to inform 2 but not 3, and would rather tell neither
than both; 4 wants to inform 3 but not 5, and would rather tell both than neither.
Then, if neither player is talking, 4 would deviate and inform 5. If 4 is talking and
1 is not, 1 would deviate and inform 2 (since 3 is already informed). If both 1 and 4
are talking, 4 would deviate and stop talking (since 3 is already informed). And if 1
is talking but not 4, 1 would deviate and stop talking.
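This four-profile deviation cycle can be verified numerically. The encoding below is ours: players 1 and 4 each choose whether to talk (to 2 and 5, respectively), the resulting informed sets follow the passing behavior described above, and payoffs use the net flow gain θ(θ − 2(bj − bi)) with θ = 1 and γ0 ≈ 0:

```python
from itertools import product

# Figure 11 example: biases of players 1..5, theta = 1, gamma0 ~ 0.
b = {1: 0, 2: 0.45, 3: 0.9, 4: 2, 5: 2.6}
gain = lambda i, j: 1 - 2 * (b[j] - b[i])   # theta * (theta - 2(b_j - b_i))

def learners(a1, a4):
    """Players (besides the sources 1 and 4) who end up informed:
    1 informing 2 also reaches 3 (2 passes the state on), and 4 informing 5
    also reaches 3 (5 passes it on)."""
    out = set()
    if a1: out |= {2, 3}
    if a4: out |= {5, 3}
    return out

def payoff(i, a1, a4):
    return sum(gain(i, j) for j in learners(a1, a4))

# A profile is a pure equilibrium iff neither 1 nor 4 gains by flipping her action.
def is_equilibrium(a1, a4):
    return (payoff(1, a1, a4) >= payoff(1, 1 - a1, a4)
            and payoff(4, a1, a4) >= payoff(4, a1, 1 - a4))

print([p for p in product([0, 1], repeat=2) if is_equilibrium(*p)])  # → []
```

Every one of the four pure profiles admits a profitable deviation, reproducing the cycle in the text.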
Robustness of Natural Segmented Equilibrium
We finish with a survey of alternative sets of conditions which guarantee the exis-
tence of a natural segmented equilibrium, without requiring the addition of mutual
observability and an activity rule to the game. However, these conditions typically
do not guarantee uniqueness, so they do not rule out the existence of other equilibria
like the ones we have just described.
Proposition 5. If (N,G) is well-connected, then generically, for γ0 low enough
and δ = 1,13 there is a natural segmented equilibrium of the game without mutual
observability or an activity rule, with the same segments M̲1, . . . , M̲k+1 and M̄1,
. . . , M̄k+1 found in Proposition 2.
In other words, a natural segmented equilibrium always exists when δ = 1 (but
so do many other equilibria, as with δ = 1 players are indifferent over any deviations
that do not change the information distribution in the long run). Moreover, it has
a simple structure: when θ = 1, a player in M̄l always informs players in segments
M̄l and lower, and never informs players in higher segments. (The case θ = −1 is
analogous.)
Next, say an agent i is popular if she is connected to everyone, and say (N,G)
is very well-connected if each interval of size (1 − γ0)/(2 − γ0) or higher contains a
popular agent (this, of course, implies well-connectedness).
Proposition 6. If (N,G) is very well-connected, γ({l}) = γ0 is low enough and δ
is high enough, then generically there is a natural segmented equilibrium of the game
without mutual observability or an activity rule, with the same segments M̲1, . . . ,
M̲k+1 and M̄1, . . . , M̄k+1 found in Proposition 2.
The stronger notion of very well-connectedness guarantees that, if any player j
wants to inform a large set of players “quickly”, she can do so by informing a popular
agent whose bias is close to j’s—this popular agent will want to inform the same set of
players, and will be able to do so by virtue of being popular. Finally, the assumption
that γ0 = γ({l}) for some l removes uncertainty about which set of segments might
become informed in equilibrium.
13 This means that the players' ex ante payoffs are Ui = lim_{t→∞} E0(uit), where uit is i's flow payoff at time t. This is well-defined since p(θ, i, t) converges for all θ, i, so uit converges.
B Proofs
Proof of Proposition 1. We first assume the existence of an equilibrium and prove it
must satisfy (i) and (ii).
(i) Each player i chooses ait to maximize Eit(−(ait − θ − bi)²). Since this is a concave
function of ait, the unique optimum is given by the FOC: ait = Eit(θ) + bi, where
Eit(θ) = θ if i is informed and Eit(θ) = θ(i, t) if not, by definition.
(ii) This follows from the fact that the differential payoff of showing θ to j is

P[−(bj − bi)² + (θ(j, t) + bj − θ − bi)²] = P(θ − θ(j, t))[(θ − θ(j, t)) − 2(bj − bi)],

where P > 0 is the probability that j will be uninformed by the end of time t if i
does not show θ.
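The algebraic identity in this display can be spot-checked numerically (variable names are ours; tbar plays the role of θ(j, t)):

```python
import random

# Check the identity: -(bj - bi)^2 + (tbar + bj - theta - bi)^2
#   = (theta - tbar) * ((theta - tbar) - 2*(bj - bi)),
# where tbar stands for j's mean posterior under ignorance, theta_bar(j, t).
random.seed(0)
for _ in range(1000):
    bi, bj, theta, tbar = (random.uniform(-3, 3) for _ in range(4))
    lhs = -(bj - bi) ** 2 + (tbar + bj - theta - bi) ** 2
    rhs = (theta - tbar) * ((theta - tbar) - 2 * (bj - bi))
    assert abs(lhs - rhs) < 1e-9
```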
The existence and uniqueness of equilibrium can be proven by solving the game
forward. For brevity, we write the argument under the assumption that H admits a
continuous density h with support [−1, 1]. Indeed, suppose the result is true up to
time t. Let x(i) be candidate values for the agents’ mean posteriors under ignorance
at the end of period t+ 1. The communication strategies at time t are then uniquely
determined, by part (ii) of the Proposition. Define T(x(i)) as

T(x(i)) = [∫_{−1}^{1} θ(1 − p(θ, i, t + 1))h(θ) dθ] / [∫_{−1}^{1} (1 − p(θ, i, t + 1))h(θ) dθ], where

1 − p(θ, i, t + 1) = (1 − p(θ, i, t))[1 − Σ_{j∈Ni} (α/|Nj|) p(θ, j, t) · 1{(θ − x(i))[(θ − x(i)) − 2(bi − bj)] > 0}]
and p(θ, j, t) denotes the probability j is informed at time t conditional on the state
being θ, and i being uninformed at time t. Then x(i) is compatible with equilibrium
iff T (x(i)) = x(i). After some algebra, a bound of the form |T ′(x(i))| ≤ Kα can be
proven for fixed K, so that for α > 0 small enough T is a contraction. Hence the mean
posteriors θ(i, t) are uniquely determined, and from parts (i), (ii) the equilibrium is
uniquely pinned down up to time t+ 1.
Proof of Corollary 1. First note that |θ(i, t)| ≤ γ0/(2 − γ0) for all i, t. (The upper
bound is attained when p(1, i, t) = 0, p(−1, i, t) = γ0, and the lower bound when
p(1, i, t) = γ0, p(−1, i, t) = 0.) In the small gaps case, there is c > 0 such that
bi+1 − bi ≤ (1 − γ0)/(2 − γ0) − c for all i. Then, by Proposition 1, at any time t, i
strictly wants to share any θ ∈ [−1, −1 + 2c) ∪ (1 − 2c, 1] with i − 1 and i + 1. Hence
p(θ, i) = γ0 for all i and θ ∈ [−1, −1 + 2c) ∪ (1 − 2c, 1], i.e., for all θ if θ is binary.
For the large gaps case, we begin by noting that, if i has a meeting with j and
i > j, i informs j iff θ > θ(j, t) (Proposition 1). Next, we argue that θ(n, t) is
increasing in t. Suppose for the sake of contradiction that θ(n, t + 1) ≤ θ(n, t). Then,
in equilibrium, informed players i < n who meet n at time t + 1 will inform n iff
θ ≤ θ(n, t + 1) ≤ θ(n, t). Hence 1 − p(θ, n, t + 1) is strictly smaller than 1 − p(θ, n, t)
for θ ≤ θ(n, t + 1) ≤ θ(n, t) and equal for θ > θ(n, t + 1), whence θ(n, t + 1) > θ(n, t),
a contradiction. By the same argument, θ(1, t) is decreasing in t. And, in similar
fashion, we can also prove that θ(1, t) < θ(i, t) < θ(n, t) for all t.
Next, we prove that there is full diffusion for states θ ∈ (θ(1), θ(n)). Take such a
state, and let t0 be such that θ ∈ (θ(1, t), θ(n, t)) for all t ≥ t0. Then, at any time
t ≥ t0, 1 wants to share θ with n; n wants to share θ with 1; and other players want
to share θ with both 1 and n. Hence p(θ, 1) = p(θ, n) = γ0. But then, for any player
1 < i < n, either 1 or n will share θ with i, depending on whether θ > θ(i, t) or vice
versa. Hence p(θ, i) = γ0 for all i. On the other hand, if θ < θ(1), then it is only ever
shared from lower types to higher types (as it is lower than anyone’s posterior under
ignorance), yielding the result. The case θ > θ(n) is analogous.
Proof of Corollary 2. As γ0 → 0, p(θ, i) ≤ γ0 → 0 for all θ and i. Then θ(i, t)→ E(θ),
which equals 0 by assumption.
Next consider the case γ0 → 1. Note that, if θ(1) → −1 and θ(n) → 1, we are
done. For the sake of contradiction, then, suppose WLOG that θ(1) does not converge
to −1, so there is a sequence γ0^k → 1 such that θ(1)^k → θ0 > −1. Then, as k → ∞,
the probability that 1 does not learn the state converges to 0 for θ > θ0 and to γ0/n > 0
for θ < θ0. But then θ(1)^k must converge to some value in the interval (−1, θ0), a
contradiction.
Proof of Proposition 2. We begin by showing how to find a set of thresholds b̄1 <
. . . < b̄k, b̲1 < . . . < b̲k that can support a natural segmented equilibrium.
Given players i, j, let ∆U(1, i, j) = −(bj − bi)² + (bj − bi − 1)² be i's net flow
payoff from informing j when γ0 = 0 (so that θ(j) = 0). Let l1 be such that
Σ_{j>l1} ∆U(1, l1, j) < 0 but Σ_{j>l} ∆U(1, l, j) > 0 for all l > l1. (Generically, there
is no l for which Σ_{j>l} ∆U(1, l, j) = 0.) Then pick b̄k ∈ (b_{l1}, b_{l1+1}).¹⁴ Let l2 < l1
be such that Σ_{l1≥j>l2} ∆U(1, l2, j) < 0 but Σ_{l1≥j>l} ∆U(1, l, j) > 0 for all l > l2,
and let b̄_{k−1} ∈ (b_{l2}, b_{l2+1}). Continue until Σ_{j>1: bj≤b̄1} ∆U(1, 1, j) ≥ 0. We define b̲i
analogously, working from b1 and using ∆U(−1, i, j) = −(bj − bi)² + (bj − bi + 1)².
Take (b1, . . . , bn) to be generic values such that all of the sums Σ_{l′≥j>l} ∆U(1, l, j) and
Σ_{l′≤j<l} ∆U(−1, l, j) are nonzero. Then the partition defined by these thresholds is
the only one that could support a natural segmented equilibrium when γ0 = 0.
Next, recall that |θ(i, t)| ≤ γ0/(2 − γ0) for all i, t. And note that there is γ∗ s.t., if γ0 < γ∗,
none of the inequalities Σ_{l′≥j>l} ∆U(1, l, j) ≷ 0, Σ_{l′≤j<l} ∆U(−1, l, j) ≷ 0 change signs
when we replace the terms ∆U(1, i, j) with ∆U(1, i, j, θ) = −(bj − bi)² + (bj + θ − bi − 1)²
for any |θ| ≤ γ0/(2 − γ0). Hence, the partition into segments M̄1, . . . , M̄_{k+1}, M̲1, . . . , M̲_{k+1}
induced by b̄1 < . . . < b̄k, b̲1 < . . . < b̲k is still the only one compatible with a natural
segmented equilibrium for all γ0 > 0 small enough.
Next we prove part (ii). The first half follows immediately from the fact that
∆U(1, i, j, θ) = −(bj − bi)² + (bj + θ − bi − 1)² = (1 − θ)(1 − θ + 2(bi − bj)), which is
positive whenever |bi − bj| < (1 − γ0)/(2 − γ0) and |θ| ≤ γ0/(2 − γ0).
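The identity and the positivity claim are easy to verify numerically (the random draws and γ0 = 0.3 are illustrative):

```python
import random

random.seed(0)

# Check Delta(1, i, j, theta) = -(bj - bi)**2 + (bj + theta - bi - 1)**2
#                             = (1 - theta) * (1 - theta + 2 * (bi - bj)).
for _ in range(1000):
    bi, bj, theta = (random.uniform(-2, 2) for _ in range(3))
    lhs = -(bj - bi) ** 2 + (bj + theta - bi - 1) ** 2
    rhs = (1 - theta) * (1 - theta + 2 * (bi - bj))
    assert abs(lhs - rhs) < 1e-9

# Positivity on the stated region: |bi - bj| < (1-g)/(2-g), |theta| <= g/(2-g).
g = 0.3
for _ in range(1000):
    d = random.uniform(-0.999, 0.999) * (1 - g) / (2 - g)  # d = bi - bj
    theta = random.uniform(-1, 1) * g / (2 - g)
    assert (1 - theta) * (1 - theta + 2 * d) > 0
```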
For the second half, we will prove the following. Define a random variable X that
takes the values b1, . . . , bn each with probability 1/n, and denote its c.d.f. by F. Then,
if there is ζ ≤ 1 such that for all b′ − b ≥ (1 − γ0)/(2 − γ0) we have

(1/(2ζ))(b′ − b) ≥ E(X − b | X ∈ [b, b′]) ≥ (ζ/2)(b′ − b),

then the thresholds b̄1 < . . . < b̄k, b̲1 < . . . < b̲k found above (which we will show to
be the natural segmented equilibrium thresholds) satisfy

1/ζ ≥ b̄_{i+1} − b̄_i ≥ ζ, 1/ζ ≥ b̲_{i+1} − b̲_i ≥ ζ.
The argument goes as follows. Suppose θ = 1 is realized, j + 1 is the lowest
member of segment [b̄i, b̄_{i+1}], and j is the highest member of the next-lowest segment.
It must be that j prefers not to inform the set of members of [b̄i, b̄_{i+1}] when γ0 is low
¹⁴ k can be determined at the end, but it makes no difference to relabel the thresholds.
enough, in particular when γ0 = 0. If γ0 = 0, the net payoff from informing them is

0 ≥ ∆ = −∫_{bj}^{b̄_{i+1}} (b′ − bj)² dF(b′) + ∫_{bj}^{b̄_{i+1}} (b′ − bj − 1)² dF(b′)

= ∫_{bj}^{b̄_{i+1}} [1 − 2(b′ − bj)] dF(b′) =⇒ 1 ≥ 2E(X − bj | X ∈ (bj, b̄_{i+1}])

=⇒ 1/ζ ≥ b̄_{i+1} − bj =⇒ 1/ζ ≥ b̄_{i+1} − b̄_i.
Conversely, j + 1 must prefer to inform the set of members of [b̄i, b̄_{i+1}] above him,
when γ0 = 0. The net payoff from informing them is

0 ≤ ∆ = −∫_{b_{j+1}}^{b̄_{i+1}} (b′ − b_{j+1})² dF(b′) + ∫_{b_{j+1}}^{b̄_{i+1}} (b′ − b_{j+1} − 1)² dF(b′)

= ∫_{b_{j+1}}^{b̄_{i+1}} [1 − 2(b′ − b_{j+1})] dF(b′) =⇒ 1 ≤ 2E(X − b_{j+1} | X ∈ (b_{j+1}, b̄_{i+1}])

=⇒ b̄_{i+1} − b_{j+1} ≥ ζ =⇒ b̄_{i+1} − b̄_i ≥ ζ.
The proof for the thresholds b̲i is analogous.
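The conditional-mean condition is easy to inspect on an evenly spaced bias grid, where it holds with ζ close to 1 (the grid and ζ = 0.9 are illustrative; `cond_mean_excess` is a hypothetical helper name):

```python
def cond_mean_excess(biases, b, b_prime):
    # E(X - b | X in (b, b_prime]) for X uniform on the bias list.
    members = [x for x in biases if b < x <= b_prime]
    return sum(x - b for x in members) / len(members)

biases = [k / 100 for k in range(101)]  # evenly spaced biases on [0, 1]
zeta = 0.9

# (1/(2*zeta))(b'-b) >= E(X - b | X in (b, b']) >= (zeta/2)(b'-b) on wide windows.
for b, b_prime in [(0.0, 0.5), (0.2, 0.9), (0.1, 1.0)]:
    ex = cond_mean_excess(biases, b, b_prime)
    assert (b_prime - b) / (2 * zeta) >= ex >= zeta * (b_prime - b) / 2
```

For a uniform grid the conditional excess mean is roughly half the window length, so both inequalities hold with room to spare at ζ = 0.9.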
Next, we will prove that for γ0 low enough and δ close enough to 1, part (i) holds,
and the equilibrium thresholds b̄1 < . . . < b̄k, b̲1 < . . . < b̲k must be the ones we
calculated above. In particular, this will imply part (iii), since our definition of the
thresholds was independent of the network structure.
Take θ = 1. The proof will be by induction on the set of segments k + 1, k, . . . , 1.
First, suppose everyone is informed. Clearly nothing else can happen.
Second, suppose a player i0 in [bn − (1 − γ0)/(2 − γ0), bn] is informed. (Note that players in
[bn − (1 − γ0)/(2 − γ0), bn] strictly prefer to inform everyone.) We argue that everyone will become
informed. Suppose for the sake of contradiction that, when the activity rule binds,
the set of informed players Y is a strict subset of N . There are two cases. If not every
player in [bn − (1 − γ0)/(2 − γ0), bn] is informed, by well-connectedness there is a path joining i0 to
an uninformed player in this interval, i∗. WLOG i0 is directly connected to i∗. But
then i0 should have informed i∗ at her Kth opportunity at the latest, since i0 strictly
prefers for everyone to be informed. Alternatively, if every player in [bn − (1 − γ0)/(2 − γ0), bn] is
informed, let i∗ be the maximal member of N − Y. Since (N, G) is well-connected, the
interval [b_{i∗}, b_{i∗} + (1 − γ0)/(2 − γ0)] is connected. In particular, if i∗ < n, i∗ must have a neighbor
j > i who is informed. But then j should have informed i∗ at her Kth opportunity
at the latest, since j strictly prefers for everyone in N − Y to be informed.
Third, suppose a player i0 in [b̄k, bn] is informed. We again argue that everyone will
become informed. To prove it, suppose the statement is false for some i0, and take
i0 to be maximal with this property. Let Y be the set of informed players when the
activity rule binds. By the maximality of i0, Y must not contain any players above i0
(otherwise everyone would have become informed). Since the interval [b_{i0}, b_{i0} + (1 − γ0)/(2 − γ0)]
is connected, i0 is connected to some uninformed player j0 > i0 such that, if he is
informed, everyone will be informed. Then i0’s net payoff from informing j0 is of the
form
∆U_{i0} = Σ_{j>i0} [−(bj − b_{i0})² + (bj + θ(j) − b_{i0} − 1)²] + Σ_{j<i0: j∉Y} [−(bj − b_{i0})² + (bj + θ(j) − b_{i0} − 1)²] + O(1 − δ),

for some values θ(j) ∈ [−γ0/(2 − γ0), γ0/(2 − γ0)]. The term O(1 − δ) comes from the fact that
informing everyone takes some time, but the number of periods it takes is bounded
by the activity rule. Now note that, for γ0 low enough, the first sum is positive by
construction of the thresholds bi, and the second sum is positive pointwise. Hence i0
strictly prefers to inform j0, a contradiction.
Next we move on to segment M_k. We proceed in the same three steps. First,
suppose everyone in (−∞, b̄_{k−1}] is informed, but no one in the top segment is informed.
Then we argue that no one else ever becomes informed. The proof is by backward
induction from the time at which the activity rule binds: suppose that we are at a
history such that, since the last time a player got informed, each informed player
has passed at least K times on informing each of her uninformed neighbors, except
for one pair (i0, j0) where i0 has passed on informing j0 only K − 1 times, and i0
currently has a chance to inform j0. Since j0 is uninformed, by assumption he is in
the top segment, so informing him will lead to everyone becoming informed. Hence
the continuation payoff after messaging him (forgetting about the payoffs generated
by players j′ < i0) is
U_{i0} = Σ_{j∈M_{k+1}} [−(bj − b_{i0})²] + O(1 − δ).
The payoff from not messaging him can be written as

U′_{i0} = P Σ_{j∈M_{k+1}} [−(bj + θ(j) − b_{i0} − 1)²] + (1 − P) Σ_{j∈M_{k+1}} [−(bj − b_{i0})² + O(1 − δ)],

where P is the probability that no one else sends a message in this period. The key
observation is that, because α < 1, P > 0 (in fact P ≥ (1 − α)^{n−1}). And, of course, the
expression Σ_{j∈M_{k+1}} [−(bj − b_{i0})² + (bj + θ(j) − b_{i0} − 1)²] is negative for γ0 low enough
and δ high enough by the construction of the thresholds. Hence i0 prefers not to send
a message, and the same is true of all other informed players. Then, if we define
the vector (Zij)ij as denoting in each coordinate the number of times each informed
player i has passed on informing each uninformed player j, we can do induction on
Z. For instance, if Z = (K− 2, K, . . . ,K), then all players know that if the sender in
the first pair passes one more time, they will end up in the case (K − 1, K, . . . ,K) in
which no new messages will be sent, which is equivalent to the activity rule binding.
Then they also won’t send messages when Z = (K − 2, K, . . . ,K), etc.
Second, we show that if a player i0 in [b̄_{k−1} − (1 − γ0)/(2 − γ0), b̄_{k−1}] is informed, then everyone
with bias up to b̄_{k−1} becomes informed (but no one above b̄_{k−1} becomes informed),
and third, we show that the same is true if a player i0 in [b̄_{k−2}, b̄_{k−1}] is informed. The
proofs are analogous to those given for the top segment.
This argument can be iterated for all segments and similarly for θ = −1.
Proof of Proposition 3. To begin, construct a natural segmented equilibrium when
γ0 = 0. This can be done in exactly the same fashion as in Proposition 2: since
informed players have common knowledge of θ, and the beliefs of uninformed players
are independent of the exact distribution of θ as long as E(θ) = 0, the continuations
after Nature picks each possible value of θ can be solved separately. Indeed, the case
of any θ > 0 is equivalent to θ = 1 in the binary case if we multiply all the biases
by 1/θ, and an analogous argument applies to θ < 0. Finally, for θ = 0, everyone is
indifferent at all times. Denote such a natural segmented equilibrium for the case γ0 = 0
by σ0.
Next we move on to the case γ0 > 0. First we show that, for small γ0 > 0, we have
θ(1) < . . . < θ(n). The argument goes as follows. Calculate the long-run posterior
under ignorance for players 1, 2, . . . , n under the assumptions that γ0 = γ > 0 but σ0
is still played. Denote these by θ^d(1, γ), . . . , θ^d(n, γ). It can be shown directly that
θ^d(1, γ) < . . . < θ^d(n, γ), and moreover that ∂θ^d(1, γ)/∂γ < . . . < ∂θ^d(n, γ)/∂γ.
Next, note that there is a function θ∗(γ) such that, for |θ| ≥ θ∗(γ), the partition
dictated by σ0 after θ is drawn is still a strict equilibrium when γ0 = γ (in fact,
this is true no matter what the posteriors under ignorance are, as long as they lie
in [−γ0/(2 − γ0), γ0/(2 − γ0)] at all times); and moreover θ∗(γ) → 0 as γ → 0. It follows that
any equilibrium for γ0 = γ must have posteriors under ignorance that differ from
θ^d(1, γ), . . . , θ^d(n, γ) by at most o(γ), whence ∂θ(i, γ)/∂γ = ∂θ^d(i, γ)/∂γ. This proves
θ(1) < . . . < θ(n) for small γ0.
Next we show how to characterize the partition thresholds b1(θ), . . . , b_{kθ}(θ) for
general θ and small γ0 > 0. We can find thresholds γ∗ and θ∗ such that, for |θ| > θ∗,
the equilibrium segments are independent of γ0 for all values of γ0 ∈ [0, γ∗]; and, for
|θ| < θ∗ and all γ0 ∈ [0, γ∗], the equilibrium segments must be made up of single
players.¹⁵
For |θ| < θ∗ but θ /∈ [θ(1), θ(n)], since segments are made up of single players,
behavior must clearly be the same as in the large gaps case of Proposition 1. For
θ ∈ [θ(1), θ(n)], a different argument applies.
First, assume that t is large enough that posteriors under ignorance are close to
their long-run values. If θ(i) < θ < θ(i+ 1), then i and i+ 1 want everyone to learn
θ. Since the network is complete, they can make this happen unilaterally.
Next, consider i− 1 and i+ 2. i− 1 wants to tell everyone except for i, and i+ 2
wants to tell everyone except for i+1. However, telling i implies telling i+1 and vice
versa, as they communicate. Now suppose WLOG that θ ≥ (θ(i) + θ(i + 1))/2, i.e., revealing
θ to i and i + 1 increases the average of their actions. Then i + 2 prefers informing
both to informing neither, and hence everyone is informed. The reverse argument
applies if θ < (θ(i) + θ(i + 1))/2.
More generally, suppose that once any member of {j, j+ 1, . . . , l} is informed, ev-
eryone quickly becomes informed. By the same argument, both j−1 and l+1 want to
inform everyone outside of {j, j+1, . . . , l}, and at least one of them prefers informing
all the members of {j, j + 1, . . . , l} to informing none of them. This argument stops
when j = 1 or l = n, and in particular implies that i and i+ 1 belong to an extremal
segment. The incentives of the remaining players are as in the case θ /∈ [θ(1), θ(n)].
¹⁵ The reason is that, as noted above, having state θ > 0 is equivalent to having state θ = 1 with all biases multiplied by 1/θ; for θ small enough, this results in all the adjusted biases differing by more than 1.
Finally, by the assumed symmetry of (b1, . . . , bn), when θ = 0 ∈ [θ(1), θ(n)], the
above argument will run out of players on both ends simultaneously, whence there is
a single segment {1, . . . , n}. By continuity, this is true for all states in a neighborhood
of 0.
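The segment-growing argument above can be sketched in a few lines; `trigger_interval` is a hypothetical name, and the rule used to pick which outside neighbor joins (the higher one iff θ weakly exceeds the current set's average posterior) is one reading of the "at least one of them prefers" step:

```python
def trigger_interval(x, theta):
    # x: increasing long-run posteriors under ignorance; theta in (x[0], x[-1]).
    # Start from the adjacent pair straddling theta, then add the outside
    # neighbor willing to inform the current set (the higher neighbor iff theta
    # weakly exceeds the set's average posterior), stopping at an extreme.
    n = len(x)
    j = max(i for i in range(n - 1) if x[i] < theta)  # x[j] < theta < x[j+1]
    l = j + 1
    while j > 0 and l < n - 1:
        avg = sum(x[j:l + 1]) / (l - j + 1)
        if theta >= avg:
            l += 1
        else:
            j -= 1
    return j, l

# Symmetric posteriors and theta = 0: the interval reaches an extreme player.
assert trigger_interval([-0.2, -0.1, 0.1, 0.2], 0.0) == (1, 3)
```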
Proof of Proposition 4. First, consider the case where the tree is a line, i.e., there is a
permutation σ of (1, . . . , n) such that σ(i) is connected to σ(i+ 1) for i = 1, . . . , n−1
and there are no other links. Moreover, suppose that 0 < γ0 = γ({σ(1)}) < 1, i.e.,
Nature only ever informs σ(1).
First, for simplicity, assume γ0 = 0 and δ = 1. For θ = 1, −1, let ∆ij(θ) =
−(bj − bi)² + (bj − bi − θ)² be i's net payoff from j becoming informed (note that,
since γ0 = 0, θ(j, t) = 0). Then the unique equilibrium is the solution to the following
algorithm:
(i) If ∆_{σ(n−1)σ(n)}(θ) > 0, then m(θ, σ(n − 1), σ(n), t) = 1 for all t, and we define
M(n − 1, θ) = 1. Else, if ∆_{σ(n−1)σ(n)}(θ) < 0, then m(θ, σ(n − 1), σ(n), t) = 0 for
all t and M(n − 1, θ) = 0.

(ii) For k < i ≤ n, let N(k, i, θ) = Π_{m=k+1}^{i−1} M(m, θ) and

Σ^lim_{σ(k)}(θ) = Σ_{i=k+1}^{n} ∆_{σ(k)σ(i)}(θ) N(k, i, θ).

If Σ^lim_{σ(k)}(θ) > 0, then m(θ, σ(k), σ(k + 1), t) = 1 for all t, and M(k, θ) = 1. Else,
if Σ^lim_{σ(k)}(θ) < 0, then m(θ, σ(k), σ(k + 1), t) = 0 for all t and M(k, θ) = 0.

When γ0 = 0, these conditions must be satisfied by any equilibrium, and they pin
down an essentially unique strategy profile in the generic case where none of the
(finitely many) expressions ∆ij(θ) equal zero.¹⁶
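Steps (i) and (ii) amount to a single backward-induction pass along the line; a minimal sketch (0-based positions, generic case with no weighted sum exactly zero; `solve_line` is an illustrative name, not the paper's notation):

```python
def solve_line(b, theta):
    # Backward induction along the line (0-based positions; gamma_0 = 0,
    # delta -> 1 limit). M[k] = 1 iff the player at position k informs the
    # player at position k + 1.
    n = len(b)
    M = [0] * (n - 1)
    for k in range(n - 2, -1, -1):
        total = 0.0
        reach = 1  # N(k, i): product of M over positions strictly between k and i
        for i in range(k + 1, n):
            # Delta_{ki}(theta), weighted by whether position i is ever reached.
            total += (-(b[i] - b[k]) ** 2 + (b[i] - b[k] - theta) ** 2) * reach
            if i < n - 1:
                reach *= M[i]
                if reach == 0:
                    break
        M[k] = 1 if total > 0 else 0
    return M

# Example with theta = 1: the moderate neighbor (bias 0.3) gets informed, but
# she does not pass the message to the distant extremist (bias 0.9).
assert solve_line([0.0, 0.3, 0.9], 1) == [1, 0]
```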
Next consider the case γ0 > 0 and δ < 1. Note that, if the equilibrium continuation
after σ(k + 1) is informed at some t has already been shown to be unique, and σ(k)'s
net continuation payoff from informing σ(k + 1) is always positive or always negative,
regardless of t, then σ(k)'s optimal strategy is either to always inform σ(k + 1) (if the net
¹⁶ By essentially unique we mean that superfluous messages may be sent between already informed players, but they make no difference.
payoff is positive) or to never inform (if it is negative). This net payoff is of the form

Σ_{σ(k),t}(θ) = Σ_{s=t}^{∞} δ^{s−t} Σ_{i=k+1}^{n} [−(b_{σ(i)} − b_{σ(k)})² + (b_{σ(i)} + θ(σ(i), s) − b_{σ(k)} − θ)²] N(k, i, θ, s, t),

where θ(σ(i), s) is σ(i)'s posterior under ignorance at time s ≥ t, and N(k, i, θ, s, t)
is the probability σ(i) will become informed by period s, if σ(k) informed σ(k + 1) in period
t. Since |θ(σ(i), s)| ≤ γ0/(2 − γ0), for low γ0 the term in brackets converges to ∆_{σ(k)σ(i)}(θ).
And, if the continuation after σ(k + 1) is informed has every player using always-message
or never-message strategies, then either N(k, i, θ, s, t) = 0 for all s ≥ t or N(k, i, θ, s, t)
converges quickly to 1 as s → ∞. Hence Σ_{σ(k),t}(θ) → Σ^lim_{σ(k)}(θ) as γ0 → 0 and δ → 1,
so the unique equilibrium is given by the same strategies. Using the same argument,
we can prove that there is generically a unique equilibrium for any δ < 1 if γ0 is low,
although the equilibrium strategies may no longer be the same.
Note that σ(1) being the informed player is not essential. If a player σ(i) is
informed for 1 < i < n, then effectively two disjoint games are played: σ(i) is the
initial player in the lines (i, i− 1, . . . , 1) and (i, i + 1, . . . , n), which can be solved as
above.
Now, return to the case of a general tree. Since γ(S) > 0 only for singleton S,
in each non-trivial realization of the game there is a single player i who is initially
informed. Then N − {i} can be partitioned into a collection of disjoint trees, such
that i is connected to exactly one player in each tree. We can solve this case by
backward induction in the same fashion.
Proposition 7. Let G(n, p) be a large Erdős–Rényi random graph, where the pop-
ulation N = {1, . . . , n} and the distribution of biases (b1, . . . , bn) are fixed. Let
K = 2(bn − b1)(2 − γ0)/(1 − γ0) and m = n/K. Suppose that (b1, . . . , bn) are roughly uniformly
distributed, i.e., (1 − ε)(bj − bi)/(bn − b1) ≤ |{i, . . . , j}|/n ≤ (1 + ε)(bj − bi)/(bn − b1) whenever bj − bi ≥ (1 − γ0)/(2(2 − γ0)).
Let np = (ln m + c)K/(1 − ε). Then (N, G) is well-connected with probability greater than
e^{−e^{−c}(3K−1)/(1−ε)} as n → ∞.
Proof of Proposition 7. Partition the interval [b1, bn] into subintervals of size Z = (1 − γ0)/(2(2 − γ0)). In other words, write [b1, bn] = I1 ∪ . . . ∪ Im, where Il = [b1 + (l − 1)Z, b1 + lZ)
for l < m and Im = [b1 + (m − 1)Z, bn].
The proof is in two steps. First we argue that, to guarantee well-connectedness,
it is sufficient that:
(i) each interval Il (1 < l < m) is connected;
(ii) each i ∈ Il (for 1 < l < m) has a link to some agent in Il−1 and a link to some
agent in Il+1;
(iii) each i ∈ I1 has a link to some agent in I2;
(iv) each i ∈ Im has a link to some agent in Im−1.
The proof is as follows. Let (a, b) ⊆ [b1, bn] be an interval such that b − a ≥ (1 − γ0)/(2 − γ0).
By construction, (a, b) contains at least one subinterval Il (for 1 < l < m). Let i be
an agent with bi ∈ (a, b), and suppose she belongs to Il′ . We argue that i is connected
to some j ∈ Il through a path within (a, b). Indeed, if l′ = l, this is trivial. If l′ < l,
then i is connected to some jl′+1 ∈ Il′+1, who is connected to some jl′+2 ∈ Il′+2, and so
on until some j ∈ Il. Moreover, by construction Il′+1, . . . , Il ⊆ (a, b). The case where
l′ > l is analogous. Hence, given two agents i, j in (a, b), they are both connected
within (a, b) through Il.
The second step is to calculate the probability P that a random network satisfies
the above conditions. We can provide a lower bound for P:

P ≥ [Π_{i=2}^{m−1} P_i] [Π_{l=2}^{m−1} (1 − (1 − p)^{|I_l|})^{|I_{l−1}|+|I_{l+1}|}] (1 − (1 − p)^{|I_1|})^{|I_2|} (1 − (1 − p)^{|I_m|})^{|I_{m−1}|},

where P_i is the probability that I_i is connected. (P is larger than the right-hand side
because i ∈ I_l being connected to someone in I_{l+1} is correlated with agents in I_{l+1} being
connected to someone in I_l.)
By our assumptions, |I_l|/n ≥ (1 − ε)Z/(bn − b1) = (1 − ε)/K, i.e., |I_l| ≥ (1 − ε)m =: m′. On the other
hand, p = (ln m + c)/((1 − ε)m) = (ln m′ + c + ln(1 − ε))/m′.
It is known that, in an Erdős–Rényi random graph G(a, q) with link probability
q = (ln a + c)/a, the probability of the graph being connected converges to e^{−e^{−c}} as a → ∞
(Erdős and Rényi, 1960). Therefore lim P_i ≥ e^{−e^{−c−ln(1−ε)}}.

In addition, (1 − p)^a → e^{−ap} as a → ∞ if ap converges to a constant. Hence
(1 − p)^{|I_l|} ≈ e^{−|I_l|p}, and (1 − (1 − p)^{|I_l|})^{|I_{l′}|} ≈ e^{−e^{−|I_l|p}|I_{l′}|}. Since (1 − ε)m ≤ |I_l| ≤
(1 + ε)m, we have

e^{−e^{−|I_l|p}|I_{l′}|} ≥ e^{−e^{−(1−ε)m·(ln m+c)/((1−ε)m)}·(1+ε)m} = e^{−e^{−c}(1+ε)}.
Thus

lim P ≥ e^{−e^{−c−ln(1−ε)}(m−2)} e^{−e^{−c}(1+ε)(2m−2)} ≥ e^{−e^{−c}(3m−4)/(1−ε)} ≥ e^{−e^{−c}(3K−1)/(1−ε)}.

The last step follows from the fact that, by construction, m = ⌈K⌉, so m ≤ K + 1.
If we take n = 320,000,000, γ0 small and (b1, bn) = (−2, 2), then K = 16,
m = 20,000,000, ln m ≈ 16.81, and P ≥ e^{−47e^{−c}}. Then, if we take c = 8.46, at least
99% of the resulting networks are well-connected. This choice of c implies an average
degree np ≈ 405.
If there is homophily, the condition that a player's average global degree be (ln m +
c)K is no longer required; it suffices for her average degree, restricted to a set of 3m
similarly biased players, to be 3(ln m + c). In our example, 3m/n = 18.75% of the
population, and 3(ln m + c) ≈ 76.
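The constants in this worked example can be reproduced directly (a quick numerical check; ε is taken as ≈ 0, matching the text's back-of-envelope figures):

```python
import math

# Reproduces the constants in the worked example: gamma_0 ~ 0, (b1, bn) = (-2, 2).
gamma0, b1, bn, n = 0.0, -2.0, 2.0, 320_000_000

K = 2 * (bn - b1) * (2 - gamma0) / (1 - gamma0)  # = 16
m = n / K                                        # players per interval: 20,000,000
assert K == 16.0 and m == 20_000_000.0

ln_m = math.log(m)                               # ~ 16.81
c, eps = 8.46, 0.0                               # eps ~ 0 for this check

P_lower = math.exp(-math.exp(-c) * (3 * K - 1) / (1 - eps))
assert P_lower > 0.99                            # "at least 99% well-connected"

avg_degree = (ln_m + c) * K / (1 - eps)          # close to the 405 quoted in the text
assert abs(avg_degree - 405) < 2

# Homophily variant: degree within a set of 3m similarly biased players.
assert 3 * m / n == 0.1875                       # 18.75% of the population
assert abs(3 * (ln_m + c) - 76) < 1              # ~ 76
```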
Proof of Proposition 5. The proof is simple: take the segments from the equilibrium
defined in Proposition 2, and construct a strategy profile where, when θ = 1, each
player i ∈ M_l always messages every member of M_l and lower segments, but no
one in higher segments; and analogously for θ = −1. By construction, this strategy
profile is natural segmented. For it to not be an equilibrium, there would have to be
a strictly profitable deviation for some player. But the only deviations which change
the long-run payoffs are those that send messages to the wrong segments (e.g., when
θ = 1, i ∈ M_l would have to message someone in M_{l′} with l′ > l), which are strictly
unprofitable by construction.
Proof of Proposition 6. The proof is in two parts. First, we consider a modified game
with a restricted space of message strategies such that, if i < j and j is in a segment
higher than i's (according to the thresholds calculated in Proposition 2), then i is not
allowed to message j when θ = 1, unless i has previously received a message from a
player in j's segment or higher. Impose also the analogous condition for θ = −1.
The game played on this restricted strategy space must have a sequential equilibrium, by the standard existence theorem.¹⁷

¹⁷ The fact that the game is infinite is not problematic, as it is continuous at infinity, and information sets and action spaces are finite for every t. I.e., fix a strategy profile σ0 for the infinite game; consider the reduced (finite) game where strategies after time t0 are given by σ0; take a sequential equilibrium of it, for every t0; then consider the sequence generated by taking t0 → ∞ and take a convergent subsequence.

Next, we argue that intra-segment transmission must occur, and it must be “fast” as δ → 1. To do this, WLOG, let θ = 1 and
take a segment [bm, bm+1). Let i be a popular agent such that bi ∈ [bm+1 − (1 − γ0)/(2 − γ0), bm+1),
and suppose that i is informed. Then note that, for γ0 low enough, i's highest possible
flow payoff, if no higher segments are informed, is

U∗_i = − Σ_{j: bj≤bm+1} (bj − bi)²,

and this is attained only when everyone in [b1, bm+1) is informed, i.e., i wants to tell
everyone up to segment m + 1.¹⁸
Moreover, since i is popular, she can guarantee herself a payoff of the form

U̲_i = − [δα/(n−1)] / [1 − δ + δα/(n−1)] · Σ_{bj∈[b1,bm+1)} (bj − bi)² − [1 − δ] / [1 − δ + δα/(n−1)] · Σ_{bj∈[b1,bm+1)} (bj + θ(j) − bi − 1)²

= − Σ_{bj∈[b1,bm+1)} (bj − bi)² + O(1 − δ)

by simply telling everybody whenever possible. Hence, in any equilibrium and any
continuation where i is informed, U_i must be at least U̲_i. This implies that there
must be a constant Q such that, in any continuation and for any j with bj ≤ bm+1,
Σ_{s=t}^{∞} δ^{s−t} P(j uninformed at time s) ≤ Q, and of course (1 − δ)Q → 0 as δ → 1. In
other words, if i is informed, everyone else in segments up to m + 1 must become
informed quickly as well, for high δ.¹⁹
Next, suppose some other player i′ with b_{i′} ∈ [bm+1 − (1 − γ0)/(2 − γ0), bm+1) is informed. If
i′ is connected directly to i, then i being informed also implies everyone else being
informed with expected delay at most Q′, where again (1− δ)Q′ → 0 as δ → 1: i′ can
just tell i to guarantee that everyone will be informed quickly. Inductively, the same
reasoning applies if i′ is connected by a path of length k to i. This covers everyone
in [bm+1 − (1 − γ0)/(2 − γ0), bm+1), since (N, G) is well-connected.
Now, suppose the highest i′ ∈ [bm, bm+1 − (1 − γ0)/(2 − γ0)) is informed. She can again guarantee that everyone in segments up to m + 1 will be informed with small expected delay,
¹⁸ For ease of notation, we have left out the terms corresponding to payoffs generated by players in higher segments, as they cannot be interacted with at all.

¹⁹ Note that this does not mean i actually informs everyone in equilibrium; i may send very few messages if someone else is sharing the information.
by telling anyone in [bm+1 − (1 − γ0)/(2 − γ0), bm+1) (and, by well-connectedness, she will know
such a person). Although this is no longer obviously optimal, since i′ may prefer to
inform some people in [bm+1 − (1 − γ0)/(2 − γ0), bm+1) and exclude others, we already know that
informing anyone in that group induces fast transmission; and, for low enough γ0 and
high δ, i′ would rather tell all of this interval than none of it, since by construction
Σ_{i′<j: bj<bm+1} ∆U(1, i′, j) > 0. We can then repeat the argument with the next-highest
agent, and so on.
Hence, there is always fast transmission within segments, and to lower (higher)
segments when θ = 1 (θ = −1).
To finish the proof, we must check that the equilibrium we found is also an equi-
librium of the game with unconstrained strategies, i.e., players must not want to send
messages to higher segments for θ = 1 and vice versa, if they are not sure that those
segments are already informed. This is true since there is fast transmission within
segments, and transmission to an entire higher (for θ = 1) segment is unprofitable by
the construction of the thresholds bm.