SHARED INFORMATION
Prakash Narayan with Imre Csiszár, Sirin Nitinawarat, Himanshu Tyagi, Shun Watanabe


Page 1:

SHARED INFORMATION

Prakash Narayan

with

Imre Csiszár, Sirin Nitinawarat, Himanshu Tyagi, Shun Watanabe

Page 2:

Outline

Two-terminal model: Mutual information

Operational meaning in:

I Channel coding: channel capacity

I Lossy source coding: rate distortion function

I Binary hypothesis testing: Stein’s lemma

Interactive communication and common randomness

I Two-terminal model: Mutual information

I Multiterminal model: Shared information

Applications

2/41

Page 3:

Outline

Two-terminal model: Mutual information

Operational meaning in:

I Channel coding: channel capacity

I Lossy source coding: rate distortion function

I Binary hypothesis testing: Stein’s lemma

Interactive communication and common randomness

Applications

3/41

Page 4:

Mutual Information

Mutual information is a measure of mutual dependence between two rvs.

Let X1 and X2 be R-valued rvs with joint probability distribution PX1X2.

The mutual information between X1 and X2 is

    I(X1 ∧ X2) = E_{PX1X2}[ log ( dPX1X2 / d(PX1 × PX2) )(X1, X2) ],  if PX1X2 ≪ PX1 × PX2;
               = ∞,  if PX1X2 is not ≪ PX1 × PX2;

and in either case

    I(X1 ∧ X2) = D( PX1X2 || PX1 × PX2 ).   (Kullback-Leibler divergence)

When X1 and X2 are finite-valued,

    I(X1 ∧ X2) = H(X1) + H(X2) − H(X1, X2)
               = H(X1) − H(X1 | X2) = H(X2) − H(X2 | X1)
               = H(X1, X2) − [ H(X1 | X2) + H(X2 | X1) ].
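As a numerical sanity check (not part of the talk), a short Python sketch with a made-up 2x2 joint pmf confirms that the divergence form and the entropy identities above agree; entropies are in bits.

```python
import numpy as np

def entropy(p):
    """Shannon entropy in bits of a pmf given as a flat array."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# Hypothetical joint pmf P_{X1X2} on a 2x2 alphabet (illustrative values).
P = np.array([[0.4, 0.1],
              [0.1, 0.4]])
P1, P2 = P.sum(axis=1), P.sum(axis=0)   # marginals of X1 and X2

# Divergence form: D(P_{X1X2} || P_{X1} x P_{X2}).
I_kl = np.sum(P * np.log2(P / np.outer(P1, P2)))

# Entropy identity: I = H(X1) + H(X2) - H(X1, X2).
I_ent = entropy(P1) + entropy(P2) - entropy(P.ravel())

print(I_kl, I_ent)   # both ≈ 0.2781 bits; the identities agree
```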

4/41


Page 6:

Channel Coding

Let X1 and X2 be finite alphabets, and W : X1 → X2 be a stochastic matrix.

[Block diagram: message m ∈ {1, …, M} → encoder f → (x11, …, x1n) → DMC → (x21, …, x2n) → decoder φ → m̂]

Discrete memoryless channel (DMC):

    W^(n)(x21, …, x2n | x11, …, x1n) = ∏_{i=1}^{n} W(x2i | x1i).

5/41

Page 7:

Channel Capacity

[Block diagram: message m ∈ {1, …, M} → encoder f → (x11, …, x1n) → DMC → (x21, …, x2n) → decoder φ → m̂]

Goal: Make the code rate (1/n) log M as large as possible while keeping

    max_m P( φ(X21, …, X2n) ≠ m | f(m) )

small, in the asymptotic sense as n → ∞.

[C.E. Shannon, 1948]

Channel capacity:  C = max_{PX1 : PX2|X1 = W} I(X1 ∧ X2).
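The maximization defining C is a concave program over PX1; the Blahut-Arimoto algorithm is the standard numerical approach. Below is a minimal sketch (illustrative, not from the talk), assuming a strictly positive channel matrix W; for the binary symmetric channel it recovers the closed form C = 1 − h(0.1) ≈ 0.531 bits.

```python
import numpy as np

def blahut_arimoto(W, iters=200):
    """Approximate C = max_{PX1} I(X1 ^ X2) for a channel matrix W[x1, x2].
    Assumes all entries of W are positive (true for the BSC below)."""
    n_in = W.shape[0]
    p = np.full(n_in, 1.0 / n_in)             # start from the uniform input
    for _ in range(iters):
        q = p[:, None] * W                    # joint pmf p(x1) W(x2|x1)
        q = q / q.sum(axis=0, keepdims=True)  # backward channel q(x1|x2)
        r = np.exp(np.sum(W * np.log(q), axis=1))
        p = r / r.sum()                       # p(x1) ∝ exp(Σ_x2 W log q)
    q = p[:, None] * W
    # mutual information at the final input distribution, in bits
    return np.sum(q * np.log2(W / q.sum(axis=0, keepdims=True)))

# Binary symmetric channel, crossover 0.1: C = 1 - h(0.1) ≈ 0.531 bits.
W = np.array([[0.9, 0.1],
              [0.1, 0.9]])
print(blahut_arimoto(W))
```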

6/41

Page 8:

Lossy Source Coding

Let {X1t}∞t=1 be an X1-valued i.i.d. source.

[Block diagram: source (x11, …, x1n) → encoder f → index j ∈ {1, …, J} → decoder φ → reproduction (x21, …, x2n)]

Distortion measure:

    d( (x11, …, x1n), (x21, …, x2n) ) = (1/n) ∑_{i=1}^{n} d(x1i, x2i).

7/41

Page 9:

Rate Distortion Function

[Block diagram: source (x11, …, x1n) → encoder f → index j ∈ {1, …, J} → decoder φ → reproduction (x21, …, x2n)]

Goal: Make the (compression) code rate (1/n) log J as small as possible while keeping

    P( (1/n) ∑_{i=1}^{n} d(X1i, X2i) ≤ ∆ )

large, in the asymptotic sense as n → ∞.

[Shannon, 1948, 1959]

Rate distortion function:  R(∆) = min_{PX2|X1 : E[d(X1, X2)] ≤ ∆} I(X1 ∧ X2).
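As a concrete instance (a textbook fact, not stated on the slide): a Bernoulli(p) source under Hamming distortion has R(∆) = h(p) − h(∆) for 0 ≤ ∆ ≤ min(p, 1 − p), where h is the binary entropy. A two-line check:

```python
import numpy as np

def h(x):
    """Binary entropy in bits."""
    return 0.0 if x in (0.0, 1.0) else -x*np.log2(x) - (1 - x)*np.log2(1 - x)

# Bernoulli(p) source, Hamming distortion: R(Δ) = h(p) - h(Δ).
p, delta = 0.5, 0.11
print(h(p) - h(delta))   # ≈ 0.5: a fair bit compresses to about half a bit
                         # per symbol if ~11% bit errors are tolerated
```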

8/41

Page 10:

Simple Binary Hypothesis Testing

Let {(X1t, X2t)}∞t=1 be an X1 ×X2-valued i.i.d. process generated according to

H0 : PX1X2 or H1 : PX1 × PX2 .

Test:

Decides H0 w.p. T (0 | x11, . . . , x1n, x21, . . . , x2n) ,

H1 w.p. T (1 | x11, . . . , x1n, x21, . . . , x2n) = 1− T (0 | . . .) .

Stein’s lemma [H. Chernoff, 1956]: For every 0 < ε < 1,

    lim_n − (1/n) log   inf_{T : PH0(T says H0) ≥ 1−ε}   PH1(T says H0)

        = D( PX1X2 || PX1 × PX2 ) = I(X1 ∧ X2).
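The exponent here is driven by the law of large numbers: under H0 the normalized log-likelihood ratio concentrates at D(PX1X2 || PX1 × PX2). A minimal simulation sketch (reusing the same illustrative pmf as in the mutual-information check above, not from the talk):

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative joint pmf on {0,1}^2.
P = np.array([[0.4, 0.1],
              [0.1, 0.4]])
P1, P2 = P.sum(axis=1), P.sum(axis=0)
llr = np.log2(P / np.outer(P1, P2))   # per-sample log-likelihood ratio

# Draw n i.i.d. pairs under H0 and average the log-likelihood ratio.
n = 100_000
idx = rng.choice(4, size=n, p=P.ravel())
x1, x2 = np.unravel_index(idx, P.shape)
print(llr[x1, x2].mean())             # ≈ D(PX1X2 || PX1 x PX2) ≈ 0.278 bits
```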

9/41

Page 11:

Outline

Two-terminal model: Mutual information

Interactive communication and common randomness

I Two-terminal model: Mutual information

I Multiterminal model: Shared information

Applications

10/41

Page 12:

Multiterminal Model

[Figure: terminals observing X1, X2, …, Xm, connected by a communication network carrying F]

I Set of terminals: M = {1, …, m}.

I X1, …, Xm are finite-valued rvs with known joint distribution PX1…Xm on X1 × · · · × Xm.

I Terminal i ∈M observes data Xi.

I Multiple rounds of interactive communication on a noiseless channel of unlimited capacity; all terminals hear all communication.

11/41

Page 13:

Interactive Communication

I Assume: Communication occurs in consecutive time slots in r rounds.

I The corresponding rvs representing the communication are

    F = F(X1, …, Xm) = (F11, …, F1m, F21, …, F2m, …, Fr1, …, Frm)

  – F11 = f11(X1), F12 = f12(X2, F11), …
  – Fji = fji(Xi; all previous communication).

Simple communication: F = (F1, …, Fm), Fi = fi(Xi), 1 ≤ i ≤ m.

A. Yao, “Some complexity questions related to distributive computing,” Proc. Annual Symposium on Theory of Computing, 1979.

12/41

Page 14:

Applications

[Figure: terminals observing X1, X2, …, Xm, connected by a communication network carrying F]

I Data exchange: Omniscience

I Signal recovery: Data compression

I Function computation

I Cryptography: Secret key generation

13/41

Page 15:

Example: Function Computation

[Figure: Terminal 1 observes X1 = (X11, X12), Terminal 2 observes X2 = (X21, X22); they exchange messages F1, F2]

[S. Watanabe]

I X11, X12, X21, X22 are mutually independent (0.5, 0.5) bits.

I Terminals 1 and 2 wish to compute:

    G = g(X1, X2) = 1{ (X11, X12) = (X21, X22) }.

I Simple communication: F = (F1 = (X11, X12), F2 = (X21, X22)).

– Communication complexity: H(F) = 4 bits.

– No privacy: Terminal 1 or 2, or an observer of F, learns all the data X1, X2.

14/41

Page 16:

Example: Function Computation

[Figure: Terminal 1 observes X1 = (X11, X12), Terminal 2 observes X2 = (X21, X22); they exchange messages F11, F12]

I An interactive communication protocol:

– F = (F11 = (X11, X12), F12 = G).

– Complexity: H(F) = H(X11, X12) + H(G | X11, X12) = 2 + h(1/4) ≈ 2.81 bits.

– Some privacy: Terminal 2, or an observer of F, learns X1; Terminal 1, or an observer of F, either learns X2 (w.p. 0.25, when G = 1) or learns only that X2 differs from X1 (w.p. 0.75).

¿ Can a communication complexity of 2.81 bits be bettered ?
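A quick enumeration (illustrative, not on the slide) confirms the figure: F takes 8 values of the form ((X11, X12), G), and H(F) = 2 + h(1/4) ≈ 2.81 bits.

```python
import itertools
from collections import Counter
import numpy as np

# Enumerate the 16 equally likely values of (X11, X12, X21, X22)
# and tabulate the pmf of F = (F11, F12) = ((X11, X12), G).
counts = Counter()
for x11, x12, x21, x22 in itertools.product([0, 1], repeat=4):
    g = int((x11, x12) == (x21, x22))
    counts[(x11, x12, g)] += 1

p = np.array(list(counts.values())) / 16.0
print(-np.sum(p * np.log2(p)))   # ≈ 2.811 bits = 2 + h(1/4)
```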

15/41


Page 18:

Related Work

I Exact function computation

– Yao ’79: Communication complexity.
– Gallager ’88: Algorithm for parity computation in a network.
– Giridhar-Kumar ’05: Algorithms for computing functions over sensor networks.
– Freris-Kowshik-Kumar ’10: Survey: connectivity, capacity, clocks, computation in large sensor networks.
– Orlitsky-El Gamal ’84: Communication complexity with secrecy.

I Information theoretic function computation

– Körner-Marton ’79: Minimum rate for computing parity.
– Orlitsky-Roche ’01: Two-terminal function computation.
– Nazer-Gastpar ’07: Computation over noisy channels.
– Ma-Ishwar ’08: Distributed source coding for interactive computing.
– Ma-Ishwar-Gupta ’09: Multiround function computation in colocated networks.
– Tyagi-Gupta-Narayan ’11: Secure function computation.
– Tyagi-Watanabe ’13, ’14: Secrecy generation, secure computing.

I Compressing interactive communication

– Schulman ’92: Coding for interactive communication.
– Braverman-Rao ’10: Information complexity of communication.
– Kol-Raz ’13, Haeupler ’14: Interactive communication over noisy channels.

16/41

Page 19:

Mathematical Economics: Mechanism Design

– Thomas Marschak and Stefan Reichelstein, “Communication requirements for individual agents in networks and hierarchies,” in The Economics of Informational Decentralization: Complexity, Efficiency and Stability: Essays in Honor of Stanley Reiter, John O. Ledyard, Ed., Springer, 1994.

– Kenneth R. Mount and Stanley Reiter, Computation and Complexity in Economic Behavior and Organization, Cambridge U. Press, 2002.

Courtesy: Demos Teneketzis

17/41

Page 20:

Common Randomness

[Figure: terminals observing X1, X2, …, Xm; communication F over the network; local estimates L1, L2, …, Lm ≅ L]

For 0 ≤ ε < 1, given interactive communication F, a rv L = L(X1, …, Xm) is ε-CR for the terminals in M using F, if there exist local estimates

    Li = Li(Xi, F),  i ∈ M,

of L satisfying

    P( Li = L, i ∈ M ) ≥ 1 − ε.

18/41

Page 21:

Common Randomness

[Figure: terminals observing X1, X2, …, Xm; communication F over the network; local estimates L1, L2, …, Lm ≅ L]

Examples:

I Data exchange: Omniscience: L = (X1, …, Xm).

I Signal recovery: Data compression: L ⊇ Xi*, for some fixed i* ∈ M.

I Function computation: L ⊇ g(X1, …, Xm) for a given g.

I Cryptography: Secret CR, i.e., secret key: L with I(L ∧ F) ≅ 0.

19/41

Page 22:

A Basic Operational Question

[Figure: terminals observing X1, X2, …, Xm; communication F over the network; local estimates L1, L2, …, Lm ≅ L]

¿ What is the maximal CR, as measured by H(L|F), that can be generated by a given interactive communication F for a distributed processing task ?

Answer in two steps:

I Fundamental structural property of interactive communication

I Upper bound on amount of CR achievable with interactive communication.

Shall start with the case of m = 2 terminals.

20/41


Page 24:

Fundamental Property of Interactive Communication

[Figure: Terminals 1 and 2 observe X1 and X2; interactive communication F over the network]

Lemma: [U. Maurer], [R. Ahlswede - I. Csiszar]

For interactive communication F of Terminals 1 and 2 observing data X1 and X2, respectively,

    I(X1 ∧ X2 | F) ≤ I(X1 ∧ X2).

In particular, independent rvs X1, X2 remain so upon conditioning on an interactive communication.

21/41

Page 25:

Fundamental Property of Interactive Communication

Lemma: [U. Maurer], [R. Ahlswede - I. Csiszar]

For interactive communication F of Terminals 1 and 2 observing data X1 and X2, respectively,

    I(X1 ∧ X2 | F) ≤ I(X1 ∧ X2).

In particular, independent rvs X1, X2 remain so upon conditioning on an interactive communication.

Proof: For interactive communication F = (F11, F12, …, Fr1, Fr2),

    I(X1 ∧ X2) = I(X1, F11 ∧ X2)          (F11 is a function of X1)
               ≥ I(X1 ∧ X2 | F11)
               = I(X1 ∧ X2, F12 | F11)     (F12 is a function of (X2, F11))
               ≥ I(X1 ∧ X2 | F11, F12),

followed by iteration.

22/41

Page 26:

An Equivalent Form

For interactive communication F of Terminals 1 and 2:

    I(X1 ∧ X2 | F) ≤ I(X1 ∧ X2)

            ⇕

    H(F) ≥ H(F|X1) + H(F|X2).
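A small sanity check of the second form (illustrative, not from the talk): simple communication F = (f1(X1), f2(X2)) is a special case of interactive communication, so the inequality must hold; it can be verified by direct enumeration. The pmf and the messages f1, f2 below are arbitrary choices.

```python
import numpy as np
from collections import defaultdict

def H(pmf):
    """Entropy in bits of a pmf given as {outcome: probability}."""
    p = np.array([v for v in pmf.values() if v > 0])
    return -np.sum(p * np.log2(p))

# Toy joint pmf for (X1, X2) on {0,1,2}^2 (illustrative values).
P = np.array([[0.20, 0.05, 0.05],
              [0.05, 0.20, 0.05],
              [0.05, 0.05, 0.30]])

f1 = lambda x1: x1 % 2            # hypothetical one-round messages
f2 = lambda x2: int(x2 == 2)

pF, pFX1, pFX2 = defaultdict(float), defaultdict(float), defaultdict(float)
for x1 in range(3):
    for x2 in range(3):
        f = (f1(x1), f2(x2))
        pF[f] += P[x1, x2]        # P(F)
        pFX1[(f, x1)] += P[x1, x2]  # P(F, X1)
        pFX2[(f, x2)] += P[x1, x2]  # P(F, X2)

HX1 = H(dict(enumerate(P.sum(axis=1))))
HX2 = H(dict(enumerate(P.sum(axis=0))))
lhs = H(pF)                                  # H(F)
rhs = (H(pFX1) - HX1) + (H(pFX2) - HX2)      # H(F|X1) + H(F|X2)
print(lhs, rhs)                              # lhs >= rhs, as the lemma requires
```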

23/41

Page 27:

Upper Bound on CR for Two Terminals

[Figure: Terminals 1 and 2 observe X1 and X2; communication F; local estimates L1, L2 ≅ L]

Using

– L is ε-CR for Terminals 1 and 2 with interactive communication F; and

– H(F) ≥ H(F|X1) +H(F|X2),

we get

    H(L|F) ≤ H(X1, X2) − [ H(X1|X2) + H(X2|X1) ] + 2ν(ε),

where lim_{ε→0} ν(ε) = 0.

24/41

Page 28:

Maximum CR for Two Terminals: Mutual Information

[Figure: Terminals 1 and 2 observe X1 and X2; communication F; local estimates L1, L2 ≅ L]

Lemma: [I. Csiszar - P. Narayan] Let L be any ε-CR for Terminals 1 and 2 observing data X1 and X2, respectively, achievable with interactive communication F. Then

    H(L|F) ≲ I(X1 ∧ X2) = D( PX1X2 || PX1 × PX2 ).

Remark: When {(X1t, X2t)}∞t=1 is an X1 × X2-valued i.i.d. process, the upper bound is attained.

25/41

Page 29:

Interactive Communication for m ≥ 2 Terminals

Theorem 1: [I. Csiszar-P. Narayan]

For interactive communication F of the terminals i ∈ M = {1, …, m}, with Terminal i observing data Xi,

    H(F) ≥ ∑_{B∈B} λB H(F | X_{B^c})

for every family B = {B : ∅ ≠ B ⊊ M} and set of weights (“fractional partition”)

    λ ≜ { 0 ≤ λB ≤ 1, B ∈ B } satisfying ∑_{B∈B : B∋i} λB = 1 for every i ∈ M.

Equality holds if X1, . . . , Xm are mutually independent.

Special case of:

M. Madiman and P. Tetali, “Information inequalities for joint distributions, with interpretations and applications,” IEEE Trans. Inform. Theory, June 2010.

26/41

Page 30:

CR for m ≥ 2 Terminals: A Suggestive Analogy

[S. Nitinawarat-P. Narayan]

For interactive communication F of the terminals i ∈ M = {1, …, m}, with Terminal i observing data Xi,

( m = 2 :  H(F) ≥ H(F|X1) + H(F|X2)  ⇔  I(X1 ∧ X2 | F) ≤ I(X1 ∧ X2) )

    H(F) ≥ ∑_{B∈B} λB H(F | X_{B^c})

            ⇕

    H(X1, …, Xm | F) − ∑_{B∈B} λB H(XB | X_{B^c}, F)
        ≤ H(X1, …, Xm) − ∑_{B∈B} λB H(XB | X_{B^c}).

27/41

Page 31:

An Analogy

[S. Nitinawarat-P. Narayan]

For interactive communication F of the terminals i ∈ M = {1, …, m}, with Terminal i observing data Xi,

    H(F) ≥ ∑_{B∈B} λB H(F | X_{B^c})

            ⇕

    H(X1, …, Xm | F) − ∑_{B∈B} λB H(XB | X_{B^c}, F)
        ≤ H(X1, …, Xm) − ∑_{B∈B} λB H(XB | X_{B^c}).

¿ Does the RHS suggest a measure of mutual dependence among the rvs X1, …, Xm ?

28/41

Page 32:

Maximum CR for m ≥ 2 Terminals: Shared Information

Theorem 2: [I. Csiszar-P. Narayan]

Given 0 ≤ ε < 1, for an ε-CR L for M achieved with interactive communication F,

    H(L|F) ≤ H(X1, …, Xm) − ∑_{B∈B} λB H(XB | X_{B^c}) + mν

for every fractional partition λ of M, with ν = ν(ε) = ε log|L| + h(ε).

Remarks:

– The proof of Theorem 2 relies on Theorem 1.

– When {(X1t, . . . , Xmt)}∞t=1 is an i.i.d. process, the upper bound is attained.

29/41

Page 33:

Shared Information

Theorem 2: [I. Csiszar-P. Narayan]

    H(L|F) ≲ H(X1, …, Xm) − max_λ ∑_{B∈B} λB H(XB | X_{B^c})  ≜  SI(X1, …, Xm).

30/41

Page 34:

Extensions

Theorems 1 and 2 extend to:

I random variables with densities [S. Nitinawarat-P. Narayan]

I a larger class of probability measures [H. Tyagi-P. Narayan].

31/41

Page 35:

Shared Information and Kullback-Leibler Divergence
[I. Csiszar-P. Narayan, C. Chan-L. Zheng]

    SI(X1, …, Xm) = H(X1, …, Xm) − max_λ ∑_{B∈B} λB H(XB | X_{B^c})

    (m = 2)  = H(X1, X2) − [ H(X1|X2) + H(X2|X1) ] = I(X1 ∧ X2)
             = D( PX1X2 || PX1 × PX2 )

    (m ≥ 2)  = min_{2≤k≤m}  min_{partitions Ak = (A1, …, Ak) of M}  (1/(k−1)) D( PX1…Xm || ∏_{i=1}^{k} PXAi ),

and it equals 0 iff PX1…Xm = PXA × PXAc for some nonempty A ⊊ M.

¿ Does shared information have an operational significance as a measure of the mutual dependence among the rvs X1, …, Xm ?
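For small m the divergence formula can be evaluated by brute force over partitions. The sketch below (illustrative, not from the talk) does so for a standard test source, three bits with X3 = X1 XOR X2: any two of the rvs are independent, yet SI = 0.5 bits.

```python
import numpy as np
from itertools import product

def SI(P):
    """Brute-force shared information of the rvs indexed by the axes of P,
    via SI = min over partitions (A1,...,Ak), k >= 2, of
    D(P || prod_i P_{Ai}) / (k - 1)."""
    m = P.ndim
    best = np.inf
    for labels in product(range(m), repeat=m):     # labels[i] = block of rv i
        k = len(set(labels))
        if k < 2 or set(labels) != set(range(k)):  # skip non-canonical labelings
            continue                               # (repeats are harmless)
        Q = np.ones_like(P)                        # product of block marginals
        for b in range(k):
            other = tuple(i for i in range(m) if labels[i] != b)
            Q = Q * P.sum(axis=other, keepdims=True)
        mask = P > 0
        D = np.sum(P[mask] * np.log2(P[mask] / Q[mask]))
        best = min(best, D / (k - 1))
    return best

# Three bits with X3 = X1 XOR X2, X1 and X2 fair and independent:
# every 2-partition gives D = 1 bit; the 3-partition gives D/2 = 0.5 bit.
P = np.zeros((2, 2, 2))
for x1, x2 in product([0, 1], repeat=2):
    P[x1, x2, x1 ^ x2] = 0.25
print(SI(P))   # 0.5
```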

32/41


Page 37:

Outline

Two-terminal model: Mutual information

Interactive communication and common randomness

Applications

33/41

Page 38:

Omniscience

[Figure: terminals observing X1, X2, …, Xm; communication F over the network; local estimates L1, L2, …, Lm ≅ L]

[I. Csiszar-P. Narayan]

For L = (X1, …, Xm), Theorem 2 gives

    H(F) ≳ H(X1, …, Xm) − SI(X1, …, Xm),

which, for m = 2, is

    H(F) ≳ H(X1|X2) + H(X2|X1).   [Slepian-Wolf]
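For instance, continuing the illustrative three-bit XOR source from the SI sketch earlier (an assumption for illustration, not on the slide): H(X1, X2, X3) = 2 bits and SI(X1, X2, X3) = 0.5 bits, so omniscience requires total communication of rate at least 2 − 0.5 = 1.5 bits.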

34/41

Page 39:

Signal Recovery: Data Compression

[Figure: terminals observing X1, X2, …, Xm; communication F over the network; local estimates L1, L2, …, Lm ≅ L]

[S. Nitinawarat-P. Narayan]

With L = X1, Theorem 2 gives

    H(F) ≳ H(X1) − SI(X1, …, Xm),

which, for m = 2, gives

    H(F) ≳ H(X1|X2).   [Slepian-Wolf]

35/41

Page 40:

Secret Common Randomness

[Figure: terminals observing X1, X2, …, Xm; communication F over the network; local estimates L1, L2, …, Lm ≅ L]

Terminals 1, …, m generate CR L satisfying the secrecy condition

    I(L ∧ F) ≅ 0.

By Theorem 2,

    H(L) ≅ H(L|F) ≲ SI(X1, …, Xm).

I Secret key generation [I. Csiszar-P. Narayan]

I Secure function computation [H. Tyagi-P. Narayan]
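A standard two-terminal instance (consistent with the m = 2 lemma, though not worked on the slide): if X1 is a fair bit and X2 = X1 ⊕ N with N ~ Ber(p) independent of X1, then the maximum secret key rate is SI(X1, X2) = I(X1 ∧ X2) = 1 − h(p).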

36/41

Page 41:

Querying Common Randomness

[Figure: terminals observe i.i.d. sequences X1^n, X2^n, …, Xm^n; communication F; local estimates L1, …, Lm ≅ L]

[H. Tyagi-P. Narayan]

I A querier observes communication F and seeks to resolve the value of CR L by asking questions: “Is L = l?” with yes-no answers.

I The terminals in M seek to generate L using F so as to make the querier’s burden as onerous as possible.

¿ What is the largest query exponent ?

37/41

Page 42:

Largest Query Exponent

[Figure: terminals observe i.i.d. sequences X1^n, X2^n, …, Xm^n; communication F; local estimates L1, …, Lm ≅ L]

    E* ≜ sup { E : inf_q P( q(L | F) ≥ 2^{nE} ) → 1 as n → ∞ },

where q(L | F) denotes the number of queries a querying strategy q needs to resolve L given F.

    E* = SI(X1, …, Xm)

38/41

Page 43:

Shared information and a Hypothesis Testing Problem

    SI(X1, …, Xm) = min_{2≤k≤m}  min_{partitions Ak = (A1, …, Ak) of M}  (1/(k−1)) D( PX1…Xm || ∏_{i=1}^{k} PXAi )

I Related to the exponent of the probability of error of the second kind for an appropriate binary composite hypothesis testing problem, involving restricted CR L and communication F.

H. Tyagi and S. Watanabe, “Converses for secret key agreement and secure computing,” IEEE Trans. Information Theory, September 2015.

39/41

Page 44:

In Closing ...

¿ How useful is the concept of shared information ?

A: Operational meaning in specific cases of distributed processing ...

For instance

I Consider n i.i.d. repetitions (say, in time) of the rvs X1, . . . , Xm.

I Data at time instant t is X1t, . . . , Xmt, t = 1, . . . , n.

I Terminal i observes the i.i.d. data (Xi1, …, Xin), i ∈ M.

I Shared information-based results are asymptotically tight (in n):

– Minimum rate of communication for omniscience

– Maximum rate of a secret key

– Largest query exponent

– Necessary condition for secure function computation

– Several problems in information theoretic cryptography.

40/41


Page 46:

Shared Information: Many Open Questions ...

– Significance in network source and channel coding ?

– Interactive communication over noisy channels ?

– Data-clustering applications ? [C. Chan-A. Al-Bashabsheh-Q. Zhou-T. Kaced-T. Liu, 2016]

...

41/41