Novel Performance Analysis of Network Coded Communications in Single-Relay Networks

Evgeny Tsimbalo, Andrea Tassi, Robert Piechocki
Communications Systems & Networks, University of Bristol, UK
IEEE GLOBECOM 2016, Washington DC, 5th December 2016

Evgeny Tsimbalo - [email protected]

Outline

1. Introduction

2. Proposed theoretical framework

3. Numerical results

4. Conclusions



1. Introduction



Random Linear Network Coding (RLNC)

• Simple and efficient technique to improve communication reliability.

• Encoding: K source packets -> N coded packets, N > K.

• Each coded packet is a linear combination of source packets.

• Coding coefficients are drawn uniformly at random and can be binary or non-binary.

• Aim: packet loss mitigation.
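To make the encoding step concrete, here is a minimal sketch over GF(2) (packet contents and the function name are mine; for q > 2 the XOR would be replaced by finite-field arithmetic):

```python
import random

def rlnc_encode_gf2(source_packets, n):
    """Encode K equal-size source packets into n >= K coded packets
    over GF(2): each coded packet is a random XOR combination of the
    source packets, carried together with its coding vector."""
    k = len(source_packets)
    size = len(source_packets[0])
    coded = []
    for _ in range(n):
        vec = [random.getrandbits(1) for _ in range(k)]  # random binary coefficients
        payload = bytes(size)
        for coeff, pkt in zip(vec, source_packets):
            if coeff:  # over GF(2), packet addition is bytewise XOR
                payload = bytes(a ^ b for a, b in zip(payload, pkt))
        coded.append((vec, payload))
    return coded

packets = [b"pkt0", b"pkt1", b"pkt2"]   # K = 3 source packets
coded = rlnc_encode_gf2(packets, n=5)   # N = 5 > K coded packets
```

The receiver can recover the source packets from any subset of coded packets whose coding vectors form a full-rank matrix.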


Note: lowercase letters in some equations were replaced with capitals, to make them consistent and more understandable on the slides.

$$P(M,K) = \prod_{i=0}^{K-1} \left(1 - q^{i-M}\right). \quad (1)$$

$$P_r(M,K) = q^{-MK} G(M,K,r) = \frac{1}{q^{(M-r)(K-r)}} \prod_{i=0}^{r-1} \frac{(1 - q^{i-M})(1 - q^{i-K})}{1 - q^{i-r}}. \quad (2)$$

$$P_{\mathrm{ptp}}(N,K,p) = \sum_{M=K}^{N} B(M,N,p)\, P(M,K), \quad (3)$$

$$B(M,N,p) = \binom{N}{M} (1-p)^{M} p^{N-M}. \quad (4)$$
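Equations (1), (3) and (4) translate directly into code. A minimal sketch (the function names are mine):

```python
from math import comb, prod

def P_full_rank(m, k, q=2):
    """Eq. (1): probability that a random m-by-k matrix over GF(q)
    is full rank; zero when m < k."""
    if m < k:
        return 0.0
    return prod(1 - q ** (i - m) for i in range(k))

def B_binom(m, n, p):
    """Eq. (4): binomial PMF -- exactly m of n packets survive a
    link with packet error probability p."""
    return comb(n, m) * (1 - p) ** m * p ** (n - m)

def P_ptp(n, k, p, q=2):
    """Eq. (3): probability of successful decoding over a
    point-to-point link."""
    return sum(B_binom(m, n, p) * P_full_rank(m, k, q)
               for m in range(k, n + 1))

print(P_ptp(15, 10, 0.1))  # N = 15, K = 10, p = 0.1
```

As expected, the decoding probability grows both with the number of transmitted packets N and with the field size q.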

Theorem 1. The probability of two random matrices X1 and X2, generated over GF(q) with dimensions M1 × K and M2 × K, M1, M2 ≥ K, and with M12 common rows, being simultaneously full rank is given by

$$P^*(\mathbf{M},K) = \sum_{i} P_i(M_{12},K)\, P(M_1 - M_{12}, K - i)\, P(M_2 - M_{12}, K - i), \quad (5)$$

where $\mathbf{M} = (M_1, M_2, M_{12})$ and the summation is performed over the values of i from max(0, K − M1 + M12, K − M2 + M12) to min(M12, K).

Theorem 2. The probability of successful decoding for a two-destination multicast network defined by parameters N, K and $\mathbf{p} = (p_1, p_2)$ is given by

$$P_M(N,K,\mathbf{p}) = \sum_{\mathbf{M}} B^*(\mathbf{M}, N, \mathbf{p})\, P^*(\mathbf{M},K), \quad (7)$$

where

$$B^*(\mathbf{M}, N, \mathbf{p}) = \binom{N}{M_{12}} \binom{N - M_{12}}{M_1 - M_{12}} \binom{N - M_1}{M_2 - M_{12}} (1-p_1)^{M_1} p_1^{N-M_1} (1-p_2)^{M_2} p_2^{N-M_2} \quad (8)$$

and the summation is performed over the following values:

$$M_1, M_2 = K, \ldots, N; \qquad M_{12} = \max(0, M_1 + M_2 - N), \ldots, \min(M_1, M_2). \quad (9)$$
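Theorems 1 and 2 can be transcribed almost verbatim. A sketch (helper names are mine; $P_i$ is the rank-$i$ probability from (2)):

```python
from math import comb, prod

def P_full(m, k, q=2):
    """Eq. (1): probability that a random m-by-k matrix over GF(q) is full rank."""
    return prod(1 - q ** (i - m) for i in range(k)) if m >= k else 0.0

def P_rank(m, k, r, q=2):
    """Eq. (2): probability that a random m-by-k matrix over GF(q) has rank r."""
    if r > min(m, k):
        return 0.0
    return q ** (-(m - r) * (k - r)) * prod(
        (1 - q ** (i - m)) * (1 - q ** (i - k)) / (1 - q ** (i - r))
        for i in range(r))

def P_star(M1, M2, M12, K, q=2):
    """Theorem 1: both coding matrices full rank, with M12 shared rows."""
    lo = max(0, K - M1 + M12, K - M2 + M12)
    hi = min(M12, K)
    return sum(P_rank(M12, K, i, q)
               * P_full(M1 - M12, K - i, q)
               * P_full(M2 - M12, K - i, q)
               for i in range(lo, hi + 1))

def B_star(M1, M2, M12, N, p1, p2):
    """Joint PMF of (M1, M2, M12) for N transmissions over two links."""
    return (comb(N, M12) * comb(N - M12, M1 - M12) * comb(N - M1, M2 - M12)
            * (1 - p1) ** M1 * p1 ** (N - M1)
            * (1 - p2) ** M2 * p2 ** (N - M2))

def P_multicast(N, K, p1, p2, q=2):
    """Theorem 2: probability that both destinations decode."""
    total = 0.0
    for M1 in range(K, N + 1):
        for M2 in range(K, N + 1):
            for M12 in range(max(0, M1 + M2 - N), min(M1, M2) + 1):
                total += B_star(M1, M2, M12, N, p1, p2) * P_star(M1, M2, M12, K, q)
    return total
```

A quick sanity check: with error-free links (p1 = p2 = 0), the triple (M1, M2, M12) = (N, N, N) occurs with probability one and $P_M$ reduces to the single-link full-rank probability P(N, K).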

Encoding and decoding in matrix form: the source sends coded packets $c_0, c_1, \ldots, c_{N-1}$, so that

$$y = C \cdot x, \qquad r = \tilde{C} \cdot x \;\Rightarrow\; x = \tilde{C}^{-1} \cdot r,$$

where $C$ is the coding matrix and $\tilde{C}$ is its submatrix corresponding to the received packets.


RLNC: point-to-point link

• Matrix should be invertible, i.e., full rank.

• What is the probability of successful decoding, given N, K and p?

[Figure: point-to-point link S → D with PEP p, carrying coded packets c0, c1, ..., cN−1]

Fig. 1. Block diagram of a single-user single-relay network with packet error probabilities pSD, pSR and pRD.

II. SYSTEM MODEL AND BACKGROUND

A single-user single-relay network is depicted in Fig. 1. The network consists of a source node S, relay R and destination D, and the goal is to transmit a message comprising K equal-size packets from S to D. To this end, the source node encodes the K message packets by using RLNC, such that each coded packet is a linear combination of the original packets with the coefficients drawn uniformly at random over GF(q), where q is the field size. In total, S transmits N_S ≥ K coded packets. As proposed in [12], we assume that all receiving nodes have knowledge of the seeds used to generate the coded packets they receive, such that the coding vector of each packet can be re-generated.

Let p_SD, p_SR and p_RD denote the Packet Error Probabilities (PEPs) of the links connecting S with D, S with R, and R with D, respectively. As is traditional in relay networks, the communication is performed in two stages. At the first stage, S transmits coded packets to R and D. Both R and D receive a number of packets, for which they can restore the corresponding coding vectors. The latter are stacked together row-wise to form an M_R × K coding matrix C_R at the relay node and an M_D × K coding matrix C_D at the destination node. Since R and D may receive the same packets from S, the matrices may have common rows. Let M_RD denote the number of such common rows. At this point, the destination may have enough coded packets to make its coding matrix C_D full rank, thus being able to decode the original message without any assistance from R.

At the second stage, it is checked whether the relay coding matrix C_R is full rank. If so, R decodes the original K packets, re-encodes them using newly generated random coefficients from GF(q) into N_R packets, and transmits them to D. In general, N_R ≠ N_S. We call this case the active relay mode. If the relay node cannot decode the source packets, it simply re-transmits the M_R packets to the destination node, which corresponds to the passive relay mode. In either mode, we denote by M'_D the number of coding vectors that reached D from R, and by C'_D the updated (M_D + M'_D) × K coding matrix at D.

The described relay network is different from the one proposed previously in [11], since in the latter the relay node transmits only if it can decode packets from the source node. Naturally, the passive relay mode should improve the decoding probability at D in cases when R fails to decode. In addition, the described system and its analysis can be straightforwardly extended to a network with multiple sources, in which each source has an independent communication channel. This again contrasts with [11], where the relay node uses packets from both sources at the encoding stage, thus introducing correlation between the sources.

A. Theoretical Background

We now provide some background results on RLNC which we will use in our analysis.

The number of full-rank m × k matrices generated over GF(2), with m ≥ k, is given by [13]

$$F(m,k) = \prod_{i=0}^{k-1} \left(2^{m} - 2^{i}\right) = 2^{mk} \prod_{i=0}^{k-1} \left(1 - 2^{i-m}\right). \quad (1)$$

Based on that, the number of matrices of the same size that have rank r ≤ k can be calculated as

$$G(m,k,r) = \frac{F(m,r)\, F(k,r)}{F(r,r)}. \quad (2)$$

The probability of a random m × k matrix having full rank can be obtained by dividing (1) by the number of all possible m × k matrices:

$$P(m,k) \triangleq 2^{-mk} F(m,k). \quad (3)$$

Similarly, the probability of a random m × k matrix having rank r ≤ k is given by

$$P_r(m,k) \triangleq 2^{-mk} G(m,k,r). \quad (4)$$

In the general case, when the elements are generated from GF(q), q ≥ 2, (3) can be rewritten by simply replacing 2 with q (see, for example, [4]):

$$P(m,k) = \prod_{i=0}^{k-1} \left(1 - q^{i-m}\right). \quad (5)$$

Furthermore, following the same train of thought used to obtain (2) in the binary case, the probability (4) can be generalized to the non-binary case as follows:

$$P_r(m,k) = q^{-mk} G(m,k,r) = \frac{1}{q^{(m-r)(k-r)}} \prod_{i=0}^{r-1} \frac{(1 - q^{i-m})(1 - q^{i-k})}{1 - q^{i-r}}. \quad (6)$$

Consider now the application of RLNC to a point-to-point link, with a source node encoding K source packets and transmitting N coded packets to the destination. The probability of successful decoding for such a link, characterized by the PEP p, can be given by [11]

$$P_{\mathrm{ptp}}(N,K,p) = \sum_{M=K}^{N} B(M,N,p)\, P(M,K), \quad (7)$$

where B(M,N,p) is the probability mass function (PMF) of the binomial distribution:

$$B(M,N,p) = \binom{N}{M} (1-p)^{M} p^{N-M}. \quad (8)$$

In addition to the binomial distribution, we will also need its generalized version, the multinomial distribution [14]. The PMF of such a distribution describes the probability of


The decoding probability over a single link can be decomposed as

$$\Pr[\tilde{C} \text{ is f.r.}] = \sum_{M} \Pr[M \text{ pkts rxd}] \cdot \Pr[\tilde{C} \text{ is f.r.} \mid M \text{ pkts rxd}].$$


The overall decoding probability at D sums the probabilities of the three possible outcomes (direct decoding, active relay, passive relay):

$$P_{R,1} = P_{\mathrm{ptp}}(N_S, K, p_{SD}) = \sum_{M=K}^{N_S} B(M, N_S, p_{SD})\, P(M,K) \quad (8)$$

$$P_{R,2} = \sum_{\mathbf{M}} B^*(\mathbf{M}, N_S, \mathbf{p}) \sum_{M'_D=1}^{N_R} B(M'_D, N_R, p_{RD}) \left[ P^*(\mathbf{M}', K) - P^*(\mathbf{M}, K) \right]$$

$$P_{R,3} = \sum_{\mathbf{M}} B^*(\mathbf{M}, N_S, \mathbf{p}) \sum_{M'_D=1}^{M'_R} B(M'_D, M'_R, p_{RD}) \left[ P(M_D + M'_D, K) - P(M_D, K) - P^*(\mathbf{M}'', K) + P^*(\mathbf{M}, K) \right]$$

$$P_R = P_{R,1} + P_{R,2} + P_{R,3} \quad (9)$$
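The exact expressions above can be cross-checked by simulating the two-stage protocol directly. A Monte Carlo sketch over GF(2) (all function names are mine; the closed-form analysis remains the reference):

```python
import random

def rank_gf2(rows):
    """Rank over GF(2) of rows given as bitmask integers."""
    pivots = {}  # highest-bit position -> stored pivot row
    for row in rows:
        while row:
            h = row.bit_length() - 1
            if h in pivots:
                row ^= pivots[h]
            else:
                pivots[h] = row
                break
    return len(pivots)

def simulate_relay(K, NS, NR, pSD, pSR, pRD, trials=5000, seed=0):
    """Empirical probability that D recovers all K packets, with an
    active/passive relay as described above."""
    rng = random.Random(seed)
    success = 0
    for _ in range(trials):
        # Stage 1: S sends NS coded packets; R and D erase independently,
        # so shared receptions carry identical coding vectors.
        src = [rng.getrandbits(K) for _ in range(NS)]
        at_R = [v for v in src if rng.random() > pSR]
        at_D = [v for v in src if rng.random() > pSD]
        if rank_gf2(at_D) == K:
            success += 1          # D decodes directly from S
            continue
        # Stage 2: active relay re-encodes NR fresh packets if it can
        # decode; otherwise it passively forwards what it received.
        relayed = ([rng.getrandbits(K) for _ in range(NR)]
                   if rank_gf2(at_R) == K else at_R)
        at_D += [v for v in relayed if rng.random() > pRD]
        if rank_gf2(at_D) == K:
            success += 1
    return success / trials

print(simulate_relay(K=10, NS=15, NR=15, pSD=0.3, pSR=0.2, pRD=0.1))
```

The empirical estimate should match $P_R$ from (9) up to Monte Carlo noise.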


2. From link to networks: proposed framework



Preliminaries: multicast network

7

[Figure: two-destination multicast network — source S linked to destinations D1 and D2 with PEPs p1 and p2; D1 and D2 receive M1 and M2 coded packets, of which M12 are common]


What is the probability of successful decoding?

In Theorem 2, $B^*(\mathbf{M}, N, \mathbf{p})$ plays the role of $\Pr[\mathbf{M}]$, and $P^*(\mathbf{M}, K)$ that of $\Pr[\tilde{C}_1, \tilde{C}_2 \text{ are f.r.} \mid \mathbf{M}]$.



Relay network

• Stage 1: S transmits, R & D receive.

• If D can decode, success!

• If not:

• Stage 2: R attempts decoding and transmits to D.

• If R can decode, it re-encodes prior to transmission;

• If R can’t decode, it just relays packets to D.

• Previous work [1]: active relay only.


[Figure: relay network — S linked to D with PEP pSD, S to R with PEP pSR, and R to D with PEP pRD; active and passive relay modes]

What is the probability of successful decoding at D?

[1] A. S. Khan and I. Chatzigeorgiou, “Performance Analysis of Random Linear Network Coding in Two-Source Single-Relay Networks,” in Proc. of IEEE ICC 2015, London, UK, pp. 991–996, June 2015.


Relay network


[Figure: relay network S, R, D with PEPs pSD, pSR, pRD]

All possible outcomes:
• D decodes directly from S
• Active relay
• Passive relay

Small letters in some equations were replaced with capitals, to make them consistent and more understandable on the slides.

P(M,K) =

K�1Y

i=0

(1� qi�M). (1)

Pr(M,K) = q�MKG(M,K, r) (2)

=

1

q(M�r)(K�r)

r�1Y

i=0

(1� qi�M)(1� qi�K

)

1� qi�r.

Pptp(N,K, p) =NX

M=K

B(M,N, p)P(M,K) (3)

B(M,N, p) =

✓N

M

◆(1� p)MpN�M . (4)

Theorem 1. The probability of two random matrices X1 and X2 generated over GF (q) with dimensions M1 ⇥ k and M2 ⇥K,

M1,M2 � k, and M12 common rows being simultaneously full rank is given by

P⇤(M,K) =

X

i

Pi(M12,K)P(M1 �M12,K � i)P(M2 �M12,K � i) (5)

where M = (M1,M2,M12) and the summation is performed over the values of i from max(0,K �M1 +M12,K �M2 +M12) to

min(M12,K).

Theorem 2. The probability of successful decoding for a two-destination multicast network defined by parameters N,K and p

is given by

PM (N,K,p) =X

M

B⇤(M, N,p)P⇤

(M,K), (6)

where

B⇤(M, N,p) =

✓N

M12

◆✓N �M12

M1 �M12

◆✓N �M1

M2 �M12

◆(1� p1)

M1pN�M11 (1� p2)

M2pN�M22

and the summation is performed over the following values:

{M 1 ,M2 = K, . . . , N ;M12 = max(0,M1 +M2 �N), . . . ,min(M1,M2). (7)

y = C · x

r =

˜

C · x

) x =

˜

C

�1 · r

C =

PR,1 = Pptp(NS ,K, pSD) =

NX

M=K

B(M,NS , pSD)P(M,K) (8)

PR,2 =

X

M

B⇤(M, NS ,p)

NRX

M 0D=1

B(M 0D, NR, pRD) [P⇤

(M

0,K)� P⇤(M,K)]

PR,3 =

X

M

B⇤(M, NS ,p)

M 0RX

M 0D=1

B(M 0D,M 0

R, pRD)[P(MD +M 0D,K)� P(MD,K)� P⇤

(M

00,K) + P⇤(M,K)]

PR = PR,1 + PR,2 + PR,3 (9)

1

:

Small letters in some equations were replaced with capitals, to make them consistent and more understandable on the slides.

P(M,K) =

K�1Y

i=0

(1� qi�M). (1)

Pr(M,K) = q�MKG(M,K, r) (2)

=

1

q(M�r)(K�r)

r�1Y

i=0

(1� qi�M)(1� qi�K

)

1� qi�r.

Pptp(N,K, p) =NX

M=K

B(M,N, p)P(M,K) (3)

B(M,N, p) =

✓N

M

◆(1� p)MpN�M . (4)

Theorem 1. The probability of two random matrices X1 and X2 generated over GF (q) with dimensions M1 ⇥ k and M2 ⇥K,

M1,M2 � k, and M12 common rows being simultaneously full rank is given by

P⇤(M,K) =

X

i

Pi(M12,K)P(M1 �M12,K � i)P(M2 �M12,K � i) (5)

where M = (M1,M2,M12) and the summation is performed over the values of i from max(0,K �M1 +M12,K �M2 +M12) to

min(M12,K).

Theorem 2. The probability of successful decoding for a two-destination multicast network defined by parameters N,K and p

is given by

PM (N,K,p) =X

M

B⇤(M, N,p)P⇤

(M,K), (6)

where

B⇤(M, N,p) =

✓N

M12

◆✓N �M12

M1 �M12

◆✓N �M1

M2 �M12

◆(1� p1)

M1pN�M11 (1� p2)

M2pN�M22

and the summation is performed over the following values:

{M 1 ,M2 = K, . . . , N ;M12 = max(0,M1 +M2 �N), . . . ,min(M1,M2). (7)

y = C · x

r =

˜

C · x

) x =

˜

C

�1 · r

C =

PR,1 = Pptp(NS ,K, pSD) =

NX

M=K

B(M,NS , pSD)P(M,K) (8)

PR,2 =

X

M

B⇤(M, NS ,p)

NRX

M 0D=1

B(M 0D, NR, pRD) [P⇤

(M

0,K)� P⇤(M,K)]

PR,3 =

X

M

B⇤(M, NS ,p)

M 0RX

M 0D=1

B(M 0D,M 0

R, pRD)[P(MD +M 0D,K)� P(MD,K)� P⇤

(M

00,K) + P⇤(M,K)]

PR = PR,1 + PR,2 + PR,3 (9)

1

:

Small letters in some equations were replaced with capitals, to make them consistent and more understandable on the slides.

P(M,K) =

K�1Y

i=0

(1� qi�M). (1)

Pr(M,K) = q�MKG(M,K, r) (2)

=

1

q(M�r)(K�r)

r�1Y

i=0

(1� qi�M)(1� qi�K

)

1� qi�r.

Pptp(N,K, p) =NX

M=K

B(M,N, p)P(M,K) (3)

B(M,N, p) =

✓N

M

◆(1� p)MpN�M . (4)

Theorem 1. The probability of two random matrices X1 and X2 generated over GF (q) with dimensions M1 ⇥ k and M2 ⇥K,

M1,M2 � k, and M12 common rows being simultaneously full rank is given by

P⇤(M,K) =

X

i

Pi(M12,K)P(M1 �M12,K � i)P(M2 �M12,K � i) (5)

where M = (M1,M2,M12) and the summation is performed over the values of i from max(0,K �M1 +M12,K �M2 +M12) to

min(M12,K).

Theorem 2. The probability of successful decoding for a two-destination multicast network defined by parameters N,K and p

is given by

PM (N,K,p) =X

M

B⇤(M, N,p)P⇤

(M,K), (6)

where

B⇤(M, N,p) =

✓N

M12

◆✓N �M12

M1 �M12

◆✓N �M1

M2 �M12

◆(1� p1)

M1pN�M11 (1� p2)

M2pN�M22

and the summation is performed over the following values:

{M 1 ,M2 = K, . . . , N ;M12 = max(0,M1 +M2 �N), . . . ,min(M1,M2). (7)

y = C · x

r =

˜

C · x

) x =

˜

C

�1 · r

C =

PR,1 = Pptp(NS ,K, pSD) =

NX

M=K

B(M,NS , pSD)P(M,K) (8)

PR,2 =

X

M

B⇤(M, NS ,p)

NRX

M 0D=1

B(M 0D, NR, pRD) [P⇤

(M

0,K)� P⇤(M,K)]

PR,3 =

X

M

B⇤(M, NS ,p)

M 0RX

M 0D=1

B(M 0D,M 0

R, pRD)[P(MD +M 0D,K)� P(MD,K)� P⇤

(M

00,K) + P⇤(M,K)]

PR = PR,1 + PR,2 + PR,3 (9)

1

Small letters in some equations were replaced with capitals, to make them consistent and more understandable on the slides.

P(M,K) =

K�1Y

i=0

(1� qi�M). (1)

Pr(M,K) = q�MKG(M,K, r) (2)

=

1

q(M�r)(K�r)

r�1Y

i=0

(1� qi�M)(1� qi�K

)

1� qi�r.

Pptp(N,K, p) =NX

M=K

B(M,N, p)P(M,K) (3)

B(M,N, p) =

✓N

M

◆(1� p)MpN�M . (4)

Theorem 1. The probability of two random matrices X1 and X2 generated over GF (q) with dimensions M1 ⇥ k and M2 ⇥K,

M1,M2 � k, and M12 common rows being simultaneously full rank is given by

P⇤(M,K) =

X

i

Pi(M12,K)P(M1 �M12,K � i)P(M2 �M12,K � i) (5)

where M = (M1,M2,M12) and the summation is performed over the values of i from max(0,K �M1 +M12,K �M2 +M12) to

min(M12,K).

Theorem 2. The probability of successful decoding for a two-destination multicast network defined by parameters N,K and p

is given by

PM (N,K,p) =X

M

B⇤(M, N,p)P⇤

(M,K), (6)

where

B⇤(M, N,p) =

✓N

M12

◆✓N �M12

M1 �M12

◆✓N �M1

M2 �M12

◆(1� p1)

M1pN�M11 (1� p2)

M2pN�M22

and the summation is performed over the following values:

{M 1 ,M2 = K, . . . , N ;M12 = max(0,M1 +M2 �N), . . . ,min(M1,M2). (7)

y = C · x

r =

˜

C · x

) x =

˜

C

�1 · r

C =

PR,1 = Pptp(NS ,K, pSD) =

NX

M=K

B(M,NS , pSD)P(M,K) (8)

PR,2 =

X

M

B⇤(M, NS ,p)

NRX

M 0D=1

B(M 0D, NR, pRD) [P⇤

(M

0,K)� P⇤(M,K)]

PR,3 =

X

M

B⇤(M, NS ,p)

M 0RX

M 0D=1

B(M 0D,M 0

R, pRD)[P(MD +M 0D,K)� P(MD,K)� P⇤

(M

00,K) + P⇤(M,K)]

PR = PR,1 + PR,2 + PR,3 (9)

1

The three terms of $P_R$ correspond to these outcomes: $P_{R,1}$ is direct decoding at D, $P_{R,2}$ the active relay mode (relay can decode), and $P_{R,3}$ the passive relay mode.


3. Numerical results



Simulation vs theory, binary code


[Plot: decoding probability PR vs. number of encoded packets NS (10–30), for K = 10, 15 and 20; curves for simulation and theory, with and without the passive relay mode ("active R only"); link PEPs of 0.3, 0.2 and 0.1]

• Perfect match between theory and simulation.
• Performance improvement over the previously proposed relay network.


Simulation vs theory, non-binary codes


[Plot: decoding probability PR vs. number of encoded packets NS (10–20), for q = 2 and q = 2^8; curves for simulation and theory, with and without the passive relay mode ("active R only"); link PEPs of 0.5, 0.4 and 0.3]

• Proposed theory is valid for non-binary codes too.
• Performance improvement over the previously proposed relay network.


Conclusions

• RLNC for relay networks is investigated.

• Active and passive relay modes proposed.

• Exact expressions for decoding probability are derived.

• Theory matches simulations for various scenarios.

• Performance improvement over previously proposed relay network.
