Page 1:

some channel models

Input X → P(y|x) → Output Y

P(y|x): transition probabilities

memoryless:

- output at time i depends only on input at time i

- input and output alphabets are finite

Page 2:

Example: binary symmetric channel (BSC)

Diagram: an error source emits E; a modulo-2 adder combines input X and E, giving output Y = X ⊕ E

E is the binary error sequence s.t. P(1) = 1-P(0) = p

X is the binary information sequence

Y is the binary output sequence

Transitions: 0 → 0 and 1 → 1 with probability 1-p; 0 → 1 and 1 → 0 with probability p
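A minimal simulation sketch of the BSC as Y = X ⊕ E; the crossover probability p, the sequence length, and the seed are illustrative choices, not from the slides.

```python
import numpy as np

# Simulate the BSC: an i.i.d. Bernoulli(p) error sequence E is added
# modulo 2 to the information sequence X.
rng = np.random.default_rng(0)
p = 0.1                                  # P(E=1) = p
n = 100_000
X = rng.integers(0, 2, size=n)           # binary information sequence
E = (rng.random(n) < p).astype(int)      # binary error sequence, i.i.d.
Y = X ^ E                                # output: modulo-2 addition

print("empirical crossover rate:", np.mean(X != Y))   # close to p
```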

Page 3:

from AWGN to BSC

Homework: calculate the capacity as a function of A and σ2

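A possible sketch for this homework, under the assumption (not stated on the slide) that the AWGN channel uses antipodal signalling ±A with noise variance σ² and hard decisions, so the induced crossover probability is p = Q(A/σ). The test values are illustrative.

```python
from math import erfc, log2, sqrt

# Map (A, sigma^2) to a BSC crossover probability, then to capacity.
def h(p):
    # binary entropy in bits, with h(0) = h(1) = 0
    return 0.0 if p in (0.0, 1.0) else -p*log2(p) - (1 - p)*log2(1 - p)

def Q(x):
    # Gaussian tail probability Q(x)
    return 0.5 * erfc(x / sqrt(2))

def c_bsc_from_awgn(A, sigma2):
    p = Q(A / sqrt(sigma2))              # hard-decision crossover probability
    return 1 - h(p)

for A, s2 in [(0.5, 1.0), (1.0, 1.0), (2.0, 1.0)]:
    print(f"A={A}, sigma^2={s2}: C = {c_bsc_from_awgn(A, s2):.4f}")
```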

Page 4:

Other models

Z-channel (optical):
0 (light on) → 0 with probability 1; 1 (light off) → 1 with probability 1-p, 1 → 0 with probability p
P(X=0) = P0

Erasure channel (MAC):
0 → 0 and 1 → 1 with probability 1-e; both inputs → E (erasure) with probability e
P(X=0) = P0

Page 5:

Erasure with errors

Each input is received correctly with probability 1-p-e, erased (→ E) with probability e, and flipped with probability p

Page 6:

burst error model (Gilbert-Elliott)

Random error channel: outputs independent; P(0) = 1 - P(1)

Burst error channel: outputs dependent; the error source is driven by a two-state Markov chain (state info: good or bad):

P(0 | state = bad) = P(1 | state = bad) = 1/2
P(0 | state = good) = 1 - P(1 | state = good) = 0.999

State diagram: good ↔ bad with transition probabilities Pgb (good→bad) and Pbg (bad→good); self-transition probabilities Pgg and Pbb
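A minimal simulation sketch of the Gilbert-Elliott error source, using the per-state error probabilities from the slide; the transition probabilities Pgb and Pbg are illustrative values.

```python
import numpy as np

# Two-state Markov chain ('good'/'bad') drives the per-symbol error
# probability; errors cluster into bursts while the chain sits in 'bad'.
rng = np.random.default_rng(0)
P_gb, P_bg = 0.01, 0.1                   # good->bad, bad->good (illustrative)
p_err = {"good": 0.001, "bad": 0.5}      # P(error | state), as on the slide

state, errors = "good", []
for _ in range(100_000):
    errors.append(rng.random() < p_err[state])
    if state == "good":
        state = "bad" if rng.random() < P_gb else "good"
    else:
        state = "good" if rng.random() < P_bg else "bad"

print("overall error rate:", np.mean(errors))   # bursty, not i.i.d.
```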

Page 7:

channel capacity:

I(X;Y) = H(X) - H(X|Y) = H(Y) - H(Y|X) (Shannon 1948)

Note: I(X;Y) depends on the input probabilities, because the transition probabilities are fixed; the capacity is therefore defined by a maximization:

C = max over P(x) of I(X;Y)

(recall: H = Entropy = E{I(i)} = Σ_i p(i) I(i))

Page 8:

Practical communication system design

message → encoder (code book) → codeword in (length n) → channel → receive (code book with errors) → channel decoder → message estimate

There are 2^k code words of length n; k is the number of information bits transmitted in n channel uses

Page 9:

Channel capacity

Definition:

The rate R of a code is the ratio k/n, where

k is the number of information bits transmitted in n channel uses

Shannon showed that:

for R < C, encoding methods exist with decoding error probability → 0 (as the block length n → ∞)

Page 10:

Encoding and decoding according to Shannon

Code: 2^k binary codewords where P(0) = P(1) = ½

Channel errors: P(0 → 1) = P(1 → 0) = p

i.e. # error sequences ≈ 2^(n h(p))

Decoder: search around the received sequence for a codeword with ≈ np differences

space of 2^n binary sequences
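A toy version of this random-coding experiment, with minimum-Hamming-distance decoding standing in for "search around the received sequence"; k, n, and p are illustrative and far smaller than the asymptotic argument needs.

```python
import numpy as np

# Random codebook of 2^k codewords of length n; transmit one over a BSC
# and decode to the closest codeword (expected ~ np differences).
rng = np.random.default_rng(1)
k, n, p = 8, 64, 0.05                    # rate R = k/n = 0.125 < 1 - h(0.05)
codebook = rng.integers(0, 2, size=(2**k, n))   # 2^k random codewords

m = rng.integers(0, 2**k)                # transmitted message index
noise = (rng.random(n) < p).astype(int)  # ~ np errors expected
received = codebook[m] ^ noise

distances = np.sum(codebook != received, axis=1)
m_hat = int(np.argmin(distances))        # minimum-distance decoding
print("decoded correctly:", m_hat == m)
```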

Page 11:

decoding error probability

1. For t errors: P(|t/n - p| > ε) → 0 for n → ∞ (law of large numbers)

2. P(> 1 codeword in the decoding region) (codewords random):

!"

#<=

"="

#$>

#####

nand

)p(h1nk

Rfor

022

2

2)12()1(P

)RBSCC(n)R)p(h1(n

n

)p(nhk
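A quick numeric look at how this bound decays with n; p, R, and the block lengths are illustrative.

```python
from math import log2

# The exponent of 2^(-n(C_BSC - R)) is positive whenever R < C_BSC = 1 - h(p),
# so the union bound vanishes as n grows.
def h(p):
    return 0.0 if p in (0.0, 1.0) else -p*log2(p) - (1 - p)*log2(1 - p)

p, R = 0.05, 0.5
C = 1 - h(p)
for n in (50, 100, 500):
    print(f"n={n}: bound ~ {2.0 ** (-n * (C - R)):.3e}")
```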

Page 12:

channel capacity: the BSC

BSC: 0 → 0 and 1 → 1 with probability 1-p; crossovers 0 → 1 and 1 → 0 with probability p

I(X;Y) = H(Y) – H(Y|X)

the maximum of H(Y) = 1

since Y is binary

H(Y|X) = P(X=0) h(p) + P(X=1) h(p) = h(p)

Conclusion: the capacity for the BSC is C_BSC = 1 - h(p)

Homework: draw C_BSC; what happens for p > ½?
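A possible sketch for the homework: plot C_BSC = 1 - h(p) over the full range of p. The curve is symmetric about p = ½ (for p > ½ the decoder can simply invert the output bits), and C = 0 only at p = ½.

```python
import numpy as np
import matplotlib.pyplot as plt

# C_BSC = 1 - h(p) for 0 <= p <= 1 (endpoints avoided to keep log2 finite).
p = np.linspace(1e-6, 1 - 1e-6, 500)
h = -p * np.log2(p) - (1 - p) * np.log2(1 - p)

plt.plot(p, 1 - h)
plt.xlabel("bit error p")
plt.ylabel("channel capacity")
plt.show()
```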

Page 13:

channel capacity: the BSC

[Plot: channel capacity versus bit error p, for 0 ≤ p ≤ 1]

Explain the behaviour!

Page 14:

channel capacity: the Z-channel

Application in optical communications

Z-channel: 0 (light on) → 0 with probability 1; 1 (light off) → 1 with probability 1-p, 1 → 0 with probability p; P(X=0) = P0

H(Y) = h(P0 + p(1 - P0))

H(Y|X) = (1 - P0) h(p)

For the capacity, maximize I(X;Y) over P0; see the numeric sketch below
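A grid-search sketch of this maximization (a closed form exists, but numeric search keeps it short); p and the grid resolution are illustrative.

```python
import numpy as np

# Maximize I(X;Y) = h(P0 + p(1 - P0)) - (1 - P0) h(p) over P0.
def h(q):
    q = np.clip(q, 1e-12, 1 - 1e-12)     # avoid log2(0) at the endpoints
    return -q * np.log2(q) - (1 - q) * np.log2(1 - q)

p = 0.1
P0 = np.linspace(0.0, 1.0, 10_001)
I = h(P0 + p * (1 - P0)) - (1 - P0) * h(p)
best = np.argmax(I)
print(f"C_Z ~ {I[best]:.4f} bits at P0 ~ {P0[best]:.3f}")
```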

Page 15:

channel capacity: the erasure channel

Application: CDMA detection

Erasure channel: 0 → 0 and 1 → 1 with probability 1-e; both inputs → E with probability e; P(X=0) = P0

I(X;Y) = H(X) - H(X|Y)

H(X) = h(P0)

H(X|Y) = e h(P0) (X is only uncertain after an erasure)

Thus C_erasure = 1 - e (check! draw and compare with BSC and Z)
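A numeric check of this claim: I(X;Y) = (1 - e) h(P0) peaks at P0 = ½, giving C = 1 - e; the erasure probability e is illustrative.

```python
import numpy as np

# Sweep the input probability P0 and confirm the maximum equals 1 - e.
def h(q):
    q = np.clip(q, 1e-12, 1 - 1e-12)
    return -q * np.log2(q) - (1 - q) * np.log2(1 - q)

e = 0.2
P0 = np.linspace(0.0, 1.0, 10_001)
I = (1 - e) * h(P0)
print("max I =", I.max(), " vs  1 - e =", 1 - e)
```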

Page 16:

Erasure with errors: calculate the capacity!

Each input is received correctly with probability 1-p-e, erased (→ E) with probability e, and flipped with probability p
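A numeric sketch for this exercise, relying on the channel's symmetry so that the uniform input achieves capacity; p and e are illustrative values.

```python
import numpy as np

# Compute I(X;Y) = H(Y) - H(Y|X) directly from the transition matrix
# at the uniform input, which is optimal for this symmetric channel.
def H(dist):
    dist = dist[dist > 0]
    return -np.sum(dist * np.log2(dist))

p, e = 0.05, 0.2
# rows: X = 0, 1;  columns: Y = 0, E, 1
P = np.array([[1 - p - e, e, p],
              [p, e, 1 - p - e]])
Px = np.array([0.5, 0.5])
Py = Px @ P
C = H(Py) - Px @ np.array([H(row) for row in P])
print(f"capacity ~ {C:.4f} bits   (reduces to 1 - e = {1 - e} when p = 0)")
```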

Page 17:

example

Consider the following example

Ternary channel: inputs 0 and 2 are received without error; input 1 is received as 0, 1, or 2, each with probability 1/3

For P(0) = P(2) = p, P(1) = 1-2p

H(Y) = h(1/3 - 2p/3) + (2/3 + 2p/3) (grouping outputs 0 and 2, which are equally likely); H(Y|X) = (1 - 2p) log2 3

Q: maximize H(Y) - H(Y|X) as a function of p

Q: is this the capacity?

Hint: use log2 x = ln x / ln 2 and d(ln x)/dx = 1/x
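A grid-search sketch of the same maximization, handy for checking the calculus; the grid resolution is an arbitrary choice.

```python
import numpy as np

# Maximize I(p) = h(1/3 - 2p/3) + (2/3 + 2p/3) - (1 - 2p) log2(3)
# over 0 <= p <= 1/2 (P(1) = 1 - 2p requires p <= 1/2).
def h(q):
    q = np.clip(q, 1e-12, 1 - 1e-12)
    return -q * np.log2(q) - (1 - q) * np.log2(1 - q)

p = np.linspace(0.0, 0.5, 50_001)
I = h(1/3 - 2*p/3) + (2/3 + 2*p/3) - (1 - 2*p) * np.log2(3)
best = np.argmax(I)
print(f"max I ~ {I[best]:.4f} bits at p ~ {p[best]:.4f}")
```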

Page 18:

channel models: general diagram

[Diagram: inputs x1, x2, …, xn each connect to outputs y1, y2, …, ym with transition probabilities P1|1, P2|1, P1|2, P2|2, …, Pm|n]

Input alphabet X = {x1, x2, …, xn}
Output alphabet Y = {y1, y2, …, ym}
Pj|i = PY|X(yj|xi)

In general: calculating the capacity needs more theory (see the numeric sketch below)

The statistical behavior of the channel is completely defined by the channel transition probabilities Pj|i = PY|X(yj|xi)
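One standard tool here is the Blahut-Arimoto algorithm; the sketch below is a compact, un-optimized version (fixed iteration count, no convergence test) for a channel given by its transition matrix, and is an illustration rather than the method the slides develop.

```python
import numpy as np

# Blahut-Arimoto for a DMC with P[i, j] = P(y_j | x_i): alternate between
# the posterior q(x|y) and the input distribution until I(X;Y) converges.
def capacity(P, iters=1000):
    n_in = P.shape[0]
    p = np.full(n_in, 1.0 / n_in)              # start from the uniform input
    for _ in range(iters):
        q = p[:, None] * P                     # p(x) P(y|x)
        q /= q.sum(axis=0, keepdims=True)      # posterior q(x|y)
        r = np.exp((P * np.log(q + 1e-300)).sum(axis=1))
        p = r / r.sum()                        # updated input distribution
    Py = p @ P
    return np.sum(p[:, None] * P * np.log2((P + 1e-300) / (Py + 1e-300)))

bsc = np.array([[0.9, 0.1],
                [0.1, 0.9]])
print("C_BSC ~", capacity(bsc))                # should match 1 - h(0.1)
```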

Page 19:

* clue:

I(X;Y)

is concave (convex ∩) in the input probabilities,

i.e. finding the maximum is simple

Page 20:

Channel capacity: converse

For R > C the decoding error probability > 0

[Plot: error probability Pe versus rate k/n, with the threshold at C]

Page 21:

Converse: for a discrete memoryless channel

I(X^n;Y^n) = H(Y^n) - H(Y^n|X^n) ≤ Σ_i H(Y_i) - Σ_i H(Y_i|X_i) = Σ_i I(X_i;Y_i) ≤ nC

source → m → encoder → X^n → channel → Y^n → decoder → m'

The source generates one out of 2^k equiprobable messages.

Let Pe = probability that m' ≠ m

Page 22:

converse: R := k/n

k = H(M) = I(M;Y^n) + H(M|Y^n)    (X^n is a function of M; Fano)

  ≤ I(X^n;Y^n) + 1 + k Pe

  ≤ nC + 1 + k Pe

Hence 1 - C·n/k - 1/k ≤ Pe, i.e. Pe ≥ 1 - C/R - 1/(nR)

so for large n and R > C, the probability of error Pe > 0
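A numeric look at the converse bound Pe ≥ 1 - C/R - 1/(nR): for R > C it stays bounded away from 0 as n grows; C and R are illustrative values.

```python
# Evaluate the Fano-based lower bound on Pe for increasing block lengths.
C, R = 0.5, 0.6
for n in (10, 100, 1000, 10_000):
    print(f"n={n}: Pe >= {1 - C/R - 1/(n*R):.4f}")   # tends to 1 - C/R
```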

Page 23:

We used the data processing theorem.

Cascading of Channels:

X → channel 1 → Y → channel 2 → Z, with mutual informations I(X;Y), I(Y;Z), and overall I(X;Z)

The overall transmission rate I(X;Z) for the cascade cannot be larger than I(Y;Z) (nor than I(X;Y)), that is:

I(X;Z) ≤ min{ I(X;Y), I(Y;Z) }