Lecture Five Coding - AAST

EECE5984: Orthogonal Frequency Division Multiplexing and Related Technologies
Fall 2007
Mohamed Essam Khedr
Coding in OFDM

Transcript of Lecture Five Coding - AAST

Page 1: Lecture Five Coding - AAST

EECE5984: Orthogonal Frequency Division Multiplexing and Related Technologies

Fall 2007

Mohamed Essam Khedr

Coding in OFDM

Page 2: Lecture Five Coding - AAST

Syllabus

• Wireless channels characteristics (7.5%)
• OFDM Basics (10%)
• Modulation and Coding (10%)
  – Linear and nonlinear modulation
  – Interleaving and channel coding
  – Optimal bit and power allocation
  – Adaptive modulation

Page 3: Lecture Five Coding - AAST

[Figure: OFDM power spectrum magnitude (dB) vs. f (MHz) for N_FFT = 128, N_win = 12, N_guard = 24, oversampling = 1; below it, the real and imaginary parts of the time-domain baseband signal vs. sample number]

OFDM Block Diagram

Transmitter: channel coding / interleaving → symbol mapping (modulation) → OFDM modulation (IFFT, N symbols) → guard interval insertion → I/Q

Receiver: I/Q → time sync. → guard interval removal → OFDM demod. (FFT) → symbol de-mapping (detection), aided by channel estimation → decoding / de-interleaving

[Diagram also shows the channel impulse response, the FFT part of the receiver window spanning 1 OFDM symbol in time, and an example bit stream 0101010010110]

Page 4: Lecture Five Coding - AAST
Page 5: Lecture Five Coding - AAST

What is channel coding?

• Channel coding: transforming signals to improve communications performance by increasing the robustness against channel impairments (noise, interference, fading, …).
• Waveform coding: transforming waveforms into better waveforms.
• Structured sequences: transforming data sequences into better sequences having structured redundancy. "Better" here means making the decision process less subject to errors.

Page 6: Lecture Five Coding - AAST

Error control techniques

• Automatic Repeat reQuest (ARQ)
  – Full-duplex connection, error detection codes.
  – The receiver sends feedback to the transmitter indicating whether an error was detected in the received packet (Not-Acknowledgement (NACK)) or not (Acknowledgement (ACK)).
  – The transmitter retransmits the previously sent packet if it receives a NACK.
• Forward Error Correction (FEC)
  – Simplex connection, error correction codes.
  – The receiver tries to correct some errors.
• Hybrid ARQ (ARQ + FEC)
  – Full-duplex, error detection and correction codes.

Page 7: Lecture Five Coding - AAST

Why use error correction coding?

– Error performance vs. bandwidth
– Power vs. bandwidth
– Data rate vs. bandwidth

[Figure: bit error probability P_B vs. E_b/N_0 (dB), showing a coded and an uncoded curve with operating points A–F]

Coding gain: for a given bit-error probability, the reduction in E_b/N_0 that can be realized through the use of the code:

G [dB] = (E_b/N_0)_u [dB] − (E_b/N_0)_c [dB]
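In code, the definition reads as follows (the operating-point values below are hypothetical, chosen only to illustrate the subtraction):

```python
def coding_gain_db(ebn0_uncoded_db: float, ebn0_coded_db: float) -> float:
    """Coding gain G [dB] = (Eb/N0)_u [dB] - (Eb/N0)_c [dB] at equal BER."""
    return ebn0_uncoded_db - ebn0_coded_db

# If an uncoded system needs 9.6 dB and the coded one 7.1 dB to reach
# the same bit-error probability, the coding gain is 2.5 dB.
print(coding_gain_db(9.6, 7.1))
```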

Page 8: Lecture Five Coding - AAST

Linear block codes

• The information bit stream is chopped into blocks of k bits.
• Each block is encoded into a larger block of n bits.
• The coded bits are modulated and sent over the channel.
• The reverse procedure is done at the receiver.

Data block (k bits) → Channel encoder → Codeword (n bits)

Redundant bits: n − k
Code rate: R_c = k/n

Page 9: Lecture Five Coding - AAST

Linear block codes – cont’d

• The Hamming weight of a vector U, denoted w(U), is the number of non-zero elements in U.
• The Hamming distance between two vectors U and V, denoted d(U, V), is the number of elements in which they differ:

d(U, V) = w(U ⊕ V)

• The minimum distance of a block code is

d_min = min_{i≠j} d(U_i, U_j) = min_i w(U_i)
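These definitions translate directly into code (the two vectors below are arbitrary illustrations, not codewords from the lecture):

```python
def hamming_weight(u):
    """w(U): number of non-zero elements of U."""
    return sum(1 for b in u if b != 0)

def hamming_distance(u, v):
    """d(U, V) = w(U xor V): number of positions where U and V differ."""
    return sum(1 for a, b in zip(u, v) if a != b)

U = [1, 0, 1, 1, 0, 1]
V = [1, 1, 1, 0, 0, 1]
print(hamming_weight(U))       # 4
print(hamming_distance(U, V))  # 2
```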

Page 10: Lecture Five Coding - AAST

Linear block codes – cont’d

• The error detection capability is given by

e = d_min − 1

• The error correction capability t of a code, defined as the maximum number of guaranteed correctable errors per codeword, is

t = ⌊(d_min − 1)/2⌋
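A minimal numerical check of these formulas, using the (3,1) repetition code as a stand-in example (not a code from the lecture):

```python
from itertools import combinations

def d(u, v):
    """Hamming distance between two equal-length vectors."""
    return sum(a != b for a, b in zip(u, v))

# (3,1) repetition code: two codewords, d_min = 3
codewords = [(0, 0, 0), (1, 1, 1)]
d_min = min(d(u, v) for u, v in combinations(codewords, 2))
e = d_min - 1          # guaranteed error *detection* capability
t = (d_min - 1) // 2   # guaranteed error *correction* capability
print(d_min, e, t)     # 3 2 1
```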

Page 11: Lecture Five Coding - AAST

Linear block codes – cont’d

• For memoryless channels, the probability that the decoder commits an erroneous decoding is bounded by

P_M ≤ Σ_{j=t+1}^{n} C(n, j) p^j (1 − p)^(n−j)

where p is the transition probability (bit error probability) of the channel.

• The decoded bit error probability is approximately

P_B ≈ (1/n) Σ_{j=t+1}^{n} j · C(n, j) p^j (1 − p)^(n−j)
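Both expressions are easy to evaluate numerically (the code parameters below are hypothetical, for illustration only):

```python
from math import comb

def block_error_bound(n, t, p):
    """Upper bound on P_M for a code correcting t errors over a BSC with
    transition probability p."""
    return sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(t + 1, n + 1))

def bit_error_approx(n, t, p):
    """Approximate decoded bit error probability P_B."""
    return sum(j * comb(n, j) * p**j * (1 - p)**(n - j)
               for j in range(t + 1, n + 1)) / n

# Hypothetical numbers: a length-7 code correcting t = 1 error, p = 0.01
print(block_error_bound(7, 1, 0.01))
print(bit_error_approx(7, 1, 0.01))
```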

Page 12: Lecture Five Coding - AAST

Example of the block codes

[Figure: bit error probability P_B vs. E_b/N_0 (dB) for coded and uncoded transmission with 8PSK and QPSK]

Page 13: Lecture Five Coding - AAST

Convolutional codes

• A convolutional code is specified by three parameters (n, k, K), or equivalently (k/n, K), where
  – R_c = k/n is the coding rate, determining the number of data bits per coded bit.
  – K is the constraint length of the encoder; the encoder has K − 1 memory elements.
• In practice, usually k = 1 is chosen, and we assume that from now on.
• There are different definitions of the constraint length in the literature.

Page 14: Lecture Five Coding - AAST

Block diagram of the DCS

Information source → Rate 1/n conv. encoder → Modulator → Channel → Demodulator → Rate 1/n conv. decoder → Information sink

Input sequence: m = (m_1, m_2, …, m_i, …)

Codeword sequence: U = G(m) = (U_1, U_2, U_3, …, U_i, …), where U_i = (u_1i, u_2i, …, u_ni) is the i-th branch word (n coded bits).

Received sequence: Z = (Z_1, Z_2, Z_3, …, Z_i, …), where Z_i = (z_1i, z_2i, …, z_ni) holds the demodulator outputs for branch word U_i.

Decoded sequence: m̂ = (m̂_1, m̂_2, …, m̂_i, …)

Page 15: Lecture Five Coding - AAST

A Rate ½ Convolutional encoder

• Convolutional encoder (rate ½, K = 3)
  – 3 shift-register stages: the first takes the incoming data bit m; the rest form the memory of the encoder.

[Encoder diagram: for each input bit m, two coded bits are output as the branch word (u1, u2); the first coded bit u1 is the modulo-2 sum of all three stages, the second coded bit u2 is the modulo-2 sum of the first and last stages]

Page 16: Lecture Five Coding - AAST

A Rate ½ Convolutional encoder

Message sequence: m = (1 0 1)

Time | Register contents | Output branch word (u1 u2)
t1   | 1 0 0             | 11
t2   | 0 1 0             | 10
t3   | 1 0 1             | 00
t4   | 0 1 0             | 10

Page 17: Lecture Five Coding - AAST

A Rate ½ Convolutional encoder

Time | Register contents | Output branch word (u1 u2)
t5   | 0 0 1             | 11
t6   | 0 0 0             | 00

Encoder: m = (1 0 1) → U = (11 10 00 10 11)
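The encoder stepping can be sketched in Python (a minimal model, assuming the connections u1 = m ⊕ s1 ⊕ s2 and u2 = m ⊕ s2, which reproduce the branch words of this example):

```python
def conv_encode(msg, n_tail=2):
    """Rate 1/2, K=3 convolutional encoder; s1, s2 are the memory stages."""
    s1 = s2 = 0
    out = []
    for m in msg + [0] * n_tail:   # append tail bits to flush the memory
        u1 = m ^ s1 ^ s2           # first coded bit: sum of all three stages
        u2 = m ^ s2                # second coded bit: first and last stages
        out += [u1, u2]
        s1, s2 = m, s1             # shift the register
    return out

print(conv_encode([1, 0, 1]))  # [1, 1, 1, 0, 0, 0, 1, 0, 1, 1] = 11 10 00 10 11
```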

Page 18: Lecture Five Coding - AAST

State diagram

• A finite-state machine only encounters a finite number of states.
• State of a machine: the smallest amount of information that, together with the current input to the machine, can predict the output of the machine.
• In a convolutional encoder, the state is represented by the content of the memory. Hence, there are 2^(K−1) states.
• A state diagram is a way to represent the encoder: it contains all the states and all possible transitions between them.
• Only two transitions initiate from each state.
• Only two transitions end up in each state.

Page 19: Lecture Five Coding - AAST

State diagram – cont’d

[State diagram of the rate ½, K = 3 encoder; each transition is labeled input/output (branch word):

S0 = 00: 0/00 → S0, 1/11 → S2
S1 = 01: 0/11 → S0, 1/00 → S2
S2 = 10: 0/10 → S1, 1/01 → S3
S3 = 11: 0/01 → S1, 1/10 → S3]

Page 20: Lecture Five Coding - AAST

Trellis

• The trellis diagram is an extension of the state diagram that shows the passage of time.
  – Example of one section of the trellis for the rate ½ code:

[Trellis section between times t_i and t_i+1, states S0 = 00, S1 = 01, S2 = 10, S3 = 11, branches labeled input/output: 0/00, 1/11, 0/11, 1/00, 0/10, 1/01, 0/01, 1/10]

Page 21: Lecture Five Coding - AAST

Trellis – cont’d

• A trellis diagram for the example code:

[Full trellis over times t1 … t6; every section repeats the branch labels 0/00, 0/11, 0/10, 0/01, 1/11, 1/01, 1/00, 1/10]

Input bits:  1 0 1 0 0   (the last two are tail bits)
Output bits: 11 10 00 10 11

Page 22: Lecture Five Coding - AAST

Trellis – cont’d

[Same trellis, showing only the branches reachable from the all-zero starting state and terminating in it through the tail bits]

Input bits:  1 0 1 0 0   (the last two are tail bits)
Output bits: 11 10 00 10 11

Page 23: Lecture Five Coding - AAST

The Viterbi algorithm

• The Viterbi algorithm performs maximum likelihood decoding.
• It finds the path through the trellis with the best metric (maximum correlation or minimum distance).
  – It processes the demodulator outputs in an iterative manner.
  – At each step in the trellis, it compares the metrics of all paths entering each state, and keeps only the path with the best metric (e.g., the smallest distance), called the survivor, together with its metric.
  – It proceeds through the trellis by eliminating the least likely paths.
• It reduces the decoding complexity to L·2^(K−1).

Page 24: Lecture Five Coding - AAST

The Viterbi algorithm – cont’d

• Viterbi algorithm:
A. Do the following set-up:
  – For a data block of L bits, form the trellis. The trellis has L + K − 1 sections (levels), starting at time t_1 and ending at time t_L+K.
  – Label all the branches in the trellis with their corresponding branch metric.
  – For each state S(t_i) ∈ {0, 1, …, 2^(K−1) − 1} in the trellis at time t_i, define a parameter Γ(S(t_i), t_i).
B. Then, do the following:

Page 25: Lecture Five Coding - AAST

The Viterbi algorithm – cont’d

1. Set Γ(0, t_1) = 0 and i = 2.
2. At time t_i, compute the partial path metrics for all the paths entering each state.
3. Set Γ(S(t_i), t_i) equal to the best partial path metric entering each state at time t_i. Keep the survivor path and delete the dead paths from the trellis.
4. If i < L + K, increase i by 1 and return to step 2.
C. Start at state zero at time t_L+K. Follow the surviving branches backwards through the trellis. The path thus defined is unique and corresponds to the ML codeword.

Page 26: Lecture Five Coding - AAST

Example of Hard decision Viterbi decoding

[Trellis over t1 … t6 with branches labeled input/output, as on the previous slides]

m = (1 0 1), U = (11 10 00 10 11), Z = (11 10 11 10 01)

Page 27: Lecture Five Coding - AAST

Example of Hard decision Viterbi decoding – cont’d

• Label all the branches with the branch metric (Hamming distance between the branch word and the received frame Z_i).

[Trellis with branch metrics; the partial path metrics Γ(S(t_i), t_i) are then accumulated step by step]

Page 28: Lecture Five Coding - AAST

Example of Hard decision Viterbi decoding – cont’d

• i = 2

[Partial path metrics at t2: Γ(S0) = 2, Γ(S2) = 0]

Page 29: Lecture Five Coding - AAST

Example of Hard decision Viterbi decoding – cont’d

• i = 3

[Partial path metrics at t3: Γ(S0) = 3, Γ(S1) = 0, Γ(S2) = 3, Γ(S3) = 2]

Page 30: Lecture Five Coding - AAST

Example of Hard decision Viterbi decoding – cont’d

• i = 4

[Partial path metrics at t4: Γ(S0) = 0, Γ(S1) = 3, Γ(S2) = 2, Γ(S3) = 3]

Page 31: Lecture Five Coding - AAST

Example of Hard decision Viterbi decoding – cont’d

• i = 5

[Partial path metrics at t5, where only states S0 and S1 remain reachable through the tail bits: Γ(S0) = 1, Γ(S1) = 2]

Page 32: Lecture Five Coding - AAST

Example of Hard decision Viterbi decoding – cont’d

• i = 6

[Partial path metric at t6: Γ(S0) = 2]

Page 33: Lecture Five Coding - AAST

Example of Hard decision Viterbi decoding – cont’d

• Trace back along the survivor path, and then:

m̂ = (1 0 0), Û = (11 10 11 00 00)

(Note that the channel errors in Z exceed what the code can correct along this path, so the decoded message differs from the transmitted m = (1 0 1) in its last bit.)
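The whole hard-decision example can be checked with a short script (a minimal sketch of the algorithm for this specific rate ½, K = 3 code; the trellis tables below are written out by hand from the state diagram):

```python
# Trellis of the rate 1/2, K=3 encoder: NEXT[s][b] is the next state and
# OUT[s][b] the branch word for current state s (S0..S3) and input bit b.
NEXT = {0: (0, 2), 1: (0, 2), 2: (1, 3), 3: (1, 3)}
OUT  = {0: ((0, 0), (1, 1)), 1: ((1, 1), (0, 0)),
        2: ((1, 0), (0, 1)), 3: ((0, 1), (1, 0))}

def viterbi(frames):
    """Hard-decision Viterbi decoding; returns the ML input bit sequence."""
    INF = float("inf")
    metric = {0: 0}                    # start in state S0 with metric 0
    paths = {0: []}
    for z in frames:
        new_metric, new_paths = {}, {}
        for s, m in metric.items():
            for b in (0, 1):
                ns = NEXT[s][b]
                d = sum(a != r for a, r in zip(OUT[s][b], z))  # branch metric
                if m + d < new_metric.get(ns, INF):            # keep survivor
                    new_metric[ns] = m + d
                    new_paths[ns] = paths[s] + [b]
        metric, paths = new_metric, new_paths
    return paths[0]                    # trace back from state zero

Z = [(1, 1), (1, 0), (1, 1), (1, 0), (0, 1)]
decoded = viterbi(Z)
print(decoded[:3], decoded[3:])        # -> [1, 0, 0] [0, 0]
```

Re-encoding the decoded bits reproduces Û = (11 10 11 00 00), at Hamming distance 2 from Z, matching the final path metric Γ(S0, t6) = 2.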

Page 34: Lecture Five Coding - AAST

Generating the OFDM signal (1)

• Symbol (QPSK) x_i,k of sub-carrier i at time k
  – Other symbol alphabets can be used as well (BPSK, m-QAM)
• The baseband signal is generated by DSP:

s_BB,i,k(t) = w(t − kT) · x_i,k · exp(j·2π·i·Δf·(t − kT))

[Figure: QPSK constellation of x_i,k in the complex (Re/Im) plane; window function; sub-carrier]

Page 35: Lecture Five Coding - AAST

Generating the OFDM signal (2)

[Block diagram: N data symbols x_0,k, x_1,k, …, x_N,k (frequency domain) → serial-to-parallel → IDFT (IFFT) → s_0,k, s_1,k, …, s_N,k → parallel-to-serial → baseband time-domain signal s_n]
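The IDFT step can be sketched with NumPy (illustrative parameters; the windowing and guard interval from the earlier block diagram are omitted here):

```python
import numpy as np

N = 8                                   # number of sub-carriers (illustrative)
rng = np.random.default_rng(0)

# N QPSK data symbols in the frequency domain
bits = rng.integers(0, 2, size=(N, 2))
x = ((2 * bits[:, 0] - 1) + 1j * (2 * bits[:, 1] - 1)) / np.sqrt(2)

# One OFDM symbol: the IDFT turns the N symbols into N time-domain samples
s = np.fft.ifft(x) * np.sqrt(N)         # scaling keeps the average power at 1

# The receiver's FFT recovers the data symbols
x_hat = np.fft.fft(s) / np.sqrt(N)
print(np.allclose(x_hat, x))            # True
```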

Page 36: Lecture Five Coding - AAST

OFDM System Model

• Multiplication of the data symbols with the (complex-valued) channel transfer function:

y_i = h_i x_i + n_i

[Diagram: each sub-carrier symbol x_i,k, for i = −N/2, …, N/2 − 1, is multiplied by its channel coefficient h_i,k and disturbed by noise n_i,k, giving y_i,k]
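The per-sub-carrier model y_i = h_i·x_i + n_i can be verified numerically: with a cyclic prefix no shorter than the channel memory, the multipath channel reduces to a per-sub-carrier multiplication (a noise-free sketch with hypothetical parameters):

```python
import numpy as np

N, Ncp = 8, 3                           # FFT size and cyclic-prefix length
rng = np.random.default_rng(1)
x = rng.standard_normal(N) + 1j * rng.standard_normal(N)  # frequency-domain symbols
h = np.array([0.8, 0.4 + 0.2j, 0.1])                      # channel impulse response

s = np.fft.ifft(x)                      # time-domain OFDM symbol
tx = np.concatenate([s[-Ncp:], s])      # prepend the cyclic prefix
rx = np.convolve(tx, h)[: Ncp + N]      # channel = linear convolution
y = np.fft.fft(rx[Ncp:])                # drop the prefix, back to frequency domain

H = np.fft.fft(h, N)                    # channel transfer function h_i
print(np.allclose(y, H * x))            # True: y_i = h_i * x_i (noise-free)
```

The cyclic prefix makes the linear convolution with h look circular over one symbol, which is exactly why the FFT diagonalizes the channel.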

Page 37: Lecture Five Coding - AAST


Page 38: Lecture Five Coding - AAST
Page 39: Lecture Five Coding - AAST
Page 40: Lecture Five Coding - AAST
Page 41: Lecture Five Coding - AAST