Matlab toolbox for simulation of repeat-accumulate codes


Matlab toolbox for simulation of repeat-accumulate codes

Batisani Motsie

A thesis submitted in partial fulfilment of the requirements for the degree of Bachelor of Engineering in Telecommunications

at the University of Newcastle, Australia


Batisani Motsie 2005


Table of Contents

List of Figures
Abstract
Acknowledgements
Contribution of the Project
Chapter 1 Motivation and Introduction
  1.1 Motivation and Introduction
  1.2 Report Outline
Chapter 2 Error Correction
  2.1 Background
  2.2 Communication Systems
  2.3 Digital Communications
  2.4 Error Correction
  2.5 An Introduction to Error Correction
    2.5.1 Hamming Codes
    2.5.2 Convolutional Codes
    2.5.3 Repeat-Accumulate Codes
  2.6 Scope of this Project
Chapter 3 Convolutional Codes
  3.1 The Convolutional Encoder
    3.1.1 Parameters and Properties of a Convolutional Encoder
    3.1.2 The Structure of Convolutional Codes: Representations (State Transition Table, Finite State Machine (Mealy Machine), State Transition Diagram, Tree Representation)
    3.1.3 Analysis of the Encoder and the Algorithm for Matlab Implementation (Illustration of Recursion Through 1/(1+D))
    3.1.4 Software Design Considerations for the Convolutional Encoder
  3.2 Viterbi Decoding
    3.2.1 The Decoder Structure and Decoder Parameters
    3.2.2 Matlab Implementation (for a Hard Decision Decoder)
Chapter 4 Repeat-Accumulate Code
  4.1 User Data
  4.2 Generation of User Data
  4.3 Repeater
  4.4 The Interleaver
    4.4.1 Matlab Implementation of the Interleaver
    4.4.2 Another Design Consideration in Implementing the Interleaver
  4.5 Noisy Channel Model
    4.5.1 Properties of a BSC
    4.5.2 Theoretical Aspects of Binary Symmetric Channels
  4.6 Deinterleaver
    4.6.1 Matlab Implementation of the Deinterleaver
  4.7 Repeater Decoder
    4.7.1 Odd Number Based Repeater Decoder
    4.7.2 Even Number Based Repeater Decoder
    4.7.3 Output User Data
    4.7.4 Determination of Errors
Chapter 5 Iterative Decoding
  5.1 Half Code Rate Design and Implementation
    5.1.1 Matlab Implementation of the Trellis Decoder
  5.2 Iterative Decoding Scheme
    5.2.1 Details of the Iterative Scheme


    5.2.2 Matlab Implementation of the Iterative Scheme
Chapter 6 Results of Simulations and Analysis
  6.1 Simple Scheme (see Figure 4.1)
  6.2 Iterative Scheme
  6.3 Gaussian Noise Model
  Figure 6.4 Iterative Decoding in AWGN
Chapter 7 Conclusion and Future Work
  7.1 Achievements and Current Project Status
  7.2 Future Plans
Chapter 8 Problems Encountered
References


List of Figures

Figure 1.1 Elements of a communication system
Figure 1.2 Transmitter and receiver explored [1]
Figure 2.1 Communication system
Figure 2.2 Repeat-accumulate codes origin
Figure 2.3 [5] The maximum likelihood decoder
Figure 2.4 Turbo code encoder
Figure 2.5 [7] Repeat-accumulate scheme
Figure 3.1 Non-recursive encoder
Figure 3.2 State transition table
Figure 3.3 State transition diagram
Figure 3.4 [3] Tree diagram
Figure 12 [3] Trellis diagram
Figure 3.5 An encoder of rate 1/2
Figure 3.6
Figure 3.7 Components of a ML decoder
Figure 4.1 A scheme for repeat-accumulate codes
Figure 4.2 Matlab snapshot for repeater
Figure 4.3 Demonstration of interleaving
Figure 4.4 BSC diagram
Figure 4.5 BSC theoretical bit error rate curve
Figure 4.6 Action of deinterleaver
Figure 4.7 (a, b, c) Different error scenarios
Figure 5.1 Half code rate encoder
Figure 5.2 State transition table
Figure 5.3 Finite state machine diagram
Figure 5.4 Trellis diagram
Figure 5.5 Iterative scheme
Figure 6.1 BSC results
Figure 6.2
Figure 6.3 Effect of q
Figure 6.4 Influence of AWGN on convolutional decoding
Figure 6.5 Decoding without iteration


Abstract

The need for reliability in communication systems has made it necessary for engineers to develop error correction schemes whose performance is near the Shannon limit. More recent advancements include turbo codes, which in turn led to the development of better codes. This project focuses on a turbo-like scheme known as repeat-accumulate codes; the main aim was to develop a Matlab toolbox for simulating this scheme, which is based on iterative decoding and serial concatenation. A working prototype based on hard decision decoding was developed to demonstrate the principle of repeat-accumulate codes.


Acknowledgements

To my supervisor Dr Sarah Johnson, I want to thank you for the guidance, support and advice you provided. Had it not been for you, it would not have been possible to get this far with this project. I am indebted to the University of Newcastle staff, who made sure the learning environment was conducive to study. I am also indebted to my late father, Loftus Mpaphi Motsie: thank you for your encouragement.


Contribution of the project

The project involved the following items.

1. Research on repeat-accumulate codes.
2. Define the functionality of the scheme.
3. Develop software for the various functions of the repeat-accumulate scheme. Software was developed for the following:
   a. message generator
   b. repeater
   c. convolutional encoder 1/(1+D)
   d. channel model
   e. unity code rate decoder
   f. ½ code rate decoder
   g. interleaver
   h. deinterleaver
   i. repeater decoder
   j. convolutional decoder 1/(1+D)
4. Test each of the software pieces.
5. Interlace and test software components.
6. Test the complete scheme.
7. Comment on results obtained against theoretical predictions.



Chapter 1 Motivation and Introduction

1.1 Motivation and Introduction

Brief History

Mankind has always been engaged in the transfer of information, and our desire to achieve more advanced and reliable information transfer has been demonstrated over the past two centuries. Some of the traditional means of relaying information were through a messenger, or via smoke signals or drums, all of which had distinct advantages in their time with regard to reliability and efficiency. Traditional communications were certainly restricted to the small distances that man could travel. In 1799 the first electric battery was developed, a landmark in the development of modern communication that would eventually enable communications to reach anywhere on earth. Shortly afterwards the first telegraph system was developed using Morse code, first demonstrated in 1844, and the first telegraphic link between the US and Europe followed in 1858. The information transmitted over wires was represented by dots and dashes from a finite set, so the system was inherently a digital communication system. In the following years other means of communication were developed which used analog transmission, where continuous waveforms are transmitted.

• In 1876 Alexander Graham Bell invented the first telephone.
• The first transatlantic telephone cable between the US and Europe entered service in 1956.
• The first television was built by Vladimir Zworykin in the US and first demonstrated in 1929; in 1939 the BBC began commercial broadcasting.

The last fifty years have seen major developments in the area of communication. The birth of the transistor in 1947 made it all possible, and now we have microwave radio systems, lightwave systems, and satellite communications, e.g. Telstar and Early Bird.

All of these communications media were born in the digital era. Digital communication is preferred to analog communication for several reasons. The reason of interest here is that it allows errors in the system to be monitored and corrected, which is not nearly as easy in its analog counterpart.

The transfer of information is a process strongly affected by factors that may corrupt the message. These factors all constitute sources of noise, where noise is described as any signal that corrupts the desired signal. Communication engineers have developed, and continue to improve, the means of ensuring correct delivery of information. Techniques of error control have been devised, and they differ in how they achieve their objective. Principally there are two ways of curbing the consequences of errors in received information: either ask for a retransmission, or detect an error and then try to fix it at the receiver. The process of fixing errors attempts to predict the information that was sent based on what has been received. In fact, monitoring errors allows one to perform control at either the transmitter or the receiver.

Error control systems concern themselves with practical ways of achieving very low bit error rates after transmission over a noisy band-limited channel [5, p. 198]. The reason they have become so important is that everyone wishes to approach the Shannon capacity. In 1948 the engineer and mathematician Claude Shannon showed that it is theoretically possible to transmit information through a noisy channel with arbitrarily small probability of error, provided that the information or source rate R is less than the channel capacity C, that is, R < C for reliable transmission [5]. Since Shannon's remarkable theoretical results, communications engineers have been trying to construct codes which perform as well as his promise. Appendix A shows the Shannon hypothesis.
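As a concrete illustration of the capacity limit, the capacity of a binary symmetric channel with crossover probability p is the standard expression C = 1 − H2(p), where H2 is the binary entropy function. The sketch below is in Python rather than the thesis's Matlab, and the function names are purely illustrative:

```python
import math

def h2(p):
    """Binary entropy function H2(p), in bits."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log2(p) - (1 - p) * math.log2(1 - p)

def bsc_capacity(p):
    """Capacity of a binary symmetric channel with crossover probability p."""
    return 1.0 - h2(p)

# A channel that flips 10% of bits still has capacity ~0.53 bits per use,
# so Shannon's theorem promises reliable codes of any rate R < 0.53.
print(round(bsc_capacity(0.1), 3))   # → 0.531
```

Note that a completely random channel (p = 0.5) has zero capacity, while a noiseless one (p = 0) has capacity 1 bit per use.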


This project introduces some of the most recent technologies that approach the Shannon capacity: turbo-like codes known as repeat-accumulate codes, or RA codes for short.

The preceding paragraphs are the motivation for pursuing this project, which is to implement in software an error correction system that performs near the Shannon capacity limit. The project also exposes me to the latest error correcting techniques.

Two parameters are crucial in determining the efficacy of a communication system: the rate of message delivery, and reliability, which is a measure of how closely the received message resembles the original source message. Both should be observable once the project has been successfully completed.

1.2 Report Outline

Chapter 1 introduces digital communication systems and specifies the special requirements for their success. Digital communication systems are involved in data transfer between devices, e.g. antennas, computers, broadcasting stations and so on; it is therefore necessary to ensure that information exchange between devices is error free.

Chapter 2 introduces some error controlling mechanisms. The theory underlying error correcting codes, and in particular convolutional codes and turbo codes, is presented in order of invention. A good grasp of these concepts is necessary for understanding how repeat-accumulate codes work, because repeat-accumulate codes build on principles of technologies that already exist.

Chapter 3 introduces the design process for a convolutional encoder and the Viterbi decoder, as these two form the basis of this project. The last section of the chapter considers the implementation of the convolutional encoder and Viterbi decoder, and finally presents some software developed for the 1/(1+D) encoder and decoder.

Chapter 4 is mainly on the construction of repeat-accumulate codes. It outlines the various components of repeat-accumulate codes, their functionality, and the software implementation of those components.

Chapter 5 addresses an advancement of the repeat-accumulate scheme. It outlines the technical aspects and functionality of iterative decoding. In this chapter a half rate decoder is studied and the associated software implementation is elaborated.

Chapter 6 presents the simulation results for both the simple (code rate 1 decoding) and advanced (half rate decoding) repeat-accumulate schemes. An analysis of the observed simulations is also presented; the particular areas of focus are how the code rate influences the performance of a repeat-accumulate scheme and how iterative decoding can further improve performance.

Chapter 7 presents the conclusion and future work plans. The project milestones are also considered in this chapter.

Chapter 8 is a brief description of the software package used in the implementation and the problems encountered in using it.


Chapter 2 Error Correction

2.1 Background

This chapter introduces the general principles involved in a communication system that make information transfer from source to destination possible. It gives an overview of digital communications and error correction techniques, and closes with the scope of the project.

2.2 Communication Systems

Systems communicate to share information. A system is a machine capable of storing and using information; to communicate means to pass or transmit information. In a general sense, information refers to anything the transmitter knows that the receiver does not, hence the receiver has the responsibility of determining what information was sent.

All communication systems have the basic function of information transfer irrespective of their type. Fundamentally every communication system will achieve its objective through a succession of processes outlined below.

• Generation of the message signal.
• Description of the message signal with a certain precision, by assigning a set of symbols.
• Encoding of the symbols into a form suitable for transmission over a channel (the physical medium).
• Transmission of the encoded message to the desired destination.
• Decoding and reproduction of the original symbols.
• Reconstruction of the original message, with tolerable degradation in quality due to imperfections in the channel.

Based on the items listed above, one can represent a communication system graphically.

Figure 1.1 Elements of a communication system (Source → Transmitter → Transmission Channel → Receiver → Destination, with noise, interference and distortion acting on the channel)


2.3 Digital Communications

Although there is always a choice between analog and digital communications, the latter offers more flexibility and hence is widely used. Digital and analog communications are explored in [3,5]. All types of communication system still have the same objective: that the message be delivered to the destination efficiently and reliably, subject to the design constraints of allowable power, available channel bandwidth and the affordable cost of building the system [3].

In its basic form, digital communication allows the design of waveforms for the transmission of information from source to destination. The design of waveforms employs a data transmission code, a procedure for representing a message using symbols from a finite set of discrete elements; in binary digital systems these are 0 and 1. The message resides in discrete symbols, so a digital system is designed to deliver the symbols with a specified degree of accuracy in a specified amount of time.

From Figure 1.1 (elements of a communication system), the transmitter and the receiver can be expanded as shown below, with the transmission channel between them.

Figure 1.2 Transmitter and receiver explored [1]


2.4 Error Correction

The transmission of a message from its source to its intended destination is by no means immune to channel noise; well-known examples are AWGN1 and thermal noise. Error correction seeks to protect messages from distortions due to imperfections of the channel, which are normally random in behaviour. The nature of channel noise means that the system may deliver a wrong output for a certain input; sometimes correct bits are received, and sometimes the transmitted symbol is simply lost. In all cases, the receiver is never certain what input was transmitted. To improve communications, an encoder is inserted between the binary source and the channel. The encoder's purpose is to prepare the message so that any errors and erasures caused by the channel can be detected and corrected at the destination. Likewise, at the destination an error correction decoder is inserted between the channel and the sink to correct transmission errors based on information from the encoder. Several error correcting techniques are deployed, and their performance can be measured by comparing them with each other and with the theoretical best performance given by Shannon's channel capacity theory [5].
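To make the encoder/channel/decoder idea concrete, the sketch below (in Python rather than the thesis's Matlab, with illustrative function names) uses the simplest possible error correction code — repeat every bit three times and take a majority vote at the receiver — around a simulated noisy channel:

```python
import random

def encode_repetition(bits, n=3):
    """Error-correction encoder: repeat every source bit n times."""
    return [b for b in bits for _ in range(n)]

def bsc(bits, p, rng):
    """Noisy channel: flip each bit independently with probability p."""
    return [b ^ 1 if rng.random() < p else b for b in bits]

def decode_repetition(bits, n=3):
    """Error-correction decoder: majority vote over each group of n bits."""
    return [1 if sum(bits[i:i + n]) > n // 2 else 0
            for i in range(0, len(bits), n)]

rng = random.Random(1)
message = [rng.randint(0, 1) for _ in range(1000)]
received = bsc(encode_repetition(message), p=0.05, rng=rng)
decoded = decode_repetition(received)
errors = sum(m != d for m, d in zip(message, decoded))
# With p = 0.05 a decoded bit is wrong only when 2 or more of its 3 copies
# flip (probability about 0.007), far below the 0.05 raw channel error rate.
print(errors)
```

The price of this reliability is rate: three channel bits are spent per source bit, which is exactly the trade-off more sophisticated codes manage better.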

The figure below summarises the above information graphically.

Figure 2.1 Communication system (Binary Source → Error Correction Encoder → Noisy Channel → Error Correction Decoder → Sink)

1 AWGN means additive white Gaussian noise, the most common form of noise in transmission channels.


2.5 An Introduction to Error Correction

This section introduces some error correction techniques in ascending order of error correction capability. This approach is suitable here because repeat-accumulate codes are not standalone codes: to understand how they work one needs knowledge of existing codes, since repeat-accumulate codes use principles that are already known and applied elsewhere. Indeed, they are simply a subset of serially concatenated turbo codes, and turbo codes have been known for a while. I will consider Hamming, convolutional, turbo and repeat-accumulate codes.

2.5.1 Hamming codes

Hamming codes were invented by Richard Hamming in the late 1940s. They are a class of linear block codes that can correct a single error at a time, and their success relies on the use of parity bits. In Hamming codes the transmitter generates a sequence of parity bits and appends them to the message. The message with its added parity bits is then sent over a noisy channel and, as a result, will not necessarily be error free on arrival at the receiver.

When data is launched into a communication channel it satisfies a specific pattern or condition and this is useful at the receiver. The receiver checks the incoming bit stream to determine if the pattern is satisfied and if it is not, then an error has occurred. The receiver is certain that an error occurred because no such pattern would have been transmitted by the encoder.

Hamming codes can detect the presence of an error as well as fix it. Correcting the error is really a simple idea: when the receiver detects that a bit is in error, it simply complements that bit. Alternatively, Hamming codes may be used purely as a detection mechanism, with the receiver notifying the transmitter that it received erroneous data so a retransmission should be made. Normally retransmission is used if fixing errors at the receiver takes longer than the retransmission process, or if there is not enough information to determine which bits are in error. The challenge associated with Hamming codes is determining the parity bits, although this is an inexpensive scheme since each parity bit is computed over a different combination of data bits using exclusive OR operations.

The number of parity or error check bits required is given by the Hamming rule, and is a function of the number of bits of information transmitted and needs to be followed in order to produce reliable parity pattern. The Hamming rule is expressed by the following inequality:

d + p + 1 ≤ 2^p   [6]

where d is the number of data bits and p is the number of parity bits.
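For instance, with d = 4 data bits the rule requires p = 3 parity bits, since 4 + 3 + 1 = 8 ≤ 2^3. The sketch below (in Python rather than the thesis's Matlab, with illustrative function names) implements the standard (7,4) Hamming code: each parity bit is an exclusive OR over a different combination of data bits, and the receiver recomputes the checks to locate and complement a single flipped bit:

```python
# Hamming(7,4): d = 4 data bits, p = 3 parity bits; the rule holds with
# equality: d + p + 1 = 8 <= 2**p = 8.
def hamming74_encode(d):
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4                     # XOR over codeword positions 3,5,7
    p2 = d1 ^ d3 ^ d4                     # XOR over positions 3,6,7
    p3 = d2 ^ d3 ^ d4                     # XOR over positions 5,6,7
    return [p1, p2, d1, p3, d2, d3, d4]   # codeword positions 1..7

def hamming74_correct(r):
    """Recompute the parity checks; the syndrome gives the error position."""
    s1 = r[0] ^ r[2] ^ r[4] ^ r[6]
    s2 = r[1] ^ r[2] ^ r[5] ^ r[6]
    s3 = r[3] ^ r[4] ^ r[5] ^ r[6]
    pos = s1 + 2 * s2 + 4 * s3            # 0 means no error detected
    if pos:
        r[pos - 1] ^= 1                   # complement the erroneous bit
    return [r[2], r[4], r[5], r[6]]       # extract the data bits

word = hamming74_encode([1, 0, 1, 1])
word[4] ^= 1                              # the channel flips one bit
print(hamming74_correct(word))            # → [1, 0, 1, 1]
```

Whichever single position is flipped, the three recomputed checks point directly at it, which is exactly the single error correcting (SEC) behaviour described above.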

Hamming codes have their own limitations: as established above, they can correct only one error at a time [6], and because of this they are also known as single error correcting (SEC) codes. The inequality above also imposes restrictions on the performance of Hamming codes: if p < 2, the codeword may not be worthwhile once the overhead is considered, because some errors may go undetected. The last thing to note is that Hamming codes are high rate codes with a minimum distance between codewords of 3 [5]. The Hamming distance, as it is normally called, has a direct


influence on the error correcting capability of Hamming codes and hence their limited capability.

The limited capability is attributed to the minimum Hamming distance, which by definition is the minimum number of positions in which any pair of codewords differ [5].

A binary block code with minimum distance dHmin can correct all error patterns of weight

t = (dHmin − 1) / 2

or fewer (rounded down). Here dHmin = 3 for Hamming codes, so they can correct only one error.

Number of detectable errors = dHmin − 1

There are other, more complex block codes with larger dHmin which therefore provide better performance, but these won't be discussed in detail because the focus of this project is convolutional codes.

2.5.2 Convolutional codes

Convolutional codes were introduced around 1950. They are a class of codes in which an input stream is mapped onto a block of n output channel bits according to some rule or code generator. The biggest difference between block codes and convolutional codes is that in convolutional codes encoding is continuous. Areas where convolutional codes are applied include deep space communication and satellite communication. In the figure below we can see where RA codes come from.

Figure 2.2 Repeat-Accumulate codes origin

(LDPC: low-density parity-check codes.) Convolutional codes use some of the common decoders known in communication engineering; among these decoders are:


• Maximum a posteriori probability (MAP) decoder
• Maximum likelihood (ML) decoder
• Minimum distance decoder

Under certain conditions the MAP and ML decoders perform equally. Furthermore, assuming the condition under which the performance of the ML decoder is the same as that of the MAP decoder, it can be shown that over a binary symmetric channel the ML decoder becomes a minimum distance decoder; the relevant mathematical background is presented in Chapter 4 under Review of MAP and ML. In this project the minimum Hamming distance has been used.

Graphical representation of a maximum likelihood decoder (c → r → c̃):

Figure 2.3 [5] The maximum likelihood decoder

In a maximum likelihood decoder one chooses the estimate codeword for which the log-likelihood function ln p(r | c̃) is maximised; r is the received codeword vector and c̃ is the estimate of the transmitted codeword c. When the estimated codeword is chosen such that the Hamming distance between it and the received vector is minimum, the maximum likelihood decoder reduces to a minimum distance decoder [5, pp. 204-207]; for convolutional codes this ML decoder is commonly known as the Viterbi decoder [3, 5].

Convolutional codes offer more error-correcting capability than Hamming codes because the number of errors that can be detected can be greater than two. Like many other coding schemes, however, convolutional codes are not perfect. Catastrophic conditions can occur, in which the decoder cannot determine what the transmitted codeword was; this happens if the bits received at the decoder and the codeword differ in many positions [4]. This will be explained under convolutional encoding and decoding in Chapter 3.

Turbo codes
Turbo codes, whose name comes from likening their operation to a turbo engine, use two convolutional codes for error correction. Turbo codes were born in 1993, when the researchers involved were seeking codes that could approach the Shannon capacity. The work of Berrou et al. (1993) [7] was met with disbelief from other communication engineers at the time, because it lacked rigorous theoretical proof and the performance results were far beyond those of the traditional error-correction techniques of the day. However, independent work by other engineers confirmed that turbo codes were indeed remarkable.
Turbo codes have enough randomness to achieve reliable communication at data rates near capacity, yet enough structure to allow practical encoding and decoding algorithms. Turbo codes are also known as parallel concatenated convolutional codes (PCCC) because their implementation uses two convolutional encoders in parallel. Since the encoders are parallel, they act on the same information at the same time, rather than one encoder processing the information and then passing it on to the second [5].

(Figure 2.3 blocks: message generator → probabilistic transmission system → decision.)


The decoding of turbo codes is what gives them their superb performance: iterative decoding is used, which allows two MAP decoders to be employed. The decoders interchange the data they process through a middle stage called the interleaver. The interleaver makes the positions of the nonzero information bits independent at each encoder, which helps improve the distance properties of the code. This leads to a great improvement in performance compared with the classical large-constraint-length convolutional codes, which are complex and through which low bit error rates are very hard to achieve.

Turbo code encoder scheme:

Figure 2.4 Turbo code encoder

2.5.3 Repeat accumulate codes
The invention of turbo codes opened paths to other, better coding techniques, and research continues to devise even better codes. Among the more recent codes are the repeat-accumulate codes, invented in 1998. The principle of a repeat-accumulate code is to take an input message and increase its redundancy by repeating it a certain number of times before encoding, as illustrated in the figure below.

Figure 2.5 [7] repeat accumulate scheme.

From the figure above one observes that after the repetition stage the message goes through the interleaver, which rearranges it. The following stage is a rate-1 convolutional encoder called the accumulator, whose output is the coded sequence. In visualising the operation of this scheme one can ignore the steps that precede the encoder and treat the encoder just like any other convolutional encoder. The encoder could be any of the possible ones, but repeat-accumulate codes call for short, low-complexity encoders, and for this reason the 1/(1+D) encoder is commonly used; the same encoder is used in this project.

The encoder in this scheme has memory, so the output of the scheme depends on both the previous and current input values (message bits); the accumulator exhibits a recursive nature, and the output follows the pattern below:


y1 = x1
y2 = x1 + x2
y3 = x1 + x2 + x3
...
yn = x1 + x2 + x3 + ... + xn = Σ (i = 1 to n) xi   (mod 2)
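The pattern above is just a running modulo-2 sum, i.e. a running XOR. A minimal Python sketch (an illustrative helper, not taken from the thesis toolbox, which is in Matlab):

```python
def accumulate(x):
    """Accumulator output y[n] = (x[1] + ... + x[n]) mod 2 -- a running XOR."""
    y, acc = [], 0
    for bit in x:
        acc ^= bit      # acc holds the mod-2 sum of all bits seen so far
        y.append(acc)
    return y

print(accumulate([1, 0, 1, 1]))  # -> [1, 1, 0, 1]
```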

Decoding of repeat-accumulate codes is relatively easy to implement. Repeat-accumulate codes are a subclass of the turbo codes, and because turbo codes are built from convolutional codes it is practically possible to use a maximum likelihood decoder; in the case of additive white Gaussian noise the maximum likelihood decoder is just the Viterbi decoder. The usual Viterbi algorithm is thus used to implement the decoder.

2.6 Scope of this Project
Sections 2.1 to 2.3 of the Background have highlighted how communication systems operate. The major concern is overcoming channel noise, and this project aims to explore one of the latest error-correction techniques: the repeat-accumulate codes, a member of the family of Shannon-capacity-approaching codes that are iteratively decoded. When the full potential of the project has been realised, the end result will be a complete Matlab toolbox for the simulation of repeat-accumulate codes.


Chapter 3 Convolutional Codes

Chapter 3 introduces my approach to the project question: the convolutional encoder mentioned earlier is illustrated and its relevance to the problem at hand is made clear. The chapter ends by introducing the type of decoder used in this project, the Viterbi decoder, outlining its basic components and operation. For both the convolutional encoders and the Viterbi decoder, sample results from simulations I performed are presented.

3.1 The Convolutional Encoder
Convolutional encoders have a special feature that makes them unique: in processing data they do not have to wait for a whole block of the message; they process the message bit by bit in serial fashion. The presence of a memory structure makes it unnecessary to buffer a block of information, which reduces the complexity of the encoding scheme compared with block codes [5]; this will be illustrated in the following paragraphs when we consider the mechanism of encoding.

3.1.1 Parameters of a convolutional encoder and properties
All convolutional encoders are defined by the following parameters: (n, k, m, L and Sn)

• n = number of output bits
• k = number of input bits
• m = number of memory registers
• L (represented as K in other texts) = constraint length
• Sn = number of states = 2^m

In this text I will avoid using K, since it might be confused with the k for the number of input bits; L will be used instead. The code rate of a convolutional encoder is specified as the ratio of k to n, and it provides a measure of the efficiency of the encoder. Another term introduced is the constraint length, which in simple terms measures the number of shifts over which an input bit influences the output of the encoder. This quantity is given mathematically as L = m + 1, which is evident from the encoder: for an m-stage register, a message bit requires m + 1 shifts to enter and finally leave the shift register. In other texts the constraint length is specified differently, using the notation adopted by manufacturers, L = k(m − 1). For its simplicity I will stick to the former definition of the constraint length [3, 5, 8].

Properties of Convolutional Codes

• k information bits are mapped onto a codeword of n bits, and the codewords are interdependent due to memory; this is not the case in block codes. The interdependence will be seen in the derivation of the encoder output.
• Only a small number of simple convolutional codes are of practical interest.
• Convolutional decoders can easily process soft-decision inputs and produce soft-decision outputs.
• Convolutional encoders fall into one of two classes: non-systematic or systematic codes.


3.1.2 The structure of Convolutional Codes
Representation
Convolutional codes, as already illustrated in Chapter 2, can be represented in one of three possible forms: tree, trellis, or finite state machine, and each method has advantages depending on the size of the encoder. The larger the encoder, the more complex it becomes, and the trellis is then the most appropriate representation. In the following I illustrate each structure through an example.

Figure 3.1 non-recursive encoder


Let U represent the input message and V the output or codeword; m = 2, L = 3, Sn = 4, k = 1, n = 2.

The code rate of this encoder is therefore R = k/n = 1/2.

Initially the register contents are 0 0 and there is no output until the first bit is shifted in. Shift in a 1: the new state is 1 0 and the output is 1 1. Shift in another 1: the new state is 1 1 and the output is 1 0. So the first two bits of the message 1101 produce the output 1 1 1 0.

Example
The table below shows the complete set of state transitions.

State Transition Table

input   current state   next state   codeword
  0          00             00          00
  1          00             10          11
  0          01             00          11
  1          01             10          00
  0          10             01          10
  1          10             11          01
  0          11             01          01
  1          11             11          10

Figure 3.2 State transition table
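The table can be regenerated mechanically from the shift-register equations. The sketch below is in Python rather than Matlab, and the output taps v1 = u XOR s1 XOR s2 and v2 = u XOR s2 are an assumption made here because they reproduce every row of the table above; the actual connections are those defined by Figure 3.1.

```python
def step(u, state):
    """One encoder step: return (next_state, codeword) for input bit u.

    state = (s1, s2) is the shift-register contents; the tap equations
    below are an assumption consistent with the state-transition table.
    """
    s1, s2 = state
    next_state = (u, s1)                  # shift the new bit in
    codeword = (u ^ s1 ^ s2, u ^ s2)      # assumed taps v1, v2
    return next_state, codeword

# Regenerate the full state-transition table:
for s1 in (0, 1):
    for s2 in (0, 1):
        for u in (0, 1):
            nxt, cw = step(u, (s1, s2))
            print(u, (s1, s2), nxt, cw)
```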

Page 21: Matlab toolbox for simulation of repeat-accumulate codes

Batisani Motsie 2005

14

Finite State Machine Representation (Mealy Machine)
In the state machine representation the influence of the input bits on the register states is considered. As the input bits arrive they change the register value, and this continues until all the message bits have been encoded; in binary convolutional encoding each message bit is either a zero or a one. In particular, the state of the encoder here refers to the contents of the shift register.

State Transition Diagram

figure 3.3 state transition diagram


Tree Representation of the encoder
The tree diagram below is based on the encoder above and shows the transitions possible for the binary inputs. Starting from the initial register state of 00, the tree below can be developed. The bits associated with each branch are the branch output; the line style (solid or dashed) distinguishes an input of 1 from an input of 0, as indicated on the diagram.


Figure 3.4 [3] Tree Diagram

Trellis Diagram of the encoder

Figure 12 [3] Trellis Diagram

The trellis diagram above shows the first j−1 depth levels, which show the departure of the encoder from the initial all-zero state; another j−1 steps take the encoder back to the all-zero state.



3.1.3 Analysis of the encoder and the algorithm for Matlab implementation

So far only representations of the convolutional encoder have been illustrated; in this section it will become apparent that for convolutional codes the algebraic methods of block codes are not relevant, and construction techniques are used instead. It was mentioned previously that the convolutional encoder has memory, and as such the current output depends on both the current input and the previous register contents. This is illustrated below, and a special notation is adopted: the letter D represents a delay, i.e. a delayed version of the input in the register.

Figure 3.5 an encoder of rate 1/2

D u(l) = u(l − 1), D² u(l) = u(l − 2), and in general Dⁿ u(l) = u(l − n). For the two paths of the encoder we obtain the generator polynomials:

g1(D) = 1 + D + D²
g2(D) = 1 + D

In some texts generator polynomials are referred to as impulse responses, since the coefficients of the delayed versions have magnitude 1; precisely, a coefficient of 1 indicates a connection to a mod-2 adder. In general the coefficients of the generator polynomials satisfy

gk(i) = 1 if there is a connection to the mod-2 adder, and 0 otherwise,

where i refers to the path and k to the kth generator polynomial from a finite set of polynomials: 0 < i < N, where N is the number of paths, and 0 < k < J, where J is the total number of generator polynomials. If M(D) represents the message, the codeword associated with each path is ci(D) = gi(D) M(D). If the encoder in question has several paths, the final codeword v(l) is obtained by multiplexing the outputs of the various paths.


Now if we use the alternative form of D, i.e. u(l − i), then for each path in the encoder we observe that:

ci(l) = u(l) + g1·u(l−1) + g2·u(l−2) + g3·u(l−3) + ... + gm·u(l−m)   (*)

where the gn are the coefficients of the generator polynomial being considered. This can be represented more compactly as

ci(l) = u(l) + Σ (n = 1 to m) gn·u(l−n)   (mod 2)

where m is the memory length, u(l) is the current register input and u(l−n) is a previous register input. If there is only one path, which is the case with repeat-accumulate codes, then the expression above is also the expression for the output codeword.

The algorithm above addresses a non-recursive system: the output of the encoder at any time depends on the current input bit and the previous register state. There is an alternative, the recursive convolutional codes, and these have proved better in performance than non-recursive convolutional codes. In a recursive convolutional code there is feedback, and the current output sequence is influenced by the previous output. A recursive encoder is illustrated below through the 1/(1+D) accumulator, with input u(i) and output y.

Figure 3.6 A recursive 1/(1+D) encoder

3.1.4 Software design considerations for the convolutional encoder

1. Establish the generator polynomials.
2. Represent the message in its delayed-version (polynomial) form.
3. Encode: polynomial multiplication to obtain the codeword.
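Step 3, polynomial multiplication over GF(2), can be sketched as follows. This is an illustrative Python version (the thesis implementation is in Matlab, and the function names here are hypothetical):

```python
def poly_mul_mod2(g, m):
    """Multiply two polynomials over GF(2).

    Coefficient lists are ordered lowest degree first, so
    [1, 1, 1] represents 1 + D + D^2.
    """
    out = [0] * (len(g) + len(m) - 1)
    for i, gi in enumerate(g):
        for j, mj in enumerate(m):
            out[i + j] ^= gi & mj   # addition over GF(2) is XOR
    return out

def encode(message, generators):
    """One output stream per generator polynomial; multiplex as needed."""
    return [poly_mul_mod2(g, message) for g in generators]

# (1 + D + D^2)(1 + D^2 + D^3) = 1 + D + D^5 over GF(2)
print(poly_mul_mod2([1, 1, 1], [1, 0, 1, 1]))  # -> [1, 1, 0, 0, 0, 1]
```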

3.2 Viterbi Decoding

The Viterbi decoder is one of many decoders that could be used; in this case the desire is to have a maximum likelihood decoder, which works by tracing paths through a trellis structure. Depending on the received bit pattern, the decoder decides on the most likely path using either the minimum Hamming distance or the Euclidean distance. Soft-decision Viterbi decoders use the Euclidean distance, while hard-decision decoders use the minimum Hamming distance when the channel model is binary symmetric, which is the case in this project's simulation. Although the two types of Viterbi decoder use different measurements for decision making, their algorithm is the same and is given below after the decoder structure is presented.

3.2.1 The decoder structure and decoder parameters
The structure of the decoder comprises three functional blocks.



Figure 3.7 Components of a ML (Viterbi) decoder: branch metric computing unit; add, compare, select unit; trace-back memory. r is the received bit sequence and u is the most likely path determined by the decoder, i.e. the decoded sequence.

To be operational the decoder must maintain a state metric update, i.e. be able to store the surviving path. Along with this there is the decision step, where three activities are carried out: the add, compare and select (ACS) stages. These steps are repeated at each node in the trellis. The ACS and the metric update pass their decisions on to the next stage, and this repeats until the end of the trellis, i.e. when decoding is completed. The last stage is the trace-back: the decoder has memory to store the selected path, and at the end of decoding it traces back through the trellis and outputs the contents of the trace-back memory, which is the decoder's decision. A decoder has two parameters, the number of states and the depth level, which are all one needs to know for a decoding process. For an example showing mechanical decoding see [5], pp. 305-306.

3.2.2 Matlab Implementation (for a hard decision decoder)

Viterbi decoder algorithm
1. Initialisation of the decoder
1.1 Initialise the left-most state of the trellis to 0.
1.2 Set the first state to all zero.
1.3 Set the path metric (MP), metric computation (MC) and branch metric (bm) to zero.
2. Branch metric computation
2.1 At each depth level j determine the branch metric (Hamming or Euclidean). The value of the current branch metric is the current value summed with the previous cumulative branch value; thus for hard decision

bm = dH( r(i), u(i) )

and for soft decision it is computed as

bm = Σ (k = 1 to 2) (rk − vk)²   [5, p. 308].

For further details refer to section 7.3.3 of [5].

3. Sum branch metrics, compare and select
For each state there are two incoming branches from precursor states, and the surviving branch is:



Mi = min{ M1(i−1) + bm1(i), M2(i−1) + bm2(i) }, where 1, 2 refer to branches 1 and 2 respectively.

4. Path memory update
At each node the surviving branch of the two predecessor branches is determined from the previous path metric and the current branch output with minimum Hamming distance:

MPi = (MPi−1, vik), where vik is the output of the surviving branch.

5. Decode symbols
At the end of step 4, when the entire trellis has been exhausted, the decoder traces back through the trellis along the stored path only, and the perceived sent codeword is simply the branch outputs in reversed order.

Limitations of Decoders
In Chapter 2 (2.5.2) it was mentioned that the catastrophic code condition results in a decoder failing to determine the coded word. With a catastrophic convolutional code a small number of channel bit errors causes a large number of decoded errors; such codes should be avoided because the decoder will never be able to identify the correct path in the trellis, a condition known as decoder failure. In the state transition diagram this condition shows up as a loop in which a nonzero information sequence corresponds to an all-zero output sequence.
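The five steps above can be condensed into a short hard-decision sketch. This Python version (the project code is in Matlab) is written for a rate-1/2 encoder whose taps, v1 = u XOR s1 XOR s2 and v2 = u XOR s2, are an assumption consistent with the state-transition table given earlier; it is a sketch of the algorithm, not the thesis implementation.

```python
def step(u, state):
    """One encoder step: (next state, output pair) for the assumed taps."""
    s1, s2 = state
    return (u, s1), (u ^ s1 ^ s2, u ^ s2)

def encode(bits):
    state, out = (0, 0), []
    for u in bits:
        state, (v1, v2) = step(u, state)
        out += [v1, v2]
    return out

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def viterbi_decode(received):
    # Survivor per state: state -> (cumulative Hamming metric, decoded bits)
    metrics = {(0, 0): (0, [])}           # step 1: start in the all-zero state
    for j in range(0, len(received), 2):
        r = received[j:j + 2]
        new = {}
        for state, (m, bits) in metrics.items():
            for u in (0, 1):              # one branch per possible input bit
                nxt, out = step(u, state)
                cand = (m + hamming(out, r), bits + [u])   # step 2: branch metric
                if nxt not in new or cand[0] < new[nxt][0]:
                    new[nxt] = cand       # step 3: add, compare, select
        metrics = new                     # step 4: path memory update
    return min(metrics.values())[1]       # step 5: best surviving path
```

For example, a single channel bit error in an 8-bit received word is corrected, which is the behaviour steps 2 to 5 are designed to provide.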


Chapter 4 Repeat-Accumulate codes

Repeat-accumulate codes are serially concatenated error-control codes: in serial concatenation the components are arranged in series and information flows from one component to the next. Chapter 4 presents a complete listing of the various components of a repeat-accumulate coding scheme, and the diagram below shows the link between the components.

Figure 4.1 A scheme for repeat-accumulate codes with ML decoding

4.1 User data
The user data is the message, represented in binary form, that is to be transmitted. There are various sources of user data, such as speech, temperature measurements and fluid-flow readings. The most important thing is that the information be represented in binary form; this is addressed by the aspect of communication systems that deals with efficient data representation, known as source coding. Source coding is not within the scope of this project and will not be discussed further.


4.2 Generation of user data
In this project the user data is simply a vector whose length can be set before simulation. The bits in the user data are generated using the Matlab randint function, which generates a 0 or 1 with equal probability. In the Matlab code the generation of the user data occurs in the main function, which in turn calls the repeater function. A transpose is applied to the user data so that a column vector is produced; it is then simple to concatenate the column a number of times, i.e. the number of repetitions. The result is a matrix of size n × length(user data), where n is the repetition number. The message that the scheme operates on is read from the rows of this matrix, as illustrated by the example under the repeater description below.

4.3 Repeater
The repeater is one of the important components of the system; its main function is to introduce redundancy into the transmitted message. It achieves this by repeating each message bit a number of times before the message is passed on to the interleaver. In this experiment the number chosen is three, although other numbers have been used to illustrate improved performance at lower code rates. Choosing a repetition of three is common, based on the fact that there is always a trade-off between code rate and the throughput of a communication system: a repetition of three is the minimum that achieves both reasonable throughput and sufficient capability to eliminate or reduce errors in the decoded data. Choosing the repetition number is up to the code designer, as long as the other communication requirements, namely system throughput and target error probability, are taken into account. The example below illustrates the operation of a repeater that uses three repetitions per bit.

User data:        0 0 1 1 1 0 0 1 0 0 1

Transmitted data: 000 000 111 111 111 000 000 111 000 000 111

The transmitted data is therefore 000000111111111000000111000000111.

Matlab Implementation of the Repeater
The repeater is implemented in Matlab using concatenation, as the pseudo code below shows.

1. Accept the user message, which is a column vector.
2. Determine the number of times each bit is to be repeated; this value is passed in from the main function.
3. Repeat each bit the number of times specified above.
4. Read the matrix row-wise and output the vector of new user data (the user data after repetition).
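The four steps above amount to a one-line repetition operation. A Python sketch for illustration (the thesis code is in Matlab; the name `repeat` is chosen here):

```python
def repeat(bits, n=3):
    """Repeat each bit n times, e.g. [0, 1] -> [0, 0, 0, 1, 1, 1] for n = 3."""
    return [b for b in bits for _ in range(n)]

print(repeat([0, 1, 1]))  # -> [0, 0, 0, 1, 1, 1, 1, 1, 1]
```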


Matlab sample output

Figure 4.2 Matlab snapshot of the Repeater. C is the transpose of the message column vector B and is 11 bits long. Repeateroutcome is C with each bit repeated 3 times, i.e. 33 bits. Step 4 of the pseudo code above is not shown in the snapshot; only the important features have been captured. Repeateroutcome is determined from the matrix of step 4, and the output in the snapshot is exactly the same as in the example above.

4.4 The Interleaver
Interleaving is a process used to scramble data in a manner that increases the randomness of the bits before they are launched into the channel; by doing so the data is made more resilient to channel noise. Carefully introducing an interleaver reduces the number of codewords at minimum distance and as a result greatly improves the decoding process; if a poor interleaver is chosen, the consequence is that codewords may be too close together. Several interleaving schemes exist, and the choice among them depends on performance and simplicity. Some of these schemes are: the block (row-column) interleaver, helical interleaver, pseudo-random interleaver and odd-even interleaver. In this project the interleaver of choice is the block interleaver. This scheme is very simple to implement but has one major limitation: it has little randomness, so it is not the best interleaver for very long codes. In a block interleaver data is written row-wise and read column-wise, and this configuration has the advantage that a burst of errors of length less than the


number of rows (the interleaver depth) results only in isolated single errors in the codeword [10], as illustrated below.

Input stream: a1 a2 a3 a4 a5 a6 a7 a8 a9 a10 a11 a12 a13 a14 a15

Output of the interleaver: a1 a6 a11 a2 a7 a12 a3 a8 a13 a4 a9 a14 a5 a10 a15

4.4.1 Matlab implementation of the interleaver
The interleaver described above is realised in Matlab by the following pseudo code.

1. Accept the vector of transmitted user data (which is n times as long as the user data).
2. Determine the length of this vector.
3. Using the length, create a matrix with ceil(sqrt(length)) columns.
4. The number of rows is obtained as length(transmitted user data) / columns.
5. Write the transmitted user data row-wise.
6. Read the transmitted user data column-wise.

4.4.2 Another design consideration in implementing the interleaver
The designer is normally afforded the opportunity to decide on the dimensions of the interleaver matrix, and in my project I have preferred to use the square root of the vector length, rounded up if it is not a whole number. In this way the numbers of rows and columns are equal, so the matrix is square. In implementing the interleaver it is also important to make sure that the interleaver matrix is completely filled. Any empty spaces are filled with a special character, e.g. -1, that can be ignored later when sending the user data to the accumulator. To do this the -1's are appended to the end of the transmitted user data as below:

e = -1*ones (1, row^2 - length (vector));
vector = [vector, e];     % vector = transmitted user data

Although other options, such as using knowledge of the repetition number, would make building the matrix easier, the choice above is more general: it handles a vector of any length, does not require the vector length to be a multiple of the user-data length, and assumes no knowledge of the repetition number.

Interleaver matrix (input stream written row-wise):

a1  a2  a3  a4  a5
a6  a7  a8  a9  a10
a11 a12 a13 a14 a15
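The square-matrix and -1 padding logic described above can be sketched in Python as follows (names are illustrative; the thesis code is in Matlab):

```python
import math

def block_interleave(vec):
    """Square block interleaver with -1 padding.

    Pads vec to the next perfect square with -1, writes it row-wise
    into an n x n matrix, and reads it out column-wise; the -1 entries
    are kept so a later stage can ignore them.
    """
    n = math.ceil(math.sqrt(len(vec)))
    padded = list(vec) + [-1] * (n * n - len(vec))
    rows = [padded[i * n:(i + 1) * n] for i in range(n)]       # write row-wise
    return [rows[r][c] for c in range(n) for r in range(n)]    # read column-wise

# 15 symbols pad to a 4 x 4 matrix with one -1
print(block_interleave(list(range(1, 16))))
# -> [1, 5, 9, 13, 2, 6, 10, 14, 3, 7, 11, 15, 4, 8, 12, -1]
```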


The screenshot below illustrates the operation of the interleaver described above.

Figure 4.3 Demonstration of interleaving. interleverin is the input to the interleaver; e is the vector of -1's to be appended; vector is the vector to which e is appended; B is the interleaver matrix; Interlvrout is the vector obtained by reading B column-wise. The results agree with the general description given earlier.


4.5 Noisy Channel Model
The channel model used in this project is a simple binary symmetric channel (BSC); binary symmetric means that the probability of receiving a 1 when a 0 was sent is the same as the probability of receiving a 0 when a 1 was sent.

4.5.1 Properties of a BSC

• Memoryless and random in nature: as desired, the current channel output is influenced only by the current channel input and is independent of the rest, which allows effective observation of the influence of each input on the channel output. The channel model behaves randomly, which ensures that the noise affects transmitted symbols independently, so errors occur randomly rather than deterministically. This is what happens in the real world: observing the output of a communication system, we have no idea what the original bits were or what errors occurred during transmission.

• Discrete channel: being discrete, the channel model has a finite alphabet from which symbols are drawn. For a binary symmetric channel that alphabet is binary, with only two possible symbols, zero and one. This allows effective modelling of a transmission medium whose transmitted bits are zeros and ones.

• Simplicity: as is evident from the schematic below, a binary symmetric channel is associated with two probabilities, the probability 1−p that a received bit is not in error and the probability p that it is in error, sometimes known as the crossover probability or bit error rate (BER). The simplicity of the scheme lies in the fact that we evaluate its performance by considering p alone, which gives the relative number of corrupted bits with respect to the transmitted bits.

Another point of interest is that the BSC allows communication engineers to study and analyse hard-decision decoding, which has been implemented for this project. This model would not be sufficient for soft-decision decoding; in that case models like the additive white Gaussian noise channel are used.


Schematic representation of the binary symmetric channel.

Figure 4.4 BSC diagram

Matlab Implementation of the binary symmetric channel
The channel is implemented in a few lines of Matlab using the rand function, which generates a value uniformly distributed between 0 and 1. For each transmitted bit a new random value is drawn; if it is less than the crossover probability p the bit is flipped, otherwise it passes through unchanged.

function Y = bsc_noise(Y, p)
for i = 1:length(Y)
    r = rand;
    if r < p            % flip the bit
        if Y(i) == 1
            Y(i) = 0;
        else
            Y(i) = 1;
        end
    end
end

Y is the channel input, i.e. the output of the encoder. The quantity p is the probability of an error occurring in the channel; this probability is the noise level of the channel, and we will see it in the simulation results of the complete error-correcting scheme.
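For comparison, the same channel in Python, with deterministic sanity checks at the extremes p = 0 (no bit is ever flipped) and p = 1 (every bit is flipped). This is an illustrative sketch, not the thesis code:

```python
import random

def bsc(bits, p, rng=random):
    """Binary symmetric channel: flip each bit independently with probability p."""
    # rng.random() is uniform on [0, 1), so (rng.random() < p) is True
    # with probability p; XOR with the bit performs the flip.
    return [b ^ (rng.random() < p) for b in bits]

print(bsc([0, 1, 1, 0], 0.0))  # -> [0, 1, 1, 0]  (noiseless channel)
print(bsc([0, 1, 1, 0], 1.0))  # -> [1, 0, 0, 1]  (every bit inverted)
```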



4.5.2 Theoretical aspects of Binary Symmetric Channels
Review of MAP and ML
The principle of operation of a MAP decoder was introduced earlier, where it was asserted that under a certain condition the MAP and ML decoders are identical, or more precisely that an ML decoder can be obtained from a MAP decoder. This brief mathematical insight proves that assertion. The two decoders operate on a posteriori probabilities and the likelihood function respectively. The ML decoder is not necessarily optimal like the MAP decoder, but their performances are virtually identical: the ML decoder minimises the probability of an error event, which in turn reduces the bit or symbol error rate, while the MAP decoder provides the better solution in that it minimises the probability of bit error directly.

MAP
A MAP decoder chooses the most likely codeword so as to maximise the a posteriori probability; mathematically, it chooses c̃ to maximise p(c̃ | r̃), the probability of the codeword c̃ conditioned on the particular sequence r̃ that was received. The quantity p(c̃ | r̃) is known as the a posteriori probability of a MAP decoder.

( ) ( )( )rp

rcprcp i

i ~~,~

~~ =↑ , ( )rcp i~,~ is the joint probability,

Now we consider a case where all the code words are equiprobarble. The expression above will reduce further to:

( ) ( )( )rp

rcprcp i

i ~~,~

~~ =↑ = ( )( )rp

crpr i~

~,~ ………………………………………………… 1

= ( ) ( )

( )∑↑

i

ii

crpcpcrpr

,~~~~

Using Bayes theorem we relate conditional probability and the joint probability as ( ) ( )ii cpcrp ~~~ ↑ = ( )icrp ~,~ ……………………………………………………… 2

From 1 and 2 above we get that ( )rcp i

~,~ = ( )icrp ~,~ ………………………..3 The ML choice of path is described statistically as:


    ĉ = arg max p(r̃ | c̃),

where p(r̃ | c̃) is the likelihood, and its logarithm is the log-likelihood. The interpretation of the expression is that the correct codeword c̃ will maximise the likelihood. Using result (3) above together with the statistical descriptions of both the MAP and the ML decoder, we realise that under the condition of equiprobable codewords,

    arg max p(r̃ | c̃) = arg max p(c̃ | r̃),

and hence the outputs of both decoders are identical. Based on this result it makes sense, in the absence of prior information, to take the codewords as equiprobable.

Binary symmetric channel (BSC)

For a memoryless BSC the likelihood factorises over the bits:

    p(r | c) = ∏_{i=1}^{N} p(r_i | c_i),

where r_i and c_i are the elements of the received word r and the codeword c respectively. The log-likelihood is therefore defined as

    log p(r | c) = Σ_{i=1}^{N} log p(r_i | c_i),

and, according to the figure of a binary symmetric channel, the crossover or transition probabilities are defined as:

    p(r_i | c_i) = 1 − p   if r_i = c_i
    p(r_i | c_i) = p       if r_i ≠ c_i

If the received word and the codeword differ in exactly d positions, where d is known as the Hamming distance between r and c, the log-likelihood can be rewritten as

    log p(r | c) = d log p + (N − d) log(1 − p)
                 = d log( p / (1 − p) ) + N log(1 − p) .................. (4)

Realising that N log(1 − p) is a constant for all codewords c, and that log( p / (1 − p) ) is negative for p < 0.5, the decoder maximises this quantity by choosing the codeword c̃ for which d is minimum; only in that case is the log-likelihood function maximum. Equivalently, exponentiating (4), the likelihood itself is

    p(r | c) = ( p / (1 − p) )^d (1 − p)^N,

and this can be plotted against p.
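Equation (4) can be checked numerically. The short sketch below is illustrative (not part of the thesis toolbox): it evaluates log p(r|c) for a few candidate codewords and confirms that, for p < 0.5, the codeword at minimum Hamming distance d attains the largest log-likelihood.

```python
from math import log

def hamming(r, c):
    """Number of positions in which r and c differ."""
    return sum(ri != ci for ri, ci in zip(r, c))

def log_likelihood(r, c, p):
    """log p(r|c) = d*log(p) + (N-d)*log(1-p) for a BSC."""
    d, N = hamming(r, c), len(r)
    return d * log(p) + (N - d) * log(1 - p)

r = [1, 0, 1, 1, 0]
codewords = [[1, 0, 1, 1, 0], [0, 0, 1, 1, 0], [1, 1, 0, 0, 1]]
p = 0.1
best = max(codewords, key=lambda c: log_likelihood(r, c, p))
closest = min(codewords, key=lambda c: hamming(r, c))
# For p < 0.5 the ML choice coincides with the minimum-distance choice.
```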


The decoder will perform the following:

    p          Decoder action
    p < 0.5    Take bits as presented
    p = 0.5    Cannot do much
    p > 0.5    Flip bits

Following the table above it is easy to note that for p > 0.5 we get a mirror image of the events for p < 0.5, and the worst case occurs at p = 0.5.

Figure 4.5 BSC theoretical bit error rate curve. In the simulation we consider p ≤ 0.5 and hence expect to observe the first half of the plot above. Material in this section was adapted from Communication Systems by Simon Haykin, 4th edition [3].
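The flip-bits rule for p > 0.5 can be illustrated numerically (a Python sketch, not thesis code): inverting every received bit turns a channel with crossover probability p into one with effective crossover 1 − p.

```python
import random

def bsc(bits, p, rng):
    """Flip each bit independently with probability p."""
    return [b ^ 1 if rng.random() < p else b for b in bits]

rng = random.Random(1)
sent = [0] * 50_000
rx = bsc(sent, 0.8, rng)                    # very noisy channel, p > 0.5
raw_error = sum(b != 0 for b in rx) / len(rx)          # about 0.8
flipped = [b ^ 1 for b in rx]               # decoder action: flip bits
flipped_error = sum(b != 0 for b in flipped) / len(flipped)  # about 0.2
```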


4.6 Deinterleaver

The deinterleaver is one of the components that complete the error-control scheme; it performs the inverse of the interleaver permutation. The block deinterleaver receives the channel output and writes it column-wise; the de-interleaved message is then obtained by reading the matrix row-wise. With reference to the earlier example, the channel output (assuming no noise) is a1 a6 a11 a2 a7 a12 a3 a8 a13 a4 a9 a14 a5 a10 a15, and it is known that the number of columns is 5 and the number of rows is 3. Writing this sequence column-wise gives the matrix:

    a1   a2   a3   a4   a5
    a6   a7   a8   a9   a10
    a11  a12  a13  a14  a15

In general the implementation allows the deinterleaver to determine the number of rows and columns required to process all the channel data; in the scheme of this project the column number is the repetition number, and the number of rows can then be worked out since the length of the channel output is known. Reading the row entries of the matrix above gives the output a1 a2 a3 a4 a5 a6 a7 a8 a9 a10 a11 a12 a13 a14 a15.

4.6.1 Matlab implementation of the deinterleaver

The following pseudo-code describes the implementation used in this project.

1. Read the output vector of the trellis decoder.
2. Determine the length of that vector.
3. Determine the number of rows and columns for the deinterleaver matrix to be generated.
4. Check whether the product of the number of rows and the number of columns equals the length of the trellis decoder output.
5. If it does, generate the deinterleaver matrix.
6. If it does not, insert the special character in the appropriate positions before writing the trellis vector into the deinterleaver matrix (column-wise writing).
7. Determine the output of the deinterleaver from the matrix (row-wise reading). At this step entries that are neither zero nor one must be detected and ignored while the rows are read; to accomplish that, the reading position is tracked and a variable stores only the zeros and ones from the matrix.

As in the interleaver there are some issues to be considered. When the deinterleaver receives the trellis output, that vector is the size of the interleaver output, and we must remember from the implementation of the interleaver that this vector is missing the special characters that had been appended. It is therefore the task of the deinterleaver to figure out how many special characters (-1's) are missing and to determine the appropriate locations for them. The main reason for inserting this character is to make sure that the matrix to be generated is the same size as the original interleaver matrix. Another important point to be aware of is that an extra 0 is appended by the encoder, so before the output of the decoder is fed into the deinterleaver this extra bit should be removed; otherwise the deinterleaver matrix will be bigger than the interleaver matrix, which will lead to faulty detection of the sent message. The snapshot below shows all the important features outlined above.

Figure 4.6 Action of the deinterleaver. message is the output of the trellis decoder. DeintMatrix is the matrix generated from message; it has the same dimensions as the corresponding interleaver matrix. vec is the deinterleaver output, obtained by reading the rows of the deinterleaver matrix. The results of the snapshot comply with the requirements of the deinterleaver described on page 27.
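The write-column-wise/read-row-wise behaviour described above can be sketched as follows (illustrative Python; the thesis implementation is in Matlab and additionally handles the -1 padding characters, which this sketch omits by assuming the length divides exactly):

```python
def block_interleave(bits, n_rows, n_cols):
    """Write row-wise into an n_rows x n_cols matrix, read column-wise."""
    assert len(bits) == n_rows * n_cols
    rows = [bits[i * n_cols:(i + 1) * n_cols] for i in range(n_rows)]
    return [rows[r][c] for c in range(n_cols) for r in range(n_rows)]

def block_deinterleave(bits, n_rows, n_cols):
    """Write column-wise into the same matrix shape, read row-wise."""
    assert len(bits) == n_rows * n_cols
    cols = [bits[c * n_rows:(c + 1) * n_rows] for c in range(n_cols)]
    return [cols[c][r] for r in range(n_rows) for c in range(n_cols)]

msg = [f"a{i}" for i in range(1, 16)]       # a1 .. a15
sent = block_interleave(msg, 3, 5)          # a1 a6 a11 a2 a7 a12 ...
back = block_deinterleave(sent, 3, 5)       # a1 a2 a3 ... a15
```

The deinterleaver is the exact inverse of the interleaver for any matrix shape, which the round trip confirms.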


4.7 Repeater decoder

The repeater decoder is a decision-making stage that acts on what it receives from the deinterleaver. For each row of the deinterleaver matrix the repeater decoder compares the number of 1's and 0's on the row; the bit that appears more often is selected as the transmitted bit. In other words, the decoder is based on the principle of majority vote. The choice of the repetition number matters for the repetition decoder: an odd number is usually the better choice, because the counts of 0's and 1's in a row can then never be equal, which makes the decision easy. In this project the use of both even and odd numbers is permissible, and hence the decoder is adaptive.

4.7.1 Odd-number-based repeater decoder

Although the odd-number repeater decoder is simple, restricting the scheme to odd repetition numbers leaves half of the possible repetition factors unused. In this scheme, for a repetition factor of n (odd), if at least (n+1)/2 of the n repeats are 1's then a 1 is predicted, otherwise a 0.

4.7.2 Even-number-based repeater decoder

Allowing an even number of repetitions lets the decoding scheme use the repetition factors ignored by the odd-number-based decoder. When the decoder encounters an equal number of ones and zeros, it randomly picks a one or a zero, each with probability one half. The only drawback with this idea is that the randomly picked bit may introduce an error: if, for instance, a bit was repeated four times and the received repeats contain two ones and two zeros, then even if the desired bit is a one, the decoder may pick a zero and detect a zero.

4.7.3 Output user data

The output user data is simply the output of the repeater decoder.

4.7.4 Determination of errors

    User data                 0    1    1    0    0    0    1
    Repeater outcome          000  111  111  000  000  000  111
    Received bits             001  110  001  101  001  000  111
    Repeater decoder output   0    1    0    1    0    0    1

Columns 1, 2 and 5 each contain a single bit error, which the majority vote corrects; columns 3 and 4 each contain two errors, which are uncorrectable and produce wrong output bits.

    Relative error = (number of uncorrectable errors) / length(repeater decoder output)
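The majority-vote rule and the error table above can be reproduced with a short sketch (illustrative Python; the thesis implementation is in Matlab). Ties, which can occur for even repetition numbers, are broken at random as described in section 4.7.2.

```python
import random

def repeat_decode(received, q):
    """Majority vote over groups of q received bits; ties broken at random."""
    out = []
    for i in range(0, len(received), q):
        ones = sum(received[i:i + q])
        if ones * 2 > q:
            out.append(1)
        elif ones * 2 < q:
            out.append(0)
        else:                       # even q, tied vote
            out.append(random.randint(0, 1))
    return out

received = [0,0,1, 1,1,0, 0,0,1, 1,0,1, 0,0,1, 0,0,0, 1,1,1]
decoded = repeat_decode(received, 3)    # matches the table: 0 1 0 1 0 0 1
user = [0, 1, 1, 0, 0, 0, 1]
uncorrectable = sum(d != u for d, u in zip(decoded, user))
relative_error = uncorrectable / len(decoded)
```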


Performance of the repeater and repeater decoder:

Figure 4.7 (a, b, c) different error scenarios: (a) no error; (b) error present and easily fixed; (c) errors not possible to fix.

In the last two snapshots errors were deliberately introduced at known positions, purely for illustration; in reality the occurrence of errors is not deterministic. In the first snapshot there is no noise or interference, and hence the decoder output is just the transmitted codeword. In the second example an error was deliberately introduced at position five of the transmitted codeword, and the decoder has corrected it: we still get the same message as in the error-free transmission. In the third example two errors were deliberately introduced at positions five and six, and here we notice that the decoder fails to produce the desired result. The decoder takes a majority vote along the rows, and in this situation the second row has more ones than zeros and hence falsely predicts that a one was sent.


5. Iterative decoding

5.1 Half code rate design and implementation

In Chapter 4 the scheme considered was a simple case with a code rate of unity, which does not offer much error-correction capability. In this chapter an improved scheme using a half-code-rate decoder is presented. We see how the concept of iterative decoding mentioned earlier comes into play, and we also learn about the advantages of this scheme. The diagrams below show both the experimental set-up and the trellis structure that the half-rate decoder uses.

Half-rate encoder

Figure 5.1 half code rate encoder

Transition state table corresponding to the figure above:

    u (input)   v1 = u   D   v2 = u + D   codeword (v1, v2)
    0           0        0   0            0,0
    0           0        1   1            0,1
    1           1        0   1            1,1
    1           1        1   0            1,0

Figure 5.2 state transition table
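The state table translates directly into an encoder sketch. The Python below is illustrative (the thesis uses Matlab) and assumes the delay element D is updated with v2 after each bit, i.e. an accumulator structure; under that assumption it reproduces every row of the table.

```python
def half_rate_encode(u_bits):
    """Rate-1/2 encoder from the state table: v1 = u, v2 = u XOR D.
    The delay D is assumed to update to v2 after each bit (accumulator)."""
    D = 0
    out = []
    for u in u_bits:
        v1, v2 = u, u ^ D
        out += [v1, v2]
        D = v2                      # assumed state update
    return out

codewords = half_rate_encode([1, 0, 1, 1])   # 11 01 10 11
```

For example, an input bit u = 1 arriving while D = 1 emits the codeword 1,0, matching the last row of the table.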



Transition State diagram.

Figure 5.3 Finite state machine diagram

Figure 5.4 Trellis diagram (horizontal axis: time)

The trellis diagram allows communication engineers to quickly implement a Matlab equivalent of what happens at each point in time. As mentioned earlier, the decoding process under the maximum likelihood (ML) criterion is a matter of keeping survivor paths and then tracing back through the surviving path to predict the sent word.

5.1.1 Matlab implementation of the trellis decoder

The implementation of the above scenario is similar to that of the code-rate-1 encoder seen earlier on page 19.

[Figures 5.3 and 5.4 residue: branch labels 0/00 and 1/11 leaving state S0, and 0/01 and 1/10 leaving state S1, repeated at each stage of the trellis.]
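The survivor-path decoding described above can be sketched as a small hard-decision Viterbi decoder over this two-state trellis. This is an illustrative Python reconstruction, not the thesis code, and it assumes the branch labels of the state table: from state s, input u emits (u, u XOR s) and the next state is u XOR s.

```python
def viterbi_decode(rx):
    """Hard-decision Viterbi decoding for the two-state rate-1/2 code.
    rx is the received bit sequence (two bits per trellis stage)."""
    INF = float("inf")
    metric = [0, INF]                 # path metrics; start in state 0
    paths = [[], []]                  # survivor input sequences
    for i in range(0, len(rx), 2):
        r0, r1 = rx[i], rx[i + 1]
        new_metric = [INF, INF]
        new_paths = [[], []]
        for s in (0, 1):
            if metric[s] == INF:
                continue
            for u in (0, 1):
                v1, v2 = u, u ^ s     # branch output bits
                nxt = u ^ s           # assumed next state
                m = metric[s] + (v1 != r0) + (v2 != r1)
                if m < new_metric[nxt]:
                    new_metric[nxt] = m
                    new_paths[nxt] = paths[s] + [u]
        metric, paths = new_metric, new_paths
    # trace back along the better survivor
    return paths[0] if metric[0] <= metric[1] else paths[1]

# Under this code, the input 1 0 1 1 encodes to 11 01 10 11; a single
# channel error in the received word is corrected by the survivor path.
clean = viterbi_decode([1, 1, 0, 1, 1, 0, 1, 1])
noisy = viterbi_decode([1, 1, 1, 1, 1, 0, 1, 1])   # third bit flipped
```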


5.2 Iterative decoding Scheme

Figure 5.5 iterative scheme. Note that the half-rate decoder and the repeater decoder are connected in series, with a feedback connection between the two decoders. This arrangement allows the convolutional decoder to make several comparisons before settling on the most appropriate decision.

[Figure 5.5 block labels: user message, repeater, interleaver, encoder, noise from channel, half-rate decoder, unity code rate decoder, deinterleaver, repeater decoder, received bits, repeater decoder output, decoded user message.]


5.2.1 Details of the iterative scheme

The error correction scheme under consideration is capable of achieving better performance if direct transmission of source data is permitted over the channel and the detector scheme at the receiving end uses source data and the corrupted signal to make a better decision. Basically the a detector scheme will compare the input data and corrupt signal, if they are not same then the corrupt signal is discarded and the detector may ask for a retransmission or some other mechanism may be required to correct the corrupt signal. Once the correction of errors or error free transmission is done the decoder will now be engaged to determine the source message. This trivial approach is not practical as it requires extra bandwidth, more transmission power. A practical solution is to implement a scheme that is intelligent. The scheme will try, based on what it has received make an estimate of source/input data and hence needless to transmit the input data. A feed back is required that will allow decoded data to be sent back to the decoder iteratively and if there were any errors that went undetected in the previous iteration then we hope that the second time the data goes through the decoder the errors will be picked.

5.2.2 Matlab Implementation of the Iterative scheme.

The iterative scheme has two decoders: the half-rate decoder and the unity-rate decoder. For any given number of iterations, the first iteration is always performed separately.

First iteration:

• The channel feeds data to the receiver, which in turn calls the first decoder.
• Once the decoding has been completed, the decoded sequence is sent to the deinterleaver.
• Finally the repeater decoder is called; it acts on the deinterleaver output, and the output of the repeater decoder is then stored for use in the next iteration.

For the remaining iterations:

• During the second iteration, the output of the first iteration becomes the input to the half-rate decoder, which uses it together with the channel output (received sequence).
• Once the half-rate decoder is done, the vector is passed on to the deinterleaver.
• The deinterleaver then presents its output to the repeater decoder.
• If only two iterations exist, the repeater decoder sends its output to the display; that is the estimate of the user data.
• If there are more than two iterations, the first three steps are repeated until all the iterations are exhausted, after which the user message is displayed.
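The steps above can be sketched as a control-flow skeleton. All function names below (half_rate_decode, deinterleave, repeat_decode) are hypothetical placeholders standing in for the project's Matlab routines; identity stubs are used here just so the loop is runnable.

```python
# Hypothetical stand-ins for the project's Matlab components; each is an
# identity stub so only the control flow is demonstrated.
def half_rate_decode(received, prior=None):
    return list(received)

def deinterleave(bits):
    return list(bits)

def repeat_decode(bits):
    return list(bits)

def iterative_decode(received, n_iterations):
    """Control flow of section 5.2.2: the first iteration runs without a
    prior estimate; later iterations feed the previous repeater-decoder
    output back to the half-rate decoder along with the received bits."""
    estimate = None
    for _ in range(n_iterations):
        decoded = half_rate_decode(received, prior=estimate)
        estimate = repeat_decode(deinterleave(decoded))
    return estimate

result = iterative_decode([1, 0, 1, 1], 3)
```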


6. Results of Simulations and Analysis

At the time of submission of this report, most of the simulations carried out were for a binary symmetric channel.

6.1 Simple scheme (see figure 4.1)

(a) (b)

Figure 6.1 BSC results: post-decoder error (BER) plotted against the channel noise level for q = 3, q = 5 and q = 7 (threshold values for varying q), together with the uncoded scheme. Both axes run from 0 to 0.5.


The results above were obtained by setting the number of iterations for the scheme shown in figure 5.5 to 1; these results are similar to those obtained before without iteration. The colours correspond to those in the earlier results, i.e. red means q = 3, blue means q = 5 and green corresponds to q = 7.

6.2 Iterative scheme with 10 iterations

Figure 6.2


Effect of q on performance of repeater decoding

Figure 6.3 effect of q
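The effect of q seen in these curves is consistent with the standard majority-vote calculation. The sketch below is not from the thesis: it computes the post-decoder error probability of a q-fold repetition code on a BSC, counting half of the tied votes as errors for even q.

```python
from math import comb

def repetition_ber(q, p):
    """Probability that a majority vote over q repeats decides wrongly,
    for a BSC with crossover probability p; ties (even q) err half the time."""
    ber = sum(comb(q, k) * p**k * (1 - p)**(q - k)
              for k in range(q // 2 + 1, q + 1))
    if q % 2 == 0:
        ber += 0.5 * comb(q, q // 2) * p**(q // 2) * (1 - p)**(q // 2)
    return ber
```

For example, at p = 0.1 this gives about 0.028 for q = 3 and about 0.0086 for q = 5, consistent with larger q lowering the curves for p < 0.5.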

6.3 Gaussian noise model.

In order to make a fair comparison of performance against other known codes, the Gaussian channel is better suited than the binary symmetric channel, as it shows the relationship between power requirements and attainable probabilities of error. Several results were observed while the scheme operated on a Gaussian model. First, a test of the encoder and the convolutional decoding process under the influence of Gaussian noise.


The diagram below refers to the half-rate decoder.

Figure 6.4 influence of AWGN on convolutional decoding


Test of the rate-half convolutional decoding under the influence of additive white Gaussian noise and no iteration (see fig. 5.5).

Figure 6.5 decoding without iteration


Test of Convolutional encoder-decoder system

Figure 6.6 iterative decoding in AWGN


7. Conclusion and Future work

7.1 Achievements and current project status.

The main objective of undertaking this project was to design and implement repeat-accumulate codes in software: an error-correcting scheme that achieves a low probability of error. In order to accomplish this task I had to build in software the several blocks that constitute the scheme and ascertain that each block performed as expected. The main components of the scheme are the repeater, interleaver, encoder, decoder, deinterleaver and repeater decoder. The decoding scheme constructed in this project is of the maximum-likelihood type, in which the judgement at each decision point is based on the lowest metric. The components were then concatenated serially in the order listed above. A further improvement in the coding scheme was obtained by introducing iterations into the decoding process. The inclusion of iterative decoding lowers the probability that bits received in error remain uncorrected. Indeed, some sample results obtained from simulation have shown that repetition of bits together with iteration leads to codes that offer resistance to channel imperfections.

7.2 Future plans

In future the project intends to look at optimal and soft-decision schemes. At the moment I am implementing a maximum a posteriori probability (MAP) decoder based on the BCJR algorithm, and hopefully by open day I will be able to demonstrate the performance of the MAP decoder, as well as the advantage of soft decisions over hard decisions. If this is achieved, a more realistic and practical channel, the Gaussian channel, will be used instead of the binary symmetric channel. Using a Gaussian channel model allows us to understand both the power and the bandwidth requirements for reliable data transmission with a particular error-correcting scheme. Another area of possible exploration is the other types of interleavers that have not been considered in this project; only a simple interleaving scheme was used here, for demonstration purposes. More complex interleavers such as helical, pseudo-random, even-even and diagonal interleavers can greatly improve the performance of the repeat-accumulate scheme. A graphical user interface is also being developed, and I expect to have it completed by open day.


8. Problems encountered

The project I was involved in is software based, and Matlab was used. Matlab has the disadvantage of being slow, especially where large amounts of data are concerned, and as a result the simulations were time consuming. Consequently some results could not be verified by further simulation, and I had to rely on theoretical predictions and the few simulations attempted. One major drawback was my programming background: I had to learn the features of Matlab as I used them, and sometimes it took a long time to resolve a problem caused by the Matlab platform; in the worst cases the program developed had bugs that took a long time to track down.


References

[1] Carlson, B. (1986). Communication Systems. Singapore: McGraw-Hill.
[2] Guy, G. (1992). Data Communications for Engineers. London: Macmillan Education Ltd.
[3] Haykin, S. (2001). Communication Systems, 4th edition. New York: John Wiley & Sons.
[4] Proakis, J. and Salehi, M. (1994). Communication Systems Engineering. New Jersey: Prentice Hall.
[5] Wade, G. (2000). Coding Techniques. New York: Palgrave.
[6] Benvenuto, N. and Cherubini, G. (2002). Algorithms for Communications Systems and Their Applications. John Wiley & Sons, chapter 11.
[7] http://userver.ftw.at/~jossy/turbo/intro.pdf
[8] http://www.complextoreal.com/convo.htm
[9] Divsalar et al. (1998).
[10] Steele (1999).
