A family of fast index and redundancy assignments for error resilient multiple description coding


Rui Ma*, Fabrice Labeau

Centre for Advanced Systems & Technologies in Communications (SYTACom), Department of Electrical and Computer Engineering, McGill University,

Montreal, Quebec, Canada H3A 2A7

Article info

Article history:

Received 23 March 2011

Accepted 26 January 2012

Available online 10 February 2012

Keywords:

Error analysis

Quantization

doi:10.1016/j.image.2012.01.020

This work was supported by the Natural Sciences and Engineering Research Council of Canada (NSERC) and the Fonds Québécois de Recherche sur la Nature et les Technologies (FQRNT).

* Corresponding author. E-mail addresses: [email protected] (R. Ma), [email protected] (F. Labeau).

Abstract

Multiple description coding (MDC) was originally developed to combat channel failures and packet losses. For the particular case of fixed-rate scalar quantization, it has been shown that the multiple description scalar quantizer (MDSQ) can be made robust to binary transmission errors through a proper binary labelling of the indices transmitted in each description. The design of this so-called index assignment (IA) is computationally very demanding, especially for systems with large redundancy or high bit rates. In this paper, we propose an alternative low-complexity IA design for this problem. For low redundancy, this new design provides results that are close to optimum (in the sense of minimizing the side distortions for a given central distortion), while maintaining the robustness to transmission errors and being amenable to designing systems with large redundancy or high bit rates. For high redundancy, the proposed design provides high robustness while allowing for successive decoding of different signal qualities. Thus, it can be applied to progressive quantization and transmission.

1. Introduction

Multiple description coding (MDC) was originally proposed to combat channel failures [1–4]. In MDC, the source is decomposed into two or more descriptions that are transmitted over independent channels. At the receiver end, if both descriptions are received, the signal is reconstructed with high fidelity; if only one description is correctly received, the signal is reconstructed with higher distortion. Several authors have explored MDC systems. Entropy-constrained MDC techniques were discussed and developed in [5,6]. Algorithms to generate more than two descriptions have been developed in [7–9]. The MDC technique was extended to protect from packet losses [10–12]. Algorithms with low computational complexity have been developed to facilitate the implementation of MDC [13,14]. In order to apply MDC to progressive transmission, embedded MDC has been developed in [15–17]. In addition, MDC techniques are widely applied in image and video transmission [18–23]. These techniques are sometimes called signal processing based MDC.

In practice, transmissions over hybrid wireline and wireless networks can be considered as suffering not only from packet losses but also from bit errors. Usually, forward error control (FEC) codes are applied to protect against bit errors. The design of classical signal processing based MDC, such as the abovementioned work, does not consider bit errors contained in received descriptions/packets. However, in some cases where the number of binary errors exceeds the error correction capability of the FEC codes applied, one or more descriptions may have to be discarded, although all are received, which results in a significant performance deterioration.


1 This work has been partially presented at the IEEE International Conference on Image Processing, October 2008 [44], and the Asilomar Conference on Signals, Systems and Computers, October 2008 [45].


By utilizing FEC codes to provide description/packet-level protection, FEC-based MDC techniques are able to moderate this performance degradation [24–30]. However, these techniques incur a significant decoding delay, since the decoder has to receive all of the FEC information before decoding can begin. In addition, a rate-distortion performance comparison showed that FEC-based MDC is more robust against isolated packet losses, whereas signal processing based MDC is more robust against burst packet losses [31].

Using a classic multiple description scalar quantizer (MDSQ) [3] as a starting point, error-resilient multiple description coding (ERMDC) was designed to withstand both packet losses and bit errors [32]. The ERMDC modifies both the encoder and the decoder: the ERMDC encoder maximizes the minimum Hamming distance $h_{\min}$ among ERMDC index pairs, so that the ERMDC decoder can detect more transmission errors; once the ERMDC decoder finds transmission errors by using the inherent redundancy, the output values are estimated by utilizing the residual information in the erroneous description so as to minimize the reconstruction distortion. Experimental results showed that the ERMDC achieved graceful performance degradation with decreasing channel quality, and outperformed the MDSQ in the presence of bit errors. Furthermore, by using soft decoding, the ERMDC showed high robustness against noise over Rayleigh fading channels [33].

The ERMDC design is inspired by earlier work, such as robust quantizers [34–38], channel-optimized quantizers [39–41], soft decoding of MDC [42], and channel-optimized MDC [43]. Robust quantizers and channel-optimized quantizers have been developed to protect a single description against noisy channels. Channel-optimized quantizers are optimized for a certain noisy channel, which usually results in performance degradation over an error-free channel [39–41]. The principle of channel-optimized quantizers has been used in MDC [43]; the performance of this technique was optimized for a given target bit error rate (BER), but deteriorated in other circumstances, especially over an error-free channel. In contrast, the encoder of a robust quantizer is trained for an error-free channel and designed to be robust against noisy channels by assigning proper binary indices to code words. This index assignment (IA) assigns a binary representation to each of the quantizer outputs, typically different from a natural binary ordering. In [34–36], IA methods, called pseudo-Gray coding [36], were developed to provide redundancy-free error protection schemes for vector quantizers. In [37], anti-Gray coding was proposed to encode the code vectors of a vector quantizer as far as possible from one another in terms of Hamming distance. At the receiver end, erroneous code vectors are detected, and output values are approximated so as to achieve a lower distortion of the reconstructed signal. The genetic algorithm (GA) was used to assign indices in [38]. Some authors have also used the inherent dependency among MDSQ index pairs to estimate the outputs when transmission errors occur [42,43].

As a robust quantizer for MDC, the ERMDC is trained for an error-free channel and designed to be robust against both packet losses and bit errors by exploiting the dependency and the redundancy inherent in MDC. Similar to the design of robust quantizers, the design of the IA function for an ERMDC encoder is computationally demanding. In order to maximize the ERMDC decoder's capacity to detect transmission errors, the Hamming distance between ERMDC index pairs is maximized, using a principle similar to that of anti-Gray coding. The original ERMDC design [32] tackles the IA as a two-step process: first, an exhaustive search over the optimization space is conducted in order to maximize the Hamming distance between the binary representations of the two descriptions; then, single description indices are mapped to MDSQ index pairs by using a GA. As the number of quantization levels or the redundancy increases, the training time of the IA for the ERMDC encoder becomes significantly longer, due to the increase in the size of the search space. This time-consuming process might be inappropriate in some applications. Moreover, the IA design result must also be communicated to the receiver, which entails overhead on the transmission of data. The GA itself also provides an IA which is not necessarily optimum, and the obtained result depends on the initial conditions and the actual random mutation patterns. Therefore, an IA technique with low computational complexity and less transmission overhead is required so as to make the ERMDC more applicable.

In this paper, we propose a novel robust IA algorithm with low computational complexity for the ERMDC encoder.1 Although it was mentioned in [32] that redundancy could be added as parity bits, no construction method was given. Here, we develop a simple method to add redundancy equivalent to one or more parity bits to combat both packet losses and bit errors. The technique proposed here actually bridges the gap between signal processing and FEC-based MDC techniques. Transmission errors can easily be detected by checking parity bits, and the output values are estimated by using the inherent redundancy and the side codebooks. This algorithm can be implemented "on-the-fly" without transmitting the IA scheme to the decoder.

When adding more than one bit of redundancy, the proposed method divides the information bits into partitions so as to achieve robustness against both packet losses and bit errors; at the same time, this approach naturally allows successive refinement decoding of bit partitions. In addition, the optimal bit allocation scheme is derived. Experimental results indicate that the proposed algorithm achieves a good compromise between robustness against packet losses and robustness against bit errors, at low computational complexity. They show that the proposed algorithm outperforms the MDSQ [3] in terms of computational complexity and robustness against bit errors. Although the proposed algorithm is not optimized in the rate-distortion sense, it provides side distortion similar to that of the MDSQ at high redundancy, and sometimes lower than that of the MDSQ at low redundancy.



Furthermore, two embedded fast IA algorithms are developed to extend the proposed IA algorithm to progressive quantization and transmission. This suggests that the proposed IA algorithm can be flexibly and easily adapted to various application requirements, including image/video transmission techniques that rely on successive refinement [46–48].

After describing the problem formulation and notation in Section 2, the proposed IA algorithm is described in Section 3. Thereafter, the optimal bit allocation scheme is derived in Section 3.5. In order to accommodate progressive transmissions, two embedded IA schemes are provided and compared in Section 4. In Section 5, experimental results show that the proposed algorithm outperforms existing algorithms. Further discussion and conclusions are given in Section 6.

2. Notations and problem formulation

2.1. Notations

In this paper, we only consider an ERMDC system with two channels and three receivers, as shown in Fig. 1. At the encoder of the ERMDC, source samples $x \in \mathbb{R}$, which are zero-mean, independent and identically distributed (i.i.d.), are quantized to indices $l \in \{0,1,\dots,2^{R_0}-1\}$ by a $2^{R_0}$-level single description scalar quantizer (SDSQ). Each $l$ is mapped one-to-one to an ERMDC index pair $(i,j)$ by two side IA functions. The ERMDC indices $i$ and $j$ are in the set $\{0,1,\dots,2^{R_s}-1\}$. $R_0$ and $R_s$ are the central bit rate and the side bit rate in bits per source sample (bpss), and $2R_s \geq R_0$. The total code rate $R$ of each source sample is $R = 2R_s$ bpss. We refer to the mapping between the SDSQ index $l$ and the ERMDC indices $(i,j)$ as the index assignment (IA) of the ERMDC encoder. The inverse mapping $l = a^{-1}(i,j)$ decodes a pair of ERMDC indices back into the corresponding SDSQ index $l$. $i$ and $j$ are separately transmitted over noisy channels, and received as $\hat{i}$ and $\hat{j}$.

At the receiver, there are three decoders: the central decoder $g_0$ and two side decoders $g_1$, $g_2$. If both $i$ and $j$ are received, the reconstructed value $x_0 \in \mathbb{R}$ is given by the central decoder $g_0$: $x_0 = g_0(\hat{i},\hat{j})$. If only $\hat{i} = i$ or $\hat{j} = j$ is received, the reconstructed values $x_1, x_2 \in \mathbb{R}$ are obtained by the side decoders $g_1$: $x_1 = g_1(\hat{i})$ and $g_2$: $x_2 = g_2(\hat{j})$, respectively. When both $\hat{i}$ and $\hat{j}$ are received, but bit errors are found in either $\hat{i}$ or $\hat{j}$, $x_0$ is estimated by using the conditional expectation with knowledge of the source statistics and channel conditions, or simply by the side output associated with the correct ERMDC index. For example, if $\hat{i} = i$ but $\hat{j} \neq j$, $x_0$ can be estimated by $x_1$. The distortions obtained by the three decoders are the central distortion $D_0$ and the side distortions $D_1$, $D_2$. The average side distortion is $D_s = (D_1 + D_2)/2$. Here, we only consider balanced side distortions, i.e., $D_1 \approx D_2$.

Fig. 1. The ERMDC system with two channels and three receivers.

Spread as a measure of distortion: The spreads $s_1(i)$ and $s_2(j)$ denote the range of indices $l$ spanned by given $i$ and $j$, respectively. Specifically, they are given by $s_1(i) = \max_j a^{-1}(i,j) - \min_j a^{-1}(i,j) + 1$ and $s_2(j) = \max_i a^{-1}(i,j) - \min_i a^{-1}(i,j) + 1$, where $(i,j) \in \mathcal{A}$. The concept of spread can also be illustrated with the classic representation of the IA for the MDSQ [3], in which a matrix of size $M \times M$ represents the IA; row and column numbers represent the values of the MDSQ indices $i$ and $j$, respectively, whereas the matrix entries are the corresponding values of the SDSQ index $l$. This matrix can be used as a lookup table for the implementation of the IA. In this case, assuming that $i$ represents the rows of the matrix, $s_1(i)$ is the difference between the largest and smallest entry in the $i$-th row of the matrix plus 1. Stated otherwise, it is a measure, in the SDSQ quantized domain, of the range of values of $x$ represented by the value $i$ in the first description. In the case of MDSQ transmission where the second description (column index $j$) is lost, this spread is thus a measure of how precisely it is possible to infer the value of $l$ from the availability of $i$ alone: a lower spread entails a lower side distortion.

Let the side spread $\bar{s}_k$ be the average spread for the $k$-th description, i.e., $\bar{s}_k = 2^{-R_s} \sum_{q=0}^{2^{R_s}-1} s_k(q)$, for $k = 1, 2$. The average spread of an IA scheme is given by $\bar{s} = (\bar{s}_1 + \bar{s}_2)/2$. Because we use the mean square error (MSE) as the measure of distortion, in high rate systems the average side distortion $D_s$ is roughly proportional to the squared average spread $\bar{s}^2$. However, since the influence of different numbers of SDSQ levels on $D_s$ is not taken into account in calculating $\bar{s}$, it is not sufficient for evaluating $D_s$ of IA schemes with various numbers of SDSQ levels. Thus, taking into account the total number of SDSQ levels $2^{R_0}$, the normalized average spread $\bar{s}_n$ is introduced as $\bar{s}_n = 2^{-R_0}\bar{s}$.
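To make the spread metrics concrete, the following sketch (our illustration, not code from the paper) computes $s_1(i)$, $s_2(j)$, $\bar{s}$ and $\bar{s}_n$ directly from an IA lookup table; the dictionary `ia`, mapping each SDSQ index to its ERMDC pair, is an assumed representation, and the convention that unused side indices contribute 0 is ours.

```python
# Illustrative only: spread metrics of an index assignment given as a dict l -> (i, j).

def spread_metrics(ia, R0, Rs):
    """Return (s1_bar, s2_bar, s_bar, sn_bar) for a one-to-one IA map."""
    rows, cols = {}, {}                     # SDSQ indices sharing a row i / a column j
    for l, (i, j) in ia.items():
        rows.setdefault(i, []).append(l)
        cols.setdefault(j, []).append(l)
    s1 = {i: max(ls) - min(ls) + 1 for i, ls in rows.items()}   # s1(i)
    s2 = {j: max(ls) - min(ls) + 1 for j, ls in cols.items()}   # s2(j)
    s1_bar = sum(s1.values()) / 2 ** Rs     # unused side indices contribute 0 (our convention)
    s2_bar = sum(s2.values()) / 2 ** Rs
    s_bar = (s1_bar + s2_bar) / 2
    return s1_bar, s2_bar, s_bar, s_bar / 2 ** R0               # last value is the normalized spread
```

For the IA of Fig. 2b, for instance, this gives $\bar{s}_1 = 10$, $\bar{s}_2 = 7$ and $\bar{s} = 8.5$, the values quoted in Section 3.1.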

The optimum ERMDC encoder–decoder pair can be obtained by minimizing $D_s$ subject to $D_1 \approx D_2$ [32]. This is an NP-complete problem. In order to simplify this optimization, in this paper the ERMDC is designed for a high rate system without taking the source pdf into account. Thus, instead of $D_s$, $\bar{s}$ is minimized as a measure of $D_s$, subject to $\bar{s}_1 \approx \bar{s}_2$. As a result, the obtained ERMDC system design will be independent of the input pdf, and should work well for a wide range of inputs.



Fig. 2. Generation of $B(4,0)$: (a) the four bits of $\mathbf{l}$ are split into $\mathbf{i}$ and $\mathbf{j}$ and (b) the resulting map between SDSQ indices $l$ and ERMDC index pairs $(i,j)$.


Let $\mathbf{l}$, $\mathbf{i}$ and $\mathbf{j}$ be the binary representations of $l$, $i$ and $j$ with, respectively, $R_0$, $R_s$ and $R_s$ bits: $\mathbf{l} = [l_{R_0-1} \cdots l_1\, l_0]$, $\mathbf{i} = [i_{R_s-1} \cdots i_1\, i_0]$ and $\mathbf{j} = [j_{R_s-1} \cdots j_1\, j_0]$, where the bit indexed by 0 is the LSB. The bits of $\mathbf{l}$ are information bits. $[\mathbf{i},\mathbf{j}]$ denotes the binary representation of the index pair $(i,j)$.

$\|\mathbf{w}-\mathbf{v}\|$ denotes the Hamming distance between $\mathbf{w}$ and $\mathbf{v}$. We also call $\mathbf{w} = [\mathbf{i},\mathbf{j}]$ codewords, i.e., $\mathbf{w} = [w_{R-1} \cdots w_1\, w_0] = [i_{R_s-1} \cdots i_0\, j_{R_s-1} \cdots j_0]$. Let $\mathcal{W}$ be a set of codewords $\mathbf{w}$ with minimum Hamming distance $h_{\min}$ between any two set elements: $\mathcal{W} = \{\mathbf{w} \in \{0,1\}^R : \|\mathbf{w}-\mathbf{v}\| \geq h_{\min},\ \forall \mathbf{v} \in \mathcal{W},\ \mathbf{v} \neq \mathbf{w}\}$.

2.2. Problem formulation

$r = R - R_0$ denotes the number of bits of redundancy. We use the notation $B(R_0, r)$ to represent an ERMDC encoder that adds $r$ bits of redundancy to each $R_0$ information bits. If $R_0 = r$, the redundancy is generated by duplicating information bits. In the following, we only consider the situation where $R_0 > r$.

In this paper, we find a solution with low computational complexity to add $r$ bits of redundancy so as to jointly achieve a low average spread and balanced side distortions,

$\min \bar{s}$,   (1)

$\min |\bar{s}_1 - \bar{s}_2|$,   (2)

while maintaining robustness against packet losses and bit errors. Robustness against packet losses will be ensured by design as an MDC system, whereas robustness against bit errors will be achieved by enforcing a minimum Hamming distance between ERMDC code words.

This problem is then decomposed into three sub-problems according to the level of redundancy, namely $r = 0$, $r = 1$ and $r \geq 2$. For $B(R_0, 0)$, the $R_0$ information bits are split into two descriptions subject to $\min \bar{s}$ and $\min |\bar{s}_1 - \bar{s}_2|$. When $r = 0$, only robustness against packet losses is provided. However, when $r \geq 1$, redundancy is added so as to protect against both packet losses and bit errors. The IA algorithms for $r \geq 1$ are developed based on that for $r = 0$. For $B(R_0, 1)$, $(R_0 + 1)$ bits are used to generate two descriptions so as to achieve $\min \bar{s}$, $\bar{s}_1 = \bar{s}_2$, and a minimum Hamming distance of $h_{\min} = 2$.

For $B(R_0, r)$, $r \geq 2$, two descriptions are produced from $(R_0 + r)$ bpss so as to achieve $\min \bar{s}$, $\bar{s}_1 = \bar{s}_2$, and the capability of detecting $r$-bit transmission errors. The $(R_0 + r)$ bpss are divided into several partitions, which allow for successive refinement decoding. When $r \geq 2$, we show that an optimal partitioning always adds one bit of redundancy to protect the MSB. Because, in addition to carrying the highest magnitude, the most significant bits (MSBs) of source samples often represent synchronization marks in image compression, such as SPIHT [49], erroneous MSBs may result in wrongly decoding succeeding bits. Hence, the additional protection of MSBs provided by our design is well suited to these types of applications [29].

3. Fast index and redundancy assignment

In this section, a fast index and redundancy assignment algorithm is proposed to solve the three sub-problems defined in Section 2.2.

3.1. Index assignment without redundancy

When $r = 0$, since no redundancy is added in converting $\mathbf{l}$ to $[\mathbf{i},\mathbf{j}]$, the resulting two descriptions are only robust against packet losses, like any MDC system would be. First, we deal with the simple situation where the number of information bits is even, i.e., $R = R_0 = 2R_s$. The information bits $\mathbf{l}$ are simply split into $\mathbf{i}$ and $\mathbf{j}$. In order to minimize $\bar{s}$ and $|\bar{s}_1 - \bar{s}_2|$, the MSB $l_{R_0-1}$ should be placed in a different description than the next $R_0/2$ most significant bits. In order to keep the same number of bits in both descriptions, the remaining positions in the description where $l_{R_0-1}$ is located are filled with the $(R_0/2 - 1)$ LSBs. Thus, $B(R_0, 0)$ is produced by

$\mathbf{i} = [l_{R_0-2}\ l_{R_0-3} \cdots l_{R_0/2-1}]$   (3)

and

$\mathbf{j} = [l_{R_0-1}\ l_{R_0/2-2} \cdots l_0]$.   (4)

The side spreads are given by $\bar{s}_1 = 2^{R_0-1} + 2^{R_0/2-1}$ and $\bar{s}_2 = 2^{R_0-1} - 2^{R_0/2-1} + 1$, respectively. Thus, $\bar{s} = 2^{R_0-1} + 0.5$.

As an example, the generation of $B(4,0)$ is illustrated in Fig. 2. Four information bits are split into $\mathbf{i}$ and $\mathbf{j}$ in Fig. 2a. The resulting map between indices $l$ and index pairs $(i,j)$ is shown in Fig. 2b. As a result, $\bar{s}_1 = 10$, $\bar{s}_2 = 7$ and $\bar{s} = 8.5$.
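For concreteness, here is a minimal sketch (ours, not the authors' code) of the split in (3)–(4) for even $R_0$; the helper names are our own.

```python
# Illustrative sketch of the r = 0 split in (3)-(4): the MSB goes to description j,
# the next R0/2 bits to description i, and the remaining LSBs fill up description j.
from functools import reduce

def encode_B_R0_0(l, R0):
    """Map an SDSQ index l to the ERMDC pair (i, j) for B(R0, 0), R0 even."""
    b = [(l >> k) & 1 for k in range(R0 - 1, -1, -1)]   # b[0] = l_{R0-1} (MSB), b[-1] = l_0
    i_bits = b[1:R0 // 2 + 1]                           # l_{R0-2} ... l_{R0/2 - 1}, eq. (3)
    j_bits = [b[0]] + b[R0 // 2 + 1:]                   # l_{R0-1}, l_{R0/2 - 2} ... l_0, eq. (4)
    to_int = lambda bits: reduce(lambda v, x: (v << 1) | x, bits, 0)
    return to_int(i_bits), to_int(j_bits)

# For B(4, 0) this reproduces the map of Fig. 2b; e.g., by (3)-(4), l = 9 maps to (i, j) = (0, 3).
```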

In the case where the number of information bits $R_0$ is odd, we have two solutions: (i) adding one bit of redundancy by using the method provided in the next subsection; (ii) splitting different numbers of information bits into the two descriptions. Since the design target of the algorithm proposed in this paper is to provide protection against both packet losses and bit errors, we only consider the first solution in the following discussion. As for the second solution, we would send one more bit to the ERMDC index $i$, i.e., $\mathbf{i} = [l_{R_0-2}\ l_{R_0-3} \cdots l_{(R_0-1)/2-1}]$ and $\mathbf{j} = [l_{R_0-1}\ l_{(R_0-1)/2-2} \cdots l_0]$. Therefore, $\bar{s}_1 = 2^{R_0-1} + 2^{(R_0-1)/2-1}$ and $\bar{s}_2 = 2^{R_0-1} - 2^{(R_0-1)/2-1} + 1$, and $\bar{s} = 2^{R_0-1} + 0.5$. In order to keep equal average bit rates for the two descriptions, we apply $\mathbf{l} = [\mathbf{i},\mathbf{j}]$ to samples with odd index, and $\mathbf{l} = [\mathbf{j},\mathbf{i}]$ to samples with even index.

3.2. Index assignment with one-bit redundancy

When $r = 1$, an overall redundancy of one bit is added to the two descriptions, as compared to the SDSQ. First, we prove Lemma 1, based on which the bit allocation algorithm is provided.



Lemma 1. Let a codeword $\mathbf{w}$ consist of an $(R-1)$-bit binary vector $\mathbf{l} = [l_{R-2} \cdots l_0]$ and a parity bit $e$. For a binary constant $n \in \{0,1\}$, let the set $\mathcal{W}_n$ be defined as $\mathcal{W}_n = \{\mathbf{w} \in \{0,1\}^R : \bigoplus_{k=0}^{R-1} w_k = n\}$, where $\oplus$ is the exclusive OR (XOR). Then $\mathcal{W}_n$ has minimum Hamming distance $h_{\min} = 2$.

Proof. This lemma is a direct consequence of the single parity-check property. Its proof is re-derived below for completeness. By definition of $\mathcal{W}_n$, for any $\mathbf{v}, \mathbf{w} \in \mathcal{W}_n$, the parity bit $e$ is chosen so that

$\bigoplus_{k=0}^{R-1} w_k = \bigoplus_{k=0}^{R-1} v_k = n$.   (5)

Let us consider two codewords $\mathbf{v}$ and $\mathbf{w}$ such that $\mathbf{w} \neq \mathbf{v}$. Let $\mathbf{l}$ and $\mathbf{l}'$ be length-$(R-1)$ binary words made up of the $(R-1)$ first bits of $\mathbf{w}$ and $\mathbf{v}$, respectively. In this case, $\mathbf{l} \neq \mathbf{l}'$ (i.e., $\|\mathbf{l}-\mathbf{l}'\| \geq 1$), since, if they were equal, then $w_{R-1}$ would necessarily also be equal to $v_{R-1}$ to satisfy (5), which would lead to the contradiction that $\mathbf{w} = \mathbf{v}$. With $\mathbf{w} = [\mathbf{l}\ e]$ and $\mathbf{v} = [\mathbf{l}'\ e']$ for some $e$ and $e'$, (5) can be rewritten as $\bigoplus_{k=0}^{R-2} l_k \oplus e = \bigoplus_{k=0}^{R-2} l'_k \oplus e'$, or $e' \oplus e = \bigoplus_{k=0}^{R-2} (l_k \oplus l'_k)$. The Hamming distance between $\mathbf{w}$ and $\mathbf{v}$ can then be written as $\|\mathbf{w}-\mathbf{v}\| = \sum_{k=0}^{R-1} w_k \oplus v_k = \sum_{k=0}^{R-2} l_k \oplus l'_k + e \oplus e' = \|\mathbf{l}-\mathbf{l}'\| + \bigoplus_{k=0}^{R-2}(l_k \oplus l'_k)$. If $\|\mathbf{l}-\mathbf{l}'\|$ is even (and thus $\|\mathbf{l}-\mathbf{l}'\| \geq 2$), then $\bigoplus_{k=0}^{R-2}(l_k \oplus l'_k) = 0$, hence $\|\mathbf{w}-\mathbf{v}\| \geq 2$; if $\|\mathbf{l}-\mathbf{l}'\|$ is odd (and thus $\|\mathbf{l}-\mathbf{l}'\| \geq 1$), then $\bigoplus_{k=0}^{R-2}(l_k \oplus l'_k) = 1$, hence $\|\mathbf{w}-\mathbf{v}\| \geq 2$, which concludes the proof. □
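Lemma 1 can also be checked numerically; the brute-force sketch below (an illustration, not part of the paper) enumerates all length-$R$ words of a fixed overall parity and verifies that the minimum pairwise Hamming distance is 2.

```python
# Brute-force check of Lemma 1 for small R (illustration only).
from itertools import combinations

def min_distance(R, n):
    """Minimum Hamming distance of the set W_n of R-bit words with overall parity n."""
    W_n = [w for w in range(2 ** R) if bin(w).count("1") % 2 == n]
    return min(bin(w ^ v).count("1") for w, v in combinations(W_n, 2))

assert min_distance(6, 0) == 2 and min_distance(6, 1) == 2
```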

$\mathcal{W}_0$ and $\mathcal{W}_1$ are the two sets defined by $\mathcal{W}_n$ for $n = 0, 1$: $\mathcal{W}_0 = \{\mathbf{w} \in \{0,1\}^R : \bigoplus_{k=0}^{R-1} w_k = 0\}$ and $\mathcal{W}_1 = \{\mathbf{w} \in \{0,1\}^R : \bigoplus_{k=0}^{R-1} w_k = 1\}$. Two IA patterns of $B(5,1)$, associated with $\mathcal{W}_0$ and $\mathcal{W}_1$ respectively, are shown in Fig. 3, in which greyed cells are selected as ERMDC index pairs. Each of these two possible designs allows for a Hamming distance of 2 between any pair of ERMDC indices.

Fig. 3. Two possible IA patterns associated with $B(5,1)$: (a) the IA pattern corresponding to $\mathcal{W}_0$ and (b) the IA pattern corresponding to $\mathcal{W}_1$. Greyed cells are ERMDC index pairs.

Based on Lemma 1, $B(R_0, 1)$ is generated by a one-to-one mapping of indices $l \in \mathcal{L}$ to codewords $[\mathbf{i},\mathbf{j}] \in \mathcal{W}_n$, where $n$ is chosen as either 0 or 1, so as to minimize $\bar{s}$, subject to

$\bigoplus_{k=0}^{R_s-1} (i_k \oplus j_k) = n$,   (6)

$\bar{s}_1 = \bar{s}_2$.   (7)

The minimum Hamming distance between any two $[\mathbf{i},\mathbf{j}]$ is 2. Therefore, one-bit errors are detectable.

For the purpose of minimizing $\bar{s}$, all information bits except the MSB $l_{R_0-1}$ of $\mathbf{l}$, i.e., $l_{R_0-2}, \dots, l_1, l_0$, are split into $i_{R_s-1}, \dots, i_2, i_1$ and $j_{R_s-1}, \dots, j_2, j_1$, respectively. In order to satisfy (6), the LSBs of $\mathbf{i}$ and $\mathbf{j}$, i.e., $i_0$ and $j_0$, are used as parity bits and generated by modulo-2 addition of the corresponding information bits. $\tilde{\mathbf{i}}$ and $\tilde{\mathbf{j}}$ represent a realization of $\mathbf{i}$ and $\mathbf{j}$, respectively, that satisfies (6). Specifically, $\tilde{\mathbf{i}}$ and $\tilde{\mathbf{j}}$ are produced by

$\tilde{\mathbf{i}} = [l_{R_0-3} \cdots l_{(R_0-3)/2}\ e_i]$

and

$\tilde{\mathbf{j}} = [l_{R_0-2}\ l_{(R_0-5)/2} \cdots l_0\ e_j]$,   (8)

where

$e_i = n \oplus l_{R_0-1} \oplus \bigoplus_{k=(R_0-3)/2}^{R_0-3} l_k$,
$e_j = n \oplus l_{R_0-1} \oplus l_{R_0-2} \oplus \bigoplus_{k=0}^{(R_0-5)/2} l_k$   (9)

and $n \in \{0,1\}$. With the construction $\mathbf{i} = \tilde{\mathbf{i}}$ and $\mathbf{j} = \tilde{\mathbf{j}}$, the side spreads are given by $\bar{s}_1 = 2^{R_0-2} - 2^{(R_0-3)/2} + 1$ and $\bar{s}_2 = 2^{R_0-2} + 2^{(R_0-3)/2}$. In order to achieve (7), i.e., equal side spreads, we modify this construction by exchanging $\tilde{\mathbf{i}}$ and $\tilde{\mathbf{j}}$ when $l_{R_0-1} = 1$. Therefore, when $l_{R_0-1} = 0$, the output codewords are $\mathbf{w} = [\tilde{\mathbf{i}},\tilde{\mathbf{j}}]$, i.e., $\mathbf{i} = \tilde{\mathbf{i}}$ and $\mathbf{j} = \tilde{\mathbf{j}}$; when $l_{R_0-1} = 1$, the codewords are $\mathbf{w} = [\tilde{\mathbf{j}},\tilde{\mathbf{i}}]$, i.e., $\mathbf{i} = \tilde{\mathbf{j}}$ and $\mathbf{j} = \tilde{\mathbf{i}}$. Consequently, the side spreads are equal and are given by

$\bar{s} = \bar{s}_1 = \bar{s}_2 = 2^{R_0-2} + 0.5$.   (10)

The generation of $B(5,1)$ is shown in Fig. 4 as an example, where $\bar{s} = \bar{s}_1 = \bar{s}_2 = 8.5$.

Fig. 4. Generation of $B(5,1)$ associated with $\mathcal{W}_0$: (a) generation of $\mathbf{i}$ and $\mathbf{j}$ when $l_4 = 0$ and (b) the resulting map between indices $l$ and index pairs $(i,j)$.

In this case, except for the MSB $l_{R_0-1}$, the other $(R_0-1)$ information bits are directly sent to the two descriptions. The two parity bits $i_0$ and $j_0$ are, respectively, generated by $(R_s - 1)$ XOR operations on the corresponding information bits, that is to say, $2(R_s-1) = (R_0-1)$ XOR operations in total. Since the processing time for sending out the $(R_0-1)$ information bits is negligible, dealing with each $R_0$ bpss requires $(R_0-1)$ XOR operations.
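As an illustration of the construction just described (our sketch under stated assumptions, not the authors' implementation), the following function realizes (8)–(10) for the set $\mathcal{W}_0$ (i.e., $n = 0$), assuming $R_0$ is odd and $R_s = (R_0+1)/2$:

```python
# Illustration of the B(R0, 1) construction (8)-(10) for W_0; R0 is assumed odd.
from functools import reduce

def encode_B_R0_1(l, R0):
    """Map an SDSQ index l to the ERMDC pair (i, j) for B(R0, 1) over W_0 (n = 0)."""
    lb = [(l >> k) & 1 for k in range(R0)]                         # lb[k] = l_k, LSB at k = 0
    msb = lb[R0 - 1]
    hi = [lb[k] for k in range(R0 - 3, (R0 - 3) // 2 - 1, -1)]     # info bits of i~ in (8)
    lo = [lb[R0 - 2]] + [lb[k] for k in range((R0 - 5) // 2, -1, -1)]  # info bits of j~ in (8)
    e_i = reduce(lambda a, b: a ^ b, hi, msb)                      # parity bit e_i of (9), n = 0
    e_j = reduce(lambda a, b: a ^ b, lo, msb)                      # parity bit e_j of (9), n = 0
    i_t, j_t = hi + [e_i], lo + [e_j]
    if msb:                                                        # swap to balance the side spreads, cf. (10)
        i_t, j_t = j_t, i_t
    to_int = lambda bits: reduce(lambda v, b: (v << 1) | b, bits, 0)
    return to_int(i_t), to_int(j_t)

# For R0 = 5 this reproduces the scheme of Fig. 4b; every codeword [i, j] has even
# overall parity, so condition (6) with n = 0 holds and any single bit error is detectable.
```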

Fig. 5. The ERMDC index pairs used in the IA scheme illustrated in Fig. 4b can be divided into two non-overlapping groups A and B.

3.3. Geometric IA algorithm for one-bit redundancy

In this subsection, we propose an alternative approach to the IA design for one-bit redundancy, called the geometric IA algorithm, which provides a useful interpretation of the derivations in Section 3.2. One-bit redundancy is added by allocating specific slots, instead of by the XOR operations used in the previous subsection. In addition, this method explores the structure of the IA matrix, which can be used to simplify the optimization procedure.

Fig. 4b suggests that the ERMDC index pairs $(i,j)$ can be divided into two non-overlapping groups of entries A and B, as illustrated in Fig. 5. Note that all occupied slots in any given row or column belong exclusively to one of the two groups.

We use Lemma 2 to confirm and justify this observation. Lemma 2 claims that these two groups are determined by the bit $l_{R_0-1}$. That is to say, Group A is associated with $l_{R_0-1} = 0$; Group B is associated with $l_{R_0-1} = 1$.

Lemma 2. For $B(R_0, 1)$, let SDSQ indices $l$ and $l'$ be decomposed, respectively, into ERMDC codewords $[\mathbf{i},\mathbf{j}] \in \mathcal{W}_n$ and $[\mathbf{i}',\mathbf{j}'] \in \mathcal{W}_n$, where $n \in \{0,1\}$, by using (8) and (9). Provided that $l_{R_0-1} \neq l'_{R_0-1}$, where $l_{R_0-1}$ and $l'_{R_0-1}$ are the MSBs of $l$ and $l'$, respectively, then $i \neq i'$ and $j \neq j'$.

Proof. Let $\mathbf{l} = [l_{R_0-1}\ l_{R_0-2} \cdots l_1\ l_0]$, $\mathbf{i} = [i_{R_s-1} \cdots i_1\ i_0]$, $\mathbf{j} = [j_{R_s-1} \cdots j_1\ j_0]$ and $\mathbf{l}' = [l'_{R_0-1}\ l'_{R_0-2} \cdots l'_1\ l'_0]$, $\mathbf{i}' = [i'_{R_s-1} \cdots i'_1\ i'_0]$, $\mathbf{j}' = [j'_{R_s-1} \cdots j'_1\ j'_0]$, where $l_{R_0-1}$ and $l'_{R_0-1}$ are the MSBs of $l$ and $l'$, respectively, and $R_0 = 2R_s - 1$.

Based on (6), (8) and (9), $\bigoplus_{k=0}^{R_s-1}(i_k \oplus j_k) = n$ and $\bigoplus_{k=0}^{R_s-1}(i'_k \oplus j'_k) = n$, where

$i_0 = n \oplus l_{R_0-1} \oplus \bigoplus_{k=1}^{R_s-1} i_k$, $\quad j_0 = n \oplus l_{R_0-1} \oplus \bigoplus_{k=1}^{R_s-1} j_k$,
$i'_0 = n \oplus l'_{R_0-1} \oplus \bigoplus_{k=1}^{R_s-1} i'_k$, $\quad j'_0 = n \oplus l'_{R_0-1} \oplus \bigoplus_{k=1}^{R_s-1} j'_k$.   (11)

According to (11), $l_{R_0-1} = n \oplus \bigoplus_{k=0}^{R_s-1} i_k = n \oplus \bigoplus_{k=0}^{R_s-1} j_k$, and $l'_{R_0-1} = n \oplus \bigoplus_{k=0}^{R_s-1} i'_k = n \oplus \bigoplus_{k=0}^{R_s-1} j'_k$.

In this case, $l_{R_0-1} \neq l'_{R_0-1}$ implies that $\bigoplus_{k=0}^{R_s-1} i_k \neq \bigoplus_{k=0}^{R_s-1} i'_k$ and $\bigoplus_{k=0}^{R_s-1} j_k \neq \bigoplus_{k=0}^{R_s-1} j'_k$. Stated otherwise, it implies that $\mathbf{i} \neq \mathbf{i}'$ and $\mathbf{j} \neq \mathbf{j}'$, that is to say, $i \neq i'$ and $j \neq j'$. □

Because $i$ and $j$ are used as the row and column indices in the IA matrix, Lemma 2 indicates that each row and each column is occupied exclusively by Group A or Group B.

Based on Lemma 2, we develop the geometric IA algorithm to map SDSQ indices to ERMDC index pairs. The resulting IA schemes are equivalent to those obtained by the method described in Section 3.2. In Fig. 6, we use an example to illustrate the geometric IA algorithm for $B(5,1)$. In this example, we use the IA pattern associated with $\mathcal{W}_0$, illustrated in Fig. 3. The target IA scheme is shown in Fig. 4b.

First, as shown in Fig. 5, the selected cells are divided into two non-overlapping groups A and B. All cells in Groups A and B are, respectively, taken out to form two reshaped $2^{R_s-1} \times 2^{R_s-1}$ matrices A and B, as illustrated in Fig. 6a and b. $i$ and $j$ are the row and column indices of the original $2^{R_s} \times 2^{R_s}$ matrix; $r$ and $c$ are the row and column indices of Matrices A and B. SDSQ indices $l \in \mathcal{L} = \{0,1,\dots,2^{R_0}-1\}$ are divided into two sets $\mathcal{L}_A$ and $\mathcal{L}_B$ according to $l_{R_0-1} = 0$ and $l_{R_0-1} = 1$, respectively, i.e., $\mathcal{L}_A = \{0,1,\dots,2^{R_0-1}-1\}$ and $\mathcal{L}_B = \{2^{R_0-1}, 2^{R_0-1}+1, \dots, 2^{R_0}-1\}$. The SDSQ indices in $\mathcal{L}_A$ and $\mathcal{L}_B$ are, respectively, assigned to slots in Matrices A and B by using the IA method provided in Section 3.1, i.e., with no redundancy. The resulting IA schemes are shown in Fig. 6. Thereafter, according to $i$ and $j$, the elements of Matrix A are mapped back to Group A. Similarly, the elements of Matrix B are mapped back to Group B. In order to keep balanced spreads, however, Matrix B is transposed before being mapped back. Finally, the IA scheme shown in Fig. 4b is obtained.

Fig. 6. Using the geometric IA algorithm to obtain the IA scheme shown in Fig. 4b: (a) Matrix A: $\mathcal{L}_A$ is assigned to Group A illustrated in Fig. 5 and (b) Matrix B: $\mathcal{L}_B$ is assigned to Group B illustrated in Fig. 5.

Fig. 7. The $R_0$ bits of $\mathbf{l}$ are divided into $r$ partitions in sequence in $B(R_0, r)$, $r \geq 2$.

Therefore, according to this algorithm, it can be inferred that the average spread of $B(R_0, 1)$ equals that of $B(R_0-1, 0)$. Furthermore, it shows that the original IA problem in a $2^{R_s} \times 2^{R_s}$ matrix can be simplified to an IA problem in a $2^{R_s-1} \times 2^{R_s-1}$ matrix, which is only one fourth of the original IA matrix. In order to obtain a "close-to-optimum" IA scheme, we can apply the GA provided in [32] to obtain a "close-to-optimum" solution in the $2^{R_s-1} \times 2^{R_s-1}$ matrix, and then expand it to the target $2^{R_s} \times 2^{R_s}$ matrix. Since the GA is applied in a much smaller search space, the time needed to obtain a good solution can be significantly shortened. In addition, instead of random IA schemes, the IA scheme obtained by using $B(R_s-1, 0)$ can be used in the first generation to speed up the search. In the following, without loss of generality, we only consider the set $\mathcal{W} = \mathcal{W}_0$.

3.4. Index assignment for more than one bit redundancy

Next, we describe the algorithm producing $B(R_0, r)$, $r \geq 2$. As illustrated in Fig. 7, the $R_0$ information bits of each $\mathbf{l}$ are divided into $r$ partitions in sequence so as to achieve successive refinement at the receiver. That is to say, as more partitions are received, more information bits are decoded and the reconstruction distortion decreases. Each partition is protected by one bit of redundancy, which is added by applying the methods previously described.

$b_k$ denotes the number of information bits in the $k$-th partition, $k = 0, 1, \dots, r-1$. Thus, $R_0 = \sum_{k=0}^{r-1} b_k$. We use a partition sequence $\langle b_{r-1}, \dots, b_0 \rangle$ to represent a partition scheme for $B(R_0, r)$. More than one partition sequence may exist for a given $B(R_0, r)$. For instance, two partition sequences for $B(7,3)$ are $\langle 1,3,3 \rangle$ and $\langle 1,1,5 \rangle$, as illustrated in Fig. 8a and b. Fig. 8c and d, respectively, illustrate the IA matrices corresponding to $\langle 1,3,3 \rangle$ and $\langle 1,1,5 \rangle$. In Fig. 8a, $\langle 1,3,3 \rangle$ means dividing $\mathbf{l} = [l_6\, l_5\, l_4\, l_3\, l_2\, l_1\, l_0]$ into three partitions, $[l_6]$, $[l_5\, l_4\, l_3]$ and $[l_2\, l_1\, l_0]$, each of which is protected by one bit of redundancy.

For a given partition sequence of $B(R_0, r)$, the IA scheme is accomplished partition by partition. For partitions with $b_k = 1$, the two descriptions are generated by duplicating the information bit, so that (6) is satisfied. For partitions with $b_k \geq 3$, the two descriptions are produced in the same way as for $B(R_0, 1)$. Since each partition is considered individually, each partition corresponds to an IA matrix. Thus, the side spreads and the average spread of each partition can be obtained in the same way as defined in Section 2. However, the order of magnitude of the spreads of a partition is determined by the LSB of this partition. For example, in $\langle 1,3,3 \rangle$, illustrated in Fig. 8a, the side spreads of the partition $[l_5\, l_4\, l_3]$ are $2 \times 2^3 = 16$ and $3 \times 2^3 = 24$, and the average spread is $2.5 \times 2^3 = 20$. Equations to calculate the average spread of a partition and the overall average spread are given in the next section.

Fig. 8. (a) $\langle 1,3,3 \rangle$ and (b) $\langle 1,1,5 \rangle$ are two realizations of $B(7,3)$. (c) and (d) are the IA patterns corresponding to $\langle 1,3,3 \rangle$ and $\langle 1,1,5 \rangle$, respectively. Greyed cells represent ERMDC index pairs assigned to SDSQ indices. As explained in Section 3.4.1, these IA schemes are obtained by recursive application of the geometric interpretation in Section 3.3.

Therefore, in each partition, two descriptions are generated that have the same properties as the partitions defined in Section 3.2, individually satisfying (6) and achieving a small average spread and balanced side spreads. Consequently, the IA scheme is uniquely determined by the partition sequence; furthermore, each $[\mathbf{i},\mathbf{j}]$ satisfies (6). Similar to the algorithm described in Section 3.2, in the $k$-th partition, if $b_k \geq 2$, there are $(b_k - 1)$ XOR operations; thus, there are at most $\sum_{k=0}^{r-1}(b_k - 1) = R_0 - r$ XOR operations in total. In addition, the processing time of two cases is negligible: (i) when $b_k = 1$, the information bit is duplicated and sent to the two descriptions; (ii) when $b_k \geq 3$, except for the MSB of the partition, the other information bits are directly sent to the two descriptions. Therefore, encoding a source sample requires $(R_0 - r)$ XOR operations in the worst case.
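A possible realization of the partitioned encoder is sketched below (illustration only; it assumes each $b_k$ is either 1 or an odd number $\geq 3$, that the per-partition side bits are concatenated starting from the most significant partition, and it reuses the `encode_B_R0_1` sketch given after (10)).

```python
# Illustration: encode one SDSQ index for B(R0, r) with partition sequence <b_{r-1}, ..., b_0>.

def encode_partitioned(l, partitions):
    """partitions = [b_{r-1}, ..., b_0] (MSB partition first); returns bit lists of i and j."""
    R0 = sum(partitions)
    lb = [(l >> k) & 1 for k in range(R0 - 1, -1, -1)]      # MSB-first bit list of l
    i_bits, j_bits, pos = [], [], 0
    for b in partitions:
        chunk = lb[pos:pos + b]
        pos += b
        if b == 1:                                          # unit partition: duplicate the bit
            i_bits += chunk
            j_bits += chunk
        else:                                               # apply the one-bit-redundancy construction of Section 3.2
            val = int("".join(map(str, chunk)), 2)
            ii, jj = encode_B_R0_1(val, b)
            half = (b + 1) // 2                             # side bits carried by this partition
            i_bits += [(ii >> k) & 1 for k in range(half - 1, -1, -1)]
            j_bits += [(jj >> k) & 1 for k in range(half - 1, -1, -1)]
    return i_bits, j_bits
```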

3.4.1. Geometric explanation

The proposed IA procedure for $B(R_0, r)$, $r \geq 2$, can also be given a geometric interpretation, as was done for $r = 1$ in Section 3.3. The IA matrix of the $k$-th partition is a $2^{(b_k+1)/2} \times 2^{(b_k+1)/2}$ matrix, the cells of which are selected by using the method described in Section 3.3. If $b_k = 1$, the cells along the main diagonal are selected. Each cell of the IA matrix of the $k$-th partition is substituted by a $2^{(b_{k-1}+1)/2} \times 2^{(b_{k-1}+1)/2}$ matrix, when $k \geq 1$. The IA matrix of the zeroth partition is a $2^{(b_0+1)/2} \times 2^{(b_0+1)/2}$ matrix, which is assigned to SDSQ indices by using the method described in Section 3.3. That is to say, the $2^{(b_0+1)/2} \times 2^{(b_0+1)/2}$ matrix is decomposed into two $2^{(b_0-1)/2} \times 2^{(b_0-1)/2}$ matrices for IA; the IA pattern of $\langle 1,3,3 \rangle$ is shown in Fig. 8c. On the other hand, for the particular case $B(R_0, r) = \langle 1,1,\dots,1, R_0-r+1 \rangle$, i.e., $b_k = 1$ for $r > k \geq 1$, $2^{r-1}$ blocks along the main diagonal are chosen, and each block is a $2^{(R_0-r)/2+1} \times 2^{(R_0-r)/2+1}$ matrix, which is assigned to SDSQ indices by using the method described in Section 3.3; the IA pattern of $\langle 1,1,5 \rangle$ is shown in Fig. 8d. Therefore, by using the proposed algorithm, a large IA matrix can be decomposed into a number of small IA matrices. As illustrated in Fig. 8c and d, $32 \times 32$ matrices are decomposed into $4 \times 4$ and $8 \times 8$ matrices, respectively, which are further shrunk to $2 \times 2$ and $4 \times 4$ matrices for assigning indices and redundancy, using the geometric IA algorithm provided in Section 3.3. The computational complexity of obtaining the optimal solution is significantly reduced, so that it is even conceivable to deploy algorithms such as the GA or an exhaustive search.

3.5. Optimal bit allocation

As shown in Fig. 8, for a given $B(R_0, r)$, $r \geq 2$, more than one way of partitioning the binary representation of the data (i.e., a bit allocation scheme) may exist. We derive here the optimal bit allocation scheme in terms of minimization of the average spread $\bar{s}$.

Given a partition sequence $\langle b_{r-1}, \dots, b_0 \rangle$ of a given $B(R_0, r)$, $r \geq 2$, the average spread of the $k$-th partition, $\bar{s}_{p_k}$, is given by

$\bar{s}_{p_k} = \begin{cases} 1, & b_k = 1, \\ 2^{b_k-2} + 0.5, & b_k \geq 3 \text{ as per (10)}. \end{cases}$   (12)

The average spread $\bar{s}$ of $B(R_0, r)$, $r \geq 2$, is given by

$\bar{s} = \bar{s}_{p_0} + \sum_{k=1}^{r-1} (\bar{s}_{p_k} - 1) \cdot 2^{\sum_{t=0}^{k-1} b_t}$.   (13)

Due to (12), $\min_k \bar{s}_{p_k} = 1$ is achieved when $b_k = 1$. Let $\bar{s}_{p_H} = \sum_{k=1}^{r-1} (\bar{s}_{p_k} - 1) \cdot 2^{\sum_{t=0}^{k-1} b_t}$. Then $\min \bar{s}_{p_H} = 0$ is achieved when $b_k = 1$ for $r-1 \geq k \geq 1$. According to (13), therefore, given $B(R_0, r)$,

$\min \bar{s} = \bar{s}_{p_0} = \begin{cases} 1, & b_0 = 1, \\ 2^{b_0-2} + 0.5, & b_0 \geq 3 \end{cases}$   (14)

is achieved when $b_k = 1$, for $r-1 \geq k \geq 1$, and $b_0 = R_0 - r + 1$. That is to say, the optimal bit allocation scheme is

$\langle \underbrace{1, 1, \dots, 1}_{r-1},\ R_0 - r + 1 \rangle$.   (15)

The optimal bit allocation therefore always protects the MSBs more heavily, with one bit of redundancy each.
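As a worked check of (12), (13) and (15) (our illustration), the short routine below computes $\bar{s}$ for any partition sequence; for $B(7,3)$ it gives $\bar{s} = 8.5$ for $\langle 1,1,5 \rangle$ and $\bar{s} = 14.5$ for $\langle 1,3,3 \rangle$, so the allocation of (15) indeed yields the smaller average spread.

```python
# Average spread of a partition sequence <b_{r-1}, ..., b_0>, per (12)-(13). Illustration only.

def average_spread(partitions):
    seq = list(reversed(partitions))                         # seq[k] = b_k, k = 0 is the LSB partition
    sp = [1 if b == 1 else 2 ** (b - 2) + 0.5 for b in seq]  # eq. (12)
    s = sp[0]
    for k in range(1, len(seq)):
        s += (sp[k] - 1) * 2 ** sum(seq[:k])                 # eq. (13)
    return s

assert average_spread([1, 1, 5]) == 8.5 and average_spread([1, 3, 3]) == 14.5
```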

3.6. Error detection, correction and estimation

The redundancy added in the ERMDC system provides protection from packet losses and bit errors. At the receiver, FEC codes, such as a cyclic redundancy check (CRC), can inform the decoder whether or not each packet contains binary errors. If both descriptions are lost or carry bit errors, the transmission fails. When only one description is received and correct, the corresponding side decoder reconstructs the signal with a quality determined by the MDC's side distortion. When both descriptions are received correctly, the central decoder recovers the received signal with a quality determined by the central distortion. In the case where one of the two received descriptions carries bit errors, each code word $[\mathbf{i},\mathbf{j}]$ is checked partition by partition in terms of (6). If (6) is not satisfied, the reconstruction value is estimated by using the conditional expectation associated with the source statistics and the channel conditions, or the side codebook [32]. In particular, for partitions with unit granularity ($b_k = 1$), taking the bits from the correctly received description will not introduce bit errors, because the information bits are simply duplicated in both descriptions.
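At the decoder side, the per-partition check of (6) amounts to a parity test on the side bits each partition contributes to $\hat{i}$ and $\hat{j}$. The sketch below (our illustration, for $\mathcal{W}_0$, i.e., $n = 0$, and the bit layout of the encoder sketch above) flags the partitions whose parity check fails, so that their reconstruction can fall back to the estimation described above.

```python
# Illustration: flag partitions of a received codeword [i, j] that violate (6) with n = 0.

def failed_partitions(i_bits, j_bits, partitions):
    """partitions = [b_{r-1}, ..., b_0]; i_bits/j_bits are MSB-first bit lists of i and j."""
    bad, pos = [], 0
    for idx, b in enumerate(partitions):
        width = 1 if b == 1 else (b + 1) // 2      # side bits carried by this partition
        chunk = i_bits[pos:pos + width] + j_bits[pos:pos + width]
        pos += width
        if sum(chunk) % 2 != 0:                    # (6) with n = 0: overall parity must be even
            bad.append(idx)                        # partition index, MSB partition first
    return bad
```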

4. Embedded fast index assignment

Embedded coding is usually utilized in progressive image transmission. Embedded MDSQ techniques have been developed for progressive image transmission over unreliable channels [15–17].

The IA algorithm proposed in Section 3 allows the decoder to produce successive refinement on a partition-by-partition basis. Specifically, in the case of $B(R_0, 1)$, since the parity bits, which are the LSBs of $\mathbf{i}$ and $\mathbf{j}$, respectively, are generated using the MSB $l_{R_0-1}$ of the SDSQ index $l$, decoding and error detection can only proceed after receiving all bits of $\mathbf{i}$ and $\mathbf{j}$. However, this technique can easily be modified to achieve embedded coding at the bit level. Here, we provide two solutions. According to how redundancy is allocated to protect the partitions, these two methods are called (i) unequal error protection (UEP) and (ii) equal error protection (EEP). In both methods, at the receiver end, the reconstruction distortion decreases as more bits arrive at the decoder.

In order to develop these embedded codes, we consider a more general case of allocating information bits and redundancy bits. Specifically, for $B(R_0, r)$, the $R_0$ bits of information are divided into $g$ partitions, so that in the $k$-th partition $b_k$ bits of information are protected by $e_k$ bits of redundancy, subject to $R_0 = \sum_{k=0}^{g-1} b_k$ and $r = \sum_{k=0}^{g-1} e_k$. The resulting bit allocation scheme is $\langle (b_{g-1}, e_{g-1}), \dots, (b_1, e_1), (b_0, e_0) \rangle$, where the $(g-1)$-th partition is the most significant partition, and the zeroth partition is the least significant partition.

Different from the previous discussion, $g$ may not be equal to $r$, and $e_k$ may not be one. As a particular case, when $g = r$ and $e_k = 1$, the bit allocation scheme is $\langle (b_{r-1}, 1), \dots, (b_1, 1), (b_0, 1) \rangle$. By omitting $e_k = 1$, we get $\langle b_{r-1}, \dots, b_1, b_0 \rangle$, which is the bit allocation scheme discussed before.

4.1. Unequal error protection

In the UEP method, all redundancy is allocated to protect the more significant partitions and no redundancy is applied to protect the zeroth partition. For $B(R_0, r)$, the $R_0$ information bits are divided into $(r+1)$ partitions. Each of the $r$ most significant partitions contains only one information bit, which is duplicated and sent to both descriptions. The IA of the zeroth partition is obtained by the method for $B(R_0-r, 0)$ provided in Section 3.1. Specifically, the bit allocation scheme is given by

$\langle \underbrace{(1,1), \dots, (1,1)}_{r},\ (R_0-r, 0) \rangle$.   (16)

For $B(R_0, r)$, thus, the average spread is $\bar{s} = 2^{R_0-r-1} + 0.5$. The IA pattern of the UEP method is $2^r$ blocks of $2^{(R_0-r)/2} \times 2^{(R_0-r)/2}$ cells along the main diagonal of the IA matrix. Each cell in a block represents the map between an SDSQ index and an ERMDC index pair.
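A minimal sketch of the UEP allocation (16) is given below (our illustration; it assumes $R_0 - r$ is even and reuses the `encode_B_R0_0` sketch from Section 3.1): the $r$ MSBs are duplicated and the remaining bits are split without redundancy.

```python
# Illustration of the UEP scheme (16): duplicate the r MSBs, split the rest as in Section 3.1.

def encode_UEP(l, R0, r):
    lb = [(l >> k) & 1 for k in range(R0 - 1, -1, -1)]       # MSB-first bit list of l
    head, tail = lb[:r], lb[r:]                              # duplicated MSBs / unprotected LSB partition
    i_lo, j_lo = encode_B_R0_0(int("".join(map(str, tail)), 2), R0 - r)
    half = (R0 - r) // 2
    i_bits = head + [(i_lo >> k) & 1 for k in range(half - 1, -1, -1)]
    j_bits = head + [(j_lo >> k) & 1 for k in range(half - 1, -1, -1)]
    return i_bits, j_bits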

4.2. Equal error protection

In the EEP method, every partition is "equally" protected by one bit of redundancy. The only difference between this method and the method provided in Section 3 is the IA scheme for $B(R_0, 1)$. For $B(R_0, 1)$, the parity bits $i_0$ and $j_0$ are produced by using the LSB $l_0$, instead of the MSB $l_{R_0-1}$. Specifically, $\mathbf{i} = [l_{R_0-2} \cdots l_{(R_0-1)/2}\ i_0]$ and $\mathbf{j} = [l_{R_0-1}\ l_{(R_0-3)/2} \cdots l_1\ j_0]$, where $i_0 = l_0 \oplus \bigoplus_{k=(R_0-1)/2}^{R_0-2} l_k$ and $j_0 = l_0 \oplus l_{R_0-1} \oplus \bigoplus_{k=1}^{(R_0-3)/2} l_k$. Thus, $\bar{s}_1 = 2^{R_0-1} - 2^{(R_0-1)/2} + 1$, $\bar{s}_2 = 2^{R_0-1} + 2^{(R_0-1)/2} - 1$, and $\bar{s} = 2^{R_0-1}$. For $B(R_0, r)$, $r \geq 2$, (15) still provides the optimal bit allocation scheme, with the minimized $\bar{s}$:

$\min \bar{s} = \begin{cases} 1, & b_0 = 1, \\ 2^{b_0-1}, & b_0 \geq 3. \end{cases}$   (17)

Consequently, the average spread $\bar{s}$ of this method is equal to or larger than that of the method provided in Section 3. The EEP scheme for $B(5,1)$ illustrated in Fig. 9 gives $\bar{s} = 16$. The IA patterns of the EEP algorithm are the same as those of the ERMDC IA with the same partitions. For example, $\langle 1,3,3 \rangle$ and $\langle 1,1,5 \rangle$ of the EEP method are the same as those illustrated in Fig. 8c and d.

In both embedded IA methods, an information bit can be output as soon as it is correctly received. In addition, (6) is satisfied and allows for error detection.

5. Performance comparison

We use experimental results to demonstrate the performance of the proposed algorithms in terms of computational complexity, distortion, and robustness against packet losses and bit errors. For convenience of discussion, the fast IA algorithm proposed in Section 3 is called the ERMDC IA algorithm, and the embedded IA algorithms described in Section 4 are called UEP and EEP, respectively.

Fig. 10. Comparison of the behaviours of the side distortions $D_s$, the average spreads $\bar{s}$ and the normalized average spreads $\bar{s}_n$ with respect to the central distortions $D_0$ at a total code rate of 10 bpss. The data points in each plot, from left to right, correspond to the partition sequences $\langle 1,1,1,3 \rangle$, $\langle 1,1,5 \rangle$, $\langle 1,3,3 \rangle$, $\langle 1,7 \rangle$, $\langle 3,5 \rangle$, $\langle 9 \rangle$.

5.1. Spreads vs. side distortions

In Section 2.1, we claim that in high rate systems the average spread $\bar{s}$ and the normalized average spread $\bar{s}_n$ can be used as measures of the average side distortion $D_s$ so as to evaluate different bit allocation schemes. Taking into account different bit rates, $\bar{s}_n$ in particular is used. Here, we use experiments to verify the validity of this statement.

In the experiments, bit allocation schemes for $B(R_0, r)$, where $R_0 + r = 10$ and $r \geq 1$, are compared. Uniform scalar quantizers are applied to i.i.d. source samples uniformly distributed in $[-0.5, 0.5]$. Six bit allocation schemes are obtained: $B(9,1) = \langle 9 \rangle$, $B_1(8,2) = \langle 1,7 \rangle$, $B_2(8,2) = \langle 3,5 \rangle$, $B_1(7,3) = \langle 1,1,5 \rangle$, $B_2(7,3) = \langle 1,3,3 \rangle$ and $B(6,4) = \langle 1,1,1,3 \rangle$. The behaviours of $D_s$, $\bar{s}$ and $\bar{s}_n$ as functions of $D_0$ are compared in Fig. 10. It indicates that, compared with $\bar{s}$, $\bar{s}_n$ provides a better description of the behaviour of $D_s$ with respect to $D_0$. According to the experimental results, the optimal bit allocation schemes associated with $r = 1, 2, 3, 4$ are, respectively, $B(9,1) = \langle 9 \rangle$, $B_1(8,2) = \langle 1,7 \rangle$, $B_1(7,3) = \langle 1,1,5 \rangle$ and $B(6,4) = \langle 1,1,1,3 \rangle$. These results are consistent with the optimal bit allocation schemes obtained by using (15).

5.2. Computational complexity

The fast IA is designed to cut down the computational complexity of designing the optimum ERMDC encoder. Here, we compare the computational complexity of the proposed fast IA algorithm for $B(R_0, 1)$ with that of the IA algorithm provided in [32], in which a genetic algorithm (GA) was used to find a "close-to-optimum" solution, and with that of the MDSQ [3]. In the following, we refer to the algorithm provided in [32] as the GA, which is its key step and consumes most of its computational complexity. In order to match the design target of the proposed IA algorithm, the GA searches for a solution that minimizes $\bar{s} + |\bar{s}_1 - \bar{s}_2|$.

Fig. 9. The embedded EEP IA scheme of $B(5,1)$.

Experimental results are shown in Table 1. In the fast IA and the GA, $r = 1$ and, thus, $R = R_0 + 1$. In the MDSQ, $r$ is usually not an integer; for purposes of comparison, we chose the $r$ of the MDSQ close to 1. Both the ERMDC IA and the MDSQ achieve similar $\bar{s}$. However, when considering $|\bar{s}_1 - \bar{s}_2|$, the ERMDC IA always achieves better balanced descriptions. The central distortion $D_0$ and the average side distortion $D_s$ are obtained by applying the obtained IA schemes to Gaussian signals with zero mean and unit variance, quantized by Lloyd–Max quantizers. The results indicate that the ERMDC IA schemes achieve the most consistent performance in terms of the side distortion $D_s$. As the bit rate grows, the $D_s$ of the MDSQ becomes higher.

The computational complexity of the GA is influenced by many factors, such as the sorting algorithms applied as well as the number of genes, chromosomes and generations. The elapsed time $t$, in minutes, is used to evaluate the computational complexity of the GA. The results indicate that, given enough time, the GA can find a good IA scheme, i.e., one with an average spread $\bar{s}$ similar to that of the fast IA. As the size of the search space (represented by $R$) increases, the computational complexity of finding a good solution increases significantly. In a small search space, such as $R = 6$ bpss, the GA finds a solution comparable to the fast IA in a short time. However, when it searches a space with $R = 10$ bpss, it takes more than 128 h to obtain a good solution. In contrast, with its substantially lower computational complexity, the fast IA algorithm always produces a "close-to-optimum" IA scheme in no more than 1 ms.


Table 1. Comparison among (a) the ERMDC IA, (b) the GA and (c) the MDSQ. $D_0$ and $D_s$ are in decibels.

(a) ERMDC IA ($r = 1$)

R     s̄        |s̄1 − s̄2|   D0        Ds
6     8.5       0           −25.27    −6.79
8     32.5      0           −35.07    −6.61
10    128.5     0           −45.90    −6.85

(b) GA ($r = 1$)

R     s̄        |s̄1 − s̄2|   D0        Ds       t (min)
6     8.5       0           −25.11    −6.80    1.53
8     32.72     13.1        −35.01    −7.09    24.48
10    182.5     8           −45.75    −5.62    11 827.5

(c) MDSQ

R     r       s̄         |s̄1 − s̄2|   D0        Ds
6     0.91    8.875      0.5         −25.82    −7.12
8     1.05    30.125     0.5         −34.43    −4.40
10    0.98    137.781    0.56        −45.96    −3.78

Fig. 11. Side distortions of various bit allocations with respect to central distortions at a total code rate of 10 bpss. Note that the MSE values plotted in dB can easily be converted to PSNR for this particular source as PSNR = 6 dB − MSE(dB).


The index assignment of an MDSQ is realized in two steps: (i) all $L \leq 2^{R_0}$ MDSQ index pairs are scanned and then stored in an IA table; (ii) each SDSQ index $l$ is mapped to an MDSQ index pair $(i,j)$ by checking the IA table with $2^{R_0}$ entries. In the second step, at most $\log_2 2^{R_0} = R_0$ comparisons are needed to find the right $(i,j)$ mapped to $l$. Consequently, for encoding $K$ indices $l$, the total computational complexity is at most $(2^{R_0} + K R_0)$. Because the number of source samples is usually much larger than the number of quantization levels, i.e., $K \gg 2^{R_0}$, the computational complexity of the MDSQ in the worst case is approximately equal to $K R_0$. This is slightly higher than that of the fast IA algorithm, which needs at most $K(R_0 - r)$ XOR operations, as explained in Section 3.4.

5.3. Performance against packet losses and bit errors

In order to evaluate the performance over erasure and noisy channels, we compare the proposed methods with the MDSQ. In the following experiments, we test these algorithms at a total code rate of $R = 10$ bpss. Gaussian signals with zero mean and unit variance are quantized by Lloyd–Max scalar quantizers. Only the performance of the best schemes of the fast IA algorithms at a given $r$, i.e., $B(9,1) = \langle 9 \rangle$, $B(8,2) = \langle 1,7 \rangle$, $B(7,3) = \langle 1,1,5 \rangle$ and $B(6,4) = \langle 1,1,1,3 \rangle$, is plotted. The redundancy of the MDSQ is generated by selecting different numbers of diagonals within the IA matrix [3].

Fig. 11 shows the performance of the different MDC schemes when no bit errors are present; in this case, the central distortion/side distortion trade-off is taken as the measure of performance. The figure shows that the performance of the ERMDC and embedded UEP IA methods is very close. Compared with the MDSQ, their performance is slightly worse at high redundancy, and better at low redundancy, say, one bit of redundancy. Even though the embedded EEP IA method is worse than the other two fast IA methods at high redundancy, it outperforms the MDSQ at low redundancy. Therefore, the proposed algorithms provide much higher robustness against packet losses at low redundancy.

Next, we compare the robustness of the various IA schemes against both packet losses and bit errors. Bit errors are i.i.d. and uniformly distributed. In Fig. 12, the central distortions obtained by the ERMDC and the MDSQ are plotted as functions of the BER. For $R_0 = 7$, we compare the performance of three ERMDC schemes: $B(7,1) = \langle 7 \rangle$, $B_1(7,3) = \langle 1,1,5 \rangle$ and $B_2(7,3) = \langle 1,3,3 \rangle$. The performance of the MDSQ is also provided for comparison. Since the redundancy of the MDSQ is determined by the number of diagonals chosen in the IA matrix [3], the number of SDSQ levels usually cannot be represented by an integer number of bits. Hence, we choose an MDSQ with 4 bpss for each description and 124 SDSQ levels, which is close to $128 = 2^7$, the number of quantization levels of the SDSQ for the proposed IA algorithms.

Fig. 12 shows that with similar redundancy, theERMDC IA scheme Bð7;1Þ is more robust than the MDSQagainst both packet losses and bit errors (both schemes

Page 12: A family of fast index and redundancy assignments for error resilient multiple description coding

5 10 15 20 25 30

−30

−25

−20

−15

−10

−5

BER (%)

MSE

(dB

)

D0 (MDSQ)

D0 (<7>)

D0 (<1, 3, 3>)

D0 (<1, 1, 5>)

Fig. 12. Central distortions achieved by proposed IA schemes and the MDSQ

at various BERs. Ds obtained by the MDSQ, /7S, /1;3,3S and /1;1,5S are

�4.26 dB, �6.78 dB, �8.65 dB and �13.86 dB, respectively.

[Figure 13 plots MSE (dB) versus BER for the ERMDC, EEP, UEP and MDSQ schemes.]

Fig. 13. Central distortions of various IA methods at various BERs.


Fig. 12 also shows two schemes that have a higher redundancy for the same source rate (R_0 = 7 bpss, R = 10 bpss) but different bit allocations, ⟨1,1,5⟩ and ⟨1,3,3⟩. As the redundancy increases, the proposed algorithm provides higher robustness against packet losses and bit errors. In addition, ⟨1,1,5⟩ outperforms ⟨1,3,3⟩ even though both have the same redundancy. This suggests that the bit allocation scheme generated by using (15) provides the highest robustness against bit errors for given R_0 and r.

In Fig. 13, we compare the central distortions obtained by the three fast IA methods, i.e., the ERMDC, embedded UEP and embedded EEP IA algorithms, at R = 10 and r = 1. The central distortions obtained by the MDSQ are also plotted as a reference. The figure shows that the ERMDC achieves the highest robustness against bit errors. The UEP achieves robustness similar to that of the MDSQ, and the EEP outperforms the UEP marginally.

6. Discussion and conclusion

In this paper, a new family of fast index and redundancy assignment algorithms for the ERMDC is proposed. The proposed algorithms exploit redundancy equivalent to a number of parity bits to enhance the robustness of the ERMDC system against both bit errors and packet losses. Without checking a predetermined IA table, the proposed algorithms can be implemented "on the fly". Experimental results show that the proposed algorithms achieve consistent robustness against both bit errors and packet losses. In order to accommodate progressive transmissions, the proposed algorithms can achieve successive refinement at the level of bit partitions or bits.

The IA method proposed here for r ≥ 2 can be interpreted as a form of multi-level MDC. Multi-level MDC was proposed earlier, based on the MDSQ, to achieve higher robustness against packet losses [12]. Each source sample is decomposed into several levels of multiple descriptions, and each level of a description is encapsulated into an individual packet. Consequently, packet losses only affect the corresponding levels, which might still be decoded by the side decoders, while the remaining levels from all descriptions are decoded by the central decoder. The IA algorithms proposed here for r ≥ 2 are in fact multi-level MDC, because bit partitions can be treated as levels and encapsulated into different packets. In addition, each level or bit partition provides protection from bit errors.
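As an illustration of this multi-level view, the sketch below splits each description codeword into bit partitions according to a bit allocation such as ⟨1,1,5⟩ and groups the partitions of one description into per-level packets, so that losing a packet removes only the corresponding level. The most-significant-first splitting order and the packet representation are assumptions made for the sketch, not details prescribed by the paper.

def packetize_by_partition(codewords, allocation):
    # Split each codeword of one description into its bit partitions and
    # collect each partition across codewords into a separate per-level packet.
    total_bits = sum(allocation)
    packets = [[] for _ in allocation]
    for w in codewords:
        shift = total_bits
        for level, width in enumerate(allocation):
            shift -= width
            packets[level].append((w >> shift) & ((1 << width) - 1))
    return packets

# Example: a 7-bit codeword under the <1, 1, 5> allocation yields three levels.
packets = packetize_by_partition([0b1011010, 0b0011111], allocation=(1, 1, 5))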

In this paper, we only discuss the case of two descriptions. The extension of these schemes to more than two descriptions is left for future work, as it is not straightforward to generalize the techniques developed here to more than two dimensions. It would also be interesting to compare the performance against packet losses achieved by multi-level MDC with that achieved by more than two descriptions.

As mentioned above, the ERMDC was proposed to protect against both packet losses and bit errors. Here, we have only evaluated its performance in cases where either packets are lost or transmission errors occur. We have explored elsewhere the possible trade-offs between ERMDC protection and FEC protection; such designs would require further investigation in the future.

References

[1] L. Ozarow, On a source-coding problem with two channels and three receivers, Bell System Technical Journal 59 (10) (1980) 1909–1921.

[2] J.K. Wolf, A.D. Wyner, J. Ziv, Source coding for multiple descriptions, Bell System Technical Journal 59 (8) (1980) 1417–1426.

[3] V.A. Vaishampayan, Design of multiple description scalar quantizers, IEEE Transactions on Information Theory 39 (3) (1993) 821–834.

[4] C. Tian, S.S. Hemami, Universal multiple description scalar quantization: analysis and design, IEEE Transactions on Information Theory 50 (9) (2004) 2089–2102.

[5] V.A. Vaishampayan, J. Domaszewicz, Design of entropy-constrained multiple-description scalar quantizers, IEEE Transactions on Information Theory 40 (1) (1994) 245–250.

[6] J. Cardinal, Entropy-constrained index assignments for multiple description quantizers, IEEE Transactions on Signal Processing 52 (1) (2004) 265–270.

[7] T.Y. Berger-Wolf, E.M. Reingold, Index assignment for multichannel communication under failure, IEEE Transactions on Information Theory 48 (10) (2002) 2656–2668.


[8] J. Cardinal, Multistage index assignments for M-description coding, Proceedings of the IEEE International Conference on Image Processing, vol. 3, 2003, pp. III-249–III-252.

[9] I. Radulovic, P. Frossard, Balanced multiple description scalar quantization, in: Proceedings of the IEEE International Symposium on Information Theory, 2008.

[10] W. Jiang, A. Ortega, Multiple description coding via polyphase transform and selective quantization, Proceedings of the SPIE Visual Communications and Image Processing, vol. 3653, January 1999, pp. 998–1008.

[11] S.D. Servetto, K. Ramchandran, V.A. Vaishampayan, K. Nahrstedt, Multiple description wavelet based image coding, IEEE Transactions on Image Processing 9 (5) (2000) 813–826.

[12] T.A. Beery, R. Zamir, Multi level multiple description, in: Proceedings of the Data Compression Conference, 2009, pp. 63–72.

[13] I. Radulovic, P. Frossard, Fast index assignment for balanced N-description scalar quantization, in: Proceedings of the Data Compression Conference, 2005.

[14] J. Klejsa, M. Kuropatwinski, W.B. Kleijn, Adaptive resolution-constrained scalar multiple-description coding, in: Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, 2008, pp. 2945–2948.

[15] T. Guionnet, C. Guillemot, S. Pateux, Embedded multiple description coding for progressive image transmission over unreliable channels, Proceedings of the IEEE International Conference on Image Processing, vol. 1, October 2001, pp. 94–97.

[16] A.I. Gavrilescu, A. Munteanu, P. Schelkens, J. Cornelis, Embedded multiple description scalar quantisers, IET Electronic Letters 39 (13) (2003) 979–980.

[17] A.I. Gavrilescu, A. Munteanu, J. Cornelis, P. Schelkens, Generalisation of embedded multiple description scalar quantisers, IET Electronic Letters 41 (2) (2005) 63–65.

[18] A.C. Ashwin, K.R. Ramakrishman, S.H. Srinivasan, A multiple description method for wavelet based image coding, Proceedings of the IEEE International Conference on Image Processing, vol. 2, September 2002, pp. II-709–II-712.

[19] I.-K. Eom, Y.-S. Kim, Robust EZW coding with shared threshold, IET Electronic Letters 39 (21) (2003) 1514–1515.

[20] I.V. Bajic, J.W. Woods, Domain-based multiple description coding of images and video, IEEE Transactions on Image Processing 12 (10) (2003) 1211–1225.

[21] C. Cai, J. Chen, K.-K. Ma, S.K. Mitra, Multiple description wavelet coding with dual decomposition and cross packetization, Signal, Image and Video Processing 1 (1) (2007) 53–61.

[22] C. Cai, J. Chen, S.K. Mitra, Structure unanimity multiple description coding, Signal Processing: Image Communication 22 (1) (2007) 59–68.

[23] Y. Wang, A.R. Reibman, S. Lin, Multiple description coding for video delivery, Proceedings of the IEEE 93 (1) (2005) 57–70.

[24] R. Puri, K. Ramchandran, Multiple description source coding using forward error correction codes, Proceedings of the Asilomar Conference on Signals, Systems, and Computers, vol. 1, October 1999, pp. 342–346.

[25] K. Lee, R. Puri, T. Kim, K. Ramchandran, V. Bharghavan, An integrated source coding and congestion control framework for video streaming in the Internet, in: Proceedings of the IEEE International Conference on Computer Communications, March 2000, pp. 747–756.

[26] D.G. Sachs, R. Anand, K. Ramchandran, Wireless image transmission using multiple-description based concatenated codes, Proceedings of the SPIE Visual Communications and Image Processing, vol. 3974, January 2000, pp. 300–311.

[27] A.E. Mohr, E.A. Riskin, R.E. Ladner, Unequal loss protection: graceful degradation of image quality over packet erasure channels through forward error correction, IEEE Journal on Selected Areas in Communications 18 (2000) 819–828.

[28] J. Barros, J. Hagenauer, N. Gortz, Turbo cross decoding of multiple descriptions, in: Proceedings of the IEEE International Conference on Communications, April 2002, pp. 1398–1402.

[29] M. Grangetto, E. Magli, G. Olmo, Ensuring quality of service for image transmission: hybrid loss protection, IEEE Transactions on Image Processing 13 (6) (2004) 751–757.

[30] J. Goshi, A.E. Mohr, R.E. Ladner, E.A. Riskin, A. Lippman, Unequal loss protection for H.263 compressed video, IEEE Transactions on Circuits and Systems for Video Technology 15 (3) (2005) 412–419.

[31] M.Y. Kim, W.B. Kleijn, Rate-distortion comparisons between FEC and MDC based on Gilbert channel model, in: Proceedings of the IEEE International Conference on Networks, September 2003, pp. 495–500.

[32] R. Ma, F. Labeau, Error-resilient multiple description coding, IEEE Transactions on Signal Processing 56 (8) (2008) 3996–4007.

[33] R. Ma, F. Labeau, Soft input error resilient multiple description coding for Rayleigh fading channels, in: Proceedings of the IEEE International Conference on Multimedia Expo, July 2007, pp. 1147–1150.

[34] K. Zeger, A. Gersho, Zero redundancy channel coding in vector quantisation, IET Electronic Letters 23 (12) (1987) 654–656.

[35] K. Zeger, A. Gersho, Vector quantizer design for memoryless noisy channels, Proceedings of the IEEE International Conference on Communications, vol. 3, June 1988, pp. 1593–1597.

[36] K. Zeger, A. Gersho, Pseudo-Gray coding, IEEE Transactions on Communications 38 (12) (1990) 2147–2158.

[37] C.J. Kuo, C.H. Lin, C.H. Yeh, Noise reduction of VQ encoded images through anti-Gray coding, IEEE Transactions on Image Processing 8 (1) (1999) 33–40.

[38] W.-W. Chang, T.-H. Tan, D.-Y. Wang, Robust vector quantization for wireless channels, IEEE Journal on Selected Areas in Communications 19 (7) (2001) 1365–1373.

[39] N. Farvardin, V. Vaishampayan, Optimal quantizer design for noisy channels: an approach to combined source-channel coding, IEEE Transactions on Information Theory 33 (6) (1987) 827–837.

[40] N. Farvardin, A study of vector quantization for noisy channels, IEEE Transactions on Information Theory 36 (4) (1990) 799–809.

[41] N. Farvardin, V. Vaishampayan, On the performance and complexity of channel-optimized vector quantizers, IEEE Transactions on Information Theory 37 (1) (1991) 155–160.

[42] T. Guionnet, C. Guillemot, E. Fabre, Soft decoding of multiple descriptions, Proceedings of the IEEE International Conference on Multimedia Expo, vol. 2, August 2002, pp. 601–604.

[43] Y. Zhou, W.-Y. Chan, Multiple description quantizer design using a channel optimized quantizer approach, in: Proceedings of the Conference on Information, Science, and Systems, March 2004.

[44] R. Ma, F. Labeau, Fast index assignment for robust multiple description coding, in: Proceedings of the IEEE International Conference on Image Processing, October 2008, pp. 2052–2055.

[45] R. Ma, F. Labeau, Generalized fast index assignment for robust multiple description scalar quantizers, in: Proceedings of the Asilomar Conference on Signals, Systems, and Computers, October 2008, pp. 1287–1291.

[46] V.N. Koshelev, Hierarchical coding of discrete sources, Problems of Information Transmission 16 (3) (1980) 31–49.

[47] W.H.R. Equitz, T.M. Cover, Successive refinement of information, IEEE Transactions on Information Theory 37 (2) (1991) 269–275.

[48] B. Rimoldi, Successive refinement of information: characterization of achievable rates, IEEE Transactions on Information Theory 40 (1) (1994) 253–259.

[49] A. Said, W.A. Pearlman, A new, fast, and efficient image codec based on set partitioning in hierarchical trees, IEEE Transactions on Circuits and Systems for Video Technology 6 (3) (1996) 243–250.