
On Linear Block Codes and Deterministic Compressive Sampling

Nicholas Tsagkarakis and Dimitris A. Pados

Department of Electrical Engineering, State University of New York at Buffalo, Buffalo, NY 14260

ABSTRACT

We suggest and explore a parallelism between linear block code parity check matrices and binary zero/one measurement matrices for compressed sensing. The resulting family of deterministic compressive samplers renders itself to the development of effective and efficient recovery algorithms for sparse signals that are not ℓ1-based. Experimental results that we include herein demonstrate the utility of the presented developments.

Keywords: Compressive sampling, compressed sensing, data storage, linear block codes, error correcting codes, Shannon-Nyquist theorem, signal reconstruction, sparse signals.

1. INTRODUCTION

In many practical cases, signals that we collect are sparse. Sparsity may manifest itself directly in the time domain or in a frequency/transform domain. A compressive sensing (CS) system attempts sampling of sparse signals at a rate lower than the required Nyquist rate. Yet, (near) perfect reconstruction is expected. In particular, CS is a matrix operation on a given sparse long signal vector with output a vector of much smaller dimension. Finding a deterministic matrix with good compression ratio that performs this mapping and developing reconstruction algorithms that invert this operation are two coupled problems that we investigate in this paper.

In this context, Candes and Tao1 and Donoho2 showed that under certain plurality and energy-preserving conditions O(k log(n/k)) linear projections (measurements), where n denotes the dimensionality of the long original signal vector with k nonzero elements, suffice for perfect signal vector reconstruction later on by solving an ℓ1-norm minimization problem. Although it is practically impossible to check the required matrix conditions∗ (complexity factorial in matrix size), randomly drawn (Gaussian, Bernoulli, etc.) measurement matrices satisfy the conditions with near-one ("overwhelming"1,2) probability. For recovery, solving the ℓ1-norm minimization problem requires O(n^3) elementary operations, quickly turning numerically infeasible as n grows. There is, however, by now a host of fast, greedy algorithms that perform reconstruction suboptimally, such as orthogonal matching pursuit (OMP)4,5 and its variations6,7,8, and iterative hard thresholding (IHT)9.

The examples of works mentioned above utilize Gaussian or Bernoulli generated random sampling matrices. Such matrices, of course, have no specific algebraic structure, require explicit storage at the receiver/reconstruction site, and with non-zero probability may fail to meet the required sampling conditions, which makes them less favored for use in critical field applications. Instead, DeVore10 developed a deterministic construction method for measurement matrices of size m-by-n and strictly sparse signals with sparsity in the order of O(√m). Xu, Hassibi, Jafarpour, and Calderbank11,12,13 suggested use of adjacency matrices from expander graphs as measurement matrices. While such matrices have no explicit algebraic description, it is shown that if the expansion coefficient is greater than 0.75 and the number of projections m is of order O(k log n), exact signal recovery can be achieved with O(n log(n/k)) complexity. Direct algebraic construction of measurement matrices from classical finite generalized polygons is proposed in14 together with algorithms for guaranteed perfect reconstruction in noiseless sensing and belief-propagation-based15,16 recovery in noisy sensing.

Further author information:
N.T.: E-mail: ntsagkar@buffalo.edu, Telephone: 1 716 479 2440
D.A.P.: E-mail: pados@buffalo.edu, Telephone: 1 716 645 1150
∗In general, there is no practical algorithm to test the suitability of a matrix. Some work has been done in this direction in3 that provides necessary, but not sufficient, conditions for perfect recovery of at most k-sparse signals.

Compressive Sensing, edited by Fauzia Ahmad, Proc. of SPIE Vol. 8365, 836506 © 2012 SPIE

CCC code: 0277-786X/12/$18 · doi: 10.1117/12.920180


This paper is purely concerned with binary measurement matrices closely related to the algebra behind linear block codes. In particular, we set the measurement matrices to be the parity-check matrices of certain linear block codes known as low-density parity-check (LDPC) codes17. Naturally, for recovery we pursue belief-propagation reconstruction, which is known to be a most successful decoding approach for LDPC codes.

The rest of this paper is organized as follows. A formal description and background of the compressive sensing problem is presented in Section 2. In Section 3, we discuss extensively the steps that one may follow to build LDPC samplers and we give a complete description of the reconstruction algorithm. We present experimental evaluations of the performance of our scheme in Section 4. Finally, in Section 5 we draw a few conclusions.

2. BACKGROUND

A signal x in R^n is said to be k-sparse in a transform domain represented by the basis matrix Φ_{n×n} if Φx has all but k coordinates equal to zero. For simplicity and without any loss of generality, in the following we assume that our signals of concern are directly sparse in the time/observation domain (i.e., Φ_{n×n} = I_n, where I_n represents the size-n identity matrix).
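As a minimal illustration of this definition (the helper name is ours, not from the paper), k-sparsity in a given transform domain can be tested as follows:

import numpy as np

def is_k_sparse(x, k, Phi=None):
    # x is k-sparse in the domain of Phi if Phi @ x has at most k non-zeros;
    # Phi=None corresponds to the identity basis assumed in the text
    z = x if Phi is None else Phi @ x
    return np.count_nonzero(z) <= k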

In compressed sensing, we select m > k projections (linear combinations of the elements) of x, i.e., we calculate

$$ y_{m\times 1} = A_{m\times n}\, x_{n\times 1} \tag{1} $$

where A is the utilized measurement matrix (sampler) and y is the resulting compressed signal vector of length m much smaller than n (m ≪ n). In general, the objective of CS is to design samplers and corresponding recovery procedures (that take us back to x from y) that can perform well with n and k being as large as possible and m as small as possible. As is known from1, under the assumption that m = O(k log(n/k)), with high probability a sampler A in {±1}^{m×n} or {0, 1}^{m×n} filled in randomly with entries drawn independently from the same distribution may be used to recover exactly any k-sparse signal x from y by solving the following minimization problem

$$ \hat{x} = \arg\min_{x} \|x\|_{\ell_1} \quad \text{subject to} \quad y = Ax. \tag{2} $$
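For reference, problem (2) is equivalent to a linear program obtained by bounding |x| with auxiliary variables t. The sketch below, using scipy.optimize.linprog, is only an illustration of this baseline; it is not the recovery method pursued in this paper, and the helper name is ours:

import numpy as np
from scipy.optimize import linprog

def l1_min(A, y):
    # min sum(t) s.t. -t <= x <= t and A x = y, over variables z = [x; t]
    m, n = A.shape
    c = np.concatenate([np.zeros(n), np.ones(n)])   # objective: sum of t
    A_ub = np.block([[np.eye(n), -np.eye(n)],       #  x - t <= 0
                     [-np.eye(n), -np.eye(n)]])     # -x - t <= 0
    b_ub = np.zeros(2 * n)
    A_eq = np.hstack([A, np.zeros((m, n))])         # A x = y (t not involved)
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=y,
                  bounds=[(None, None)] * n + [(0, None)] * n)
    return res.x[:n]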

Although the above scenario looks simple and promising, it comes with a few points of concern: (i) The first issue has to do with storage requirements. Since A is random and without structure, it needs to be stored with minimum irreducible space O(mn) (if the sparsity basis Φ is not standard, the sampling side needs to store AΦ and the recovery side needs to store A and Φ^{-1}). (ii) Computationally, the direct ℓ1-minimization approach has cost O(n^3). Even the fastest greedy optimizers (stage-wise orthogonal matching pursuit6, for example) need O(n log n) elementary operations. Still, for n large enough, O(n log n) operations could prove to be impractical. (iii) Finally, performance-wise, in critical applications the lack of a lower bound on reconstruction performance cannot be easily accepted. Candes and Tao1 showed that, based on the random choices of A, the recovery is exact with probability 1 − O(n^{−ρ/α}) for some constants ρ > 0 and α ≥ (k/m) log n; then, with non-zero probability, failure of unspecified proportions may occur. As an encompassing concluding remark, it is therefore highly desirable to look for ways to build samplers with a deterministically describable structure that reduces storage requirements and renders the associated recovery algorithms provably effective and efficient. In the following section, we explore the behavior of a CS system when the sampler is the parity check matrix of an LDPC linear block code and the recovery algorithm is an associated belief-propagation channel decoder15.

3. LDPC SAMPLERS AND THE RECOVERY ALGORITHM

3.1 Samplers from LDPC codes

Consider an LDPC code fully described by its parity check matrix H in {0, 1}^{m×n}. We say that each row [h_{i,1} ··· h_{i,n}], i ∈ {1, 2, . . . , m}, carries out a parity check on the n received vector variables [r_1 ··· r_n] in the form of [h_{i,1} ··· h_{i,n}][r_1 ··· r_n]^T, where all operations are in GF(2) (binary Galois field). The code qualifies as LDPC if H itself is sparse, that is, H has only a few non-zero elements.
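In code, running all m parity checks at once amounts to computing the GF(2) syndrome of the received word; a minimal numpy sketch (helper name ours):

import numpy as np

def syndrome(H, r):
    # all m parity checks at once: H r^T over GF(2);
    # an all-zero syndrome means r satisfies every check
    return (np.asarray(H) @ np.asarray(r)) % 2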


The belief-propagation algorithm15 has been proven to be a most effective and computationally efficient decoder for LDPC codes18. An LDPC code metric that has been seen to have positive correlation with the performance of belief-propagation decoding is the girth† of the graph representation of the code. Two popular ways to randomly construct low-density parity check matrices that avoid small girth are the MacKay-Neal construction19 and the Progressive Edge Growth20 design.

a) MacKay-Neal19: The matrix H is generated with fixed small weight wc per column (wc 1's in each column) and as uniform a weight per row as possible, with no two columns having a common 1 in more than one position (thereby avoiding cycles of length four); see the sketch after item d) below.

b) PEG20: The algorithm builds a graph representation of the LDPC code edge by edge, such that at each time the added edge has minimal impact on the girth of the graph.

A simple way to design an LDPC matrix was suggested in Gallager’s original work.

c) Gallager17: The rows are divided into wc sets with m/wc rows in each set. Each row in the first set contains wr consecutive ones without "vertical" overlapping. Every other set of rows is a randomly chosen column permutation of this first set (also sketched after item d) below).

Finally, an explicit algebraic design of LDPC matrices was suggested by Liu and Pados in21.

d) Generalized Polygons21: In the field of generalized geometry, an incidence structure Γ = (C, V, H) consists of the set of points C, the set of lines V, and the incidence set H ⊆ V × C which identifies which lines go through which points. The matrix H with entries H(i, j) = 1 if line i goes through point j and H(i, j) = 0 otherwise is called the incidence matrix of Γ. Classical finite generalized polygons are a specific, widely studied subclass of Γ22,23. For our generalized polygon LDPC design, we simply use the incidence matrix of our selected polygons21 (see the sketches below).
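As a concrete illustration of constructions a) and c), the sketch below generates small test matrices in Python. It is our own simplified rendering (function and parameter names are ours): the MacKay-Neal variant draws columns of weight wc biased toward currently light rows and redraws any column that would share more than one 1 with an earlier column, while the Gallager variant stacks wc row sets, the first holding wr consecutive ones per row and the rest being random column permutations of it.

import numpy as np

def mackay_neal(m, n, wc, seed=0, max_tries=1000):
    # random column-weight-wc matrix, near-uniform row weights, no 4-cycles
    rng = np.random.default_rng(seed)
    H = np.zeros((m, n), dtype=int)
    for j in range(n):
        for _ in range(max_tries):
            w = 1.0 / (1.0 + H.sum(axis=1))          # favor the lightest rows
            rows = rng.choice(m, size=wc, replace=False, p=w / w.sum())
            col = np.zeros(m, dtype=int)
            col[rows] = 1
            # accept only if no earlier column shares more than one 1
            if j == 0 or (H[:, :j].T @ col).max() <= 1:
                H[:, j] = col
                break
        else:
            raise RuntimeError("could not avoid a length-four cycle; retry")
    return H

def gallager(m, n, wc, seed=0):
    # wc row sets of m/wc rows each; assumes wc divides m and (m/wc) divides n
    rng = np.random.default_rng(seed)
    rows_per_set = m // wc
    wr = n // rows_per_set
    first = np.zeros((rows_per_set, n), dtype=int)
    for i in range(rows_per_set):
        first[i, i * wr:(i + 1) * wr] = 1            # non-overlapping runs of 1's
    sets = [first] + [first[:, rng.permutation(n)] for _ in range(wc - 1)]
    return np.vstack(sets)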
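For construction d), once the point/line incidences of the chosen polygon have been tabulated (a step we do not reproduce here), forming the sampler is mechanical; a generic sketch of the incidence-matrix definition above, with hypothetical argument names:

import numpy as np

def incidence_matrix(n_lines, n_points, incidences):
    # H(i, j) = 1 iff line i goes through point j, else 0
    H = np.zeros((n_lines, n_points), dtype=int)
    for line, point in incidences:
        H[line, point] = 1
    return H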

3.2 The recovery algorithm

We consider an iterative belief-propagation-type algorithm for the recovery of a signal vector x_{n×1} from its noisy compressed sampled version y_{m×1} with sampler matrix A_{m×n},

$$ y_{m\times 1} = A_{m\times n}\, x_{n\times 1} + n_{m\times 1} \tag{3} $$

where n is a zero-mean additive white Gaussian noise (AWGN) vector with covariance matrix σ²I_m.
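The sampling step itself is a single matrix operation; a minimal sketch of (3), with a hypothetical helper name:

import numpy as np

def sense(A, x, sigma2, seed=0):
    # noisy compressive sampling per Eq. (3): y = A x + n, n ~ N(0, sigma2 I)
    rng = np.random.default_rng(seed)
    return A @ x + rng.normal(0.0, np.sqrt(sigma2), size=A.shape[0])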

The goal of a belief-propagation algorithm is to estimate every marginal posterior distribution of the entries of the original signal x. If the signal entries can be considered statistically independent, the transformation/measurement is linear, and the noise is additive white Gaussian, then the maximum likelihood (ML) estimate of the original signal is its estimated mean value. If we have a good estimate of the probability density functions (pdfs) of the posteriors, then calculating the mean value is straightforward. Obtaining the pdf estimates is done iteratively by exchanging messages over the edges of the graph representation of the measurement matrix16. The outgoing message of a node is a function of the incoming messages; therefore, we prefer graphs with as large a girth as possible so that messages do not propagate errors due to "recycling."

One way to implement this idea is to define the message over the bidirectional edge (v, c) as the quantized marginal pdf of node v (or c, depending on the direction of the current message) associated with this edge. For algorithmic iteration t, let μ_{v→c}^{(t)} denote the message from variable node v to check node c; similarly, denote by μ_{c→v}^{(t)} the message from check node c to variable node v. The first (initial) step is to set the estimate of the variable-node posterior pdfs equal to the priors of x. These estimates will be the first messages sent from the variable-node side to the check-node side. The iterative update rules are as follows:

1) Send messages from c to v:

$$ \mu^{(t)}_{c\to v}(x_v) = \alpha \int_{x_i,\; i\in N(c)-\{v\}} p\left(x_v \mid y_c, \{x_i,\ i\in N(c)-\{v\}\}\right) \prod_{g\in N(c)-\{v\}} \mu^{(t)}_{g\to c}(x_g) \tag{4} $$

lifting the conditioning from all the variables connected to c except v; α is a pdf-normalizing constant.

†Girth is defined as the number of edges participating in the smallest cycle in the graph.

2) Send messages from v to c:

$$ \mu^{(t)}_{v\to c}(x_v) = \alpha \prod_{g\in N(v)-\{c\}} \mu^{(t-1)}_{g\to v}(x_v) \tag{5} $$

which is the product of all check-node messages received at v at time t − 1, excluding the message from check node c (α is again a pdf normalizer).

At the final step of the algorithm we collect all incoming messages to each variable node and compute theirproduct. The normalized result is our estimate of the posterior pdfs of each entry of the original signal x.
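For concreteness, a compact sketch of this message-passing loop on a uniformly discretized pdf grid is given below. It is our own simplified rendering under stated assumptions, not the authors' implementation: all pdfs live on one common grid t; the check-to-variable update (4) is realized by convolving the other neighbors' messages together with the noise pdf (since y_c is a sum of independent terms) and reading the result at y_c − x_v; the prior is folded into each variable-side update (5); and grid-truncation effects are ignored.

import numpy as np

def bp_recover(A, y, sigma2, t, prior, iters=10):
    # A: (m, n) binary 0/1 sampler; t: uniform value grid; prior: pdf on t
    m, n = A.shape
    dx = t[1] - t[0]
    noise = np.exp(-t**2 / (2.0 * sigma2))
    noise /= noise.sum() * dx                          # discretized N(0, sigma2)
    edges = [(c, v) for c in range(m) for v in range(n) if A[c, v]]
    mu_vc = {e: prior.copy() for e in edges}           # variable-to-check msgs
    mu_cv = {e: np.ones_like(prior) / (len(t) * dx) for e in edges}

    for _ in range(iters):
        for (c, v) in edges:                           # Eq. (4) via convolution
            s, lo = noise, t[0]                        # pdf of noise + other x's
            for g in np.flatnonzero(A[c]):
                if g != v:
                    s = np.convolve(s, mu_vc[(c, g)]) * dx
                    lo += t[0]                         # start of extended grid
            grid_s = lo + dx * np.arange(len(s))
            msg = np.interp(y[c] - t, grid_s, s, left=0.0, right=0.0)
            mu_cv[(c, v)] = msg / max(msg.sum() * dx, 1e-300)
        for (c, v) in edges:                           # Eq. (5), prior included
            msg = prior.copy()
            for d in np.flatnonzero(A[:, v]):
                if d != c:
                    msg = msg * mu_cv[(d, v)]
            mu_vc[(c, v)] = msg / max(msg.sum() * dx, 1e-300)

    x_hat = np.zeros(n)                                # final step: posterior means
    for v in range(n):
        post = prior.copy()
        for d in np.flatnonzero(A[:, v]):
            post = post * mu_cv[(d, v)]
        post /= max(post.sum() * dx, 1e-300)
        x_hat[v] = (t * post).sum() * dx
    return x_hat

In practice the messages would be quantized more coarsely and the convolutions carried out with FFTs for speed; the dictionary-of-edges layout above is chosen for readability, not efficiency.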

4. EXPERIMENTS

In this section, we experiment with the concept of using LDPC matrices as measurement matrices. We create k-sparse signals x ∈ R^n to be compressed, drawing their non-zero entries independently Gaussian distributed with zero mean and variance 1000. The n − k zero entries in x are placed randomly. We build the LDPC-CS samplers A according to the methods described in Section 3.1. Then, we perform noisy compressed sensing of x as in (3) with noise variance σ² = 1 (SNR = 30 dB). The recovery algorithm used is the belief propagation described in Section 3.2, run for 10 iterations. The reconstruction performance evaluation criteria that we consider are the normalized ℓ1-norm of the error, ‖x − x̂‖₁/‖x‖₁, and the average fraction of the large (non-zero) coefficients that were correctly recovered‡. The results that we present in Figs. 1, 2, 3, 4 are averages over 400 trials. The generalized polygons (GPs) used for these studies are the quadrangle H(3, 3²) (of size 280 by 112) and the hexagon D2(2) (of size 2457 by 819). The Bernoulli samplers are filled as in conventional CS literature with density (percentage of 1's in the measurement matrix) given in parentheses.

The generalized quadrangle and the MacKay-Neal parity check matrix for 280 by 112 compressive sampling, and the generalized hexagon for 2457 by 819 compressive sampling, show the most appealing performance.

5. SUMMARY

We suggested compressive sensing systems with samplers being low-density parity check (LDPC) matrices and reconstruction by belief propagation. In particular, we demonstrated that LDPC-CS from generalized polygons is an exceptionally well performing choice for both small and large problem scales. Among other helpful properties, generalized polygons have incidence matrices which are associated with bipartite graphs of typically high girth, which facilitates effective belief-propagation decoding (reconstruction).

REFERENCES

[1] E. Candes and T. Tao, "Near optimal signal recovery from random projections: Universal encoding strategies?," IEEE Trans. Inf. Theory, vol. 52, pp. 5406-5425, Dec. 2006.

[2] D. L. Donoho, "Compressed sensing," IEEE Trans. Inf. Theory, vol. 52, no. 4, pp. 1289-1306, Apr. 2006.

[3] R. Calderbank, S. Howard, and S. Jafarpour, "Construction of a large class of deterministic sensing matrices that satisfy a statistical isometry property," IEEE J. Sel. Topics Signal Process., vol. 4, pp. 358-374, Apr. 2010.

[4] Y. C. Pati, R. Rezaiifar, and P. S. Krishnaprasad, "Orthogonal matching pursuit: Recursive function approximation with applications to wavelet decomposition," in Proc. 27th Asilomar Conf. Signals, Syst., Comput., Pacific Grove, CA, Nov. 1993, pp. 40-44.

[5] J. A. Tropp and A. C. Gilbert, "Signal recovery from random measurements via orthogonal matching pursuit," IEEE Trans. Inf. Theory, vol. 53, pp. 4655-4666, Dec. 2007.

‡We consider a large coefficient x_i correctly recovered if the deviation from the true value is less than one twentieth (5%) of its absolute value, |x_i − x̂_i|/|x_i| ≤ 0.05.


[Figure 1 plot omitted: curves for Gallager, MacKay-Neal, PEG, GP, Bernoulli (2%), and Bernoulli (10%) samplers; x-axis: sparsity, 0 to 70; y-axis: normalized ℓ1-error, 0 to 1.]

Figure 1. Normalized ℓ1-error versus signal sparsity k for noisy (SNR = 30 dB) samples and samplers of size 280 by 112 (averages over 400 trials).

[Figure 2 plot omitted: same six samplers as Fig. 1; x-axis: sparsity, 0 to 70; y-axis: average fraction of recovery support, 0 to 1.]

Figure 2. Average percentage of recovered important entries versus signal sparsity k for noisy (SNR = 30 dB) samples and samplers of size 280 by 112 (averages over 400 trials).


[Figure 3 plot omitted: curves for Gallager, MacKay-Neal, PEG, GP, Bernoulli (0.1%), and Bernoulli (1%) samplers; x-axis: sparsity, 0 to 400; y-axis: normalized ℓ1-error, 0 to 1.8.]

Figure 3. Normalized ℓ1-error versus signal sparsity k for noisy (SNR = 30 dB) samples and samplers of size 2457 by 819 (averages over 400 trials).

[Figure 4 plot omitted: same six samplers as Fig. 3; x-axis: sparsity, 0 to 400; y-axis: average fraction of recovery support, 0 to 1.]

Figure 4. Average percentage of recovered important entries versus signal sparsity k for noisy (SNR = 30 dB) samples and samplers of size 2457 by 819 (averages over 400 trials).


[6] D. L. Donoho, Y. Tsaig, I. Drori, and J.-L. Starck, "Sparse solution of underdetermined linear equations by stagewise orthogonal matching pursuit," Statistics Dept., Stanford Univ., Stanford, CA, Tech. Rep. TR-2006-2, 2006.

[7] D. Needell and J. A. Tropp, "CoSaMP: Iterative signal recovery from incomplete and inaccurate samples," Appl. Comput. Harmon. Anal., vol. 26, no. 3, pp. 301-321, May 2009.

[8] W. Dai and O. Milenkovic, "Subspace pursuit for compressive sensing: Closing the gap between performance and complexity," IEEE Trans. Inf. Theory, vol. 55, pp. 2230-2249, May 2009.

[9] T. Blumensath and M. E. Davies, "Iterative hard thresholding for compressed sensing," Appl. Comput. Harmon. Anal., vol. 27, no. 3, pp. 265-274, Nov. 2009.

[10] R. A. DeVore, "Deterministic constructions of compressed sensing matrices," J. Complexity, vol. 23, pp. 918-925, Aug. 2007.

[11] W. Xu and B. Hassibi, "Efficient compressive sensing with deterministic guarantees using expander graphs," in Proc. IEEE Inf. Theory Workshop, Lake Tahoe, CA, Sept. 2007, pp. 414-419.

[12] S. Jafarpour, W. Xu, B. Hassibi, and R. Calderbank, "Efficient and robust compressed sensing using optimized expander graphs," IEEE Trans. Inf. Theory, vol. 55, pp. 4299-4308, Sept. 2009.

[13] M. Sipser and D. A. Spielman, "Expander codes," IEEE Trans. Inf. Theory, vol. 42, pp. 1710-1722, Nov. 1996.

[14] K. Gao, S. N. Batalama, D. A. Pados, and B. W. Suter, "Compressive sampling with generalized polygons," IEEE Trans. Signal Proc., vol. 59, pp. 4759-4766, Oct. 2011.

[15] J. Pearl, "Reverend Bayes on inference engines: A distributed hierarchical approach," in Proc. AAAI National Conf. AI, Pittsburgh, PA, 1982, pp. 133-136.

[16] D. Baron, S. Sarvotham, and R. Baraniuk, "Bayesian compressive sensing via belief propagation," IEEE Trans. Signal Proc., vol. 58, pp. 269-280, Jan. 2010.

[17] R. G. Gallager, Low-Density Parity-Check Codes. Cambridge, MA: MIT Press, 1963.

[18] M. Fossorier, M. Mihaljevic, and H. Imai, "Reduced complexity iterative decoding of low-density parity-check codes based on belief propagation," IEEE Trans. Commun., vol. 47, pp. 673-680, May 1999.

[19] D. J. MacKay and R. M. Neal, "Near Shannon limit performance of low-density parity-check codes," Electron. Lett., vol. 33, no. 6, pp. 457-458, Mar. 1997.

[20] X.-Y. Hu, E. Eleftheriou, and D. M. Arnold, "Regular and irregular progressive edge-growth Tanner graphs," IEEE Trans. Inf. Theory, vol. 51, pp. 386-398, Jan. 2005.

[21] Z. Liu and D. A. Pados, "LDPC codes from generalized polygons," IEEE Trans. Inf. Theory, vol. 51, pp. 3890-3898, Nov. 2005.

[22] J. Tits, "Sur la trialité et certains groupes qui s'en déduisent," Inst. Hautes Études Sci. Publ. Math., vol. 2, pp. 14-60, 1959.

[23] H. van Maldeghem, Generalized Polygons. Basel, Switzerland: Birkhäuser Verlag, 1998.
