
CANADIAN APPLIED

MATHEMATICS QUARTERLY

Volume 16, Number 3, Fall 2008

DECODING LOW-DIMENSIONAL LINEAR

CODES BY LINEAR PROGRAMMING

In memoriam Isabelle Déchène

RYUICHI ASHINO, TRUONG NGUYEN-BA AND RÉMI VAILLANCOURT

ABSTRACT. Decoding linear codes by ℓ1 linear programming consists in recovering an input vector x ∈ R^n from corrupted oversampled measurements y = Ax + z, where A ∈ R^{m×n} is a full-rank matrix with m > n and z ∈ R^m is a sparse vector. Appropriate random matrices A are empirically constructed by stacking the left and negative right singular matrices Ui and −Vi of random matrices Mi ∈ R^{n×n} uniformly distributed on [0, 1], so that the vector x can be recovered numerically to an error smaller than 10^−5, provided z is sufficiently sparse. High average breakdown points are obtained numerically for m = 2n, 4n, 6n, 8n, 16n. Other, less effective matrices A are constructed whose breakdown-point means are lower than the above but in good agreement with Donoho's theoretical results based on neighborly polytopes. A corrupted image is decoded and an application to cryptography is mentioned.

1 Introduction A linear code is a block code used in error correction and detection schemes. Linear codes allow for efficient encoding and decoding algorithms used in transmitting symbols on a communications channel so that, if errors occur in the communication, some errors can be detected by the recipient of a message block. The "codes" in a linear code are blocks of symbols which are encoded using more symbols than

This work was supported in part by JSPS.KAKENHI (C)19540180 of Japan, the Natural Sciences and Engineering Research Council of Canada and the Centre de recherches mathématiques of the Université de Montréal.

AMS subject classification: Primary: 94B05; Secondary: 90C25, 94B40, 11T71, 14G50.

Keywords: Linear code, sparsely corrupted ciphertext, random code words, convex linear programming.

Copyright © Applied Mathematics Institute, University of Alberta.


the original value to be sent. A linear code of length m transmits blocks containing m symbols.

The linear code problem of recovering an input vector x ∈ R^n from corrupted oversampled measurements is formulated as a corrupted overdetermined linear system,

(1) y = Ax + z,

where the matrix A ∈ R^{m×n} with m > n has full rank, y, z ∈ R^m, and the noise z is a sparse vector.

Following [4], the linear code problem (1) can be reduced to the so-called compressed sensing problem of recovering a sparse signal z ∈ R^m from an underdetermined system

(2) w = Bz,

where B ∈ R^{d×m} has full rank, d < m, and w ∈ R^d.

The ℓ1 minimization solution of these systems has been extensively studied [4, 5, 6, 9, 10], culminating in important theoretical results by Donoho and collaborators [12, 13, 14, 15, 17, 23].

A breakdown point for systems (1) or (2) is the maximum number of nonzero elements in z beyond which recovery by ℓ1 minimization breaks down.

The location of the breakdown points of system (1) has been investigated numerically by Candes and Tao in [4] with m × n matrices made of independent and identically distributed (i.i.d.) Gaussian entries. It was found that, for m = 2n, exact recovery occurred as long as about 17% or less of the entries of y were corrupted, for m = 512 and m = 1024. Exact recovery occurred if less than 34% of the entries were corrupted for m = 4n and n = 128.

Donoho [11, Cor. 1.3] derived from the theory of centrally-symmetric polytopes that, with m − 2 ≥ d > 2, if the ℓ1 minimization correctly finds all sparse solutions of (2) having not more than k nonzeros, then k ≤ ⌊(d + 1)/3⌋, where ⌊t⌋ denotes the floor or integral part of t.

Donoho [11, Cor. 1.5] proved the following overwhelming probabilistic result for underdetermined systems. Let m and d tend to ∞ with d = ⌊δm⌋, where δ < 1 and k/d < 1. Let w = Bz0, where z0 contains nonzeros at k sites selected uniformly at random with signs chosen uniformly at random, and where B is a uniform random orthoprojector from R^m to R^d. With overwhelming probability for large m, the minimum ℓ1 norm solution to w = Bz is also the sparsest solution and is precisely z0.


This paper pursues the empirical construction of linear codes with higher breakdown points and compares the results with Donoho's theoretical results. Since many results in the literature are stated for compressed sensing, for the benefit of the reader, the ℓ1 optimization solution of underdetermined systems and the transformation of a linear code into an underdetermined system will be mentioned briefly.

The plan of the paper is as follows. The solution of the linear code problem (1) is formulated in Section 2 in terms of a standard ℓ1 programming algorithm. The underdetermined problem (2) is formulated in Section 3 as an ℓ1 linear program, and system (1) is reduced to system (2) in Section 4. Numerical results are in Section 5.

2 Decoding linear codes by linear programming The following notation is needed to describe the construction of random matrices.

Notation. A random matrix M ∈ R^{m×n} with entries uniformly distributed in [a, b] will be denoted as M ∼ U(m, n; a, b). Similarly, a random matrix M ∈ R^{m×n} with entries normally distributed with mean µ and variance σ² will be denoted as M ∼ N(m, n; µ, σ²). The same notation applies to vectors x ∈ R^{m×1}.

Efficient random linear code matrices A ∈ R^{m×n} for m = 2pn, p = 1, 2, . . . , to decode system (1) can be formed by stacking p blocks of size 2n × n in the form

(3) A = [A1; . . . ; Ap], where Ai = [Ui; −Vi] ∈ R^{2n×n},

and the semicolons denote vertical stacking.

The random orthogonal matrices Ui and Vi come from the singular value decomposition (SVD) [18] of random matrices Mi ∼ U(n, n; 0, 1):

(4) Mi = Ui Σi Vi^T.

For comparison purposes and for testing programs, random matrices Mi ∼ N(n, n; 0, 1) will also be used. A similar construction by QR decomposition is in [1].
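A quick check on the construction (3)–(4): each block satisfies Ai^T Ai = Ui^T Ui + Vi^T Vi = 2I, so the stacked matrix obeys A^T A = 2pI and in particular has full rank n. The sketch below verifies this numerically in Python/NumPy (our tooling; the paper's own code, listed in Appendix A, is Mathematica):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 8, 3  # tiny dimensions, chosen only for illustration

# Stack p blocks A_i = [U_i; -V_i] from the SVD M_i = U_i S_i V_i^T of (4)
blocks = []
for _ in range(p):
    M = rng.uniform(0.0, 1.0, (n, n))     # M_i ~ U(n, n; 0, 1)
    U, s, Vt = np.linalg.svd(M)
    blocks.append(np.vstack([U, -Vt.T]))  # A_i = [U_i; -V_i]
A = np.vstack(blocks)                     # A in R^{2pn x n}, as in (3)

# A^T A = sum_i (U_i^T U_i + V_i^T V_i) = 2p I, hence rank(A) = n
assert A.shape == (2 * p * n, n)
assert np.allclose(A.T @ A, 2 * p * np.eye(n))
```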

Given a random vector x ∼ U(n, 1; 0, 1), we have the exact value

v = Ax.

Now let z ∼ U(m, 1; 0, 1) be a sparse vector and consider the corrupted measurement

(5) y = Ax + z.


We want to recover v = y − z by solving the ℓ1 convex minimization problem [4]

(6) min_{g ∈ R^n} ‖y − Ag‖_{ℓ1}.

An equivalent linear program in standard form for problem (6) is

(7) min 1^T t, subject to −t ≤ y − Ag ≤ t,

where the optimization variables are t ∈ R^m and g ∈ R^n. We remark that 1^T t = ‖t‖_{ℓ1}, since 1 is the m-vector with all components equal to 1 and t ≥ 0. Here the generalized vector inequality x ≤ y means that xi ≤ yi for all i.
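Problem (7) can be handed to any LP solver with the stacked variable (t; g). The paper's implementation is the Mathematica program of Appendix A; the following is a sketch of the same linear program using SciPy's linprog (our substitution, not the authors' code):

```python
import numpy as np
from scipy.optimize import linprog

def l1_decode(A, y):
    """Solve min_g ||y - A g||_1 via LP (7): min 1^T t s.t. -t <= y - A g <= t."""
    m, n = A.shape
    c = np.concatenate([np.ones(m), np.zeros(n)])  # objective 1^T t
    I = np.eye(m)
    #  y - A g <= t  <=>  [-I, -A] (t; g) <= -y
    # -t <= y - A g  <=>  [-I,  A] (t; g) <=  y
    A_ub = np.block([[-I, -A], [-I, A]])
    b_ub = np.concatenate([-y, y])
    bounds = [(0, None)] * m + [(None, None)] * n  # t >= 0, g free
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[m:]

# demo: m = 4n code built as in (3), with 2 of the 32 entries corrupted
rng = np.random.default_rng(1)
n = 8
blocks = []
for _ in range(2):
    U, s, Vt = np.linalg.svd(rng.uniform(0, 1, (n, n)))
    blocks.append(np.vstack([U, -Vt.T]))
A = np.vstack(blocks)                 # A in R^{32 x 8}
x = rng.uniform(0, 1, n)
z = np.zeros(4 * n)
z[[5, 17]] = rng.uniform(0, 1, 2)     # sparse corruption, well below breakdown
x_hat = l1_decode(A, A @ x + z)
assert np.max(np.abs(x_hat - x)) < 1e-5  # z sparse enough for exact recovery
```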

3 Sparse solutions of underdetermined systems The basis pursuit of Chen, Donoho and Saunders [8] can be used to find the sparse solution of an underdetermined system by means of ℓ1 optimization.

Let B ∈ R^{d×m}, where d < m. We want to solve the underdetermined system

(8) Bz = w

by means of the ℓ1 convex minimization

(9) min ‖z‖_{ℓ1}, subject to Bz = w,

under the condition that z is a sparse m-vector. We recall that a linear program in standard form for the variable y ∈ R^r is

min c^T y, subject to Cy = a, y ≥ 0.

System (8) can be reformulated into a standard linear program [7, 8] by the following substitutions:

(10) r ⇔ 2m, C ⇔ (B, −B), a ⇔ w, c ⇔ (1; 1), y ⇔ (u; v), z ⇔ u − v.
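Concretely, the substitutions (10) split z = u − v with u, v ≥ 0, so that ‖z‖_{ℓ1} becomes the linear objective 1^T(u; v) and Bz = w becomes (B, −B)(u; v) = w. A sketch with SciPy's linprog (our choice of solver; the paper does not prescribe one):

```python
import numpy as np
from scipy.optimize import linprog

def basis_pursuit(B, w):
    """Solve min ||z||_1 subject to B z = w via the standard-form LP of (10)."""
    d, m = B.shape
    c = np.ones(2 * m)                 # c = (1; 1)
    C = np.hstack([B, -B])             # C = (B, -B)
    res = linprog(c, A_eq=C, b_eq=w, bounds=[(0, None)] * (2 * m),
                  method="highs")
    u, v = res.x[:m], res.x[m:]
    return u - v                       # z = u - v

# demo: recover a 3-sparse z0 from d = 24 orthoprojector measurements
rng = np.random.default_rng(2)
m, d, k = 40, 24, 3
Q, _ = np.linalg.qr(rng.standard_normal((m, d)))
B = Q.T                                # random orthoprojector R^m -> R^d
z0 = np.zeros(m)
z0[rng.choice(m, k, replace=False)] = rng.standard_normal(k)
z_hat = basis_pursuit(B, B @ z0)
assert np.max(np.abs(z_hat - z0)) < 1e-5
```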


4 Solving linear codes as underdetermined problems In [4], the solution of the sparsely corrupted signal (1) is reduced to the solution of the underdetermined system (2) for pedagogical reasons.

We describe the reduction process for a matrix A ∈ R^{m×n}, m = 2n, obtained by the SVD of a matrix M ∈ R^{n×n}, M = UΣV^T. Form the full rank matrix

(11) A = [U; −V] ∈ R^{m×n}, m = 2n.

Then construct the n × m matrix B,

(12) B = [U^T, V^T],

such that BA = 0, and left-multiply system (5) by B to get

By = Bz.

We then solve this underdetermined system for z by the ℓ1 convex minimization method (10), with w = By, to get the approximate value z, and take the uncorrupted value v as

v = y − z.

Since A has full rank, we can find x from v = Ax.

Here is an example of a matrix B ∈ R^{d×m} where m = 4n and d = m − n = 3n. Given the SVD of two random matrices Mi = Ui Σi Vi^T, where Mi ∼ U(n, n; 0, 1) as in (4) and (3), the matrices A and B are taken to be

(13) A = [U1; −V1; U2; −V2],
     B = [U1^T, V1^T, U2^T, V2^T;
          U1^T, −V1^T, −U2^T, V2^T;
          −U1^T, −V1^T, U2^T, V2^T],

where semicolons separate block rows. It is seen that both A and B have full rank and BA = 0.

Compressed sensing as underdetermined systems is treated by QR decomposition in [2].
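The stated properties of (13) follow from Ui^T Ui = Vi^T Vi = I: each block row of B meets the stack (U1; −V1; U2; −V2) with signs that cancel pairwise, so BA = 0, and B B^T = 4I shows B has full rank 3n. A small numerical check (Python/NumPy, our tooling):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 6

def orth_pair(M):
    """Return the orthogonal factors U, V of the SVD M = U S V^T."""
    U, s, Vt = np.linalg.svd(M)
    return U, Vt.T

U1, V1 = orth_pair(rng.uniform(0, 1, (n, n)))  # M_i ~ U(n, n; 0, 1)
U2, V2 = orth_pair(rng.uniform(0, 1, (n, n)))

A = np.vstack([U1, -V1, U2, -V2])              # m x n, m = 4n
B = np.vstack([
    np.hstack([ U1.T,  V1.T,  U2.T,  V2.T]),   # block row 1
    np.hstack([ U1.T, -V1.T, -U2.T,  V2.T]),   # block row 2
    np.hstack([-U1.T, -V1.T,  U2.T,  V2.T]),   # block row 3
])                                             # d x m, d = 3n

assert np.allclose(B @ A, 0)                   # BA = 0
assert np.allclose(B @ B.T, 4 * np.eye(3 * n)) # B has full rank 3n
assert np.linalg.matrix_rank(A) == n           # A has full rank n
```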

5 Numerical results Our numerical results show that the choice of the matrices A influences the level of breakdown points for decoding linear codes.

Let m = 2pn, where p = 1, 2, . . . .


For each j, j = 1, 2, . . . , 100, a matrix Aj ∈ R^{m×n} is generated by the SVD of p random matrices Mi ∼ U(n, n; 0, 1), i = 1, 2, . . . , p. Then the linear code problem (1) is solved by ℓ1 linear programming with random vectors zq ∼ U(m, 1; 0, 1) with q = 1, 2, . . . randomly indexed nonzero elements, until the error in the solution is larger than 10^−5 for q = q_bp. The number qmin = min{q_bp} is taken to be the breakdown point of the 100 runs.

As in the previous case, 100 runs are done with random matrices Mi ∼ N(n, n; 0, 1) and Aj ∼ N(m, n; 0, 1).
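The search just described — increase the number q of corrupted entries until ℓ1 decoding first fails — can be sketched as follows, with SciPy's linprog standing in for the paper's Mathematica LinearProgramming call and with tiny dimensions so the loop runs quickly:

```python
import numpy as np
from scipy.optimize import linprog

def l1_decode(A, y):
    """LP form (7): min 1^T t subject to -t <= y - A g <= t."""
    m, n = A.shape
    c = np.concatenate([np.ones(m), np.zeros(n)])
    I = np.eye(m)
    A_ub = np.block([[-I, -A], [-I, A]])
    b_ub = np.concatenate([-y, y])
    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
                  bounds=[(0, None)] * m + [(None, None)] * n,
                  method="highs")
    return res.x[m:]

def breakdown_point(A, rng, tol=1e-5):
    """Smallest q for which decoding a q-sparse corruption first fails."""
    m, n = A.shape
    for q in range(1, m + 1):
        x = rng.uniform(0, 1, n)
        z = np.zeros(m)
        z[rng.choice(m, q, replace=False)] = rng.uniform(0, 1, q)
        if np.sum(np.abs(l1_decode(A, A @ x + z) - x)) > tol:
            return q
    return m

rng = np.random.default_rng(4)
n = 8
U, s, Vt = np.linalg.svd(rng.uniform(0, 1, (n, n)))
A = np.vstack([U, -Vt.T])       # single block of (3): m = 2n = 16
q_bp = breakdown_point(A, rng)  # some value in 1..16 for this tiny code
assert 1 <= q_bp <= 2 * n
```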

Table 1 lists the smallest breakdown point qmin, the fraction qmin/m of nonzero elements in z at the breakdown point, and the mean µ for 100 runs of a given problem, for matrices Aj ∈ R^{m×n} generated by the SVD of random matrices Mi ∼ U(n, n; 0, 1) and Mi ∼ N(n, n; 0, 1), and by random matrices Aj ∼ N(m, n; 0, 1). It is seen that matrices A constructed from uniformly distributed matrices M produce linear codes with higher breakdown points than those produced by normally distributed matrices M or by normally distributed matrices A.

            Mi ∼ U(n,n; 0,1)     Mi ∼ N(n,n; 0,1)     Aj ∼ N(m,n; 0,1)
   n    m   qmin  qmin/m    µ    qmin  qmin/m    µ    qmin  qmin/m    µ
 128  512    262   0.51   279     178   0.34   193     171   0.33   193
 256 1024    531   0.51   560     358   0.34   388     365   0.35   389
 128  768    500   0.65   516     338   0.44   368     340   0.44   366
 256 1536   1002   0.65  1034     728   0.47   745     720   0.46   745
 128 1024    726   0.70   761     512   0.50   556     528   0.51   555
 256 2048   1463   0.71  1525    1101   0.53  1122    1065   0.52  1122
 128 2048   1741   0.85  1762    1341   0.65  1376    1339   0.65  1375
 256 4096   3511   0.85  3534    2761   0.67  2776    2784   0.67  2796

TABLE 1: Results for 100 runs of the linear decoder with Aj ∈ R^{m×n} in (3), m = 2pn, p = 2, 3, 4, 8, Mi ∼ U(n, n; 0, 1), Mi ∼ N(n, n; 0, 1) and Aj ∼ N(m, n; 0, 1).

Since the results of the runs with Mi ∼ N(n, n; 0, 1) and Aj ∼ N(m, n; 0, 1) in Table 1 agree with Donoho's estimate ρ±W(δ)d in Table 2, one may consider that the Mathematica program generating these results is reliable in the sense of Donoho et al. [16]. Moreover, since the program which generates random matrices Aj from Mi ∼ U(n, n; 0, 1) differs from the one with Mi ∼ N(n, n; 0, 1) only by the Mathematica command UniformDistribution[0,1] instead of NormalDistribution[0,1], we may conclude that the former one, which produces higher breakdown points, is also correct.

The Mathematica 7 program used to produce columns 3 to 8 in Table 1 is listed in Appendix A. This and other programs are available electronically from the third author.

In Table 2, the threshold ρ±W(δ) lying on the weak phase transition for cross-polytopes [15, Fig. 2] or [17, Fig. 1.2] is compared with the mean of the breakdown points obtained by normally distributed matrices Aj ∼ N(m, n; 0, 1) and by the singular value decomposition of matrices Mi ∼ U(n, n; 0, 1).

    d   δ = (m − n)/m   ρ±W(δ)   ρ±W(δ)d   Aj ∼ N(m,n; 0,1)   Mi ∼ U(n,n; 0,1)
  768   3/4 = 0.75      0.5337      409          389                 560
 1280   5/6 = 0.833     0.6034      772          745                1034
 1792   7/8 = 0.875     0.6472     1159         1122                1525
 3840   15/16 = 0.9375  0.7360     2826         2796                3534

TABLE 2: Threshold ρ±W(δ), mean breakdown points for Aj ∼ N(m, n; 0, 1) and Mi ∼ U(n, n; 0, 1) producing Aj ∈ R^{m×n} with n = 256, m = 2pn, p = 2, 3, 4, 8, and d = m − n.

It is seen that the results obtained by random matrices Aj ∼ N(m, n; 0, 1) are very close to Donoho's theoretical results ρ±W(δ)d. On the other hand, the results in the last column are superior.

As a further comparison on the choice of linear codes, we consider linear decoding with three types of matrices A ∈ R^{512×128} produced by the SVD Mi = Ui Σi Vi^T of random matrices Mi ∼ N(128, 128; 0, 1), for i = 1, 2, namely,

(14) Aa = [U1; V1; U2; V2], Ab = [U1^T; −V1^T; U2^T; −V2^T], Ac = [U1; −V1; U2; −V2],

respectively, where semicolons denote vertical stacking. It is seen from Table 3 that the mean µ, the median q̃, the mode and the standard deviation σ increase from (a) to (c).

 Type  qmin  qmin/512    µ     q̃   Mode      σ
 (a)    138    0.26     147   148   148   3.8523
 (b)    171    0.33     192   180   195   7.6486
 (c)    264    0.51     279   278   277   6.4286

TABLE 3: Results for 100 runs with matrices A ∈ R^{512×128} obtained by SVD (a) with positive signs, (b) with transposes and alternating signs, and (c) with alternating signs.

The histogram in Figure 1 shows the distribution of 100 breakdown points obtained by matrices A ∈ R^{4096×256} produced by the SVD of matrices M ∼ U(256, 256; 0, 1) as in (3). The lowest and highest breakdown points are qmin = 3511 and qmax = 3564, respectively. The mean is µ = 3534.

FIGURE 1: Histogram of breakdown points for 100 runs with matrices A ∈ R^{4096×256} produced by the SVD of random matrices M ∼ U(256, 256; 0, 1).

The histogram in Figure 2 shows the distribution of 100 breakdown points obtained by matrices A ∈ R^{4096×256} produced by the SVD of matrices M ∼ N(256, 256; 0, 1) as in (3). The lowest and highest breakdown points are qmin = 2761 and qmax = 2823, respectively. The mean is µ = 2776.


FIGURE 2: Histogram of breakdown points for 100 runs with matrices A ∈ R^{4096×256} produced by the SVD of random matrices M ∼ N(256, 256; 0, 1).

The histogram in Figure 3 shows the distribution of 100 breakdown points obtained by random matrices A ∼ N(4096, 256; 0, 1). The lowest and highest breakdown points are qmin = 2785 and qmax = 2851, respectively. The mean is µ = 2796.

FIGURE 3: Histogram of breakdown points for 100 runs with random matrices A ∼ N(4096, 256; 0, 1).


FIGURE 4: Decoded Barbara image of dimension 256 × 256.

The first histogram differs substantially from the second and thirdones on the basis of its higher mean.

As another decoding application, a part of the Barbara image F of dimensions 256 × 256 with integer elements fij ∈ [0, 255] was oversampled by a linear code A ∈ R^{1024×256} as defined in (3) with matrices M ∼ U(256, 256; 0, 1). The signal was corrupted column-wise by sparse vectors z ∈ R^{1024} with 512 nonzero entries, that is, to 50%. The decoding was perfect within pointwise errors smaller than 10^−3, both with F taken as is and with F divided element-wise by 255 to have entries in [0, 1]. The decoded Barbara image is shown in Figure 4.

Several software packages for decoding linear codes and compressed sensing, mainly in Matlab, can be freely downloaded, for instance ℓ1-magic [3] and SparseLab at http://sparselab.stanford.edu/.

6 Possible application of linear codes to cryptography The linear code problem may find use in cryptography. A plaintext x can be encoded by means of a random matrix A ∈ R^{m×n}, m > n, and a sparse noise z can be added to corrupt the ciphertext y. The seed of the


random generator that produces the linear code A can be encrypted by symmetric-key or public-key cryptosystems and sent together with the corrupted ciphertext. The length m of the linear code can be adjusted to the sparseness of the added noise so that the number of nonzero elements of z lies below the breakdown point of the linear code. The recipient can use the seed to construct the random matrix A and recover the plaintext by means of ℓ1 minimization, even if the artificially corrupted ciphertext was corrupted once more by an unknown, highly sparse noise. The strength of the scheme rests on the fact that finding the sparsest solution to an equivalent general underdetermined system of equations is NP-hard [22].
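As a toy numerical illustration of this scheme (a sketch only — not a secure cryptosystem, and using SciPy rather than anything from the paper): the sender derives the code A from a shared seed and adds sparse noise; the recipient regenerates A from the same seed and decodes by ℓ1 minimization.

```python
import numpy as np
from scipy.optimize import linprog

def code_matrix(seed, n, p=2):
    """Regenerate the linear code A in R^{2pn x n} of (3) from a shared seed."""
    rng = np.random.default_rng(seed)
    blocks = []
    for _ in range(p):
        U, s, Vt = np.linalg.svd(rng.uniform(0, 1, (n, n)))
        blocks.append(np.vstack([U, -Vt.T]))
    return np.vstack(blocks)

def l1_decode(A, y):
    """Solve min_g ||y - A g||_1 as the LP (7)."""
    m, n = A.shape
    c = np.concatenate([np.ones(m), np.zeros(n)])
    I = np.eye(m)
    res = linprog(c, A_ub=np.block([[-I, -A], [-I, A]]),
                  b_ub=np.concatenate([-y, y]),
                  bounds=[(0, None)] * m + [(None, None)] * n,
                  method="highs")
    return res.x[m:]

# sender: encode plaintext x with the seeded code and corrupt the ciphertext
seed, n = 42, 8
A = code_matrix(seed, n)              # m = 4n = 32
rng = np.random.default_rng(7)
x = rng.uniform(0, 1, n)              # plaintext
z = np.zeros(4 * n)
z[[1, 9, 30]] = rng.uniform(0, 1, 3)  # sparse noise, below the breakdown point
y = A @ x + z                         # transmitted corrupted ciphertext

# recipient: rebuild A from the seed alone and decode by l1 minimization
x_hat = l1_decode(code_matrix(seed, n), y)
assert np.max(np.abs(x_hat - x)) < 1e-5
```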

7 Conclusion Linear codes with high breakdown points are empirically constructed by means of the singular value decomposition of matrices M uniformly distributed on [0, 1]. A favorable comparison is made with Donoho's theoretical results and with codes made from normally distributed matrices N(0, 1). The construction of linear codes by means of QR decomposition, equivalent to the construction by SVD, is in [1]. Research is ongoing on the construction of polytopes which would corroborate the higher overwhelmingly probable breakdown-point means of the new linear codes presented in this paper. The geometric approach of Mendelson, Pajor and Tomczak-Jaegermann [19, 20, 21] is also being investigated.

Acknowledgement Thanks are due to the reviewer for an extensive and friendly report which improved the paper considerably and laid the basis for future work. Thanks are also due to David L. Donoho and Jared Tanner for helping us with SparseLab and sharing their recent programs.

Appendix A. The Mathematica program This Mathematica 7 program finds the breakdown points in decoding linear codes generated by random matrices M ∈ R^{n×n} uniformly distributed on [0, 1] or normally distributed N(0, 1) and produces the histogram of the number of breakdown points q in the interval [qmin, qmax].

ClearAll;
SetDirectory["/Users/myname/myfile"];
<< Combinatorica` (*with Mathematica 6 and more recent*)
stmp = OpenAppend["mydoc", FormatType -> OutputForm];
n = 64;     (* dimension of square matrix M *)
r = 4;      (* dimension r*n by n of matrix A *)
nnf = 10;   (* number of iterations *)
qmin = 120; (* starting value of q *)
(*Write[stmp," \% mydoc2","n = ", n, ", r = ", r, ",
  nnf = ", nnf, ";"];*) (*to keep track of parameters*)
For[nn = 1, nn < nnf + 1, nn++,
  usvst = Array[xx, {r, n, n}];
  rr = 1;
  While[rr < 1 + r/2, (*choose UD or ND*)
    aa = RandomReal[UniformDistribution[{0, 1}], {n, n}];
    (*aa=RandomReal[NormalDistribution[0,1],{n,n}];*)
    {us, ss, vs} = SingularValueDecomposition[aa];
    usvst[[2*rr - 1]] = us;
    usvst[[2*rr]] = -vs;
    rr++];
  usvs = Flatten[usvst, 1];
  normerr2 = 0;
  q = qmin;
  While[normerr2 < 10^(-5),
    x = RandomReal[{0, 1}, {n}];
    yexact = usvs.x;
    w1 = Flatten[Join[RandomReal[{0, 1}, {q}],
        Table[0, {r*n - q}]]];
    w2 = RandomPermutation[w1];
    y = yexact + w2;
    c = Flatten[{Table[1, {i, r*n}], Table[0, {i, n}]}];
    idm = IdentityMatrix[r*n];
    m1 = Table[{idm[[i]], -usvs[[i]]}, {i, r*n}];
    m2 = Table[{idm[[i]], usvs[[i]]}, {i, r*n}];
    mm = Partition[Flatten[{m1, m2}], (r + 1)*n];
    bb = Join[Table[{-y[[i]], 1}, {i, r*n}],
        Table[{y[[i]], 1}, {i, r*n}]];
    lp = LinearProgramming[c, mm, bb,
        Method -> "InteriorPoint"];
    tlp = Table[lp[[i]], {i, r*n + 1, (r + 1)*n}];
    erreur1 = usvs.tlp - yexact;
    erreur2 = tlp - x;
    normerr2 = Chop[Sum[Abs[erreur2[[i]]], {i, n}], 10^(-5)];
    q = q + 1;];
  Write[stmp, q];]
Close[stmp];
results = ReadList["mydoc", Number];
minn = Min[results];
maxx = Max[results];
meann = Mean[results] // N;
mediann = Median[results] // N;
std = StandardDeviation[results] // N;
{minn, "= min", maxx, "= max", meann, "= mean",
 mediann, "= median", std, "= standard deviation"}
Histogram[results]
Dimensions[results]

REFERENCES

1. R. Ashino, T. Nguyen-Ba and R. Vaillancourt, Low-dimensional linear codes with high breakdown points by QR decomposition, Int. J. Pure Appl. Math., in press.

2. R. Ashino, T. Nguyen-Ba and R. Vaillancourt, Low-dimensional compressed sensing with high breakdown-point mean by QR decomposition, submitted.

3. E. Candes and J. Romberg, ℓ1-magic: Recovery of sparse signals via convex programming (2005). http://www.acm.caltech.edu/l1magic/

4. E. Candes and T. Tao, Decoding by linear programming, IEEE Trans. Inform. Theory 51(12) (2005), 4203–4215.

5. E. Candes, J. Romberg and T. Tao, Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information, IEEE Trans. Inform. Theory 52(2) (2006), 489–509.

6. E. Candes and T. Tao, Near-optimal signal recovery from random projections: Universal encoding strategies?, IEEE Trans. Inform. Theory 52(12) (2006), 5406–5425.

7. S. S. Chen, D. L. Donoho and M. A. Saunders, Atomic decomposition by basis pursuit, SIAM J. Sci. Comput. 20(1) (1998), 33–61.

8. S. S. Chen, D. L. Donoho and M. A. Saunders, Atomic decomposition by basis pursuit, SIAM Review 43(1) (2001), 129–159.

9. D. L. Donoho and P. B. Stark, Uncertainty principles and signal recovery, SIAM J. Appl. Math. 49(3) (1989), 906–931.

10. D. L. Donoho and M. Elad, Optimally sparse representation in general (nonorthogonal) dictionaries via ℓ1 minimization, Proc. Nat. Acad. Sci. U.S.A. 100(5) (2003), 2845–2862.

11. D. L. Donoho, Neighborly polytopes and sparse solution of underdetermined linear equations, preprint, 2004.

12. D. L. Donoho and J. Tanner, Sparse nonnegative solutions of underdetermined linear equations by linear programming, Proc. Nat. Acad. Sci. U.S.A. 102(27) (2005), 9446–9451.

13. D. L. Donoho and J. Tanner, Neighborliness of randomly-projected simplices in high dimensions, Proc. Nat. Acad. Sci. U.S.A. 102(27) (2005), 9452–9457.

14. D. L. Donoho, For most large underdetermined systems of linear equations the minimal ℓ1-norm solution is the sparsest solution, Comm. Pure Appl. Math. 59(7) (2006), 907–934.

15. D. L. Donoho, High-dimensional centrally-symmetric polytopes with neighborliness proportional to dimension, Discrete Comput. Geom. 35(4) (2006), 617–652.

16. D. L. Donoho, A. Maleki, I. Rahman, M. Shahram and V. Stodden, 15 years of reproducible research in computational harmonic analysis, preprint, 2008.

17. D. L. Donoho and J. Tanner, Counting faces of randomly-projected polytopes when the projection radically lowers dimension, J. Amer. Math. Soc. 22(1) (2009), 1–53.

18. G. H. Golub and C. F. Van Loan, Matrix Computations, 3rd ed., The Johns Hopkins University Press, Baltimore and London, 1996.

19. S. Mendelson, A. Pajor and N. Tomczak-Jaegermann, Reconstruction and subgaussian processes, C. R. Acad. Sci. Paris, Ser. I Math. 340 (2005), 885–888.

20. S. Mendelson, A. Pajor and N. Tomczak-Jaegermann, Reconstruction and subgaussian operators in asymptotic geometric analysis, Geom. Funct. Anal. 17(4) (2007), 1248–1282.

21. S. Mendelson, A. Pajor and N. Tomczak-Jaegermann, Uniform uncertainty principle for Bernoulli and subgaussian ensembles, Constr. Approx. 28 (2008), 277–289.

22. B. K. Natarajan, Sparse approximate solutions to linear systems, SIAM J. Comput. 24(1) (1995), 227–234.

23. Y. Tsaig and D. L. Donoho, Breakdown of local equivalence between sparse solutions and ℓ1 minimization, Signal Processing 86(3) (2006), 533–548.

Division of Mathematical Sciences, Osaka Kyoiku University,

Kashiwara, Osaka 582-8582, Japan

E-mail address: [email protected]

Department of Mathematics and Statistics, University of Ottawa,

585 King Edward Avenue, Ottawa, ON, Canada K1N 6N5

E-mail address: [email protected]

Department of Mathematics and Statistics, University of Ottawa,

585 King Edward Avenue, Ottawa, ON, Canada K1N 6N5

E-mail address: [email protected]