# Transcript of rv285/itw13_sparc_talk.pdf (Ramji Venkataramanan, Sekhar Tatikonda)


Sparse Regression Codes: Recent Results & Future Directions

Ramji Venkataramanan Sekhar Tatikonda

University of Cambridge Yale University

(Acknowledgements: A. Barron, A. Joseph, S. Cho, T. Sarkar)

1 / 21

GOAL : Efficient, rate-optimal codes for

Point-to-point communication

Lossy Compression

Multi-terminal models:

Wyner-Ziv, Gelfand-Pinsker, Multiple-access, broadcast, multiple descriptions, . . .

2 / 21

Achieving the Shannon limits

Textbook Constructions

Random codes for point-to-point source & channel coding

Random Binning, Superposition

High Coding and Storage Complexity

- exponential in block length n

WANT: Compact representation + fast encoding/decoding

‘Fast’ ⇒ polynomial in n: LDPC/LDGM codes, Polar codes

- For finite input-alphabet channels & sources

3 / 21


SParse Regression Codes (SPARC)

Introduced by Barron & Joseph [ISIT ’10, arXiv ’12]

- Provably achieve AWGN capacity with efficient decoding

Also achieve Gaussian R-D function [RV-Sarkar-Tatikonda ’13]

Codebook structure facilitates binning, superposition [RV-Tatikonda, Allerton ’12]

Overview of results and open problems

4 / 21

Codebook Construction

Block length n, rate R: construct e^{nR} codewords

Gaussian source/channel coding

5 / 21

Codebook Construction

[Figure: codeword Aβ, where β^T = (0, . . . , c1, . . . , 0, c2, 0, . . . , cL, . . . , 0) and A has n rows]

A: design matrix or ‘dictionary’ with independent N(0, 1) entries

Codewords Aβ: sparse linear combinations of the columns of A

5 / 21

SPARC Codebook

[Figure: dictionary A (n rows, M·L columns, split into L sections of M columns each); β^T has a single nonzero entry c_i in section i]

Choosing M and L:

For a rate-R codebook, need M^L = e^{nR}

Choose M polynomial in n ⇒ L ∼ n/log n

Storage complexity ↔ size of A (n × M·L): polynomial in n

6 / 21
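As a concrete, toy-sized illustration of this construction, the sketch below builds a Gaussian dictionary and encodes a message as one column choice per section. The name `sparc_encode`, the flat power allocation, and all sizes are our own illustrative choices, not from the slides.

```python
import numpy as np

def sparc_encode(A, message, L, M, coeffs):
    """Map a message (one column index per section) to beta and the codeword A @ beta."""
    n, cols = A.shape
    assert cols == L * M
    beta = np.zeros(cols)
    for i, idx in enumerate(message):
        beta[i * M + idx] = coeffs[i]        # exactly one nonzero entry per section
    return beta, A @ beta

# Toy sizes (the slides take M polynomial in n and L ~ n/log n)
n, L, M = 64, 8, 16
rng = np.random.default_rng(0)
A = rng.standard_normal((n, L * M))          # dictionary with i.i.d. N(0, 1) entries
coeffs = np.ones(L) / np.sqrt(L)             # flat allocation with sum of c_i^2 = 1
message = rng.integers(0, M, size=L)         # one column index per section
beta, codeword = sparc_encode(A, message, L, M, coeffs)
```

Storing A costs O(n · M · L) reals, which is the polynomial storage claimed above, even though the codebook has M^L codewords.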


Point-to-point AWGN Channel

[Figure: M → X → (+ Noise) → Z → M̂]

Z = X + Noise, with power constraint ‖X‖²/n ≤ P and Noise ∼ N(0, N)

GOAL: Achieve rates close to C = (1/2) log(1 + P/N)

7 / 21
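A one-liner makes this target concrete (a sketch; `awgn_capacity` is our name, and rates here are in nats per channel use):

```python
import numpy as np

def awgn_capacity(P, N):
    """Shannon capacity of the AWGN channel, C = (1/2) log(1 + P/N), in nats."""
    return 0.5 * np.log(1 + P / N)

# Example: signal power 15, noise variance 1
C = awgn_capacity(15.0, 1.0)   # (1/2) log 16 = 2 log 2 nats per channel use
```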

SPARC Construction

[Figure: dictionary A (n rows, M·L columns, split into L sections of M columns each); β^T has a single nonzero entry c_i in section i]

How to choose c1, c2, . . . for efficient decoding?

Think of the L sections as L users of a Gaussian MAC (L ∼ n/log n). If each user has rate C/L, then for successive decoding,

c_i² = κ · exp(−2Ci/L), i = 1, . . . , L

so c_i² ∼ Θ(1/L), with κ determined by ∑_i c_i² = P

8 / 21
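This exponentially decaying allocation is easy to compute directly; `exponential_allocation` is our illustrative name, with κ fixed numerically by the power constraint:

```python
import numpy as np

def exponential_allocation(P, C, L):
    """Coefficients with c_i^2 = kappa * exp(-2*C*i/L), kappa set so sum of c_i^2 = P."""
    i = np.arange(1, L + 1)
    c2 = np.exp(-2.0 * C * i / L)
    c2 *= P / c2.sum()                # kappa fixed by the power constraint
    return np.sqrt(c2)

P, C, L = 1.0, 1.5, 32
c = exponential_allocation(P, C, L)   # each c_i^2 is Theta(1/L)
```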


Efficient Decoder

[Figure: dictionary A (n rows, M·L columns, split into L sections of M columns each); β^T has a single nonzero entry c_i in section i]

Z = Aβ + Noise

Each section has rate R/L

Successive decoding is efficient, but performs poorly

– Section errors propagate!

9 / 21

Efficient Decoder

[Figure: dictionary A (n rows, M·L columns, split into L sections of M columns each); β^T has a single nonzero entry c_i in section i]

Z = Aβ + Noise

Adaptive Successive Decoder [Barron-Joseph]

Set R_0 = Z (the received vector). In step k = 1, 2, . . .:

1. Compute the inner product of each column with R_{k−1}
2. Pick the columns whose test statistic crosses threshold τ
3. Set R_k = R_{k−1} − Fit_k

Complexity: O(ML log M)

9 / 21
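The three steps can be sketched as code. The function name, the normalization of the test statistic by the residual norm, and the noiseless toy run are our assumptions for illustration; a real decoder would follow the threshold schedule analyzed by Barron and Joseph.

```python
import numpy as np

def adaptive_successive_decode(A, y, L, M, coeffs, tau, max_steps=None):
    """Hard-decision sketch of an adaptive successive decoder."""
    n, cols = A.shape
    beta_hat = np.zeros(cols)
    decoded = np.zeros(cols, dtype=bool)
    residual = y.copy()                                # R_0 = received vector
    for _ in range(max_steps or L):
        r_norm = np.linalg.norm(residual)
        if r_norm < 1e-12:                             # nothing left to explain
            break
        stats = A.T @ residual / r_norm                # test statistic per column
        new = (stats > tau) & ~decoded
        if not new.any():
            break
        for j in np.flatnonzero(new):                  # keep at most one column per section
            sec = j // M
            if not beta_hat[sec * M:(sec + 1) * M].any():
                beta_hat[j] = coeffs[sec]
        decoded |= new
        residual = y - A @ beta_hat                    # R_k = R_{k-1} - Fit_k, recomputed in full
    return beta_hat

# Noiseless toy run: all sections should be recovered
rng = np.random.default_rng(1)
n, L, M = 400, 4, 8
A = rng.standard_normal((n, L * M))
coeffs = np.full(L, 0.5)                               # sum of c_i^2 = 1
beta = np.zeros(L * M)
for i in range(L):
    beta[i * M + rng.integers(M)] = coeffs[i]
y = A @ beta
beta_hat = adaptive_successive_decode(A, y, L, M, coeffs, tau=4.5)
```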


Performance

How many sections are decoded correctly?

The test statistics at each step depend on previous steps

A variant of the adaptive successive decoder is analyzable

Theorem ([Barron-Joseph ’10])

Let δ_M = 1/√(π log M) and R = C(1 − δ_M − Δ).

The probability that the section error rate exceeds δ_M/(2C) is at most

P_e = exp(−c L Δ²)

(Use an outer Reed-Solomon code of rate 1 − δ_M/C to get block error probability P_e)

10 / 21
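Plugging in numbers shows how slowly the rate gap closes; `rate_backoff` is our name for this small helper:

```python
import math

def rate_backoff(M, C, delta):
    """delta_M = 1/sqrt(pi * log M); the theorem's achievable rate is R = C(1 - delta_M - delta)."""
    delta_M = 1.0 / math.sqrt(math.pi * math.log(M))
    return delta_M, C * (1.0 - delta_M - delta)

# Even for M = 2^20 columns per section, delta_M is still about 0.15
delta_M, R = rate_backoff(2 ** 20, C=1.5, delta=0.05)
```

The gap to capacity shrinks only like 1/√(log M), so very large sections are needed to operate near C with this decoder.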


Open Questions

Better decoding algorithms

- Different power allocation across sections

- [Cho-Barron ISIT ’12]: Adaptive Soft-decision Decoder

- Connections to message passing

Codebook has good structural properties

- With maximum-likelihood (ML) decoding, SPARCs attain capacity

- Error-exponent same order as random coding exponent [Barron-Joseph IT ’12]

Low-complexity dictionaries

- Maximum-likelihood performance with a ±1 dictionary [Takeishi et al ISIT ’13]

11 / 21


Lossy Compression

Source Coding with SPARCs . . .

12 / 21

Lossy Compression

[Figure: source S = S1, . . . , Sn → rate-R codebook (e^{nR} codewords) → reconstruction Ŝ = Ŝ1, . . . , Ŝn]

Distortion criterion: (1/n) ∑_k (S_k − Ŝ_k)²

For an i.i.d. N(0, σ²) source, the minimum distortion is σ² e^{−2R}

GOAL: can we achieve this with low-complexity algorithms?

- Computation & storage

12 / 21
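The Gaussian distortion-rate target is a one-liner (`gaussian_distortion` is our name; R is in nats per sample):

```python
import math

def gaussian_distortion(sigma2, R):
    """D(R) = sigma^2 * exp(-2R): minimum distortion for an i.i.d. N(0, sigma^2) source."""
    return sigma2 * math.exp(-2.0 * R)

# At rate R = (log 2)/2 nats (i.e. half a bit) per sample, distortion drops to sigma^2 / 2
D = gaussian_distortion(1.0, math.log(2) / 2)
```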

Compression with SPARC

[Figure: dictionary A (n rows, M·L columns, split into L sections of M columns each); β^T has a single nonzero entry c_i in section i]

M^L codewords of the form Aβ (M^L = e^{nR})

With L ∼ n/log n, can we efficiently find β̂ s.t. (1/n) ‖S − Aβ̂‖² < σ² e^{−2R} + Δ ?

13 / 21
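The transcript cuts off before the encoding algorithm, so the pass below is only our greedy sketch of a successive, section-by-section encoder: in each section, pick the column most correlated with the current residual and subtract its contribution. The scheme actually analyzed in [RV-Sarkar-Tatikonda ’13] may differ in its details.

```python
import numpy as np

def sparc_compress(A, s, L, M, coeffs):
    """Greedy section-by-section encoder sketch for a SPARC codebook."""
    beta_hat = np.zeros(L * M)
    residual = s.copy()
    for i in range(L):
        block = A[:, i * M:(i + 1) * M]
        j = int(np.argmax(block.T @ residual))     # best-matching column in section i
        beta_hat[i * M + j] = coeffs[i]
        residual = residual - coeffs[i] * block[:, j]
    return beta_hat, residual

rng = np.random.default_rng(2)
n, L, M = 64, 8, 32
A = rng.standard_normal((n, L * M))
s = rng.standard_normal(n)                         # i.i.d. N(0, 1) source
coeffs = np.full(L, 1.0 / np.sqrt(L))
beta_hat, residual = sparc_compress(A, s, L, M, coeffs)
distortion = np.sum(residual ** 2) / n             # (1/n) ||S - A beta_hat||^2
```

One greedy pass costs O(n · M · L), versus the exponential cost of searching all M^L codewords.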
