Shaping Low-Density Lattice Codes Using Voronoi Integers
Information Theory Workshop Hobart, Tasmania, Australia
November 2014
Nuwan S. Ferdinand
Brian M. Kurkoski
Behnaam Aazhang
Matti Latva-aho
University of Oulu, Finland
Japan Advanced Institute of Science and Technology
Rice University, USA
University of Oulu, Finland
/24Brian Kurkoski, JAIST
Lattice codes can achieve the capacity of the AWGN channel [Erez and Zamir '04].
Nested lattice codes: want a lattice that is simultaneously good for coding and shaping.
Other information-theoretic results using lattices:
• Lattices for the relay channel, e.g. [Song-Devroye '13]
• Two-way (bidirectional) relay channel, e.g. [Wilson et al.]
• Compute-forward relaying [Nazer-Gastpar '11]
How to move from information theory to practical lattice codes?
Nested Lattice Codes Achieve Capacity
Recent high-dimension lattice constructions approach capacity:
• Construction A with LDPC codes
• Construction D with turbo codes, spatially coupled LDPC
• Lattices based on polar codes
• Low-Density Lattice Codes [Sommer et al. 2008]
Common claim: within a few tenths of a dB of unconstrained capacity.
No assumption is made about the channel power constraint.
Capacity-Approaching Lattice Constructions
Satisfy Power Constraint with Nested Lattices
Good for correcting errors
Good for quantization (satisfy the power constraint)
high complexity!
1.53 dB Shaping Gain of Sphere over Cube
Depends only on shape of B (normalized second moment)
Depends only on coding lattice Λ
lim_{n→∞} G(n-sphere) = 1/(2πe),   G(cube) = 1/12

lim_{n→∞} G(cube)/G(n-sphere) = πe/6,   i.e. 10·log10(πe/6) ≈ 1.53 dB
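As a quick numerical check of the limit above (a sketch using nothing beyond the stated formulas):

```python
import math

# Shaping gain of the n-sphere over the cube: 10*log10(G(cube)/G(n-sphere)),
# with G(cube) = 1/12 and lim_{n->inf} G(n-sphere) = 1/(2*pi*e).
gain_db = 10 * math.log10((1 / 12) / (1 / (2 * math.pi * math.e)))
print(round(gain_db, 2))  # 1.53
```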
[Figure: the shaping region B shown as a square and as a ball, axes from −1 to 1.]
Separate the lattice Λ and shaping region B contributions to signal power:

    Average Power ≈ [ ∫_B ||x||² dx / ( n V(B)^{2/n + 1} ) ] · ( M V(Λ) )^{2/n}

The bracketed factor is G(B), which depends only on the shape of B; the factor (M V(Λ))^{2/n} depends only on the coding lattice Λ.
Shaping Gain for Well-Known Lattices

[Plot: normalized second moment G (0.055 to 0.085) vs. lattice dimension n (1 to 24) for the lattices Z, A2, D3, D4, D5, E7, E8, Λ16, Λ24.]
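The shaping gain of any specific lattice follows the same way from its normalized second moment. A sketch, using the tabulated value G(E8) ≈ 0.0717 from the lattice literature (that constant is an assumption here, not a number stated in this talk):

```python
import math

G_cube = 1 / 12   # normalized second moment of the hypercube
G_E8 = 0.0717     # tabulated value for the E8 lattice (literature value)

# Shaping gain of E8 over the hypercube, in dB.
gain_db = 10 * math.log10(G_cube / G_E8)
print(round(gain_db, 2))  # ~0.65, matching the E8 gain quoted later
```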
Small n (n = 1, 2, 8, 23): well-known lattices (E8, Leech)
• Weak coding gain
• Efficient shaping algorithms
• Good shaping gain (0.65-1.0 dB)

Large n (n = 1000, 10^4, 10^5): BP-based lattices
• Strong coding gain
• Inefficient shaping algorithms
• Uncertain shaping gains; reported: 0.4 dB
Satisfy Power Constraint with Nested Lattices
Find a construction that:
• has the capacity-approaching coding gain of high-dimension lattices, and
• has the shaping gain and implementation complexity of a well-known lattice like E8.

Must overcome the problem of mismatched dimensions.
It Would Be Great If…
Key result: a lattice construction technique for shaping LDLC lattices.
Elements of the technique:
1. "Voronoi integers": shape the integers using small-dimension lattices
2. Systematic lattice encoding: each lattice point is near its corresponding integer
Results: the full 0.65 dB shaping gain of the E8 lattice (2.1 dB from (1/2) log(1+SNR)). Competing nested LDLCs obtained only 0.4 dB, using higher complexity.
First, a review of the 1.53 dB shaping gain result and LDLC lattices.
Outline
LDLC lattices were introduced by Sommer, Shalvi and Feder [IT 2008]:
• LDLCs have a sparse inverse generator matrix H
• Gaussian belief-propagation decoding
• High dimension: n = 100; 1,000; 10,000; 100,000
• Come within 0.6 dB of unconstrained capacity

LDLCs for the power-constrained channel [Sommer et al., ITW 2009]:
• H matrix in triangular form; the M-algorithm is used for quantization
• Obtained 0.4 dB gain over the hypercube (out of a possible 1.53 dB)
Low-Density Lattice Codes
The inverse generator H = G^{-1} has constant row and column weight d.
Latin square: each row/column contains {h1, h2, …, hd} with random signs, h1 ≥ h2 ≥ … ≥ hd.
• Choose h1 = 1 (forces the determinant to be 1)
• Random sign changes
• d = 7 gives good performance
• BP convergence condition: α = (h2² + … + hd²) / h1² < 1
LDLC “Latin Square” Construction
Example: {1, 1/2, 1/3}
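The Latin-square placement above can be sketched in a few lines: put each generating-sequence value into the matrix via a random permutation with random signs, so every row and every column receives each value exactly once. This is a toy version, assuming collisions are simply retried; real LDLC constructions additionally avoid short cycles, which this sketch ignores.

```python
import numpy as np

def latin_square_H(n, h, rng):
    """Toy Latin-square LDLC inverse generator: each value h_k is placed by a
    random permutation with random signs, giving constant row and column
    weight d = len(h). Retries whenever two values land in the same slot."""
    while True:
        H = np.zeros((n, n))
        for hk in h:
            perm = rng.permutation(n)
            if np.any(H[np.arange(n), perm] != 0):   # slot collision: retry
                break
            signs = rng.choice([-1.0, 1.0], size=n)
            H[np.arange(n), perm] = signs * hk
        else:
            return H

rng = np.random.default_rng(0)
H = latin_square_H(16, [1.0, 0.5, 1.0 / 3.0], rng)   # sequence {1, 1/2, 1/3}
```

Note the example sequence satisfies the convergence condition: ((1/2)² + (1/3)²)/1² < 1.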
Sommer [ITW 2009]: triangular construction
• Dimension n = 10,000
• 1's on the main diagonal; matrix made triangular
• "90% Latin square" of weight d: {h1, h2, …, hd}
• Quantization/shaping using the M-algorithm; complexity is O(ndM), but M is large
• Shaping gain of 0.4 dB over the hypercube
Modest shaping gain for high complexity.
Nested Lattice Codes With LDLCs
…that achieves the generalized capacity of the AWGN channel without restrictions, also achieves the channel capacity of the power-constrained AWGN channel, with a properly chosen spherical shaping region [1]. A shaping algorithm for LDLC, based on the iterative LDLC decoding algorithm, was suggested in [8], and was demonstrated for small dimensions. In this paper, several shaping methods for LDLC are proposed. The methods are demonstrated by simulations to give good shaping gains for practical dimensions.
II. USING A LOWER TRIANGULAR PARITY CHECK MATRIX
In [2], Latin square LDLC were defined. In a Latin square LDLC, every row and column of the parity check matrix H has the same d nonzero values, except for a possible change of order and random signs. The sorted sequence of these d values h1 ≥ h2 ≥ … ≥ hd > 0 is referred to as the generating sequence of the Latin square LDLC.
We would like to use a simpler structure for H, which will be more convenient for encoding and shaping. First, it is assumed that all the values on the main diagonal of H are 1. This can be assumed, without loss of generality, for any Latin square LDLC for which one of the generating sequence elements is 1, by permuting rows and columns of H and by multiplying whole rows by −1 as necessary. Also, we would like H to have a lower triangular structure. Obviously, a parity check matrix of an LDLC cannot be lower triangular, since the upper rows (as well as the rightmost columns) cannot contain d nonzero elements. Therefore, it is suggested that the column degree of the rightmost column of H start from 1 and gradually increase until d. In the same manner, the row degree of the top row will be 1, and it will gradually increase until d.
The following matrix is an example of such a matrix with dimension n = 8, degree d = 3 and generating sequence (1, 0.7, 0.5). The two rightmost columns have a single nonzero element, the next two have 2 nonzero elements, and the 4 leftmost columns have the final degree d = 3. The same is true for the matrix rows, starting from the top.
    [  1.0    0     0     0     0     0     0     0  ]
    [   0    1.0    0     0     0     0     0     0  ]
    [  0.7    0    1.0    0     0     0     0     0  ]
    [   0     0   −0.7   1.0    0     0     0     0  ]
    [ −0.5    0     0    0.7   1.0    0     0     0  ]
    [   0   −0.7    0    0.5    0    1.0    0     0  ]
    [   0   −0.5    0     0    0.7    0    1.0    0  ]
    [   0     0   −0.5    0     0    0.7    0    1.0 ]
Such a lower triangular matrix can be generated with methods similar to those described in [2] for Latin square LDLC, where the location of each element of the generating sequence in each row can be described by a permutation (since each element appears once, and only once, in each row and column). The difference is that here, instead of a permutation, we shall use a mapping from a subset of the rows (starting at the first row where this element appears, ending at the bottom row) to a subset of the columns (starting at the leftmost column, ending at the rightmost column where this element still appears).
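A minimal sketch of one such lower-triangular construction: instead of the paper's row-to-column mappings, place each generating-sequence value on a fixed subdiagonal (a simple Toeplitz-like realization, in the spirit of the signal codes mentioned below), which already reproduces the gradually increasing row/column degrees. The offsets used here are an illustrative assumption, not the paper's choice.

```python
import numpy as np

def triangular_H(n, h, offsets, rng):
    """Lower-triangular LDLC-style matrix H with ones on the diagonal.
    Value h[k] is placed on the subdiagonal at offsets[k] with random signs,
    so row/column degrees grow gradually from 1 up to d = len(h)."""
    H = np.zeros((n, n))
    for hk, off in zip(h, offsets):
        for i in range(off, n):
            s = 1.0 if off == 0 else rng.choice([-1.0, 1.0])
            H[i, i - off] = s * hk
    return H

rng = np.random.default_rng(1)
# n = 8, generating sequence (1, 0.7, 0.5), illustrative offsets (0, 2, 4)
H = triangular_H(8, [1.0, 0.7, 0.5], [0, 2, 4], rng)
```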
As explained in the next section, the lower triangular structure is very convenient for encoding and shaping. However, it has a drawback: the codeword components whose respective H columns have low degree are less protected. For example, an element whose column has a single nonzero element is effectively uncoded, since it only takes part in a single check equation. As a result, the information integers whose check equations involve less protected codeword components should contain a smaller amount of information, i.e. belong to a smaller constellation. For example, we can use the following constellations for the integers in the example matrix above. The first 2 integers can assume one of 2 possible integer values (i.e. contain 1 bit of information). The next 2 integers can assume one of 4 integer values and contain 2 bits of information, while all the other integers can assume one of 8 possible values and contain 3 bits of information. As shown in Section IV, the rate loss due to this selective constellation reduction can be made relatively small, especially for large codeword length n.
We note that for a lower triangular H, the LDLC shaping problem becomes similar to shaping of signal codes [9], [10]. Signal codes are lattice codes for which encoding is done by convolving the information integers with a fixed, finite-length filter. The resulting lattice generator matrix has a Toeplitz structure, which is close to being lower triangular.
III. SHAPING METHODS FOR LDLC

For the methods in Sections III-A to III-C below, it is assumed that H is lower triangular with ones on the diagonal, as defined in Section II. In Section III-D, the methods of Sections III-A and III-C are extended to an arbitrary H. We shall assume that each information integer b_i is drawn from the finite constellation {0, 1, …, L − 1}, where L is the constellation size. As discussed above, we may want to use a different constellation size for each integer, so the constellation size of the i-th integer b_i will be denoted by L_i.
A. Hypercube Shaping

Hypercube shaping finds b′ such that the components of x′ = Gb′ are uniformly distributed. To achieve that, we shall assume that

    b′_i = b_i − L_i k_i,    (1)

where k_i is an integer. The method starts from the first (top) check equation, i.e., from i = 1, and continues to i = n. For each equation, the value of k_i is chosen such that |x′_i| ≤ L_i/2, where x′_i is the resulting codeword element, i.e.

    k_i = round( (1/L_i) ( b_i − Σ_{l=1}^{i−1} H_{i,l} x′_l ) ).    (2)

The modified integer b′_i is then calculated according to (1), and the codeword element x′_i is then easily calculated as:

    x′_i = b′_i − Σ_{l=1}^{i−1} H_{i,l} x′_l.    (3)
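The recursion (1)-(3) in plain code, run on the Section II example matrix with the selective constellation sizes discussed above (the particular integer vector b is an arbitrary illustration):

```python
import numpy as np

def hypercube_shaping(H, b, L):
    """Hypercube shaping, eqs. (1)-(3): forward substitution down the lower
    triangular H (unit diagonal), shifting each integer b_i by a multiple of
    L_i so that the codeword element satisfies |x'_i| <= L_i/2."""
    n = len(b)
    x = np.zeros(n)
    b_mod = np.zeros(n)
    for i in range(n):
        t = b[i] - H[i, :i] @ x[:i]       # b_i minus the back-substitution sum
        k = np.round(t / L[i])            # eq. (2)
        b_mod[i] = b[i] - L[i] * k        # eq. (1): b'_i = b_i - L_i k_i
        x[i] = t - L[i] * k               # eq. (3): x'_i
    return x, b_mod

# Example matrix from Section II (n = 8, d = 3, sequence (1, 0.7, 0.5))
H = np.array([
    [ 1.0,  0.0,  0.0,  0.0,  0.0,  0.0,  0.0,  0.0],
    [ 0.0,  1.0,  0.0,  0.0,  0.0,  0.0,  0.0,  0.0],
    [ 0.7,  0.0,  1.0,  0.0,  0.0,  0.0,  0.0,  0.0],
    [ 0.0,  0.0, -0.7,  1.0,  0.0,  0.0,  0.0,  0.0],
    [-0.5,  0.0,  0.0,  0.7,  1.0,  0.0,  0.0,  0.0],
    [ 0.0, -0.7,  0.0,  0.5,  0.0,  1.0,  0.0,  0.0],
    [ 0.0, -0.5,  0.0,  0.0,  0.7,  0.0,  1.0,  0.0],
    [ 0.0,  0.0, -0.5,  0.0,  0.0,  0.7,  0.0,  1.0]])
L = np.array([2, 2, 4, 4, 8, 8, 8, 8])   # selective constellation sizes L_i
b = np.array([1, 0, 3, 2, 7, 5, 0, 6])   # information integers, b_i in {0..L_i-1}
x, b_mod = hypercube_shaping(H, b, L)
```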
Offset to reduce average power
Proposed Construction
Fig. 2. Encoder and decoder using Z^m/Λ_s integers in LDLC.
(a) Encoder: u ∈ Z^n → splitter → u_1, u_2, …, u_{n/m} ∈ Z^m → map each to Z^m/Λ_s → c_1, c_2, …, c_{n/m} → combine integers → c ∈ Z^n → systematic shaping x = H^{-1}(c − k) → subtract offset a: x′ = x − a.
(b) Decoder: y = x′ + z (noise z) → add offset a → x + z → LDLC decoding → x → round → c → splitter → c_1, …, c_{n/m} → inverse map Λ_s → Z^m → u_1, …, u_{n/m} → combined integers u.
E. Numerical Example

Fig. 1 shows an example of Z²/4D₂, using the scaled D₂ lattice:

    G = 4 [ 1  0
            1  2 ].    (23)

The figure shows the Voronoi region and the elements c, which are integers inside the shaping region, labeled as u = (u1, u2). Note that for consistent tie-breaking, some elements on the Voronoi boundary are included, and some are not. In this case |det G| = 32, so the rate is R = 2.5 bits/dimension.
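The Z²/4D₂ example can be reproduced by brute force. The Section III-B mapping itself is not in this transcript, so this sketch simply quantizes each integer point to its nearest lattice point (with a deterministic, translation-invariant tie-break) and keeps the residual; every coset of the lattice in Z² then yields exactly one Voronoi-integer representative.

```python
import itertools
import numpy as np

# Scaled D2 shaping lattice from eq. (23): G = 4 * [[1, 0], [1, 2]], |det G| = 32.
G = 4 * np.array([[1, 0], [1, 2]])

# Candidate lattice points G z for small integer z (enough to cover the range used).
cands = [G @ np.array(z) for z in itertools.product(range(-4, 5), repeat=2)]

def voronoi_rep(u):
    """Coset representative of integer u inside the Voronoi region of the lattice.
    Ties broken lexicographically on the lattice point, which is consistent
    across a coset since lexicographic order is translation invariant."""
    best = min(cands, key=lambda p: (np.sum((u - p) ** 2), tuple(p)))
    return tuple(u - best)

reps = {voronoi_rep(np.array(u)) for u in itertools.product(range(-8, 9), repeat=2)}
print(len(reps))  # 32 representatives -> R = log2(32)/2 = 2.5 bits/dimension
```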
IV. PROPOSED CODE CONSTRUCTION AND DECODER

A. Code Construction

This section describes the code construction obtained by performing systematic lattice encoding on the Voronoi integers. Systematic lattice encoding does not change the structure of the lattice; it is only a mapping from integers to lattice points, so the high coding gain properties of Λ_c are retained. By using systematic lattice shaping, the average power of x is similar to the average power of c. The Voronoi integers give the same shaping gain observed in the Λ_s lattice with much less complexity. Thus, we can reach the goal of simultaneously having good coding gain along with some shaping gain.

The shaping lattice Λ_s has dimension m and the coding lattice Λ_c has dimension n > m. We require that n/m is an integer. The transmitter divides the integer vector u into n/m integer vectors such that u = [u_1, u_2, …, u_{n/m}], where u_i ∈ Z^m. Then, these are encoded to Z^m/Λ_s to obtain c_i, ∀i ∈ {1, …, n/m}, as described in Section III-B.
It is evident [13] that the mean of c may not be zero; therefore, x also has non-zero mean. Hence, subtract the mean of Z^m/Λ_s to obtain the transmitted codeword x′:

    x′ = x − a,    (24)

where

    a = [a_s a_s … a_s]  (a_s repeated n/m times),    (25)

and a_s ∈ R^m is the mean of Z^m/Λ_s.
B. Channel and Decoding

The source transmits the codeword x′ over an AWGN channel and the received signal is y:

    y = x′ + z,    (26)

where z is the AWGN noise vector with variance σ_z².

At the decoder, the receiver adds the offset a to obtain y′ = x + z and performs LDLC iterative decoding to obtain x. Next, it rounds to the nearest integer to find the integer vector c. Finally, it performs the inverse mapping for Voronoi integers as in Section III-C to obtain the estimated information vector u. The encoding and decoding block diagram is shown in Fig. 2.
Remark 1: This shaping method significantly reduces the complexity compared to nested lattice shaping using the M-algorithm as proposed in [9]. Further, it has been found that for small dimensions, i.e., n ≤ 1000, the QR-decomposition method gives better performance with hypercube/systematic shaping schemes, since there is no rate penalty [9]. However, nested lattice shaping is not possible with the QR-decomposition method due to high complexity; hence, there is no additional shaping gain over the hypercube. Interestingly, by using the proposed Voronoi integers with QR decomposition and systematic shaping, we can obtain higher shaping gain with lower complexity.
V. NUMERICAL RESULTS

Efficient decoding and quantization schemes are available for the E8 lattice [12]; further, it has a shaping gain of 0.65 dB [11]. This motivates us to use the E8 lattice to obtain Voronoi integers, i.e., Z^8/ME8, in our simulations.
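The efficient E8 quantizer referenced here is, in its widely used form, a two-coset decoder built on D8 (E8 = D8 ∪ (D8 + ½)). A sketch of that standard algorithm, not code from the paper:

```python
import numpy as np

def closest_Dn(x):
    """Nearest point in D_n (integer vectors with even coordinate sum):
    round componentwise; if the sum is odd, re-round the coordinate with
    the largest rounding error the other way."""
    f = np.round(x)
    if f.sum() % 2 != 0:
        i = np.argmax(np.abs(x - f))
        f[i] += 1.0 if x[i] > f[i] else -1.0
    return f

def closest_E8(x):
    """E8 = D8 union (D8 + 1/2): decode in both cosets, keep the closer point."""
    a = closest_Dn(x)
    b = closest_Dn(x - 0.5) + 0.5
    return a if np.sum((x - a) ** 2) <= np.sum((x - b) ** 2) else b
```

For example, any E8 lattice point (such as the all-halves vector) quantizes to itself, and small perturbations of the origin quantize to the origin.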
“Voronoi Integers” Systematic lattice encoding
Under systematic shaping, if the integers are "shaped," then the lattice code will be shaped.
Λ is a small-dimensional lattice, so quantization, i.e. shaping, is easy.
Define "Voronoi integers": the set of integers inside a fundamental region.
“Voronoi Integers”
[Figure: Voronoi integers in Z², plotted over axes c1 and c2 from −5 to 5; each integer point is labeled with its information pair (u1, u2) under the Z^m/Λ_shape mapping.]
Voronoi Integers Example
[Figure: Voronoi integers example on the integer grid.]
Systematic Lattice Encoding
Requirement: triangular H with 1's on the diagonal.
Example using c ∈ {2, 3, 4} × {2, 3, 4}:
[Figure: the nine integer points (2,2), (2,3), (2,4), (3,2), (3,3), (3,4), (4,2), (4,3), (4,4) and the corresponding lattice points.]
Recall: c = round(x).
Note: the Voronoi volume is det(H) = 1 and the integer grid also has volume 1, so there are no "gaps".
Systematic Lattice Encoding
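The encoding rule on these slides can be sketched directly. The specific choice of k below (k_i is the rounded negative of the back-substitution term) is our reading of "x = H^{-1}(c − k)" together with "c = round(x)"; it is not pseudocode from the talk. The 3×3 H used here is an illustrative fragment, not a matrix from the paper.

```python
import numpy as np

def systematic_encode(H, c):
    """Systematic lattice encoding sketch: pick integer k so that the lattice
    point x = H^{-1}(c - k) lies within 1/2 of the integer vector c in every
    coordinate, i.e. round(x) = c. Assumes H lower triangular, unit diagonal."""
    n = len(c)
    x = np.zeros(n)
    for i in range(n):
        t = H[i, :i] @ x[:i]
        k = np.round(-t)
        x[i] = c[i] - k - t     # |x_i - c_i| <= 1/2, so round(x_i) = c_i
    return x

H = np.array([[ 1.0, 0.0, 0.0],
              [ 0.7, 1.0, 0.0],
              [-0.5, 0.7, 1.0]])
c = np.array([2.0, 3.0, 4.0])
x = systematic_encode(H, c)
```

So the encoder emits a lattice point "nearby" its integer vector, which is exactly why shaping the integers shapes the code.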
Average Transmit Power
[Plot: gain over hypercube shaping vs. log2(L), for c (Z^8/LE8 with no LDLC, i.e. the E8 lattice alone) and x′ (Z^8/LE8 with LDLC systematic shaping), with offsets a = a_opt and a = 0, compared against the E8 shaping gain bound.]
AWGN channel with average power constraint:
• 5 bits/dimension
• Coding: LDLC lattice, dimension n = 10,000
• Shaping: E8 lattice with m = 8
• Compared with the M-algorithm LDLC shaping of Sommer et al.
Power-Constrained AWGN Channel
0.65 dB Gain Over Hypercube Shaping!
[Plot: symbol error rate (SER) vs. average SNR (23.5-26.5 dB, SER 10^-5 to 10^-1). Curves: hypercube shaping [7], nested lattice shaping [7], proposed shaping, uniform input capacity, AWGN capacity. Nested lattice shaping gains 0.4 dB over hypercube shaping; the proposed shaping gains 0.65 dB.]
0.15 dB better than the M-algorithm, and much lower complexity.
Lattices are an alternative to finite-field codes for the AWGN channel. Shaping techniques to obtain the 1.53 dB gain are "accessible":
• Coset codes / nested lattice codes, but with high complexity

We proposed:
• "Voronoi integers" using low-dimension lattices
• Systematic lattice shaping for LDLCs
This combines the high coding gain of LDLCs with the good shaping gain of the E8 lattice.
Conclusion