Exponential decay of reconstruction error from binary measurements of sparse signals
Deanna Needell
Joint work with R. Baraniuk, S. Foucart, Y. Plan, and M. Wootters
Outline
- Introduction
- Mathematical Formulation & Methods
- Practical CS
  - Other notions of sparsity
  - Heavy quantization
  - Adaptive sampling
The mathematical problem
1. Signal of interest f ∈ C^d (= C^{N×N})
2. Measurement operator A : C^d → C^m (m ≪ d)
3. Measurements y = A f + ξ
4. Problem: Reconstruct the signal f from the measurements y
Sparsity
Measurements y = A f + ξ.
Assume f is sparse:
- In the coordinate basis: ‖f‖_0 := |supp(f)| ≤ s ≪ d
- In an orthonormal basis: f = B x where ‖x‖_0 ≤ s ≪ d
In practice, we encounter compressible signals; f_s denotes the best s-sparse approximation to f.
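The best s-sparse approximation f_s is obtained by keeping the s largest-magnitude entries of f. A minimal sketch (the function name is ours):

```python
import numpy as np

def best_s_sparse(f, s):
    """f_s: keep the s largest-magnitude entries of f, zero out the rest."""
    fs = np.zeros_like(f)
    keep = np.argsort(np.abs(f))[-s:]
    fs[keep] = f[keep]
    return fs

f = np.array([3.0, -0.1, 0.02, 5.0, -2.0])
fs = best_s_sparse(f, 2)   # keeps the entries 5.0 and 3.0
```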
Many applications...
- Radar, error correction
- Computational biology, geophysical data analysis
- Data mining, classification
- Neuroscience
- Imaging
- Sparse channel estimation, sparse initial state estimation
- Topology identification of interconnected systems
- ...
Sparsity...
Sparsity in the coordinate basis: f = x
Reconstructing the signal f from measurements y
ℓ1-minimization [Candès–Romberg–Tao]
Let A satisfy the Restricted Isometry Property and set

  f̂ = argmin_g ‖g‖_1 such that ‖A g − y‖_2 ≤ ε,

where ‖ξ‖_2 ≤ ε. Then we can stably recover the signal f:

  ‖f − f̂‖_2 ≲ ε + ‖x − x_s‖_1 / √s.

This error bound is optimal.
Restricted Isometry Property
- A satisfies the Restricted Isometry Property (RIP) when there is δ < c such that

  (1 − δ)‖f‖_2^2 ≤ ‖A f‖_2^2 ≤ (1 + δ)‖f‖_2^2 whenever ‖f‖_0 ≤ s.

- m × d Gaussian or Bernoulli measurement matrices satisfy the RIP with high probability when m ≳ s log d.
- Random Fourier matrices and others with a fast multiply have a similar property: m ≳ s log^4 d.
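Verifying the RIP exactly is combinatorial, but a hedged empirical sanity check is easy: with m comfortably above s log d, a 1/√m-scaled Gaussian matrix nearly preserves the norm of random s-sparse vectors. All names and parameter choices below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
d, s, m = 1000, 5, 200                 # m well above s * log(d) ~ 35
A = rng.standard_normal((m, d)) / np.sqrt(m)

ratios = []
for _ in range(50):
    f = np.zeros(d)
    supp = rng.choice(d, size=s, replace=False)
    f[supp] = rng.standard_normal(s)
    ratios.append(np.linalg.norm(A @ f) / np.linalg.norm(f))
# Each ratio should be close to 1: near-isometry on sparse vectors.
```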
Other recovery methods
Greedy Algorithms
- If A satisfies the RIP, then A*A is "close" to the identity on sparse vectors
- Use the proxy p = A*y = A*Ax ≈ x
- Threshold to maintain sparsity: x̂ = H_s(p)
- Repeat
- (Iterative Hard Thresholding)
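The steps above can be sketched as a minimal Iterative Hard Thresholding loop for noiseless measurements y = Ax (dimensions and iteration count are illustrative, not tuned):

```python
import numpy as np

def hard_threshold(v, s):
    """H_s: keep the s largest-magnitude entries of v, zero out the rest."""
    out = np.zeros_like(v)
    keep = np.argsort(np.abs(v))[-s:]
    out[keep] = v[keep]
    return out

def iht(A, y, s, iters=200):
    """Iterative Hard Thresholding: proxy step followed by thresholding."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = hard_threshold(x + A.T @ (y - A @ x), s)
    return x

rng = np.random.default_rng(1)
d, s, m = 200, 5, 100
A = rng.standard_normal((m, d)) / np.sqrt(m)   # RIP-friendly scaling
x_true = np.zeros(d)
x_true[:s] = rng.standard_normal(s)
x_hat = iht(A, A @ x_true, s)
```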
The One-Bit Compressive Sensing Problem
- Standard CS: vectors x ∈ R^n with ‖x‖_0 ≤ s acquired via nonadaptive linear measurements ⟨a_i, x⟩, i = 1, ..., m.
- In practice, measurements need to be quantized.
- One-Bit CS: extreme quantization as y = sign(Ax), i.e.,
  y_i = sign⟨a_i, x⟩, i = 1, ..., m.
- Goal: find reconstruction maps Δ : {±1}^m → R^n such that, assuming the ℓ2-normalization of x (why?),
  ‖x − Δ(y)‖ ≤ γ
  provided the oversampling factor satisfies
  λ := m / (s ln(n/s)) ≥ f(γ)
  for f slowly increasing as γ decreases to zero; equivalently,
  ‖x − Δ(y)‖ ≤ g(λ)
  for g rapidly decreasing to zero as λ increases.
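A small sketch of the measurement model, illustrating why the ℓ2-normalization of x is assumed: the bits y = sign(Ax) are invariant under positive rescaling of x, so magnitude information is lost (names and sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)
n, s, m = 50, 3, 200
A = rng.standard_normal((m, n))

x = np.zeros(n)
x[:s] = rng.standard_normal(s)
x /= np.linalg.norm(x)            # only the direction of x is observable

y = np.sign(A @ x)                # one bit per measurement
y_scaled = np.sign(A @ (7.0 * x)) # identical bits after positive rescaling
```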
A visual
Existing Theoretical Results
I Convex optimization algorithms [Plan–Vershynin 13a, 13b].
- Uniform, nonadaptive, no quantization error:
  If A ∈ R^{m×n} is a Gaussian matrix, then w/hp
  ‖x − Δ_LP(y)/‖Δ_LP(y)‖_2‖_2 ≲ λ^{−1/5} whenever ‖x‖_0 ≤ s, ‖x‖_2 = 1.
- Nonuniform, nonadaptive, random quantization error:
  Fix x ∈ R^n with ‖x‖_0 ≤ s, ‖x‖_2 = 1. If A ∈ R^{m×n} is a Gaussian matrix, then w/hp
  ‖x − Δ_SOCP(y)‖_2 ≲ λ^{−1/4}.
- Uniform, nonadaptive, adversarial quantization error:
  If A ∈ R^{m×n} is a Gaussian matrix, then w/hp
  ‖x − Δ_SOCP(y)‖_2 ≲ λ^{−1/12} whenever ‖x‖_0 ≤ s, ‖x‖_2 = 1.
Limitations of the Framework
- Power decay is optimal since
  ‖x − Δ_opt(y)‖_2 ≳ λ^{−1}
  even if supp(x) is known in advance [Goyal–Vetterli–Thao 98].
- Geometric intuition: hyperplane cuts of the sphere S^{n−1} around x
  [animation: http://dsp.rice.edu/1bitCS/choppyanimated.gif]
- Remedy: adaptive choice of dithers τ_1, ..., τ_m in
  y_i = sign(⟨a_i, x⟩ − τ_i), i = 1, ..., m.
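A sketch of the dithered model (parameters illustrative): comparing ⟨a_i, x⟩ against nonzero thresholds τ_i lets the bits carry magnitude information, which zero-threshold measurements cannot:

```python
import numpy as np

rng = np.random.default_rng(3)
n, m, R = 20, 500, 1.0
A = rng.standard_normal((m, n))
tau = rng.normal(0.0, R, size=m)   # Gaussian dithers at scale R

x = np.zeros(n)
x[0] = 0.5                         # a 1-sparse example, ||x||_2 = 0.5
y = np.sign(A @ x - tau)

# Without dithers, x and 2x give identical bits; with dithers they differ.
y2 = np.sign(A @ (2 * x) - tau)
```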
Exponential Decay: General Strategy
- Rely on an order-one quantization/recovery scheme: for any x ∈ R^n with ‖x‖_0 ≤ s, ‖x‖_2 ≤ R, take q ≍ s ln(n/s) one-bit measurements and estimate both the direction and the magnitude of x by producing x̂ such that
  ‖x − x̂‖_2 ≤ R/4.
- Let x ∈ R^n with ‖x‖_0 ≤ s, ‖x‖_2 ≤ R. Start with x_0 = 0.
- For t = 0, 1, ..., estimate x − x_t by an estimate ê_t, then set
  x_{t+1} = H_s(x_t + ê_t), so that ‖x − x_{t+1}‖_2 ≤ R/2^{t+1}.
- After T iterations, the number of measurements is m = qT, and
  ‖x − x_T‖_2 ≤ R 2^{−T} = R 2^{−m/q} = R exp(−cλ).
- A software step is needed to compute the thresholds τ_i = ⟨a_i, x_t⟩.
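The outer loop above can be sketched with the order-one scheme replaced by a stand-in oracle that returns the residual up to an ℓ2 error of one quarter of the current bound (the oracle is ours, not the paper's scheme; the halving of the error bound per step then follows from the thresholding inequality ‖x − H_s(z)‖_2 ≤ 2‖x − z‖_2 for s-sparse x):

```python
import numpy as np

def hard_threshold(v, s):
    out = np.zeros_like(v)
    keep = np.argsort(np.abs(v))[-s:]
    out[keep] = v[keep]
    return out

def order_one_oracle(residual, bound, rng):
    """Stand-in (not the paper's scheme): residual plus l2 noise of size bound/4."""
    noise = rng.standard_normal(residual.size)
    noise *= (bound / 4) / max(np.linalg.norm(noise), 1e-12)
    return residual + noise

def iterate(x, s, R, T, rng):
    xt = np.zeros_like(x)
    for t in range(T):
        est = order_one_oracle(x - xt, R / 2**t, rng)  # error <= R / 2**(t+2)
        xt = hard_threshold(xt + est, s)               # error <= R / 2**(t+1)
    return xt

rng = np.random.default_rng(8)
n, s, R, T = 50, 4, 1.0, 10
x = np.zeros(n)
x[:s] = [0.3, -0.2, 0.1, 0.05]    # s-sparse, ||x||_2 <= R
xT = iterate(x, s, R, T, rng)
```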
Order-One Scheme Based on Convex Optimization
- Measurement vectors a_1, ..., a_q: independent N(0, I_n).
- Dithers τ_1, ..., τ_q: independent N(0, R^2).
- x̂ = argmin ‖z‖_1 subject to ‖z‖_2 ≤ R, y_i(⟨a_i, z⟩ − τ_i) ≥ 0.
- If q ≥ c δ^{−4} s ln(n/s), then w/hp
  ‖x − x̂‖ ≤ δR whenever ‖x‖_0 ≤ s, ‖x‖_2 ≤ R.
- Pros: dithers are nonadaptive.
- Cons: slow, post-quantization error not handled.
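Solving the program itself requires a convex solver; as a hedged sketch we only check the constraints its minimizer must satisfy, namely the norm bound and sign consistency with the dithered bits (function and variable names are ours):

```python
import numpy as np

def is_feasible(z, A, tau, y, R, tol=1e-9):
    """Check ||z||_2 <= R and y_i * (<a_i, z> - tau_i) >= 0 for all i."""
    return bool(np.linalg.norm(z) <= R + tol
                and np.all(y * (A @ z - tau) >= -tol))

rng = np.random.default_rng(4)
n, q, R = 30, 100, 1.0
A = rng.standard_normal((q, n))
tau = rng.normal(0.0, R, size=q)   # nonadaptive Gaussian dithers

x = np.zeros(n)
x[:2] = [0.4, -0.3]                # true signal, ||x||_2 = 0.5 <= R
y = np.sign(A @ x - tau)           # dithered one-bit measurements
```

By construction the true signal is always feasible, while any vector violating the norm bound is not.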
Order-One Scheme Based on Hard Thresholding
- Measurement vectors a_1, ..., a_q: independent N(0, I_n).
- Use half of them to estimate the direction of x as
  u = H'_s(A* sign(Ax)).
- Construct sparse vectors v, w with supp(v) ⊂ supp(u) according to a plane-geometry construction involving x, 2Ru, 2Rv, and w [figure].
- Use the other half to estimate the direction of x − w, applying hard thresholding again.
- Plane geometry to estimate the direction and magnitude of x.
- Cons: dithers ⟨a_i, w⟩ are adaptive.
- Pros: deterministic, fast, handles pre/post-quantization errors.
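The first step above can be sketched directly, reading H'_s as "keep the s largest-magnitude entries, then normalize" (our reading; parameters illustrative). The proxy A* sign(Ax) concentrates around a multiple of x, so its thresholded, normalized version is a good direction estimate:

```python
import numpy as np

rng = np.random.default_rng(5)
n, s, q = 100, 4, 4000
A = rng.standard_normal((q, n))

x = np.zeros(n)
x[:s] = rng.standard_normal(s)
x /= np.linalg.norm(x)             # unit-norm s-sparse signal

p = A.T @ np.sign(A @ x) / q       # proxy: E[p] = sqrt(2/pi) * x
u = np.zeros(n)
keep = np.argsort(np.abs(p))[-s:]
u[keep] = p[keep]
u /= np.linalg.norm(u)             # thresholded, normalized direction
```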
Measurement Errors
- Pre-quantization error e ∈ R^m in
  y_i = sign(⟨a_i, x⟩ − τ_i + e_i).
- If ‖e‖_∞ ≤ ε R 2^{−T} (or ‖e^t‖_2 ≤ ε√q ‖x − x_t‖_2 throughout), then
  ‖x − x_T‖_2 ≤ R 2^{−T} = R exp(−cλ)
  for the convex-optimization and hard-thresholding schemes.
- Post-quantization error f ∈ {±1}^m in
  y_i = f_i sign(⟨a_i, x⟩ − τ_i).
- If card({i : f_i^t = −1}) ≤ ηq throughout, then
  ‖x − x_T‖_2 ≤ R 2^{−T} = R exp(−cλ)
  for the hard-thresholding scheme.
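Both corruption models can be sketched as follows (the noise level and flip fraction are illustrative, matching the numerical section):

```python
import numpy as np

rng = np.random.default_rng(6)
n, q = 30, 1000
A = rng.standard_normal((q, n))
tau = rng.standard_normal(q)
x = rng.standard_normal(n) / np.sqrt(n)

e = rng.normal(0.0, 0.1, size=q)                 # pre-quantization noise
y_pre = np.sign(A @ x - tau + e)

flips = rng.choice(q, size=int(0.05 * q), replace=False)
f = np.ones(q)
f[flips] = -1.0                                  # 5% adversarial bit flips
y_post = f * np.sign(A @ x - tau)
```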
Numerical Illustration
[Figure: relative error ‖x − x*‖/‖x‖ versus iteration t.
Left: 1000 hard-thresholding-based tests, n = 100, s = 15, m = 100000, T = 10 (4 minutes); curves for perfect one-bit measurements, prequantization noise (st. dev. = 0.1), and bit flips (5%).
Right: 1000 second-order-cone-programming-based tests, n = 100, s = 10, m = 20000, T = 5 (11 hours); curves for perfect one-bit measurements and prequantization noise (st. dev. = 0.005).]
Numerical Illustration, ctd
[Figure: relative error ‖x − x*‖/‖x‖ versus m/(s log(n/s)), log scale.
Left: 500 hard-thresholding-based tests, n = 100, m/s = 2000:2000:20000, T = 6 (9 hours); curves for s = 10, 15, 20, 25.
Right: 100 second-order-cone-programming-based tests, n = 100, m/s = 20:20:200, T = 5 (11 hours); curves for s = 10, 15, 20, 25.]
Ingredients for the Proofs
- Let A ∈ R^{q×n} have independent N(0, 1) entries.
- Sign Product Embedding Property: if q ≥ C δ^{−6} s ln(n/s), then w/hp
  |(√(π/2)/q) ⟨Aw, sign(Ax)⟩ − ⟨w, x⟩| ≤ δ
  for all w, x ∈ R^n with ‖w‖_0, ‖x‖_0 ≤ s and ‖w‖_2 = ‖x‖_2 = 1.
- Simultaneous (ℓ2, ℓ1)-Quotient Property: w/hp, every e ∈ R^q can be written as e = Au with
  ‖u‖_2 ≤ d ‖e‖_2/√q and ‖u‖_1 ≤ d′ √s* ‖e‖_2/√q,
  where s* = q/ln(n/q).
- Restricted Isometry Property: if q ≥ C δ^{−2} s ln(n/s), then w/hp
  |(1/q)‖Ax‖_2^2 − ‖x‖_2^2| ≤ δ ‖x‖_2^2
  for all x ∈ R^n with ‖x‖_0 ≤ s.
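The Sign Product Embedding Property admits a hedged empirical check: for unit-norm sparse w and x, since E⟨a, w⟩ sign⟨a, x⟩ = √(2/π) ⟨w, x⟩ for Gaussian a, the rescaled product (√(π/2)/q)⟨Aw, sign(Ax)⟩ concentrates around ⟨w, x⟩ (dimensions illustrative):

```python
import numpy as np

rng = np.random.default_rng(7)
n, s, q = 200, 5, 20000
A = rng.standard_normal((q, n))

def random_sparse_unit(rng, n, s):
    v = np.zeros(n)
    v[rng.choice(n, size=s, replace=False)] = rng.standard_normal(s)
    return v / np.linalg.norm(v)

w = random_sparse_unit(rng, n, s)
x = random_sparse_unit(rng, n, s)

# Rescaled sign product, which should concentrate around <w, x>.
lhs = np.sqrt(np.pi / 2) / q * (A @ w) @ np.sign(A @ x)
```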
Ingredients for the Proofs, ctd
- Random hyperplane tessellations of √s B_1^n ∩ S^{n−1}:
  - a_1, ..., a_q ∈ R^n independent N(0, I_n).
  - If q ≥ C δ^{−4} s ln(n/s), then w/hp all x, x′ ∈ √s B_1^n ∩ S^{n−1} with sign⟨a_i, x⟩ = sign⟨a_i, x′⟩, i = 1, ..., q, satisfy
    ‖x − x′‖_2 ≤ δ.
- Random hyperplane tessellations of √s B_1^n ∩ B_2^n:
  - a_1, ..., a_q ∈ R^n independent N(0, I_n),
  - τ_1, ..., τ_q ∈ R independent N(0, 1),
  - apply the previous result to [a_i, −τ_i], [x, 1], [x′, 1].
  - If q ≥ C δ^{−4} s ln(n/s), then w/hp all x, x′ ∈ √s B_1^n ∩ B_2^n with sign(⟨a_i, x⟩ − τ_i) = sign(⟨a_i, x′⟩ − τ_i), i = 1, ..., q, satisfy
    ‖x − x′‖_2 ≤ δ.
Thank you!
E-mail: [email protected]
Web: www.cmc.edu/pages/faculty/DNeedell
References:
- R. Baraniuk, S. Foucart, D. Needell, Y. Plan, M. Wootters. Exponential decay of reconstruction error from binary measurements of sparse signals, submitted.
- E. J. Candès, J. Romberg, and T. Tao. Stable signal recovery from incomplete and inaccurate measurements. Communications on Pure and Applied Mathematics, 59(8):1207–1223, 2006.
- E. J. Candès, Y. C. Eldar, D. Needell and P. Randall. Compressed sensing with coherent and redundant dictionaries. Applied and Computational Harmonic Analysis, 31(1):59–73, 2010.
- M. A. Davenport, D. Needell and M. B. Wakin. Signal space CoSaMP for sparse recovery with redundant dictionaries, submitted.
- D. Needell and R. Ward. Stable image reconstruction using total variation minimization. J. Fourier Analysis and Applications, to appear.
- Y. Plan and R. Vershynin. One-bit compressed sensing by linear programming, Comm. Pure Appl. Math., to appear.