Compressive Blind Source Separation


Transcript of Compressive Blind Source Separation

Backgrounds | Bayesian Probabilistic Inference | Numerical Results | Conclusions and Future Work

Compressive Blind Source Separation

Yiyue Wu, Yuejie Chi, Robert Calderbank

Dept. of Electrical EngineeringPrinceton University

Sept. 27, 2010

Yuejie Chi ICIP 2010: Compressive BSS


Outline

1 Backgrounds: Motivations; Background

2 Bayesian Probabilistic Inference: Problem Formulation; Prior Distribution; Markov Chain Monte Carlo

3 Numerical Results: One-dimensional BSS; Two-dimensional BSS

4 Conclusions and Future Work


Motivations | Background

Bridging Compressive Sensing and Machine Learning

There is growing interest in applying sparse techniques to machine learning and image processing:

SVM can be done in the compressed domain [CJS09];
Multi-label prediction via CS [HKLZ09];
Bayesian inference for reconstruction [HC09, DWB08, IMD06];
Bayesian inference for image denoising, inpainting, ...

This work and its take-away message: pick an interesting problem to show that compressed measurements are as good as complete measurements, as long as you have enough measurements.



Motivations

Blind Source Separation (BSS) from conventional mixtures

Important in many areas: speech recognition, MIMO communications, etc.

In many cases, measurements are expensive:

Body Area Networks (BAN): sensors are power hungry and need to last a few days to a few weeks.
Recent developments in Compressive Sensing (CS) provide an intriguing solution.

Our work: recover the sources from a small number of compressed measurements of the mixtures:

Conventional methods like PCA and ICA may fail due to reduced dimensionality.


Background: Compressive Sensing

Recovery of a sparse or compressible signal from a small number of linear measurements [Don06, CT05]:

y = Φx + n,   Φ ∈ C^(M×N),   M ≪ N

Many classes of reconstruction algorithms are available now:

"Old-fashioned" ℓ1 minimization.
Greedy algorithms: OMP, CoSaMP, GPSR, ...
Bayesian inference.

Theoretical performance guarantees are usually given by the Restricted Isometry Property (RIP) of the measurement matrix, which random matrices satisfy with high probability.
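The slide lists OMP among the greedy algorithms; as an illustration (a minimal sketch, not the reconstruction used in this work), here is OMP in NumPy for a real-valued Φ with made-up dimensions:

```python
import numpy as np

def omp(Phi, y, k, tol=1e-10):
    """Orthogonal Matching Pursuit: greedily recover a k-sparse x from y ≈ Phi @ x."""
    M, N = Phi.shape
    residual = y.copy()
    support = []
    x = np.zeros(N)
    for _ in range(k):
        # Pick the column most correlated with the current residual.
        j = int(np.argmax(np.abs(Phi.T @ residual)))
        if j not in support:
            support.append(j)
        # Re-fit the coefficients on the chosen support by least squares.
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        x = np.zeros(N)
        x[support] = coef
        residual = y - Phi @ x
        if np.linalg.norm(residual) < tol:
            break
    return x

# Usage: a 3-sparse signal in N = 64 dimensions from M = 24 Gaussian measurements.
rng = np.random.default_rng(0)
M, N, k = 24, 64, 3
Phi = rng.standard_normal((M, N)) / np.sqrt(M)
x_true = np.zeros(N)
x_true[[5, 20, 40]] = [1.0, -2.0, 0.5]
x_hat = omp(Phi, Phi @ x_true, k)
```

With Gaussian Φ and M well above the sparsity level, OMP recovers the support exactly in the noiseless case, which is the "enough measurements" regime this talk relies on.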


Background: Blind Source Separation

Goal: to recover T independent sources from L observed mixtures of the sources, possibly corrupted by noise:

X = ΘA + ε

where

X ∈ C^(N×L) is the matrix of observations;
Θ ∈ C^(N×T) is the matrix of sources;
ε ∈ C^(N×L) is the noise;
A ∈ C^(T×L) is the mixing matrix.

Without loss of generality, we let all representations lie in the wavelet domain, which is particularly fit for image applications.
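The mixing model X = ΘA + ε can be simulated directly; a sketch with real-valued matrices and hypothetical dimensions (the source sparsity level and noise scale are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
N, T, L = 512, 2, 2              # coefficients per source, sources, mixtures

# Sparse wavelet-domain sources Theta (N x T): a few nonzero coefficients each.
Theta = np.zeros((N, T))
for t in range(T):
    idx = rng.choice(N, size=20, replace=False)
    Theta[idx, t] = rng.standard_normal(20)

A = rng.standard_normal((T, L))           # mixing matrix (T x L)
eps = 0.01 * rng.standard_normal((N, L))  # additive noise (N x L)
X = Theta @ A + eps                       # observed mixtures (N x L)
```

Each column of X is a noisy linear mixture of the T sparse source columns, which is exactly the input a conventional BSS method would see.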


Approaches

Separate procedures: Mixture Recovery + BSS

We propose a Bayesian answer to this problem:

Simplified procedure.
Better performance.


Problem Formulation | Prior Distribution | Markov Chain Monte Carlo

Problem Formulation

Compressed measurements of mixtures of the sources:

Y = ΦΘA + N,

so that Y_k | Φ, Θ, A, α^N_k ∼ N(ΦΘA_k, (α^N_k)^(-1) I),

where Y_k and A_k are the kth columns of Y and A.

We maximize the posterior distribution:

p(Θ, A, α | Y, Φ) ∝ p(Y | Φ, Θ, A, α^N) π(α^N) π(A | α^A) π(Θ | α^Θ),

where α = [α^N, α^A, α^Θ] is the set of hyperparameters.
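Forming the compressed mixtures Y = ΦΘA + N and evaluating the per-column Gaussian log-likelihood is mechanical; a sketch with real-valued, illustrative stand-ins for Φ, Θ, A and the noise precisions:

```python
import numpy as np

rng = np.random.default_rng(2)
M, N, T, L = 128, 512, 2, 2

Theta = rng.standard_normal((N, T))              # sources (wavelet domain)
A = rng.standard_normal((T, L))                  # mixing matrix
Phi = rng.standard_normal((M, N)) / np.sqrt(M)   # measurement matrix
alpha_N = np.full(L, 100.0)                      # per-column noise precisions

Noise = rng.standard_normal((M, L)) / np.sqrt(alpha_N)
Y = Phi @ Theta @ A + Noise                      # compressed mixtures (M x L)

def col_loglik(Yk, Ak, Phi, Theta, a):
    """log N(Yk | Phi @ Theta @ Ak, a^{-1} I), up to an additive constant."""
    r = Yk - Phi @ Theta @ Ak
    return 0.5 * len(Yk) * np.log(a) - 0.5 * a * (r @ r)

ll = sum(col_loglik(Y[:, k], A[:, k], Phi, Theta, alpha_N[k]) for k in range(L))
```

This is the likelihood term in the posterior above; the inference machinery adds the priors on Θ, A and the hyperparameters.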


Hidden Markov Tree Model

Model the statistical dependencies between wavelet-domain coefficients [CNB98].

Persistence:
A large/small parent node implies a large/small child node with high probability.
A large/small node implies large/small adjacent nodes with high probability.

Mixed Gaussian model:

θ_{s,i} ∼ (1 − π_{s,i}) δ_0 + π_{s,i} N(0, (α_s)^(-1))


Prior Distributions

Prior distribution of noise variance:

α^N_k ∼ Gamma(a_0, b_0)

Prior distribution of A = {aij}:

a_ij ∼ N(μ_ij, α_ij^(-1)),   1 ≤ i ≤ T, 1 ≤ j ≤ L

Prior distributions of Θ:

θ_{s,i} ∼ (1 − π_{s,i}) δ_0 + π_{s,i} N(0, (α_s)^(-1))

with

π_{s,i} = π^r ∼ Beta(e^r_0, f^r_0),           if s = 1,
π_{s,i} = π^{s0} ∼ Beta(e^{s0}_0, f^{s0}_0),  if 2 ≤ s ≤ S and θ_{p(s,i)} = 0,
π_{s,i} = π^{s1} ∼ Beta(e^{s1}_0, f^{s1}_0),  if 2 ≤ s ≤ S and θ_{p(s,i)} ≠ 0,

where p(s,i) denotes the parent of node (s,i), and

α_s ∼ Gamma(c_0, d_0)
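Drawing one sample from this hierarchy is a direct translation of the formulas; a sketch with made-up hyperparameter values, flattening the tree so that the parent of node (s, i) is simply the coefficient at the next-coarser scale (NumPy's `gamma` takes a scale, so the rate parameters b_0, d_0 enter as 1/b_0, 1/d_0):

```python
import numpy as np

rng = np.random.default_rng(3)
S, n = 4, 8                       # wavelet scales, nodes per scale (toy chain tree)
a0, b0, c0, d0 = 1.0, 1.0, 1.0, 1.0
e_r, f_r = 5.0, 1.0               # root scale favors "on"
e_s0, f_s0 = 1.0, 9.0             # zero parent    -> child likely zero
e_s1, f_s1 = 9.0, 1.0             # nonzero parent -> child likely nonzero

alpha_Nk = rng.gamma(a0, 1.0 / b0)        # noise precision ~ Gamma(a0, b0)
alpha = rng.gamma(c0, 1.0 / d0, size=S)   # per-scale precisions ~ Gamma(c0, d0)

theta = np.zeros((S, n))
for s in range(S):
    for i in range(n):
        if s == 0:
            pi = rng.beta(e_r, f_r)               # root scale
        elif theta[s - 1, i] == 0:
            pi = rng.beta(e_s0, f_s0)             # parent is zero
        else:
            pi = rng.beta(e_s1, f_s1)             # parent is nonzero
        # Spike-and-slab draw: zero w.p. (1 - pi), else a Gaussian slab.
        if rng.random() < pi:
            theta[s, i] = rng.normal(0.0, 1.0 / np.sqrt(alpha[s]))
```

The parent-dependent Beta priors are what encode persistence: a nonzero coefficient at a coarse scale makes its descendants likely nonzero too.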


MCMC

The Gibbs sampler draws samples from the following conditional distributions at iteration t:

θ^{s,i}_k(t) ∼ p(θ^{s,i}_k | Y, Φ, A(t−1), α^N(t−1), α^s_k(t−1), π^{s,i}_k(t−1)),

A(t) ∼ p(A | Y, Φ, Θ(t−1), α^N(t−1)),

α^N_k(t) ∼ p(α^N_k | Y_k, Φ, A_k(t−1), Θ(t−1)),

α^s_k(t) ∼ p(α^s_k | {θ^{s,i}_k(t−1)}),

π^{s,i}_k(t) ∼ p(π^{s,i}_k | θ^{s,i}_k(t−1)),

where {θ^{s,i}_k(t)} is the set of wavelet coefficients associated with the kth source at iteration t.

Note: all distributions belong to the exponential family!
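The exponential-family (conjugate) structure is what makes every conditional a standard draw. As a minimal self-contained analogue (not the paper's sampler): Gibbs for a Normal mean μ and Gamma precision τ given i.i.d. data, where both conditionals are conjugate; all prior values are illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)
data = rng.normal(2.0, 0.5, size=500)     # synthetic observations
n, ybar = len(data), data.mean()

# Priors: mu ~ N(0, 1/k0), tau ~ Gamma(a0, rate b0); k0 small = weak prior on mu.
k0, a0, b0 = 1e-3, 1.0, 1.0
mu, tau = 0.0, 1.0
mus, taus = [], []
for t in range(2000):
    # Conditional for mu: Normal with precision (k0 + n*tau) and conjugate mean.
    prec = k0 + n * tau
    mu = rng.normal(n * tau * ybar / prec, 1.0 / np.sqrt(prec))
    # Conditional for tau: Gamma(a0 + n/2, rate b0 + 0.5*sum((y - mu)^2)).
    tau = rng.gamma(a0 + 0.5 * n, 1.0 / (b0 + 0.5 * np.sum((data - mu) ** 2)))
    if t >= 500:                          # discard burn-in
        mus.append(mu)
        taus.append(tau)

mu_hat = float(np.mean(mus))                      # posterior mean of mu, near 2.0
sigma_hat = float(1.0 / np.sqrt(np.mean(taus)))   # implied noise std, near 0.5
```

The compressive-BSS sampler on this slide has the same shape, just with more blocks (Θ, A, and the hyperparameters) cycled in turn.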


One-dimensional BSS | Two-dimensional BSS

One-dimensional BSS

Figure: Bayesian compressive blind separation of one-dimensional signals
[Panels: Original Signals 0–1, Mixture Signals 0–1, Sep-Recovered Signals 0–1, Bayes-Recovered Signals 0–1; horizontal axis: samples 1–500.]

Our proposed method outperforms the separate procedure.


One-dimensional BSS

Figure: Comparisons of recovered wavelet coefficients
[Panels: Original Signals 0–1, Bayes-Recovered Signals 0–1, Sep-Recovered Signals 0–1; horizontal axis: coefficient index 1–500.]

Original signals are sparse

Our proposed method outperforms the separate procedure.


Two-dimensional BSS

Figure: Bayesian compressive blind separation of two images
[Panels: Original Images 0–1, Mixed Images 0–1, Bayes-Recovered Images 0–1, Sep-Recovered Images 0–1.]


Two-dimensional BSS

Figure: Comparisons of the first 256 recovered wavelet coefficients
[Panels: Original Signals 0–1, Bayes-Recovered Signals 0–1, Sep-Recovered Signals 0–1; horizontal axis: coefficient index 1–256.]

Original signals are not truly sparse, only compressible.


Conclusions and Future Work

Conclusions:

We have addressed the blind source separation problem directly from the compressed mixtures obtained from compressive sensing measurements.

Our approach outperforms the existing separate procedure.

Future work:

Improving our proposed method in separation and recovery of nearly sparse signals.

Incorporating dictionary learning into the inference procedure in the 2-D case to obtain better performance.


Main References

Robert Calderbank, Sina Jafarpour, and Robert Schapire.

Compressed learning: Universal sparse dimensionality reduction and learning in the measurement domain. Preprint, 2009.

M. S. Crouse, R. D. Nowak, and R. G. Baraniuk.

Wavelet-based statistical signal processing using hidden Markov models. IEEE Trans. Signal Processing, 46(4):882–902, 1998.

E. J. Candès and T. Tao.

Decoding by linear programming. IEEE Trans. Info. Theory, 51:4203–4215, 2005.

D. L. Donoho.

Compressed sensing. IEEE Trans. Info. Theory, 52(4):1289–1306, 2006.

Marco Duarte, Michael Wakin, and Richard Baraniuk.

Wavelet-domain compressive signal reconstruction using a hidden Markov tree model. 2008.

Lihan He and Lawrence Carin.

Exploiting structure in wavelet-based Bayesian compressive sensing. IEEE Trans. Signal Processing, 57(9):3488–3497, 2009.

Mahieddine M. Ichir and Ali Mohammad-Djafari.

Hidden Markov models for wavelet-based blind source separation. IEEE Trans. Image Processing, 15(7):1887–1899, 2006.


THANK YOU!
