Signal Processing Course : Approximation

Approximation and Coding with Orthogonal Decompositions
Gabriel Peyré
http://www.ceremade.dauphine.fr/~peyre/numerical-tour/


Slides for a course on signal and image processing.

Transcript of Signal Processing Course : Approximation

Page 1: Signal Processing Course : Approximation

Approximation and Coding

with Orthogonal Decompositions

Gabriel Peyré
http://www.ceremade.dauphine.fr/~peyre/numerical-tour/

Page 2: Signal Processing Course : Approximation

Overview

• Approximation and Compression

• Decay of Approximation Error

• Fourier for Smooth Functions

• Wavelet for Piecewise Smooth Functions

• Curvelets and Finite Elements for Cartoons

Page 3: Signal Processing Course : Approximation

Sparse Approximation in a Basis

Page 4: Signal Processing Course : Approximation

Sparse Approximation in a Basis

Page 5: Signal Processing Course : Approximation

Sparse Approximation in a Basis

Page 6: Signal Processing Course : Approximation

Hard Thresholding
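
To make the thresholding step concrete, here is a minimal numpy/scipy sketch of hard thresholding in an orthonormal basis; the orthonormal DCT stands in for the basis $\{\psi_m\}$ of the slides (an assumption, any orthogonal transform would do).

```python
import numpy as np
from scipy.fft import dct, idct

def hard_threshold_approx(f, T):
    """Keep only the coefficients with magnitude >= T and reconstruct."""
    a = dct(f, norm="ortho")          # a[m] = <f, psi_m> in the DCT basis
    a_T = a * (np.abs(a) >= T)        # hard thresholding at level T
    f_M = idct(a_T, norm="ortho")     # reconstruction from the kept coefficients
    return f_M, int(np.count_nonzero(a_T))

f = np.cumsum(np.random.randn(1024))  # an arbitrary test signal
f_M, M = hard_threshold_approx(f, T=2.0)
print(M, "coefficients kept, error^2 =", np.sum((f - f_M) ** 2))
```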

Page 7: Signal Processing Course : Approximation

Approximation Speed

Approximation error decay (usually polynomial).

Page 8: Signal Processing Course : Approximation

Approximation Speed

Approximation error decay (usually polynomial).

Log/log plot of $\log_{10}(\|f - f_M\|)$ against $\log_{10}(M/N)$: approximately an affine curve.
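
A minimal sketch of how the decay exponent can be read off the log/log plot, assuming only numpy/scipy; the orthonormal DCT and the test signal are arbitrary choices.

```python
import numpy as np
from scipy.fft import dct, idct

N = 4096
x = np.linspace(0, 1, N, endpoint=False)
f = np.abs(np.sin(3 * np.pi * x)) ** 1.5        # a smooth signal with a few kinks

a = dct(f, norm="ortho")                        # orthonormal transform
order = np.argsort(np.abs(a))[::-1]             # coefficients by decreasing magnitude

Ms = 2 ** np.arange(3, 11)                      # M = 8 ... 1024
errs = []
for M in Ms:
    a_M = np.zeros_like(a)
    a_M[order[:M]] = a[order[:M]]               # best M-term approximation
    errs.append(np.linalg.norm(f - idct(a_M, norm="ortho")))

# The slope of the affine fit in log/log coordinates estimates the decay exponent.
slope, _ = np.polyfit(np.log10(Ms / N), np.log10(errs), 1)
print("estimated decay exponent (log/log slope):", slope)
```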

Page 9: Signal Processing Course : Approximation

Efficiency of Transforms

Log/log plot of $\log_{10}(\|f - f_M\|)$ against $\log_{10}(M/N)$ for Fourier, DCT, Local DCT and wavelets.

Page 10: Signal Processing Course : Approximation

Overview

• Approximation and Compression

• Decay of Approximation Error

• Fourier for Smooth Functions

• Wavelet for Piecewise Smooth Functions

• Curvelets and Finite Elements for Cartoons

Page 11: Signal Processing Course : Approximation

Compression by Transform-coding

Forward transform: $a[m] = \langle f, \psi_m \rangle \in \mathbb{R}$.

Pipeline: $f \xrightarrow{\ \text{forward transform}\ } a[m]$.

[Figure: image $f$ and a zoom on $f$.]

Page 12: Signal Processing Course : Approximation

Compression by Transform-coding

Forward transform: $a[m] = \langle f, \psi_m \rangle \in \mathbb{R}$.

Quantization: $q[m] = \mathrm{sign}(a[m]) \left\lfloor \frac{|a[m]|}{T} \right\rfloor \in \mathbb{Z}$.

Pipeline: $f \xrightarrow{\ \text{forward transform}\ } a[m] \xrightarrow{\ \text{quantization}\ } q[m]$.

[Figure: image $f$, zoom on $f$; quantizer staircase mapping $a[m]$ (bins of width $T$, ticks at $\pm T, \pm 2T$) to the quantized values $q[m] \in \mathbb{Z}$.]

Page 13: Signal Processing Course : Approximation

Compression by Transform-coding

Forward transform: $a[m] = \langle f, \psi_m \rangle \in \mathbb{R}$.

Quantization: $q[m] = \mathrm{sign}(a[m]) \left\lfloor \frac{|a[m]|}{T} \right\rfloor \in \mathbb{Z}$.

Entropic coding: use the statistical redundancy (many 0's).

Pipeline: $f \xrightarrow{\ \text{forward transform}\ } a[m] \xrightarrow{\ \text{quantization}\ } q[m] \xrightarrow{\ \text{coding}\ } \text{bits}$.

[Figure: image $f$, zoom on $f$; quantizer staircase mapping $a[m]$ (bins of width $T$, ticks at $\pm T, \pm 2T$) to the quantized values $q[m] \in \mathbb{Z}$.]

Page 14: Signal Processing Course : Approximation

Compression by Transform-coding

Forward transform: $a[m] = \langle f, \psi_m \rangle \in \mathbb{R}$.

Quantization: $q[m] = \mathrm{sign}(a[m]) \left\lfloor \frac{|a[m]|}{T} \right\rfloor \in \mathbb{Z}$.

Entropic coding: use the statistical redundancy (many 0's).

Decoding, then dequantization: $q[m] \in \mathbb{Z} \mapsto \tilde a[m]$.

Pipeline: $f \xrightarrow{\ \text{forward transform}\ } a[m] \xrightarrow{\ \text{quantization}\ } q[m] \xrightarrow{\ \text{coding}\ } \text{bits} \xrightarrow{\ \text{decoding}\ } q[m] \xrightarrow{\ \text{dequantization}\ } \tilde a[m]$.

[Figure: image $f$, zoom on $f$; quantizer staircase mapping $a[m]$ (bins of width $T$, ticks at $\pm T, \pm 2T$) to the quantized values $q[m] \in \mathbb{Z}$.]

Page 15: Signal Processing Course : Approximation

Compression by Transform-coding

Forward transform: $a[m] = \langle f, \psi_m \rangle \in \mathbb{R}$.

Quantization: $q[m] = \mathrm{sign}(a[m]) \left\lfloor \frac{|a[m]|}{T} \right\rfloor \in \mathbb{Z}$.

Entropic coding: use the statistical redundancy (many 0's).

Decoding, then dequantization: $q[m] \in \mathbb{Z} \mapsto \tilde a[m]$.

Backward transform: $f_R = \sum_{m \in I_T} \tilde a[m]\, \psi_m$.

Pipeline: $f \xrightarrow{\ \text{forward transform}\ } a[m] \xrightarrow{\ \text{quantization}\ } q[m] \xrightarrow{\ \text{coding}\ } \text{bits} \xrightarrow{\ \text{decoding}\ } q[m] \xrightarrow{\ \text{dequantization}\ } \tilde a[m] \xrightarrow{\ \text{backward transform}\ } f_R$.

[Figure: image $f$, zoom on $f$, and the decoded $f_R$ at $R = 0.2$ bit/pixel; quantizer staircase mapping $a[m]$ (bins of width $T$) to $q[m] \in \mathbb{Z}$.]
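
The whole pipeline above can be sketched in a few lines; this is only an illustration with numpy/scipy, using a 1D signal and the orthonormal DCT in place of the slides' transform, and skipping the entropic coder.

```python
import numpy as np
from scipy.fft import dct, idct

def encode(f, T):
    a = dct(f, norm="ortho")                              # a[m] = <f, psi_m>
    q = np.sign(a) * np.floor(np.abs(a) / T)              # uniform quantization, q[m] in Z
    return q.astype(int)

def decode(q, T):
    a_tilde = np.sign(q) * (np.abs(q) + 0.5) * T          # dequantization (0 stays 0)
    return idct(a_tilde, norm="ortho")                    # f_R = sum_m a_tilde[m] psi_m

T = 0.5
f = np.cos(2 * np.pi * 5 * np.linspace(0, 1, 512))
q = encode(f, T)
f_R = decode(q, T)
print("fraction of zero coefficients:", np.mean(q == 0))  # redundancy exploited by the entropic coder
print("distortion ||f - f_R||^2 =", np.sum((f - f_R) ** 2))
```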

Page 16: Signal Processing Course : Approximation

Thresholding vs. Quantizing

Page 17: Signal Processing Course : Approximation

Non-linear Approximation and Compression

Page 18: Signal Processing Course : Approximation

Non-linear Approximation and Compression

Quantization: $q[m] = \mathrm{sign}(a[m]) \left\lfloor \frac{|a[m]|}{T} \right\rfloor \in \mathbb{Z}$.

Dequantization: $\tilde a[m] = \mathrm{sign}(q[m]) \left( |q[m]| + \tfrac{1}{2} \right) T$

$\Longrightarrow \quad |a[m] - \tilde a[m]| \le \tfrac{T}{2}$.

[Figure: quantizer staircase over the coefficient axis $a[m]$, bins of width $T$, ticks at $\pm T, \pm 2T$.]

Page 19: Signal Processing Course : Approximation

Non-linear Approximation and Compression

Quantization: $q[m] = \mathrm{sign}(a[m]) \left\lfloor \frac{|a[m]|}{T} \right\rfloor \in \mathbb{Z}$; dequantization: $\tilde a[m] = \mathrm{sign}(q[m]) \left( |q[m]| + \tfrac{1}{2} \right) T$, so that $|a[m] - \tilde a[m]| \le T/2$.

Theorem: $\|f - f_R\|^2 \le \|f - f_M\|^2 + M\, T^2/4$, where $f_M$ is the non-linear approximation built from the $M = \#\{m : \tilde a[m] \ne 0\}$ retained coefficients.

Proof sketch:
$$\|f - f_R\|^2 = \sum_m (a[m] - \tilde a[m])^2 \le \sum_{|a[m]| < T} |a[m]|^2 + \sum_{|a[m]| \ge T} \left(\frac{T}{2}\right)^2.$$

[Figure: quantizer staircase over the coefficient axis $a[m]$, bins of width $T$, ticks at $\pm T, \pm 2T$.]
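
A quick numeric sanity check of the stated bound, done directly on coefficients (equivalent in any orthonormal basis); numpy only, with a hypothetical heavy-tailed coefficient sequence.

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.standard_normal(10_000) * rng.random(10_000) ** 4   # synthetic sparse-ish coefficients
T = 0.1

q = np.sign(a) * np.floor(np.abs(a) / T)                    # quantization
a_tilde = np.sign(q) * (np.abs(q) + 0.5) * T                # dequantization
M = int(np.count_nonzero(a_tilde))                          # = #{m : |a[m]| >= T}

err_R = np.sum((a - a_tilde) ** 2)                          # ||f - f_R||^2
err_M = np.sum(a[np.abs(a) < T] ** 2)                       # ||f - f_M||^2 (hard thresholding at T)
print(err_R <= err_M + M * T**2 / 4)                        # should print True
```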

Page 20: Signal Processing Course : Approximation

A Naive Support Coding Approach

Coding: relate the number of bits $R$ to the number of coefficients $M$.

(H1) The ordered coefficients $|\langle f, \psi_m \rangle|$ decay like $m^{-\frac{\alpha+1}{2}}$
$\Longrightarrow \|f - f_M\|^2 \sim M^{-\alpha}$.

Page 21: Signal Processing Course : Approximation

A Naive Support Coding Approach

Coding: relate the number of bits $R$ to the number of coefficients $M$.

(H1) The ordered coefficients $|\langle f, \psi_m \rangle|$ decay like $m^{-\frac{\alpha+1}{2}}$
$\Longrightarrow \|f - f_M\|^2 \sim M^{-\alpha}$.

(H2) $f \in \mathbb{R}^N$ is sampled from $f_0$, with sampling error $\|f - f_0\|^2 \sim N^{-\beta}$; to ensure $\|f - f_0\|^2 \le \|f - f_M\|^2$, take $M \sim N^{\beta/\alpha}$.

Page 22: Signal Processing Course : Approximation

A Naive Support Coding Approach

Coding: relate the number of bits $R$ to the number of coefficients $M$.

(H1) The ordered coefficients $|\langle f, \psi_m \rangle|$ decay like $m^{-\frac{\alpha+1}{2}}$
$\Longrightarrow \|f - f_M\|^2 \sim M^{-\alpha}$.

(H2) $f \in \mathbb{R}^N$ is sampled from $f_0$, with sampling error $\|f - f_0\|^2 \sim N^{-\beta}$; to ensure $\|f - f_0\|^2 \le \|f - f_M\|^2$, take $M \sim N^{\beta/\alpha}$.

Simple coding strategy: $R = R_{\mathrm{val}} + R_{\mathrm{ind}}$
$\Longrightarrow R_{\mathrm{ind}} \approx \log_2 \binom{N}{M} = O(M \log_2(N/M)) = O(M \log_2(M))$.

Page 23: Signal Processing Course : Approximation

A Naive Support Coding Approach

Coding: relate the number of bits $R$ to the number of coefficients $M$.

(H1) The ordered coefficients $|\langle f, \psi_m \rangle|$ decay like $m^{-\frac{\alpha+1}{2}}$
$\Longrightarrow \|f - f_M\|^2 \sim M^{-\alpha}$.

(H2) $f \in \mathbb{R}^N$ is sampled from $f_0$, with sampling error $\|f - f_0\|^2 \sim N^{-\beta}$; to ensure $\|f - f_0\|^2 \le \|f - f_M\|^2$, take $M \sim N^{\beta/\alpha}$.

Simple coding strategy: $R = R_{\mathrm{val}} + R_{\mathrm{ind}}$
$\Longrightarrow R_{\mathrm{ind}} \approx \log_2 \binom{N}{M} = O(M \log_2(N/M)) = O(M \log_2(M))$
$\Longrightarrow R_{\mathrm{val}} = O(M\, |\log_2(T)|) = O(M \log_2(M))$.

Page 24: Signal Processing Course : Approximation

A Naive Support Coding Approach

Coding: relate the number of bits $R$ to the number of coefficients $M$.

(H1) The ordered coefficients $|\langle f, \psi_m \rangle|$ decay like $m^{-\frac{\alpha+1}{2}}$
$\Longrightarrow \|f - f_M\|^2 \sim M^{-\alpha}$.

(H2) $f \in \mathbb{R}^N$ is sampled from $f_0$, with sampling error $\|f - f_0\|^2 \sim N^{-\beta}$; to ensure $\|f - f_0\|^2 \le \|f - f_M\|^2$, take $M \sim N^{\beta/\alpha}$.

Simple coding strategy: $R = R_{\mathrm{val}} + R_{\mathrm{ind}}$
$\Longrightarrow R_{\mathrm{ind}} \approx \log_2 \binom{N}{M} = O(M \log_2(N/M)) = O(M \log_2(M))$
$\Longrightarrow R_{\mathrm{val}} = O(M\, |\log_2(T)|) = O(M \log_2(M))$.

Theorem: under hypotheses (H1) and (H2), $\|f - f_R\|^2 = O(R^{-\alpha} \log^{\alpha}(R))$.
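
The $\log_2 \binom{N}{M} = O(M \log_2(N/M))$ behavior of the index cost is easy to check numerically; a small sketch with the Python standard library only (the values of $N$ and $M$ are arbitrary).

```python
import math

N = 2 ** 16
for M in (16, 256, 4096):
    r_exact = math.log2(math.comb(N, M))          # bits to code the support exactly
    r_bound = M * math.log2(N / M)                # the M*log2(N/M) behavior
    print(f"M={M:5d}  log2 C(N,M)={r_exact:10.1f}  M*log2(N/M)={r_bound:10.1f}")
```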

Page 25: Signal Processing Course : Approximation

Entropic Coders

Page 26: Signal Processing Course : Approximation

Entropic Coders

Page 27: Signal Processing Course : Approximation

Entropic Coders

Page 28: Signal Processing Course : Approximation

JPEG-2000 Overview

Page 29: Signal Processing Course : Approximation

JPEG-2000 Overview

Page 30: Signal Processing Course : Approximation

JPEG-2000 Overview

Page 31: Signal Processing Course : Approximation

JPEG-2000 Overview

Page 32: Signal Processing Course : Approximation

JPEG-2000 Overview

Page 33: Signal Processing Course : Approximation

Contextual Coding

[Figure: a code block (of given width) scanned with a $3 \times 3$ context window.]

Page 34: Signal Processing Course : Approximation

JPEG-2000 vs. JPEG at 0.2 bit/pixel

Page 35: Signal Processing Course : Approximation

Overview

• Approximation and Compression

• Decay of Approximation Error

• Fourier for Smooth Functions

• Wavelet for Piecewise Smooth Functions

• Curvelets and Finite Elements for Cartoons

Page 36: Signal Processing Course : Approximation

1D Fourier Approximation

Page 37: Signal Processing Course : Approximation

1D Fourier Approximation

Page 38: Signal Processing Course : Approximation

Sobolev and Fourier

Page 39: Signal Processing Course : Approximation

Singularities and Fourier

[Figure: 1D signals on $[0, 1]$ illustrating the behavior of Fourier approximation near singularities.]

Page 40: Signal Processing Course : Approximation

Sobolev for Images

Page 41: Signal Processing Course : Approximation

Sobolev for Images

Page 42: Signal Processing Course : Approximation

Overview

• Approximation and Compression

• Decay of Approximation Error

• Fourier for Smooth Functions

• Wavelet for Piecewise Smooth Functions

• Curvelets and Finite Elements for Cartoons

Page 43: Signal Processing Course : Approximation

Vanishing moments: $\forall\, k < p, \quad \int \psi(x)\, x^k\, dx = 0$.

[Figure: a signal $f(x)$ and the magnitude of its wavelet coefficients; wavelets with $p = 2, 3, 4$ vanishing moments.]
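
A quick numeric check of the vanishing-moment property, assuming PyWavelets (pywt) is available; Daubechies wavelets db$p$ have exactly $p$ vanishing moments, matching the $p = 2, 3, 4$ shown on the slide.

```python
import numpy as np
import pywt

for p in (2, 3, 4):
    # sampled scaling and wavelet functions on their support
    phi, psi, x = pywt.Wavelet(f"db{p}").wavefun(level=12)
    dx = x[1] - x[0]
    moments = [np.sum(psi * x**k) * dx for k in range(p + 1)]
    print(f"db{p}:", " ".join(f"m{k}={m:+.2e}" for k, m in enumerate(moments)))
    # expected: m0 ... m(p-1) close to 0, the p-th moment noticeably nonzero
```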

Page 44: Signal Processing Course : Approximation

Vanishing moments: $\forall\, k < p, \quad \int \psi(x)\, x^k\, dx = 0$.

If $f$ is $C^\alpha$ on $\mathrm{supp}(\psi_{j,n})$ and $p \ge \alpha$, write $f(x) = P(x - 2^j n) + R(x - 2^j n) = P(2^j t) + R(2^j t)$, where $t = \frac{x - 2^j n}{2^j}$ and $P$ is a polynomial of degree $< p$. Then
$$\langle f, \psi_{j,n} \rangle = \frac{1}{2^{jd/2}} \int f(x)\, \psi\!\left(\frac{x - 2^j n}{2^j}\right) dx = 2^{jd/2} \int R(2^j t)\, \psi(t)\, dt$$
$$\Longrightarrow \quad |\langle f, \psi_{j,n} \rangle| \le C_f\, \|\psi\|_1\, 2^{j(\alpha + d/2)}.$$

[Figure: a signal $f(x)$ and the magnitude of its wavelet coefficients; wavelets with $p = 2, 3, 4$ vanishing moments.]

Page 45: Signal Processing Course : Approximation

Vanishing moments: $\forall\, k < p, \quad \int \psi(x)\, x^k\, dx = 0$.

If $f$ is $C^\alpha$ on $\mathrm{supp}(\psi_{j,n})$ and $p \ge \alpha$, write $f(x) = P(x - 2^j n) + R(x - 2^j n) = P(2^j t) + R(2^j t)$, where $t = \frac{x - 2^j n}{2^j}$ and $P$ is a polynomial of degree $< p$. Then
$$\langle f, \psi_{j,n} \rangle = \frac{1}{2^{jd/2}} \int f(x)\, \psi\!\left(\frac{x - 2^j n}{2^j}\right) dx = 2^{jd/2} \int R(2^j t)\, \psi(t)\, dt$$
$$\Longrightarrow \quad |\langle f, \psi_{j,n} \rangle| \le C_f\, \|\psi\|_1\, 2^{j(\alpha + d/2)}.$$

If $f$ is only bounded, one has instead $|\langle f, \psi_{j,n} \rangle| \le \|f\|_\infty\, \|\psi\|_1\, 2^{jd/2}$.

[Figure: a signal $f(x)$ and the magnitude of its wavelet coefficients; wavelets with $p = 2, 3, 4$ vanishing moments.]

Page 46: Signal Processing Course : Approximation

1D Wavelet Coefficient Behavior

[Figure: a piecewise smooth signal on $[0, 1]$ and its wavelet coefficients across scales.]

Page 47: Signal Processing Course : Approximation

1D Wavelet Coefficient Behavior

If $f$ is $C^\alpha$ on $\mathrm{supp}(\psi_{j,n})$, then $|\langle f, \psi_{j,n} \rangle| \le 2^{j(\alpha + 1/2)}\, \|f\|_{C^\alpha}\, \|\psi\|_1$.

[Figure: a piecewise smooth signal on $[0, 1]$ and its wavelet coefficients across scales.]

Page 48: Signal Processing Course : Approximation

1D Wavelet Coefficient Behavior

If $f$ is $C^\alpha$ on $\mathrm{supp}(\psi_{j,n})$, then $|\langle f, \psi_{j,n} \rangle| \le 2^{j(\alpha + 1/2)}\, \|f\|_{C^\alpha}\, \|\psi\|_1$.

If $f$ is only bounded (e.g. around a singularity), then $|\langle f, \psi_{j,n} \rangle| \le 2^{j/2}\, \|f\|_\infty\, \|\psi\|_1$.

[Figure: a piecewise smooth signal on $[0, 1]$ and its wavelet coefficients across scales.]

Page 49: Signal Processing Course : Approximation

Piecewise Regular Functions in 1D

Theorem: If $f$ is $C^\alpha$ outside a finite set of discontinuities, then
$$\epsilon_n[M] = \|f - f_M\|^2 = \begin{cases} O(M^{-1}) & \text{(Fourier)}, \\ O(M^{-2\alpha}) & \text{(wavelets)}. \end{cases}$$

For Fourier, linear and non-linear approximation give the same rate: sub-optimal.
For wavelets, non-linear approximation improves on linear: optimal.
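
A sketch illustrating the theorem's gap between Fourier and wavelets on a piecewise smooth 1D signal, assuming numpy and PyWavelets (pywt); the test signal and parameters are arbitrary.

```python
import numpy as np
import pywt

N = 2048
x = np.linspace(0, 1, N, endpoint=False)
f = np.sin(4 * np.pi * x) + (x > 0.37) - 0.8 * (x > 0.81)   # smooth part plus two jumps

def keep_largest(c, M):
    """Zero all but the M largest-magnitude entries of c."""
    out = np.zeros_like(c)
    idx = np.argsort(np.abs(c))[-M:]
    out[idx] = c[idx]
    return out

M = 64

# Fourier: orthonormal FFT; the real part is taken after discarding coefficients.
cf = np.fft.fft(f, norm="ortho")
fM_fourier = np.real(np.fft.ifft(keep_largest(cf, M), norm="ortho"))

# Wavelets: orthogonal Daubechies transform via pywt.
coeffs = pywt.wavedec(f, "db4", mode="periodization")
arr, slices = pywt.coeffs_to_array(coeffs)
fM_wav = pywt.waverec(
    pywt.array_to_coeffs(keep_largest(arr, M), slices, output_format="wavedec"),
    "db4", mode="periodization")

print("Fourier error^2:", np.sum((f - fM_fourier) ** 2))
print("Wavelet error^2:", np.sum((f - fM_wav) ** 2))
```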

Page 50: Signal Processing Course : Approximation

Examples of 1D Approximations

[Figure: examples of 1D non-linear approximations on $[0, 1]$ and the corresponding log/log error decay plot.]

Page 51: Signal Processing Course : Approximation

Localizing the Singular Support

Singular support: $S = \{x_1, x_2\}$.

$C_j$: coefficients at scale $2^j$ whose support meets a singularity, so $|C_j| \le K\, |S| = \text{constant}$.

[Figure: $f(x)$ with singularities at $x_1, x_2$ and its wavelet coefficients across scales; large coefficients cluster above the singular support, small coefficients ($|\langle f, \psi_{j,n} \rangle| < T$) elsewhere; cut-off scales $j_1$ and $j_2$.]

Page 52: Signal Processing Course : Approximation

Localizing the Singular Support

Singular support: $S = \{x_1, x_2\}$.

$C_j$: coefficients at scale $2^j$ whose support meets a singularity, so $|C_j| \le K\, |S| = \text{constant}$.

Coefficient behavior:
Regular, $n \in C_j^c$: $|\langle f, \psi_{j,n} \rangle| \le C\, 2^{j(\alpha + 1/2)}$
Singular, $n \in C_j$: $|\langle f, \psi_{j,n} \rangle| \le C\, 2^{j/2}$

[Figure: $f(x)$ with singularities at $x_1, x_2$ and its wavelet coefficients across scales; large coefficients cluster above the singular support, small coefficients ($|\langle f, \psi_{j,n} \rangle| < T$) elsewhere; cut-off scales $j_1$ and $j_2$.]

Page 53: Signal Processing Course : Approximation

Localizing the Singular Support

Singular support: $S = \{x_1, x_2\}$.

$C_j$: coefficients at scale $2^j$ whose support meets a singularity, so $|C_j| \le K\, |S| = \text{constant}$.

Coefficient behavior:
Regular, $n \in C_j^c$: $|\langle f, \psi_{j,n} \rangle| \le C\, 2^{j(\alpha + 1/2)}$
Singular, $n \in C_j$: $|\langle f, \psi_{j,n} \rangle| \le C\, 2^{j/2}$

Cut-off scales (depend on $T$):
Regular coefficients fall below $T$ for $2^j < 2^{j_1} = (T/C)^{\frac{1}{\alpha + 1/2}}$
Singular coefficients fall below $T$ for $2^j < 2^{j_2} = (T/C)^{2}$

[Figure: $f(x)$ with singularities at $x_1, x_2$ and its wavelet coefficients across scales; large coefficients cluster above the singular support, small coefficients ($|\langle f, \psi_{j,n} \rangle| < T$) elsewhere; cut-off scales $j_1$ and $j_2$.]

Page 54: Signal Processing Course : Approximation

Localizing the Singular Support

Singular support: $S = \{x_1, x_2\}$.

$C_j$: coefficients at scale $2^j$ whose support meets a singularity, so $|C_j| \le K\, |S| = \text{constant}$.

Coefficient behavior:
Regular, $n \in C_j^c$: $|\langle f, \psi_{j,n} \rangle| \le C\, 2^{j(\alpha + 1/2)}$
Singular, $n \in C_j$: $|\langle f, \psi_{j,n} \rangle| \le C\, 2^{j/2}$

Cut-off scales (depend on $T$):
Regular coefficients fall below $T$ for $2^j < 2^{j_1} = (T/C)^{\frac{1}{\alpha + 1/2}}$
Singular coefficients fall below $T$ for $2^j < 2^{j_2} = (T/C)^{2}$

Hand-made approximation:
$$\tilde f_M = \sum_{j \ge j_2} \sum_{n \in C_j} \langle f, \psi_{j,n} \rangle\, \psi_{j,n} + \sum_{j \ge j_1} \sum_{n \in C_j^c} \langle f, \psi_{j,n} \rangle\, \psi_{j,n}$$

[Figure: $f(x)$ with singularities at $x_1, x_2$ and its wavelet coefficients across scales; large coefficients ($|\langle f, \psi_{j,n} \rangle| \ge T$) near the singular support, small coefficients elsewhere; cut-off scales $j_1$ and $j_2$.]

Page 55: Signal Processing Course : Approximation

Computing Error and #Coefficients

$$\|f - f_M\|^2 \le \|f - \tilde f_M\|^2 \le \underbrace{\sum_{j < j_2,\, n \in C_j} |\langle f, \psi_{j,n} \rangle|^2}_{\text{singularities}} + \underbrace{\sum_{j < j_1,\, n \in C_j^c} |\langle f, \psi_{j,n} \rangle|^2}_{\text{regular part}}$$
$$\le \sum_{j < j_2} (K|S|)\, C^2 2^{j} + \sum_{j < j_1} 2^{-j}\, C^2 2^{j(2\alpha + 1)} = O\big(2^{j_2} + 2^{2\alpha j_1}\big) = O\big(T^2 + T^{\frac{2\alpha}{\alpha + 1/2}}\big) = O\big(T^{\frac{2\alpha}{\alpha + 1/2}}\big)$$

using $|\langle f, \psi_{j,n} \rangle| \le C 2^{j(\alpha + 1/2)}$ for $n \in C_j^c$, $|\langle f, \psi_{j,n} \rangle| \le C 2^{j/2}$ for $n \in C_j$, $|C_j| \le K|S|$, $2^{j_1} = (T/C)^{\frac{1}{\alpha + 1/2}}$ and $2^{j_2} = (T/C)^2$.

[Figure: $f(x)$, its wavelet coefficients and the cut-off scales $j_1, j_2$.]

Page 56: Signal Processing Course : Approximation

Computing Error and #Coefficients

$$\|f - f_M\|^2 \le \|f - \tilde f_M\|^2 \le \underbrace{\sum_{j < j_2,\, n \in C_j} |\langle f, \psi_{j,n} \rangle|^2}_{\text{singularities}} + \underbrace{\sum_{j < j_1,\, n \in C_j^c} |\langle f, \psi_{j,n} \rangle|^2}_{\text{regular part}}$$
$$\le \sum_{j < j_2} (K|S|)\, C^2 2^{j} + \sum_{j < j_1} 2^{-j}\, C^2 2^{j(2\alpha + 1)} = O\big(2^{j_2} + 2^{2\alpha j_1}\big) = O\big(T^2 + T^{\frac{2\alpha}{\alpha + 1/2}}\big) = O\big(T^{\frac{2\alpha}{\alpha + 1/2}}\big)$$

$$M \le \sum_{j \ge j_2} |C_j| + \sum_{j \ge j_1} |C_j^c| \le \sum_{j \ge j_2} K|S| + \sum_{j \ge j_1} 2^{-j} = O\big(|\log(T)| + T^{-\frac{1}{\alpha + 1/2}}\big) = O\big(T^{-\frac{1}{\alpha + 1/2}}\big)$$

using $|\langle f, \psi_{j,n} \rangle| \le C 2^{j(\alpha + 1/2)}$ for $n \in C_j^c$, $|\langle f, \psi_{j,n} \rangle| \le C 2^{j/2}$ for $n \in C_j$, $|C_j| \le K|S|$, $2^{j_1} = (T/C)^{\frac{1}{\alpha + 1/2}}$ and $2^{j_2} = (T/C)^2$.

[Figure: $f(x)$, its wavelet coefficients and the cut-off scales $j_1, j_2$.]

Page 57: Signal Processing Course : Approximation

Computing Error and #Coefficients

$$\|f - f_M\|^2 \le \|f - \tilde f_M\|^2 \le \underbrace{\sum_{j < j_2,\, n \in C_j} |\langle f, \psi_{j,n} \rangle|^2}_{\text{singularities}} + \underbrace{\sum_{j < j_1,\, n \in C_j^c} |\langle f, \psi_{j,n} \rangle|^2}_{\text{regular part}}$$
$$\le \sum_{j < j_2} (K|S|)\, C^2 2^{j} + \sum_{j < j_1} 2^{-j}\, C^2 2^{j(2\alpha + 1)} = O\big(2^{j_2} + 2^{2\alpha j_1}\big) = O\big(T^2 + T^{\frac{2\alpha}{\alpha + 1/2}}\big) = O\big(T^{\frac{2\alpha}{\alpha + 1/2}}\big)$$

$$M \le \sum_{j \ge j_2} |C_j| + \sum_{j \ge j_1} |C_j^c| \le \sum_{j \ge j_2} K|S| + \sum_{j \ge j_1} 2^{-j} = O\big(|\log(T)| + T^{-\frac{1}{\alpha + 1/2}}\big) = O\big(T^{-\frac{1}{\alpha + 1/2}}\big)$$

Combining the two estimates: $\|f - f_M\|^2 = O(M^{-2\alpha})$.

[Figure: $f(x)$, its wavelet coefficients and the cut-off scales $j_1, j_2$.]

Page 58: Signal Processing Course : Approximation

2D Wavelet Approximation

If $f$ is $C^\alpha$ on $\mathrm{supp}(\psi_{j,n})$, then $|\langle f, \psi_{j,n} \rangle| \le 2^{j(\alpha + 1)}\, \|f\|_{C^\alpha}\, \|\psi\|_1$.

If $f$ is only bounded (e.g. around a singularity), then $|\langle f, \psi_{j,n} \rangle| \le 2^{j}\, \|f\|_\infty\, \|\psi\|_1$.

Page 59: Signal Processing Course : Approximation

Piecewise Regular Functions in 2D

Theorem: If $f$ is $C^\alpha$ outside a set of edge curves of finite length, then
$$\epsilon_n[M] = \|f - f_M\|^2 = \begin{cases} O(M^{-1/2}) & \text{(Fourier)}, \\ O(M^{-1}) & \text{(wavelets)}. \end{cases}$$

Fourier and wavelets are both sub-optimal for such images.
Wavelets: the same result holds for BV functions, for which it is optimal.
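
A 2D illustration of the wavelet rate, assuming numpy and PyWavelets (pywt); the synthetic cartoon image (smooth background plus a disc) is an arbitrary choice.

```python
import numpy as np
import pywt

n = 256
yy, xx = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n), indexing="ij")
f = np.where((xx - 0.5) ** 2 + (yy - 0.5) ** 2 < 0.09, 1.0, 0.0) + 0.3 * np.sin(3 * xx + 2 * yy)

coeffs = pywt.wavedec2(f, "db4", mode="periodization")
arr, slices = pywt.coeffs_to_array(coeffs)

for M in (500, 2000, 8000):
    keep = np.zeros_like(arr)
    idx = np.unravel_index(np.argsort(np.abs(arr), axis=None)[-M:], arr.shape)
    keep[idx] = arr[idx]                                  # best M-term wavelet approximation
    fM = pywt.waverec2(pywt.array_to_coeffs(keep, slices, output_format="wavedec2"),
                       "db4", mode="periodization")
    err2 = np.sum((f - fM) ** 2)
    print(f"M={M:5d}  error^2={err2:.3f}  M*error^2={M * err2:.1f}")
# if the O(1/M) rate holds, M * error^2 should stay roughly of the same order
```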

Page 60: Signal Processing Course : Approximation

Example of 2D Approximations

Page 61: Signal Processing Course : Approximation

Localizing the Singular Support in 2D

Edge set $S$ with $\mathrm{Length}(S) = L$.

$C_j$: coefficients at scale $2^j$ whose support meets an edge, so $|C_j| \le L K\, 2^{-j} \ne \text{constant}$.

[Figure: a cartoon image $f(x, y)$ and its wavelet coefficients across scales; cut-off scales $j_1$ and $j_2$.]

Page 62: Signal Processing Course : Approximation

Localizing the Singular Support in 2D

Edge set $S$ with $\mathrm{Length}(S) = L$.

$C_j$: coefficients at scale $2^j$ whose support meets an edge, so $|C_j| \le L K\, 2^{-j} \ne \text{constant}$.

Coefficient behavior:
Regular, $n \in C_j^c$: $|\langle f, \psi_{j,n} \rangle| \le C\, 2^{j(\alpha + 1)}$
Singular, $n \in C_j$: $|\langle f, \psi_{j,n} \rangle| \le C\, 2^{j}$

[Figure: a cartoon image $f(x, y)$ and its wavelet coefficients across scales; cut-off scales $j_1$ and $j_2$.]

Page 63: Signal Processing Course : Approximation

Localizing the Singular Support in 2D

Edge set $S$ with $\mathrm{Length}(S) = L$.

$C_j$: coefficients at scale $2^j$ whose support meets an edge, so $|C_j| \le L K\, 2^{-j} \ne \text{constant}$.

Coefficient behavior:
Regular, $n \in C_j^c$: $|\langle f, \psi_{j,n} \rangle| \le C\, 2^{j(\alpha + 1)}$
Singular, $n \in C_j$: $|\langle f, \psi_{j,n} \rangle| \le C\, 2^{j}$

Cut-off scales (depend on $T$):
Regular coefficients fall below $T$ for $2^j < 2^{j_1} = (T/C)^{\frac{1}{\alpha + 1}}$
Singular coefficients fall below $T$ for $2^j < 2^{j_2} = T/C$

[Figure: a cartoon image $f(x, y)$ and its wavelet coefficients across scales; cut-off scales $j_1$ and $j_2$.]

Page 64: Signal Processing Course : Approximation

Localizing the Singular Support in 2D

Edge set $S$ with $\mathrm{Length}(S) = L$.

$C_j$: coefficients at scale $2^j$ whose support meets an edge, so $|C_j| \le L K\, 2^{-j} \ne \text{constant}$.

Coefficient behavior:
Regular, $n \in C_j^c$: $|\langle f, \psi_{j,n} \rangle| \le C\, 2^{j(\alpha + 1)}$
Singular, $n \in C_j$: $|\langle f, \psi_{j,n} \rangle| \le C\, 2^{j}$

Cut-off scales (depend on $T$):
Regular coefficients fall below $T$ for $2^j < 2^{j_1} = (T/C)^{\frac{1}{\alpha + 1}}$
Singular coefficients fall below $T$ for $2^j < 2^{j_2} = T/C$

Hand-made approximation:
$$\tilde f_M = \sum_{j \ge j_2} \sum_{n \in C_j} \langle f, \psi_{j,n} \rangle\, \psi_{j,n} + \sum_{j \ge j_1} \sum_{n \in C_j^c} \langle f, \psi_{j,n} \rangle\, \psi_{j,n}$$

[Figure: a cartoon image $f(x, y)$ and its wavelet coefficients across scales; cut-off scales $j_1$ and $j_2$.]

Page 65: Signal Processing Course : Approximation

Computing Error and #Coefficients

$$\|f - f_M\|^2 \le \|f - \tilde f_M\|^2 \le \underbrace{\sum_{j < j_2,\, n \in C_j} |\langle f, \psi_{j,n} \rangle|^2}_{\text{singularities}} + \underbrace{\sum_{j < j_1,\, n \in C_j^c} |\langle f, \psi_{j,n} \rangle|^2}_{\text{regular part}}$$
$$\le \sum_{j < j_2} L K 2^{-j}\, C^2 2^{2j} + \sum_{j < j_1} 2^{-2j}\, C^2 2^{2j(\alpha + 1)} = O\big(2^{j_2} + 2^{2\alpha j_1}\big) = O\big(T + T^{\frac{2\alpha}{\alpha + 1}}\big) = O(T)$$

using $|\langle f, \psi_{j,n} \rangle| \le C 2^{j(\alpha + 1)}$ for $n \in C_j^c$, $|\langle f, \psi_{j,n} \rangle| \le C 2^{j}$ for $n \in C_j$, $|C_j| \le L K 2^{-j}$, $2^{j_1} = (T/C)^{\frac{1}{\alpha + 1}}$ and $2^{j_2} = T/C$.

[Figure: cartoon image $f(x, y)$, its wavelet coefficients and the cut-off scales $j_1, j_2$.]

Page 66: Signal Processing Course : Approximation

Computing Error and #Coefficients

$$\|f - f_M\|^2 \le \|f - \tilde f_M\|^2 \le \underbrace{\sum_{j < j_2,\, n \in C_j} |\langle f, \psi_{j,n} \rangle|^2}_{\text{singularities}} + \underbrace{\sum_{j < j_1,\, n \in C_j^c} |\langle f, \psi_{j,n} \rangle|^2}_{\text{regular part}}$$
$$\le \sum_{j < j_2} L K 2^{-j}\, C^2 2^{2j} + \sum_{j < j_1} 2^{-2j}\, C^2 2^{2j(\alpha + 1)} = O\big(2^{j_2} + 2^{2\alpha j_1}\big) = O\big(T + T^{\frac{2\alpha}{\alpha + 1}}\big) = O(T)$$

$$M \le \sum_{j \ge j_2} |C_j| + \sum_{j \ge j_1} |C_j^c| \le \sum_{j \ge j_2} L K 2^{-j} + \sum_{j \ge j_1} 2^{-2j} = O\big(T^{-1} + T^{-\frac{2}{\alpha + 1}}\big) = O(T^{-1})$$

using $|\langle f, \psi_{j,n} \rangle| \le C 2^{j(\alpha + 1)}$ for $n \in C_j^c$, $|\langle f, \psi_{j,n} \rangle| \le C 2^{j}$ for $n \in C_j$, $|C_j| \le L K 2^{-j}$, $2^{j_1} = (T/C)^{\frac{1}{\alpha + 1}}$ and $2^{j_2} = T/C$.

[Figure: cartoon image $f(x, y)$, its wavelet coefficients and the cut-off scales $j_1, j_2$.]

Page 67: Signal Processing Course : Approximation

Computing Error and #Coefficients

$$\|f - f_M\|^2 \le \|f - \tilde f_M\|^2 \le \underbrace{\sum_{j < j_2,\, n \in C_j} |\langle f, \psi_{j,n} \rangle|^2}_{\text{singularities}} + \underbrace{\sum_{j < j_1,\, n \in C_j^c} |\langle f, \psi_{j,n} \rangle|^2}_{\text{regular part}}$$
$$\le \sum_{j < j_2} L K 2^{-j}\, C^2 2^{2j} + \sum_{j < j_1} 2^{-2j}\, C^2 2^{2j(\alpha + 1)} = O\big(2^{j_2} + 2^{2\alpha j_1}\big) = O\big(T + T^{\frac{2\alpha}{\alpha + 1}}\big) = O(T)$$

$$M \le \sum_{j \ge j_2} |C_j| + \sum_{j \ge j_1} |C_j^c| \le \sum_{j \ge j_2} L K 2^{-j} + \sum_{j \ge j_1} 2^{-2j} = O\big(T^{-1} + T^{-\frac{2}{\alpha + 1}}\big) = O(T^{-1})$$

Combining the two estimates: $\|f - f_M\|^2 = O(M^{-1})$.

[Figure: cartoon image $f(x, y)$, its wavelet coefficients and the cut-off scales $j_1, j_2$.]

Page 68: Signal Processing Course : Approximation

Overview

• Approximation and Compression

• Decay of Approximation Error

• Fourier for Smooth Functions

• Wavelet for Piecewise Smooth Functions

• Curvelets and Finite Elements for Cartoons

Page 69: Signal Processing Course : Approximation

Geometrically Regular Images

Geometric image model: $f$ is $C^\alpha$ outside a set of $C^\alpha$ edge curves.

BV image: $\int |\nabla f| < +\infty$, i.e. the level sets have finite length.
Geometric image: the level sets are regular curves.

Geometry = cartoon image, with sharp or smoothed edges.

Page 70: Signal Processing Course : Approximation

Geometric Construction: Finite Elements

Approximation of $f$, $C^2$ outside $C^2$ edges.

Piecewise linear approximation on $M$ triangles: $f_M$.

Page 71: Signal Processing Course : Approximation

Geometric Construction: Finite Elements

Approximation of $f$, $C^2$ outside $C^2$ edges.

Piecewise linear approximation on $M$ triangles: $f_M$.

Regular areas: $\sim M/2$ equilateral triangles of width $\sim M^{-1/2}$.

Page 72: Signal Processing Course : Approximation

Geometric Construction: Finite Elements

Approximation of $f$, $C^2$ outside $C^2$ edges.

Piecewise linear approximation on $M$ triangles: $f_M$.

Regular areas: $\sim M/2$ equilateral triangles of width $\sim M^{-1/2}$.

Singular areas: $\sim M/2$ anisotropic triangles.

Page 73: Signal Processing Course : Approximation

Geometric Construction: Finite Elements

Approximation of $f$, $C^2$ outside $C^2$ edges.

Piecewise linear approximation on $M$ triangles: $f_M$.

Regular areas: $\sim M/2$ equilateral triangles of width $\sim M^{-1/2}$.

Singular areas: $\sim M/2$ anisotropic triangles.

Theorem: If $f$ is $C^2$ outside a set of $C^2$ contours, then for an adapted triangulation $\|f - f_M\|^2 = O(M^{-2})$.

In practice it is difficult to build such efficient approximations: no optimal strategies are known (greedy solutions).

Page 74: Signal Processing Course : Approximation

Greedy Triangulation Optimization [Bougleux, Peyré, Cohen, ECCV'08]

Page 75: Signal Processing Course : Approximation

Curvelet Atoms [Candès, Donoho] [Candès, Demanet, Ying, Donoho]

Parabolic dyadic scaling: "width $\approx$ length$^2$".

Rotation: the atoms are also indexed by an orientation.

Page 76: Signal Processing Course : Approximation

Curvelet Tight Frame

Spatial sampling and angular sampling of the atoms yield a tight frame of $L^2(\mathbb{R}^2)$.

Page 77: Signal Processing Course : Approximation

Curvelet Approximation

Discrete curvelets: $O(N \log(N))$ algorithm.

Redundancy $\approx 5$ $\Longrightarrow$ not efficient for compression.

$M$-term curvelet approximation:

Theorem: If $f$ is $C^2$ outside a set of $C^2$ edges, then $\|f - f_M\|^2 = O(M^{-2} (\log M)^3)$.

Page 78: Signal Processing Course : Approximation

Works on elongated edges.

Works also on locally parallel textures!

Curvelets Denoising

Page 79: Signal Processing Course : Approximation

Conclusion

Page 80: Signal Processing Course : Approximation

Conclusion

Page 81: Signal Processing Course : Approximation

Conclusion


Page 82: Signal Processing Course : Approximation

Conclusion
