Post on 15-Feb-2021
Contents

1 The Continuous Wavelet Transform
  1.1 The continuous wavelet transform (CWT)
  1.2 Discretisation of the CWT
2 Stationary wavelet transform or redundant wavelet transform or a trous algorithm
  2.1 Reconstruction
  2.2 Properties
  2.3 RWT and frames
3 2D Wavelet transform
4 Polyphase
  4.1 The lazy wavelet transform
5 Polyphase in Z-domain
6 Lifting
  6.1 In place calculation of the Haar transform
  6.2 Split, predict, update
  6.3 Subdivision, cascade
  6.4 Interpolating subdivision
    6.4.1 The linear interpolating subdivision
    6.4.2 Update
  6.5 Application of the lifting scheme
  6.6 Integer transforms
  6.7 2nd generation wavelets
1. The Continuous Wavelet Transform
• Historically, the continuous wavelet transform came first.
• It is completely different from the discrete wavelet transform.
• It is popular among physicists, whereas the DWT is more common in numerical analysis, signal- and image-processing.
1.1. The continuous wavelet transform (CWT)
Recall the CWT:

    W_ψf = S(a, b) = (1/√a) ∫_{−∞}^{∞} f(t) ψ((t − b)/a) dt

This is an overcomplete function representation (from one dimension, we transform into a 2D space!!). We want to be able to reconstruct f(t) from this representation. This is possible if the admissibility condition is satisfied:
    C_ψ = ∫_{−∞}^{∞} Ψ(ω)/|ω| dω < ∞

A necessary (and normally also sufficient) condition is Ψ(0) = 0, which means:

    ∫_{−∞}^{∞} ψ(t) dt = 0
All functions with zero integral are wavelets. There are:

• no multi-resolution analysis
• no father/scaling function

All wavelets in the DWT-theory remain wavelets here, but these functions are mostly too irregular for proper use in a CWT-analysis.
The CWT is often used to characterize singularities in functions, and from this it can distinguish between noise and signal (Jaffard, Mallat, Hwang, Zhong).
It can be used to study fractals, etc.
Two functions are popular for CWT-analysis. These functions do not fit into a MRA.
1. Mexican hat wavelet

    ψ(x) = (1 − x²) e^{−x²/2}

This is the second derivative of a Gaussian.

    Ψ(ω) = ω² e^{−ω²/2}

[Figure: the Mexican hat wavelet]
2. The Morlet wavelet

    ψ(x) = e^{iω₀x} e^{−x²/2σ₀²}

This function does not satisfy the admissibility condition exactly. It therefore has to be corrected a bit.
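The CWT above can be approximated numerically at a handful of (a, b) points with a simple Riemann sum. The sketch below is a minimal illustration using the Mexican hat wavelet; the function names (`mexican_hat`, `cwt`) and the test signal are ours, not part of the notes.

```python
import numpy as np

def mexican_hat(t):
    """Second derivative of a Gaussian: (1 - t^2) exp(-t^2/2)."""
    return (1.0 - t**2) * np.exp(-t**2 / 2.0)

def cwt(f, t, scales, b_values):
    """Riemann-sum approximation of W_psi f(a,b) = (1/sqrt(a)) * int f(t) psi((t-b)/a) dt."""
    dt = t[1] - t[0]
    out = np.empty((len(scales), len(b_values)))
    for i, a in enumerate(scales):
        for j, b in enumerate(b_values):
            out[i, j] = np.sum(f * mexican_hat((t - b) / a)) * dt / np.sqrt(a)
    return out

t = np.linspace(-10, 10, 2001)
f = np.sin(np.pi * t)                                  # arbitrary test signal
S = cwt(f, t, scales=[0.5, 1.0, 2.0], b_values=np.linspace(-5, 5, 11))
```

Note how the one-dimensional signal is mapped to a two-dimensional array of coefficients, one row per scale a: this is the overcompleteness mentioned above.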
1.2. Discretisation of the CWT
Another difference between CWT and DWT: the DWT starts with a discrete set of data and considers a dyadic set of scales. Discretisation comes first! (Hence Discrete WT!!)

A CWT must be discretised to be implemented on a computer. If ψ(t) fits into a MRA, then a discretisation like in the discrete case is of course possible, but in general, the representation remains redundant.
Such overcomplete function expansions are called frames. Loosely speaking, a frame is a family of functions ψ_j such that a function f can be reconstructed from the inner products

    ⟨f(t), ψ_j(t)⟩
2. Stationary wavelet transform or redundant wavelet transform or a trous algorithm
Suppose we have a FWT:
[Figure: the fast wavelet transform as a filter bank — each level convolves with h̃ and g̃, followed by subsampling (↓2), and the lowpass branch is iterated; the synthesis side upsamples and convolves with h and g]
Because of the subsampling step, we have fewer coefficients at coarse scales. If we omit the subsampling step, we get a redundant wavelet transform (RWT).
[Figure: the redundant wavelet transform — the subsampling steps are dropped; instead the filters h̃ and g̃ are upsampled ((↑2), the "holes" of the a trous algorithm) at each level, and the lowpass branch is iterated]
The subsampling is compensated by upsampling the filters → the transform is consistent with the FWT.
2.1. Reconstruction
In each step (i.e. scale) the number of data doubles. Hence at each step the input can be reconstructed from the output in two independent ways.
2.2. Properties
1. the number of coefficients is equal at all scales. It equals the number of input samples.
2. the transform is translation-invariant (↔ FWT)
3. the transform is immediately extensible for non-dyadic inputs.
4. This is an overcomplete data representation. Reconstruction is not unique: if the coefficients are manipulated, the result is probably not the RWT of any function. Averaging the results from different reconstructions has a smoothing effect.
5. Computational complexity and storage are O(N logN).
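One analysis level of the a trous scheme can be sketched as follows. This is our own minimal illustration with Haar filters, assuming periodic (circular) extension of the signal so that the coefficient count stays exactly equal to the input length; the function name `atrous_level` is ours.

```python
import numpy as np

def atrous_level(s, h, g, level):
    """One analysis step of the redundant (a trous) transform:
    convolve with the filters upsampled by 2**level, no subsampling."""
    factor = 2 ** level
    # upsample the filters by inserting zeros ("holes", hence 'a trous')
    h_up = np.zeros(factor * (len(h) - 1) + 1); h_up[::factor] = h
    g_up = np.zeros(factor * (len(g) - 1) + 1); g_up[::factor] = g
    # circular convolution via the FFT keeps len(output) == len(input)
    n = len(s)
    s_next = np.real(np.fft.ifft(np.fft.fft(s) * np.fft.fft(h_up, n)))
    w_next = np.real(np.fft.ifft(np.fft.fft(s) * np.fft.fft(g_up, n)))
    return s_next, w_next

h = np.array([1, 1]) / np.sqrt(2)    # Haar lowpass
g = np.array([1, -1]) / np.sqrt(2)   # Haar highpass
s = np.random.default_rng(0).standard_normal(16)
s0, w0 = atrous_level(s, h, g, level=0)   # finest scale
s1, w1 = atrous_level(s0, h, g, level=1)  # next scale: filters with one hole
```

Properties 1 and 2 above can be checked directly: every output has the same length as the input, and circularly shifting the input simply shifts every output.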
2.3. RWT and frames
Suppose:

    f(t) = Σ_{k∈ZZ} s_{J,k} φ(2^J t − k)

then we know:

    w^{FWT}_{j,k} = 2^{j/2} ∫_{−∞}^{∞} f(t) ψ̃(2^j t − k) dt
And similarly:

    w^{RWT}_{j,k} = 2^{j/2} ∫_{−∞}^{∞} f(t) ψ̃(2^j t − 2^{j−J} k) dt
This can be interpreted as a dyadic discretisation of the CWT:

    (W_ψf)(a, b) = (1/√a) ∫_{−∞}^{∞} f(t) ψ̃((t − b)/a) dt

with:

    a_j = 2^{−j}  and  b_k = 2^{−J} k
3. 2D Wavelet transform
1. the square wavelet transform
We first perform one step of the transform on all rows: the left half of the matrix then contains the downsampled lowpass coefficients of each row, the right half contains the highpass coefficients.

Next, we apply one step to all columns:

[Figure: the matrix is split into four blocks LL, HL, LH, HH]

This results in four types of coefficients:
(a) coefficients that result from a convolution with g in both directions (HH) represent diagonal features of the image.

(b) coefficients that result from a convolution with g on the columns after a convolution with h on the rows (HL) correspond to horizontal structures.

(c) coefficients from highpass filtering on the rows, followed by lowpass filtering of the columns (LH) reflect vertical information.

(d) the coefficients from lowpass filtering in both directions are further processed in the next step.
[Figure: two-level decomposition — the LL block is split again into LL, LH, HL, HH]
At each level, we have three components or subbands. If we start with:
    f(x, y) = Σ_{k∈ZZ} Σ_{l∈ZZ} s_{J,k,l} φ_{J,k}(x) φ_{J,l}(y)

we end up with:

    f(x, y) = Σ_{k∈ZZ} Σ_{l∈ZZ} s_{L,k,l} φ_{L,k}(x) φ_{L,l}(y)
            + Σ_{j=L}^{J−1} Σ_{k∈ZZ} Σ_{l∈ZZ} [ w^{HH}_{j,k,l} ψ_{j,k}(x) ψ_{j,l}(y)
                                              + w^{LH}_{j,k,l} ψ_{j,k}(x) φ_{j,l}(y)
                                              + w^{HL}_{j,k,l} φ_{j,k}(x) ψ_{j,l}(y) ]

corresponding to the space decomposition

    V_L ⊗ V_L  ⊕  ⊕_{j=L}^{J−1} [ (W_j ⊗ W_j) ⊕ (W_j ⊗ V_j) ⊕ (V_j ⊗ W_j) ]
2. The rectangular wavelet transform

We could also transform, in each step, all rows and columns (and not only those that are in the LL-subband). This is the following scheme:

In this case, we can change the order of operations: we could, for instance, perform the complete transform on the columns before transforming the rows.
If w = Wy is the matrix representation of a 1D wavelet transform, then the rectangular transform, applied to an image M, is:

    W M Wᵀ
The basis corresponding to this decomposition contains functions that are tensor products of wavelets at different scales:

    ψ_{j,k}(x) ψ_{i,l}(y)

Such functions do not appear in the basis of a square wavelet transform.

For image processing, we prefer the square transform:

• fewer computations
• more specific info on vertical, diagonal, horizontal features at each scale
Both the rectangular and the square wavelet transform are separable transforms.

[Figure: example — s_{L,k,l} and 255 − |w^c_{j,k,l}|, c = HH, HL, LH; comparison of rectangular and square transform]
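One level of the square transform described above can be sketched with Haar filters; rows first, then columns, then the four quadrants are read off. This is our own minimal sketch (function names `haar_step` and `square_step` are ours, even side lengths assumed):

```python
import numpy as np

def haar_step(x):
    # one 1D orthogonal Haar analysis step along the last axis (even length assumed)
    s = (x[..., ::2] + x[..., 1::2]) / np.sqrt(2)   # lowpass, downsampled
    w = (x[..., ::2] - x[..., 1::2]) / np.sqrt(2)   # highpass, downsampled
    return np.concatenate([s, w], axis=-1)

def square_step(img):
    # one level of the square 2D transform: first all rows, then all columns
    rows = haar_step(img)        # left half: h on rows, right half: g on rows
    cols = haar_step(rows.T).T   # top half: h on columns, bottom half: g on columns
    n, m = img.shape[0] // 2, img.shape[1] // 2
    return {"LL": cols[:n, :m],   # lowpass in both directions
            "LH": cols[:n, m:],   # g on the rows, h on the columns: vertical info
            "HL": cols[n:, :m],   # h on the rows, g on the columns: horizontal info
            "HH": cols[n:, m:]}   # highpass in both directions: diagonal info

img = np.random.default_rng(0).standard_normal((8, 8))
coeffs = square_step(img)
```

Since the Haar step is orthogonal, the total energy of the four subbands equals the energy of the input image, and a constant image produces zeros in LH, HL, and HH.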
4. Polyphase
If we want to describe the forward and inverse wavelet transform (the bar denotes complex conjugation)

    s_{j,k} = Σ_{l∈ZZ} h̃̄_{l−2k} s_{j+1,l}

    w_{j,k} = Σ_{l∈ZZ} g̃̄_{l−2k} s_{j+1,l}

    s_{j+1,k} = Σ_{l∈ZZ} h_{k−2l} s_{j,l} + Σ_{l∈ZZ} g_{k−2l} w_{j,l}
as stationary filter-operations, then we need downsampling and upsampling operations. In the classical scheme, downsampling comes after the analysis filtering and upsampling comes before the synthesis filtering.
[Figure: the classical scheme — analysis filters ĝ, ĥ each followed by ↓2; synthesis: ↑2 followed by g, h and a summation]
If we implement the algorithm according to this scheme, we throw away half of the filtering work. Therefore, we try to subsample at the input and output. Consider:
    s^e_{j,l} = s_{j,2l}   or   s^e_j = (↓2) s_j

    s^o_{j,l} = s_{j,2l+1}   or   s^o_j = (↓2) S s_j

S = D^{−1} is the (forward) shift operator:

    y = Sx ⇔ y_k = x_{k+1}
We can write:

    s_{j,k} = Σ_{l∈ZZ} h̃̄_{l−2k} s_{j+1,l}
            = Σ_{l∈ZZ} h̃̄_{2l−2k} s_{j+1,2l} + Σ_{l∈ZZ} h̃̄_{2l+1−2k} s_{j+1,2l+1}
            = Σ_{l∈ZZ} h̃̄^e_{l−k} s^e_{j+1,l} + Σ_{l∈ZZ} h̃̄^o_{l−k} s^o_{j+1,l}
If we call ĥ_k = h̃̄_{−k} as before, we can write h̃̄^e_{−m} = ĥ^e_m. BUT (!!!!) h̃̄^o = D ĥ^o, since

    h̃̄^o_{−m} = h̃̄_{−2m+1} = ĥ_{2m−1} = ĥ_{2(m−1)+1} = ĥ^o_{m−1}

And so:

    Σ_{l∈ZZ} h̃̄^o_{l−k} s^o_{j+1,l} = Σ_{l∈ZZ} ĥ^o_{k−1−l} s^o_{j+1,l} = ((D ĥ^o) ∗ s^o_{j+1})_k

D is a delay operator (the inverse shift). D can be interchanged with convolution:

    (Dx) ∗ y = D(x ∗ y)
The equation

    s_{j,k} = Σ_{l∈ZZ} h̃̄^e_{l−k} s^e_{j+1,l} + Σ_{l∈ZZ} h̃̄^o_{l−k} s^o_{j+1,l}

becomes:

    s_j = ĥ^e ∗ s^e_{j+1} + D ĥ^o ∗ s^o_{j+1}

A similar argument leads to:

    w_j = ĝ^e ∗ s^e_{j+1} + D ĝ^o ∗ s^o_{j+1}
We re-write the reconstruction:

    s^e_{j+1,k} = s_{j+1,2k}
                = Σ_{l∈ZZ} h_{2k−2l} s_{j,l} + Σ_{l∈ZZ} g_{2k−2l} w_{j,l}
                = Σ_{l∈ZZ} h^e_{k−l} s_{j,l} + Σ_{l∈ZZ} g^e_{k−l} w_{j,l}

    s^o_{j+1,k} = s_{j+1,2k+1}
                = Σ_{l∈ZZ} h_{2k+1−2l} s_{j,l} + Σ_{l∈ZZ} g_{2k+1−2l} w_{j,l}
                = Σ_{l∈ZZ} h^o_{k−l} s_{j,l} + Σ_{l∈ZZ} g^o_{k−l} w_{j,l}

Or:

    s^e_{j+1} = h^e ∗ s_j + g^e ∗ w_j

and:

    s^o_{j+1} = h^o ∗ s_j + g^o ∗ w_j
This can be represented as:
[Figure: the polyphase scheme — on the analysis side the input s_{j+1} is split into its even phase ((↓2)) and odd phase ((↓2)S), filtered with ĥ^e, ĥ^o, ĝ^e, ĝ^o and summed to give s_j and w_j; on the synthesis side the phases are rebuilt with h^e, h^o, g^e, g^o, upsampled and recombined via S^{−1}]

This is called polyphase: the filter bank is written as combinations of different phases of filters and signals. Implementing this scheme as such does not introduce useless computations.
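As a concrete check, the polyphase equations can be written out for the orthogonal Haar filter bank. For Haar the phase filters ĥ^e, ĥ^o, ĝ^e, ĝ^o reduce to single scalars, so every "convolution" is one multiplication; the sketch below (our function names) shows that analysis followed by synthesis reconstructs the input exactly:

```python
import numpy as np

def polyphase_analysis(s):
    """Polyphase Haar analysis: filter the even and odd phases separately."""
    se, so = s[::2], s[1::2]          # the two phases of the input
    sj = (se + so) / np.sqrt(2)       # s_j = h^e * s^e + D h^o * s^o
    wj = (se - so) / np.sqrt(2)       # w_j = g^e * s^e + D g^o * s^o
    return sj, wj

def polyphase_synthesis(sj, wj):
    """Polyphase Haar synthesis: rebuild the two phases, then interleave."""
    se = (sj + wj) / np.sqrt(2)
    so = (sj - wj) / np.sqrt(2)
    out = np.empty(2 * len(sj))
    out[::2], out[1::2] = se, so      # merge the phases back together
    return out

s = np.random.default_rng(0).standard_normal(16)
sj, wj = polyphase_analysis(s)
```

Every input sample is filtered exactly once, illustrating why the polyphase form wastes no computations.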
4.1. The lazy wavelet transform
The lazy wavelet transform is the simplest filter bank. It has:

    ĥ^e_k = δ_k = ĝ^o_k
    ĥ^o_k = 0 = ĝ^e_k
    h^e_k = δ_k = g^o_k
    h^o_k = 0 = g^e_k

It just splits the signal into its even and odd samples:
[Figure: the lazy wavelet transform — the analysis side splits s_{j+1} into its even part s^e_{j+1} and odd part s^o_{j+1} via (↓2) and (↓2)S; the synthesis side merges them back via upsampling and S^{−1}]
Note 1: The shift-operator S is not causal, and therefore not practically realisable. Therefore, in practice we use the following scheme:
[Figure: the causal variant — the non-causal shift S is replaced by delays S^{−1} on the other branches; the output is then a delayed copy of the input]
The output is a delay of the input. As a matter of fact, the same problem occurs for the classical filterbank representation of a wavelet transform. Since, for instance, g̃_k = (−1)^k h̄_{1−k}, at least one of these filters is non-causal (unless the filter length is less than 3, as in the Haar case). From the mathematical point of view, we can ignore this problem.
Note 2: The lazy wavelet transform corresponds to

    h_k = δ_k

The cascade or iteration algorithm with this filter does not converge in L², i.e. there is no L² function for which:

    φ(x) = √2 φ(2x)
5. Polyphase in Z-domain
    Z(S s_{j+1}) = z Z(s_{j+1}) = z S_{j+1}(z),   Z(S^{−1} s_{j+1}) = z^{−1} S_{j+1}(z)

    S_j(z) = Ĥ^e(z) S^e_{j+1}(z) + z^{−1} Ĥ^o(z) S^o_{j+1}(z)

    W_j(z) = Ĝ^e(z) S^e_{j+1}(z) + z^{−1} Ĝ^o(z) S^o_{j+1}(z)
We know h̃_k = ĥ̄_{−k} and g̃_k = ĝ̄_{−k}, and so:

    Ĥ^e(z) = Σ_{k∈ZZ} ĥ_{2k} z^{−k} = Σ_{k∈ZZ} h̃̄_{−2k} z^{−k} = Σ_{l∈ZZ} h̃̄_{2l} z^{l} = H̃^e_∗(z)

    Ĥ^o(z) = Σ_{k∈ZZ} ĥ_{2k+1} z^{−k} = Σ_{k∈ZZ} h̃̄_{−2k−1} z^{−k} = Σ_{l∈ZZ} h̃̄_{2l−1} z^{l}
           = Σ_{m∈ZZ} h̃̄_{2m+1} z^{m+1} = z H̃^o_∗(z)
We can write a matrix-equation:

    [ S_j(z) ]   [ H̃^e_∗(z)  G̃^e_∗(z) ]ᵀ   [ S^e_{j+1}(z) ]
    [ W_j(z) ] = [ H̃^o_∗(z)  G̃^o_∗(z) ]  · [ S^o_{j+1}(z) ]

And for the synthesis:

    [ S^e_{j+1}(z) ]   [ H^e(z)  G^e(z) ]   [ S_j(z) ]
    [ S^o_{j+1}(z) ] = [ H^o(z)  G^o(z) ] · [ W_j(z) ]

Call the polyphase matrix:

    P(z) = [ H^e(z)  G^e(z) ]
           [ H^o(z)  G^o(z) ]

Its para-hermitian conjugate equals:

    P_∗(z) = P̄(1/z)ᵀ = [ H^e_∗(z)  G^e_∗(z) ]ᵀ
                        [ H^o_∗(z)  G^o_∗(z) ]

Thus we have PR iff P(z) P̃_∗(z) = I.
For orthogonal filterbanks, we have

    h̃_k = h_k and g̃_k = g_k  ⇒  H̃(z) = H(z) and G̃(z) = G(z)

So P̃(z) = P(z), and PR is equivalent to a para-unitary polyphase matrix:

    P(z) P_∗(z) = I
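For the orthogonal Haar filter bank this para-unitarity is easy to verify by hand or in code. The sketch below builds the (z-free) Haar polyphase matrix; since no powers of z appear, the check reduces to ordinary matrix orthogonality:

```python
import numpy as np

# Polyphase matrix of the orthogonal Haar filter bank:
#   H(z) = (1 + z^-1)/sqrt(2)  ->  H^e(z) = 1/sqrt(2),  H^o(z) = 1/sqrt(2)
#   G(z) = (1 - z^-1)/sqrt(2)  ->  G^e(z) = 1/sqrt(2),  G^o(z) = -1/sqrt(2)
P = np.array([[1.0,  1.0],
              [1.0, -1.0]]) / np.sqrt(2)

# For a constant (z-free, real) matrix, P_*(z) = P(1/z)^T is just P^T,
# so the para-unitarity condition P(z) P_*(z) = I becomes P @ P.T = I.
PR = P @ P.T
```

For longer filters the entries of P(z) are genuine Laurent polynomials and the product must be checked coefficient by coefficient, but the principle is the same.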
6. Lifting
6.1. In place calculation of the Haar transform
Fast Haar transform:

    s_{j,k} = (s_{j+1,2k+1} + s_{j+1,2k}) / 2
    w_{j,k} = s_{j+1,2k+1} − s_{j+1,2k}
If we implement the algorithm as such, we need the values of s_{j+1,2k+1} and s_{j+1,2k} until the computation of w_{j,k} is done. Therefore we need new memory locations for s_{j,k}. This is not an in-place calculation: we cannot overwrite the existing locations of s_{j+1,2k+1} and s_{j+1,2k}.
However:

    s_{j+1,2k+1} = w_{j,k} + s_{j+1,2k}

and so:

    s_{j,k} = (s_{j+1,2k+1} + s_{j+1,2k}) / 2
            = (w_{j,k} + s_{j+1,2k} + s_{j+1,2k}) / 2
            = s_{j+1,2k} + w_{j,k}/2
After the computation of w_{j,k}, we don't need s_{j+1,2k+1} anymore, so we can overwrite s_{j+1,2k+1} with the value of w_{j,k} and s_{j+1,2k} with the value of s_{j,k}. We drop the level index j:

    s_{2k+1} ← s_{2k+1} − s_{2k}
    s_{2k}   ← s_{2k} + s_{2k+1}/2

Recovering from this is immediate:

    s_{2k}   ← s_{2k} − s_{2k+1}/2
    s_{2k+1} ← s_{2k+1} + s_{2k}
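The two update rules above translate directly into code that overwrites the input array, which is exactly the in-place property. A small sketch (our function names; NumPy slice views make the even/odd addressing explicit):

```python
import numpy as np

def haar_lifting(s):
    """One in-place step of the (unnormalised) Haar transform.
    Afterwards the odd slots hold w_{j,k}, the even slots hold s_{j,k}."""
    s[1::2] -= s[::2]          # s_{2k+1} <- s_{2k+1} - s_{2k}
    s[::2] += s[1::2] / 2      # s_{2k}   <- s_{2k} + s_{2k+1}/2
    return s

def haar_lifting_inverse(s):
    """Undo the step: run it backwards with the signs flipped."""
    s[::2] -= s[1::2] / 2
    s[1::2] += s[::2]
    return s

x = np.arange(8, dtype=float)
y = haar_lifting(x.copy())     # evens: pairwise means, odds: pairwise differences
```

No auxiliary array is needed: both steps only ever read slots that still hold the values they require.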
6.2. Split, predict, update
One step of this in-place Haar transform fits into the following scheme:
[Figure: the lifting scheme — the input s_{j+1,k} is split into evens s_{j+1,2k} and odds s_{j+1,2k+1}; dual lifting subtracts the prediction P of the evens from the odds, giving the highpass output w_{j,k}; primal lifting adds the update U of the details to the evens, giving the lowpass output s_{j,k}]
This is the lifting scheme. It consists of a sequence of three fundamental operations:

1. Splitting: splits the signal into an even and an odd part. This split also appears in the polyphase representation; in fact, it is the forward lazy wavelet transform.
After splitting, we have an alternating sequence of two lifting operations:

2. Dual lifting (prediction P): subtracts a filtered version of the evens from the odds. For the Haar transform, the operator P is simply the identity: P(s_{j+1,2l}) = s_{j+1,2l}. One can interpret this step as a prediction of the odd set by the evens: the prediction error is the essential/detail information: these are highpass coefficients.

3. Primal lifting (update U): it is necessary to update the lowpass coefficients before using them in the next step. For the Haar transform, this update is simply: U(w_{j,k}) = w_{j,k}/2.

After these first two lifting operations, more steps may follow. (This is not the case for the Haar-transform.)
6.3. Subdivision, cascade
Inverting this operation is very easy: just go through the scheme in the opposite direction (and replace all plus signs with minus signs and vice versa):
[Figure: the inverse lifting scheme — undo the primal lifting (subtract U), undo the dual lifting (add P back), then merge the evens s_{j+1,2k} and odds s_{j+1,2k+1}]
If we start this algorithm at a coarse scale j = L with

    s_{L,k} = δ_k

and all wavelet coefficients equal to zero, the samples s_{j,k} converge to the primal scaling function φ(x). After one step in the iteration procedure, we have coefficients s_{L+1,k}. Since all detail coefficients are zero, we can write:

    Σ_{k∈ZZ} s_{L,k} φ(2^L x − k) = Σ_{k∈ZZ} s_{L+1,k} φ(2^{L+1} x − k)

And, because s_{L,k} = δ_k, we have for u = 2^L x:

    φ(u) = Σ_{k∈ZZ} s_{L+1,k} φ(2u − k)

Hence, the values after the first step are the primal refinement coefficients.
6.4. Interpolating subdivision
This is the simplest example of subdivision: the lifting scheme has only one prediction operator and one update.

The predictor is the interpolating polynomial through the even samples.

[Figure: prediction by cubic interpolation in this point]

The resulting scaling function has the interpolating property:

    φ(m) = δ_m
6.4.1. The linear interpolating subdivision
has:

    P(s_{j+1}) = (1/2)(s_{j+1,2l} + s_{j+1,2l+2})
[Figure: linear interpolation in the odd samples]

After one step of the subdivision scheme, we have:
    s_{L+1,0} = s_{L,0} = 1 = c_0
    s_{L+1,1} = 0 + (1/2)(s_{L+1,0} + s_{L+1,2}) = 1/2 = c_1
    s_{L+1,−1} = 0 + (1/2)(s_{L+1,−2} + s_{L+1,0}) = 1/2 = c_{−1}

All other refinement coefficients are zero. The algorithm converges to the hat function.
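The convergence to the hat function can be observed directly by iterating the subdivision on a delta sequence. A minimal sketch on a finite grid (the function name `subdivide` is ours; boundaries are simply truncated):

```python
import numpy as np

def subdivide(s):
    """One step of linear interpolating subdivision: keep the old samples
    at the even positions, insert the average of the two neighbours at
    the odd positions."""
    out = np.zeros(2 * len(s) - 1)
    out[::2] = s                         # evens: copy the coarse samples
    out[1::2] = (s[:-1] + s[1:]) / 2     # odds: linear interpolation
    return out

# start from a delta sequence s_{L,k} = delta_k on a finite grid
s = np.zeros(9)
s[4] = 1.0
for _ in range(3):
    s = subdivide(s)                     # grid spacing halves each step
```

Because the hat function is piecewise linear, linear interpolation reproduces it exactly: after k steps the samples lie exactly on max(0, 1 − |x|) at spacing 2^{−k}, not merely close to it.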
6.4.2. Update
We now return to the analysis:
[Figure: the analysis lifting scheme again — split the input s_{j+1,k} into evens and odds, dual lifting with predictor P gives w_{j,k}, primal lifting with update U gives s_{j,k}]
We see that the same predictor now defines the dual wavelet coefficients, but with alternating sign; this corresponds to

    g̃_k = (−1)^k h̄_{1−k}
If we did not introduce an update operator, we would see that:

    s_{j,k} = s_{j+1,2k}

This would mean that h̃_l = δ_l, which is too lazy to converge!
Note that w_{j,k} = s_{j+1,2k+1} − (1/2)(s_{j+1,2k} + s_{j+1,2k+2}).
To determine how to update, we consider the interpretation of one step of the wavelet transform as a decomposition of:

    Σ_{k∈ZZ} s_{j+1,k} φ(2u − k) = Σ_{k∈ZZ} s_{j,k} φ(u − k) + Σ_{k∈ZZ} w_{j,k} ψ(u − k)
(with u = 2^j x)
We have seen that ψ̃ must have at least one vanishing moment. This is not strictly necessary for ψ, but wavelets without this property are very irregular. So, we impose:

    ∫_{−∞}^{∞} ψ(u) du = 0
and integrate the decomposition equation:

    Σ_{k∈ZZ} s_{j+1,k} ∫_{−∞}^{∞} φ(2u − k) du = Σ_{k∈ZZ} s_{j,k} ∫_{−∞}^{∞} φ(u − k) du

which leads to:

    Σ_{k∈ZZ} s_{j+1,k} = 2 Σ_{k∈ZZ} s_{j,k}
Imposing a second vanishing moment leads to:

    Σ_{k∈ZZ} k s_{j+1,k} = 2 Σ_{k∈ZZ} k s_{j,k}

This gives two conditions, therefore we need at least two free parameters in the update filter U. We propose:

    U(w_j)_k = A w_{j,k−1} + B w_{j,k}

and find that:

    A = 1/4 = B
So the forward wavelet transform is:

    w_{j,k} = −(1/2) s_{j+1,2k} + s_{j+1,2k+1} − (1/2) s_{j+1,2k+2}
    s_{j,k} = s_{j+1,2k} + (1/4) w_{j,k} + (1/4) w_{j,k−1}

Substituting the predict step into the update, the second equation becomes:

    s_{j,k} = (−1/8) s_{j+1,2k−2} + (1/4) s_{j+1,2k−1} + (3/4) s_{j+1,2k} + (1/4) s_{j+1,2k+1} + (−1/8) s_{j+1,2k+2}
From this, one sees that

    h̃ = (−1/8, 1/4, 3/4, 1/4, −1/8)

This is the dual filter of the CDF 2,2 pair. The primal filter indeed corresponds to the hat function!
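The predict and update steps just derived can be sketched as two lifting steps in code. This is our own minimal version (function names are ours), assuming periodic (circular) boundary handling to keep the sketch short:

```python
import numpy as np

def cdf22_forward(s):
    """Lifting steps of the CDF 2,2 transform (circular boundaries assumed)."""
    even, odd = s[::2].copy(), s[1::2].copy()
    # predict: w_k = s_{2k+1} - (s_{2k} + s_{2k+2})/2
    w = odd - (even + np.roll(even, -1)) / 2
    # update:  s_k = s_{2k} + (w_k + w_{k-1})/4
    sj = even + (w + np.roll(w, 1)) / 4
    return sj, w

def cdf22_inverse(sj, w):
    """Run the lifting steps backwards with opposite signs."""
    even = sj - (w + np.roll(w, 1)) / 4
    odd = w + (even + np.roll(even, -1)) / 2
    out = np.empty(2 * len(sj))
    out[::2], out[1::2] = even, odd
    return out

x = np.random.default_rng(0).standard_normal(16)
sj, w = cdf22_forward(x)
```

Perfect reconstruction holds by construction, whatever P and U are, and the vanishing moment shows up as exactly zero detail coefficients on a constant input.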
6.5. Application of the lifting scheme
The lifting scheme has the following properties:

1. The inverse transform is obvious.

2. In-place calculation.

3. Faster (up to a factor 2): lifting is a decomposition of the polyphase matrix.

4. Integer transforms.

5. Second generation transforms.

6. Arbitrary size (not dyadic).

7. All existing wavelet transforms can be decomposed into lifting steps.
6.6. Integer transforms
Digital images are matrices of integer values. A wavelet transform maps these numbers onto reals. Reals need more computer memory. Rounding the coefficients would destroy the invertibility of the transform.
But the following scheme is invertible:
[Figure: the integer lifting scheme — split into evens and odds as before, but the output of every dual and primal lifting filter is rounded before it is subtracted or added]

This is the integer wavelet transform. It illustrates the fact that we have freedom in choosing the filters in the lifting steps: the operations do not even have to be linear.
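The key point — rounding inside a lifting step does not break invertibility — is easy to demonstrate with the Haar lifting steps from section 6.1 (this integer variant is often called the S-transform; the function names below are ours):

```python
import numpy as np

def int_haar_forward(s):
    """Integer-to-integer Haar via lifting, with rounding inside the update.
    Invertible because the inverse recomputes the exact same rounded value."""
    s = s.copy()
    s[1::2] -= s[::2]            # predict (already integer, no rounding needed)
    s[::2] += s[1::2] // 2       # update with floor rounding
    return s

def int_haar_inverse(s):
    s = s.copy()
    s[::2] -= s[1::2] // 2       # the same rounded quantity, subtracted
    s[1::2] += s[::2]
    return s

x = np.random.default_rng(1).integers(-100, 100, 16)
y = int_haar_forward(x)          # still an integer array
```

The inverse works because `s[1::2] // 2` is computed from the same detail values in both directions, so the (non-linear) rounding cancels exactly.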
6.7. 2nd generation wavelets
If data are given on an irregular grid (non-equidistant samples), the lifting scheme allows this grid structure to be taken into account: e.g. interpolating prediction can be done on an irregular grid.

For finite signals, we can also adapt the filters in the neighbourhood of the begin or end point. In this way, we can define wavelets on an interval.
Imposing a second vanishing moment leads to:

    Σ_{k∈ZZ} k s_{j+1,k} = 2 Σ_{k∈ZZ} k s_{j,k}

The first vanishing moment leads to:

    Σ_{k∈ZZ} s_{j+1,k} = 2 Σ_{k∈ZZ} s_{j,k}

this is:

    Σ_{k∈ZZ} s_{j+1,k} = 2 Σ_{k∈ZZ} (s_{j+1,2k} + U(w_j)_k)

From this, there will follow a condition for even and for odd k. This gives two conditions, therefore we need at least two free parameters in the update filter U. We propose:

    U(w_j)_k = A w_{j,k−1} + B w_{j,k}

and find that:

    Σ_{k∈ZZ} s_{j+1,k} = 2 Σ_{k∈ZZ} (s_{j+1,2k} + A w_{j,k−1} + B w_{j,k})
                       = 2 Σ_{k∈ZZ} s_{j+1,2k} + 2(A + B) Σ_{k∈ZZ} w_{j,k}
                       = 2 Σ_{k∈ZZ} s_{j+1,2k} + 2(A + B) Σ_{k∈ZZ} (s_{j+1,2k+1} − s_{j+1,2k}/2 − s_{j+1,2k+2}/2)
                       = 2 Σ_{k∈ZZ} s_{j+1,2k} + 2(A + B) Σ_{k∈ZZ} s_{j+1,2k+1} − 2(A + B) Σ_{k∈ZZ} s_{j+1,2k}

This means for 2k:

    1 = 2 − 2(A + B)

and for 2k + 1:

    1 = 2(A + B)