CHAPTER 3
Tetrolet Based Adaptive Edge-preserving Image
Denoising
3.1 Introduction
In the context of denoising, wavelet transforms have proven to be efficient tools due to their
decorrelation ability (separation of noise from the useful signal). The basic concept of wavelet-based
noise reduction is to compute a multiscale wavelet decomposition of the corrupted image and to
modify the resulting wavelet coefficients by applying a predetermined threshold according to a
shrinkage rule. Reconstruction from the modified coefficients then produces the desired denoised
image. A flow diagram illustrating the main stages of the wavelet-based denoising process is
depicted in Fig. 1.2, and a detailed review of these stages is presented in Section 2.4.
The basic issue with most wavelet transforms is that the multiscale decomposition of an image
into wavelet coefficients is not adaptive, i.e. local structures of the image are not taken into
account during decomposition. Although this issue is resolved by an adaptive Haar wavelet
transform (also called the tetrolet transform) [Krommweh (2010)], the tetrolet system is suitable
only for sparse image representation due to its non-redundant nature, whereas for image denoising
redundant information is helpful. Thus, a tetrolet-based denoising method that exploits redundancy
is very much needed. Apart from this, in conventional thresholding schemes a universal (global)
threshold is generally used to shrink small wavelet coefficients. However, such a procedure may
also suppress high-frequency details such as edges. Moreover, the noise variance used in the
computation of the threshold(s) is usually kept fixed across all resolution scales, even though the
noise strength decreases as the resolution scale increases.
To deal with the aforementioned issues, a new edge-preserving image denoising method using
the tetrolet transform is proposed in this chapter. Although the method is motivated by the
non-redundant tetrolet system [Krommweh (2010)], the underlying approach has a higher degree
of redundancy, which helps in achieving better denoising. The proposed method also improves on
conventional wavelet denoising methods: instead of using a global threshold, an adaptive
threshold is calculated in a subband-dependent manner to characterize local features of the
image. In addition, instead of keeping the noise variance used in the threshold computation
fixed, it is estimated locally for each decomposition level (resolution scale). An adaptive
epsilon-median (e-median) filtering, which uses the computed threshold, is employed to suppress
noise in the tetrolet coefficients.
There are two main reasons that motivate the use of the tetrolet transform [Krommweh (2010)] to
obtain a multiscale decomposition of an input image into wavelet (tetrolet) coefficients. First,
local geometrical structures of the image are taken into account during the decomposition process,
i.e. the decomposition of an image into coefficients using the tetrolet transform is adaptive to the
image contents. Second, the underlying idea of the adaptive tetrolet decomposition algorithm is
simple, yet fast and effective.
Apart from the above issues related to the wavelet transform and the threshold being applied,
there may also be an issue with the thresholding rule being used. Most simple non-linear
thresholding rules assume that the wavelet coefficients are independent. However, it is observed
that the wavelet coefficients of natural images have significant statistical dependencies. Sendur
and Selesnick (2002a) suggested four joint shrinkage functions that utilize the dependencies
between wavelet coefficients across two adjacent resolution scales to enhance the denoising
performance. In [Sendur and Selesnick (2002b)], the authors further enhanced their earlier
bivariate shrinkage function [Sendur and Selesnick (2002a)] and proposed a locally adaptive
thresholding method in which the thresholding parameters are computed in a local neighborhood.
Inspired by their approach, another edge-preserving image denoising technique in the tetrolet
domain is proposed in this chapter. This method is an extension of the earlier mentioned
method: it inherits the merits of the previous method and incorporates additional aspects, in
particular a locally adaptive (i.e. coefficient-dependent) thresholding which exploits the
interscale statistical dependencies between tetrolet coefficients and computes the thresholding
parameters in a local neighborhood.
3.2 Related Work
The tetrolet transform has also been used as the mainstay of other denoising approaches. In Singh
(2010), a new approach to the denoising problem based on the tetrolet transform was proposed.
It uses the ideas of adaptiveness and the 4 × 4 block mechanism described in Krommweh (2010);
it works on each 4 × 4 block independently and adapts to the image characteristics automatically.
Li et al. (2010) introduced a new class of denoising functions with continuous derivatives for
image denoising. In their algorithm, the noisy image is first decomposed into tetrolet
coefficients using the discrete tetrolet transform of Krommweh (2010). Then an adaptive method
based on the SURE risk is applied with the new denoising function: instead of a universal
threshold for noise suppression, the threshold is obtained by minimizing an estimate of the mean
square error through an adaptive genetic algorithm.
Further, Zhang et al. (2013) presented a new denoising technique in the tetrolet domain that
improves on the Bivariate Model (BM) [Sendur and Selesnick (2002b)]. The improved model fits
the joint distribution of parent-child tetrolet coefficients with a Scale Variable Parameter
Bivariate Model (SVPBM). The corresponding non-linear bivariate shrinkage function is derived
from the SVPBM using a maximum-a-posteriori (MAP) estimator.
Recently, Dai et al. (2013) improved the BM3D algorithm [Dabov et al. (2006)] by combining
BM3D with tetrolet prefiltering. The authors addressed a limitation of the BM3D method, namely a
sharp drop in denoising performance at higher values of the noise standard deviation. In this
approach, the strongly noisy image is first tetrolet-filtered to remove part of the noise, and
BM3D filtering is then carried out on this partially filtered image.
3.3 Tetrolet Transforms
Tetrolets are Haar-type wavelets whose supports are shapes called tetrominoes. Tetrominoes are
the geometric shapes known from the famous computer game 'Tetris' [Breukelaar et al. (2004)].
Every tetromino is formed by connecting four equal-sized squares. Disregarding rotations and
reflections, there are five basic free tetrominoes, as shown in the following figure:
Figure 3.1: Five free tetrominoes
When applying the tetrolet transform to an image g = [g(x, y)]_{x,y=1}^N with N = 2^K, K ∈ ℕ, we
divide the image into 4 × 4 blocks. Each block is then covered with four tetrominoes, where
rotations and reflections of the tetrominoes are now taken into consideration; the choice of
covering depends on the local structure in the block. These four tetrominoes, which form the
adaptive basis, are denoted by {I_0, I_1, I_2, I_3}, and the four index positions in each
tetromino subset I_v are mapped to a unique order {0, 1, 2, 3} by a bijective mapping L. On the
basis of these definitions, for each tetromino subset I_v, Krommweh (2010) defined the following
discrete basis functions:
    φ_{I_v}[x′, y′] := { 1/2,  (x′, y′) ∈ I_v
                         0,    otherwise                (3.1)

    ψ^l_{I_v}[x′, y′] := { ε[l, L(x′, y′)],  (x′, y′) ∈ I_v
                           0,                otherwise  (3.2)

for l = 1, 2, 3. Due to the underlying tetromino support, the ψ^l_{I_v} are called tetrolets,
and φ_{I_v} is the corresponding scaling function. The function values ε[l, L(x′, y′)] in the
tetrolet definition come from the Haar wavelet transform matrix

    W = (ε[m, n])_{m,n=0}^3 = (1/2) [ 1  1  1  1 ;  1  1 −1 −1 ;  1 −1  1 −1 ;  1 −1 −1  1 ]  (3.3)
The basic idea of tetrolets is inherited from the conventional 2-D Haar wavelets; in fact,
tetrolets are an improved version of them. The 2-D Haar wavelets cover a 4 × 4 block with four
fixed 2 × 2 squares. This is a very inefficient way of covering the block because the local
image structures are not taken into account during the covering. For the tetrolets, on the other
hand, it has been shown that there are 117 possible ways of covering a 4 × 4 block with four
tetrominoes [Krommweh (2010)]. This inefficiency of the conventional Haar wavelet transform
motivates the use of the tetrolet transform, which allows more partitions than the conventional
Haar wavelets and also finds the most appropriate one based on local features of the image.
Fig. 3.2(a) shows the fixed square partitioning of a 4 × 4 block by the 2-D Haar wavelets, and
Fig. 3.2(b) shows one of the 117 possible tetromino partitions. If the local structure of the
block to be partitioned matches Fig. 3.2(c), the solution in Fig. 3.2(b) is obviously more
appropriate than the fixed squares in Fig. 3.2(a).
Figure 3.2: Two examples of covering a 4 × 4 block and the local structure of the 4 × 4 block:
(a) the fixed squares of the 2-D Haar wavelets; (b) one of the 117 solutions for disjoint
covering of a 4 × 4 block with tetrominoes; (c) example of the local structure
The following steps yield the multiscale tetrolet decomposition of a given noisy input image
g = [g(x, y)]_{x,y=1}^N with N = 2^K, K ∈ ℕ, for a given tetromino covering c and number of
decomposition levels J. We start with the input image g^0 = g = [g(x, y)]_{x,y=1}^N. In the
j-th level, j = 1, …, J, we perform the following computations.
1. Divide the low-pass image g^{j−1} into 4 × 4 blocks Q_{m,n}, m, n = 1, …, N/2^{j+1}.
2. In each block Q_{m,n}, compute the low-pass part for the tetromino covering c by

    g^{j,(c)} = (g^{j,(c)}[v])_{v=0}^3  with  g^{j,(c)}[v] = Σ_{(x′,y′) ∈ I_v(c)} φ_{I_v(c)}[x′, y′] g^{j−1}[x′, y′]    (3.4)

   as well as the three high-pass parts for l = 1, 2, 3

    S_l^{j,(c)} = (S_l^{j,(c)}[v])_{v=0}^3  with  S_l^{j,(c)}[v] = Σ_{(x′,y′) ∈ I_v(c)} ψ^l_{I_v(c)}[x′, y′] g^{j−1}[x′, y′]    (3.5)

   where φ_{I_v} and ψ^l_{I_v} are defined in (3.1) and (3.2), respectively.
For each block Q_{m,n} we save the covering c used for the partition, since this information is
required at the time of reconstruction.
3. In order to apply further levels of the tetrolet decomposition algorithm, rearrange the
   entries of the vectors g^{j,(c)} and S_l^{j,(c)} into 2 × 2 matrices using a reshape
   function R:

    g^j|_{Q_{m,n}} = R(g^{j,(c)}) = [ g^{j,(c)}[0]  g^{j,(c)}[1] ;  g^{j,(c)}[2]  g^{j,(c)}[3] ],

   and in the same way S_l^j|_{Q_{m,n}} = R(S_l^{j,(c)}), l = 1, 2, 3.
4. After finding a tetrolet decomposition in every block Q_{m,n}, m, n = 1, …, N/2^{j+1}, store
   the low-pass matrix g^j = (g^j|_{Q_{m,n}})_{m,n=1}^{N/2^{j+1}} and the high-pass matrices
   S_l^j = (S_l^j|_{Q_{m,n}})_{m,n=1}^{N/2^{j+1}}, l = 1, 2, 3, replacing the low-pass image
   g^{j−1} by the matrix [ g^j  S_2^j ;  S_1^j  S_3^j ]. In this way we obtain one low-pass
   subband g^j and three high-pass subbands S_1^j, S_2^j and S_3^j at decomposition level j.
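To make the per-block computation concrete, the following sketch applies Eqs. (3.4)-(3.5) to a single 4 × 4 block for one given covering. For illustration the covering is the trivial one of four 2 × 2 squares, in which case the result coincides with a classical 2-D Haar step; the covering search over the 117 admissible partitions of Krommweh (2010) is assumed to be done elsewhere, and the function name `tetrolet_block` is hypothetical.

```python
import numpy as np

# Haar matrix W of Eq. (3.3): row 0 is the scaling row, rows 1-3 the tetrolet rows.
W = 0.5 * np.array([[1, 1, 1, 1],
                    [1, 1, -1, -1],
                    [1, -1, 1, -1],
                    [1, -1, -1, 1]], dtype=float)

def tetrolet_block(block, covering):
    """One level of Eqs. (3.4)-(3.5) on a single 4x4 block.

    `covering` lists the four tetromino subsets I_0..I_3; each subset is four
    (x, y) positions given in the order fixed by the bijection L, so the n-th
    entry of a subset corresponds to column n of W.
    """
    low = np.empty(4)          # g[v], v = 0..3
    high = np.empty((3, 4))    # S_l[v], l = 1..3
    for v, tetromino in enumerate(covering):
        vals = np.array([block[x, y] for (x, y) in tetromino])
        low[v] = W[0] @ vals       # scaling function, Eq. (3.4)
        high[:, v] = W[1:] @ vals  # tetrolets, Eq. (3.5)
    # Step 3: rearrange each 4-vector into a 2x2 matrix (reshape function R).
    return low.reshape(2, 2), high.reshape(3, 2, 2)

# Trivial covering of four 2x2 squares (reproduces the classical 2-D Haar step).
square_covering = [
    [(0, 0), (0, 1), (1, 0), (1, 1)],
    [(0, 2), (0, 3), (1, 2), (1, 3)],
    [(2, 0), (2, 1), (3, 0), (3, 1)],
    [(2, 2), (2, 3), (3, 2), (3, 3)],
]
block = np.arange(16, dtype=float).reshape(4, 4)
low, high = tetrolet_block(block, square_covering)
```

In the adaptive transform, this routine would be evaluated for every admissible covering of the block and the covering with the sparsest high-pass part retained, together with its index c for reconstruction.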
3.4 Adaptive Image Denoising Methods
This section presents an edge-preserving image denoising method for noise suppression in tetrolet
coefficients, together with a variant. The method is based on adaptive epsilon-median filtering,
which provides subband-dependent thresholding; its variant is based on locally adaptive
thresholding, i.e. it provides coefficient-dependent thresholding.
3.4.1 Image Denoising using Adaptive Epsilon-Median (E-median) Filtering
Inspired by the adaptive tetrolet decomposition algorithm [Krommweh (2010)] and the epsilon-median
(e-median) filtering method [Haseyama et al. (2000)], we propose a novel adaptive edge-preserving
image denoising method. The main stages of the proposed method are shown in Fig. 3.3.
Let f = [f(x, y)]_{x,y=1}^N, N = 2^K, K ∈ ℕ, denote the N × N original image to be recovered.
During transmission, the image f is corrupted by independent and identically distributed (i.i.d.)
zero-mean white Gaussian noise according to the model

    g = f + n,    (3.6)

where n represents the noise and g the observed image. The goal is to estimate f from the noisy
observation g.
Figure 3.3: Image denoising using adaptive epsilon-median filtering
Initially, the input image g, corrupted by additive Gaussian noise, is decomposed into tetrolet
coefficients through the Discrete Tetrolet Transform (DTT) W_T described in the previous section:

    G_T = W_T(g) = {g^J; S_1^j; S_2^j; S_3^j, j = J, J−1, …, 1}    (3.7)

where j indicates the decomposition level (or resolution scale) of the tetrolet transform and J
is the largest scale in the decomposition. The low-frequency subband g^J at the largest
resolution scale J
corresponds to a coarse approximation of the image signal, while the high-frequency subbands
S_1^j, S_2^j, and S_3^j correspond to the horizontal, vertical and diagonal details of the image
signal at scale j, respectively. Figure 3.4 illustrates the subband regions of the 2-D critically
sampled tetrolet transform.
Let us consider a detail subband S at scale j, i.e. S = S_l^j, l = 1, 2, 3, j = 1, …, J.
Note that the additive noise model for the image g in the spatial domain in Eq. (3.6) also
applies to the subband S_l^j in the wavelet domain. Thus, we have

    S_l^j = w_l^j + n_l^j,    (3.8)

where the noisy subband S_l^j is obtained after introducing the noise n_l^j into its noiseless
counterpart w_l^j. The aim of the denoising method is to obtain ŵ_l^j (the estimate of the
noiseless counterpart w_l^j) from the given noisy subband S_l^j.
Figure 3.4: Subband regions of critically sampled tetrolet transform
Let y_k ∈ S_l^j, w_k ∈ w_l^j and n_k ∈ n_l^j. Then

    y_k = w_k + n_k,  k = 1, …, number of tetrolet coefficients,    (3.9)
where y_k is the noisy observation of w_k and n_k is the noise sample. The objective of the
proposed adaptive e-median filtering method (based on a 3 × 3 window) is to obtain ŵ_k, the
estimate of w_k, from its noisy observation y_k. The proposed adaptive e-median filter is
defined as

    ŵ_k = y_k^m + X(y_k − y_k^m)    (3.10)

where y_k denotes the degraded data and y_k^m the median-filtered data. The function X is
defined as

    X(x) = { x,  |x| > λ_l^j
             0,  otherwise     (3.11)

where λ_l^j is the threshold value.
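Eqs. (3.10)-(3.11) can be sketched for a whole detail subband at once, using SciPy's median filter for the 3 × 3 window. The function name `emedian` and the vectorized formulation are implementation choices, not from the text.

```python
import numpy as np
from scipy.ndimage import median_filter

def emedian(subband, lam):
    """Adaptive epsilon-median filtering of a detail subband, Eqs. (3.10)-(3.11).

    Coefficients deviating from their 3x3 median by more than `lam` are kept
    unchanged (likely edge coefficients); smaller deviations are replaced by
    the median value (likely noise).
    """
    med = median_filter(subband, size=3)   # y_k^m for every coefficient
    diff = subband - med                   # y_k - y_k^m
    keep = np.abs(diff) > lam              # X(.) passes only large residuals
    return med + np.where(keep, diff, 0.0)
```

Note the switch direction: a residual above the threshold is interpreted as structure and preserved, which is exactly what gives the filter its edge-preserving character.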
The e-median filter preserves edges while removing noise [Haseyama et al. (2000), Koç and
Ergelebi (2006)]. The threshold λ_l^j for a given subband S_l^j is computed using the BayesShrink
threshold estimation criterion [Chipman et al. (1997), Chang et al. (2000a)]:

    λ_l^j = σ̂²_{noise,j} / σ̂_{signal,l}^j,    (3.12)
where σ̂²_{noise,j} is the local noise variance, which can be estimated from the diagonal detail
coefficients at the same scale j as the subband under consideration (that is, the coefficients in
the S_3^j subband):

    σ̂_{noise,j} = median(|y_i|) / 0.6745,  y_i ∈ S_3^j    (3.13)

The expression on the right-hand side of Eq. (3.13) is a robust median estimator [Donoho and
Johnstone (1994)], often used to estimate the noise variance from the diagonal detail
coefficients at the finest scale (that is, the coefficients in subband S_3^1). The term
σ̂_{signal,l}^j in Eq. (3.12) is the locally estimated signal standard deviation of the subband
under consideration, estimated as
    σ̂_{signal,l}^j = √max(σ̂_y² − σ̂²_{noise,j}, 0)    (3.14)

where σ̂_y² = (1/N_s) Σ_{k=1}^{N_s} y_k² and N_s is the number of tetrolet coefficients y_k in
the subband under consideration.
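The noise and threshold estimates of Eqs. (3.12)-(3.14) can be sketched as below. The fallback when the signal deviation estimate is zero (a subband judged to be pure noise) is a common convention assumed here; it is not stated in the text.

```python
import numpy as np

def noise_sigma(diag_subband):
    """Robust median estimator of Eq. (3.13) on the diagonal detail subband."""
    return np.median(np.abs(diag_subband)) / 0.6745

def bayes_threshold(subband, sigma_noise):
    """Subband-adaptive BayesShrink threshold, Eqs. (3.12) and (3.14)."""
    var_y = np.mean(subband ** 2)                             # sigma_y^2
    sigma_signal = np.sqrt(max(var_y - sigma_noise ** 2, 0.0))
    if sigma_signal == 0.0:
        # Assumed convention: no signal detected, threshold everything.
        return np.abs(subband).max()
    return sigma_noise ** 2 / sigma_signal
```

The threshold grows when the estimated signal deviation is small relative to the noise, so flat subbands are shrunk aggressively while structured subbands are shrunk lightly.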
The threshold computed in (3.12) is used to obtain the elements ŵ_k of ŵ_l^j (the estimate of
the noiseless counterpart w_l^j of the given noisy subband S_l^j) according to (3.10). The above
procedure is used to obtain the estimates of the noiseless counterparts of all the detail
subbands at each level, and the final thresholded result Ĝ for the complete image in the wavelet
domain is obtained as

    Ĝ = {g^J; Ŝ_1^j; Ŝ_2^j; Ŝ_3^j, j = J, J−1, …, 1}    (3.15)
An Inverse Discrete Tetrolet Transform (IDTT) is then applied in the last stage:

    f_rec = W_T^{−1}(Ĝ)    (3.16)

where W_T^{−1} denotes the inverse discrete tetrolet transform and f_rec is the reconstructed
image. For the reconstruction of the image, we use the low-pass coefficients from the coarsest
level and the thresholded tetrolet coefficients from all levels obtained in the above stage.
Additionally, the information about the respective covering in each level and block is used. The
mechanism of the tetrolet decomposition process elaborated earlier is then applied in reverse
order.
Table 3.1 gives the algorithm for the proposed adaptive e-median filtering-based denoising
method. In this algorithm, the whole procedure is executed iteratively for each admissible
tetromino covering. Since there are 117 possible tetromino coverings that can be used for
partitioning the image blocks, many samples (instead of one) are obtained for each pixel value.
This redundancy in the samples is helpful in achieving better denoising. The average of all the
collected samples is taken to obtain the desired denoised version of a noisy pixel.
Table 3.1: Algorithm for the proposed adaptive e-median filtering-based denoising
method [Jain and Tyagi (2015a)]
Algorithm's prerequisite: Extend the input noisy image (if needed) to make its size N × N
with N = 2^K, K ∈ ℕ. After denoising, the image is cropped back to its original size. The
extended image is used for further analysis.
Inputs: Noisy image g = [g(x, y)]_{x,y=1}^N with N = 2^K, K ∈ ℕ; the number of decomposition
levels J (default value log2 N − 1); the number of admissible tetromino coverings num_cov
(default value 117).
Output: The denoised image f̂.
Initialization: Initialize f̂ = 0. Set c = 1.
Iterate as follows:
Step (1) Taking the image g, the tetromino configuration c, and the number of decomposition
levels J as input arguments, obtain the multiscale tetrolet decomposition
G_T = {g^J; S_1^j; S_2^j; S_3^j, j = J, J−1, …, 1} using (3.7).
Step (2) For each decomposition level (j = 1, …, J):
    (a) Calculate the local noise variance σ̂²_{noise,j} using (3.13).
    (b) For each subband (S = S_l^j, l = 1, 2, 3):
        (i) Compute σ̂_{signal,l}^j using (3.14).
        (ii) Compute the threshold λ_l^j using (3.12).
        (iii) For each tetrolet coefficient y_k ∈ S (k = 1, …, number of tetrolet coefficients
        in subband S): estimate the coefficient using λ_l^j in (3.10).
Step (3) Get the thresholded wavelet output Ĝ using (3.15).
Step (4) Obtain the reconstructed image f_rec using (3.16).
Step (5) f̂ = f̂ + f_rec.
Step (6) Increase c (c = c + 1) and go to Step (1).
Termination criterion: If c > num_cov, stop iterating.
Output: f̂ = f̂ / num_cov
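The accumulation in Step (5) and the final division are what convert the redundancy into a denoising gain. The toy experiment below isolates that averaging step: the per-covering denoiser is replaced by a stand-in that simply returns an independent noisy estimate of the clean image (an assumption made purely for illustration), and averaging num_cov such estimates is shown to lower the mean squared error.

```python
import numpy as np

rng = np.random.default_rng(0)
clean = np.full((64, 64), 100.0)   # stand-in for the clean image f
num_cov = 16

# Stand-in for "denoise under covering c": each covering here just yields an
# independent noisy estimate, so that only the effect of averaging num_cov
# redundant reconstructions (Step 5 and the final division) is measured.
estimates = [clean + rng.normal(0.0, 10.0, clean.shape) for _ in range(num_cov)]
avg = sum(estimates) / num_cov

mse_single = np.mean((estimates[0] - clean) ** 2)  # one reconstruction
mse_avg = np.mean((avg - clean) ** 2)              # averaged reconstructions
```

With independent errors, averaging num_cov estimates reduces the error variance by roughly a factor of num_cov; in the actual algorithm the per-covering results are correlated (duplicated coefficients), which is why the gain saturates, as discussed in Section 3.5.1.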
3.4.2 Image Denoising using Locally Adaptive Thresholding
Inspired by the adaptive tetrolet decomposition algorithm [Krommweh (2010)] and the locally
adaptive denoising algorithm [Sendur and Selesnick (2002b)], we propose a novel locally adaptive
image denoising method that preserves edges. This method is a variant of the earlier proposed
method: it inherits some relevant aspects of the previous method while including some new
aspects to gain denoising performance. These aspects can be summarized as follows:
1. Redundancy is exploited: Although our method is motivated by the non-redundant tetrolet
   system [Krommweh (2010)], the proposed approach has a higher degree of redundancy, which
   helps in achieving better denoising.
2. Interscale dependency is exploited: Sendur and Selesnick (2002a) suggested four joint
   shrinkage functions that utilize the dependencies between wavelet coefficients across two
   adjacent resolution scales in order to improve the denoising performance. Inspired by their
   approach, the proposed thresholding scheme also exploits the interscale statistical
   dependencies between the coefficients obtained by applying the discrete tetrolet transform.
3. Local noise variance is computed: The noise strength decreases as the resolution scale
   increases. Therefore, instead of using a fixed noise variance in the computation of the
   thresholds, it is estimated locally for each resolution scale.
4. The thresholding parameters are estimated in a local neighborhood: Sendur and Selesnick
   (2002b) further improved their earlier bivariate shrinkage function [Sendur and Selesnick
   (2002a)] and proposed a locally adaptive thresholding method in which the thresholding
   parameters are computed in a local neighborhood. Inspired by their approach, our thresholding
   scheme also determines the thresholding parameters in a local neighborhood around each pixel
   position.
The implementation outline of the proposed method is as follows. First, the input noisy image is
decomposed into tetrolet coefficients by applying a discrete tetrolet transform. Second, the
noise variance is estimated locally for each resolution scale using a robust median estimator
[Donoho and Johnstone (1994)]. Third, the threshold for each coefficient of the subband under
consideration is computed using the local noise variance (estimated in the previous step) and the
coefficients present in the neighborhood of that coefficient. Fourth, a bivariate shrinkage
function, which exploits the interscale dependency between the coefficients, is employed to
threshold each coefficient; it depends analytically on three quantities, namely the coefficient
itself, its parent coefficient (the coefficient at the same position, but at the next coarser
scale), and the threshold computed earlier. Lastly, the thresholded coefficients are transformed
back to the original domain by applying an inverse discrete tetrolet transform.
The main stages of this variant are illustrated in Fig. 3.5. The notations and definitions for
the noisy image g, the subbands S_l^j, w_l^j, n_l^j, ŵ_l^j and the local noise variance
σ̂²_{noise,j} defined earlier for the previous method are reused for this method.
Figure 3.5: Image denoising using locally adaptive thresholding
Let w_{2k} denote the parent of w_{1k} (w_{2k} is the tetrolet coefficient at the same position
as the k-th tetrolet coefficient w_{1k}, but at the next coarser scale). Then

    y_{1k} = w_{1k} + n_{1k},  y_{2k} = w_{2k} + n_{2k}    (3.17)
where y_{1k} and y_{2k} are noisy observations of w_{1k} and w_{2k}, and n_{1k} and n_{2k} are
noise samples. We can write

    y_k = w_k + n_k,  k = 1, …, number of tetrolet coefficients,    (3.18)

where w_k = (w_{1k}, w_{2k}), y_k = (y_{1k}, y_{2k}) and n_k = (n_{1k}, n_{2k}).
Let us define the subband P(S) as the subband of the parents of the coefficients of the subband
S. For example, if S is S_3^1, then P(S) is S_3^2; if S is S_2^2, then P(S) is S_2^3. In the
observation model given in Eq. (3.17), y_{1k}, w_{1k}, n_{1k} ∈ S and y_{2k}, w_{2k}, n_{2k} ∈ P(S).
The estimate of w_{1k} can be obtained as

    ŵ_{1k} = T(y_{1k}, y_{2k}, λ_k) = [ (√(y_{1k}² + y_{2k}²) − λ_k)_+ / √(y_{1k}² + y_{2k}²) ] · y_{1k}    (3.19)

which can be interpreted as a bivariate shrinkage function. The function (a)_+ is defined as

    (a)_+ = { 0,  a < 0
              a,  otherwise    (3.20)

The term λ_k in Eq. (3.19) is the threshold for the k-th coefficient, computed as

    λ_k = √3 σ̂²_{noise,j} / σ̂_k,    (3.21)

where the local noise variance σ̂²_{noise,j} is estimated as in Eq. (3.13). The estimator in
Eq. (3.19) uses the threshold λ_k, which in turn requires prior knowledge of the local noise
variance σ̂²_{noise,j} and the marginal variance σ_k² of each tetrolet coefficient. In this
procedure, the marginal variance of the k-th coefficient is estimated using the neighboring
coefficients in the region N(k), where N(k) is defined as all coefficients within a square
window centered at the k-th coefficient, as illustrated in Fig. 3.6.
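A vectorized sketch of the shrinkage rule in Eqs. (3.19)-(3.20) follows; the small guard against division by zero when child and parent are both exactly zero is an implementation detail not in the text.

```python
import numpy as np

def bivariate_shrink(y1, y2, lam):
    """Bivariate shrinkage T(y1, y2, lam) of Eq. (3.19).

    y1: noisy coefficients of subband S; y2: co-located parent coefficients
    from P(S); lam: threshold(s) lambda_k from Eq. (3.21). Arrays broadcast.
    """
    r = np.sqrt(y1 ** 2 + y2 ** 2)                          # joint magnitude
    gain = np.maximum(r - lam, 0.0) / np.maximum(r, 1e-12)  # (.)_+ of Eq. (3.20)
    return gain * y1
```

A coefficient is shrunk toward zero only when the joint parent-child magnitude √(y₁² + y₂²) is small, so a weak child supported by a strong parent, the typical signature of an edge, survives thresholding.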
Figure 3.6: Illustration of the neighborhood N(k)
Let us assume that we are estimating the marginal variance σ_k² of the k-th wavelet coefficient.
From the observation model in Eq. (3.17), we have

    σ²_{yk} = σ_k² + σ²_{noise,j}    (3.22)

where σ²_{yk} is the marginal variance of the noisy observations y_{1k} and y_{2k}. Since y_{1k}
and y_{2k} are modeled as zero mean, σ²_{yk} can be found empirically by

    σ̂²_{yk} = (1/|N(k)|) Σ_{y_i ∈ N(k)} y_i²    (3.23)

where |N(k)| is the size of the neighborhood N(k). Then, using Eq. (3.22), σ_k can be estimated as

    σ̂_k = √( (σ̂²_{yk} − σ̂²_{noise,j})_+ )    (3.24)
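Eqs. (3.21)-(3.24) can be computed for all coefficients of a subband at once with a moving-average filter: the uniform window realizes N(k). The reflective boundary handling for coefficients near the subband border and the function name `local_thresholds` are assumptions of this sketch.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_thresholds(subband, sigma_noise, win=7):
    """Per-coefficient thresholds lambda_k of Eqs. (3.21)-(3.24).

    The marginal variance of each coefficient is estimated over a win x win
    neighborhood N(k) as the moving average of squared coefficients.
    """
    var_y = uniform_filter(subband ** 2, size=win)                 # Eq. (3.23)
    sigma_k = np.sqrt(np.maximum(var_y - sigma_noise ** 2, 0.0))   # Eq. (3.24)
    # Eq. (3.21); the epsilon guard avoids division by zero where sigma_k = 0.
    return np.sqrt(3.0) * sigma_noise ** 2 / np.maximum(sigma_k, 1e-12)
```

The default win=7 matches the 7 × 7 neighborhood size |N(k)| used in the experiments of Section 3.5.1.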
The threshold computed in Eq. (3.21) is used to obtain the elements ŵ_{1k} of ŵ_l^j (the
estimate of w_l^j) via the shrinkage function T in Eq. (3.19). The final thresholded result Ĝ
and the reconstructed image f_rec are obtained as in Eqs. (3.15) and (3.16), respectively. Table
3.2 gives the algorithm for the proposed locally adaptive thresholding-based denoising method.
In this algorithm, the whole procedure above is iterated for every admissible tetromino covering,
i.e. the reconstructed data is obtained for every admissible tetromino covering. This iterative
scheme leads to redundancy in the reconstructed results. The pixel values of the desired denoised
image are obtained by averaging these results over the total number of tetromino coverings
considered in the experiment. Fig. 3.7 illustrates the flow of execution of the above stages for
a given tetromino covering.
Table 3.2: Algorithm for the proposed locally adaptive thresholding-based denoising
method [Jain and Tyagi (2015d)]
Algorithm's prerequisite: As in Table 3.1.
Inputs: Noisy image g = [g(x, y)]_{x,y=1}^N with N = 2^K, K ∈ ℕ; the number of decomposition
levels J (default value log2 N − 1); the number of admissible tetromino coverings num_cov
(default value 117); and the size |N(k)| of the local neighborhood N(k) (default size 7 × 7).
Output: The denoised image f̂.
Initialization: Initialize f̂ = 0. Set c = 1.
Iterate as follows:
Step (1) Taking the image g, the tetromino configuration c, and the number of decomposition
levels J as input arguments, obtain the multiscale tetrolet decomposition
G_T = {g^J; S_1^j; S_2^j; S_3^j, j = J, J−1, …, 1} using (3.7).
Step (2) For each decomposition level (j = 1, …, J):
    (a) Calculate the local noise variance σ̂²_{noise,j} using (3.13).
    (b) For each subband (S = S_l^j, l = 1, 2, 3):
        For each tetrolet coefficient y_{1k} ∈ S (k = 1, …, number of tetrolet coefficients in
        subband S):
            (i) Compute σ̂²_{yk} using (3.23).
            (ii) Compute σ̂_k using (3.24).
            (iii) Compute the threshold λ_k using (3.21).
            (iv) Estimate the coefficient using λ_k in (3.19).
Step (3) Get the thresholded wavelet output Ĝ using (3.15).
Step (4) Obtain the reconstructed image f_rec using (3.16).
Step (5) f̂ = f̂ + f_rec.
Step (6) Increase c (c = c + 1) and go to Step (1).
Termination criterion: If c > num_cov, stop iterating.
Output: f̂ = f̂ / num_cov
Figure 3.7: Flow of sequential execution for a given tetromino configuration
3.5 Experimental Results
The proposed image denoising methods are applied to several test images corrupted by simulated
additive white Gaussian noise at six different noise levels, σ ∈ {10, 15, 20, 25, 30, 35}. The
denoising process has been iterated over ten different noise realizations for each standard
deviation, and the results are averaged over these ten runs. The test set comprises 52 test
images in total, of which 49 have been taken from the standard grayscale image dataset
[http://decsai.ugr.es/cvg/CG/base.htm] and the remaining three are the well-known images Lena,
cameraman, and peppers. A subset of the test images, shown in Fig. 3.8, is considered in the
subsequent discussions. The graphs in Figs. 3.9, 3.11 and 3.12 are obtained using 128 × 128
images.
(a) Lena (b) Cameraman (c) Barbara (d) Man (e) Boat (f) Peppers
Figure 3.8: Images used in experiments
3.5.1 Experimental setup
We estimate the set of parameters used by the algorithms presented in Tables 3.1 and 3.2: the
number of decomposition levels J, the number of admissible tetromino coverings num_cov, and the
size |N(k)| of the local neighborhood N(k). These parameters are estimated using a set of images
different from those shown in the comparisons. Once the parameters are set, they are kept fixed
throughout the comparisons with other methods.
In the proposed methods, the wavelet coefficients are obtained using Haar-type wavelets
(tetrolets), and the results are obtained using four decomposition levels (i.e. J = 4).
Therefore, for all other wavelet-based methods (SURELET, Bayes, Bivariate), the Haar
(Daubechies-1) wavelet with four decomposition levels is used, for a fair comparison.
The denoising results are obtained by averaging the collected samples over the number of
tetromino coverings used. Different values of num_cov are considered in the experiments. In
Fig. 3.9, the PSNR values of the denoised images obtained with our locally adaptive
thresholding-based denoising method are plotted against the number of tetromino coverings being
averaged.
It can be observed that redundancy improves the denoising performance considerably. The
performance improves as more and more tetromino coverings are averaged: it improves rapidly at
the beginning but saturates after a certain point. The primary reason for this is duplication in
the generated tetrolet coefficients. Figure 3.10 shows the duplication in the coefficients
generated by selecting different tetromino coverings. After a certain point, a further increase
in the number of tetromino coverings does not improve the image quality by much, and dropping
the extra coverings from consideration improves the speed at very little or no cost to the image
quality. Considering this fact and based on experimental observations, num_cov is set to 16.
Another important parameter is the size |N(k)| of the neighborhood window. We use |N(k)| = 7 × 7
in our experiments; other options for |N(k)| were also tested, but 7 × 7 gave the best
performance.
3.5.2 Comparisons
The proposed methods are compared with state-of-the-art methods to assess their denoising
effectiveness. For ease of reference, we use the labels Proposed 1 and Proposed 2 for the
proposed method and its variant, respectively. The Proposed 1 method is compared with the
wavelet-based methods SURELET [Blu and Luisier (2007)], Bivariate [Sendur and Selesnick
(2002b)] and Bayes [Chang et al. (2000a)], while the variant, Proposed 2, is additionally
compared with the non-wavelet-based methods LLSURE [Qiu et al. (2013)], guided image filter
(GIF) [He et al. (2010)], fast bilateral filter (FBF) [Yang et al. (2009)] and total variation (TV)
[Rudin et al. (1992)]. All these state-of-the-art denoising approaches have been briefly discussed
in Section 2.3. The PSNR (in dB), SSIM [Wang et al. (2004)] and FOM [Pratt (2012)] values of
the denoised images relative to their original images are reported in Tables 3.3, 3.4 and 3.5,
respectively. The mathematical background for the performance metrics PSNR, SSIM and FOM
has been discussed in Section 1.5. Values marked in bold indicate the best results among all
compared methods; underlined values indicate the best results among the wavelet-based methods
compared with the Proposed 1 method; and italicized values indicate the best results among the
wavelet-based methods compared with the Proposed 2 method.
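As a quick reference for the first of these metrics, PSNR for 8-bit images can be computed as follows; this is the standard definition (higher is better), not anything specific to this thesis.

```python
import numpy as np

def psnr(reference, estimate, peak=255.0):
    """Peak signal-to-noise ratio (in dB) between an original image and its
    denoised estimate.  `peak` is the maximum possible pixel value (255 for
    8-bit grayscale images such as the test set used here)."""
    mse = np.mean((reference.astype(np.float64) - estimate.astype(np.float64)) ** 2)
    if mse == 0:
        return float('inf')
    return 10.0 * np.log10(peak ** 2 / mse)
```

SSIM and FOM are structural and edge-oriented measures, respectively; their definitions are given in Section 1.5.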
The right segment of each table shows the results of the wavelet-based denoising methods. The
results obtained with both proposed methods reveal significant gains over these methods,
especially the Bivariate and Bayes methods. Both proposed methods achieve better PSNR and
FOM results, and are close to the SURELET method on the SSIM measure. Focusing on the
Proposed 2 method, its results are comparable to those obtained with LLSURE and superior to
GIF, FBF and TV.
Figure 3.11 compares the PSNR performance of the Proposed 1 method with that of the other
state-of-the-art denoising methods; from panels (a) to (f), it can be observed that this method
performs better than the others in most cases, although the SURELET method occasionally beats
it by a small margin. Figure 3.12 makes the same comparison for the Proposed 2 method; again,
from panels (a) to (f), the proposed method performs better in most cases, with the LLSURE
method occasionally winning marginally. Figures 3.13, 3.14, 3.15 and 3.16 present a visual
comparison among all the methods, including the proposed ones, for the sample test images
Lena, Barbara, Boat and Peppers, respectively.
The Bayes wavelet-based method tends to produce smoothed results in homogeneous regions;
however, features such as edges are affected. The proposed denoising methods instead employ,
respectively, an epsilon-median filtering scheme that provides subband-dependent thresholding
and a bivariate shrinkage function that provides locally adaptive (coefficient-dependent)
thresholding. These adaptive thresholding schemes, in conjunction with the tetrolet concept and
the computation of local noise variance, effectively reduce noise while preserving the features of
the image.
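The bivariate shrinkage rule referred to above [Sendur and Selesnick (2002b)] shrinks a coefficient jointly with its parent at the coarser scale, so the threshold adapts to each coefficient. A minimal sketch of the standard rule, with illustrative parameter names (`sigma_n` is the noise standard deviation and `sigma` the local signal standard deviation; how these are estimated in the tetrolet domain is described earlier in the chapter):

```python
import numpy as np

def bivariate_shrink(w1, w2, sigma_n, sigma):
    """Bivariate shrinkage of a coefficient w1 given its parent w2.

    The effective threshold sqrt(3) * sigma_n**2 / sigma depends on the
    local signal standard deviation sigma, which is what makes the rule
    coefficient-dependent rather than subband- or scale-global.
    """
    mag = np.sqrt(np.asarray(w1, dtype=np.float64) ** 2 +
                  np.asarray(w2, dtype=np.float64) ** 2)
    gain = np.maximum(mag - np.sqrt(3.0) * sigma_n ** 2 / sigma, 0.0)
    # guard against division by zero where both coefficients vanish
    return np.where(mag > 0, w1 * gain / np.maximum(mag, 1e-12), 0.0)
```

Small coefficient pairs fall below the threshold and are zeroed, while a strong coefficient is only mildly attenuated, which is why edges survive the shrinkage.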
The SURELET method produces a similar result on edges. However, as can be perceived in
Figs. 3.13, 3.14, 3.15 and 3.16, the proposed methods outperform SURELET in homogeneous
regions, producing smoother results; this is clearly observable in the various homogeneous
regions of Fig. 3.16.
The GIF, FBF, Bivariate and Bayes methods fail to smooth images as the noise rises to higher
levels: they produce good results at lower 𝜎 values but poor denoised images at higher noise
levels. The TV method tends to oversmooth the image, so some fine structures of the original
image are not preserved in the filtered output. All these issues are largely resolved by the
proposed method and its variant.
Table 3.3: Performance of various methods as measured by PSNR (in dB)

Image       σ  |  LLSURE  GIF    FBF    TV     |  SURELET  Bivariate  Bayes  Proposed 1  Proposed 2
               |  (non-wavelet-based methods)  |  (wavelet-based methods)

Lena       10  |  31.33  30.56  30.53  30.47   |  30.94  30.02  29.97  30.45  30.72
Lena       15  |  29.09  28.23  28.21  28.40   |  28.50  27.12  27.30  28.78  29.45
Lena       20  |  27.76  26.56  26.12  26.92   |  26.99  25.14  25.96  27.25  28.25
Lena       25  |  26.38  25.52  24.14  25.84   |  26.07  23.84  23.75  26.31  26.38
Lena       30  |  25.37  24.35  23.12  24.94   |  25.04  23.10  23.09  25.54  25.70
Lena       35  |  24.45  23.89  22.78  24.34   |  24.42  22.20  22.51  24.73  24.81

Cameraman  10  |  32.93  31.86  32.10  31.21   |  32.35  30.96  30.91  32.01  32.02
Cameraman  15  |  30.65  30.10  29.60  29.02   |  29.79  27.93  28.12  30.57  30.97
Cameraman  20  |  29.20  27.72  27.71  27.68   |  28.27  25.88  26.56  28.62  28.74
Cameraman  25  |  27.75  25.93  26.07  26.55   |  26.85  24.35  25.25  27.24  27.55
Cameraman  30  |  26.55  24.65  24.72  25.69   |  25.66  23.29  24.16  26.18  26.34
Cameraman  35  |  25.54  23.62  23.70  25.13   |  24.63  21.75  23.38  25.03  25.23

Barbara    10  |  30.57  30.03  29.81  29.89   |  30.30  29.24  29.56  29.42  29.92
Barbara    15  |  28.20  27.32  27.10  27.68   |  26.84  26.43  27.12  28.15  28.68
Barbara    20  |  26.65  25.78  25.86  26.26   |  26.19  24.40  24.93  26.61  26.81
Barbara    25  |  25.64  24.13  24.88  25.25   |  25.25  23.26  23.96  25.78  25.78
Barbara    30  |  24.83  23.09  23.98  24.36   |  24.24  22.02  22.52  24.45  24.49
Barbara    35  |  23.39  22.36  23.14  23.58   |  23.58  21.42  22.14  23.72  23.72

Man        10  |  30.02  29.32  29.34  29.25   |  29.74  29.17  28.90  30.81  31.45
Man        15  |  27.76  26.90  26.89  27.25   |  27.29  26.37  26.63  28.15  28.49
Man        20  |  26.26  25.19  25.47  25.86   |  25.83  24.57  24.98  26.70  26.93
Man        25  |  25.64  24.16  24.08  24.80   |  24.98  23.19  23.72  25.24  25.34
Man        30  |  24.71  23.52  23.26  24.04   |  23.97  22.50  22.55  24.35  24.42
Man        35  |  23.80  22.68  22.44  23.26   |  22.80  21.81  21.72  23.26  23.31

Boat       10  |  30.88  30.03  29.95  29.64   |  30.68  29.96  29.76  30.87  30.55
Boat       15  |  28.60  27.62  27.69  27.43   |  28.07  27.12  27.16  28.48  28.60
Boat       20  |  26.87  25.42  25.48  26.02   |  26.51  25.00  24.99  27.18  27.29
Boat       25  |  25.76  24.32  24.49  24.97   |  25.34  23.90  24.07  25.52  25.42
Boat       30  |  24.48  23.29  23.51  24.19   |  24.61  22.53  23.07  25.01  25.03
Boat       35  |  23.81  22.59  22.87  23.55   |  23.95  21.85  21.95  24.40  24.42

Peppers    10  |  30.91  30.36  30.37  29.94   |  30.53  29.62  29.58  30.06  30.06
Peppers    15  |  28.60  27.71  27.72  27.61   |  28.01  26.63  27.04  28.29  28.31
Peppers    20  |  27.35  26.07  26.15  26.08   |  26.58  24.70  24.96  26.83  26.96
Peppers    25  |  25.86  24.48  24.73  24.87   |  25.24  23.27  23.99  25.58  25.77
Peppers    30  |  24.74  23.65  23.86  23.86   |  24.50  22.41  22.57  24.89  25.03
Peppers    35  |  23.97  22.76  22.89  23.07   |  23.86  21.13  21.59  24.28  24.39
Table 3.4: Performance of various methods as measured by SSIM

Image       σ  |  LLSURE  GIF    FBF    TV     |  SURELET  Bivariate  Bayes  Proposed 1  Proposed 2
               |  (non-wavelet-based methods)  |  (wavelet-based methods)

Lena       10  |  0.92  0.91  0.91  0.91       |  0.90  0.85  0.88  0.91  0.92
Lena       15  |  0.89  0.89  0.87  0.87       |  0.85  0.76  0.82  0.87  0.88
Lena       20  |  0.85  0.83  0.84  0.83       |  0.80  0.67  0.77  0.83  0.84
Lena       25  |  0.81  0.80  0.79  0.80       |  0.76  0.60  0.65  0.78  0.78
Lena       30  |  0.78  0.76  0.74  0.77       |  0.72  0.54  0.62  0.74  0.72
Lena       35  |  0.75  0.74  0.71  0.74       |  0.68  0.48  0.56  0.70  0.65

Cameraman  10  |  0.93  0.91  0.92  0.91       |  0.90  0.80  0.82  0.90  0.91
Cameraman  15  |  0.89  0.89  0.90  0.88       |  0.85  0.67  0.75  0.85  0.87
Cameraman  20  |  0.86  0.85  0.83  0.86       |  0.80  0.57  0.69  0.81  0.82
Cameraman  25  |  0.82  0.80  0.81  0.83       |  0.76  0.50  0.65  0.76  0.76
Cameraman  30  |  0.79  0.77  0.75  0.81       |  0.72  0.44  0.62  0.72  0.73
Cameraman  35  |  0.76  0.73  0.72  0.80       |  0.68  0.40  0.59  0.70  0.70

Barbara    10  |  0.92  0.90  0.91  0.91       |  0.91  0.87  0.89  0.91  0.92
Barbara    15  |  0.88  0.87  0.89  0.86       |  0.85  0.77  0.83  0.87  0.89
Barbara    20  |  0.83  0.81  0.81  0.82       |  0.80  0.69  0.75  0.83  0.84
Barbara    25  |  0.80  0.80  0.79  0.78       |  0.76  0.62  0.71  0.78  0.79
Barbara    30  |  0.77  0.78  0.77  0.74       |  0.73  0.55  0.62  0.74  0.73
Barbara    35  |  0.74  0.71  0.72  0.71       |  0.69  0.49  0.60  0.70  0.69

Man        10  |  0.90  0.91  0.92  0.89       |  0.89  0.87  0.88  0.89  0.89
Man        15  |  0.85  0.84  0.86  0.83       |  0.83  0.78  0.81  0.84  0.85
Man        20  |  0.80  0.79  0.78  0.78       |  0.78  0.73  0.75  0.80  0.81
Man        25  |  0.75  0.72  0.70  0.74       |  0.74  0.62  0.68  0.75  0.76
Man        30  |  0.71  0.69  0.68  0.70       |  0.69  0.55  0.61  0.71  0.70
Man        35  |  0.69  0.68  0.66  0.66       |  0.65  0.50  0.56  0.67  0.67

Boat       10  |  0.92  0.91  0.92  0.90       |  0.90  0.85  0.87  0.91  0.90
Boat       15  |  0.87  0.88  0.89  0.85       |  0.85  0.75  0.80  0.86  0.88
Boat       20  |  0.83  0.82  0.81  0.81       |  0.79  0.66  0.68  0.81  0.83
Boat       25  |  0.79  0.81  0.79  0.77       |  0.74  0.58  0.66  0.76  0.77
Boat       30  |  0.74  0.72  0.70  0.73       |  0.70  0.52  0.62  0.72  0.71
Boat       35  |  0.71  0.69  0.69  0.70       |  0.66  0.47  0.52  0.68  0.67

Peppers    10  |  0.93  0.91  0.90  0.93       |  0.91  0.86  0.88  0.91  0.93
Peppers    15  |  0.90  0.88  0.89  0.89       |  0.86  0.77  0.82  0.88  0.90
Peppers    20  |  0.87  0.86  0.83  0.86       |  0.82  0.68  0.75  0.83  0.86
Peppers    25  |  0.84  0.82  0.81  0.83       |  0.78  0.62  0.72  0.78  0.81
Peppers    30  |  0.81  0.79  0.78  0.80       |  0.74  0.56  0.63  0.74  0.76
Peppers    35  |  0.79  0.76  0.74  0.77       |  0.71  0.51  0.58  0.72  0.73
Table 3.5: Performance of various methods as measured by FOM

Image       σ  |  LLSURE  GIF    FBF    TV     |  SURELET  Bivariate  Bayes  Proposed 1  Proposed 2
               |  (non-wavelet-based methods)  |  (wavelet-based methods)

Lena       10  |  0.92  0.92  0.91  0.90       |  0.94  0.96  0.94  0.90  0.91
Lena       15  |  0.90  0.90  0.89  0.88       |  0.92  0.95  0.93  0.89  0.90
Lena       20  |  0.86  0.85  0.86  0.83       |  0.91  0.93  0.93  0.90  0.91
Lena       25  |  0.84  0.84  0.83  0.81       |  0.89  0.90  0.90  0.91  0.93
Lena       30  |  0.84  0.82  0.83  0.80       |  0.88  0.88  0.88  0.88  0.89
Lena       35  |  0.82  0.82  0.81  0.76       |  0.88  0.83  0.88  0.87  0.88

Cameraman  10  |  0.88  0.88  0.87  0.88       |  0.90  0.95  0.94  0.91  0.92
Cameraman  15  |  0.86  0.85  0.85  0.86       |  0.89  0.92  0.93  0.89  0.89
Cameraman  20  |  0.83  0.83  0.85  0.81       |  0.89  0.89  0.88  0.88  0.89
Cameraman  25  |  0.81  0.82  0.83  0.77       |  0.87  0.78  0.78  0.86  0.87
Cameraman  30  |  0.78  0.80  0.81  0.75       |  0.84  0.65  0.72  0.83  0.84
Cameraman  35  |  0.79  0.80  0.78  0.72       |  0.80  0.64  0.69  0.80  0.82

Barbara    10  |  0.92  0.93  0.92  0.89       |  0.93  0.96  0.95  0.91  0.92
Barbara    15  |  0.89  0.90  0.90  0.84       |  0.92  0.94  0.94  0.91  0.92
Barbara    20  |  0.82  0.86  0.85  0.81       |  0.90  0.92  0.92  0.90  0.91
Barbara    25  |  0.82  0.85  0.83  0.78       |  0.89  0.90  0.90  0.90  0.91
Barbara    30  |  0.80  0.81  0.81  0.75       |  0.87  0.88  0.88  0.89  0.91
Barbara    35  |  0.76  0.77  0.80  0.72       |  0.86  0.86  0.87  0.88  0.90

Man        10  |  0.92  0.90  0.91  0.89       |  0.93  0.96  0.95  0.92  0.93
Man        15  |  0.87  0.86  0.86  0.84       |  0.91  0.95  0.94  0.91  0.92
Man        20  |  0.82  0.84  0.83  0.82       |  0.90  0.92  0.92  0.91  0.92
Man        25  |  0.77  0.78  0.81  0.76       |  0.88  0.89  0.90  0.89  0.90
Man        30  |  0.75  0.74  0.78  0.72       |  0.87  0.87  0.88  0.88  0.89
Man        35  |  0.73  0.72  0.76  0.69       |  0.86  0.84  0.85  0.87  0.88

Boat       10  |  0.95  0.93  0.92  0.92       |  0.95  0.94  0.96  0.94  0.94
Boat       15  |  0.92  0.91  0.91  0.90       |  0.94  0.92  0.93  0.93  0.95
Boat       20  |  0.90  0.90  0.91  0.84       |  0.92  0.88  0.90  0.92  0.93
Boat       25  |  0.85  0.86  0.87  0.83       |  0.87  0.85  0.85  0.90  0.93
Boat       30  |  0.83  0.84  0.82  0.75       |  0.85  0.81  0.82  0.86  0.88
Boat       35  |  0.80  0.79  0.80  0.75       |  0.82  0.75  0.79  0.82  0.84

Peppers    10  |  0.94  0.94  0.92  0.93       |  0.96  0.96  0.96  0.93  0.94
Peppers    15  |  0.90  0.89  0.89  0.88       |  0.93  0.95  0.94  0.92  0.92
Peppers    20  |  0.85  0.84  0.86  0.84       |  0.92  0.92  0.93  0.90  0.90
Peppers    25  |  0.84  0.83  0.84  0.81       |  0.89  0.91  0.91  0.89  0.92
Peppers    30  |  0.81  0.80  0.80  0.78       |  0.87  0.89  0.90  0.89  0.90
Peppers    35  |  0.80  0.78  0.79  0.76       |  0.89  0.87  0.88  0.89  0.90
Figure 3.9: PSNR vs. number of tetromino coverings being averaged; panels (a)–(f) correspond
to σ = 10, 15, 20, 25, 30 and 35, respectively.
Figure 3.10: Duplicate tetrolet coefficients in two different tetromino coverings (the highlighted
pixels generate the same tetrolet coefficients).
Figure 3.11: PSNR performance graphs, panels (a)–(f), of the Proposed 1 method for the test images
Figure 3.12: PSNR performance graphs, panels (a)–(f), of the Proposed 2 method for the test images
(a) Original, (b) Noisy (σ = 30), (c) LLSURE, (d) GIF, (e) FBF, (f) TV, (g) SURELET,
(h) Bivariate, (i) Bayes, (j) Proposed 1, (k) Proposed 2
Figure 3.13: Denoising results for image ‘Lena’
(a) Original, (b) Noisy (σ = 30), (c) LLSURE, (d) GIF, (e) FBF, (f) TV, (g) SURELET,
(h) Bivariate, (i) Bayes, (j) Proposed 1, (k) Proposed 2
Figure 3.14: Denoising results for image ‘Barbara’
(a) Original, (b) Noisy (σ = 30), (c) LLSURE, (d) GIF, (e) FBF, (f) TV, (g) SURELET,
(h) Bivariate, (i) Bayes, (j) Proposed 1, (k) Proposed 2
Figure 3.15: Denoising results for image ‘Boat’
(a) Original, (b) Noisy (σ = 30), (c) LLSURE, (d) GIF, (e) FBF, (f) TV, (g) SURELET,
(h) Bivariate, (i) Bayes, (j) Proposed 1, (k) Proposed 2
Figure 3.16: Denoising results for image ‘Peppers’
3.6 Conclusions
In this chapter, we have presented an adaptive method and its variant in the tetrolet domain to
achieve edge-preserving image denoising. In the proposed method, a subband-adaptive epsilon-
median filtering scheme, based on the estimation of local noise variance, is employed to
threshold the tetrolet coefficients. This method has several desirable features. First, the
redundancy of tetrolet coefficients is exploited to achieve a significant gain in denoising
performance. Second, estimating the noise variance used in the computation of the thresholds
locally at each resolution scale is beneficial because it takes the noise strength at that scale into
account. Third, thresholding is done in a subband-dependent manner, which is an improvement
over conventional thresholding schemes that use a universal threshold.
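For contrast, the conventional baseline improved upon here applies one global soft threshold, Donoho's universal threshold T = σ√(2 ln N), to every detail coefficient regardless of subband or scale. A minimal sketch of that baseline (not the proposed method; the function name is illustrative):

```python
import numpy as np

def universal_soft_threshold(coeffs, sigma_n):
    """Non-adaptive baseline: soft-threshold all detail coefficients with
    the universal threshold T = sigma_n * sqrt(2 ln N), where N is the
    number of coefficients.  Because T is global, it tends to suppress
    edge coefficients along with the noise."""
    t = sigma_n * np.sqrt(2.0 * np.log(coeffs.size))
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - t, 0.0)
```

The subband-dependent scheme replaces this single T with thresholds driven by the locally estimated noise variance of each subband.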
In the variant of this method, a bivariate shrinkage function, which provides coefficient-
dependent thresholding, is employed. This variant is an extension of the first method: it inherits
the exploitation of redundancy and the local computation of the noise level, and incorporates
further aspects. First, the interscale statistical dependencies between tetrolet coefficients are
exploited, since this property of the coefficients also contributes to efficient denoising. Second,
thresholding is done in a coefficient-dependent manner rather than the subband-dependent
manner of the first method, which characterizes the local features of the image more effectively.
The proposed method denoises square natural grayscale images whose dimensions are powers of
two. If an image does not satisfy this requirement, it has to be extended into a suitable input
image. The quantitative and qualitative analysis of the experimental results indicates that the
proposed methods produce superior results compared with the wavelet-transform-based methods
and results comparable to the other well-known denoising methods.
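One way such an extension could look is mirror padding up to the next power-of-two square; the padding mode is an assumption for illustration, as the chapter does not prescribe a particular extension scheme.

```python
import numpy as np

def pad_to_pow2_square(img):
    """Extend an image so that both dimensions equal the next power of two
    of the larger side, yielding a square power-of-two input as required
    before the tetrolet decomposition.  Mirror (symmetric) padding is one
    reasonable choice; the padded region is discarded after denoising."""
    h, w = img.shape
    n = 1 << int(np.ceil(np.log2(max(h, w))))
    return np.pad(img, ((0, n - h), (0, n - w)), mode='symmetric')
```

After denoising, cropping back to the original height and width recovers the image at its true size.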