EELE 5310: Digital Image Processing
Lecture 2, Ch. 3
Eng. Ruba A. Salamah
rsalamah@iugaza.edu
Image Enhancement in the Spatial Domain
Lecture Reading
3.1 Background
3.2 Some Basic Gray-Level Transformations
3.2.1 Image Negatives
3.2.2 Log Transformations
3.2.3 Power-Law Transformations
3.2.4 Piecewise-Linear Transformation Functions
Contrast stretching
Gray-level slicing
Bit-plane slicing
Principal Objective of Enhancement
Process an image so that the result is more suitable than the original image for a specific application.
Suitability is defined by each application.
A method that is quite useful for enhancing one image may not be the best approach for enhancing another image.
Two Domains
Spatial domain (image plane): techniques based on direct manipulation of pixels in an image.
Frequency domain: techniques based on modifying the Fourier transform of an image.
There are also enhancement techniques based on various combinations of methods from these two categories.
Good Images
For human viewing:
The visual evaluation of image quality is a highly subjective process.
It is hard to standardize the definition of a good image.
A certain amount of trial and error is usually required before a particular image enhancement approach is selected.
Spatial Domain
Procedures that operate directly on pixels:
g(x,y) = T[f(x,y)]
where
f(x,y) is the input image
g(x,y) is the processed image
T is an operator on f defined over some neighborhood of (x,y)
Mask/Filter
The neighborhood of a point (x,y) can be defined using a square/rectangular (most common) or circular subimage area centered at (x,y).
The center of the subimage is moved from pixel to pixel, starting at the top-left corner.
Mask Processing or Filtering
The neighborhood is bigger than 1×1 pixel.
Use a function of the values of f in a predefined neighborhood of (x,y) to determine the value of g at (x,y).
The values of the mask coefficients determine the nature of the process.
Used in techniques such as image sharpening and image smoothing.
Point Processing
Neighborhood = 1×1 pixel: g depends only on the value of f at (x,y).
T = gray-level (or intensity, or mapping) transformation function:
s = T(r)
where
r = gray level of f(x,y)
s = gray level of g(x,y)
Because enhancement at any point in an image depends only on the gray level at that point, techniques in this category are often referred to as point processing.
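Since s = T(r) depends only on the gray level r, any point transformation of an 8-bit image can be implemented as a 256-entry lookup table. A minimal NumPy sketch (the helper name `point_process` and the sample values are mine, not from the slides):

```python
import numpy as np

def point_process(img, T, L=256):
    """Apply a point transformation s = T(r) via a lookup table."""
    # Evaluate T once per gray level instead of once per pixel.
    lut = np.array([T(r) for r in range(L)], dtype=np.uint8)
    return lut[img]

img = np.array([[0, 100], [200, 255]], dtype=np.uint8)
identity = point_process(img, lambda r: r)  # identity T leaves the image unchanged
```

Every transformation in this chapter (negative, log, power-law, slicing) fits this pattern by swapping in a different T.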
Contrast Stretching
Produces higher contrast than the original by:
darkening the levels below m in the original image
brightening the levels above m in the original image
Thresholding
Produces a two-level (binary) image.
T(r) is called a thresholding function.
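A thresholding function as described above can be sketched in NumPy as follows (the function name and test values are mine):

```python
import numpy as np

def threshold(img, m, L=256):
    """Thresholding: map gray levels above m to L-1, the rest to 0."""
    return np.where(img > m, L - 1, 0).astype(np.uint8)

binary = threshold(np.array([[10, 200], [128, 129]], dtype=np.uint8), m=128)
```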
Three Basic Gray-Level Transformation Functions
Linear functions: negative and identity transformations
Logarithmic functions: log and inverse-log transformations
Power-law functions: nth-power and nth-root transformations
[Plot: the basic transformation curves (negative, log, nth root, nth power, identity, inverse log) against input gray level r]
Image Negatives
An image with gray levels in the range [0, L-1], where L = 2^n; n = 1, 2, …
Negative transformation:
s = L - 1 - r
Reverses the intensity levels of an image.
Suitable for enhancing white or gray detail embedded in dark regions of an image, especially when the black areas are dominant in size.
Image Negatives
Example (L = 256):

Original:        Negative:
1    5    1      254  250  254
5    255  5      250  0    250
1    5    0      254  250  255
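The negative transformation s = L - 1 - r, applied to the 3×3 example above, can be sketched as (the helper name `negative` is mine):

```python
import numpy as np

def negative(img, L=256):
    """Negative transformation s = L - 1 - r."""
    return (L - 1) - img

original = np.array([[1, 5, 1],
                     [5, 255, 5],
                     [1, 5, 0]], dtype=np.uint8)
neg = negative(original)  # reproduces the negative table above
```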
Log Transformations
s = c log(1 + r), where c is a constant and r ≥ 0.
The log curve maps a narrow range of low gray-level values in the input image into a wider range of output levels; the opposite is true for higher values.
Used to expand the values of dark pixels in an image while compressing the higher-level values.
Example of Logarithm Image
17
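A log transform like the one above can be sketched as follows; choosing c = (L-1)/log(L) so that r = L-1 maps to s = L-1 is a common convention, not something the slides specify:

```python
import numpy as np

def log_transform(img, L=256):
    """s = c*log(1 + r), with c chosen so that r = L-1 maps to s = L-1."""
    c = (L - 1) / np.log(L)
    s = c * np.log1p(img.astype(np.float64))
    return np.rint(s).astype(np.uint8)

out = log_transform(np.array([[0, 10, 255]], dtype=np.uint8))
```

Note how the dark value 10 is pushed well above 10 while 255 stays fixed: dark pixels are expanded, bright ones compressed.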
Inverse Logarithm Transformations
Does the opposite of the log transformation.
Used to expand the values of bright pixels in an image while compressing the darker-level values.
Power-Law Transformations
s = c r^γ, where c and γ are positive constants.
Power-law curves with fractional values of γ map a narrow range of dark input values into a wider range of output values; the opposite is true for higher input levels.
c = γ = 1 gives the identity function.
[Plots of s = c r^γ for various values of γ (c = 1 in all cases), against input gray level r]
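The power-law transformation s = c r^γ can be sketched as follows, computing on intensities normalized to [0, 1] (the normalization and the helper name are my choices):

```python
import numpy as np

def power_law(img, gamma, c=1.0, L=256):
    """s = c * r**gamma on intensities normalized to [0, 1]."""
    r = img.astype(np.float64) / (L - 1)
    s = c * r ** gamma
    return np.rint(np.clip(s, 0.0, 1.0) * (L - 1)).astype(np.uint8)

img = np.array([[0, 64, 255]], dtype=np.uint8)
bright = power_law(img, gamma=0.4)  # fractional gamma expands dark values
dark = power_law(img, gamma=2.5)    # gamma > 1 compresses dark values
```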
Example 1: Gamma Correction
Cathode ray tube (CRT) devices have an intensity-to-voltage response that is a power function, with γ varying from 1.8 to 2.5, so the displayed picture becomes darker than intended.
Gamma correction is done by preprocessing the image before inputting it to the monitor with s = c r^(1/γ).
[Diagram: image → gamma correction (γ = 1/2.5 = 0.4) → monitor (γ = 2.5)]
Example 2: MRI
(a) The picture is predominantly dark.
(b) Result after power-law transformation with γ = 0.6.
(c) Transformation with γ = 0.4 (best result).
(d) Transformation with γ = 0.3 (washed-out look).
Going below γ = 0.3 would wash the image out to an unacceptable level.
Example 3
(a) The image has a washed-out appearance; it needs a compression of gray levels (γ > 1).
(b) Result after power-law transformation with γ = 3.0 (suitable).
(c) Transformation with γ = 4.0 (suitable).
(d) Transformation with γ = 5.0 (high contrast; the image has areas that are too dark, and some detail is lost).
Piecewise-Linear Transformation Functions
Advantage: the form of piecewise functions can be arbitrarily complex.
Disadvantage: their specification requires considerably more user input.
Contrast Stretching
Increases the dynamic range of the gray levels in the image.
(b) a low-contrast image
(c) result of contrast stretching: (r1,s1) = (rmin, 0) and (r2,s2) = (rmax, L-1)
(d) result of thresholding
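The special case (c), with (r1,s1) = (rmin, 0) and (r2,s2) = (rmax, L-1), reduces to a single linear stretch and can be sketched as (helper name and sample values mine):

```python
import numpy as np

def contrast_stretch(img, L=256):
    """Stretch using (r1,s1) = (rmin, 0) and (r2,s2) = (rmax, L-1)."""
    f = img.astype(np.float64)
    rmin, rmax = f.min(), f.max()
    if rmax == rmin:               # flat image: nothing to stretch
        return img.copy()
    out = (f - rmin) / (rmax - rmin) * (L - 1)
    return np.rint(out).astype(np.uint8)

low = np.array([[100, 150], [125, 175]], dtype=np.uint8)
stretched = contrast_stretch(low)  # now spans the full [0, 255] range
```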
Gray-Level Slicing
Highlights a specific range of gray levels in an image: display a high value for all gray levels in the range of interest and a low value for all other gray levels.
(a) transformation that highlights range [A,B] of gray levels and reduces all others to a constant level (results in a binary image)
(b) transformation that highlights range [A,B] but preserves all other levels
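Both slicing variants (a) and (b) can be sketched with one helper (the function name, flag, and sample values are mine):

```python
import numpy as np

def gray_level_slice(img, A, B, high=255, low=0, preserve=False):
    """Highlight [A, B]: variant (a) sets other levels to `low`;
    variant (b) (preserve=True) leaves them unchanged."""
    in_range = (img >= A) & (img <= B)
    background = img if preserve else np.full_like(img, low)
    return np.where(in_range, high, background).astype(np.uint8)

img = np.array([[10, 100, 200]], dtype=np.uint8)
binary = gray_level_slice(img, A=50, B=150)               # variant (a)
kept = gray_level_slice(img, A=50, B=150, preserve=True)  # variant (b)
```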
Bit-Plane Slicing
Highlights the contribution made to total image appearance by specific bits.
Suppose each pixel is represented by 8 bits (one byte): bit plane 7 is the most significant, bit plane 0 the least significant.
Higher-order bits contain the majority of the visually significant data.
Useful for analyzing the relative importance played by each bit of the image.
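Extracting a single bit plane is a shift and a mask; a minimal sketch (sample values mine):

```python
import numpy as np

def bit_plane(img, k):
    """Extract bit plane k (0 = least significant, 7 = most significant)."""
    return (img >> k) & 1

img = np.array([[129, 64]], dtype=np.uint8)  # 129 = 0b10000001, 64 = 0b01000000
top = bit_plane(img, 7)
bottom = bit_plane(img, 0)
```

Summing the planes weighted by 2^k reconstructs the original image exactly, which is why the decomposition loses no information.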
Example
An 8-bit fractal image.
8 Bit Planes
[Figure: bit planes 7 (most significant) through 0 (least significant) of the fractal image]
Next Lecture Reading
3.3 Histogram Processing
3.3.1 Histogram Equalization
3.3.2 Histogram Specification
3.4 Enhancement Using Arithmetic/Logic Operations
3.4.1 Image Subtraction
3.4.2 Image Averaging
Histogram Processing
The histogram of a digital image with gray levels in the range [0, L-1] is a discrete function
h(r_k) = n_k
where
r_k : the kth gray level
n_k : the number of pixels in the image having gray level r_k
h(r_k) : histogram of the digital image at gray level r_k
Example
4×4 image, gray scale = [0, 9]:

2 3 3 2
4 2 4 3
3 2 3 5
2 4 2 4

Histogram:
Gray level:     0 1 2 3 4 5 6 7 8 9
No. of pixels:  0 0 6 5 4 1 0 0 0 0
Normalized Histogram
Obtained by dividing each histogram value at gray level r_k by the total number of pixels in the image, n:
p(r_k) = n_k / n, for k = 0, 1, …, L-1
p(r_k) gives an estimate of the probability of occurrence of gray level r_k.
The sum of all components of a normalized histogram is equal to 1.
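Computing h(r_k) and p(r_k) for the 4×4 example above can be sketched as (the helper name is mine):

```python
import numpy as np

def normalized_histogram(img, L):
    """Return h(r_k) = n_k and p(r_k) = n_k / n for k = 0, ..., L-1."""
    h = np.bincount(img.ravel(), minlength=L)  # n_k
    return h, h / img.size                      # h(r_k), p(r_k)

img = np.array([[2, 3, 3, 2],
                [4, 2, 4, 3],
                [3, 2, 3, 5],
                [2, 4, 2, 4]])
h, p = normalized_histogram(img, L=10)  # the 4x4 example, gray scale [0, 9]
```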
Histogram Processing
The basis for numerous spatial-domain processing techniques.
Used effectively for image enhancement.
Information inherent in histograms is also useful in image compression and segmentation.
Example
Dark image: components of the histogram are concentrated on the low side of the gray scale.
Bright image: components of the histogram are concentrated on the high side of the gray scale.
Example
Low-contrast image: histogram is narrow and centered toward the middle of the gray scale.
High-contrast image: histogram covers a broad range of the gray scale, and the distribution of pixels is not too far from uniform, with very few vertical lines being much higher than the others.
Histogram Equalization
Since a low-contrast image's histogram is narrow and centered toward the middle of the gray scale, distributing the histogram over a wider range improves the quality of the image.
We can do this by adjusting the probability density function of the original histogram of the image so that the probability is spread equally.
Histogram Transformation
s = T(r), where 0 ≤ r ≤ 1, and T(r) satisfies:
(a) T(r) is single-valued and monotonically increasing in the interval 0 ≤ r ≤ 1
(b) 0 ≤ T(r) ≤ 1 for 0 ≤ r ≤ 1
[Figure: a transformation curve T(r) mapping r_k to s_k = T(r_k)]
Two Conditions on T(r)
Single-valued (one-to-one) guarantees that the inverse transformation will exist.
The monotonicity condition preserves the increasing order from black to white in the output image.
0 ≤ T(r) ≤ 1 for 0 ≤ r ≤ 1 guarantees that the output gray levels will be in the same range as the input levels.
The inverse transformation from s back to r is
r = T⁻¹(s); 0 ≤ s ≤ 1
Probability Density Function
The gray levels in an image may be viewed as random variables in the interval [0, 1].
The PDF is one of the fundamental descriptors of a random variable.
Applied to Images
Let p_r(r) denote the PDF of random variable r, and p_s(s) the PDF of random variable s.
If p_r(r) and T(r) are known and T⁻¹(s) satisfies condition (a), then p_s(s) can be obtained from:
p_s(s) = p_r(r) |dr/ds|
The PDF of the transformed variable s is determined by the gray-level PDF of the input image and by the chosen transformation function.
Transformation Function
A transformation function of particular importance in DIP is the cumulative distribution function:
s = T(r) = ∫₀ʳ p_r(w) dw
Thus:
ds/dr = dT(r)/dr = d/dr [ ∫₀ʳ p_r(w) dw ] = p_r(r)
Substituting into p_s(s) = p_r(r) |dr/ds| yields:
p_s(s) = p_r(r) · 1/p_r(r) = 1, where 0 ≤ s ≤ 1
Transformation Function
It is important to note that although T(r) depends on p_r(r), the resulting p_s(s) is always uniform, independent of the form of p_r(r):
s = T(r) = ∫₀ʳ p_r(w) dw yields a random variable s characterized by a uniform probability density function, p_s(s) = 1 on [0, 1].
Discrete Transformation Function
The probability of occurrence of gray level r_k in an image is approximated by
p_r(r_k) = n_k / n, where k = 0, 1, …, L-1
The discrete version of the transformation is
s_k = T(r_k) = Σ_{j=0}^{k} p_r(r_j) = Σ_{j=0}^{k} n_j / n, where k = 0, 1, …, L-1
Histogram Equalization Implementation
Thus, an output image is obtained by mapping each pixel with level r_k in the input image into a corresponding pixel with level s_k in the output image such that:
s_k = Σ_{j=0}^{k} n_j / n, where k = 0, 1, …, L-1
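The mapping above produces s_k in [0, 1]; for an 8-bit display it is conventional (my choice here, not stated on the slides) to rescale by L-1 and round. A minimal sketch:

```python
import numpy as np

def histogram_equalize(img, L=256):
    """s_k = sum_{j<=k} n_j / n, rescaled to [0, L-1] for display."""
    hist = np.bincount(img.ravel(), minlength=L)   # n_k
    cdf = np.cumsum(hist) / img.size               # s_k in [0, 1]
    lut = np.rint((L - 1) * cdf).astype(np.uint8)  # display scaling
    return lut[img]

img = np.array([[0, 0, 255, 255]], dtype=np.uint8)
eq = histogram_equalize(img)
```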
Example
Histogram Processing
Transformation functions used to equalize the histograms
Example
Assume a 10×10 image (100 pixels) with a given normalized histogram over the levels r1 = 0.1, …, r10 = 1.0. For example, the level 0.1 appears 19 times, so its normalized histogram value is 19/100 = 0.19. Applying the discrete transformation (cumulative sums) gives:

s1   s2   s3   s4   s5   s6   s7   s8   s9   s10
0.19 0.37 0.55 0.70 0.83 0.89 0.93 0.95 0.96 1.00

In the histogram-equalized image, s5 = 0.83 will have a corresponding histogram value of 0.13 (the same as r5).
[Plots: the original normalized histogram and the equalized histogram, levels 0.1-1.0, vertical axis 0-0.2]
Histogram Matching (Specification)
Histogram equalization has the disadvantage that it can generate only one type of output image.
With histogram specification, we can specify the shape of the histogram that we wish the output image to have.
It doesn't have to be a uniform histogram.
Consider the continuous domain.
Let p_r(r) denote the continuous probability density function of the gray levels of the input image, r.
Let p_z(z) denote the desired (specified) continuous probability density function of the gray levels of the output image, z.
Let s be a random variable with the property
s = T(r) = ∫₀ʳ p_r(w) dw
We need to find z such that:
G(z) = ∫₀ᶻ p_z(t) dt = s
Thus s = T(r) = G(z), and therefore z = G⁻¹(s) = G⁻¹[T(r)].
Assuming G⁻¹ exists and satisfies conditions (a) and (b), we can map an input gray level r to an output gray level z.
Discrete Formulation
s_k = T(r_k) = Σ_{j=0}^{k} p_r(r_j) = Σ_{j=0}^{k} n_j / n,  k = 0, 1, 2, …, L-1
G(z_k) = Σ_{i=0}^{k} p_z(z_i) = s_k,  k = 0, 1, 2, …, L-1
z_k = G⁻¹[T(r_k)] = G⁻¹(s_k),  k = 0, 1, 2, …, L-1
Histogram Matching Implementation
1. Find the histogram of the input image, and find the histogram-equalization mapping:
s_k = Σ_{j=0}^{k} p_r(r_j)
2. Specify the desired histogram, and find its histogram-equalization mapping:
v_k = Σ_{j=0}^{k} p_z(z_j)
3. For every value of s_k, use the stored values of G to find the corresponding minimum value z_q such that G(z_q) ≥ s_k.
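The three steps above can be sketched as follows; `np.searchsorted` on the target cumulative histogram G implements step 3's "smallest z_q with G(z_q) ≥ s_k" directly (the helper name and the tiny 4-level test case are mine):

```python
import numpy as np

def match_histogram(img, target_hist, L):
    """Steps 1-3: s_k from the input, G from the target histogram,
    then map each s_k to the smallest z_q with G(z_q) >= s_k."""
    s = np.cumsum(np.bincount(img.ravel(), minlength=L)) / img.size  # T(r_k)
    G = np.cumsum(target_hist) / np.sum(target_hist)                 # G(z_k)
    lut = np.searchsorted(G, s)            # smallest q with G[q] >= s_k
    lut = np.minimum(lut, L - 1).astype(np.uint8)
    return lut[img]

img = np.array([[0, 0, 1, 3]], dtype=np.uint8)
matched = match_histogram(img, target_hist=[1, 1, 1, 1], L=4)
```

Matching to a flat target histogram, as here, reduces to histogram equalization.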
Histogram Matching Example
Consider an 8-level image with the shown histogram. Match it to the image with the given target histogram.
1. Equalize the histogram of the input image using the transform s = T(r).
2. Equalize the desired histogram: v = G(z).
3. Set v = s to obtain the composite transform z = G⁻¹[T(r)]:

k  r_k  s_k  v_k  z_k
0  0    1    0    3
1  1    3    0    4
2  2    5    0    5
3  3    6    1    6
4  4    6    2    6
5  5    7    5    7
6  6    7    6    7
7  7    7    7    7
Example
Image of Mars moon: the image is dominated by large, dark areas, resulting in a histogram characterized by a large concentration of pixels in the dark end of the gray scale.
Image Equalization
[Figures: result image after histogram equalization, the transformation function used, and the histogram of the result image]
Histogram Equalization
Solving the problem:
Since the problem with the histogram-equalization transformation function was caused by a large concentration of pixels in the original image with levels near 0,
a reasonable approach is to modify the histogram of that image so that it does not have this property.
Histogram Specification
(1) the transformation function G(z) obtained from the specified histogram
(2) the inverse transformation G⁻¹(s)
Result Image and Its Histogram
Original image and the output image's histogram after applying histogram matching.
Notes
Histogram specification is a trial-and-error process.
There are no rules for specifying histograms; one must resort to analysis on a case-by-case basis for any given enhancement task.
Readings
From the book, the following sections:
3.4: Enhancement Using Arithmetic/Logic Operations
3.4.1: Image Subtraction
3.4.2: Image Averaging
Enhancement Using Arithmetic/Logic Operations
Arithmetic/logic operations are performed on a pixel-by-pixel basis between two or more images, except the NOT operation, which is performed on a single image.
We need only be concerned with the ability to implement the AND, OR, and NOT operations, which are functionally complete.
Pixel values are processed as strings of binary numbers.
The NOT operation is equivalent to the negative transformation.
Example of AND Operation
The AND and OR operations are used for masking, i.e., for selecting subimages in an image (region-of-interest, ROI, processing).
[Figures: original image, AND image mask, result of AND operation]
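Because pixel values are treated as binary strings, masking is just elementwise bitwise AND (or OR) against a mask whose ROI pixels are all-ones (255). A minimal sketch with made-up 2×2 values:

```python
import numpy as np

img = np.array([[200, 50],
                [120, 255]], dtype=np.uint8)
mask = np.array([[255, 0],
                 [255, 0]], dtype=np.uint8)  # white (255) marks the ROI

roi = img & mask     # AND keeps pixels where the mask is all-ones
filled = img | mask  # OR forces masked pixels to white instead
```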
Example of OR Operation
[Figures: original image, OR image mask, result of OR operation]
Image Subtraction
The difference between two images f(x,y) and h(x,y),
g(x,y) = f(x,y) - h(x,y)
is obtained by computing the difference between all pairs of corresponding pixels from f and h.
The key usefulness of subtraction is the enhancement of differences between images.
Image Subtraction
Mask-mode radiography
Note:
The background is dark because it does not change much between the two images.
The difference area is bright because it changes significantly.
Note
The values in a difference image can range from -255 to 255, so some sort of scaling is required.
One method is to add 255 to every pixel and then divide by 2.
Another method is min-max normalization:
new_value = (value - minA) / (maxA - minA) × (new_maxA - new_minA) + new_minA
where minA and maxA are the minimum and maximum values of the difference image, new_maxA = 255, and new_minA = 0.
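The min-max scaling described above can be sketched as follows; note the cast to a signed type first, since subtracting uint8 images directly would wrap around (the helper name and sample values are mine):

```python
import numpy as np

def scaled_difference(f, h):
    """g = f - h, rescaled to [0, 255] by min-max normalization."""
    d = f.astype(np.int16) - h.astype(np.int16)  # avoid uint8 wraparound
    dmin, dmax = int(d.min()), int(d.max())
    if dmax == dmin:                             # constant difference image
        return np.zeros(d.shape, dtype=np.uint8)
    out = (d - dmin) / (dmax - dmin) * 255       # new_min = 0, new_max = 255
    return np.rint(out).astype(np.uint8)

f = np.array([[255, 0, 100]], dtype=np.uint8)
h = np.array([[0, 255, 100]], dtype=np.uint8)
g = scaled_difference(f, h)
```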
Image Averaging
Consider a noisy image g(x,y) formed by the addition of noise η(x,y) to an original image f(x,y):
g(x,y) = f(x,y) + η(x,y)
Image Averaging
If the noise has zero mean and is uncorrelated, consider the image formed by averaging K different noisy images:
ḡ(x,y) = (1/K) Σ_{i=1}^{K} g_i(x,y)
Then it can be shown that
E{ḡ(x,y)} = f(x,y)  and  σ²_ḡ(x,y) = (1/K) σ²_η(x,y)
Thus, as K increases, the variability (noise) of the pixel value at each location (x,y) decreases.
This means that ḡ(x,y) approaches f(x,y) as the number of noisy images used in the averaging process increases.
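The σ²/K behavior above can be checked numerically: averaging K = 64 noisy copies should shrink a noise standard deviation of 20 to roughly 20/√64 = 2.5 (the flat test image and seed are my choices):

```python
import numpy as np

rng = np.random.default_rng(0)
f = np.full((64, 64), 100.0)  # noise-free image
K = 64
noisy = [f + rng.normal(0.0, 20.0, size=f.shape) for _ in range(K)]
g_bar = np.mean(noisy, axis=0)  # averaged image

single_noise = np.std(noisy[0] - f)  # about 20
avg_noise = np.std(g_bar - f)        # about 20 / sqrt(64) = 2.5
```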
Example
(a) Original image.
(b) Image corrupted by additive Gaussian noise with zero mean and a standard deviation of 64 gray levels.
(c)-(f) Results of averaging K = 8, 16, 64, and 128 noisy images.
Reading
3.5: Basics of Spatial Filtering
3.6: Smoothing Spatial Filters
3.6.1: Smoothing Linear Filters
3.6.2: Order-Statistics Filters
3.7: Sharpening Spatial Filters
3.7.2: The Laplacian Operator
3.7.3: Use of First Derivatives for Enhancement
3.8: Combining Spatial Enhancement Methods
Spatial Filtering
[Figure: a simple 3×3 filter with coefficients
r s t
u v w
x y z
is centered on pixel e of the original image f(x,y), whose 3×3 neighborhood contains
a b c
d e f
g h i]
The response at e is:
e_processed = v·e + r·a + s·b + t·c + u·d + w·f + x·g + y·h + z·i
The above is repeated for every pixel in the original image to generate the processed image.
Spatial Filtering
We use a filter (also called a mask, kernel, template, or window).
The values in a filter subimage are referred to as coefficients, rather than pixels.
Our focus will be on masks of odd sizes, e.g. 3×3, 5×5, …
Spatial Filtering
78
Spatial Filtering Process
Simply move the filter mask from point to point in the image.
At each point (x,y), the response of the filter at that point is calculated using a predefined relationship.
Linear Filtering
Linear filtering of an image f of size M×N with a filter mask of size m×n is given by the expression
g(x,y) = Σ_{s=-a}^{a} Σ_{t=-b}^{b} w(s,t) f(x+s, y+t)
where a = (m-1)/2 and b = (n-1)/2.
To generate a complete filtered image, this equation must be applied for x = 0, 1, 2, …, M-1 and y = 0, 1, 2, …, N-1.
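The double sum above can be sketched directly in NumPy; here I pad the border with zeros, which is one of the border-handling choices discussed next, not something this equation itself prescribes:

```python
import numpy as np

def linear_filter(f, w):
    """g(x,y) = sum_{s=-a..a} sum_{t=-b..b} w(s,t) f(x+s, y+t), zero-padded."""
    m, n = w.shape
    a, b = (m - 1) // 2, (n - 1) // 2
    fp = np.pad(f.astype(np.float64), ((a, a), (b, b)))  # zero padding
    g = np.zeros(f.shape, dtype=np.float64)
    for s in range(-a, a + 1):
        for t in range(-b, b + 1):
            # shifted view of fp so fp[a+s+x, b+t+y] = f(x+s, y+t)
            g += w[s + a, t + b] * fp[a + s : a + s + f.shape[0],
                                      b + t : b + t + f.shape[1]]
    return g

f = np.full((3, 3), 9.0)
identity = np.zeros((3, 3)); identity[1, 1] = 1.0  # passes f through unchanged
avg = np.full((3, 3), 1 / 9)                        # 3x3 averaging mask
```

With the averaging mask, interior pixels keep their value on a constant image, while border pixels are darkened by the zero padding, illustrating why border handling matters.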
Note
An important consideration in implementing neighborhood operations for spatial filtering is what happens when the center of the filter approaches the border of the image.
There are several ways to handle this situation:
limit the excursions of the center of the mask to be at a distance no less than (n-1)/2 pixels from the border.
filter all pixels only with the section of the mask that is fully contained in the image.
Other approaches include “padding” the image by adding rows and columns of 0’s (or another constant gray level), or padding by replicating rows or columns.
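A minimal sketch of the two padding approaches, using hypothetical `pad_zeros` and `pad_replicate` helpers on nested lists:

```python
def pad_zeros(f, a, b):
    """Pad image f with a rows of zeros top/bottom and b columns of zeros left/right."""
    N = len(f[0])
    padded = [[0] * (N + 2 * b) for _ in range(a)]
    for row in f:
        padded.append([0] * b + list(row) + [0] * b)
    padded += [[0] * (N + 2 * b) for _ in range(a)]
    return padded

def pad_replicate(f, a, b):
    """Pad by replicating the border rows and columns instead."""
    rows = [f[0]] * a + list(f) + [f[-1]] * a
    return [[r[0]] * b + list(r) + [r[-1]] * b for r in rows]

img = [[1, 2],
       [3, 4]]
print(pad_zeros(img, 1, 1))      # 4x4, zeros around the original
print(pad_replicate(img, 1, 1))  # 4x4, edge values repeated
```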
81
Smoothing Spatial Filters
Image sensors and transmission channels may produce a certain type of noise characterized by random, isolated pixels with out-of-range gray levels, which are either much lower or much higher than the gray levels of the neighboring pixels.
One way to get rid of this kind of noise is to use the average of a small local region in the image, so that the out-of-range gray levels can be suppressed.
82
Smoothing Spatial Filters
Used for image blurring and for noise reduction.
Blurring is used in preprocessing steps, such as:
removal of small details from an image prior to object extraction
bridging of small gaps in lines or curves
Noise reduction can be accomplished by blurring with a linear filter, and also by a nonlinear filter.
83
Two Types of Smoothing Filters
Smoothing Linear Filter
Order-Statistics Filters
84
Smoothing Linear Filters
These filters are called averaging filters or lowpass filters.
Replacing the value of every pixel in an image by the average of the gray levels in its neighborhood will reduce the “sharp” transitions in gray levels.
Sharp transitions include: random noise in the image, and edges of objects in the image.
Thus, smoothing can reduce noise (desirable) but also blur edges (undesirable).
85
3x3 Smoothing Linear Filters
Two common cases are the box filter (all coefficients equal) and the weighted average filter.
In the weighted average, the center pixel is the most important, and the other pixels are inversely weighted as a function of their distance from the center of the mask.
86
General form : smoothing mask
Filter of size mxn (m and n odd)
g(x,y) = [ Σ_{s=-a..a} Σ_{t=-b..b} w(s,t) f(x+s, y+t) ] / [ Σ_{s=-a..a} Σ_{t=-b..b} w(s,t) ]

The denominator is the summation of all coefficients of the mask.
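A small check of the normalization, assuming the standard 3x3 weighted-average mask (coefficients 1-2-1 / 2-4-2 / 1-2-1; variable names are illustrative): dividing by the coefficient sum leaves a constant region unchanged.

```python
weighted = [[1, 2, 1],
            [2, 4, 2],
            [1, 2, 1]]                       # 3x3 weighted-average mask
total = sum(sum(row) for row in weighted)    # sum of all coefficients = 16

# On a constant 3x3 patch, the normalized response equals the constant.
patch = [[7] * 3 for _ in range(3)]
response = sum(weighted[s][t] * patch[s][t]
               for s in range(3) for t in range(3)) / total
print(total, response)  # 16 7.0
```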
87
Example
88
a). original image, 500x500 pixels
b). - f). results of smoothing with square averaging filter masks of size n = 3, 5, 9, 15 and 35, respectively.
Note:
a big mask is used to eliminate small objects from an image.
the size of the mask establishes the relative size of the objects that will be blended with the background.
89
Example
original image; result after smoothing with a 15x15 averaging mask; result of thresholding
We can see that after smoothing and thresholding, what remains are the largest and brightest objects in the image.
90
Order-Statistics Filters (Nonlinear Filters)
The response is based on ordering (ranking) the pixels contained in the image area encompassed by the filter.
Examples:
median filter : R = median{zk | k = 1, 2, …, n x n}
max filter : R = max{zk | k = 1, 2, …, n x n}
min filter : R = min{zk | k = 1, 2, …, n x n}
note: n x n is the size of the mask.
Median example: (10, 20, 20, 20, 15, 20, 20, 25, 100) ranks to (10, 15, 20, 20, 20, 20, 20, 25, 100), so the median = 20.
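The three order-statistics responses from the example above can be sketched as follows (hypothetical `order_statistic` helper):

```python
def order_statistic(neighborhood, stat):
    """Rank the n x n neighborhood values and return the chosen statistic."""
    ranked = sorted(neighborhood)
    if stat == "median":
        return ranked[len(ranked) // 2]   # middle value (odd count)
    if stat == "max":
        return ranked[-1]
    return ranked[0]                      # "min"

z = [10, 20, 20, 20, 15, 20, 20, 25, 100]  # 3x3 neighborhood from the slide
print(order_statistic(z, "median"), order_statistic(z, "max"),
      order_statistic(z, "min"))  # 20 100 10
```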
91
Median Filters
Replaces the value of a pixel by the median of the gray levels in the neighborhood of that pixel (the original value of the pixel is included in the computation of the median).
Quite popular because, for certain types of random noise (impulse noise, or salt-and-pepper noise), they provide excellent noise-reduction capabilities, with considerably less blurring than linear smoothing filters of similar size.
92
Median Filters
Forces points with distinct gray levels to be more like their neighbors.
Isolated clusters of pixels that are light or dark with respect to their neighbors, and whose area is less than n²/2 (one-half the filter area), are eliminated by an n x n median filter.
Eliminated = forced to the median intensity of the neighbors.
Larger clusters are affected considerably less.
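A minimal 2D median-filter sketch (hypothetical `median_filter` helper; border pixels are simply left unchanged), showing how an isolated impulse is eliminated:

```python
def median_filter(f, n=3):
    """Sliding n x n median filter; border pixels are left unchanged here."""
    M, N = len(f), len(f[0])
    a = n // 2
    g = [row[:] for row in f]
    for x in range(a, M - a):
        for y in range(a, N - a):
            window = sorted(f[x + s][y + t]
                            for s in range(-a, a + 1)
                            for t in range(-a, a + 1))
            g[x][y] = window[len(window) // 2]
    return g

# A flat gray image with one impulse ("salt") pixel in the middle.
noisy = [[20] * 5 for _ in range(5)]
noisy[2][2] = 255
print(median_filter(noisy)[2][2])  # 20 — the isolated impulse is removed
```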
93
Example : Median Filters
94
Sharpening Spatial Filters
To highlight fine details in an image, or to enhance detail that has been blurred, either in error or as a natural effect of a particular method of image acquisition.
Blurring can be done in the spatial domain by pixel averaging in a neighborhood. Since averaging is analogous to integration, we can expect that sharpening is accomplished by spatial differentiation.
95
Derivative operator
The strength of the response of a derivative operator is proportional to the degree of discontinuity of the image at the point at which the operator is applied.
Thus, image differentiation enhances edges and other discontinuities (including noise).
96
First-order derivative
A basic definition of the first-order derivative of a one-dimensional function f(x) is the difference

∂f/∂x = f(x+1) − f(x)
97
Second-order derivative
Similarly, we define the second-order derivative of a one-dimensional function f(x) as the difference

∂²f/∂x² = f(x+1) + f(x−1) − 2 f(x)
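Both differences can be sketched on a 1-D gray-level profile (hypothetical helper names); note how the second derivative produces a sign-changing response at a step edge:

```python
def first_derivative(f):
    """df/dx ≈ f(x+1) - f(x) along a 1-D gray-level profile."""
    return [f[x + 1] - f[x] for x in range(len(f) - 1)]

def second_derivative(f):
    """d2f/dx2 ≈ f(x+1) + f(x-1) - 2 f(x)."""
    return [f[x + 1] + f[x - 1] - 2 * f[x] for x in range(1, len(f) - 1)]

# A descending ramp, a flat run, then a step edge: the first derivative is
# constant along the ramp; the second derivative is zero there but gives a
# sign-changing pair at the step.
profile = [6, 5, 4, 3, 2, 1, 1, 1, 6, 6, 6]
print(first_derivative(profile))
print(second_derivative(profile))
```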
98
Derivative operator
99
First and Second-order derivative of f(x,y)
When we consider an image function of two variables, f(x,y), we deal with partial derivatives along the two spatial axes.

Gradient operator:
∇f = [ ∂f(x,y)/∂x , ∂f(x,y)/∂y ]

Laplacian operator (a linear operator):
∇²f = ∂²f(x,y)/∂x² + ∂²f(x,y)/∂y²
100
Discrete Form of Laplacian
∂²f/∂x² = f(x+1, y) + f(x−1, y) − 2 f(x,y)

∂²f/∂y² = f(x, y+1) + f(x, y−1) − 2 f(x,y)

which yields

∇²f = [ f(x+1, y) + f(x−1, y) + f(x, y+1) + f(x, y−1) ] − 4 f(x,y)
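The discrete Laplacian above, evaluated at a single interior pixel (hypothetical `laplacian` helper):

```python
def laplacian(f, x, y):
    """Discrete Laplacian at (x, y):
    f(x+1,y) + f(x-1,y) + f(x,y+1) + f(x,y-1) - 4 f(x,y)."""
    return (f[x + 1][y] + f[x - 1][y] +
            f[x][y + 1] + f[x][y - 1] - 4 * f[x][y])

img = [[10, 10, 10],
       [10, 50, 10],
       [10, 10, 10]]
print(laplacian(img, 1, 1))  # 40 - 200 = -160: strong response at the bright spot
```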
101
Resulting Laplacian mask
102
Laplacian mask implemented with an extension to the diagonal neighbors
103
Other implementations of Laplacian masks
These give the same result, but we have to keep the mask's sign convention in mind when combining (adding / subtracting) a Laplacian-filtered image with another image.
104
Effect of Laplacian Operator
As a derivative operator, it highlights gray-level discontinuities in an image.
It tends to produce images that have grayish edge lines and other discontinuities, all superimposed on a dark, featureless background.
105
Correcting the effect of the featureless background
This is done easily by adding the original and Laplacian images.
Be careful with the Laplacian filter used:

g(x,y) = f(x,y) − ∇²f(x,y)   if the center coefficient of the Laplacian mask is negative
g(x,y) = f(x,y) + ∇²f(x,y)   if the center coefficient of the Laplacian mask is positive
106
107
Example
a). image of the North Pole of the moon
b). Laplacian-filtered image with the mask shown below
c). Laplacian image scaled for display purposes
d). image enhanced by addition with the original image
1 1 1
1 -8 1
1 1 1
108
Mask of Laplacian + addition
To simplify the computation, we can create a single mask which does both operations, the Laplacian filter and the addition of the original image:

g(x,y) = f(x,y) − [ f(x+1, y) + f(x−1, y) + f(x, y+1) + f(x, y−1) − 4 f(x,y) ]
       = 5 f(x,y) − [ f(x+1, y) + f(x−1, y) + f(x, y+1) + f(x, y−1) ]

The corresponding mask:

0 -1 0
-1 5 -1
0 -1 0
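A sketch of the composite mask in operation at one interior pixel (hypothetical `sharpen` helper):

```python
def sharpen(f, x, y):
    """Composite Laplacian-plus-addition mask:
    g(x,y) = 5 f(x,y) - [f(x+1,y) + f(x-1,y) + f(x,y+1) + f(x,y-1)]."""
    return (5 * f[x][y]
            - f[x + 1][y] - f[x - 1][y]
            - f[x][y + 1] - f[x][y - 1])

img = [[10, 10, 10],
       [10, 50, 10],
       [10, 10, 10]]
print(sharpen(img, 1, 1))  # 250 - 40 = 210: the bright spot is accentuated
```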
109
Example
110
Unsharp Masking and High-Boost Filtering
fs(x,y) = f(x,y) − f̄(x,y)

sharpened image = original image − blurred image
This operation is known as unsharp masking,
where fs(x,y) denotes the sharpened image obtained by unsharp masking, and f̄(x,y) is a blurred version of f(x,y).
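A 1-D sketch of unsharp masking (hypothetical `unsharp_1d` helper, using a 3-point moving average as the blur):

```python
def unsharp_1d(f):
    """1-D unsharp masking: subtract a 3-point average (the blur) from the
    original signal; border samples are left unchanged by the blur."""
    blurred = f[:1] + [(f[i - 1] + f[i] + f[i + 1]) / 3
                       for i in range(1, len(f) - 1)] + f[-1:]
    return [orig - blur for orig, blur in zip(f, blurred)]

signal = [10, 10, 10, 40, 40, 40]
print(unsharp_1d(signal))  # nonzero only around the step edge
```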
111
High-boost filtering
A slight further generalization of unsharp masking is called high-boost filtering.
A high-boost filtered image, fhb(x,y), is defined at any point (x,y) as

fhb(x,y) = A·f(x,y) − f̄(x,y)

where A ≥ 1.
112
High-boost filtering
If we use the Laplacian filter to create the sharpened image fs(x,y) with addition of the original image:

fs(x,y) = f(x,y) − ∇²f(x,y)   if the center coefficient of the Laplacian mask is negative
fs(x,y) = f(x,y) + ∇²f(x,y)   if the center coefficient of the Laplacian mask is positive
113
High-boost filtering
this yields

fhb(x,y) = A·f(x,y) − ∇²f(x,y)   if the center coefficient of the Laplacian mask is negative
fhb(x,y) = A·f(x,y) + ∇²f(x,y)   if the center coefficient of the Laplacian mask is positive
114
High-boost Masks
A ≥ 1; if A = 1, it becomes “standard” Laplacian sharpening.
115
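A sketch of high-boost filtering at one pixel, using the negative-center Laplacian case (hypothetical `high_boost` helper):

```python
def high_boost(f, x, y, A=1.0):
    """High-boost sharpening with the negative-center Laplacian:
    fhb = A * f(x,y) - laplacian(x,y)."""
    lap = (f[x + 1][y] + f[x - 1][y] +
           f[x][y + 1] + f[x][y - 1] - 4 * f[x][y])
    return A * f[x][y] - lap

img = [[10, 10, 10],
       [10, 50, 10],
       [10, 10, 10]]
print(high_boost(img, 1, 1, A=1.0))  # 50 + 160 = 210.0, standard sharpening
print(high_boost(img, 1, 1, A=1.7))  # larger A also boosts overall brightness
```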
Example
One of the principal applications of boost filtering is when the input image is darker than desired. By varying the boost coefficient, it generally is possible to obtain an overall increase in the average gray level of the image, thus helping to brighten the final result.
116
Example
117
Gradient Operator
First derivatives are implemented using the magnitude of the gradient.

∇f = [ Gx , Gy ] = [ ∂f/∂x , ∂f/∂y ]

∇f ≡ mag(∇f) = [ Gx² + Gy² ]^(1/2) = [ (∂f/∂x)² + (∂f/∂y)² ]^(1/2)

The magnitude makes the operator nonlinear, so it is commonly approximated as

∇f ≈ |Gx| + |Gy|
118
Gradient Mask
Simplest approximation, 2x2 (using the z-labeling of a 3x3 region):

z1 z2 z3
z4 z5 z6
z7 z8 z9

Gx = (z8 − z5) and Gy = (z6 − z5)

∇f = [ Gx² + Gy² ]^(1/2) = [ (z8 − z5)² + (z6 − z5)² ]^(1/2)

∇f ≈ |z8 − z5| + |z6 − z5|
119
Gradient Mask
Roberts cross-gradient operators, 2x2:

z1 z2 z3
z4 z5 z6
z7 z8 z9

Gx = (z9 − z5) and Gy = (z8 − z6)

∇f = [ Gx² + Gy² ]^(1/2) = [ (z9 − z5)² + (z8 − z6)² ]^(1/2)

∇f ≈ |z9 − z5| + |z8 − z6|
120
Gradient Mask
Sobel operators, 3x3:

z1 z2 z3
z4 z5 z6
z7 z8 z9

Gx = (z7 + 2 z8 + z9) − (z1 + 2 z2 + z3)
Gy = (z3 + 2 z6 + z9) − (z1 + 2 z4 + z7)

∇f ≈ |Gx| + |Gy|
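The Sobel magnitude on one 3x3 neighborhood can be sketched as follows (hypothetical `sobel_magnitude` helper):

```python
def sobel_magnitude(z):
    """Sobel gradient magnitude |Gx| + |Gy| for a 3x3 neighborhood
    z = [[z1,z2,z3],[z4,z5,z6],[z7,z8,z9]]."""
    gx = (z[2][0] + 2 * z[2][1] + z[2][2]) - (z[0][0] + 2 * z[0][1] + z[0][2])
    gy = (z[0][2] + 2 * z[1][2] + z[2][2]) - (z[0][0] + 2 * z[1][0] + z[2][0])
    return abs(gx) + abs(gy)

# A vertical edge (dark column | bright columns): strong Gy, zero Gx.
edge = [[0, 100, 100],
        [0, 100, 100],
        [0, 100, 100]]
print(sobel_magnitude(edge))  # 400
```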
121
Note
The summation of coefficients in all gradient masks equals 0, indicating that they give a response of 0 in an area of constant gray level.
122
Example
123
Example of Combining Spatial Enhancement Methods
We want to sharpen the original image and bring out more skeletal detail.
Problems: the narrow dynamic range of gray levels and the high noise content make the image difficult to enhance.
124
Example of Combining Spatial Enhancement Methods
Solution:
1. Laplacian to highlight fine detail
2. gradient to enhance prominent edges
3. gray-level transformation to increase the dynamic range of gray levels
125
126
127