Computational Photography and Videography. Christian Theobalt and Ivo Ihrke, winter term 09/10.
Transcript of Introduction to Image Processing and Computer Vision
Ivo Ihrke / Winter 2013

Introduction to Image Processing and Computer Vision
-- Filtering Applications: Non-Convolutional Filters, Scale Space, Feature Detection --
Winter 2013/14
Ivo Ihrke
Edge Detection Continued
Introduction to Scale Space
Effects of noise
Consider a single row or column of the image
– Plotting intensity as a function of position gives a signal
Where is the edge?
Solution: smooth first, i.e., convolve the signal f with a Gaussian kernel g
Look for peaks in the derivative d/dx (f ∗ g) of the smoothed signal
Associative property of convolution
This saves us one operation: d/dx (f ∗ g) = f ∗ (d/dx g), so the Gaussian can be differentiated once and only a single convolution with the signal is needed.
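The identity above can be checked numerically; a minimal NumPy sketch (kernel size and σ are illustrative choices, not from the slides):

```python
import numpy as np

# Numerical check of the associativity trick (a sketch; kernel size and
# sigma are illustrative). Differentiating the smoothed signal gives the
# same result as one convolution with the pre-differentiated kernel.
x = np.arange(-10, 11, dtype=float)
g = np.exp(-x**2 / (2.0 * 3.0**2))
g /= g.sum()                            # normalized Gaussian kernel
d = np.array([0.5, 0.0, -0.5])          # central-difference (d/dx) kernel

f = np.zeros(100)
f[50:] = 1.0                            # ideal step edge signal

r1 = np.convolve(np.convolve(f, g), d)  # smooth, then differentiate
dg = np.convolve(g, d)                  # derivative-of-Gaussian kernel
r2 = np.convolve(f, dg)                 # a single convolution
print(np.allclose(r1, r2))              # True: (f * g) * d == f * (g * d)
```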
Laplacian of Gaussian
Consider the Laplacian of Gaussian operator: d²/dx² (f ∗ g) = f ∗ (d²/dx² g)
Where is the edge? At the zero-crossings of the bottom graph.
2D edge detection filters: Laplacian of Gaussian, Gaussian, derivative of Gaussian
∇² is the Laplacian operator: ∇²h = ∂²h/∂x² + ∂²h/∂y²
The Sobel operator
Common approximation of the derivative of Gaussian:

S_x = 1/8 ·
-1 0 1
-2 0 2
-1 0 1

S_y = 1/8 ·
1 2 1
0 0 0
-1 -2 -1

• The standard definition of the Sobel operator omits the 1/8 term
– it doesn't make a difference for edge detection
– the 1/8 term is needed to get the right gradient value, however
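A minimal NumPy sketch applying these kernels, keeping the 1/8 factor so the result approximates the true gradient (the edge padding and correlation-style loop are implementation choices, not prescribed by the slides):

```python
import numpy as np

# Sobel gradient sketch. The kernels include the 1/8 factor so that the
# result approximates the true image gradient; drop it if only edge
# locations are needed.
SOBEL_X = np.array([[-1, 0, 1],
                    [-2, 0, 2],
                    [-1, 0, 1]], dtype=float) / 8.0
SOBEL_Y = np.array([[ 1,  2,  1],
                    [ 0,  0,  0],
                    [-1, -2, -1]], dtype=float) / 8.0

def filter2d(img, kernel):
    """Correlation with edge replication (an implementation choice)."""
    kh, kw = kernel.shape
    padded = np.pad(img, ((kh // 2, kh // 2), (kw // 2, kw // 2)), mode='edge')
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

# Vertical step edge from 0 to 1: the gradient magnitude is ~0.5 on the
# two columns adjacent to the step and ~0 elsewhere.
img = np.zeros((5, 5)); img[:, 2:] = 1.0
gx = filter2d(img, SOBEL_X)
gy = filter2d(img, SOBEL_Y)
magnitude = np.hypot(gx, gy)
```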
The effect of scale on edge detection
(figure: edge detection results for increasingly larger Gaussian σ)
Scale space (Witkin 83)
Sometimes we want many resolutions
Known as a Gaussian Pyramid [Burt and Adelson, 1983]
• In computer graphics, a mip map [Williams, 1983]
• A precursor to wavelet transform
Gaussian Pyramids have all sorts of applications in computer vision
Gaussian pyramid construction
Repeat:
• Filter (with a small Gaussian kernel)
• Subsample
until minimum resolution reached
• can specify desired number of levels (e.g., 3-level pyramid)
The whole pyramid is only 4/3 the size of the original image!
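The filter-and-subsample loop can be sketched in NumPy; the 5-tap binomial kernel [1 4 6 4 1]/16 is an assumed choice of Gaussian-like filter, the slides do not fix the kernel:

```python
import numpy as np

# Gaussian pyramid sketch: blur, then subsample by 2 at each level. The
# 5-tap binomial kernel [1 4 6 4 1]/16 (applied separably) is an assumed
# Gaussian-like filter; the slides do not prescribe it.
KERNEL = np.array([1, 4, 6, 4, 1], dtype=float) / 16.0

def blur_separable(img):
    """Blur rows, then columns, with the 5-tap kernel (edges replicated)."""
    def blur_axis(a, axis):
        pad = [(2, 2) if ax == axis else (0, 0) for ax in range(a.ndim)]
        padded = np.pad(a, pad, mode='edge')
        out = np.zeros_like(a)
        for k, w in enumerate(KERNEL):
            sl = [slice(None)] * a.ndim
            sl[axis] = slice(k, k + a.shape[axis])
            out += w * padded[tuple(sl)]
        return out
    return blur_axis(blur_axis(img, 0), 1)

def gaussian_pyramid(img, levels=3):
    pyramid = [img]
    for _ in range(levels - 1):
        img = blur_separable(img)[::2, ::2]   # filter, then subsample
        pyramid.append(img)
    return pyramid

pyr = gaussian_pyramid(np.random.rand(64, 64), levels=3)
# levels: 64x64, 32x32, 16x16 -> the full pyramid is ~4/3 the original size
```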
Non-Convolutional Filters
Non-Convolutional Filters
Examples (Gaussian noise): Original, Gaussian (conv), Bilateral filter, Median filter
Non-Convolutional Filters
Examples (salt & pepper noise): Original, Gaussian (conv), Bilateral filter, Median filter
Median Filter
Extract pixels in a window
– Sort the entries
– Pick the one at the median index position
Non-linear, non-analytic filter
Application: "salt and pepper" noise removal
Example window (3×3): 1, 11, 9, 93, 13, 15, 5, 4, 3
Sorted: 1 3 4 5 9 11 13 15 93 → the median is 9, so the outlier 93 is replaced
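A minimal NumPy sketch of the sort-and-pick-the-middle idea (one window value of the slide's example is reconstructed from its sorted list):

```python
import numpy as np

# Median filter sketch: for each pixel, sort the values in a size x size
# window and keep the middle entry; an outlier such as 93 is discarded.
def median_filter(img, size=3):
    pad = size // 2
    padded = np.pad(img, pad, mode='edge')
    out = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.median(padded[i:i + size, j:j + size])
    return out

# A 3x3 window similar to the slide's example:
# sorted 1 3 4 5 9 11 13 15 93, median 9
img = np.array([[ 1, 11,  9],
                [ 3, 93, 13],
                [15,  5,  4]], dtype=float)
print(median_filter(img)[1, 1])   # 9.0
```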
Bilateral Filter
Consists of a spatial part and a range part
Spatially varying filter
– spatial range: weight by distance between pixel positions
– intensity range: weight by difference in gray values
Bilateral Filter
Noisy input
Filtered output
Filter Shape at red dot
Bilateral Filter
Alternative interpretation as a “3D distance”
– space and intensity combined
– different weights for different axes
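The two-part weighting can be sketched as a brute-force NumPy implementation; the parameter values and the loop-based evaluation are illustrative, not a performant or prescribed implementation:

```python
import numpy as np

# Brute-force bilateral filter sketch. The weight of each neighbor is the
# product of a spatial Gaussian (sigma_s, pixel distance) and a range
# Gaussian (sigma_r, intensity difference). Parameters are illustrative.
def bilateral_filter(img, sigma_s=2.0, sigma_r=0.1, radius=4):
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))  # fixed spatial part
    padded = np.pad(img, radius, mode='edge')
    out = np.zeros_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            window = padded[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            rng = np.exp(-(window - img[i, j])**2 / (2 * sigma_r**2))
            weights = spatial * rng                 # spatially varying filter
            out[i, j] = np.sum(weights * window) / np.sum(weights)
    return out

# A clean step edge is preserved: pixels across the edge differ too much
# in intensity to receive significant range weight.
img = np.zeros((8, 8)); img[:, 4:] = 1.0
out = bilateral_filter(img)
```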
(figure: original, one-time application, repeated application of the filter)
Discussion
Most of the techniques discussed here can be applied to N-D data
– Most often 3D
– Requires a uniform sampling of the volume (N-D data)
– Apart from that, only minor modifications to the algorithms
– Meaning and utility stay the same
Image Statistics
Histograms
- Inferring global image information -
Image Enhancement: Histogram-Based Methods
What is the histogram of a digital image?
The histogram of a digital image with gray values r_0, r_1, …, r_{L−1} is the discrete function
p(r_k) = n_k / n
where
n_k: number of pixels with gray value r_k
n: total number of pixels in the image
The function p(r_k) represents the fraction of the total number of pixels with gray value r_k.
Histograms
Histograms provide a global description of the appearance of the image.
Consider image gray values as realizations of a random variable R with some probability density (pdf);
a histogram is an approximation to this pdf:
Pr(R = r_k) ≈ p(r_k)
Some Typical Histograms
The shape of a histogram provides useful information for contrast enhancement.
Dark image
Bright image
Low contrast image
High contrast image
Histogram Equalization
Let us assume for the moment that the input image to be enhanced has continuous gray values, with r = 0 representing black and r = 1 representing white.
We need to design a gray value transformation s = T(r), based on the histogram of the input image, which will enhance the image.
What is histogram equalization?
Histogram equalization is an approach to enhance a given image. The approach is to design a transformation T(·) such that the gray values in the output are uniformly distributed in [0, 1].
Histogram Equalization
As before, we assume that:
(1) T(r) is a monotonically increasing function for 0 ≤ r ≤ 1 (preserves order from black to white).
(2) T(r) maps [0, 1] into [0, 1] (preserves the range of allowed gray values).
Histogram Equalization
Let us denote the inverse transformation by r = T⁻¹(s). We assume that the inverse transformation also satisfies the above two conditions.
We consider the gray values in the input and output images as random variables in the interval [0, 1].
Let p_in(r) and p_out(s) denote the probability densities of the gray values in the input and output images.
Histogram Equalization
If p_in(r) and T(r) are known, and r = T⁻¹(s) satisfies condition (1), we can write (result from probability theory):
p_out(s) = p_in(r) · |dr/ds| , evaluated at r = T⁻¹(s)
One way to enhance the image is to design a transformation T(·) such that the gray values in the output are uniformly distributed in [0, 1], i.e., p_out(s) = 1 for 0 ≤ s ≤ 1.
In terms of histograms, the output image will have all gray values in "equal proportion".
This technique is called histogram equalization.
Histogram Equalization
Consider the transformation
s = T(r) = ∫_0^r p_in(w) dw , 0 ≤ r ≤ 1
Note that this is the cumulative distribution function (CDF) of p_in(r) and satisfies the previous two conditions.
From the previous equation and using the fundamental theorem of calculus,
ds/dr = p_in(r)
Next we show that the gray values in the output are uniformly distributed in [0, 1].
Histogram Equalization
Therefore, the output histogram is given by
p_out(s) = [ p_in(r) · |dr/ds| ] at r = T⁻¹(s) = [ p_in(r) · 1/p_in(r) ] at r = T⁻¹(s) = 1 , 0 ≤ s ≤ 1
The output probability density function is uniform, regardless of the input.
Thus, using a transformation function equal to the CDF of the input gray values r, we can obtain an image with uniform gray values.
This usually results in an enhanced image, with an increase in the dynamic range of pixel values.
How to implement histogram equalization?
Step 1: For images with discrete gray values, compute:
p_in(r_k) = n_k / n , 0 ≤ r_k ≤ 1 , 0 ≤ k ≤ L − 1
L: total number of gray levels
n_k: number of pixels with gray value r_k
n: total number of pixels in the image
Step 2: Based on the CDF, compute the discrete version of the previous transformation:
s_k = T(r_k) = Σ_{j=0..k} p_in(r_j) , 0 ≤ k ≤ L − 1
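The two steps can be sketched in NumPy for 8-bit images; rescaling s_k to integer gray levels in [0, L−1] is an implementation detail, the slides work with values in [0, 1]:

```python
import numpy as np

# Histogram equalization sketch for 8-bit images, following the two steps:
# p_in(r_k) = n_k / n, then s_k = cumulative sum (the CDF), and finally
# rescaling s_k to integer gray levels (an implementation detail).
def equalize(img, L=256):
    hist = np.bincount(img.ravel(), minlength=L)  # n_k
    p_in = hist / img.size                        # p_in(r_k) = n_k / n
    cdf = np.cumsum(p_in)                         # s_k = T(r_k)
    lut = np.round(cdf * (L - 1)).astype(img.dtype)
    return lut[img]                               # apply as a point operation

# Low-contrast input with values squeezed into [100, 120]; after
# equalization the occupied range stretches up to the full gray scale.
img = np.random.randint(100, 121, size=(32, 32), dtype=np.uint8)
eq = equalize(img)
```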
Example Original image and its histogram
Histogram equalized image and its histogram
Comments:
Histogram equalization may not always produce desirable
results, particularly if the given histogram is very narrow. It
can produce false edges and regions. It can also increase
image “graininess” and “patchiness.”
Histogram Matching
Histogram equalization yields an image whose pixels are (in theory) uniformly distributed among all gray levels.
Sometimes, this may not be desirable. Instead, we may want a transformation that yields an output image with a pre-specified histogram. This technique is called histogram matching.
Histogram Matching
Given information:
(1) Input image, from which we can compute its histogram.
(2) Desired histogram.
Goal:
Derive a point operation, M(r), that maps the input image into an output image that has the user-specified histogram.
Again, we will assume, for the moment, continuous gray values.
Histogram Matching
Idea: concatenate the mapping H from the source histogram to the equalized histogram with the inverse G⁻¹ of the mapping G from the destination histogram to the equalized histogram:
M(r) = G⁻¹(H(r))
(justification: G, H are bijective and domain and range are the same)
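The concatenation idea can be sketched in NumPy; implementing G⁻¹ by searching the target CDF for the nearest value is one possible discrete realization, not the only one:

```python
import numpy as np

# Histogram matching sketch: H equalizes the source, G equalizes the
# desired distribution, and the match is r -> G^{-1}(H(r)). Here G^{-1}
# is realized by searching the target CDF (an implementation choice).
def cdf_of(img, L=256):
    hist = np.bincount(img.ravel(), minlength=L)
    return np.cumsum(hist) / img.size

def match_histogram(src, ref, L=256):
    H = cdf_of(src, L)                  # source CDF (equalizing map)
    G = cdf_of(ref, L)                  # desired CDF
    # G^{-1}: for each source level, the first ref level whose CDF >= H
    lut = np.searchsorted(G, H).clip(0, L - 1).astype(src.dtype)
    return lut[src]

src = np.random.randint(0, 64, (32, 32), dtype=np.uint8)     # dark image
ref = np.random.randint(192, 256, (32, 32), dtype=np.uint8)  # bright image
out = match_histogram(src, ref)
# 'out' now occupies the bright value range of 'ref'
```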
Original image and its histogram
Histogram matched image and its histogram
Desired histogram
Example: Make the input of two cameras more similar
Reference image
Image to be adjusted
Histogram matched image
Feature Detection
What is a feature?
Motivation…
Feature points are used for:
– Image alignment (e.g., mosaics)
– 3D reconstruction
– Motion tracking
– Object recognition
– Indexing and database retrieval
– Robot navigation
– … other
Object recognition (David Lowe)
Sony Aibo
SIFT usage:
• Recognize charging station
• Communicate with visual cards
• Teach object recognition
Image Matching
Advantages of local features
Locality
– features are local, so robust to occlusion and clutter
Distinctiveness:
– can differentiate a large number of points
Quantity
– hundreds or thousands in a single image
Efficiency
– real-time performance achievable
Generality
– exploit different types of features in different situations
What makes a good feature?
Want uniqueness
Look for image regions that are unusual
– Lead to unambiguous matches in other images
How to define “unusual”?
Local measures of uniqueness
Suppose we only consider a small window of pixels
– What defines whether a feature is a good or bad candidate?
Slide adapted from Darya Frolova, Denis Simakov, Weizmann Institute.
Feature detection
“flat” region:
no change in all
directions
“edge”:
no change along
the edge direction
“corner”:
significant change
in all directions
Local measure of feature uniqueness
– How does the window change when you shift it?
– Shifting the window in any direction causes a big change
Feature detection: the math
Consider shifting the window W by (u, v)
• how do the pixels in W change?
• compare each pixel before and after by summing up the squared differences (SSD)
• this defines an SSD "error" E(u, v):
E(u, v) = Σ_{(x,y) ∈ W} [ I(x + u, y + v) − I(x, y) ]²
Small motion assumption
Taylor series expansion of I:
I(x + u, y + v) ≈ I(x, y) + I_x(x, y) u + I_y(x, y) v
If the motion (u, v) is small, then the first-order approximation is good.
Plugging this into the formula on the previous slide…
Feature detection: the math
This can be rewritten:
E(u, v) ≈ [u v] H [u v]ᵀ , where H = Σ_{(x,y) ∈ W} [ I_x²  I_x I_y ; I_x I_y  I_y² ]
For the example above:
• You can move the center of the green window to anywhere on the blue unit circle
• Which directions will result in the largest and smallest E values?
• We can find these directions by looking at the eigenvectors of H
Quick eigenvalue/eigenvector review
The eigenvectors of a matrix A are the vectors x that satisfy
A x = λ x
The scalar λ is the eigenvalue corresponding to x.
– The eigenvalues are found by solving det(A − λ I) = 0
– In our case, A = H is a 2x2 matrix, so we have
det [ h11 − λ  h12 ; h21  h22 − λ ] = 0
– The solution: λ± = ½ [ (h11 + h22) ± √( 4 h12 h21 + (h11 − h22)² ) ]
Once you know λ, you find x by solving (A − λ I) x = 0
Feature detection: the math
This can be rewritten: E(u, v) ≈ [u v] H [u v]ᵀ
Eigenvalues and eigenvectors of H
• define the shifts with the smallest and largest change (E value)
• x+ = direction of largest increase in E
• λ+ = amount of increase in direction x+
• x− = direction of smallest increase in E
• λ− = amount of increase in direction x−
Feature detection: the math
How are λ+, x+, λ−, and x− relevant for feature detection?
• What's our feature scoring function?
Feature detection: the math
How are λ+, x+, λ−, and x− relevant for feature detection?
• What's our feature scoring function?
Want E(u, v) to be large for small shifts in all directions
• the minimum of E(u, v) should be large, over all unit vectors [u v]
• this minimum is given by the smaller eigenvalue (λ−) of H
Feature detection summary
Here's what you do:
• Compute the gradient at each point in the image
• Create the H matrix from the entries in the gradient
• Compute the eigenvalues
• Find points with large response (λ− > threshold)
• Choose those points where λ− is a local maximum as features
The Harris operator
f = (λ+ λ−) / (λ+ + λ−) = det(H) / trace(H) is a variant of the "Harris operator" for feature detection
• The trace is the sum of the diagonal entries, i.e., trace(H) = h11 + h22
• Very similar to λ−, but less expensive (no square root)
• Called the "Harris Corner Detector" or "Harris Operator"
• Lots of other detectors exist; this is one of the most popular
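The summary steps with the det/trace scoring can be sketched in NumPy; the window size, gradient operator, and test image are illustrative choices:

```python
import numpy as np

# Harris response sketch: image gradients -> per-pixel structure matrix H
# (entries summed over a small window) -> f = det(H) / trace(H), the
# variant from the slides. Window size and the test image are illustrative.
def harris_response(img, window=3, eps=1e-12):
    Iy, Ix = np.gradient(img.astype(float))      # axis 0 = y, axis 1 = x
    Ixx, Iyy, Ixy = Ix * Ix, Iy * Iy, Ix * Iy

    def box_sum(a):                              # sum over window x window
        pad = window // 2
        p = np.pad(a, pad, mode='edge')
        out = np.zeros_like(a)
        for di in range(window):
            for dj in range(window):
                out += p[di:di + a.shape[0], dj:dj + a.shape[1]]
        return out

    h11, h22, h12 = box_sum(Ixx), box_sum(Iyy), box_sum(Ixy)
    det = h11 * h22 - h12 * h12
    trace = h11 + h22
    return det / (trace + eps)                   # f = det(H) / trace(H)

# White square on black: corners score high, straight-edge midpoints ~0
img = np.zeros((16, 16)); img[4:12, 4:12] = 1.0
f = harris_response(img)
```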
The Harris operator
(figure: input image and its Harris operator response)
Harris detector example
f value (red high, blue low)
Threshold (f > value)
Find local maxima of f
Harris features (in red)
Acknowledgements
Most slides by Steve Seitz, Rick Szeliski
– http://szeliski.org/book
Histogram slides by Samir H. Abdul-Jauwad
Some histogram-matching results by Paul Bourke
The End