
Lecture 8: Interest Point Detection

Saad J Bedros, bedros@umn.edu


Last Lecture: Edge Detection
• Preprocessing of the image is desirable to eliminate, or at least minimize, noise effects.
• There is always a tradeoff between sensitivity and accuracy in edge detection.
• The parameters we can set for edge detection and labeling include the size of the edge detection mask and the value of the threshold.
• A larger mask or a higher gray-level threshold will tend to reduce noise effects, but may result in a loss of valid edge points.

Example

original image (Lena)

Compute Gradients

X-Derivative of Gaussian Y-Derivative of Gaussian Gradient Magnitude

Get Orientation at Each Pixel

• Threshold at a minimum level
• Get orientation: theta = atan2(gy, gx)
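A minimal sketch of this gradient pipeline in Python (not from the slides): it assumes a grayscale image as a 2-D NumPy array and uses SciPy's Gaussian-derivative filters; the sigma value is an illustrative choice.

import numpy as np
from scipy.ndimage import gaussian_filter

def gradient_magnitude_orientation(img, sigma=1.0):
    """Derivative-of-Gaussian gradients, gradient magnitude, and orientation."""
    img = img.astype(np.float64)
    # order=(0, 1): smooth along rows, differentiate along columns (x-derivative)
    gx = gaussian_filter(img, sigma, order=(0, 1))
    # order=(1, 0): differentiate along rows (y-derivative)
    gy = gaussian_filter(img, sigma, order=(1, 0))
    magnitude = np.hypot(gx, gy)
    theta = np.arctan2(gy, gx)   # orientation at each pixel, as on the slide
    return gx, gy, magnitude, theta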


Edge Thinning: Non-maximum suppression for each orientation

At q, we have a maximum if the value is larger than those at both p and r; interpolate to get these values.
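A simplified sketch of this step (an assumption on my part: instead of interpolating the values at p and r as the slide describes, it quantizes the orientation to the nearest of four directions, a common shortcut):

import numpy as np

def nonmax_suppress(magnitude, theta):
    """Thin edges: keep a pixel only if it is a local maximum along its gradient direction.
    Quantizes the orientation to 0/45/90/135 degrees instead of interpolating."""
    h, w = magnitude.shape
    out = np.zeros_like(magnitude)
    angle = (np.rad2deg(theta) + 180.0) % 180.0
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            a = angle[i, j]
            if a < 22.5 or a >= 157.5:      # horizontal gradient: compare left/right
                p, r = magnitude[i, j - 1], magnitude[i, j + 1]
            elif a < 67.5:                  # 45-degree gradient
                p, r = magnitude[i - 1, j + 1], magnitude[i + 1, j - 1]
            elif a < 112.5:                 # vertical gradient: compare up/down
                p, r = magnitude[i - 1, j], magnitude[i + 1, j]
            else:                           # 135-degree gradient
                p, r = magnitude[i - 1, j - 1], magnitude[i + 1, j + 1]
            if magnitude[i, j] >= p and magnitude[i, j] >= r:
                out[i, j] = magnitude[i, j]
    return out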

Today’s Lecture

Goal: find features in multiple images, taken from different positions or at different times, that can be robustly matched

Approaches:
• Patch/Template Matching
• Interest Point Detection

Applications

Comparison between two or more images:
• Image alignment
• Image stitching
• 3D reconstruction
• Object recognition (matching)
• Indexing and database retrieval
• Object tracking
• Robot navigation

• What points would you choose?
Known as the template matching method, with the option to use correlation as a metric.

Correlation

• Use cross-correlation to find the template in an image
• The maximum of the correlation indicates the best match (highest similarity)
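A minimal sketch of template matching with OpenCV (the function names exist in OpenCV; the choice of TM_CCOEFF_NORMED is illustrative — it is the zero-mean, normalized variant, which also addresses the brightness question raised on the next slide):

import cv2
import numpy as np

def find_template(image, template):
    """Slide the template over the image and return the best-match location.
    image and template are grayscale arrays of the same dtype (e.g. uint8).
    TM_CCOEFF_NORMED is zero-mean normalized cross-correlation, so a constant
    brightness offset does not change the score."""
    scores = cv2.matchTemplate(image, template, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(scores)   # maximum indicates highest similarity
    x, y = max_loc                                   # top-left corner of the best match
    return (x, y), max_val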

Template/Patch Matching Options

• What do we do for different scales of the patch?
– Compute multiple templates at different sizes
– Match for different scales of the template
• What can we do to identify the shape of the pattern at different mean brightness levels?
– Remove the mean of the template
• Example in 1D

We need more robust feature descriptors for matching!

Interest Points

• Local features associated with a significant change of one image property, or of several properties simultaneously (e.g., intensity, color, texture).

Why Extract Interest Points?

• Corresponding points (or features) between images enable the estimation of parameters describing geometric transforms between the images.

What if we don’t know the correspondences?

• Need to compare feature descriptors of local patches surrounding interest points

[Figure: are the two patches the same? feature descriptor( ) =? feature descriptor( )]

Properties of good features
• Local: features are local, robust to occlusion and clutter (no prior segmentation!)
• Accurate: precise localization
• Invariant / Covariant
• Robust: noise, blur, compression, etc. do not have a big impact on the feature
• Distinctive: individual features can be matched to a large database of objects
• Efficient: close to real-time performance
• Repeatable: the same features are found in corresponding locations of different images

Invariant / Covariant

• A function f( ) is invariant under some transformation T( ) if its value does not change when the transformation is applied to its argument:

if f(x) = y then f(T(x)) = y

• A function f( ) is covariant when it commutes with the transformation T( ):

if f(x) = y then f(T(x))=T(f(x))=T(y)
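A small illustration of the two definitions (my own example, with T chosen as a 90-degree rotation): the mean intensity is invariant, while the location of the brightest pixel is covariant (it moves with the rotation).

import numpy as np

img = np.random.rand(8, 8)
rot = np.rot90(img)                      # T(x): rotate the image 90 degrees counter-clockwise

# Invariant: f = mean intensity; f(T(x)) == f(x)
assert np.isclose(img.mean(), rot.mean())

# Covariant: f = location of brightest pixel; f(T(x)) == T(f(x))
r, c = np.unravel_index(np.argmax(img), img.shape)
r2, c2 = np.unravel_index(np.argmax(rot), rot.shape)
ncols = img.shape[1]
# np.rot90 maps position (row, col) to (ncols - 1 - col, row)
assert (r2, c2) == (ncols - 1 - c, r)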

Invariance
• Features should be detected despite geometric or photometric changes in the image.
• Given two transformed versions of the same image, features should be detected in corresponding locations.

Example: Panorama Stitching

• How do we combine these two images?

Panorama stitching (cont’d)

Step 1: extract features
Step 2: match features

Panorama stitching (cont’d)

Step 1: extract features
Step 2: match features
Step 3: align images

What features should we use?

Use features with gradients in at least two (significantly) different orientations. Patches? Corners?

What features should we use? (cont’d)

(auto-correlation)

Corners vs Edges

Corners in Images

The Aperture Problem

Corner Detection

Corner Detector: Basic Idea

“flat” region: no change in all directions
“edge”: no change along the edge direction
“corner”: significant change in all directions

Corner Detection Using Intensity: Basic Idea

• Image gradient has two or more dominant directions near a corner.

• Shifting a window in any direction should give a large change in intensity.

“edge”: no change along the edge direction

“corner”: significant change in all directions

“flat” region:no change in all directions

Corner Detection Using Edge Detection?

• Edge detectors are not stable at corners.
• The gradient is ambiguous at the corner tip.
• There is a discontinuity of gradient direction near the corner.

Corner Types

Example of L-junction, Y-junction, T-junction, Arrow-junction, and X-junction corner types

Main Steps in Corner Detection

1. For each pixel in the input image, the corner operator is applied to obtain a cornerness measure for this pixel.
2. Threshold the cornerness map to eliminate weak corners.
3. Apply non-maximal suppression to eliminate points whose cornerness measure is not larger than the cornerness values of all points within a certain distance.

Main Steps in Corner Detection (cont’d)

Moravec Detector (1977)
• Measure the intensity variation at (x,y) by shifting a small window (3x3 or 5x5) by one pixel in each of the eight principal directions (horizontally, vertically, and along the four diagonals).

Moravec Detector (1977)
• Calculate the intensity variation as the sum of squared differences between corresponding pixels of the original and shifted windows:

SW(Δx,Δy) = Σ(x,y)∈W [ I(x+Δx, y+Δy) − I(x,y) ]²,   Δx, Δy in {−1, 0, 1}

This gives the eight values SW(−1,−1), SW(−1,0), ..., SW(1,1), one for each of the 8 shift directions.

Moravec Detector (cont’d)
• The “cornerness” of a pixel is the minimum intensity variation found over the eight shift directions:

Cornerness(x,y) = min{ SW(−1,−1), SW(−1,0), ..., SW(1,1) }

[Figure: cornerness map (normalized). Note the response to isolated points!]
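A minimal sketch of the Moravec cornerness measure described above (the window size is an illustrative choice; thresholding and non-maximal suppression are applied afterwards):

import numpy as np

def moravec_cornerness(img, window=3):
    """Moravec measure: minimum over the 8 unit shifts of the SSD between the
    window at (x, y) and the window shifted by (dx, dy)."""
    img = img.astype(np.float64)
    r = window // 2
    h, w = img.shape
    corner = np.zeros_like(img)
    shifts = [(dx, dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0)]
    for y in range(r + 1, h - r - 1):
        for x in range(r + 1, w - r - 1):
            win = img[y - r:y + r + 1, x - r:x + r + 1]
            ssd = [np.sum((img[y - r + dy:y + r + 1 + dy,
                               x - r + dx:x + r + 1 + dx] - win) ** 2)
                   for dx, dy in shifts]
            corner[y, x] = min(ssd)      # cornerness = minimum intensity variation
    return corner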

Moravec Detector (cont’d)

• Non-maximal suppression then yields the final corners: zero out all pixels whose cornerness is not a local maximum (look at one pixel on each side).

Moravec Detector (cont’d)

• Does a reasonable job of finding the majority of true corners.
• Edge points not aligned with one of the eight principal directions will be assigned a relatively large cornerness value.

Moravec Detector (cont’d)
• The response is anisotropic (directionally sensitive), since the intensity variation is only calculated at a discrete set of shifts (i.e., it is not rotationally invariant).

Corner Detection: Mathematics

Change in appearance of window w(x,y) for the shift [u,v]:

E(u,v) = Σx,y w(x,y) [ I(x+u, y+v) − I(x,y) ]²

[Figure: image I(x,y), window w(x,y), and the error surface E(u,v), here evaluated at the shift (3,2), i.e. E(3,2).]

Corner Detection: Mathematics

Change in appearance of window w(x,y) for the shift [u,v]:

E(u,v) = Σx,y w(x,y) [ I(x+u, y+v) − I(x,y) ]²

[Figure: image I(x,y), window w(x,y), and the error surface E(u,v), here evaluated at E(0,0).]

Harris Detector: Mathematics

Change of intensity for the shift [u,v]:

E(u,v) = Σx,y w(x,y) [ I(x+u, y+v) − I(x,y) ]²

where w(x,y) is the window function, I(x+u, y+v) the shifted intensity, and I(x,y) the intensity.

Window function w(x,y): either 1 in window, 0 outside, or a Gaussian.


Harris Detector

• Improves the Moravec operator by avoiding the use of discrete directions and discrete shifts.

• Uses a Gaussian window instead of a square window.

Harris Detector (cont’d)

• Using first-order Taylor approximation:

Reminder: Taylor expansion

f(x) = f(a) + f'(a)/1! (x − a) + f''(a)/2! (x − a)² + ... + f^(n)(a)/n! (x − a)^n + O((x − a)^(n+1))

Harris Detector (cont’d)

Since I(x+u, y+v) ≈ I(x,y) + Ix(x,y)·u + Iy(x,y)·v (the first-order Taylor approximation above), the change of intensity becomes

E(u,v) ≈ Σx,y w(x,y) [ Ix(x,y)·u + Iy(x,y)·v ]² = [u v] AW [u v]^T

Harris Detector (cont’d)

where AW is a 2 x 2 matrix (the auto-correlation or 2nd-order moment matrix):

AW(x,y) = Σ(xi,yi)∈W [ fx²     fx·fy ]
                     [ fx·fy   fy²   ]

with fx = ∂f(xi,yi)/∂x and fy = ∂f(xi,yi)/∂y.

Harris Detector

SW(Δx,Δy) = Σ(xi,yi)∈W w(xi,yi) [ f(xi,yi) − f(xi+Δx, yi+Δy) ]²

default window function w(x,y): 1 in window, 0 outside

• General case – use window function:

AW(x,y) = [ Σx,y w(x,y)·fx²      Σx,y w(x,y)·fx·fy ]
          [ Σx,y w(x,y)·fx·fy    Σx,y w(x,y)·fy²   ]

Harris Detector (cont’d)

• Harris uses a Gaussian window: w(x,y) = G(x,y,σI), where σI is called the “integration” scale.

AW(x,y) = [ Σx,y w(x,y)·fx²      Σx,y w(x,y)·fx·fy ]
          [ Σx,y w(x,y)·fx·fy    Σx,y w(x,y)·fy²   ]

Harris Detector

AW = Σx,y [ fx²     fx·fy ]
          [ fx·fy   fy²   ]

describes the gradient distribution (i.e., the local structure) inside the window; it does not depend on the shift [u,v].

Harris Detector (cont’d)

Since AW is symmetric, we can diagonalize it:

AW = R^(-1) [ λ1   0  ] R
            [ 0    λ2 ]

We can visualize AW as an ellipse, with axis lengths determined by the eigenvalues and orientation determined by R.

Ellipse equation:  [x y] AW [x y]^T = const
• axis of length (λmax)^(-1/2): direction of the fastest change
• axis of length (λmin)^(-1/2): direction of the slowest change

Harris Corner Detector (cont’d)

• The eigenvectors of AW encode the direction of intensity change.
• The eigenvalues of AW encode the strength of intensity change.

Harris Detector (cont’d)

• Eigenvectors encode edge direction
• Eigenvalues encode edge strength

[Figure: ellipse for AW; the short axis, of length (λmax)^(-1/2), lies along the direction of the fastest change, and the long axis, of length (λmin)^(-1/2), along the direction of the slowest change.]

Distribution of fx and fy

Distribution of fx and fy (cont’d)

Harris Detector (cont’d)

(assuming that λ1 > λ2)


Harris Detector

Measure of corner response:

R = det(AW) − k·trace2(AW) = λ1·λ2 − k(λ1 + λ2)²

(k – empirical constant, k = 0.04-0.06)

(Shi-Tomasi variation: use min(λ1,λ2) instead of R)

Slide credit: Darya Frolova, Denis Simakov, The Weizmann Institute of Science (http://www.wisdom.weizmann.ac.il/~deniss/vision_spring04/files/InvariantFeatures.ppt)

Harris Detector: Mathematics

[Figure: the (λ1, λ2) plane divided into a “Corner” region (R > 0), two “Edge” regions (R < 0), and a “Flat” region (|R| small).]

• R depends only on the eigenvalues of AW
• R is large for a corner
• R is negative with large magnitude for an edge
• |R| is small for a flat region

Harris Corner Detector (cont’d)

Classification of pixels using the eigenvalues of AW:

• “Corner”: λ1 and λ2 are large, λ1 ~ λ2; the intensity changes in all directions
• “Edge”: λ1 >> λ2 (or λ2 >> λ1); the intensity changes mainly in one direction
• “Flat” region: λ1 and λ2 are small; the intensity is almost constant in all directions
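A small sketch that classifies a single window from the eigenvalues of AW following this rule (the thresholds lo and ratio are illustrative values, not from the slides):

import numpy as np

def classify_window(fx_win, fy_win, w_win, lo=1e2, ratio=5.0):
    """Classify one window from the eigenvalues of AW.
    fx_win, fy_win: gradient patches; w_win: window weights (e.g. a Gaussian)."""
    A = np.array([[np.sum(w_win * fx_win * fx_win), np.sum(w_win * fx_win * fy_win)],
                  [np.sum(w_win * fx_win * fy_win), np.sum(w_win * fy_win * fy_win)]])
    lam2, lam1 = np.linalg.eigh(A)[0]    # eigh returns eigenvalues in ascending order
    if lam1 < lo:
        return "flat"                    # both eigenvalues small
    if lam1 / max(lam2, 1e-12) > ratio:
        return "edge"                    # one eigenvalue much larger than the other
    return "corner"                      # both large and comparable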

Harris Detector (cont’d)

fx(xi,yi) = ∂/∂x G(x,y,σD) * f(xi,yi)
fy(xi,yi) = ∂/∂y G(x,y,σD) * f(xi,yi)

σD is called the “differentiation” scale

Harris Detector

• The Algorithm:
– Find points with a large corner response function R (R > threshold)
– Take the points of local maxima of R

Harris corner detector algorithm
• Compute image gradients Ix, Iy for all pixels
• For each pixel:
– compute the entries of AW (the sums of Ix², Iy², and Ix·Iy) by looping over the neighbors x,y
– compute the corner response R = det(AW) − k·trace2(AW)
• Find points with a large corner response function R (R > threshold)
• Take the points of locally maximum R as the detected feature points (i.e., pixels where R is bigger than for all of the 4 or 8 neighbors); a minimal sketch follows below.
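A minimal end-to-end sketch of this algorithm (assumptions: a Gaussian window for the sums, 3x3 non-maximum suppression, and illustrative values for σD, σI, k, and the threshold):

import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

def harris_corners(img, sigma_d=1.0, sigma_i=2.0, k=0.05, thresh_rel=0.01):
    img = img.astype(np.float64)
    # 1. Gaussian derivatives (differentiation scale sigma_d)
    ix = gaussian_filter(img, sigma_d, order=(0, 1))
    iy = gaussian_filter(img, sigma_d, order=(1, 0))
    # 2. Products of gradients, summed with a Gaussian window (integration scale sigma_i)
    sxx = gaussian_filter(ix * ix, sigma_i)
    syy = gaussian_filter(iy * iy, sigma_i)
    sxy = gaussian_filter(ix * iy, sigma_i)
    # 3. Corner response R = det(AW) - k * trace(AW)^2
    det = sxx * syy - sxy * sxy
    trace = sxx + syy
    r = det - k * trace * trace
    # 4. Threshold and keep only local maxima (3x3 neighborhood)
    corners = (r > thresh_rel * r.max()) & (r == maximum_filter(r, size=3))
    return np.argwhere(corners), r   # (row, col) corner coordinates and the R map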


Slide credit: Darya Frolova, Denis Simakov, The Weizmann Institute of Science.

Harris Corner Detector (cont’d)
• To avoid computing the eigenvalues explicitly, the Harris detector uses the following function:

R(AW) = det(AW) – α trace2(AW)

which is equal to:

R(AW) = λ1·λ2 – α (λ1 + λ2)²

where α is a constant.

Harris Corner Detector (cont’d)

Classification of image points using R(AW) = det(AW) – α trace2(AW):

• “Corner”: R > 0
• “Edge”: R < 0
• “Flat” region: |R| small

α is usually between 0.04 and 0.06.

Harris Corner Detector (cont’d)

• Other functions:

• λ2 − a·λ1
• Harmonic mean: det(AW) / trace(AW) = λ1·λ2 / (λ1 + λ2)

Harris Corner Detector - Steps

1. Compute the horizontal and vertical (Gaussian) derivatives:

fx = ∂/∂x G(x,y,σD) * f(x,y)        fy = ∂/∂y G(x,y,σD) * f(x,y)

σD is called the “differentiation” scale.

2. Compute the three images fx², fy², and fx·fy involved in AW:

AW(x,y) = w(x,y) * [ fx²     fx·fy ]
                   [ fx·fy   fy²   ]

Harris Detector - Steps

3. Convolve each of the three images with a larger Gaussian window w(x,y) of scale σI; σI is called the “integration” scale.

4. Determine the cornerness: R(AW) = det(AW) – α trace2(AW)

5. Find local maxima
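These five steps are essentially what OpenCV's cornerHarris implements; a minimal usage sketch (blockSize, ksize, k, the threshold, and the filename are illustrative choices):

import cv2
import numpy as np

img = cv2.imread("lena.png", cv2.IMREAD_GRAYSCALE)    # placeholder filename
r = cv2.cornerHarris(np.float32(img), blockSize=3, ksize=3, k=0.05)
corners = np.argwhere((r > 0.01 * r.max()) &
                      (r == cv2.dilate(r, None)))     # local maxima above a threshold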

Harris Detector - Example

Harris Detector - Example

Harris Detector - ExampleCompute corner response R

Harris Detector - ExampleFind points with large corner response: R>threshold

Harris Detector - ExampleTake only the points of local maxima of R

Harris Detector - ExampleMap corners on the original image

Harris Detector – Scale Parameters

• The Harris detector requires two scale parameters:

(i) a differentiation scale σD for smoothing prior to the computation of image derivatives,

& (ii) an integration scale σI for defining the size of the Gaussian window (i.e., for integrating the derivative responses): AW(x,y) → AW(x,y,σI,σD)

• Typically, σI=γσD

Invariance to Geometric/Photometric Changes

• Is the Harris detector invariant to geometric and photometric changes?

• Geometric

– Rotation

– Scale

– Affine

• Photometric
– Affine intensity change: I(x,y) → a·I(x,y) + b

Harris Detector: Rotation Invariance

• Rotation

Ellipse rotates but its shape (i.e. eigenvalues) remains the same

Corner response R is invariant to image rotation

Harris Detector: Rotation Invariance (cont’d)

Harris Detector: Photometric Changes

• Affine intensity change: only derivatives are used => invariance to an intensity shift I(x,y) → I(x,y) + b
• Intensity scaling: I(x,y) → a·I(x,y)

[Plots: corner response R versus x (image coordinate), with a threshold line, before and after the intensity change.]

Partially invariant to affine intensity change
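A small check of this claim, reusing the harris_corners sketch defined earlier in these notes: an intensity shift leaves R unchanged (only derivatives are used), while a gain a rescales R by a^4, so a fixed threshold is only partially invariant.

import numpy as np

img = np.random.rand(64, 64)
_, r0 = harris_corners(img)                  # sketch defined earlier in these notes
_, r_shift = harris_corners(img + 0.3)       # I -> I + b: derivatives are unchanged
_, r_gain = harris_corners(2.0 * img)        # I -> a*I: derivatives (and R) rescale

assert np.allclose(r0, r_shift)              # invariant to an intensity shift
assert np.allclose(r_gain, (2.0 ** 4) * r0)  # R scales by a^4, so a fixed threshold is not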

Harris Detector: Scale Invariance

• Scaling

[Figure: a corner viewed at a larger scale; within the small window, all points will be classified as edges rather than a corner.]

Not invariant to scaling (and affine transforms)

Harris Detector: Repeatability

• In a comparative study of different interest point detectors, Harris was shown to be the most repeatable and most informative.

C. Schmid, R. Mohr, and C. Bauckhage, "Evaluation of Interest Point Detectors", International Journal of Computer Vision, 37(2), pp. 151-172, 2000.

best results:σI=1, σD=2

Repeatability = (# correspondences) / min(m1, m2)

(m1, m2: number of points detected in each of the two images)

Harris Detector: Disadvantages

• Sensitive to:
– Scale change
– Significant viewpoint change
– Significant contrast change

How to handle scale changes?

• AW must be adapted to scale changes.

• If the scale change is known, we can adapt the Harris detector to the scale change (i.e., set properly σI ,σD).

• What if the scale change is unknown?

Multi-scale Harris Detector

[Figure: Harris interest points detected at positions (x, y) and over scale.]

• Detects interest points at varying scales:

R(AW) = det(AW(x,y,σI,σD)) – α trace2(AW(x,y,σI,σD))

σD = σn,  σI = γσD,  σn = k^n·σ
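A minimal sketch of the multi-scale detector (reusing the earlier harris_corners sketch; σ0, the scale step k, the number of scales, and γ are illustrative):

import numpy as np

def multiscale_harris(img, sigma0=1.0, k_scale=1.4, n_scales=5, gamma=2.0):
    """Run the Harris sketch at scales sigma_n = k^n * sigma0.
    At each scale, sigma_D = sigma_n and sigma_I = gamma * sigma_D."""
    points = []
    for n in range(n_scales):
        sigma_n = (k_scale ** n) * sigma0
        coords, _ = harris_corners(img, sigma_d=sigma_n, sigma_i=gamma * sigma_n)
        points += [(y, x, sigma_n) for y, x in coords]   # keep the detection scale
    return points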

How to cope with transformations?

• Exhaustive search
• Invariance
• Robustness

Exhaustive search
• Multi-scale approach

Slide from T. Tuytelaars ECCV 2006 tutorial

How to handle scale changes?

• An exhaustive multi-scale search is not a good idea!
– There will be many points representing the same structure, complicating matching!
– Note that point locations shift as the scale increases.

The size of the circle corresponds to the scale at which the point was detected.

How to handle scale changes? (cont’d)

• How do we choose corresponding circles independently in each image?

How to handle scale changes? (cont’d)

• Alternatively, use scale selection to find the characteristic scale of each feature.

• Characteristic scale depends on the feature’s spatial extent (i.e., local neighborhood of pixels).

scale selection scale selection

How to handle scale changes?

The size of the circles corresponds to the scale at which the point was selected.

• Only a subset of the points computed in scale space are selected!

Automatic Scale Selection
• Design a function F(x,σn) which provides some local measure.
• Select points at which F(x,σn) is maximal over σn.

T. Lindeberg, "Feature detection with automatic scale selection" International Journal of Computer Vision, vol. 30, no. 2, pp 77-116, 1998.

[Plot: F(x,σn) versus σn; the maximum of F(x,σn) corresponds to the characteristic scale!]

Invariance

• Extract patch from each image individually

Automatic scale selection
• Solution:
– Design a function on the region which is “scale invariant” (the same for corresponding regions, even if they are at different scales).

Example: average intensity. For corresponding regions (even of different sizes) it will be the same.

– For a point in one image, we can consider this function of region size (patch width).

[Plot: f versus region size for Image 1, and f versus region size for Image 2 (scale = 1/2).]

Automatic scale selection
• Common approach: take a local maximum of this function.

[Plot: f versus region size for Image 1, and for Image 2 at scale = 1/2; the maxima occur at region sizes s1 and s2.]

Observation: the region size at which the maximum is achieved should be invariant to the image scale.

Important: this scale-invariant region size is found in each image independently!

Automatic Scale Selection

f(I_{i1…im}(x, σ)) = f(I′_{i1…im}(x′, σ′))

Same operator responses if the patch contains the same image up to a scale factor.

Automatic Scale Selection
• Function responses for increasing scale (scale signature): f(I_{i1…im}(x, σ)) is evaluated over increasing σ at a point in each of the two images.

Scale selection
• Use the scale determined by the detector to compute the descriptor in a normalized frame.

Automatic Scale Selection (cont’d)

• The characteristic scale is relatively independent of the image scale.
• The ratio of the scale values corresponding to the maxima is equal to the scale factor between the images.

Scale selection allows finding a spatial extent that is covariant with the image transformation.

Scale factor: 2.5

Automatic Scale Selection

• What local measures should we use?
– Should be rotation invariant
– Should have one stable sharp peak

How should we choose F(x,σn) ?

• Typically, F(x,σn) is defined using derivatives, e.g.:

• Square gradient:  Lx²(x,σn) + Ly²(x,σn)
• Laplacian of Gaussian (LoG):  σn² | Lxx(x,σn) + Lyy(x,σn) |
• Difference of Gaussians (DoG):  | I(x)*G(σn) − I(x)*G(σn+1) |
• Harris function:  det(AW) − α trace2(AW)

C. Schmid, R. Mohr, and C. Bauckhage, "Evaluation of Interest Point Detectors", International Journal of Computer Vision, 37(2), pp. 151-172, 2000.
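A minimal sketch of scale selection with the scale-normalized LoG from the list above: evaluate F(x,σn) at one pixel over increasing σn and return the σn where the response peaks (the scale list is illustrative; computing the full LoG per scale is wasteful but keeps the sketch short):

import numpy as np
from scipy.ndimage import gaussian_laplace

def characteristic_scale(img, y, x, sigmas=(1.0, 1.4, 2.0, 2.8, 4.0, 5.7, 8.0)):
    """Scale signature F(x, sigma) = sigma^2 * |LoG| at one pixel; return the
    sigma at which it is maximal (the characteristic scale)."""
    img = img.astype(np.float64)
    responses = [s ** 2 * abs(gaussian_laplace(img, s)[y, x]) for s in sigmas]
    return sigmas[int(np.argmax(responses))], responses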

Review
• Patch/Template Matching
• Interest Points
• Corner Detection: Harris Detector
• Properties of the Harris Detector
• Scaling approach