Lecture 6 Feature Detection 2


Transcript of Lecture 6 Feature Detection 2

8/14/2019 Lecture 6 Feature Detection 2

    Feature extraction

    Scheme and tools in computer vision

Recognition of features is, in general, a complex procedure requiring a variety of steps that successively transform the iconic data into recognition information.

Handling unconstrained environments is still a very challenging problem.


Visual Appearance-based Feature Extraction (Vision)

Image processing scheme (computer vision):

environment → image → conditioning → simplified image → labeling/grouping → groups of pixels → extracting → properties → model matching → cognition/action

Tools in computer vision: thresholding, connected-component labeling, edge detection, Hough transformation, filtering, disparity, correlation


Feature Extraction (Vision): Tools

Conditioning

Suppresses noise; background normalization by suppressing uninteresting systematic or patterned variations. Done by:

gray-scale modification (e.g. thresholding)

(low-pass) filtering

Labeling

Determination of the spatial arrangement of the events, i.e. searching for a structure.

Grouping

Identification of the events by collecting together pixels participating in the same kind of event.

Extracting

Compute a list of properties for each group.

Matching (see chapter 5)


    Filtering and Edge Detection

Gaussian Smoothing

Removes high-frequency noise.

Convolution of the intensity image I with the Gaussian kernel G.

Edges

Locations where the brightness undergoes a sharp change.

Differentiate the image once or twice and look for places where the magnitude of the derivative is large.

Derivatives amplify noise, so filtering/smoothing is required before edge detection.
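The smoothing step can be sketched as follows (a minimal NumPy illustration, not from the lecture; the 3σ kernel radius and zero-padded borders are assumptions):

```python
import numpy as np

def gaussian_kernel(sigma, radius=None):
    """Sampled 1-D Gaussian, normalized to sum to 1."""
    if radius is None:
        radius = int(3 * sigma)          # covers ~99.7% of the mass
    x = np.arange(-radius, radius + 1)
    g = np.exp(-x**2 / (2 * sigma**2))
    return g / g.sum()

def gaussian_smooth(image, sigma=1.0):
    """Separable 2-D Gaussian smoothing: filter rows, then columns."""
    g = gaussian_kernel(sigma)
    # mode='same' keeps the image size; borders are zero-padded
    rows = np.apply_along_axis(lambda r: np.convolve(r, g, mode='same'), 1, image)
    return np.apply_along_axis(lambda c: np.convolve(c, g, mode='same'), 0, rows)
```

Because the 2-D Gaussian is separable, two 1-D passes give the same result as one 2-D convolution at far lower cost.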


    Edge Detection

The ultimate goal of edge detection is an idealized line drawing, in which edge contours in the image correspond to important scene contours.


    Optimal Edge Detection: Canny

    The processing steps

    Convolution of image with the Gaussian function G

    Finding maxima in the derivative

    Canny combines both in one operation

    (a) A Gaussian function. (b) The first derivative of a Gaussian function.


    Optimal Edge Detection: Canny 1D example

(a) Intensity 1-D profile of an ideal step edge.

(b) Intensity profile I(x) of a real edge.

(c) Its derivative I′(x).

(d) The result of the convolution R(x) = G ∗ I, where G is the first derivative of a Gaussian function.


    Optimal Edge Detection: Canny

A 1-D edge detector can be defined with the following steps:

1. Convolve the image I with G to obtain R.

2. Find the absolute value of R.

3. Mark those peaks of |R| that are above some predefined threshold T. The threshold is chosen to eliminate spurious peaks due to noise.

In 2D, the two-dimensional Gaussian function is used.
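The three steps above can be sketched in code (an illustrative NumPy version, not the lecture's implementation; the kernel radius, threshold default, and peak-picking details are assumptions):

```python
import numpy as np

def canny_1d(signal, sigma=1.0, threshold=0.1):
    """1-D edge detector: convolve with the first derivative of a
    Gaussian (step 1), take |R| (step 2), and keep the peaks above
    the threshold T (step 3)."""
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    g = np.exp(-x**2 / (2 * sigma**2))
    g /= g.sum()
    dg = -x / sigma**2 * g                 # first derivative of the Gaussian
    r = np.abs(np.convolve(signal, dg, mode='same'))
    edges = []
    for i in range(1, len(r) - 1):         # peaks: local maxima above T
        if r[i] > threshold and r[i] >= r[i - 1] and r[i] >= r[i + 1]:
            edges.append(i)
    return edges
```

On a noiseless step edge the detector marks the transition; on a flat signal it returns no edges at all.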


    Optimal Edge Detection: Canny Example

    a) Example of Canny edge detection

    b) After nonmaxima suppression (reduce the thickness of all edges to a

    single pixel)


    Nonmaxima Suppression

The output of an edge detector is usually a b/w image where the pixels with gradient magnitude above a predefined threshold are black and all the others are white.

Nonmaxima suppression generates contours that are only one pixel thin by determining the local maximum.
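A minimal sketch of nonmaxima suppression (illustrative only; the quantization of the gradient angle to four directions is an assumption):

```python
import numpy as np

def nonmax_suppress(mag, angle):
    """Keep a gradient-magnitude pixel only if it is >= its two
    neighbours along the (quantized) gradient direction."""
    out = np.zeros_like(mag)
    # quantize the gradient angle to 0, 45, 90 or 135 degrees
    q = (np.round(angle / (np.pi / 4)) % 4).astype(int)
    offs = {0: (0, 1), 1: (-1, 1), 2: (-1, 0), 3: (-1, -1)}
    for i in range(1, mag.shape[0] - 1):
        for j in range(1, mag.shape[1] - 1):
            di, dj = offs[q[i, j]]
            if mag[i, j] >= mag[i + di, j + dj] and mag[i, j] >= mag[i - di, j - dj]:
                out[i, j] = mag[i, j]
    return out
```

A ridge several pixels wide in the magnitude image collapses to its single strongest column, which is exactly the one-pixel thinning described above.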


    Gradient Edge Detectors

    Roberts

    Prewitt

    Sobel
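The slide's kernel figures are not in the transcript; for illustration, here are the standard x-direction kernels of these three operators and a direct Sobel gradient-magnitude computation (the 'valid'-region handling, which shrinks the output by the kernel border, is an assumption):

```python
import numpy as np

# The classic gradient kernels (x direction; y is the transpose)
ROBERTS_X = np.array([[1, 0], [0, -1]])
PREWITT_X = np.array([[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]])
SOBEL_X   = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])

def sobel_gradient(image):
    """Gradient magnitude via Sobel; computed on the 'valid' region
    only, so the output is 2 pixels smaller in each dimension."""
    image = np.asarray(image, dtype=float)
    h, w = image.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            patch = image[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(patch * SOBEL_X)
            gy[i, j] = np.sum(patch * SOBEL_X.T)
    return np.hypot(gx, gy)
```

On a vertical step edge the magnitude is large only in the columns straddling the step and zero in the flat regions on either side.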


    Example

    a) Raw image

    b) Filtered

    (Sobel)

    c) Thresholding

    d) Nonmaxima

    suppression

Fig. 4.46


    Comparison of Edge Detection Methods

    Average time required to compute the edge figure of a 780 x 560 pixels image.

The time required to compute an edge image is proportional to the accuracy of the resulting edge image.


    Dynamic Thresholding

Under changing illumination, a constant threshold level in edge detection is not suitable. Dynamically adapt the threshold level instead:

Consider only the n pixels with the highest gradient magnitude for further calculation steps.

The gradient magnitude of the point where n is reached (counted backward from the highest value) is used as the temporary threshold value.

(a) Number of pixels with a specific gradient magnitude in the image of Fig. 4.46(b). (b) Same as (a), but with logarithmic scale.
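The selection rule can be sketched as follows (illustrative; ties at the threshold value may let slightly more than n pixels through):

```python
import numpy as np

def dynamic_threshold(grad_mag, n):
    """Keep roughly the n pixels with the highest gradient magnitude:
    the n-th largest magnitude becomes the temporary threshold."""
    flat = np.sort(grad_mag.ravel())   # ascending order
    t = flat[-n]                       # n-th value counted from the top
    return t, grad_mag >= t
```

Because the threshold is derived from the current image's own gradient distribution, a uniformly brighter or darker scene still yields about the same number of edge candidates.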


    Hough Transform: Straight Edge Extraction

All points p on a straight-line edge must satisfy yp = m1·xp + b1.

Each point (xp, yp) that is part of this line constrains the parameters m1 and b1. The Hough transform finds the line (line parameters m, b) that gets the most votes from the edge pixels in the image.

This is realized in four steps:

1. Create a 2D array A[m, b] with axes that tessellate the values of m and b.

2. Initialize the array A to zero: A[m, b] = 0 for all values of m, b.

3. For each edge pixel (xp, yp) in the image, loop over all values of m and b: if yp = m·xp + b then A[m, b] += 1.

4. Search for cells in A with the largest value. They correspond to extracted straight-line edges in the image.
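The four steps can be sketched as follows (an illustrative brute-force version; the vote tolerance `tol` is an assumption, and note that the slope-intercept form used here cannot represent vertical lines):

```python
import numpy as np

def hough_lines(edge_pixels, m_vals, b_vals, tol=0.5):
    """Accumulate votes in A[m, b]; each edge pixel votes for every
    (m, b) whose line passes within tol of it."""
    A = np.zeros((len(m_vals), len(b_vals)), dtype=int)   # steps 1 + 2
    for (x, y) in edge_pixels:                            # step 3
        for i, m in enumerate(m_vals):
            for j, b in enumerate(b_vals):
                if abs(y - (m * x + b)) < tol:
                    A[i, j] += 1
    i, j = np.unravel_index(A.argmax(), A.shape)          # step 4
    return m_vals[i], b_vals[j], A
```

Collinear edge pixels all vote for the same accumulator cell, so the cell with the most votes identifies the dominant line.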


    Grouping, Clustering: Assigning Features to Features

pixels → features → objects

    Connected Component Labeling

[Figure: two stages of connected component labeling on a binary image. In the first stage, scanning assigns provisional labels 1–4 and records the equivalences 1 = 2 and 3 = 4; in the second stage, equivalent labels are merged so each connected region carries a single label.]
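Connected component labeling can be sketched as follows (the slide's figure suggests a two-pass scheme with label equivalences; this illustrative version uses an equivalent flood fill over 4-connected neighbours):

```python
def label_components(binary):
    """Assign one label per 4-connected group of foreground pixels.
    binary is a list of rows of 0/1 values; returns (labels, count)."""
    h, w = len(binary), len(binary[0])
    labels = [[0] * w for _ in range(h)]
    next_label = 0
    for si in range(h):
        for sj in range(w):
            if binary[si][sj] and labels[si][sj] == 0:
                next_label += 1                     # new component found
                stack = [(si, sj)]
                while stack:                        # flood-fill it
                    i, j = stack.pop()
                    if 0 <= i < h and 0 <= j < w and binary[i][j] and labels[i][j] == 0:
                        labels[i][j] = next_label
                        stack += [(i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)]
    return labels, next_label
```

Touching pixels end up with the same label, separated groups with different labels, matching the merged result in the figure.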


    Floor Plane Extraction

Vision-based identification of traversable areas.

Assumptions:

Obstacles differ in appearance from the ground.

The ground is flat and its angle to the camera is known.

There are no overhanging obstacles.

Floor plane extraction in artificial environments: Shakey, the first autonomous mobile robot, developed from 1966 to 1972 at SRI, used vision-based floor plane extraction. It relied on textureless white floor tiles, with walls painted with a high-contrast strip of black paint; the edges of all simple polygonal obstacles were also painted black.


    Adaptive Floor Plane Extraction

The parameters defining the expected appearance of the floor are allowed to vary over time.

Use edge detection and color detection jointly.

The processing steps:

As pre-processing, smooth I_f using a Gaussian smoothing operator.

Initialize a histogram array H with n intensity values.

For every pixel (x, y) in I_f, increment the histogram.
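The histogram-building steps can be sketched as follows (illustrative; pixel values in [0, 1] and the bin count are assumptions, and smoothing is assumed already done):

```python
import numpy as np

def intensity_histogram(image, n_bins=16):
    """Histogram array H with n_bins levels; every pixel of the
    (already smoothed) image increments its bin."""
    hist = np.zeros(n_bins, dtype=int)
    # map each value in [0, 1] to a bin index, clamping the top edge
    bins = np.clip((np.asarray(image, dtype=float) * n_bins).astype(int),
                   0, n_bins - 1)
    for v in bins.ravel():
        hist[v] += 1
    return hist
```

The resulting array H summarizes the floor's appearance and can be re-estimated as lighting changes, which is what lets the expected appearance vary over time.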


    Whole-Image Features

OmniCam: a CCD camera mounted toward a parabolic mirror.

Compact representations of the entire local region.

Extract one or more features from the image that are well correlated with the robot's position.


    Direct Extraction: image histograms

When combined with the catadioptric camera's field-of-view invariance, we can create a system that is invariant to robot rotation and insensitive to small-scale robot translation.

A color camera's output image contains information along multiple bands: R, G, and B values as well as hue, saturation, and luminance values.

The simplest histogram-based extraction strategy is to build separate 1D histograms characterizing each band. We use Gi to refer to an array storing the values in band i for all pixels in G.


    Image Histograms

The processing steps:

As pre-processing, smooth G_i using a Gaussian smoothing operator.

Initialize H with n levels.

For every pixel (x, y) in G_i, increment the histogram.

Jeffrey divergence:

d(H, K) = Σ_i [ h_i · log( 2·h_i / (h_i + k_i) ) + k_i · log( 2·k_i / (h_i + k_i) ) ]

Using a measure such as the Jeffrey divergence, mobile robots have used whole-image histogram features to identify their position in real time against a database of previously recorded images of locations in their environment.
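The Jeffrey divergence above can be computed directly (an illustrative sketch; the small eps guarding log(0) for empty bins is an assumption):

```python
import numpy as np

def jeffrey_divergence(h, k, eps=1e-12):
    """Jeffrey divergence between two normalized histograms h and k:
    d(H, K) = sum_i [h_i log(2h_i/(h_i+k_i)) + k_i log(2k_i/(h_i+k_i))]."""
    h = np.asarray(h, dtype=float) + eps   # eps avoids log(0) on empty bins
    k = np.asarray(k, dtype=float) + eps
    m = h + k
    return np.sum(h * np.log(2 * h / m) + k * np.log(2 * k / m))
```

The measure is zero for identical histograms and grows as the two distributions disagree, which is what makes it usable for ranking database images against the current view.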


    Image Fingerprint Extraction

Highly distinctive combinations of simple features. v: vertical edge; A, B, C, ..., P: hue bins.

The string from the current location is matched against strings from the database.


    Homework 5 (Due two weeks from now)

Use MATLAB to do feature detection on the image I posted on pipeline. You are asked to use Canny detection and Sobel detection and to compare the results of the two algorithms. You need to submit the following items:

A report including an introduction of the two detection algorithms, the procedures, and the final results.

A MATLAB file for each algorithm so that I can run them on my PC.