Remote Sensing Part 4 Classification & Vegetation Indices.


Transcript of Remote Sensing Part 4 Classification & Vegetation Indices.

Page 1: Remote Sensing Part 4 Classification & Vegetation Indices.

Remote Sensing Part 4

Classification & Vegetation Indices

Page 2: Remote Sensing Part 4 Classification & Vegetation Indices.

Classification Introduction

• Humans are classifiers by nature - we’re always putting things into categories

• To classify things, we use sets of criteria

• Examples:
  – Classifying people by age, gender, race, job/career, etc.
  – Criteria might include appearance, style of dress, pitch of voice, build, hair style, language/lexicon, etc.
  – Ambiguity comes from:
    • 1) our classification system (i.e., what classes we choose)
    • 2) our criteria (some criteria don’t differentiate people with complete accuracy)
    • 3) our data (i.e., people who fit multiple categories and people who fit no categories)

Page 3: Remote Sensing Part 4 Classification & Vegetation Indices.

Non-Remote Sensing Classification Example

• “Sorting incoming fish on a conveyor according to species using optical sensing”

• Species: sea bass, salmon

** The following data are just hypothetical

Page 4: Remote Sensing Part 4 Classification & Vegetation Indices.

Methods

– Set up a camera and take some sample images to extract features
  • Length
  • Lightness
  • Width
  • Number and shape of fins
  • Position of the mouth, etc.

Page 5: Remote Sensing Part 4 Classification & Vegetation Indices.

Scanning the Fish

Page 6: Remote Sensing Part 4 Classification & Vegetation Indices.

• Classification #1

– Use the length of the fish as a possible feature for discrimination

Page 7: Remote Sensing Part 4 Classification & Vegetation Indices.

• Fish length alone is a poor feature for classifying fish type
  – Using only length we would be correct 50-60% of the time
  – That’s not great, because random guessing (i.e., flipping a coin) would be right ~50% of the time if there are an equal number of each fish type

Page 8: Remote Sensing Part 4 Classification & Vegetation Indices.

• Classification #2

– Use the lightness (i.e., color) of the fish as a possible feature for discrimination

Page 9: Remote Sensing Part 4 Classification & Vegetation Indices.

• Fish lightness alone is a pretty good feature for classifying fish by type
  – Using only lightness we would be correct ~80% of the time

Page 10: Remote Sensing Part 4 Classification & Vegetation Indices.

• Classification #3

– Use the width & lightness (i.e., color) of the fish as possible features for discrimination

Page 11: Remote Sensing Part 4 Classification & Vegetation Indices.

• Fish lightness AND fish width together do a very good job of classifying fish by type
  – Using lightness AND width we would be correct ~90% of the time (a minimal classifier sketch follows below)
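
To make the two-feature idea concrete, here is a minimal nearest-centroid sketch in Python. All feature values, class centroids, and the test fish are invented for illustration (the lecture’s fish data are hypothetical anyway); this is just one simple way such a classifier could work.

```python
import numpy as np

# Hypothetical training samples: [lightness, width] per fish (invented values)
salmon   = np.array([[2.1, 11.0], [2.4, 12.5], [2.0, 10.8], [2.6, 12.0]])
sea_bass = np.array([[5.8, 15.5], [6.1, 16.2], [5.5, 14.9], [6.4, 15.8]])

# "Train": compute the centroid (mean feature vector) of each class
centroids = {
    "salmon":   salmon.mean(axis=0),
    "sea bass": sea_bass.mean(axis=0),
}

def classify(fish_features):
    """Assign a fish to the class whose centroid is nearest in feature space."""
    fish = np.asarray(fish_features, dtype=float)
    return min(centroids, key=lambda label: np.linalg.norm(fish - centroids[label]))

# A new fish with lightness 2.3 and width 11.7 falls nearest the salmon centroid
print(classify([2.3, 11.7]))   # -> salmon
```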

Page 12: Remote Sensing Part 4 Classification & Vegetation Indices.

How does this relate to remote sensing?

• Instead of fish types, we are typically interested in land cover
  – For example: forests, crops, urban areas

• Instead of fish characteristics, we have reflectance in the spectral bands collected by the sensor
  – For example: Landsat TM bands 1-6 instead of fish length, width, lightness, etc.

Page 13: Remote Sensing Part 4 Classification & Vegetation Indices.
Page 14: Remote Sensing Part 4 Classification & Vegetation Indices.
Page 15: Remote Sensing Part 4 Classification & Vegetation Indices.

Imagery Classification

• Two main types of classification
  – Unsupervised
    • Classes based on statistics inherent in the remotely sensed data itself
    • Classes do not necessarily correspond to real world land cover types
  – Supervised
    • A classification algorithm is “trained” using ground truth data
    • Classes correspond to real world land cover types determined by the user

Page 16: Remote Sensing Part 4 Classification & Vegetation Indices.

Notes

• For ease of display the following examples show just 2 bands:
  – one band on the X-axis
  – one band on the Y-axis

• In reality computers use all bands when doing classifications

• These types of graphs are often called feature space

• The points displayed on the graphs relate to pixels from an image

• The term cloud sometimes refers to the amorphous blob(s) of pixels in the feature space

Page 17: Remote Sensing Part 4 Classification & Vegetation Indices.

Unsupervised Classification

• Classes are created based on the locations of the pixel data in feature space

[Figure: pixel data plotted in feature space of red BVs vs. infrared BVs (0-255)]

Page 18: Remote Sensing Part 4 Classification & Vegetation Indices.

Unsupervised Classification: A Computer Algorithm Finds Clusters

[Figure: clusters of pixels identified in feature space of red BVs vs. infrared BVs (0-255)]
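
One common clustering algorithm for this step is k-means. The sketch below runs k-means on synthetic two-band pixel values with scikit-learn; the band values, the number of clusters, and the choice of k-means itself are illustrative assumptions, not necessarily what produced the lecture figures.

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)

# Synthetic "pixels": columns are red and near-infrared brightness values (0-255)
water  = rng.normal([30, 20],   5, size=(200, 2))   # dark in both bands
soil   = rng.normal([120, 110], 8, size=(200, 2))   # moderately bright in both bands
forest = rng.normal([40, 180], 10, size=(200, 2))   # dark in red, bright in NIR
pixels = np.clip(np.vstack([water, soil, forest]), 0, 255)

# Unsupervised step: ask for 3 clusters based only on the structure of the data itself
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(pixels)

print(kmeans.cluster_centers_)   # cluster centers in (red BV, NIR BV) feature space
print(kmeans.labels_[:10])       # cluster id assigned to each of the first 10 pixels
```

The attribution phase described on the next slide is then a human deciding which cluster corresponds to water, soil, forest, and so on.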

Page 19: Remote Sensing Part 4 Classification & Vegetation Indices.

Unsupervised Classification

• Attribution phase – performed by a human (e.g., labeling the clusters as water, soil, agriculture, forest)

[Figure: clusters in feature space of red BVs vs. infrared BVs (0-255), labeled water, soil, agriculture, and forest]

Page 20: Remote Sensing Part 4 Classification & Vegetation Indices.

Problems with Unsupervised Classification

• The computer may consider 2 distinct clusters (e.g., forest and agriculture) as one cluster
• The computer may consider a single cluster (e.g., soil) to be 2 clusters

[Figure: feature space of red BVs vs. infrared BVs (0-255) illustrating merged and split clusters]

Page 21: Remote Sensing Part 4 Classification & Vegetation Indices.

Supervised Classification

• We “train” the computer program using ground truth data

• I.e., we tell the computer what our classes (e.g., trees, soil, agriculture, etc.) “look like”

[Figure: example ground truth classes such as coniferous trees and deciduous trees]

Page 22: Remote Sensing Part 4 Classification & Vegetation Indices.

Supervised Classification

[Figure: feature space of red BVs vs. infrared BVs (0-255) showing sample (training) pixels and other, unclassified pixels]

Page 23: Remote Sensing Part 4 Classification & Vegetation Indices.

Supervised Classification

• No attribution phase necessary because we define the classes (e.g., water, soil, agriculture, forest) beforehand

[Figure: classified regions in feature space of red BVs vs. infrared BVs (0-255)]

Page 24: Remote Sensing Part 4 Classification & Vegetation Indices.

Problems with Supervised Classification

[Figure: feature space of red BVs vs. infrared BVs (0-255) with forest, agriculture, water, and soil classes and an unlabeled pixel cloud marked “What’s this?”; some pixels fit none of the predefined classes]

Page 25: Remote Sensing Part 4 Classification & Vegetation Indices.

What is the computer actually doing?

• This classification generates statistics for the center, the size, and the shape of the sample pixel clouds

• The computer will then classify all the rest of the pixels in the image using these statistical values
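
A minimal sketch of that idea, assuming a Gaussian model per class (the mean gives the cloud’s center, the covariance matrix its size and shape), is shown below. This is one common way such a classifier can be implemented (similar in spirit to maximum likelihood classification), not necessarily the exact algorithm used in the lecture; the class names and sample values are invented.

```python
import numpy as np
from scipy.stats import multivariate_normal

def train(training_pixels):
    """training_pixels: {class_name: (n_samples, n_bands) array of sample pixels}.
    Store each class's center (mean vector) and size/shape (covariance matrix)."""
    return {
        name: (samples.mean(axis=0), np.cov(samples, rowvar=False))
        for name, samples in training_pixels.items()
    }

def classify(pixel, model):
    """Assign the pixel to the class whose Gaussian gives it the highest likelihood."""
    return max(
        model,
        key=lambda name: multivariate_normal.pdf(pixel, mean=model[name][0], cov=model[name][1]),
    )

# Toy example with two bands (red BV, NIR BV) and invented training samples
rng = np.random.default_rng(1)
model = train({
    "water":  rng.normal([30, 20],   5, size=(50, 2)),
    "forest": rng.normal([40, 180], 10, size=(50, 2)),
})
print(classify([35, 175], model))   # -> forest
```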

Page 26: Remote Sensing Part 4 Classification & Vegetation Indices.

Example: Remote Sensing of Clouds

Page 27: Remote Sensing Part 4 Classification & Vegetation Indices.

Supervised Classification: Training Samples

• Users survey (using GPS) areas of “pure” land cover for all possible land cover types in an image

• OR

• Users “heads-up” digitize “pure” areas using expert knowledge and/or higher spatial resolution imagery

• The rest of the image is classified based on the spectral characteristics of the training sites

Page 28: Remote Sensing Part 4 Classification & Vegetation Indices.

Classification of Nang Rong imagery

[Figure: Landsat MSS, TM, and ETM image classification results for (a) Nov 1979, (b) Nov 1992, and (c) Nov 2001; classes: upland agriculture, forest, rice, water, built-up]

Page 29: Remote Sensing Part 4 Classification & Vegetation Indices.

Land Use/Cover Change in Nang Rong, Thailand

[Figure: land use/cover maps for 1954 and 1994]

Page 30: Remote Sensing Part 4 Classification & Vegetation Indices.

Example Classification Results (Bangkok, Thailand)

Page 31: Remote Sensing Part 4 Classification & Vegetation Indices.

Accuracy Assessments

• After classifying an image we want to know how well the classification worked

• To find out we must conduct an accuracy assessment

Page 32: Remote Sensing Part 4 Classification & Vegetation Indices.

How are accuracy assessments done?

• Basically we need to compare the classification results with real land cover

• As with training data, the real land cover data can be field data (best) or samples from higher spatial resolution imagery (easier)

• What points should we use for the accuracy assessment?
  – Possible options (there are others):
    • Random points
    • Stratified random points (each class represented with an equal number of points); a small sampling sketch follows below
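
Here is a minimal sketch of drawing stratified random points from a classified map, assuming the map is simply a 2-D array of class codes; the class codes, map size, and sample size are invented for illustration.

```python
import numpy as np

def stratified_random_points(class_map, points_per_class, seed=0):
    """Pick an equal number of random (row, col) locations from each class."""
    rng = np.random.default_rng(seed)
    points = {}
    for class_value in np.unique(class_map):
        rows, cols = np.nonzero(class_map == class_value)
        chosen = rng.choice(len(rows), size=min(points_per_class, len(rows)), replace=False)
        points[class_value] = list(zip(rows[chosen], cols[chosen]))
    return points

# Toy classified map: 0 = water, 1 = forest, 2 = agriculture
class_map = np.random.default_rng(0).integers(0, 3, size=(100, 100))
samples = stratified_random_points(class_map, points_per_class=10)
print({cls: len(pts) for cls, pts in samples.items()})   # 10 points drawn per class
```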

Page 33: Remote Sensing Part 4 Classification & Vegetation Indices.

Classification Challenges

• What problem might occur when gathering points for an accuracy assessment (and to a lesser extent, training areas)?

• Can we use the same points for the accuracy assessment that we used to train the classification?

Page 34: Remote Sensing Part 4 Classification & Vegetation Indices.

Ikonos Imagery: Glacier National Park

Page 35: Remote Sensing Part 4 Classification & Vegetation Indices.

Classification Results

Page 36: Remote Sensing Part 4 Classification & Vegetation Indices.

Accuracy Assessment Table

• Rows are the reference data, columns are the classified data

• Values on the diagonal are correctly classified

• The values in red are the producer’s accuracy for each class
  – Related to errors of omission
  – E.g., “how many pixels that ARE water (13) are classified AS water (12)”

• The values in blue are the user’s accuracy for each class
  – Related to errors of commission
  – E.g., “how many pixels classified AS water (14) ARE water (12)”

• Overall accuracy = # of correctly classified pixels / total # of pixels

• The Kappa statistic is basically the overall accuracy adjusted for how many pixels we would expect to correctly classify by chance alone
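
A minimal sketch of these calculations from a confusion matrix is shown below. The 3x3 matrix is invented; only the water counts (13 reference pixels, 14 classified pixels, 12 correct) echo the example above.

```python
import numpy as np

# Hypothetical confusion matrix: rows = reference (truth), columns = classified.
# Classes, in order: water, forest, soil.
cm = np.array([
    [12,  1,  0],   # reference water (13 pixels, 12 classified correctly)
    [ 2, 20,  3],   # reference forest
    [ 0,  4, 18],   # reference soil
])

total = cm.sum()
correct = np.trace(cm)                                 # values on the diagonal

producers_accuracy = cm.diagonal() / cm.sum(axis=1)    # per class: correct / reference total
users_accuracy     = cm.diagonal() / cm.sum(axis=0)    # per class: correct / classified total
overall_accuracy   = correct / total

# Kappa: overall accuracy adjusted for the agreement expected by chance alone
chance_agreement = (cm.sum(axis=1) * cm.sum(axis=0)).sum() / total**2
kappa = (overall_accuracy - chance_agreement) / (1 - chance_agreement)

print(producers_accuracy)         # water ~0.92 (12 of 13 reference water pixels)
print(users_accuracy)             # water ~0.86 (12 of 14 pixels classified as water)
print(overall_accuracy, kappa)    # ~0.83 overall, kappa ~0.74
```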

Page 37: Remote Sensing Part 4 Classification & Vegetation Indices.

Vegetation Indices

Page 38: Remote Sensing Part 4 Classification & Vegetation Indices.

Vegetation Indices

• Normalized Difference Vegetation Index (NDVI)

• Takes advantage of the “red edge” of vegetation reflectance that occurs between the red and near-infrared (NIR) wavelengths

• NDVI = (NIR – Red) / (NIR + Red)

• Many more indices with many variants exist (lots of acronyms like SAVI, etc.)
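
A minimal NumPy sketch of the NDVI calculation is given below. The band arrays are toy values; in practice the red and NIR bands would be read from the imagery itself (e.g., with a library such as rasterio or GDAL), which is assumed rather than shown here.

```python
import numpy as np

def ndvi(nir, red):
    """NDVI = (NIR - Red) / (NIR + Red), computed per pixel.

    nir, red: arrays of reflectance (or brightness values) for the same scene.
    Pixels where NIR + Red == 0 are returned as 0 to avoid division by zero.
    """
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    denom = nir + red
    safe_denom = np.where(denom == 0, 1, denom)        # avoid dividing by zero
    return np.where(denom == 0, 0.0, (nir - red) / safe_denom)

# Toy 2x2 bands: bright NIR with dark red (healthy vegetation) gives NDVI near +1
nir = np.array([[0.50, 0.45], [0.05, 0.30]])
red = np.array([[0.05, 0.08], [0.04, 0.30]])
print(ndvi(nir, red))   # values fall in [-1, 1]; the vegetated pixels are ~0.7-0.8
```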

Page 39: Remote Sensing Part 4 Classification & Vegetation Indices.

Normalized Difference Vegetation Index (NDVI)

NDVI = (NIR – Red) / (NIR + Red)

NDVI range: [-1.0, 1.0]

Often, the more leaf area (vegetation) present, the greater the contrast between red and near-infrared reflectance

NDVI most accurately approximates the Fraction of Absorbed Photosynthetically Active Radiation (FAPAR)

Page 40: Remote Sensing Part 4 Classification & Vegetation Indices.

NDVI from AVHRR

[Figure: AVHRR NDVI composites for Feb 27-Mar 12, Apr 24-May 7, Jun 19-Jul 2, Jul 17-Jul 30, Aug 14-Aug 27, and Nov 6-Nov 19]

Page 41: Remote Sensing Part 4 Classification & Vegetation Indices.

NDVI and Precipitation Relationships

Expansion and contraction of the Sahara

A: 12 Apr-2 May 1982; B: 5-25 Jul 1982; C: 22 Sep-17 Oct 1982; D: 10 Dec 1982-9 Jan 1983

Page 42: Remote Sensing Part 4 Classification & Vegetation Indices.

Monitoring Forest Fires

[Figure: pre-forest-fire and post-forest-fire imagery; burned area identified from space using NDVI]