REMOTE SENSING AND GIS LECTURES



University of Anbar

College of Engineering

Department of Dams and Water Resources

REMOTE SENSING AND GIS LECTURES

BY Dr. KHAMIS NABA SAYL

Fourth stage


List of Contents

Chapter one Concepts and Foundations of Remote Sensing

1.1 Introduction

1.2 Energy Sources and Radiation Principles

1.3 Energy Interactions in the Atmosphere

1.4 Energy Interactions with Earth Surface Features

1.5 Data Acquisition and Digital Image Concepts

1.6 Reference Data

1.7 Characteristics of Remote Sensing Systems

1.8 Successful Application of Remote Sensing

1.9 Geographic Information Systems (GIS)

1.10 Spatial Data Frameworks for GIS and Remote Sensing

1.11 Visual Image Interpretation

Chapter two Elements of Photographic Systems

2.1 Introduction

2.2 Film Photography

2.3 Digital Photography

2.4 Aerial Cameras

2.5 Spatial Resolution of Camera Systems

2.7 Aerial Videography

2.9 Conclusion

Chapter three Basic Principles of Photogrammetry

3.1 Introduction

3.2 Basic Geometric Characteristics of Aerial Photographs

3.3 Photographic Scale

3.4 Ground Coverage of Aerial Photographs

3.5 Area Measurement

3.6 Relief Displacement of Vertical Features

3.7 Image Parallax

3.8 Ground Control for Aerial Photography

3.9 Determining the Elements of Exterior Orientation of Aerial Photographs

3.10 Production of Mapping Products from Aerial Photographs

3.11 Flight Planning

3.12 Conclusion

Chapter four Earth Resource Satellites Operating in the Optical Spectrum

4.1 Introduction

4.2 General Characteristics of Satellite Remote Sensing Systems Operating in the Optical Spectrum

4.3 Moderate Resolution Systems

4.4 Landsat-1 to -7

4.5 Landsat-8

4.6 Future Landsat Missions and the Global Earth Observation System of Systems

4.7 SPOT-1 to -5

4.8 SPOT-6 and -7

4.9 Evolution of Other Moderate Resolution Systems

Chapter five Microwave and Lidar Sensing

Chapter six Digital Image Analysis

Chapter seven APPLICATIONS OF REMOTE SENSING


Lecture One

CONCEPTS AND FOUNDATIONS OF REMOTE SENSING


Chapter one

CONCEPTS AND FOUNDATIONS OF REMOTE SENSING

1.1 INTRODUCTION

Remote sensing is the science and art of obtaining information about an object, area, or phenomenon

through the analysis of data acquired by a device that is not in contact with the object, area, or

phenomenon under investigation. As you read these words, you are employing remote sensing. Your eyes

are acting as sensors that respond to the light reflected from this page. The "data" your eyes acquire are

impulses corresponding to the amount of light reflected from the dark and light areas on the page. These

data are analyzed, or interpreted, in your mental computer to enable you to explain the dark areas on the

page as a collection of letters forming words. Beyond this, you recognize that the words form sentences,

and you interpret the information that the sentences convey. In many respects, remote sensing can be

thought of as a reading process. Using various sensors, we remotely collect data that may be analyzed to

obtain information about the objects, areas, or phenomena being investigated. The remotely collected data

can be of many forms, including variations in force distributions, acoustic wave distributions, or

electromagnetic energy distributions.

Overview of the Electromagnetic Remote Sensing Process

This lecture is about electromagnetic energy sensors that are operated from airborne and spaceborne

platforms to assist in inventorying, mapping, and monitoring earth resources. These sensors acquire data

on the way various earth surface features emit and reflect electromagnetic energy, and these data are

analyzed to provide information about the resources under investigation. Figure 1.1 schematically

illustrates the generalized processes and elements involved in electromagnetic remote sensing of earth

resources. The two basic processes involved are data acquisition and data analysis. The elements of the

data acquisition process are energy sources (a), propagation of energy through the atmosphere (b), energy

interactions with earth surface features (c), retransmission of energy through the atmosphere (d), airborne

and/or spaceborne sensors (e), resulting in the generation of sensor data in pictorial and/or digital form (f).

In short, we use sensors to record variations in the way earth surface features reflect and emit

electromagnetic energy. The data analysis process (g) involves examining the data using various viewing

and interpretation devices to analyze pictorial data and/or a computer to analyze digital sensor data.

Reference data about the resources being studied (such as soil maps, crop statistics, or field-check data)

are used when and where available to assist in the data analysis. With the aid of the reference data, the

analyst extracts information about the type, extent, location, and condition of the various resources over

which the sensor data were collected. This information is then compiled (h), generally in the form of

maps, tables, or digital spatial data that can be merged with other "layers" of information in a geographic

information system (GIS). Finally, the information is presented to users (i), who apply it to their decision-

making process.


1.2 ENERGY SOURCES AND RADIATION PRINCIPLES

Visible light is only one of many forms of electromagnetic energy. Radio waves, ultraviolet rays, radiant

heat, and X-rays are other familiar forms. All this energy is inherently similar and propagates in

accordance with basic wave theory. As shown in Figure 1.2, this theory describes electromagnetic energy

as traveling in a harmonic, sinusoidal fashion at the "velocity of light," c. The distance from one wave peak to the next is the wavelength λ, and the number of peaks passing a fixed point in space per unit time is the wave frequency ν. From basic physics, waves obey the general equation

c = νλ (1.1)

Because c is essentially a constant (3 × 10⁸ m/sec), frequency ν and wavelength λ for any given wave are related inversely, and either term can be used to characterize a wave. In remote sensing, it is most common to categorize electromagnetic waves by their wavelength location within the electromagnetic spectrum (Figure 1.3). The most prevalent unit used to measure wavelength along the spectrum is the micrometer (μm). A micrometer equals 1 × 10⁻⁶ m.
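As a quick numeric illustration of this inverse relationship, the short Python sketch below (the function name and the example wavelength are mine, chosen only for illustration) converts a wavelength given in micrometers to the corresponding frequency.

```python
# Illustrative conversion between wavelength and frequency using c = nu * lambda.
C = 3.0e8  # speed of light, m/sec

def frequency_hz(wavelength_um):
    """Frequency (Hz) corresponding to a wavelength given in micrometers."""
    wavelength_m = wavelength_um * 1.0e-6  # micrometers to meters
    return C / wavelength_m

# Green light near the middle of the visible spectrum.
print(f"0.55 um -> {frequency_hz(0.55):.2e} Hz")
```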

Although names (such as "ultraviolet" and "microwave") are generally assigned to regions of the

electromagnetic spectrum for convenience, there is no clear-cut dividing line between one nominal

spectral region and the next. Divisions of the spectrum have grown from the various methods for sensing

each type of radiation more so than from inherent differences in the energy characteristics of various

wavelengths. Also, it should be noted that the portions of the electromagnetic spectrum used in remote

sensing lie along a continuum characterized by magnitude changes of many powers of 10. Hence, the use

of logarithmic plots to depict the electromagnetic spectrum is quite common. The "visible" portion of such a plot is an extremely small one, because the spectral sensitivity of the human eye extends only from about 0.4 μm to approximately 0.7 μm. The color "blue" is ascribed to the approximate range of 0.4 to 0.5 μm, "green" to 0.5 to 0.6 μm, and "red" to 0.6 to 0.7 μm. Ultraviolet (UV) energy adjoins the blue end of the visible portion of the spectrum. Beyond the red end of the visible region are three different categories of infrared (IR) waves: near IR (from 0.7 to 1.3 μm), mid IR (from 1.3 to 3 μm; also referred to as


shortwave IR or SWIR), and thermal IR (beyond 3 to 14 μm, sometimes referred to as longwave IR). At

much longer wavelengths (1 mm to 1 m) is the microwave portion of the spectrum.

Most common sensing systems operate in one or several of the visible, IR, or microwave portions of the

spectrum. Within the IR portion of the spectrum, it should be noted that only thermal-IR energy is directly

related to the sensation of heat; near- and mid-IR energy are not.


Although many characteristics of electromagnetic radiation are most easily described by wave theory,

another theory offers useful insights into how electromagnetic energy interacts with matter. This theory—

the particle theory—suggests that electromagnetic radiation is composed of many discrete units called

photons or quanta. The energy of a quantum is given as
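Q = hν = hc/λ

where Q is the energy of a quantum (J), h is Planck's constant (6.626 × 10⁻³⁴ J sec), ν is frequency, c is the speed of light, and λ is wavelength.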

Thus, we see that the energy of a quantum is inversely proportional to its wavelength. The longer the

wavelength involved, the lower its energy content. This has important implications in remote sensing

from the standpoint that naturally emitted long wavelength radiation, such as microwave emission from

terrain features, is more difficult to sense than radiation of shorter wavelengths, such as emitted thermal

IR energy. The low energy content of long wavelength radiation means that, in general, systems operating

at long wavelengths must "view" large areas of the earth at any given time in order to obtain a detectable

energy signal. The sun is the most obvious source of electromagnetic radiation for remote sensing.

However, all matter at temperatures above absolute zero (0 K, or −273°C) continuously emits

electromagnetic radiation. Thus, terrestrial objects are also sources of radiation, although it is of

considerably different magnitude and spectral composition than that of the sun. How much energy any

object radiates is, among other things, a function of the surface temperature of the object. This property is

expressed by the Stefan–Boltzmann law, which states that
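M = σT⁴ (1.4)

where M is the total radiant exitance from the surface of the material (W m⁻²), σ is the Stefan–Boltzmann constant (5.6697 × 10⁻⁸ W m⁻² K⁻⁴), and T is the absolute temperature (K) of the emitting material.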

The particular units and the value of the constant are not critical for the student to remember, yet it is

important to note that the total energy emitted from an object varies as T⁴ and therefore increases very

rapidly with increases in temperature. Also, it should be noted that this law is expressed for an energy

source that behaves as a blackbody. A blackbody is a hypothetical, ideal radiator that totally absorbs and

reemits all energy incident upon it. Actual objects only approach this ideal. Suffice it to say for now that

the energy emitted from an object is primarily a function of its temperature, as given by Eq. 1.4.

Just as the total energy emitted by an object varies with temperature, the spectral distribution of the

emitted energy also varies. Figure 1.4 shows energy distribution curves for blackbodies at temperatures

ranging from 200 to 6000 K. The units on the ordinate scale (W m⁻² μm⁻¹) express the radiant power coming


from a blackbody per 1-μm spectral interval. Hence, the area under these curves equals the total radiant

exitance, M, and the curves illustrate graphically what the Stefan–Boltzmann law expresses

mathematically: The higher the temperature of the radiator, the greater the total amount of radiation it

emits. The curves also show that there is a shift toward shorter wavelengths in the peak of a blackbody

radiation distribution as temperature increases. The dominant wavelength, or wavelength at which a

blackbody radiation curve reaches a maximum, is related to its temperature by Wien’s displacement law,
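λm = A/T

where λm is the wavelength of maximum spectral radiant exitance (μm), A is a constant equal to 2898 μm K, and T is the absolute temperature (K).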

Thus, for a blackbody, the wavelength at which the maximum spectral radiant exitance occurs varies

inversely with the blackbody’s absolute temperature. We observe this phenomenon when a metal body

such as a piece of iron is heated. As the object becomes progressively hotter, it begins to glow and its

color changes successively to shorter wavelengths—from dull red to orange to yellow and eventually to

white. The sun emits radiation in the same manner as a blackbody radiator whose temperature is about

6000 K (Figure 1.4). Many incandescent lamps emit radiation typified by a 3000 K blackbody radiation

curve. Consequently, incandescent lamps have a relatively low output of blue energy, and they do not

have the same spectral constituency as sunlight.
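As a quick numeric check of these two laws, the Python sketch below (the constants are standard values; the two temperatures are the approximate sun and earth-surface temperatures discussed in the text) computes the total radiant exitance and the peak emission wavelength for each blackbody.

```python
# Illustrative sketch of the Stefan-Boltzmann law (M = sigma * T^4) and
# Wien's displacement law (lambda_max = A / T).
SIGMA = 5.6697e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
A = 2898.0          # Wien's displacement constant, um K

def radiant_exitance(temp_k):
    """Total energy emitted per unit surface area (W m^-2) by a blackbody."""
    return SIGMA * temp_k ** 4

def peak_wavelength_um(temp_k):
    """Wavelength (um) of maximum spectral radiant exitance for a blackbody."""
    return A / temp_k

for name, temp in [("sun, ~6000 K", 6000.0), ("earth surface, ~300 K", 300.0)]:
    print(f"{name}: M = {radiant_exitance(temp):.3e} W/m^2, "
          f"peak near {peak_wavelength_um(temp):.2f} um")
```

The peaks work out to roughly 0.48 μm for the sun and 9.7 μm for the earth, which is the basis for the visible and thermal-infrared observations discussed in the following paragraph.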


The earth’s ambient temperature (i.e., the temperature of surface materials such as soil, water, and

vegetation) is about 300 K (27°C). From Wien’s displacement law, this means the maximum spectral

radiant exitance from earth features occurs at a wavelength of about 9.7 μm. Because this radiation correlates with terrestrial heat, it is termed "thermal infrared" energy. This energy can neither be seen nor photographed, but it can be sensed with such thermal devices as radiometers and scanners. By comparison, the sun has a much higher energy peak that occurs at about 0.5 μm, as indicated in Figure 1.4.

Our eyes—and photographic sensors—are sensitive to energy of this magnitude and wavelength. Thus,

when the sun is present, we can observe earth features by virtue of reflected solar energy. Once again, the

longer wavelength energy emitted by ambient earth features can be observed only with a nonphotographic

sensing system. The general dividing line between reflected and emitted IR wavelengths is approximately

3 μm. Below this wavelength, reflected energy predominates; above it, emitted energy prevails. Certain

sensors, such as radar systems, supply their own source of energy to illuminate features of interest. These

systems are termed "active" systems, in contrast to "passive" systems that sense naturally available

energy. A very common example of an active system is a camera utilizing a flash. The same camera used

in sunlight becomes a passive sensor.


Lecture Two

CONCEPTS AND FOUNDATIONS OF REMOTE SENSING


1.3 ENERGY INTERACTIONS IN THE ATMOSPHERE

Irrespective of its source, all radiation detected by remote sensors passes through some distance, or path

length, of atmosphere. The path length involved can vary widely. For example, space photography results

from sunlight that passes through the full thickness of the earth’s atmosphere twice on its journey from

source to sensor. On the other hand, an airborne thermal sensor detects energy emitted directly from

objects on the earth, so a single, relatively short atmospheric path length is involved. The net effect of the

atmosphere varies with these differences in path length and also varies with the magnitude of the energy

signal being sensed, the atmospheric conditions present, and the wavelengths involved. Here, we merely

wish to introduce the notion that the atmosphere can have a profound effect on, among other things, the

intensity and spectral composition of radiation available to any sensing system. These effects are caused

principally through the mechanisms of atmospheric scattering and absorption.

Scattering

Atmospheric scattering is the unpredictable diffusion of radiation by particles in the atmosphere. Rayleigh

scatter is common when radiation interacts with atmospheric molecules and other tiny particles that are

much smaller in diameter than the wavelength of the interacting radiation. The effect of Rayleigh scatter

is inversely proportional to the fourth power of wavelength. Hence, there is a much stronger tendency for

short wavelengths to be scattered by this mechanism than long wavelengths.
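Because the effect falls off as the fourth power of wavelength, the relative strength of Rayleigh scatter at two wavelengths is easy to compare; the purely illustrative Python sketch below contrasts blue light at 0.4 μm with red light at 0.7 μm.

```python
# Relative Rayleigh scattering strength, proportional to 1 / wavelength^4.
def rayleigh_relative_strength(wavelength_um):
    return 1.0 / wavelength_um ** 4

blue_um, red_um = 0.4, 0.7
ratio = rayleigh_relative_strength(blue_um) / rayleigh_relative_strength(red_um)
print(f"Blue (0.4 um) is scattered about {ratio:.1f} times more than red (0.7 um).")
```

The ratio is roughly 9, which is why the clear sky looks blue and why haze affects the short-wavelength bands of an image most strongly.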

A "blue" sky is an indicator of Rayleigh scatter. In the absence of scatter, the sky would appear black.

But, as sunlight interacts with the earth’s atmosphere, it scatters the shorter (blue) wavelengths more

dominantly than the other visible wavelengths. Consequently, we see a blue sky. At sunrise and sunset,

however, the sun’s rays travel through a longer atmospheric path length than during midday. With the

longer path, the scatter (and absorption) of short wavelengths is so complete that we see only the less

scattered, longer wavelengths of orange and red. Rayleigh scatter is one of the primary causes of "haze" in imagery. Visually, haze diminishes the "crispness," or "contrast," of an image. In color photography, it

results in a bluish-gray cast to an image, particularly when taken from high altitude. Haze can often be

eliminated or at least minimized by introducing, in front of the camera lens, a filter that does not transmit

short wavelengths.

Another type of scatter is Mie scatter, which exists when atmospheric particle diameters essentially equal

the wavelengths of the energy being sensed. Water vapor and dust are major causes of Mie scatter. This

type of scatter tends to influence longer wavelengths compared to Rayleigh scatter. Although Rayleigh

scatter tends to dominate under most atmospheric conditions, Mie scatter is significant in slightly overcast

ones.

A more bothersome phenomenon is nonselective scatter, which comes about when the diameters of the

particles causing scatter are much larger than the wavelengths of the energy being sensed. Water droplets,

for example, cause such scatter. They commonly have a diameter in the range 5 to 100 μm and scatter all visible and near- to mid-IR wavelengths about equally. Consequently, this scattering is "nonselective"

with respect to wavelength. In the visible wavelengths, equal quantities of blue, green, and red light are

scattered; hence fog and clouds appear white.

Absorption

In contrast to scatter, atmospheric absorption results in the effective loss of energy to atmospheric

constituents. This normally involves absorption of energy at a given wavelength. The most efficient


absorbers of solar radiation in this regard are water vapor, carbon dioxide, and ozone. Because these gases

tend to absorb electromagnetic energy in specific wavelength bands, they strongly influence the design of

any remote sensing system. The wavelength ranges in which the atmosphere is particularly transmissive

of energy are referred to as atmospheric windows. Figure 1.5 shows the interrelationship between energy

sources and atmospheric absorption characteristics. Figure 1.5a shows the spectral distribution of the

energy emitted by the sun and by earth features. These two curves represent the most common sources of

energy used in remote sensing. In Figure 1.5b, spectral regions in which the atmosphere blocks energy are

shaded. Remote sensing data acquisition is limited to the nonblocked spectral regions, the atmospheric

windows. Note in Figure 1.5c that the spectral sensitivity range of the eye (the "visible" range) coincides with both an atmospheric window and the peak level of energy from the sun. Emitted "heat" energy from the earth, shown by the small curve in (a), is sensed through the windows at 3 to 5 μm and 8 to 14 μm

using such devices as thermal sensors. Multispectral sensors observe simultaneously through multiple,

narrow wavelength ranges that can be located at various points in the visible through the thermal spectral

region. Radar and passive microwave systems operate through a window in the region 1 mm to 1 m. The

important point to note from Figure 1.5 is the interaction and the interdependence between the primary

sources of electromagnetic energy, the atmospheric windows through which source energy may be

transmitted to and from earth surface features, and the spectral sensitivity of the sensors available to

detect and record the energy. One cannot select the sensor to be used in any given remote sensing task

arbitrarily; one must instead consider (1) the spectral sensitivity of the sensors available, (2) the presence

or absence of atmospheric windows in the spectral range(s) in which one wishes to sense, and (3) the

source, magnitude, and spectral composition of the energy available in these ranges. Ultimately, however,

the choice of spectral range of the sensor must be based on the manner in which the energy interacts with

the features under investigation. It is to this last, very important, element that we now turn our attention.


Lecture Three

CONCEPTS AND FOUNDATIONS OF REMOTE SENSING


1.4 ENERGY INTERACTIONS WITH EARTH SURFACE FEATURES

When electromagnetic energy is incident on any given earth surface feature, three fundamental energy

interactions with the feature are possible. These are illustrated in Figure 1.6 for an element of the volume

of a water body. Various fractions of the energy incident on the element are reflected, absorbed, and/or

transmitted. Applying the principle of conservation of energy, we can state the interrelationship among

these three energy interactions as
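E_I(λ) = E_R(λ) + E_A(λ) + E_T(λ) (1.6)

where E_I is the incident energy, E_R the reflected energy, E_A the absorbed energy, and E_T the transmitted energy,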

with all energy components being a function of wavelength λ.

Equation 1.6 is an energy balance equation expressing the interrelationship among the mechanisms of

reflection, absorption, and transmission. Two points concerning this relationship should be noted. First,

the proportions of energy reflected, absorbed, and transmitted will vary for different earth features,

depending on their material type and condition. These differences permit us to distinguish different

features on an image. Second, the wavelength dependency means that, even within a given feature type,

the proportion of reflected, absorbed, and transmitted energy will vary at different wavelengths. Thus, two

features may be indistinguishable in one spectral range and be very different in another wavelength band.

Within the visible portion of the spectrum, these spectral variations result in the visual effect called color.

For example, we call objects "blue" when they reflect more highly in the blue portion of the spectrum, "green" when they reflect more highly in the green spectral region, and so on. Thus, the eye utilizes

spectral variations in the magnitude of reflected energy to discriminate between various objects. Because

many remote sensing systems operate in the wavelength regions in which reflected energy predominates,

the reflectance properties of earth features are very important. Hence, it is often useful to think of the

energy balance relationship expressed by Eq. 1.6 in the form
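E_R(λ) = E_I(λ) − [E_A(λ) + E_T(λ)]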


That is, the reflected energy is equal to the energy incident on a given feature reduced by the energy that

is either absorbed or transmitted by that feature. The reflectance characteristics of earth surface features

may be quantified by measuring the portion of incident energy that is reflected. This is measured as a

function of wavelength and is called spectral reflectance, ρλ. It is mathematically defined as
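ρλ = E_R(λ) / E_I(λ) × 100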

where ρλ is expressed as a percentage.
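The sketch below applies this definition in Python to a handful of wavelength bands; the incident and reflected energy values are invented numbers chosen only so that the resulting curve loosely resembles a green-vegetation response.

```python
# Illustrative spectral reflectance: for each wavelength (um), reflectance (%)
# is reflected energy divided by incident energy, times 100.
incident = {0.45: 180.0, 0.55: 200.0, 0.65: 190.0, 0.85: 160.0}  # hypothetical E_I
reflected = {0.45: 9.0, 0.55: 24.0, 0.65: 10.0, 0.85: 80.0}      # hypothetical E_R

reflectance_curve = {wl: 100.0 * reflected[wl] / incident[wl] for wl in sorted(incident)}
for wl, rho in reflectance_curve.items():
    print(f"{wl:.2f} um: {rho:.1f}%")
```

Plotting such values against wavelength produces exactly the kind of spectral reflectance curve described next.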

A graph of the spectral reflectance of an object as a function of wavelength is termed a spectral

reflectance curve. The configuration of spectral reflectance curves gives us insight into the spectral

characteristics of an object and has a strong influence on the choice of wavelength region(s) in which

remote sensing data are acquired for a particular application. This is illustrated in Figure 1.7, which shows

highly generalized spectral reflectance curves for deciduous versus coniferous trees. Note that the curve

for each of these object types is plotted as a "ribbon" (or "envelope") of values, not as a single line. This

is because spectral reflectances vary somewhat within a given material class. That is, the spectral

reflectance of one deciduous tree species and another will never be identical, nor will the spectral

reflectance of trees of the same species be exactly equal. We elaborate upon the variability of spectral

reflectance curves later in this section. In Figure 1.7, assume that you are given the task of selecting an

airborne sensor system to assist in preparing a map of a forested area differentiating deciduous versus

coniferous trees. One choice of sensor might be the human eye. However, there is a potential problem

with this choice. The spectral reflectance curves for each tree type overlap in most of the visible portion

of the spectrum and are very close where they do not overlap. Hence, the eye might see both tree types as

being essentially the same shade of "green" and might confuse the identity of the deciduous and

coniferous trees. Certainly one could improve things somewhat by using spatial clues to each tree type’s

identity, such as size, shape, site, and so forth. However, this is often difficult to do from the air,

particularly when tree types are intermixed. How might we discriminate the two types on the basis of their

spectral characteristics alone? We could do this by using a sensor that records near-IR energy. A

specialized digital camera whose detectors are sensitive to near-IR wavelengths is just such a system, as is

an analog camera loaded with black and white IR film. On near-IR images, deciduous trees (having

higher IR reflectance than conifers) generally appear much lighter in tone than do conifers.

On such an image, the task of delineating deciduous versus coniferous trees becomes almost trivial. In

fact, if we were to use a computer to analyze digital data collected from this type of sensor, we might

"automate" our entire mapping task. Many remote sensing data analysis schemes attempt to do just that.

For these schemes to be successful, the materials to be differentiated must be spectrally separable.

Experience has shown that many earth surface features of interest can be identified, mapped, and studied

on the basis of their spectral characteristics. Experience has also shown that some features of interest

cannot be spectrally separated. Thus, to utilize remote sensing data effectively, one must know and

understand the spectral characteristics of the particular features under investigation in any given

application. Likewise, one must know what factors influence these characteristics.
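As a toy illustration of spectral separability (the reflectance values and the 30% threshold are invented for the example, not measured data), a single near-IR measurement could be used to label forest stands as deciduous or coniferous:

```python
# Toy near-IR separation: deciduous canopies generally reflect more near-IR
# energy than coniferous canopies, so a simple threshold can split the classes
# when they do not overlap spectrally.
NEAR_IR_THRESHOLD_PCT = 30.0  # illustrative value only

samples_pct = {"stand A": 48.0, "stand B": 22.0, "stand C": 41.0}  # hypothetical near-IR reflectance

for stand, nir in samples_pct.items():
    label = "deciduous" if nir > NEAR_IR_THRESHOLD_PCT else "coniferous"
    print(f"{stand}: {nir:.0f}% near-IR reflectance -> {label}")
```

Operational analysis schemes use many bands and statistical classifiers rather than one hand-picked threshold, but the requirement is the same: the classes must be spectrally separable in the bands used.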


Spectral Reflectance of Earth Surface Feature Types.

Figure 1.9 shows typical spectral reflectance curves for many different types of features: healthy green

grass, dry (non-photosynthetically active) grass, bare soil (brown to dark-brown sandy loam), pure

gypsum dune sand, asphalt, construction concrete (Portland cement concrete), fine-grained snow, clouds,

and clear lake water. The lines in this figure represent average reflectance curves compiled by measuring

a large sample of features, or in some cases representative reflectance measurements from a single typical

example of the feature class. Note how distinctive the curves are for each feature. In general, the

configuration of these curves is an indicator of the type and condition of the features to which they apply.

Although the reflectance of individual features can vary considerably above and below the lines shown

here, these curves demonstrate some fundamental points concerning spectral reflectance. For example,

spectral reflectance curves for healthy green vegetation almost always manifest the "peak-and-valley"

configuration illustrated by green grass in Figure 1.9. The valleys in the visible portion of the spectrum

are dictated by the pigments in plant leaves. Chlorophyll, for example, strongly absorbs energy in the

wavelength bands centered at about 0.45 and 0.67 μm (often called the "chlorophyll absorption bands").

Hence, our eyes perceive healthy vegetation as green in color because of the very high absorption of blue

and red energy by plant leaves and the relatively high reflection of green energy. If a plant is subject to

some form of stress that interrupts its normal growth and productivity, it may decrease or cease

chlorophyll production. The result is less chlorophyll absorption in the blue and red bands. Often, the red

reflectance increases to the point that we see the plant turn yellow (combination of green and red). This

can be seen in the spectral curve for dried grass in Figure 1.9.


As we go from the visible to the near-IR portion of the spectrum, the reflectance of healthy vegetation

increases dramatically. This spectral feature, known as the red edge, typically occurs between 0.68 and

0.75 μm, with the exact position depending on the species and condition. Beyond this edge, from about 0.75 to 1.3 μm (representing most of the near-IR range), a plant leaf typically reflects 40 to 50% of the energy incident upon it. Most of the remaining energy is transmitted, because absorption in this spectral region is minimal (less than 5%). Plant reflectance from 0.75 to 1.3 μm results primarily from the internal

structure of plant leaves. Because the position of the red edge and the magnitude of the near-IR

reflectance beyond the red edge are highly variable among plant species, reflectance measurements in

these ranges often permit us to discriminate between species, even if they look the same in visible

wavelengths. Likewise, many plant stresses alter the reflectance in the red edge and the near-IR region,

and sensors operating in these ranges are often used for vegetation stress detection. Also, multiple layers

of leaves in a plant canopy provide the opportunity for multiple transmissions and reflections. Hence, the

near-IR reflectance increases with the number of layers of leaves in a canopy, with the maximum

reflectance achieved at about eight leaf layers (Bauer et al., 1986).

Beyond 1.3 μm, energy incident upon vegetation is essentially absorbed or reflected, with little to no transmittance of energy. Dips in reflectance occur at 1.4, 1.9, and 2.7 μm because water in the leaf absorbs strongly at these wavelengths. Accordingly, wavelengths in these spectral regions are referred to as water absorption bands. Reflectance peaks occur at about 1.6 and 2.2 μm, between the absorption bands. Throughout the range beyond 1.3 μm, leaf reflectance is approximately inversely related to the total water

present in a leaf. This total is a function of both the moisture content and the thickness of a leaf.

The soil curve in Figure 1.9 shows considerably less peak-and-valley variation in reflectance. That is, the

factors that influence soil reflectance act over less specific spectral bands. Some of the factors affecting

soil reflectance are moisture content, organic matter content, soil texture (proportion of sand, silt, and

clay), surface roughness, and presence of iron oxide. These factors are complex, variable, and interrelated.

For example, the presence of moisture in soil will decrease its reflectance. As with vegetation, this effect

is greatest in the water absorption bands at about 1.4, 1.9, and 2.7 μm (clay soils also have hydroxyl absorption bands at about 1.4 and 2.2 μm). Soil moisture content is strongly related to the soil texture:

Coarse, sandy soils are usually well drained, resulting in low moisture content and relatively high

reflectance; poorly drained fine-textured soils will generally have lower reflectance. Thus, the reflectance

properties of a soil are consistent only within particular ranges of conditions. Two other factors that

reduce soil reflectance are surface roughness and content of organic matter. The presence of iron oxide in

a soil will also significantly decrease reflectance, at least in the visible wavelengths. In any case, it is

essential that the analyst be familiar with the conditions at hand. Finally, because soils are essentially

opaque to visible and infrared radiation, it should be noted that soil reflectance comes from the uppermost

layer of the soil and may not be indicative of the properties of the bulk of the soil.

Sand can have wide variation in its spectral reflectance pattern. The curve shown in Figure 1.9 is from a

dune in New Mexico and consists of roughly 99% gypsum with trace amounts of quartz (Jet Propulsion

Laboratory, 1999). Its absorption and reflectance features are essentially identical to those of its parent

material, gypsum. Sand derived from other sources, with differing mineral compositions, would have a

spectral reflectance curve indicative of its parent material. Other factors affecting the spectral response

from sand include the presence or absence of water and of organic matter. Sandy soil is subject to the

same considerations listed in the discussion of soil reflectance.

As shown in Figure 1.9, the spectral reflectance curves for asphalt and Portland cement concrete are much

flatter than those of the materials discussed thus far. Overall, Portland cement concrete tends to be

relatively brighter than asphalt, both in the visible spectrum and at longer wavelengths. It is important to


note that the reflectance of these materials may be modified by the presence of paint, soot, water, or other

substances. Also, as materials age, their spectral reflectance patterns may change. For example, the

reflectance of many types of asphaltic concrete may increase, particularly in the visible spectrum, as their

surface ages.

In general, snow reflects strongly in the visible and near infrared, and absorbs more energy at mid-

infrared wavelengths. However, the reflectance of snow is affected by its grain size, liquid water content,

and presence or absence of other materials in or on the snow surface (Dozier and Painter, 2004). Larger

grains of snow absorb more energy, particularly at wavelengths longer than 0.8 μm. At temperatures near

0°C, liquid water within the snowpack can cause grains to stick together in clusters, thus increasing the

effective grain size and decreasing the reflectance at near-infrared and longer wavelengths. When

particles of contaminants such as dust or soot are deposited on snow, they can significantly reduce the

surface’s reflectance in the visible spectrum.

The aforementioned absorption of mid-infrared wavelengths by snow can permit the differentiation

between snow and clouds. While both feature types appear bright in the visible and near infrared, clouds

have significantly higher reflectance than snow at wavelengths longer than 1.4 μm. Meteorologists can

also use both spectral and bidirectional reflectance patterns (discussed later in this section) to identify a

variety of cloud properties, including ice/water composition and particle size.

Considering the spectral reflectance of water, probably the most distinctive characteristic is the energy

absorption at near-IR wavelengths and beyond. In short, water absorbs energy in these wavelengths

whether we are talking about water features per se (such as lakes and streams) or water contained in

vegetation or soil. Locating and delineating water bodies with remote sensing data are done most easily in

near-IR wavelengths because of this absorption property. However, various conditions of water bodies

manifest themselves primarily in visible wavelengths. The energy–matter interactions at these

wavelengths are very complex and depend on a number of interrelated factors. For example, the

reflectance from a water body can stem from an interaction with the water’s surface (specular reflection),

with material suspended in the water, or with the bottom of the depression containing the water body.

Even with deep water where bottom effects are negligible, the reflectance properties of a water body are a

function of not only the water per se but also the material in the water.

Clear water absorbs relatively little energy having wavelengths less than about 0.6 μm. High

transmittance typifies these wavelengths with a maximum in the blue-green portion of the spectrum.

However, as the turbidity of water changes (because of the presence of organic or inorganic materials),

transmittance, and therefore reflectance, changes dramatically. For example, waters containing large quantities of suspended sediments resulting from soil erosion normally have much higher visible reflectance than other "clear" waters in the same geographic area. Likewise, the reflectance of water

changes with the chlorophyll concentration involved. Increases in chlorophyll concentration tend to

decrease water reflectance in blue wavelengths and increase it in green wavelengths. These changes have

been used to monitor the presence and estimate the concentration of algae via remote sensing data.

Reflectance data have also been used to determine the presence or absence of tannin dyes from bog

vegetation in lowland areas and to detect a number of pollutants, such as oil and certain industrial wastes.

Figure 1.10 illustrates some of these effects, using spectra from three lakes with different bio-optical

properties. The first spectrum is from a clear, oligotrophic lake with a chlorophyll level of 1.2 μg/l and

only 2.4 mg/l of dissolved organic carbon (DOC). Its spectral reflectance is relatively high in the blue-

green portion of the spectrum and decreases in the red and near infrared. In contrast, the spectrum from a

lake experiencing an algae bloom, with a much higher chlorophyll concentration (12.3 μg/l), shows a


reflectance peak in the green spectrum and absorption in the blue and red regions. These reflectance and

absorption features are associated with several pigments present in algae. Finally, the third spectrum in

Figure 1.10 was acquired on an ombrotrophic bog lake, with very high levels of DOC (20.7 mg/l). These

naturally occurring tannins and other complex organic molecules give the lake a very dark appearance,

with its reflectance curve nearly flat across the visible spectrum.

Many important water characteristics, such as dissolved oxygen concentration, pH, and salt

concentration, cannot be observed directly through changes in water reflectance. However, such

parameters sometimes correlate with observed reflectance. In short, there are many complex

interrelationships between the spectral reflectance of water and particular characteristics. One must use

appropriate reference data to correctly interpret reflectance measurements made over water.


Lecture Four

CONCEPTS AND FOUNDATIONS OF REMOTE SENSING


Spectral Response Patterns

Having looked at the spectral reflectance characteristics of vegetation, soil, sand, concrete, asphalt, snow,

clouds, and water, we should recognize that these broad feature types are often spectrally separable.

However, the degree of separation between types varies among and within spectral regions. For example,

water and vegetation might reflect nearly equally in visible wavelengths, yet these features are almost

always separable in near-IR wavelengths.

Because spectral responses measured by remote sensors over various features often permit an assessment

of the type and/or condition of the features, these responses have often been referred to as spectral

signatures. Spectral reflectance and spectral emittance curves (for wavelengths greater than 3.0 μm) are

often referred to in this manner. The physical radiation measurements acquired over specific terrain

features at various wavelengths are also referred to as the spectral signatures for those features.

Although it is true that many earth surface features manifest very distinctive spectral reflectance and/or

emittance characteristics, these characteristics result in spectral "response patterns" rather than in spectral "signatures." The reason for this is that the term signature tends to imply a pattern that is absolute and

unique. This is not the case with the spectral patterns observed in the natural world. As we have seen,

spectral response patterns measured by remote sensors may be quantitative, but they are not absolute.

They may be distinctive, but they are not necessarily unique.

We have already looked at some characteristics of objects that influence their spectral response patterns.

Temporal effects and spatial effects can also enter into any given analysis. Temporal effects are any

factors that change the spectral characteristics of a feature over time. For example, the spectral

characteristics of many species of vegetation are in a nearly continual state of change throughout a

growing season. These changes often influence when we might collect sensor data for a particular

application.

Spatial effects refer to factors that cause the same types of features (e.g., corn plants) at a given point in

time to have different characteristics at different geographic locations. In small-area analysis the

geographic locations may be meters apart and spatial effects may be negligible. When analyzing satellite

data, the locations may be hundreds of kilometers apart where entirely different soils, climates, and

cultivation practices might exist.

Temporal and spatial effects influence virtually all remote sensing operations. These effects normally

complicate the issue of analyzing spectral reflectance properties of earth resources. Again, however,

temporal and spatial effects might be the keys to gleaning the information sought in an analysis. For

example, the process of change detection is premised on the ability to measure temporal effects. An

example of this process is detecting the change in suburban development near a metropolitan area by

using data obtained on two different dates.

Atmospheric Influences on Spectral Response Patterns

In addition to being influenced by temporal and spatial effects, spectral response patterns are influenced

by the atmosphere. Regrettably, the energy recorded by a sensor is always modified to some extent by the

atmosphere between the sensor and the ground. Figure 1.11 provides an initial frame of reference for

understanding the nature of atmospheric effects. Shown in this figure is the typical situation encountered

when a sensor records reflected solar energy. The atmosphere affects the "brightness," or radiance,

recorded over any given point on the ground in two almost contradictory ways. First, it attenuates


(reduces) the energy illuminating a ground object (and being reflected from the object). Second, the

atmosphere acts as a reflector itself, adding a scattered, extraneous path radiance to the signal detected by

the sensor. By expressing these two atmospheric effects mathematically, the total radiance recorded by

the sensor may be related to the reflectance of the ground object and the incoming radiation or irradiance

using the equation
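L_tot = ρET/π + L_p

where L_tot is the total spectral radiance measured by the sensor, ρ is the reflectance of the ground object (assumed to reflect diffusely), E is the irradiance on the object, T is the transmission of the atmosphere along the path from the object to the sensor, and L_p is the path radiance.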

It should be noted that all of the above factors depend on wavelength. Also, as shown in Figure 1.11, the

irradiance (E) stems from two sources: (1) directly reflected "sunlight" and (2) diffuse "skylight," which

is sunlight that has been previously scattered by the atmosphere. The relative dominance of sunlight

versus skylight in any given image is strongly dependent on weather conditions (e.g., sunny vs. hazy vs.

cloudy). Likewise, irradiance varies with the seasonal changes in solar elevation angle (Figure 7.4) and

the changing distance between the earth and sun.
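To make the two atmospheric effects concrete, the Python sketch below evaluates the radiance relationship given above for a single hypothetical band; all numeric values (reflectance, irradiance, transmission, path radiance) are assumed for illustration, not calibration data.

```python
import math

# Illustrative use of L_tot = rho * E * T / pi + L_p for a diffusely
# reflecting ground target.
def total_radiance(reflectance, irradiance, transmission, path_radiance):
    return reflectance * irradiance * transmission / math.pi + path_radiance

rho = 0.25            # ground reflectance (fraction)
irradiance = 1500.0   # irradiance on the target (e.g., W m^-2 um^-1)
transmission = 0.85   # atmospheric transmission, target to sensor
path_radiance = 12.0  # scattered "extra" radiance added by the atmosphere

l_tot = total_radiance(rho, irradiance, transmission, path_radiance)
print(f"At-sensor radiance: {l_tot:.1f}")

# Inverting the same relationship recovers the surface reflectance once
# E, T, and L_p have been estimated (the basis of atmospheric correction).
rho_back = (l_tot - path_radiance) * math.pi / (irradiance * transmission)
print(f"Recovered reflectance: {rho_back:.3f}")
```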


For a sensor positioned close to the earth’s surface, the path radiance Lp will generally be small or

negligible, because the atmospheric path length from the surface to the sensor is too short for much

scattering to occur. In contrast, imagery from satellite systems will be more strongly affected by path

radiance, due to the longer atmospheric path between the earth’s surface and the spacecraft. This can be

seen in Figure 1.12, which compares two spectral response patterns from the same area. One "signature"

in this figure was collected using a handheld field spectroradiometer (see Section 1.6 for discussion),

from a distance of only a few cm above the surface. The second curve shown in Figure 1.12 was collected

by the Hyperion hyperspectral sensor on the EO-1 satellite (a hyperspectral system), and it shows the stronger atmospheric influence associated with the much longer path to a spaceborne sensor.


Lecture Five

Basic Principles of Photogrammetry


Chapter two

Basic Principles of Photogrammetry

2.1 INTRODUCTION

Photogrammetry is the science and technology of obtaining spatial measurements and other

geometrically reliable derived products from photographs. Historically, photogrammetric analyses

involved the use of hardcopy photographic products such as paper prints or film transparencies. Today,

most photogrammetric procedures involve the use of digital, or softcopy, photographic data products. In

certain cases, these digital products might result from high resolution scanning of hardcopy photographs

(e.g., historical photographs). However, the overwhelming majority of digital photographs used in

modern photogrammetric applications come directly from digital cameras. In fact, photogrammetric

processing of digital camera data is often accomplished in-flight such that digital orthophotos, digital

elevation models (DEMs), and other GIS data products are available immediately after, or even during, an

aerial photographic mission.

While the physical character of hardcopy and digital photographs is quite different, the basic geometric

principles used to analyze them photogrammetrically are identical. In fact, it is often easier to visualize

and understand these principles in a hardcopy context and then extend them to the softcopy environment.

This is the approach we adopt in this chapter. Hence, our objective in this discussion is to not only

prepare the reader to be able to make basic measurements from hardcopy photographic images, but also to

understand the underlying principles of modern digital (softcopy) photogrammetry. We stress aerial

photogrammetric techniques and procedures in this discussion, but the same general principles hold for

terrestrial (ground-based) and space-based operations as well.

In this chapter, we introduce only the most basic aspects of the broad subject of photogrammetry. (More

comprehensive and detailed treatment of the subject of photogrammetry is available in such references as

ASPRS, 2004; Mikhail et al., 2001; and Wolf et al., 2013.) We limit our discussion to the following

photogrammetric activities:

1. Determining the scale of a vertical photograph and estimating horizontal ground distances

from measurements made on a vertical photograph. The scale of a photograph expresses the

mathematical relationship between a distance measured on the photo and the corresponding

horizontal distance measured in a ground coordinate system. Unlike maps, which have a single

constant scale, aerial photographs have a range of scales that vary in proportion to the elevation

of the terrain involved. Once the scale of a photograph is known at any particular elevation,

ground distances at that elevation can be readily estimated from corresponding photo distance measurements (a worked numerical sketch of this calculation follows this list).

2. Using area measurements made on a vertical photograph to determine the equivalent areas

in a ground coordinate system. Computing ground areas from corresponding photo area

measurement is simply an extension of the above concept of scale. The only difference is that

whereas ground distances and photo distances vary linearly, ground areas and photo areas vary

as the square of the scale.

3. Quantifying the effects of relief displacement on vertical aerial photographs. Again unlike

maps, aerial photographs in general do not show the true plan or top view of objects. The images

of the tops of objects appearing in a photograph are displaced from the images of their bases.

This is known as relief displacement and causes any object standing above the terrain to "lean


away" from the principal point of a photograph radially. Relief displacement, like scale

variation, precludes the use of aerial photographs directly as maps. However, reliable ground

measurements and maps can be obtained from vertical photographs if photo measurements are

analyzed with due regard for scale variations and relief displacement.

4. Determining object heights from relief displacement measurements. While relief

displacement is usually thought of as an image distortion that must be dealt with, it can also be

used to estimate the heights of objects appearing on a photograph. As we later illustrate, the

magnitude of relief displacement depends on the flying height, the distance from the photo

principal point to the feature, and the height of the feature. Because these factors are

geometrically related, we can measure an object’s relief displacement and radial position on a

photograph and thereby determine the height of the object. This technique provides limited

accuracy but is useful in applications where only approximate object heights are needed.

5. Determining object heights and terrain elevations by measuring image parallax. The

previous operations are performed using vertical photos individually. Many photogrammetric

operations involve analyzing images in the area of overlap of a stereopair. Within this area, we

have two views of the same terrain, taken from different vantage points. Between these two

views, the relative positions of features lying closer to the camera (at higher elevation) will

change more from photo to photo than the positions of features farther from the camera (at lower

elevation). This change in relative position is called parallax. It can be measured on overlapping

photographs and used to determine object heights and terrain elevations.

6. Determining the elements of exterior orientation of aerial photographs. In order to use aerial

photographs for photogrammetric mapping purposes, it is necessary to determine six independent

parameters that describe the position and angular orientation of each photograph at the instant of

its exposure relative to the origin and orientation of the ground coordinate system used for

mapping. These six variables are called the elements of exterior orientation. Three of these are the

3D position (X, Y, Z) of the center of the photocoordinate axis system at the instant of exposure.

The remaining three are the 3D rotation angles related to the amount and direction of tilt

in each photo at the instant of exposure. These rotations are a function of the orientation of the

platform and camera mount when the photograph is taken. For example, the wings of a fixed‐wing aircraft might be tilted up or down, relative to horizontal. Simultaneously, the camera might

be tilted up or down toward the front or rear of the aircraft. At the same time, the aircraft might be

rotated into a headwind in order to maintain a constant heading. We will see that there are two

major approaches that can be taken to determine the elements of exterior orientation. The first

involves the use of ground control (photo‐identifiable points of known ground coordinates)

together with a mathematical procedure called aerotriangulation. The second approach entails

direct georeferencing, which involves the integration of GPS and IMU observations to determine

the position and angular orientation of each photograph. We treat both of these approaches at a

conceptual level, with a minimum of mathematical detail.

7. Production of maps, DEMs, and orthophotos. "Mapping" from aerial photographs can take on

many forms. Historically, topographic maps were produced using hardcopy stereopairs placed in

a device called a stereoplotter. With this type of instrument the photographs are mounted in

special projectors that can be mutually oriented to precisely correspond to the angular tilts

present when the photographs were taken. Each of the projectors can also be translated

in x, y, and z such that a reduced‐size model is created that exactly replicates the exterior

orientation of each of the photographs comprising the stereopair. (The scale of the resulting


stereomodel is determined by the "air base" distance between the projectors chosen by the

instrument operator.) When viewed stereoscopically, the model can be used to prepare an analog

or digital planimetric map having no tilt or relief distortions. In addition, topographic contours

can be integrated with the planimetric data and the height of individual features appearing in the

model can be determined.

8. Preparing a flight plan to acquire vertical aerial photography. Whenever new photographic

coverage of a project area is to be obtained, a photographic flight mission must be planned. As we

will discuss, mission planning software highly automates this process. However, most readers of

this book are, or will become, consumers of image data rather than providers. Such individuals

likely will not have direct access to flight planning software and will appropriately rely on

professional data suppliers to jointly design a mission to meet their needs. There are also cases

where cost and logistics might dictate that the data consumer and the data provider are one and the same.
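Items 1, 2, and 4 above reduce to simple ratios for a truly vertical photograph over terrain of uniform elevation. The Python sketch below works through one hypothetical example using the standard vertical-photo relationships (scale S = f/(H − h), ground area varying as 1/S², and relief displacement d = r·h_obj/H′); all focal lengths, flying heights, and photo measurements are invented values.

```python
# Illustrative vertical-photo calculations (all inputs are hypothetical).
focal_length_m = 0.152     # 152-mm lens
flying_height_m = 3000.0   # flying height above the datum
terrain_elev_m = 500.0     # ground elevation of the photographed area

# Item 1: photo scale at this elevation, S = f / (H - h).
scale = focal_length_m / (flying_height_m - terrain_elev_m)
print(f"Scale ~ 1:{1 / scale:,.0f}")

# Ground distance for a distance measured on the photo.
photo_distance_m = 0.084   # 84 mm measured on the photo
print(f"Ground distance ~ {photo_distance_m / scale:.0f} m")

# Item 2: ground area varies as the square of the scale.
photo_area_m2 = 0.0004     # 4 cm^2 measured on the photo
print(f"Ground area ~ {photo_area_m2 / scale ** 2:,.0f} m^2")

# Item 4: object height from relief displacement, h_obj = d * H' / r,
# where d is the displacement, r is the radial distance from the principal
# point to the object's displaced top, and H' is the flying height above
# the object's base.
d_m, r_m = 0.0021, 0.0750  # 2.1 mm displacement, 75.0 mm radial distance
height_above_base_m = flying_height_m - terrain_elev_m
print(f"Object height ~ {d_m * height_above_base_m / r_m:.1f} m")
```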

2.2 BASIC GEOMETRIC CHARACTERISTICS OF AERIAL PHOTOGRAPHS

Geometric Types of Aerial Photographs

Aerial photographs are generally classified as either vertical or oblique. Vertical photographs are

those made with the camera axis directed as vertically as possible. However, a "truly" vertical aerial

photograph is rarely obtainable because of the previously described unavoidable angular rotations, or

tilts, caused by the angular attitude of the aircraft at the instant of exposure. These unavoidable tilts

cause slight (1° to 3°) unintentional inclination of the camera optical axis, resulting in the acquisition

of tilted photographs.

Virtually all photographs are tilted. When tilted unintentionally and slightly, tilted photographs can be

analyzed using the simplified models and methods appropriate to the analysis of truly vertical

photographs without the introduction of serious error. This is done in many practical applications

where approximate measurements suffice (e.g., determining the ground dimensions of a flat

agricultural field based on photo measurements made with a scale or digitizing tablet). However, as

we will discuss, precise digital photogrammetric procedures employ methods and models that

rigorously account for even very small angles of tilt with no loss of accuracy.

When aerial photographs are taken with an intentional inclination of the camera axis, oblique

photographs result. High oblique photographs include an image of the horizon, and low oblique

photographs do not. In this chapter, we emphasize the geometric aspects of acquiring and analyzing

vertical aerial photographs given their extensive historical and continuing use for large-area

photogrammetric mapping. However, the use of oblique aerial photographs has increased rapidly in

such applications as urban mapping and disaster assessment. With their "side-view" character,

oblique photographs afford a more natural perspective in comparison to the top view perspective of

vertical aerial photographs. This can greatly facilitate the image interpretation process (particularly

for those individuals having limited training or experience in image interpretation). Various examples

of oblique aerial photographs are interspersed throughout this book.

Photogrammetric measurements can also be made from oblique aerial photographs if they are

acquired with this purpose in mind. This is often accomplished using multiple cameras that are

shuttered simultaneously.


Taking Vertical Aerial Photographs

Most vertical aerial photographs are taken with frame cameras along flight lines, or flight strips. The

line traced on the ground directly beneath the aircraft during acquisition of photography is called the

nadir line. This line connects the image centers of the vertical photographs. Figure 2.1 illustrates the

typical character of the photographic coverage along a flight line. Successive photographs are

generally taken with some degree of endlap. Not only does this lapping ensure total coverage along a

flight line, but an endlap of at least 50% is essential for total stereoscopic coverage of a project area.

Stereoscopic coverage consists of adjacent pairs of overlapping vertical photographs called

stereopairs. Stereopairs provide two different perspectives of the ground area in their region of

endlap. When images forming a stereopair are viewed through a stereoscope, each eye

psychologically occupies the vantage point from which the respective image of the stereopair was

taken in flight. The result is the perception of a three-dimensional stereomodel. Most applications of

aerial photographic interpretation entail the use of stereoscopic coverage and stereoviewing.

Figure 2.1 Photographic coverage along a flight strip: (a) conditions during exposure; (b) resulting

photography.

Successive photographs along a flight strip are taken at intervals that are controlled by the camera

intervalometer or software-based sensor control system. The area included in the overlap of successive

photographs is called the stereoscopic overlap area. Typically, successive photographs contain 55 to 65%

overlap to ensure at least 50% endlap over varying terrain, in spite of unintentional tilt. Figure 2.2


illustrates the ground coverage relationship of successive photographs forming a stereopair having

approximately a 60% stereoscopic overlap area.

The ground distance between the photo centers at the times of exposure is called the air base. The ratio

between the air base and the flying height above ground determines the vertical exaggeration perceived

by photo interpreters. The larger the base–height ratio, the greater the vertical exaggeration.

Figure 2.2 Acquisition of successive photographs yielding a stereopair.

This greater apparent relief often aids in visual image interpretation. Also, as we will discuss later, many

photogrammetric mapping operations depend upon accurate determination of the position at which rays

from two or more photographs intersect in space. Rays associated with larger base–height ratios intersect

at larger (closer to being perpendicular) angles than do those associated with the smaller (closer to being

parallel) angles associated with smaller base–height ratios. Thus larger base–height ratios result in more

accurate determination of ray intersection positions than do smaller base–height ratios.

Most project sites are large enough for multiple-flight-line passes to be made over the area to obtain

complete stereoscopic coverage. Figure 2.3 illustrates how adjacent strips are photographed. On

successive flights over the area, adjacent strips have a sidelap of approximately 30%. Multiple strips

comprise what is called a block of photographs.


Figure 2.3 Adjacent flight lines over a project area.


Lecture Six

Basic Principles of Photogrammetry


Geometric Elements of a Vertical Photograph

The basic geometric elements of a hardcopy vertical aerial photograph taken with a single-lens frame

camera are depicted in Figure 2.4. Light rays from terrain objects are imaged in the plane of the film

negative after intersecting at the camera lens exposure station, L. The negative is located behind the lens

at a distance equal to the lens focal length, f. Assuming the size of a paper print positive (or film positive)

is equal to that of the negative, positive image positions can be depicted diagrammatically in front of the

lens in a plane located at a distance f. This rendition is appropriate in that most photo positives used for

measurement purposes are contact printed, resulting in the geometric relationships shown.

The x and y coordinate positions of image points are referenced with respect to axes formed by straight

lines joining the opposite fiducial marks recorded on the positive. The x axis is arbitrarily assigned to the

fiducial axis most nearly coincident with the line of flight and is taken as positive in the forward direction

of flight. The positive y axis is located 90° counterclockwise from the positive x axis. Because of the

precision with which the fiducial marks and the lens are placed in a metric camera, the photocoordinate

origin, o, can be assumed to coincide exactly with the principal point, the intersection of the lens optical

axis and the film plane. The point where the prolongation of the optical axis of the camera intersects the

terrain is referred to as the ground principal point, O. Images for terrain points A, B, C, D, and E appear

geometrically reversed on the negative at aʹ, bʹ, cʹ, dʹ, and eʹ, and in proper geometric relationship on the

positive at a, b, c, d, and e.

The xy photocoordinates of a point are the perpendicular distances from the xy coordinate axes. Points to

the right of the y axis have positive x coordinates and points to the left have negative x coordinates.

Similarly, points above the x axis have positive y coordinates and those below have negative y

coordinates.


Figure 2.4 Basic geometric elements of a vertical photograph.

Photocoordinate Measurement

Measurements of photocoordinates may be obtained using any one of many measurement devices. These

devices vary in their accuracy, cost, and availability. For rudimentary photogrammetric problems—where

low orders of measurement accuracy are acceptable—a triangular engineer’s scale or metric scale may be

used. When using these scales, measurement accuracy is generally improved by taking the average of

several repeated measurements. Measurements are also generally more accurate when made with the aid

of a magnifying lens.

In a softcopy environment, photocoordinates are measured using a raster image display with a cursor to

collect "raw" image coordinates in terms of row and column values within the image file. The

relationship between the row and column coordinate system and the camera’s fiducial axis coordinate

system is determined through the development of a mathematical coordinate transformation between the

two systems. This process requires that some points have their coordinates known in both systems. The

fiducial marks are used for this purpose in that their positions in the focal plane are determined during the

calibration of the camera, and they can be readily measured in the row and column coordinate system.
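As a rough illustration of such a transformation, the short Python sketch below fits a six-parameter affine model between measured (row, column) values and calibrated fiducial (x, y) coordinates by least squares. The numerical values, variable names, and the helper function are invented for illustration and are not part of these notes.

import numpy as np

# Fiducial marks measured in the image (row, column) system (pixels) ...
rc = np.array([[120.4, 95.2],
               [118.9, 4980.7],
               [5003.1, 4982.3],
               [5004.6, 96.8]])
# ... and the same marks' calibrated fiducial-axis positions (x, y) in mm.
xy = np.array([[-105.0, 105.0],
               [105.0, 105.0],
               [105.0, -105.0],
               [-105.0, -105.0]])

# Six-parameter affine model: x = a0 + a1*col + a2*row, y = b0 + b1*col + b2*row
A = np.column_stack([np.ones(len(rc)), rc[:, 1], rc[:, 0]])
ax, *_ = np.linalg.lstsq(A, xy[:, 0], rcond=None)
ay, *_ = np.linalg.lstsq(A, xy[:, 1], rcond=None)

def to_fiducial(row, col):
    """Convert a raw (row, column) measurement into fiducial-axis (x, y) coordinates in mm."""
    x = ax[0] + ax[1] * col + ax[2] * row
    y = ay[0] + ay[1] * col + ay[2] * row
    return x, y

print(to_fiducial(2561.7, 2538.4))  # an arbitrary measured image point

Once the transformation parameters are solved from the fiducial marks, every point measured in the raster display can be converted into the camera's fiducial coordinate system in the same way.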


Irrespective of what approach is used to measure photocoordinates, these measurements contain errors of

varying sources and magnitudes. These errors stem from factors such as camera lens distortions,

atmospheric refraction, earth curvature, failure of the fiducial axes to intersect at the principal point, and

shrinkage or expansion of the photographic material on which measurements are made. Sophisticated

photogrammetric analyses include corrections for all these errors. For simple measurements made on

paper prints, such corrections are usually not employed because errors introduced by slight tilt in the

photography will outweigh the effect of the other distortions.

2.3 PHOTOGRAPHIC SCALE

One of the most fundamental and frequently used geometric characteristics of hardcopy aerial

photographs is that of photographic scale. A photograph "scale," like a map scale, is an expression that

states that one unit (any unit) of distance on a photograph represents a specific number of units of actual

ground distance. Scales may be expressed as unit equivalents, representative fractions, or ratios. For

example, if 1 mm on a photograph represents 25 m on the ground, the scale of the photograph can be

expressed as 1 mm = 25 m (unit equivalents), or 1/25 000 (representative fraction), or 1:25 000 (ratio).

Quite often the terms "large scale" and "small scale" are confused by those not working with expressions

of scale on a routine basis. For example, which photograph would have the "larger" scale—a 1:10 000

scale photo covering several city blocks or a 1:50 000 photo that covers an entire city? The intuitive

answer is often that the photo covering the larger "area" (the entire city) is the larger scale product. This is

not the case. The larger scale product is the 1:10 000 image because it shows ground features at a larger,

more detailed, size. The 1:50 000 scale photo of the entire city would render ground features at a much

smaller, less detailed size. Hence, in spite of its larger ground coverage, the 1:50 000 photo would be

termed the smaller scale product. A convenient way to make scale comparisons is to remember that the

same objects are smaller on a "smaller" scale photograph than on a "larger" scale photo. Scale

comparisons can also be made by comparing the magnitudes of the representative fractions involved.

(That is, 1/50 000 is smaller than 1/10 000.)

The most straightforward method for determining photo scale is to measure the corresponding photo and

ground distances between any two points. This requires that the points be mutually identifiable on both

the photo and a map. The scale S is then computed as the ratio of the photo distance d to the ground

distance D,
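S = d / D  (with d and D expressed in the same units)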

EXAMPLE 2.1

Assume that two road intersections shown on a photograph can be located on a 1:25 000 scale

topographic map. The measured distance between the intersections is 47.2 mm on the map and 94.3 mm

on the photograph. (a) What is the scale of the photograph? (b) At that scale, what is the length of a fence

line that measures 42.9 mm on the photograph?

Solution

(a) The ground distance between the intersections is determined from the map scale as
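D = 47.2 mm × 25 000 = 1 180 000 mm = 1180 m

S = d / D = 94.3 mm / 1 180 000 mm ≈ 1/12 500 (i.e., 1:12 500)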


(Note that because only three significant, or meaningful, figures were present in the original

measurements, only three significant figures are indicated in the final result.)

(b) The ground length of the 42.9-mm fence line is
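42.9 mm × 12 500 = 536 250 mm ≈ 536 m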

For a vertical photograph taken over flat terrain, scale is a function of the focal length f of the

camera used to acquire the image and the flying height above the ground, Hʹ , from which the image was

taken. In general,
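S = f / Hʹ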

Figure 2.5 (Scale of a vertical photograph taken over flat terrain) illustrates how we arrive at the above equation. Shown in this figure is the side view of a vertical photograph taken over flat terrain. Exposure station L is at an aircraft flying height H above some datum, or arbitrary base elevation. The datum most frequently used is mean

sea level. If flying height H and the elevation of the terrain h are known, we can determine Hʹ by

subtraction (Hʹ = H − h). If we now consider terrain points A, O, and B, they are imaged at points aʹ, oʹ,

and bʹ on the negative film and at a, o, and b on the positive print. We can derive an expression for photo

scale by observing similar triangles Lao and LAO, and the corresponding photo (ao) and ground (AO)

distances. That is,
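S = ao / AO = Lo / LO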

or
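S = f / Hʹ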

EXAMPLE 2.2

A camera equipped with a 152-mm-focal-length lens is used to take a vertical photograph from a flying

height of 2780 m above mean sea level. If the terrain is flat and located at an elevation of 500 m, what is

the scale of the photograph?

Solution
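S = f / Hʹ = f / (H − h) = 0.152 m / (2780 m − 500 m) = 0.152 / 2280 = 1/15 000 (i.e., 1:15 000)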


EXAMPLE 2.3

Assume that a vertical photograph was taken at a flying height of 5000 m above sea level using a camera

with a 152-mm-focal-length lens. (a) Determine the photo scale at points A and B, which lie at elevations

of 1200 and 1960 m. (b) What ground distance corresponds to a 20.1-mm photo distance measured at

each of these elevations?

Solution

a)
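Scale at A: S_A = f / (H − h_A) = 0.152 m / (5000 − 1200) m = 0.152 / 3800 = 1/25 000

Scale at B: S_B = f / (H − h_B) = 0.152 m / (5000 − 1960) m = 0.152 / 3040 = 1/20 000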

b) The ground distance corresponding to a 20.1-mm photo distance is
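at A: 20.1 mm × 25 000 = 502 500 mm ≈ 503 m; at B: 20.1 mm × 20 000 = 402 000 mm = 402 m.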

Often it is convenient to compute an average scale for an entire photograph. This scale is calculated using

the average terrain elevation for the area imaged. Consequently, it is exact for distances occurring at the

average elevation and is approximate at all other elevations. Average scale may be expressed as
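S_avg = f / (H − h_avg)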

where h_avg is the average elevation of the terrain shown in the photograph.

The result of photo scale variation is geometric distortion. All points on a map are depicted in their true

relative horizontal (planimetric) positions, but points on a photo taken over varying terrain are displaced

from their true "map positions." This difference results because a map is a scaled orthographic projection


of the ground surface, whereas a vertical photograph yields a perspective projection. The differing nature

of these two forms of projection is illustrated in Figure 2.6. As shown, a map results from projecting

vertical rays from ground points to the map sheet (at a particular scale). A photograph results from

projecting converging rays through a common point within the camera lens. Because of the nature of this

projection, any variations in terrain elevation will result in scale variation and displaced image positions.

Figure 2.6 Comparative geometry of (a) a map and (b) a vertical aerial photograph. Note

differences in size, shape, and location of the two trees.

On a map we see a top view of objects in their true relative horizontal positions. On a photograph, areas

of terrain at the higher elevations lie closer to the camera at the time of exposure and therefore appear

larger than corresponding areas lying at lower elevations. Furthermore, the tops of objects are always

displaced from their bases (Figure 2.6). This distortion is called relief displacement and causes any object

standing above the terrain to "lean" away from the principal point of a photograph radially.


Lecture Seven

Basic Principles of Photogrammetry


AREA MEASUREMENT

The process of measuring areas using aerial photographs can take on many forms. The accuracy of area

measurement is a function of not only the measuring device used, but also the degree of image scale

variation due to relief in the terrain and tilt in the photography. Although large errors in area

determinations can result even with vertical photographs in regions of moderate to high relief, accurate

measurements may be made on vertical photos of areas of low relief. Simple scales may be used to

measure the area of simply shaped features. For example, the area of a rectangular field can be

determined by simply measuring its length and width. Similarly, the area of a circular feature can be

computed after measuring its radius or diameter.

EXAMPLE 2.4

A rectangular agricultural field measures 8.65 cm long and 5.13 cm wide on a vertical photograph having

a scale of 1:20,000. Find the area of the field at ground level.

Solution
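Working through the given values at the 1:20,000 scale:

Ground length = 8.65 cm × 20,000 = 173,000 cm = 1730 m

Ground width = 5.13 cm × 20,000 = 102,600 cm = 1026 m

Ground area = 1730 m × 1026 m = 1,774,980 m² ≈ 1.77 km²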

The ground area of an irregularly shaped feature is usually determined by measuring the area of the

feature on the photograph. The photo area is then converted to a ground area from the following

relationship:
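Ground area = photo area × (1/S)²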

EXAMPLE 2.5

The area of a lake is 52.2 cm2 on a 1:7500 vertical photograph. Find the ground area of the lake.

Solution
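Working through the given values:

Ground area = 52.2 cm² × (7500)² = 2,936,250,000 cm² = 293,625 m² ≈ 0.294 km²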

RELIEF DISPLACEMENT OF VERTICAL FEATURES

Characteristics of Relief Displacement

In Figure 2.6, we illustrated the effect of relief displacement on a photograph taken over varied terrain. In

essence, an increase in the elevation of a feature causes its position on the photograph to be displaced

radially outward from the principal point. Hence, when a vertical feature is photographed, relief


displacement causes the top of the feature to lie farther from the photo center than its base. As a result,

vertical features appear to lean away from the center of the photograph.

The geometric components of relief displacement are illustrated in Figure 2.7, which shows a vertical

photograph imaging a tower. The photograph is taken from flying height H above datum. When

considering the relief displacement of a vertical feature, it is convenient to arbitrarily assume a datum

plane placed at the base of the feature. If this is done, the flying height H must be correctly referenced to

this same datum, not mean sea level. Thus, in Figure 2.7 the height of the tower (whose base is at datum)

is h. Note that the top of the tower, A, is imaged at a in the photograph, whereas the base of the tower, Aʹ, is imaged at aʹ. That is, the image of the top of the tower is radially displaced by the distance d from that

of the bottom. The distance d is the relief displacement of the tower. The equivalent distance projected to

datum is D. The distance from the photo principal point to the top of the tower is r. The equivalent

distance projected to datum is R.

We can express d as a function of the dimensions shown in Figure 2.7. From similar triangles AAʹ A″ and

LOA″,
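D / R = h / H, so that D = R h / H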

Figure 2.7 Geometric components of relief displacement.


Expressing distances D and R at the scale of the photograph, we obtain
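d = r h / H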

where d = relief displacement

r = radial distance on the photograph from the principal point to the displaced image point

h = height above datum of the object point

H = flying height above the same datum chosen to reference h

An analysis of the above equation indicates mathematically the nature of relief displacement seen

pictorially. That is, relief displacement of any given point increases as the distance from the principal

point increases, and it increases as the elevation of the point increases. Other things being equal, it

decreases with an increase in flying height. Hence, under similar conditions high altitude photography of

an area manifests less relief displacement than low altitude photography. Also, there is no relief

displacement at the principal point (since r = 0).

Object Height Determination from Relief Displacement Measurement

Equation above also indicates that relief displacement increases with the feature height h. This

relationship makes it possible to indirectly measure heights of objects appearing on aerial photographs.

By rearranging the above equation, we obtain
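h = d H / r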

To use this equation, both the top and base of the object to be measured must be clearly identifiable on the

photograph and the flying height H must be known. If this is the case, d and r can be measured on the

photograph and used to calculate the object height h. (When using the equation above, it is important to

remember that H must be referenced to the elevation of the base of the feature, not to mean sea level.)

EXAMPLE 2.7

For the photo shown in Figure 3.12, assume that the relief displacement for the tower at A is 2.01 mm,

and the radial distance from the center of the photo to the top of the tower is 56.43 mm. If the flying

height is 1220 m above the base of the tower, find the height of the tower

Solution
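Working through the given values:

h = d H / r = (2.01 mm × 1220 m) / 56.43 mm = 2452.2 / 56.43 ≈ 43.5 m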


While measuring relief displacement is a very convenient means of calculating heights of objects from

aerial photographs, the reader is reminded of the assumptions implicit in the use of the method. We have

assumed use of truly vertical photography, accurate knowledge of the flying height, clearly visible

objects, precise location of the principal point, and a measurement technique whose accuracy is consistent

with the degree of relief displacement involved. If these assumptions are reasonably met, quite reliable

height determinations may be made using single prints and relatively unsophisticated measuring

equipment.

Correcting for Relief Displacement

In addition to calculating object heights, quantification of relief displacement can be used to correct the

image positions of terrain points appearing in a photograph. Keep in mind that terrain points in areas of

varied relief exhibit relief displacements just as vertical objects do. If all terrain points were to lie at a common datum elevation, terrain points A and B would be located at Aʹ and Bʹ and would be imaged at points aʹ

and bʹ on the photograph. Due to the varied relief, however, the position of point A is shifted radially

outward on the photograph (to a), and the position of point B is shifted radially inward (to b). These

changes in image position are the relief displacements of points A and B. Figure 2.7 b illustrates the effect

they have on the geometry of the photo. Because Aʹ and Bʹ lie at the same terrain elevation, the image line

aʹ bʹ accurately represents the scaled horizontal length and directional orientation of the ground line AB.


Figure 2.7 Relief displacement on a photograph taken over varied terrain: (a) displacement of

terrain points; (b) distortion of horizontal angles measured on photograph.

When the relief displacements are introduced, the resulting line ab has a considerably altered length and

orientation. Angles are also distorted by relief displacements. In Figure 2.7b, the horizontal ground angle

ACB is accurately expressed by aʹ cbʹ on the photo. Due to the displacements, the distorted angle acb will

appear on the photograph. Note that, because of the radial nature of relief displacements, angles about the

origin of the photo (such as aob) will not be distorted.

Relief displacement can be corrected for by using the above equation to compute its magnitude on a point-by-point basis

and then laying off the computed displacement distances radially (in reverse) on the photograph. This

procedure establishes the datum-level image positions of the points and removes the relief distortions,

resulting in planimetrically correct image positions at datum scale. This scale can be determined from the

flying height above datum (S=f/H). Ground lengths, directions, angles, and areas may then be directly

determined from these corrected image positions.

EXAMPLE 2.8

Referring to the vertical photograph depicted in Figure 2.7, assume that the radial distance ra to point A

is 63.84 mm and the radial distance rb to point B is 62.65 mm. Flying height H is 1220 m above datum,

point A is 152 m above datum, and point B is 168 m below datum. Find the radial distance and direction

one must lay off from points a and b to plot a0 and b0

Solution
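Working through the given values with d = r h / H:

For point A: d_a = (63.84 mm × 152 m) / 1220 m ≈ 7.95 mm. Because A lies above datum, its image is displaced radially outward, so a0 is plotted 7.95 mm from a, measured radially inward toward the principal point.

For point B: d_b = (62.65 mm × 168 m) / 1220 m ≈ 8.63 mm. Because B lies below datum, its image is displaced radially inward, so b0 is plotted 8.63 mm from b, measured radially outward away from the principal point.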


Lecture Eight

Natural Land Resources Observation Systems


CHAPTER THREE

Natural Land Resources Observation Systems

Landsat Series

The US space agency (NASA) launched the first satellite of the Landsat series (1–7) in 1972; it was used in monitoring the Earth and environmental changes, and the remaining satellites of the series were launched afterwards. The sensors carried on these satellites were of three types:

- Return Beam Vidicon (RBV)

- Multispectral Scanner (MSS)

- Thematic Mapper (TM)

Characteristics of Landsat satellites 1-3

The Landsat 1–3 satellites travel in near-polar orbits, inclined about 9 degrees from the poles, and circle the Earth every 103 minutes, completing about 14 orbits a day. They cross the equator at an angle of about 9 degrees from a true north–south line. To provide nearly complete coverage of the Earth's surface, these satellites acquire images in swaths 185 km wide.

The first generation of Landsat satellites (1-3) contains two types of sensors:

1. Return Beam Vidicon (RBV) with three spectral bands

2. Multispectral Scanner (MSS) with four spectral bands

The following table shows the general specifications of the Landsat satellite series


Characteristics of Landsat satellites 4 and 5

Landsat 4 and 5 were launched into sun-synchronous, near-polar orbits at a lower altitude (about 705 km) in order to improve the spatial resolution of the sensors. Each satellite crosses the equator at about 9:45 a.m. local sun time and completes an orbit every 99 minutes, about 14.5 orbits per day, as shown in Table 1.


Landsat 4 and 5 carry two types of sensors:

- Multispectral Scanner (MSS), and

- Thematic Mapper (TM).

The MSS sensor on Landsat 4 and 5 records radiation reflected from the Earth's surface in the visible and near-infrared (NIR), while the TM sensor also records reflected mid-infrared and emitted thermal infrared radiation.

Landsat satellites (6 and 7)

The launch of Landsat 6 failed, and the satellite was unable to reach orbit. After the successful launch of the Landsat 7 satellite, the improvements that had been added to Landsat 6 were carried over to Landsat 7. Landsat 7 carries the Enhanced Thematic Mapper Plus (ETM+), which is similar to the TM sensor but more sophisticated, adding a new panchromatic band with a spatial resolution of 15 m and a thermal infrared band. It also has higher radiometric accuracy and higher spatial resolution than the MSS sensor.

The following table shows the spectral bands of the TM on Landsat 4 and 5 and of the ETM+ on Landsat 7, which operates in seven spectral bands plus the panchromatic band, with the characteristics of each band.


Landsat 8 LC08


SPOT Satellites Series


The first satellite of the SPOT series was launched from the Kourou launch range in French Guiana in 1986. It carried the first linear array (pushbroom) sensors used for Earth resource survey, and its pointable optics gave it off-nadir viewing and stereoscopic imaging capability. After the launch of the first satellite of the SPOT series in 1986, SPOT 2 was launched in 1990, followed by SPOT 3 in 1993, which ceased operating in 1996, while SPOT 1 and 2 continued working in their orbits.

Sensors on board SPOT 1 – 3

SPOT carries two High Resolution Visible (HRV) imaging instruments and onboard magnetic tape recorders. Each HRV can be operated in either of two sensing modes:

1. Panchromatic mode (P)

A single wide spectral band, excluding blue, is sensed with a spatial resolution of 10 m. This mode is mainly intended for studies that require fine geometric detail.

2. Multispectral mode (XS)

Three spectral bands are sensed:

- XS1 (0.50 – 0.59) µm (Green band)

- XS2(0.61-0.68) µm (Red Band)

- XS3(0.79 – 0.89) µm (NIR band)

From the data recorded in this set of spectral bands, color composite images can be produced with a spatial resolution of 20 m.

The figure below shows the HRV sensor in Spot Satellite


Lecture Nine

Global Positioning System (GPS)


CHAPTER FOUR

Global Positioning System (GPS)

The GPS or "Global Positioning System" is a system for calculating position from signals sent by a network of satellites. The satellites orbit the earth at an altitude of about 20,200 km, forming a constellation arranged so that at least 4 of them are visible from any point on the planet.

GPS is the designation formed from the initials of "Global Positioning System." A location is a geographical indication of a place relative to what surrounds it. The location of any object was previously determined by calculating the distance between that object and the nearest known object, taking into account the direction or angle.


When coordinate systems appeared, such as:

- the geographic coordinate system (latitude and longitude)

- the Universal Transverse Mercator (UTM) grid system,


it became possible to determine the two-dimensional coordinates of any object with a high degree of accuracy, but there are some disadvantages, including:

1. Locating small objects is not accurate enough for some applications.

2. It is difficult to locate objects from maps or from aerial and space images in areas characterized by low diversity of earth features, such as deserts, dense forests, and water bodies.

3. The aforementioned coordinate systems are not suitable for positioning moving targets.

At the present time, thanks to the availability of satellites, it has become possible to locate and track moving targets by determining the third coordinate of any target or point on the earth's surface, whether the target is fixed or moving.

GPS is defined as a satellite-based navigation system that provides the user with location, speed, and time data, whether on land, at sea, or in the air, under any weather conditions, by using radio waves.

The Principle of GPS

The operation of the system depends on knowing the distance between two points at a specific, known time. The first point represents the location of a device transmitting a signal at a precisely known time; the second point represents the location of a device receiving that signal. The distance is calculated from the time taken by the signal to reach the receiver, using the following equation:

Distance between the transmitter and receiver = time * signal velocity

In this way the exact distance between the transmitting device and the receiving device is determined. From the known locations of the transmitters, the position of the receiver is then determined by solving a set of mathematical equations.
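For illustration, assume (hypothetically) that a signal takes 0.07 s to travel from a satellite to the receiver. With the signal travelling at the speed of light, about 300,000 km/s:

Distance = 0.07 s × 300,000 km/s = 21,000 km

which is of the order of the range between a GPS satellite and a user on the earth's surface. Repeating this measurement for several satellites gives the set of equations from which the receiver's position is solved.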

In the Global Positioning System (GPS):


- The devices that send the signals are a group of satellites, termed the space segment, distributed in orbits around the earth.

- The signals sent by these satellites are radio waves.

- The receiving device is called the user segment.

- The control segment monitors the satellites and corrects their information.

- It is worth noting that the receiver does not determine location only; it is also able to calculate altitude, time, and speed.

- To find location and altitude with GPS, signals from at least four satellites must be received by the user unit at the same time.

Advantages of GPS

1. It provides the user with three-dimensional information (geographic position and altitude above sea level) in addition to accurate time and speed, continuously during all hours of the day, for both fixed and moving targets.

2. The system can be used under any weather conditions, and even in areas where the sky is partially obscured by clouds or vegetation, but it cannot be used where the sky is completely obscured, such as inside a house or under bridges.

3. GPS covers most of the world and is used in areas of military operations.

4. It is flexible in providing information to the user according to the desired coordinate system.

5. It has low cost and great benefit.

6. It is available for civilian use, unlike the Russian GLONASS system, which was originally reserved for military use.

GPS Components


This system consists of three basic segments, together with the links that connect them, as described below:

1. Space Segment:

The space segment consists of twenty-seven satellites, twenty-four of them in working position and three held as spares to be used when needed (in case of failure of one of the operating satellites). They orbit the earth in six precisely defined orbital planes, with four satellites in each plane, at an approximate altitude of 20,200 km, completing one orbit around the earth about every 12 hours. The orbital planes are inclined at 55 degrees to the equator.


These satellites transmit radio waves in the L band, with wavelengths of 19.40–76.90 cm. Two signals, L1 at a frequency of 1575.42 MHz and L2 at a frequency of 1227.60 MHz, are generated and controlled by four atomic clocks mounted on each satellite.

There are unintended errors caused by receiver performance and by the geometric relationship between the satellites; these errors can be reduced and corrected. The resulting limitations in GPS accuracy led to the development of a new form of GPS called the Differential Global Positioning System (DGPS). The system works with two or more receivers, one of which is fixed at a point whose location is known precisely and serves as a reference for the secondary, roving (dynamic) receiver.


DGPS offers an accuracy of roughly 1–10 m for moving targets, and of a few centimeters, or even a few millimeters, for fixed targets.

2. Control Segment

GPS satellites are affected by a number of factors, including:

a. The gravity of the sun

b. The gravity of the moon

c. Solar wind

d. The electronic units in the satellite itself

- Thus, through the control segment, the paths of the GPS satellites are tracked, their information is corrected, and the operation of the devices carried by the satellites is monitored.


- There is a major control unit located in Colorado and five GPS

observation stations, two of them in the Pacific Ocean, one in the

Indian Ocean, one in the Atlantic Ocean and the fifth located near the

main control unit.

- The stations are connected to three main antennas, one in the Pacific

Ocean, the other in the Indian Ocean and the third in the Atlantic

Ocean.

- The monitoring stations receive data from the satellites every day and send them to the master control station for processing; the processed data are then returned to the observation stations and uploaded to the satellites through the antennas.

3. User Segment

There are several types of user units; all of them consist of five main parts:

1) An antenna operating in the L-wave range of 19.40 - 76.90 cm to capture

the transmitted signals from satellites.

2) A receiver for tracking the signals and performing the measurements.

3) A data processor that applies the mathematical equations to calculate the location and other quantities.

4) A storage unit for saving and retrieving the processed data.

5) An input unit and a display unit.


The receiver antenna collects the satellite signals and converts them from electromagnetic waves into electrical currents that the radio-frequency section of the receiver can sense.

The channels inside the receiver distinguish, separate, sort, and classify these currents. After the frequencies have been identified, the processor applies a number of mathematical equations to the classified data in order to calculate the location; the results are then sent to the display unit or stored until needed.

The most important characteristics that the user looks for in a receiver are:

1) A high degree of measurement accuracy.

2) A high ability to track signals.

3) The ability to apply corrections immediately in the field.

4) Light weight and low price.

5) High resistance to frost and heat.

6) Multiple receiving channels.

7) The possibility of producing digital maps immediately.

8) The possibility of using different languages.


Uses of Global Positioning System GPS in Agriculture:

1) Planning, mapping and surveying agricultural land.

2) Identifying the locations of agricultural lands, nature reserves, forests, and wells, and facilitating access to these areas.

3) Calculating the areas of fields and agricultural lands.

4) Assisting in the interpretation of aerial and satellite images for agricultural purposes by matching an agricultural region on the ground with its location in satellite images.

5) Helping in the correction stage of satellite images.

Uses of GPS in Transportation:


1) Guiding the various modes of transport (maritime, road, and air transport), identifying their locations, pathways, and directions, tracking them, and finding easier routes to the required places.

2) Assisting in the launch of spacecraft, in the determination of orbits, and in take-off and landing.

3) Helping to improve the performance of landings and take-offs, especially in bad weather conditions.

Uses of GPS in Geology:

1) Assisting in prospecting by precisely locating drilling sites on land or at sea, and by following up and directing the movement of exploration vehicles and supply convoys in desert areas.

2) Helping to identify potential earthquake areas and the severity of these tremors.

Uses of GPS in other applications:

1) Used by wireless operators in synchronizing the stations of terrestrial digital networks and satellites.

2) Search and rescue operations for those injured in crashes, accidents, ship sinkings, and forest fires.

3) Monitoring the movement of migratory and wild birds and of wild and marine animals.

4) A main source for supplying GIS with data.


Lecture Ten

Principle of Geographic Information System


CHAPTER FIVE

Principles of GIS

Scope of Geographic Information System

The appearance of geographic information systems (GIS) in the mid-1960s reflects

the progress in computer technology and the influence of quantitative revolution in

geography. GIS has evolved dramatically from a tool of automated mapping and

data management in the early days into a capable spatial data-handling and

analysis technology and, more recently, into geographic information science

(GISc).

The commercial success since the early 1980s has gained GIS an increasingly

wider application. Therefore, to give GIS a generally accepted definition is

difficult nowadays. An early definition by Calkins and Tomlinson (1977) states:

" A geographic information system is an integrated software package specifically designed for use with geographic data that performs a comprehensive range of data handling tasks. These tasks include data input, storage, retrieval and output, in addition to a wide variety of descriptive and analytical processe. "

From the definition, it becomes clear that GIS handles geographic data, which

include both spatial and attribute data that describe geographic features. Second,

the basic functions of GIS include data input, storage, processing, and output. The

basic concept of GIS is one of location and spatial distribution and relationship

(Fedra, 1996). The backbone analytical function of GIS is overlay of spatially

referenced data layers, which allows delineating their spatial relationships.

However, it is also true that data used in GIS are predominantly static in nature,

and

most of them are collected on a single occasion and then archived.

GIS today is far broader and harder to define. Many people prefer to define its

domain as geographic information science and technology (GIS&T), and it has

become imbedded in many academic and practical fields. The GIS&T field is a

loose coalescence of groups of users, managers, academics, and professionals all

working with geospatial information. Each group has a distinct educational and


"cultural" background. Each identifies itself with particular ways of approaching

particular sets of problems.

Over the course of development, many disciplines have contributed to GIS.

Therefore, GIS has many close and far "relatives." Disciplines that traditionally

have researched geographic information technologies include cartography, remote

sensing, geodesy, surveying, photogrammetry, etc. Disciplines that traditionally

have researched digital technology and information include computer science in

general and databases, computational geometry, image processing, pattern

recognition, and information science in particular. Disciplines that traditionally

have studied the nature of human understanding and its interactions with machines

include particularly cognitive psychology, environmental psychology, cognitive

science, and artificial intelligence.

In developing a core curriculum for GIS, the University Consortium for

Geographic Information Science (UCGIS, 2003) suggests that "GIS&T should not

be defined by merely linking segments of the traditional domains (e.g.,

cartography, remote sensing, statistical analysis, locational analysis, etc.), but

rather by placing the emphasis directly upon concepts and methods for geographic

problem solving in a computational environment." GIS should not be viewed

merely as a collection

of tools and techniques. Rather, the scope of GIS&T represents "a body of

knowledge that focuses in an analytic fashion upon various aspects of spatial and

spatiotemporal information and therefore constitutes, in some of its aspects, a

science" (UCGIS, 2003).

GIS has been called or defined as an enabling technology because of the breadth of

uses in the following disciplines as a tool. Disciplines that traditionally have

studied the earth, particularly its surface and near surface in either physical or

human aspect, include geology, geophysics, oceanography, agriculture, ecology,

biogeography, environmental science, geography, global science,

sociology, political science, epidemiology, anthropology, demography,

and many more.

When the focus is largely on the utilization of GIS&T to attain solutions to real-

world problems, GIS has more of an engineering flavor, with attention being given

to both the creation and the use of complex tools and techniques that embody the

concepts of GIS&T.


Among the management and decision-making groups, for instance, GIS finds its

intensive and extensive applications in resource inventory and management, urban

planning, land records for taxation and ownership control, facilities management,

marketing and retail planning, vehicle routing and scheduling, etc. Each

application area of GIS requires a special treatment and must examine data

sources, data models, analytical methods, problem-solving approaches, and

planning and management issues.

Raster GIS and Capabilities

In GIS, data models provide rules to convert real geographic variation into discrete

objects. There are generally two major types of data models: raster and vector.

Figure 1.3 illustrates these two GIS data models. A raster model divides the entire

study area into a regular grid of cells in specific sequence (similar to pixels in

digital remote sensing imagery), with each cell containing a single value.

The raster model is a space-filling model because every location in the study area

corresponds to a cell in the raster. A set of data describing a single characteristic

for each location (cell) within a bounded geographic area forms a raster data layer.

Within a raster layer, there may be numerous zones (also named patches, regions,

and polygons), with each zone being a set of contiguous locations that exhibit the

same value. All individual zones that have the same characteristics form a class of

a raster layer. Each cell is identified by an ordered pair of coordinates (row and

column numbers) and does not have an explicit topological relationship with its

neighboring cells.

Important raster datasets used in GIS include digital land use and land cover data,

DEMs of various resolutions, digital orthoimages, and digital raster graphics

(DRGs) produced by the U.S. Geological Survey. Because remote sensing

generates digital images in raster format, it is easier to interface with a raster GIS

than any other type. It is believed that remote sensing images and information

extracted from such images, along with GPS data, have become major data sources

for raster GIS (Lillesand et al., 2008).

The analytical capabilities of a raster GIS result directly from the application of

traditional statistics and algebra to process spatial data—raster data layers. Raster

processing operations are commonly grouped into four classes based on the set of

cells in the input layer(s) participating in the computation of a value for a cell in

the output layer (Gao et al., 1996). They are (1) per-cell or local operation, (2) per-neighborhood or focal operation, (3) per-zone or zonal operation, and (4) per-

layer or global operation. During a per-cell operation, the value of each new pixel

is defined by the values of the same pixel on the input layer(s), and neighboring or

distant pixels have no effect.

In a per-neighborhood operation, the value of a pixel on the new layer is

determined by the local neighborhood of the pixel on the old data layer. Per-zone

operations treat each zone as an entity and apply algebra to the zones. Per-layer or

global operations apply algebra to all cells in the old layer to the value of a pixel on

the new layer. A suite of raster processing operations may be logically organized in

order to implement a particular data-analysis application. Cartographic modeling is

such a technique, which builds models within a raster GIS by using map algebra

(Tomlin, 1991).
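The sketch below illustrates, in Python with numpy, what local (per-cell), focal (per-neighborhood), and global (per-layer) raster operations look like in practice; the layers, weights, and variable names are invented for illustration and are not part of any particular GIS package.

import numpy as np

# Two illustrative raster layers on the same 5 x 5 grid
# (e.g., elevation in metres and rainfall in mm; values are invented).
elevation = np.array([[120, 130, 135, 140, 150],
                      [122, 128, 137, 145, 155],
                      [125, 133, 140, 150, 160],
                      [130, 138, 148, 158, 165],
                      [135, 142, 152, 162, 170]], dtype=float)
rainfall = np.random.default_rng(0).uniform(300, 600, elevation.shape)

# Local (per-cell) operation: each output cell depends only on the
# corresponding input cells, here a simple weighted suitability index.
suitability = 0.6 * (rainfall / rainfall.max()) + 0.4 * (1 - elevation / elevation.max())

# Focal (per-neighborhood) operation: each output cell is computed from a
# 3 x 3 neighborhood of the input layer, here a mean (smoothing) filter.
pad = np.pad(elevation, 1, mode="edge")
smoothed = np.zeros_like(elevation)
for i in range(elevation.shape[0]):
    for j in range(elevation.shape[1]):
        smoothed[i, j] = pad[i:i + 3, j:j + 3].mean()

# Global (per-layer) operation: a statistic computed over the whole layer.
print("mean elevation:", elevation.mean())

A cartographic model is simply a sequence of such operations, each producing a new layer that feeds the next step.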


The method of map algebra, first introduced by Tomlin, involves the application of

a sequence of processing commands to a set of input map layers to generate a

desired output layer. Each processing operation is applied to all the cells in the

layers, and the result is saved in a new layer. All the layers are referenced to a

uniform geometry, usually that of a regular square grid. Map algebra is quite

similar to traditional algebra. Most of the traditional mathematical capabilities plus

an extensive set of advanced map-processing functions are used (Berry, 1993).

Many raster GIS of the current generation have made a rich set of functions

available to the user for cartographic modeling. Among these functions are

overlay, distance and connectivity mapping, neighborhood analysis, topographic

shading analysis, watershed analysis, surface interpolation, regression analysis,

clustering, classification, and visibility analysis (Berry, 1993; Gao et al., 1996).

The overlay functions, for example, may be categorized into two approaches:

location-specific

overlay and category-specific overlay. The first approach involves the vertical

spearing of a set of layers; that is, the values assigned are a function of the point-

by-point coincidence of the existing layers.

Environmental modeling typically involves the manipulation of quantitative data

using basic arithmetic operations, as well as simple descriptive statistical

parameters. The type of function used for overlaying may vary in light of the nature of the data (e.g., nominal, ratio, etc.) being processed and the use of the resulting data.

An extreme case of this approach is dealing with Boolean images that only have

values 1 and 0. Simple arithmetic operations in this case are used to perform logic

or Boolean algebra operations. The second overlaying approach uses one layer to

identify boundaries by which information is extracted from other layers; that is, the

value assigned to a thematic region is considered as a function of the values on

other layers that occur within its boundary. Summary statistics may be applied to

this approach.
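A minimal numpy sketch of the Boolean-image case mentioned above, in which simple arithmetic on 0/1 layers reproduces logical AND and OR; the layer names and values are invented for illustration.

import numpy as np

# Boolean (0/1) layers: cells meeting two separate criteria (invented values).
slope_ok = np.array([[1, 1, 0],
                     [1, 0, 0],
                     [1, 1, 1]])
soil_ok = np.array([[1, 0, 0],
                    [1, 1, 0],
                    [0, 1, 1]])

both_ok = slope_ok * soil_ok              # arithmetic product acts as logical AND
either_ok = np.maximum(slope_ok, soil_ok)  # maximum acts as logical OR
print(both_ok)
print(either_ok)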

1.2.3 Vector GIS and Capabilities

The traditional vector data model is based on vectors. Its fundamental primitive is

a point. Lines are created by connecting points with straight lines, and areas are

defined by sets of lines. Encoding these features (i.e., points, lines, and polygons)

is based on Cartesian coordinates using the principles of Euclidean geometry. The

more advanced topological vector model, based on graph theory, encodes


geographic features using nodes, arcs, and label points. The stretch of common

boundary between two nodes forms an arc. Polygons are built by linking arcs or

are stored as a sequence of coordinates. Figure 1.4 illustrates the arc-node

topological data model.


The creation of a vector GIS database mainly involves three stages: (1) input of

spatial data, (2) input of attribute data, and (3) linking of the spatial and attribute

data.

The spatial data are entered via digitized points and lines, via scanned and vectorized lines, or directly from other digital sources. In the process of spatial data

generation, once points are entered and geometric lines are created, topology must

be built, which is followed by editing, edge matching, and other graphic spatial

correction procedures. Topology is the mathematical relationship among the

points, lines, and polygons in a vector data layer. Building topology involves

calculating and encoding relationships between the points, lines, and polygons.

Attribute data

are data that describe the spatial features, which can be keyed in or imported from

other digital databases. Attribute data often are stored and manipulated in entirely

separate ways from the spatial data, with a linkage between the two types of data

by a corresponding object ID. Various database models are used for spatial and

attribute data in vector GIS. The most widely used one is the georelational

database model (Fig. 1.5), which uses a common-feature identifier to link the

topological data model, representing spatial data and their relationships, to a

relational database management system (DBMS), representing attribute data.

The analytical capabilities of vector GIS are not quite the same as those of raster

GIS. There are more operations dealing with objects, and measures such as area

have to be calculated from coordinates of objects instead of counting cells. The

analytical functions can be broadly grouped into topological and nontopological

ones.

The former includes buffering, overlay, network analysis, and reclassification,

whereas the latter includes attribute database query, address geocoding, area

calculation, and statistical computation (Lo and Yeung, 2002). Database query

may be conducted according to the characteristics of the spatial or attribute data.

Spatial queries may use points and arcs to display the locations of all objects stored

or a subset of the data and may search within the radius of a point, within a

bounding rectangle, or within an irregular polygon. Attributes and entity types can

be displayed by varying colors, line patterns, or point symbols. Different vector

GIS systems use different ways of formulating queries, but Structured Query Language (SQL), including relational, arithmetic, and Boolean operators, is used by many systems.

Topological overlay is an important vector GIS analytical function. When two GIS

layers are overlaid, the topology is updated for the new, combined map. The result

may be information about new relationships for the old (input) maps rather than the

creation of new objects. Figure 1.6 shows various types of topological overlays.

The point-in-polygon overlay superimposes point objects on areas, thus computing

"is contained in" relationship and resulting in a new attribute for each point. The

line-on-polygon overlay superimposes line objects on area objects, thus computing

"is contained in" relationships, and "containing area" is a new attribute of each

output line.


The polygon-on-polygon (i.e., polygon) overlay superimposes two layers of area

objects. After overlaying, either of the input layers can be recreated by dissolving

and merging based on the attributes contributed by the input layer. A buffer can be

constructed around a point, line, or area, and buffering thus creates a new area

enclosing the buffered object. Sometimes width of the buffer can be determined by

an attribute of the object. Buffering may be applied to both vector and raster GIS

and has found wide applications in transportation, forestry, resource management,

etc.
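As an illustration of point-in-polygon overlay and buffering in a vector setting, the sketch below uses the open-source shapely package (one possible choice, not something prescribed by these notes); the coordinates and names are invented.

from shapely.geometry import Point, Polygon

# An area object and two point objects (coordinates are illustrative).
parcel = Polygon([(0, 0), (100, 0), (100, 60), (0, 60)])
well_a = Point(25, 30)
well_b = Point(150, 30)

# Point-in-polygon overlay: the "is contained in" relationship.
print(parcel.contains(well_a))   # True
print(parcel.contains(well_b))   # False

# Buffering: a new area enclosing the buffered object (here, a 10-unit buffer).
protection_zone = well_a.buffer(10.0)
print(protection_zone.area)            # approximately pi * 10^2
print(protection_zone.intersects(parcel))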


Lecture Eleven

Principle of Geographic Information System


Network Data Model

A network consists of a set of interconnected linear features. Examples of networks

are roads, railways, air routes, rivers and streams, and utilities networks. In GIS,

networks are modeled essentially by the arc-node topological data model, with

attributes of links, nodes, stops, centers, and turns. To apply a network data model

to real world applications, additional attributes such as travel time, impedance

(e.g., the cost associated with passing a link, stop, center, or turn), the nature of flow (i.e., one-way flow, two-way flow, closed, etc.), and supply and demand must be

included. Figure 1.7 provides an example of a network data model. Correct

topology is of foremost importance for network analysis, but correct geographic

representation is not, so long as key attributes are preserved (Heywood et al., 1998). Applications

that use a network data model include shortest-path analysis, location-allocation

analysis, transportation planning, spatial optimization, and so on.

Object-Oriented Data Model


The concept of object orientation embraces four components: (1) object oriented

user interface (e.g., icons, dialog boxes, glyphs, etc.), (2) object oriented

programming languages (e.g., Visual Basic and Visual C), (3) object-oriented

analysis, and (4) object-oriented database management (Lo and Yeung, 2002). The

object-oriented data model discussed in most GIS textbooks focuses on the last two

components. Unlike the georelational data model, which separates spatial and

attribute data and links them by using a common identifier, the object-oriented data

model views the real world as a set of individual objects. An object has a set of

properties and can perform operations on requests (Chang, 2009). Figure 1.8

illustrates the object-oriented representation approach. Features are no longer

divided into separate layers; instead, they are grouped into classes and hierarchies

of objects.


The properties of each object include geographic information (e.g., size, shape, and

location), topological information (i.e., one object relating to others), behaviors of

the object, and behaviors of the objects in relation to one another. To apply the

object-oriented data model to GIS, one must consider both the structural aspects of

objects (Worboys, 1995) and behavior aspects of the objects (Egenhofer and Frank,

1992). Basic principles for explaining the structures of objects include association,

aggregation, generalization, instantiation, and specialization, whereas those for

explaining the behaviors of objects include inheritance, encapsulation, and

polymorphism (Chang, 2009; Lo and Yeung, 2002).


Lecture Twelve

DIGITAL IMAGE ANALYSIS


CHAPTER SIX

DIGITAL IMAGE ANALYSIS

1. Introduction

Digital image analysis refers to the manipulation of digital images with the

aid of a computer.

This could range from an amateur photographer using freely available

software to adjust the contrast and brightness of pictures from her or his

digital camera to a team of scientists using neural-network classification to

map mineral types in an airborne hyperspectral image.

In this lecture, we will begin with simple and widely used methods for

enhancing digital images, correcting errors, and generally improving image

quality prior to further visual interpretation or digital analysis.

The enhancement, processing, and analysis of digital images comprise an

extremely broad subject, often involving procedures that can be

mathematically complex.

The use of computers for digital processing and analysis began in the 1960s

with early studies of airborne multispectral scanner data and digitized aerial

photographs. However, it was not until the launch of Landsat-1 in 1972 that

digital image data became widely available for land remote sensing

applications. At that time, not only was the theory and practice of digital

image processing in its infancy, but also the cost of digital computers was

very high, and their computational

efficiency was very low by modern standards.

Today, access to low-cost, efficient computer hardware and software is

commonplace, and the sources of digital image data are many and varied.


These sources range from commercial and governmental earth resource

satellite systems, to the meteorological satellites, to airborne scanner data, to

airborne digital camera data, to image data generated by photogrammetric

scanners and other high resolution digitizing systems. All of these forms of

data can be processed and analyzed using the techniques described in this

chapter.

The central idea behind digital image processing is quite simple. One or more

images are loaded into a computer. The computer is programmed to perform

calculations using an equation, or series of equations, that take pixel values

from the raw image as input. In most cases, the output will be a new digital

image whose pixel values are the result of those calculations.
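A minimal sketch of this central idea is given below, using an invented 4x4 array in place of a real single-band image; the "equation" here is simply a linear brightness transform applied to every pixel, and all values are made up for illustration.

```python
import numpy as np

# Synthetic stand-in for a single-band digital image loaded into the computer.
raw = np.array([[ 10,  20,  30,  40],
                [ 50,  60,  70,  80],
                [ 90, 100, 110, 120],
                [130, 140, 150, 160]], dtype=np.float64)

# Example equation: output = gain * input + offset, applied to every pixel.
gain, offset = 1.5, -5.0
output = gain * raw + offset

print(output.min(), output.max())  # the result is a new image, ready for display or further processing
```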

This output image may be displayed or recorded in pictorial format or may

itself be further manipulated by additional software. However, virtually all

these procedures may be categorized into one (or more) of the following

seven broad types of computer-assisted operations:

1. Image preprocessing. These operations aim to correct distorted or

degraded image data to create a more faithful representation of the

original scene and to improve an image’s utility for further

manipulation later on. This typically involves the initial processing of

raw image data

- to eliminate noise present in the data,

- to calibrate the data radiometrically,

- to correct for geometric distortions, and

- to expand or contract the extent of an image via mosaicking or

subsetting.

These procedures are often termed preprocessing operations because

they normally precede further manipulation and analysis of the image

data to extract specific information.


2. Image enhancement. These procedures are applied to image data in

order to more effectively render the data for subsequent interpretation.

In many cases, image enhancement involves techniques for


heightening the visual distinctions among features in a scene,

ultimately increasing the amount of information that can be interpreted

from the data.

There are various approaches to image enhancement:

- manipulating the contrast of an image (level slicing and contrast stretching); a minimal contrast-stretch sketch follows this list.


- spatial feature manipulation (spatial filtering, convolution, edge

enhancement, and Fourier analysis).

- enhancements involving multiple spectral bands of imagery (spectral

ratioing, principal and canonical components, vegetation components, and

intensity–hue–saturation color space transformations).
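The sketch below illustrates the simplest of these approaches, a linear (min-max) contrast stretch that maps the observed range of a band onto the full 0-255 display range; the input band is a small synthetic array rather than real imagery.

```python
import numpy as np

# Synthetic low-contrast band; real data would be read from an image file.
band = np.array([[ 60,  65,  70],
                 [ 75,  80,  85],
                 [ 90,  95, 100]], dtype=np.float64)

lo, hi = band.min(), band.max()
stretched = (band - lo) / (hi - lo) * 255.0   # map [lo, hi] onto 0-255
stretched = stretched.astype(np.uint8)

print(stretched)
```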


Lecture Thirteen

DIGITAL IMAGE ANALYSIS


3. Image classification. The objective of image classification is to replace

visual interpretation of image data with quantitative techniques for

automating the identification of features in a scene.

This normally involves the analysis of multiple bands of image data

(typically multispectral, multitemporal, or other sources of complementary

information) and the application of statistically based decision rules for

determining the land cover identity of each pixel in an image.
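As a simple illustration of such a statistically based decision rule, the sketch below applies a minimum-distance-to-means classifier to a tiny two-band image: each pixel is assigned to the land cover class whose mean spectral vector is closest. The class means and pixel values are invented for illustration.

```python
import numpy as np

# Hypothetical class mean vectors [band1, band2] derived from training data.
class_means = {
    "water":      np.array([20.0, 10.0]),
    "vegetation": np.array([40.0, 90.0]),
    "bare_soil":  np.array([80.0, 60.0]),
}

# Synthetic two-band image, shape (rows, cols, bands).
image = np.array([[[22, 12], [41, 88]],
                  [[79, 61], [35, 85]]], dtype=np.float64)

rows, cols, _ = image.shape
labels = np.empty((rows, cols), dtype=object)
for r in range(rows):
    for c in range(cols):
        pixel = image[r, c]
        # Assign the class whose mean is nearest in spectral space.
        labels[r, c] = min(class_means,
                           key=lambda k: np.linalg.norm(pixel - class_means[k]))

print(labels)
```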

4. Analysis of change over time. Many

remote sensing projects involve the analysis of two or more images from

different points in time, to determine the extent and nature of changes over

time. Change detection methods are applied to identify specific areas of

change. With the proliferation of temporally rich data sets that offer dozens

or hundreds of images of a given area over a period of weeks or years, there

is also a new interest in time-series analysis of remotely sensed imagery,


such as the analysis of seasonal cycles, interannual variability, and long-term

trends in dense multitemporal data sets.
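As a simple illustration, the sketch below applies one common change-detection approach, image differencing with a threshold, to two small synthetic images assumed to be co-registered views of the same area on different dates; the threshold value is an analyst's choice and is invented here.

```python
import numpy as np

# Synthetic single-band images of the same area from two dates.
date1 = np.array([[100, 102,  98],
                  [ 95, 140,  97],
                  [101,  99,  96]], dtype=np.float64)
date2 = np.array([[101, 103,  99],
                  [ 96,  60,  98],
                  [100, 100,  95]], dtype=np.float64)

difference = date2 - date1
threshold = 20.0                      # hypothetical; depends on sensor and application
changed = np.abs(difference) > threshold

print(changed)  # True where a significant change between the two dates is flagged
```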

5. Data fusion and GIS integration. These procedures are used to combine

image data for a given geographic area with other geographically referenced

data sets for the same area. These other data sets might simply consist of

image data generated on other dates by the same sensor or by other remote

sensing systems. Frequently, the intent of data merging is to combine

remotely sensed data with other ancillary sources of information in the

context of a GIS. For example, image data are often combined with soil,

topographic, ownership, zoning, and assessment information.
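A minimal sketch of such data merging is given below: a synthetic vegetation-index grid is summarized by the classes of a co-registered soil map, standing in for the kind of image and ancillary GIS layers described above. Both grids and their values are invented, and perfect co-registration is assumed.

```python
import numpy as np

# Remotely sensed layer (synthetic NDVI values) and an ancillary GIS layer
# (coded soil classes), assumed to share the same grid.
ndvi      = np.array([[0.2, 0.6, 0.7],
                      [0.1, 0.5, 0.8],
                      [0.3, 0.4, 0.6]])
soil_type = np.array([[1, 2, 2],
                      [1, 1, 2],
                      [3, 3, 2]])

# Example of integration: mean vegetation index per soil class.
for code in np.unique(soil_type):
    print(code, ndvi[soil_type == code].mean())
```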

6. Hyperspectral image analysis. Virtually all of the image processing

principles introduced in this chapter in the context of multispectral image

analysis may be extended directly to the analysis of hyperspectral data.

However, the basic nature and sheer volume of hyperspectral data sets are such

that various image processing procedures have been developed to analyze

such data specifically.
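One procedure commonly associated with hyperspectral analysis, offered here only as an illustrative example rather than as part of this chapter's text, is the spectral angle mapper. The sketch below computes the spectral angle between a synthetic pixel spectrum and a reference spectrum treated as vectors in n-band space; a small angle indicates a close match to the reference material.

```python
import numpy as np

# Synthetic reflectance spectra (one value per band).
pixel     = np.array([0.12, 0.18, 0.25, 0.40, 0.45, 0.43])
reference = np.array([0.10, 0.17, 0.24, 0.42, 0.47, 0.44])  # library or field spectrum

cos_angle = np.dot(pixel, reference) / (np.linalg.norm(pixel) * np.linalg.norm(reference))
angle_rad = np.arccos(np.clip(cos_angle, -1.0, 1.0))

print(angle_rad)  # spectral angle in radians
```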

7. Biophysical modeling. The objective of biophysical modeling is to relate

quantitatively the digital data recorded by a remote sensing system to

biophysical features and phenomena measured on the ground. For example,

remotely sensed data might be used to estimate such varied parameters as

crop yield, pollution concentration, or water depth. A biophysical model is a

simulation of a biological system using mathematical formalizations of the

physical properties of that system. Such models can be used to predict the

influence of biological and physical factors on complex systems.

Remote sensing may be used to collect information on several important

biophysical variables. This capability is in addition to the standard use of


remotely sensed data for land-use and land-cover mapping. The status of

work on remotely sensing nine fundamental biophysical variables is

presented, including planimetric (x, y) location, topographic-bathymetric (x,

y, z) elevation, color and the spectral signature of features, vegetation

chlorophyll absorption characteristics, vegetation biomass, vegetation

moisture content, soil moisture content, surface temperature, and texture or

surface roughness. Remotely sensed ratio-scaled data on these biophysical

variables may be more amenable for use in models and simulations than the

nominal-scale land-use data. Emphasis is placed on current remote sensing

capability and future potential and on the optimum spectral region(s) for

sensing these biophysical variables.
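As a minimal illustration of biophysical modeling, the sketch below fits a simple linear regression between NDVI values extracted from imagery at field plots and biomass measured on the ground, then applies the fitted model to estimate biomass for a new pixel. All numbers are invented training samples, not real measurements.

```python
import numpy as np

# Hypothetical calibration data: image-derived NDVI at plots vs. field biomass (t/ha).
ndvi    = np.array([0.20, 0.35, 0.50, 0.62, 0.75])
biomass = np.array([0.8,  1.6,  2.5,  3.1,  3.9])

# Fit biomass = slope * NDVI + intercept.
slope, intercept = np.polyfit(ndvi, biomass, 1)

# Apply the model to a new pixel's NDVI to estimate biomass there.
print(slope * 0.55 + intercept)
```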

2. Preprocessing of Images

In nearly every case, there are certain preprocessing operations that are

performed on the raw data of an image prior to its use in any further

enhancement, interpretation, or analysis. Some of these operations are

designed to correct flaws in the data, while others make the data more

amenable to further processing. In this section we will discuss these

preprocessing operations under the general headings of noise removal,

radiometric correction, geometric correction, and image subsetting and

mosaicking.

Some of these operations may be performed by the data provider, before the

imagery is provided to the analysts who will be interpreting it. In other cases,

the users may need to perform one or more of these preprocessing steps

themselves.
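As one small example of such a preprocessing step, the sketch below performs a radiometric calibration that converts raw digital numbers to at-sensor radiance with a linear gain and offset. The coefficients and digital numbers are hypothetical; in practice the calibration values come from the sensor's metadata.

```python
import numpy as np

# Synthetic raw digital numbers (DN) for one band.
dn = np.array([[45, 52, 61],
               [58, 70, 66],
               [49, 55, 63]], dtype=np.float64)

gain, offset = 0.037, 3.2        # hypothetical calibration coefficients from metadata
radiance = gain * dn + offset    # at-sensor radiance, e.g. W m^-2 sr^-1 um^-1

print(radiance)
```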


Lecture Fourteen

Application of Remote Sensing in Water Resources

Presentation (project)