
An Automatic Wavelet-Based Nonlinear Image Enhancement Technique for Aerial Imagery


INDEX

1. INTRODUCTION…………………………………….

2. ABSTRACT………………………………………………

3. IMAGE PROCESSING………………………………

3.1 DIGITAL IMAGE PROCESSING………………..

3.2 IMAGE PROCESSING AND ANALYSIS……………

3.3 IMAGE RESOLUTION…………………….

3.4 HOW TO IMPROVE YOUR IMAGE…………….

3.5 PREPROCESSING OF REMOTELY SENSED IMAGES……

3.6 APPLICATIONS……………………………..

4. AERIAL IMAGERY……………………………………..

4.1 HISTORY…………………………………………………..

4.2 USES OF AERIAL IMAGERY…………………………….

4.3 TYPES OF AERIAL PHOTOGRAPHY………………

4.4 AERIAL VIDEO…………………………………………

5. NON-LINEAR IMAGE ENHANCEMENT TECHNIQUE………….

5.1 PROPOSED METHOD……………………………….

5.2 AUTOMATIC IMAGE ENHANCEMENT…………………..

5.3 IMAGE EDITORS…………………………………

6. WAVELETS…………………………………………………….

6.1 CONTINUOUS WAVELET TRANSFORM…………………..

6.2 DISCRETE WAVELET TRANSFORM………………………

6.3 SUBBAND CODING AND MULTIRESOLUTION ANALYSIS……….

7. ALGORITHM……………………………………………………………..

7.1 HISTOGRAM ADJUSTMENT………………………………….

7.2 WAVELET BASED DYNAMIC RANGE COMPRESSION AND

CONTRAST ENHANCEMENT……………………………….

7.3 COLOR RESTORATION………………………………………..

8. CONCLUSION……………………………………………………………………….

9. REFERENCES………………………………………………………………………..


LIST OF TABLES:

1. ELEMENTS OF IMAGE INTERPRETATION……………………………..

LIST OF FIGURES:

1. AERIAL IMAGE OF A TEST BUILDING………………………………….

2. CONTINUOUS WAVELET TRANSFORM EXAMPLE………………………

3. CWT OF COSINE SIGNALS…………………………………………

4. DECOMPOSITION LEVELS………………………….

5. DWT COEFFICIENTS…………………………………………………………..


ABSTRACT

Recently we proposed a wavelet-based dynamic range compression algorithm to improve the visual quality of digital images captured in high dynamic range scenes with non-uniform lighting conditions. The fast image enhancement algorithm, which provides dynamic range compression while preserving local contrast and tonal rendition, is a very good candidate for aerial imagery applications such as image interpretation for defense and security tasks.

This algorithm can further be applied to video streaming for aviation safety. In this project, the latest version of the proposed algorithm, which enhances aerial images so that they convey more than direct human observation, is presented. The results obtained by applying the algorithm to numerous aerial images show strong robustness and high image quality.


Introduction

Aerial images captured from aircraft, spacecraft, or satellites usually suffer from a lack of clarity, since the atmosphere enclosing the Earth affects the images, for example through turbidity caused by haze, fog, clouds, or heavy rain. The visibility in such aerial images may decrease drastically, and sometimes the conditions under which the images are taken lead to near-zero visibility even for the human eye. Even though human observers may not see much more than smoke, there may exist useful information in images taken under such poor conditions. Captured images are usually not the same as what we see in a real-world scene, and are generally a poor rendition of it.

The high dynamic range of real-life scenes and the limited dynamic range of imaging devices result in images with locally poor contrast. The human visual system (HVS) deals with high dynamic range scenes by compressing the dynamic range and adapting locally to each part of the scene. There are some exceptions, such as turbid (e.g., fog, heavy rain, or snow) imaging conditions, under which the acquired images and direct observation are in close parity. The extremely narrow dynamic range of such scenes leads to extremely low contrast in the acquired images.

To deal with the problems caused by the limited dynamic range of imaging devices, many image processing algorithms have been developed. These algorithms also provide contrast enhancement to some extent. Recently we developed a wavelet-based dynamic range compression (WDRC) algorithm to improve the visual quality of digital images of high dynamic range scenes with non-uniform lighting conditions. The WDRC algorithm was later modified by introducing a histogram adjustment and a non-linear color restoration process, so that it provides color constancy and deals with "pathological" scenes having very strong spectral characteristics in a single band. The fast image enhancement algorithm, which provides dynamic range compression while preserving local contrast and tonal rendition, is a very good candidate for aerial imagery applications such as image interpretation for defense and security tasks. This algorithm can further be applied to video streaming for aviation safety. In this project, the application of the WDRC algorithm to aerial imagery is presented. The results obtained from a large variety of aerial images show strong robustness and high image quality, indicating promise for aerial imagery during poor-visibility flight conditions.


Image Processing

In electrical engineering and computer science, image processing is any form of signal processing for which the input is an image, such as a photograph or video frame; the output of image processing may be either an image or a set of characteristics or parameters related to the image. Most image-processing techniques involve treating the image as a two-dimensional signal and applying standard signal-processing techniques to it.

Image processing usually refers to digital image processing, but optical and analog image processing are also possible. This section covers general techniques that apply to all of them. The acquisition of images (producing the input image in the first place) is referred to as imaging.

Image processing is a physical process used to convert an image signal into a physical image. The image signal can be either digital or analog. The actual output itself can be an actual physical image or the characteristics of an image.

The most common type of image processing is photography. In this process, an image is captured using a camera to create a digital or analog image. In order to produce a physical picture, the image is processed using the appropriate technology based on the input source type.

In digital photography, the image is stored as a computer file. This file is translated using photographic software to generate an actual image. The colors, shading, and nuances are all captured at the time the photograph is taken, and the software translates this information into an image.

Typical image processing operations include:

Euclidean geometry transformations such as enlargement, reduction, and rotation

Color corrections such as brightness and contrast adjustments, color mapping, color balancing, quantization, or color translation to a different color space

Digital compositing or optical compositing (combination of two or more images), which is used in film-making to make a "matte"

Interpolation, demosaicing, and recovery of a full image from a raw image format using a Bayer filter pattern

Image registration, the alignment of two or more images

Image differencing and morphing

Image recognition, for example, to extract text from an image using optical character recognition, or checkbox and bubble values using optical mark recognition

Image segmentation

High dynamic range imaging by combining multiple images

Geometric hashing for 2-D object recognition with affine invariance

Digital image processing

Digital image processing is the use of computer algorithms to perform image processing on digital images. As a subcategory or field of digital signal processing, digital image processing has many advantages over analog image processing. It allows a much wider range of algorithms to be applied to the input data and can avoid problems such as the build-up of noise and signal distortion during processing. Since images are defined over two dimensions (perhaps more), digital image processing may be modeled in the form of multidimensional systems.

Many of the techniques of digital image processing, or digital picture processing as it often was

called, were developed in the 1960s at the Jet Propulsion Laboratory, Massachusetts Institute of

Technology, Bell Laboratories, University of Maryland, and a few other research facilities, with

application to satellite imagery, wire-photo standards conversion, medical imaging, videophone,

character recognition, and photograph enhancement. The cost of processing was fairly high,

however, with the computing equipment of that era. That changed in the 1970s, when digital

image processing proliferated as cheaper computers and dedicated hardware became available.

Images then could be processed in real time, for some dedicated problems such as television

standards conversion. As general-purpose computers became faster, they started to take over the

role of dedicated hardware for all but the most specialized and computer-intensive operations.

With the fast computers and signal processors available in the 2000s, digital image processing has become the most common form of image processing, and is generally used because it is not only the most versatile method but also the cheapest.


Digital image processing technology for medical applications was inducted into the Space

Foundation Space Technology Hall of Fame in 1994.

Digital image processing allows the use of much more complex algorithms for image processing,

and hence, can offer both more sophisticated performance at simple tasks, and the

implementation of methods which would be impossible by analog means.

In particular, digital image processing is the only practical technology for:

Classification

Feature extraction

Pattern recognition

Projection

Multi-scale signal analysis

Some techniques which are used in digital image processing include:

Pixelization

Linear filtering

Principal components analysis

Independent component analysis

Hidden Markov models

Anisotropic diffusion

Partial differential equations

Self-organizing maps

Neural networks

Wavelets

Image Processing and Analysis

Introduction

Image processing and analysis can be defined as the "act of examining images for the purpose of identifying objects and judging their significance." Image analysts study the remotely sensed data and attempt, through a logical process, to detect, identify, classify, measure, and evaluate the significance of physical and cultural objects, their patterns, and their spatial relationships.

Digital Data

In the most general sense, a digital image is an array of numbers depicting the spatial distribution of a certain field parameter (such as reflectivity of EM radiation, emissivity, temperature, or some geophysical or topographical elevation). A digital image consists of discrete picture elements called pixels. Associated with each pixel is a number, represented as a DN (digital number), that depicts the average radiance of a relatively small area within a scene. The range of DN values is normally 0 to 255. The size of this area affects the reproduction of details within the scene: as the pixel size is reduced, more scene detail is preserved in the digital representation.

Remote sensing images are recorded in digital form and then processed by computers to produce images for interpretation purposes. Images are available in two forms - photographic film form and digital form. Variations in the scene characteristics are represented as variations in brightness on photographic film. A part of the scene reflecting more energy will appear bright, while a part of the same scene reflecting less energy will appear dark.


Data Formats For Digital Satellite Imagery

Digital data from the various satellite systems is supplied to the user in the form of computer-readable tapes or CD-ROM. No worldwide standard for the storage and transfer of remotely sensed data has been agreed upon, though the CEOS (Committee on Earth Observation Satellites) format is becoming accepted as the standard. Digital remote sensing data are often organised using one of three common formats for image data. For instance, consider an image consisting of four spectral channels, which can be visualised as four superimposed images, with corresponding pixels in one band registering exactly to those in the other bands.

These common formats, whose pixel orderings are sketched below, are:

Band Interleaved by Pixel (BIP)

Band Interleaved by Line (BIL)

Band Sequential (BSQ)
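A minimal sketch of the three pixel orderings, using NumPy and a hypothetical 2-band, 2x2-pixel image; the array values are made up purely for illustration:

import numpy as np

# Hypothetical 2-band, 2x2-pixel image: band 1 holds 1-4, band 2 holds 10-40.
bands = np.array([[[1, 2], [3, 4]],
                  [[10, 20], [30, 40]]])      # shape: (band, row, column)

bsq = bands.reshape(-1)                       # Band Sequential: all of band 1, then all of band 2
bil = bands.transpose(1, 0, 2).reshape(-1)    # Band Interleaved by Line: row 1 of each band, then row 2
bip = bands.transpose(1, 2, 0).reshape(-1)    # Band Interleaved by Pixel: both band values for each pixel

print(bsq)   # [ 1  2  3  4 10 20 30 40]
print(bil)   # [ 1  2 10 20  3  4 30 40]
print(bip)   # [ 1 10  2 20  3 30  4 40]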

Digital image analysis is usually conducted using raster data structures: each image is treated as an array of values. This offers advantages for the manipulation of pixel values by the image processing system, as it is easy to find and locate pixels and their values. Disadvantages become apparent when one needs to represent the array of pixels as discrete patches or regions, whereas vector data structures use polygonal patches and their boundaries as the fundamental units for analysis and manipulation. The vector format is, however, not appropriate for digital analysis of remotely sensed data.

Image Resolution

Resolution can be defined as "the ability of an imaging system to record fine details in a distinguishable manner." A working knowledge of resolution is essential for understanding both practical and conceptual details of remote sensing. Along with the actual positioning of spectral bands, the resolutions are of paramount importance in determining the suitability of remotely sensed data for a given application. The major characteristics of an imaging remote sensing instrument operating in the visible and infrared spectral region are described in terms of the following:

Spectral resolution

Radiometric resolution

Spatial resolution

Temporal resolution

Spectral resolution refers to the width of the spectral bands. Different materials on the earth's surface exhibit different spectral reflectances and emissivities, and these spectral characteristics define the spectral positions and spectral sensitivities needed to distinguish materials. There is a tradeoff between spectral resolution and signal-to-noise ratio. The use of well-chosen and sufficiently numerous spectral bands is therefore a necessity if different targets are to be successfully identified on remotely sensed images.

Radiometric resolution, or radiometric sensitivity, refers to the number of digital levels used to express the data collected by the sensor. It is commonly expressed as the number of bits (binary digits) needed to store the maximum level. For example, Landsat TM data are quantised to 256 levels (equivalent to 8 bits). Here also there is a tradeoff between radiometric resolution and signal-to-noise ratio: there is no point in having a step size smaller than the noise level in the data. A low-quality instrument with a high noise level would therefore necessarily have a lower radiometric resolution than a high-quality, high signal-to-noise-ratio instrument. Higher radiometric resolution may also conflict with data storage and transmission rates.
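As a quick restatement of the quantisation arithmetic above, in code:

bits = 8              # radiometric resolution of Landsat TM
levels = 2 ** bits    # number of distinguishable digital levels
print(levels)         # 256, i.e. DN values 0 to 255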

Spatial resolution of an imaging system is defined through various criteria: the geometric properties of the imaging system, the ability to distinguish between point targets, the ability to measure the periodicity of repetitive targets, and the ability to measure the spectral properties of small targets.

The most commonly quoted quantity is the instantaneous field of view (IFOV), which is the angle subtended by the geometrical projection of a single detector element onto the Earth's surface. It may also be given as the distance D measured along the ground, in which case the IFOV clearly depends on sensor height, from the relation D = h*b, where h is the height and b is the angular IFOV in radians. An alternative measure of the IFOV is based on the point spread function (PSF), e.g., the width of the PSF at half its maximum value.
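A short worked example of the D = h*b relation; the altitude and angular IFOV below are assumed round numbers, roughly of the Landsat TM class:

h = 705_000          # sensor altitude in metres (assumed example value)
b = 42.5e-6          # angular IFOV in radians (assumed example value)
D = h * b            # ground-projected IFOV
print(round(D, 1))   # about 30.0 m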

A problem with the IFOV definition, however, is that it is a purely geometric definition and does not

take into account spectral properties of the target. The effective resolution element (ERE) has

been defined as "the size of an area for which a single radiance value can be assigned with

reasonable assurance that the response is within 5% of the value representing the actual relative

radiance". Being based on actual image data, this quantity may be more useful in some situations

than the IFOV.


Other methods of defining the spatial resolving power of a sensor are based on the ability of the device to distinguish between specified targets. One such measure concerns the ratio of the modulation of the image to that of the real target. Modulation, M, is defined as:

M = (Emax - Emin) / (Emax + Emin)

where Emax and Emin are the maximum and minimum radiance values recorded over the image.
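A minimal sketch of the modulation computation, assuming the recorded radiances are held in a NumPy array:

import numpy as np

def modulation(radiance):
    # M = (Emax - Emin) / (Emax + Emin) over the recorded radiance values
    e_max, e_min = float(radiance.max()), float(radiance.min())
    return (e_max - e_min) / (e_max + e_min)

print(modulation(np.array([40.0, 55.0, 90.0, 120.0])))   # 0.5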

Temporal resolution refers to the frequency with which images of a given geographic location can be acquired. Satellites not only offer the best chance of frequent data coverage but also of regular coverage. The temporal resolution is determined by orbital characteristics and the swath width, the width of the imaged area. The swath width is given by 2*h*tan(FOV/2), where h is the altitude of the sensor and FOV is the angular field of view of the sensor.
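A worked example of the swath-width relation; the altitude and field of view are assumed values chosen to be roughly Landsat-like:

import math

h_km = 705.0      # sensor altitude (assumed example value)
fov_deg = 15.0    # angular field of view (assumed example value)
swath_km = 2 * h_km * math.tan(math.radians(fov_deg / 2))
print(round(swath_km, 1))   # about 185.6 km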

How to Improve Your Image?

Analysis of remotely sensed data is done using various image processing techniques and methods that include:

Analog image processing

Digital image processing

Visual or analog processing techniques are applied to hard-copy data such as photographs or printouts. Image analysis in visual techniques adopts certain elements of interpretation, which are as follows:

The use of these fundamental elements of interpretation depends not only on the area being studied but also on the knowledge the analyst has of the study area. For example, the texture of an object is very useful in distinguishing objects that may appear the same if judged solely on tone (e.g., water and tree canopy may have the same mean brightness values, but their textures are quite different). Association is a very powerful image analysis tool when coupled with general knowledge of the site. Thus we are adept at applying collateral data and personal knowledge to the task of image processing. Examining remotely sensed data with a multi-concept approach (multispectral, multitemporal, multiscale, and multidisciplinary) allows us to make a judgement not only as to what an object is but also as to its importance. Apart from these, analog image processing also includes optical photogrammetric techniques that allow precise measurement of the height, width, location, etc., of an object.

Elements of Image Interpretation

Primary Elements: Black and White Tone, Color, Stereoscopic Parallax

Spatial Arrangement of Tone & Color: Size, Shape, Texture, Pattern

Based on Analysis of Primary Elements: Height, Shadow

Contextual Elements: Site, Association

Digital image processing is a collection of techniques for the manipulation of digital images by computers. The raw data received from the imaging sensors on the satellite platforms contains flaws and deficiencies. To overcome these flaws and deficiencies and to recover the original information in the data, the imagery needs to undergo several steps of processing. These steps vary from image to image depending on the image format, the initial condition of the image, the information of interest, and the composition of the image scene. Digital image processing involves three general steps:

Pre-processing

Display and enhancement

Information extraction

Pre-processing consists of those operations that prepare the data for subsequent analysis by attempting to correct or compensate for systematic errors. The digital imagery is subjected to several corrections, such as geometric, radiometric, and atmospheric corrections, though all of these corrections might not necessarily be applied in every case. These errors are systematic and can be removed before the data reach the user. The investigator should decide which pre-processing techniques are relevant on the basis of the nature of the information to be extracted from the remotely sensed data.

After pre-processing is complete, the analyst may use feature extraction to reduce the

dimensionality of the data. Thus feature extraction is the process of isolating the most useful

components of the data for further study while discarding the less useful aspects (errors, noise

etc). Feature extraction reduces the number of variables that must be examined, thereby saving

time and resources.

Image enhancement operations are carried out to improve the interpretability of the image by increasing the apparent contrast among various features in the scene. The enhancement techniques depend mainly upon two factors:

The digital data (i.e., its spectral bands and resolution)

The objectives of interpretation

As an image enhancement technique often drastically alters the original numeric data, it is

normally used only for visual (manual) interpretation and not for further numeric analysis.

Common enhancements include image reduction, image rectification, image magnification,

transect extraction, contrast adjustments, band ratioing, spatial filtering, Fourier transformations,

principal component analysis and texture transformation.

Information extraction is the last step toward the final output of the image analysis. After pre-processing and image enhancement, the remotely sensed data is subjected to quantitative analysis to assign individual pixels to specific classes. Classification of the image uses pixels of known identity to classify the remainder of the image, which consists of pixels of unknown identity. After classification is complete, it is necessary to evaluate its accuracy by comparing the categories on the classified images with areas of known identity on the ground. The final result of the analysis consists of maps (or images), data, and a report. These three components of the result provide the user with full information concerning the source data, the method of analysis, and the outcome and its reliability.

Pre-Processing of the Remotely Sensed Images

When remotely sensed data is received from the imaging sensors on the satellite platforms, it contains flaws and deficiencies. Pre-processing refers to those operations that are preliminary to the main analysis. Pre-processing includes a wide range of operations, from the very simple to extremes of abstractness and complexity. These are categorized as follows:

1. Feature Extraction

2. Radiometric Corrections

3. Geometric Corrections

4. Atmospheric Correction

These corrections involve the removal of unwanted and distracting elements, such as image/system noise, atmospheric interference, and sensor motion, from the image data; such effects arise from limitations in the sensing, signal digitization, or data recording and transmission processes. After removal of these effects, the digital data are said to be "restored" to their correct or original condition, although we can of course never know what the correct values might be, and must always remember that attempts to correct the data may themselves introduce errors. Thus image restoration includes the efforts to correct for both radiometric and geometric errors.

Feature Extraction

Feature Extraction does not mean geographical features visible on the image but rather

"statistical" characteristics of image data like individual bands or combination of band values

that carry information concerning systematic variation within the scene. Thus in a multispectral

data it helps in portraying the necessity elements of the image. It also reduces the number of

spectral bands that has to be analyzed. After the feature extraction is complete the analyst can

work with the desired channels or bands, but inturn the individual bandwidths are more potent

for information. Finally such a pre-processing increases the speed and reduces the cost of

Page 17: Areial Image

analysis.

Radiometric Corrections

Radiometric corrections are needed because, when image data are recorded by the sensors, they contain errors in the measured brightness values of the pixels. These errors are referred to as radiometric errors and can result from:

1. The instruments used to record the data

2. The effect of the atmosphere

Radiometric processing influences the brightness values of an image to correct for sensor malfunctions or to adjust the values to compensate for atmospheric degradation. Radiometric distortion can be of two types:

1. The relative distribution of brightness over an image in a given band can be different from that in the ground scene.

2. The relative brightness of a single pixel from band to band can be distorted compared with the spectral reflectance character of the corresponding region on the ground.

Applications

Digital camera images

Digital cameras generally include dedicated digital image processing chips to convert the

raw data from the image sensor into a color-corrected image in a standard image file

format. Images from digital cameras often receive further processing to improve their

quality, a distinct advantage that digital cameras have over film cameras. The digital

image processing typically is executed by special software programs that can manipulate

the images in many ways.

Many digital cameras also enable viewing of histograms of images, as an aid for the

photographer to understand the rendered brightness range of each shot more readily.


Aerial imagery

Aerial imagery can expose a great deal about soil and crop conditions. The "bird's eye" view an aerial image provides, combined with field knowledge, allows growers to observe issues that affect yield. Our imagery technology enhances the ability to be proactive and recognize a problematic area, thus minimizing yield loss and limiting exposure to other areas of your field. Hemisphere GPS Imagery uses infrared technology to help you see the big picture and identify these everyday issues. Digital infrared sensors are very sensitive to subtle differences in plant health and growth rate. Anything that changes the appearance of leaves (such as curling, wilting, and defoliation) has an effect on the image. Computer enhancement makes these variations within the canopy stand out, often indicating disease, water, weed, or fertility problems.


Because of Hemisphere GPS technology, aerial imagery is over 30 times more detailed than any commercially available satellite imagery and is available in selected areas for the 2010 growing season. Images can be taken on a scheduled or as-needed basis. Aerial images provide a snapshot of the crop condition. The example on the right shows healthy crop conditions in red and less-than-healthy conditions in green. These snapshots of crop variations can then be turned into variable rate prescription maps (PMaps), also shown on the right. Imagery can be used to identify crop stress over a period of time. In the images to the left, the problem areas identified with yellow arrows show potential plant damage (e.g., disease, insects, etc.).

Aerial images, however, store information about the electromagnetic radiance of the complete scene in almost continuous form. Therefore they support the localization of break lines and of linear or spatial objects.

The Map Mart Aerial Image Library covers all of the continental United States as well as a growing number of international locations. The aerial imagery ranges in date from 1926 to the present day, depending upon the location. Imagery can be requested and ordered by selecting an area on an interactive map, or larger areas, such as cities or counties, can be purchased in bundles. Many of the current digital datasets are available for download within a few minutes of purchase.

Aerial image measurement includes non-linear, 3-dimensional, and materials effects on imaging. Aerial image measurement excludes the processing effects of printing and etching on the wafer. Aerial image emulation has also been applied successfully to CDU measurement; traditionally, aerial image metrology systems are used to evaluate defect printability and repair success.


Figure: Aerial image of a test building

Digital aerial imagery should remain in the public domain and be archived to secure its availability for future scientific, legal, and historical purposes.

Aerial photography is the taking of photographs of the ground from an elevated position.

The term usually refers to images in which the camera is not supported by a ground-based

structure. Cameras may be hand held or mounted, and photographs may be taken by a

photographer, triggered remotely or triggered automatically. Platforms for aerial photography

include fixed-wing aircraft, helicopters, balloons, blimps and dirigibles, rockets, kites, poles, parachutes, and vehicle-mounted poles. Aerial photography should not be confused with air-to-air photography, in which aircraft serve both as the photo platform and the subject.

History

Aerial photography was first practiced by the French photographer and balloonist

Gaspard-Félix Tournachon, known as "Nadar", in 1858 over Paris, France.


The first use of a motion picture camera mounted to a heavier-than-air aircraft took

place on April 24, 1909 over Rome in the 3:28 silent film short, Wilbur Wright und seine

Flugmaschine.

The first special semiautomatic aerial camera was designed in 1911 by the Russian military engineer Colonel Potte V. F.[2] This aerial camera was used during World War I.

The use of aerial photography for military purposes was expanded during World War I by many other aviators, such as Fred Zinn. One of the first notable battles was that of Neuve Chapelle.

With the advent of inexpensive digital cameras, many people now take candid

photographs from commercial aircraft and increasingly from general aviation aircraft on private

pleasure flights.

Uses of aerial imagery

Aerial photography is used in cartography (particularly in photogrammetric

surveys, which are often the basis for topographic maps), land-use planning, archaeology, movie

production, environmental studies, surveillance, commercial advertising, conveyancing, and

artistic projects. In the United States, aerial photographs are used in many Phase I Environmental

Site Assessments for property analysis. Aerial photos are often processed using GIS software.


Radio-controlled aircraft

Advances in radio controlled models have made it possible for model aircraft to

conduct low-altitude aerial photography. This has benefited real-estate advertising, where

commercial and residential properties are the photographic subject. Full-size, manned aircraft are

prohibited from low flights above populated locations.[3] Small scale model aircraft offer

increased photographic access to these previously restricted areas. Miniature vehicles do not

replace full size aircraft, as full size aircraft are capable of longer flight times, higher altitudes,

and greater equipment payloads. They are, however, useful in any situation in which a full-scale

aircraft would be dangerous to operate. Examples would include the inspection of transformers

atop power transmission lines and slow, low-level flight over agricultural fields, both of which

can be accomplished by a large-scale radio controlled helicopter. Professional-grade,

gyroscopically stabilized camera platforms are available for use under such a model; a large

model helicopter with a 26cc gasoline engine can hoist a payload of approximately seven

kilograms (15 lbs).

Recent (2006) FAA regulations grounding all commercial RC model flights have been upgraded to require formal FAA certification before permission is granted to fly at any altitude in the USA.

Because anything capable of being viewed from a public space is considered outside the

realm of privacy in the United States, aerial photography may legally document features and

occurrences on private property.

Types of aerial photograph

Oblique photographs

Photographs taken at an angle are called oblique photographs. Those taken from almost straight down are sometimes called low oblique, and photographs taken from a shallow angle are called high oblique.


Vertical photographs

Vertical photographs are taken straight down. They are mainly used in photogrammetry and image interpretation. Pictures that are to be used in photogrammetry were traditionally taken with special large-format cameras with calibrated and documented geometric properties.

Combinations

Aerial photographs are often combined. Depending on their purpose it can be done

in several ways. A few are listed below.

Several photographs can be taken with one handheld camera and later stitched together into a panorama.

In pictometry five rigidly mounted cameras provide one vertical and four low oblique

pictures that can be used together.

In some digital cameras for aerial photogrammetry photographs from several imaging

elements, sometimes with separate lenses, are geometrically corrected and combined to

one photograph in the camera.

Orthophotos

Vertical photographs are often used to create orthophotos, photographs which have

been geometrically "corrected" so as to be usable as a map. In other words, an orthophoto is a

simulation of a photograph taken from an infinite distance, looking straight down from nadir.

Perspective must obviously be removed, but variations in terrain should also be corrected for.

Multiple geometric transformations are applied to the image, depending on the perspective and

terrain corrections required on a particular part of the image.

Orthophotos are commonly used in geographic information systems, such as are used

by mapping agencies (e.g. Ordnance Survey) to create maps. Once the images have been aligned,

or 'registered', with known real-world coordinates, they can be widely deployed.

Large sets of orthophotos, typically derived from multiple sources and divided into "tiles" (each

typically 256 x 256 pixels in size), are widely used in online map systems such as Google Maps.


OpenStreetMap offers the use of similar orthophotos for deriving new map data. Google Earth

overlays orthophotos or satellite imagery onto a digital elevation model to simulate 3D

landscapes.

Aerial video

With advancements in video technology, aerial video is becoming more popular.

Orthogonal video is shot from aircraft to map pipelines, crop fields, and other points of interest. Using GPS, video may be embedded with metadata and later synced with a video

mapping program.

This ‘Spatial Multimedia’ is the timely union of digital media including still

photography, motion video, stereo, panoramic imagery sets, immersive media constructs, audio,

and other data with location and date-time information from GPS and other location systems.

Aerial videos are emerging Spatial Multimedia which can be used for scene

understanding and object tracking. The input video is captured by low flying aerial platforms and

typically consists of strong parallax from non-ground-plane structures. The integration of digital

video, global positioning systems (GPS) and automated image processing will improve the

accuracy and cost-effectiveness of data collection and reduction. Several different aerial

platforms are under investigation for the data collection.

Non-linear image enhancement technique

We propose a non-linear image enhancement method which allows selective enhancement based on the contrast sensitivity function of the human visual system. We also propose an evaluation method for measuring the performance of the algorithm and for comparing it with existing approaches. The selective enhancement of the proposed approach is especially suitable for digital television applications, to improve the perceived visual quality of the images when the source image contains a less than satisfactory amount of high frequencies due to various reasons, including the interpolation used to convert standard-definition sources into high-definition images. Non-linear processing can presumably generate new frequency components and is thus attractive in some applications.

PROPOSED ENHANCEMENT METHOD

Basic Strategy

The basic strategy of the proposed approach shares the same principle as existing methods. That is, assuming that the input image is denoted by I, the enhanced image O is obtained by the following processing:

O = I + NL(HP(I))

where HP() stands for high-pass filtering and NL() is a non-linear operator. As will become clear in subsequent sections, the non-linear processing includes a scaling step and a clipping step. The HP() step is based on a set of Gabor filters.
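A minimal sketch of the O = I + NL(HP(I)) strategy for a grayscale image. The method described above bases HP() on a bank of Gabor filters; for brevity this sketch substitutes a simple Gaussian-based high-pass filter, and the gain, clipping limit, and filter width are assumed illustrative values:

import numpy as np
from scipy.ndimage import gaussian_filter

def enhance(I, gain=1.5, clip=0.1, sigma=2.0):
    # O = I + NL(HP(I)): high-pass filter, then scale and clip, then add back to the input
    I = I.astype(float) / 255.0
    hp = I - gaussian_filter(I, sigma)       # crude high-pass stand-in for the Gabor filter bank
    nl = np.clip(gain * hp, -clip, clip)     # non-linear step: scaling followed by clipping
    return np.clip(I + nl, 0.0, 1.0)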

The performance of a perceptual image enhancement algorithm is typically judged through a subjective test. In most current work in the literature, such a subjective test is simplified to simply showing an enhanced image along with the original to a viewer. While a viewer may report that a blurry image is indeed enhanced, this approach does not allow systematic comparison between two competing methods.

Furthermore, since the ideal goal of enhancement is to make up the high-frequency components that are lost in the imaging or other processes, it would be desirable to show whether an enhancement algorithm indeed generates the desired high-frequency components. The tests in the literature do not answer this question. (Note that, although showing the Fourier transform of the enhanced image may illustrate whether high-frequency components are added, this is not an accurate evaluation of a method, because the Fourier transform provides only a global measure of the signal spectrum. For example, disturbing ringing artifacts may appear as false high-frequency components in the Fourier transform.)

Automatic image enhancement

Digital data compression

Many image file formats use data compression to reduce file size and save storage

space. Digital compression of images may take place in the camera, or can be done in the

computer with the image editor. When images are stored in JPEG format, compression has

already taken place. Both cameras and computer programs allow the user to set the level of

compression.

Some compression algorithms, such as those used in PNG file format, are lossless,

which means no information is lost when the file is saved. By contrast, the JPEG file format uses

a lossy compression algorithm by which the greater the compression, the more information is

lost, ultimately reducing image quality or detail that cannot be restored. JPEG uses knowledge

of the way the human brain and eyes perceive color to make this loss of detail less noticeable.
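A small sketch of the practical difference, using the Pillow library; the file names and quality setting are illustrative:

from PIL import Image

img = Image.open("aerial.tif")               # hypothetical input file
img.save("aerial_lossless.png")              # PNG: lossless, typically a larger file
img.save("aerial_lossy.jpg", quality=75)     # JPEG: lossy; lower quality gives a smaller file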

Image editor features

Listed below are some of the most used capabilities of the better graphic manipulation

programs. The list is by no means all inclusive. There are a myriad of choices associated with the

application of most of these features.

Selection


One of the prerequisites for many of the applications mentioned below is a method of

selecting part(s) of an image, thus applying a change selectively without affecting the entire

picture. Most graphics programs have several means of accomplishing this, such as a marquee

tool, lasso, vector-based pen tools as well as more advanced facilities such as edge detection,

masking, alpha compositing, and color and channel-based extraction.

Layers

Another feature common to many graphics applications is that of Layers, which are

analogous to sheets of transparent acetate (each containing separate elements that make up a

combined picture), stacked on top of each other, each capable of being individually positioned,

altered and blended with the layers below, without affecting any of the elements on the other

layers. This is a fundamental workflow which has become the norm for the majority of programs

on the market today, and enables maximum flexibility for the user while maintaining non-

destructive editing principles and ease of use.

Image size alteration

Image editors can resize images in a process often called image scaling, making them larger or smaller. High-resolution cameras can produce large images, which are often reduced in size for Internet use. Image editor programs use a mathematical process called resampling to calculate new pixel values whose spacing is larger or smaller than that of the original pixel values. Images for Internet use are kept small, say 640 x 480 pixels, which equals about 0.3 megapixels.
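For instance, a large capture could be reduced for Internet use with Pillow; the file names are hypothetical and the resampling filter is left at the library default:

from PIL import Image

img = Image.open("camera_capture.jpg")   # hypothetical multi-megapixel source
small = img.resize((640, 480))           # resampling computes the new, more widely spaced pixel values
small.save("web_version.jpg")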

Cropping an image

Digital editors are used to crop images. Cropping creates a new image by selecting a

desired rectangular portion from the image being cropped. The unwanted part of the image is

discarded. Image cropping does not reduce the resolution of the area cropped. Best results are

obtained when the original image has a high resolution. A primary reason for cropping is to

improve the image composition in the new image.


Histogram

Image editors have provisions to create an image histogram of the image being edited.

The histogram plots the number of pixels in the image (vertical axis) with a particular brightness

value (horizontal axis). Algorithms in the digital editor allow the user to visually adjust the

brightness value of each pixel and to dynamically display the results as adjustments are made.

Improvements in picture brightness and contrast can thus be obtained.
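A minimal sketch of the histogram an editor displays, computed with NumPy for an 8-bit grayscale image (the random array stands in for real pixel data):

import numpy as np

pixels = np.random.randint(0, 256, size=(480, 640), dtype=np.uint8)   # stand-in image data
counts, bin_edges = np.histogram(pixels, bins=256, range=(0, 256))
# counts[v] is the number of pixels with brightness value v (the vertical axis of the plot)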

Noise reduction

Image editors may feature a number of algorithms which can add or remove

noise in an image. JPEG artifacts can be removed; dust and scratches can be removed and an

image can be de-speckled. Noise reduction merely estimates the state of the scene without the

noise and is not a substitute for obtaining a "cleaner" image. Excessive noise reduction leads to a

loss of detail, and its application is hence subject to a trade-off between the undesirability of the

noise itself and that of the reduction artifacts.

Noise tends to invade images when pictures are taken in low-light settings. A new picture can be given an "antiquated" effect by adding uniform monochrome noise.

Removal of unwanted elements

Most image editors can be used to remove unwanted branches, etc., using a "clone" tool. Removing these distracting elements draws focus to the subject, improving overall composition. Introduced in Photoshop CS5, the "Content-Aware Fill" can be used to select an object (such as unwanted branches) and remove it from the picture by simply pressing "Delete" on the keyboard, without destroying the image. The same feature is available for GIMP in the form of the "Resynthesizer" plug-in developed by Paul Harrison.

Image editors


Image enhancement programs are used, for example, to make an image lighter or darker, or to increase or decrease contrast. Advanced photo enhancement software also supports many filters for altering images in various ways. Programs specialized for image enhancement are sometimes called image editors.

Sharpening and softening images

Graphics programs can be used to both sharpen and blur images in a number of

ways, such as unsharp masking or deconvolution. Portraits often appear more pleasing when

selectively softened (particularly the skin and the background) to better make the subject stand

out. This can be achieved with a camera by using a large aperture, or in the image editor by

making a selection and then blurring it. Edge enhancement is an extremely common technique

used to make images appear sharper, although purists frown on the result as appearing unnatural.

Selecting and merging of images

Many graphics applications are capable of merging one or more individual images into

a single file. The orientation and placement of each image can be controlled.

Selecting a raster image that is not rectangular requires separating the edges from the background, also known as silhouetting. This is the digital version of cutting out the image. Clipping paths may be used to add silhouetted images to vector graphics or page layout files that retain vector data. Alpha compositing allows for soft translucent edges when

selecting images. There are a number of ways to silhouette an image with soft edges including

selecting the image or its background by sampling similar colors, selecting the edges by raster

tracing, or converting a clipping path to a raster selection. Once the image is selected, it may be

copied and pasted into another section of the same file, or into a separate file. The selection may

also be saved in what is known as an alpha channel.

A popular way to create a composite image is to use transparent layers. The background image

is used as the bottom layer, and the image with parts to be added are placed in a layer above that.

Using an image layer mask, all but the parts to be merged are hidden from the layer, giving the

impression that these parts have been added to the background layer. Performing a merge in this


manner preserves all of the pixel data on both layers to more easily enable future changes in the

new merged image.
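A compact sketch of the layer-mask merge described above, using Pillow's composite function; the file names are hypothetical and the images are assumed to have the same size:

from PIL import Image

background = Image.open("background.png").convert("RGBA")
overlay = Image.open("added_parts.png").convert("RGBA")
mask = Image.open("layer_mask.png").convert("L")      # white where the overlay should show through

merged = Image.composite(overlay, background, mask)   # both source layers remain untouched
merged.save("merged.png")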

Slicing of images

A more recent tool in digital image editing software is the image slicer. Parts of

images for graphical user interfaces or web pages are easily sliced, labeled and saved separately

from whole images so the parts can be handled individually by the display medium. This is

useful to allow dynamic swapping via interactivity or animating parts of an image in the final

presentation.

Special effects

Image editors usually have a list of special effects that can create unusual results.

Images may be skewed and distorted in various ways. Scores of special effects can be applied to

an image which includes various forms of distortion, artistic effects, geometric transforms and

texture effects, or combinations thereof.

Change color depth

It is possible, using software, to change the color depth of images. Common color depths are 2, 4, 16, 256, 65.5 thousand, and 16.7 million colors. The JPEG and PNG image formats are capable of storing 16.7 million colors (equal to 256 luminance values per color channel). In addition, grayscale images of 8 bits or less can be created, usually via conversion and downsampling from a full-color image.

Contrast change and brightening

Image editors have provisions to simultaneously change the contrast of images and

brighten or darken the image. Underexposed images can often be improved by using this feature.

Recent advances have allowed more intelligent exposure correction whereby only pixels below a

particular luminosity threshold are brightened, thereby brightening underexposed shadows

without affecting the rest of the image. The exact transformation that is applied to each color channel can vary from editor to editor; GIMP, for example, applies a formula of the general kind sketched below.
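The following sketch shows a typical brightness/contrast mapping of this kind on values normalised to [0, 1]; it is illustrative only and not claimed to be GIMP's exact implementation:

import math

def brightness_contrast(value, brightness, contrast):
    # value in [0, 1]; brightness and contrast in [-1, 1]; an illustrative mapping only
    if brightness < 0.0:
        value = value * (1.0 + brightness)             # darken
    else:
        value = value + (1.0 - value) * brightness     # brighten
    # steepen or flatten the tone curve around the mid-tone 0.5
    value = (value - 0.5) * math.tan((contrast + 1.0) * math.pi / 4.0) + 0.5
    return min(max(value, 0.0), 1.0)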


Color adjustments

The color of images can be altered in a variety of ways. Colors can be faded in and

out, and tones can be changed using curves or other tools. The color balance can be improved,

which is important if the picture was shot indoors with daylight film, or shot on a camera with

the white balance incorrectly set. Special effects, like sepia and grayscale, can be added to an image. In addition, more complicated procedures such as the mixing of color channels are

possible using more advanced graphics editors.

The red-eye effect, which occurs when flash photos are taken when the pupil is too widely

open (so that light from the flash that passes into the eye through the pupil reflects off the fundus

at the back of the eyeball), can also be eliminated at this stage.

Printing

Controlling the print size and quality of digital images requires an understanding of

the pixels-per-inch (ppi) variable that is stored in the image file and sometimes used to control

the size of the printed image. Within the Image Size dialog (as it is called in Photoshop), the

image editor allows the user to manipulate both pixel dimensions and the size of the image on

the printed document. These parameters work together to produce a printed image of the desired

size and quality. Pixels per inch of the image, pixels per inch of the computer monitor, and dots

per inch on the printed document are related, but in use are very different. The Image Size dialog

can be used as an image calculator of sorts. For example, a 1600 x 1200 image with a ppi of 200

will produce a printed image of 8 x 6 inches. The same image with a ppi of 400 will produce a

printed image of 4 x 3 inches. Change the ppi to 800, and the same image now prints out at 2 x

1.5 inches. All three printed images contain the same data (1600 x 1200 pixels) but the pixels are

closer together on the smaller prints, so the smaller images will potentially look sharp when the

larger ones do not. The quality of the image will also depend on the capability of the printer.
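The print-size arithmetic from the example above, as a small calculator:

def print_size_inches(width_px, height_px, ppi):
    return width_px / ppi, height_px / ppi

print(print_size_inches(1600, 1200, 200))   # (8.0, 6.0)
print(print_size_inches(1600, 1200, 400))   # (4.0, 3.0)
print(print_size_inches(1600, 1200, 800))   # (2.0, 1.5)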

Wavelets


A wavelet is a waveform of effectively limited duration that has an average value of zero.

The wavelet transform provides a time-frequency representation of a signal. (There are other transforms which give this information too, such as the short-time Fourier transform, Wigner distributions, etc.)

Oftentimes a particular spectral component occurring at a particular instant can be of special interest. In these cases it may be very beneficial to know the time intervals in which these particular spectral components occur. For example, in EEGs, the latency of an event-related potential is of particular interest (an event-related potential is the response of the brain to a specific stimulus such as a flash of light; the latency of this response is the amount of time elapsed between the onset of the stimulus and the response).

The wavelet transform is capable of providing the time and frequency information simultaneously, hence giving a time-frequency representation of the signal.

How the wavelet transform works is a different story, best explained after the short-time Fourier transform (STFT). The WT was developed as an alternative to the STFT; it suffices at this time to say that the WT was developed to overcome some resolution-related problems of the STFT.

To make a long story short, we pass the time-domain signal through various highpass and lowpass filters, which filter out either the high-frequency or the low-frequency portions of the signal. This procedure is repeated, each time removing some portion of the signal corresponding to some frequencies.

Here is how this works. Suppose we have a signal which has frequencies up to 1000 Hz. In the first stage we split the signal into two parts by passing it through a highpass and a lowpass filter (the filters should satisfy certain conditions, the so-called admissibility condition), which results in two different versions of the same signal: the portion corresponding to 0-500 Hz (the lowpass portion) and the portion corresponding to 500-1000 Hz (the highpass portion).

Then, we take either portion (usually the lowpass portion) or both, and do the same thing again. This operation is called decomposition.

Assuming that we have taken the lowpass portion, we now have 3 sets of data, each corresponding to the same signal at frequencies 0-250 Hz, 250-500 Hz, and 500-1000 Hz.

Then we take the lowpass portion again and pass it through low and high pass filters; we now have 4 sets of signals corresponding to 0-125 Hz, 125-250 Hz, 250-500 Hz, and 500-1000 Hz. We continue like this until we have decomposed the signal to a certain pre-defined level. We then have a bunch of signals which actually represent the same signal but all correspond to different frequency bands. We know which signal corresponds to which frequency band, and if we put all of them together and plot them on a 3-D graph, we will have time on one axis, frequency on the second, and amplitude on the third axis. This will show us which frequencies exist at which times (there is an issue, called the "uncertainty principle", which states that we cannot exactly know what frequency exists at what time instant, but can only know what frequency bands exist at what time intervals; more on this below).
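A brief sketch of this repeated lowpass/highpass splitting using the PyWavelets package; the sampling rate, test signal, wavelet choice, and number of levels are assumptions made purely for illustration:

import numpy as np
import pywt

fs = 2000                                        # sampling rate in Hz, so the signal occupies 0-1000 Hz
t = np.arange(0, 1, 1 / fs)
signal = np.sin(2 * np.pi * 50 * t) + np.sin(2 * np.pi * 400 * t)

coeffs = pywt.wavedec(signal, 'db4', level=3)    # three stages of lowpass/highpass splitting
cA3, cD3, cD2, cD1 = coeffs
# Approximate bands: cD1 ~ 500-1000 Hz, cD2 ~ 250-500 Hz, cD3 ~ 125-250 Hz,
# and the remaining lowpass part cA3 ~ 0-125 Hz.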

However, I would still like to explain the uncertainty principle briefly:

The uncertainty principle, originally found and formulated by Heisenberg, states that the momentum and the position of a moving particle cannot be known simultaneously. This applies to our subject as follows:

The frequency and time information of a signal at some certain point in the time-frequency plane cannot be known. In other words, we cannot know what spectral component

exists at any given time instant. The best we can do is to investigate what spectral components

exist at any given interval of time. This is a problem of resolution, and it is the main reason why

researchers have switched to WT from STFT. STFT gives a fixed resolution at all times, whereas

WT gives a variable resolution as follows:

Higher frequencies are better resolved in time, and lower frequencies are better resolved in frequency. This means that a certain high-frequency component can be located better in time (with less relative error) than a low-frequency component. On the contrary, a low-frequency component can be located better in frequency compared to a high-frequency component.

Take a look at the following grid:

 

  

 f ^

   |*******************************************         continuous

   |*  *  *  *  *  *  *  *  *  *  *  *  *  *  *         wavelet transform

   |*     *     *     *     *     *     *         

   |*           *           *           *         

   |*                       *

    --------------------------------------------> time

 

 

 

Interpret the above grid as follows: The top row shows that at higher frequencies

we have more samples corresponding to smaller intervals of time. In other words, higher

frequencies can be resolved better in time. The bottom row however, corresponds to low

frequencies, and there are less number of points to characterize the signal, therefore, low

frequencies are not resolved well in time.

 

 

 

^ frequency
|
| *******************************************************
|
| *  *  *  *  *  *  *  *  *  *  *  *  *  *  *  *  *  *  *    discrete time
|                                                             wavelet transform
| *     *     *     *     *     *     *     *     *     *
|
| *           *           *           *           *
| *                       *                       *
+----------------------------------------------------------> time

 

In the discrete-time case, the time resolution of the signal works the same as above, but now the frequency information has different resolutions at every stage as well. Note that lower frequencies are better resolved in frequency, whereas higher frequencies are not. Note how the spacing between subsequent frequency components increases as the frequency increases.

Below are some examples of the continuous wavelet transform:

Let's take a sinusoidal signal, which has two different frequency components at two

different times:

 

 Note the low frequency portion first, and then the high frequency.

 


 

 

The continuous wavelet transform of the above signal:

  


Note however, the frequency axis in these plots are labeled as scale . The concept of

the scale will be made more clear in the subsequent sections, but it should be noted at this time

that the scale is inverse of frequency. That is, high scales correspond to low frequencies, and low

scales correspond to high frequencies. Consequently, the little peak in the plot corresponds to the

high frequency components in the signal, and the large peak corresponds to low frequency

components (which appear before the high frequency components in time) in the signal.

 

You might be puzzled by the frequency resolution shown in the plot, since it appears to show good frequency resolution at high frequencies. Note, however, that it is the scale resolution that looks good at high frequencies (low scales), and good scale resolution means poor frequency resolution, and vice versa.

Continuous wavelet transform

The continuous wavelet transform was developed as an alternative approach to the

short time Fourier transform to overcome the resolution problem. The wavelet analysis is done in

a similar way to the STFT analysis, in the sense that the signal is multiplied with a function, the wavelet, similar to the window function in the STFT, and the transform is computed

separately for different segments of the time-domain signal. However, there are two main

differences between the STFT and the CWT:

1. The Fourier transforms of the windowed signals are not taken, and therefore a single peak will be seen corresponding to a sinusoid, i.e., negative frequencies are not computed.

2. The width of the window is changed as the transform is computed for every single spectral

component, which is probably the most significant characteristic of the wavelet transform.

Page 38: Areial Image

The continuous wavelet transform is defined as follows:

CWTx(tau, s) = (1 / sqrt(|s|)) ∫ x(t) psi*((t − tau) / s) dt

As seen in the above equation, the transformed signal is a function of two variables, tau and s, the translation and scale parameters, respectively. psi(t) is the transforming function, and it

is called the mother wavelet. The term mother wavelet gets its name due to two important

properties of the wavelet analysis as explained below:

The term wavelet means a small wave. The smallness refers to the condition that this (window) function is of finite length (compactly supported). The wave refers to the condition that this function is oscillatory. The term mother implies that the functions with different regions

of support that are used in the transformation process are derived from one main function, or the

mother wavelet. In other words, the mother wavelet is a prototype for generating the other

window functions.

The term translation is used in the same sense as it was used in the STFT; it is related to

the location of the window, as the window is shifted through the signal. This term, obviously,

corresponds to time information in the transform domain. However, we do not have a frequency

parameter, as we had before for the STFT. Instead, we have a scale parameter, which is defined as 1/frequency. The term frequency is reserved for the STFT. Scale is described in more detail in

the next section.
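To make the definition concrete, here is a minimal MATLAB sketch that evaluates the CWT of a two-tone test signal by brute force, using a Mexican-hat mother wavelet; the wavelet choice, the sampling rate and the scale grid are illustrative assumptions, not part of the original discussion.

fs   = 1000;                                 % sampling rate in Hz (assumed)
t    = 0:1/fs:1;                             % one second of signal
x    = [cos(2*pi*10*t(t < 0.5)), cos(2*pi*80*t(t >= 0.5))];   % low frequency first, then high

psi    = @(u) (1 - u.^2) .* exp(-u.^2/2);    % Mexican-hat mother wavelet (example choice)
scales = 0.002:0.002:0.2;                    % low scales = high frequencies
coeffs = zeros(numel(scales), numel(t));

for i = 1:numel(scales)
    s = scales(i);
    for k = 1:numel(t)
        w = psi((t - t(k)) / s) / sqrt(s);   % translated, dilated, energy-normalized wavelet
        coeffs(i, k) = sum(x .* w) / fs;     % Riemann-sum approximation of the integral
    end
end

imagesc(t, scales, abs(coeffs)); axis xy
xlabel('translation \tau (s)'); ylabel('scale s')

The resulting plot should show large coefficients at high scales during the first (low-frequency) half of the signal and at low scales during the second (high-frequency) half, which is exactly the behavior described above.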

The Scale

The parameter scale in the wavelet analysis is similar to the scale used in maps. As in the

case of maps, high scales correspond to a non-detailed global view (of the signal), and low scales

correspond to a detailed view. Similarly, in terms of frequency, low frequencies (high scales)

Page 39: Areial Image

correspond to a global information of a signal (that usually spans the entire signal), whereas high

frequencies (low scales) correspond to a detailed information of a hidden pattern in the signal

(that usually lasts a relatively short time). Cosine signals corresponding to various scales are

given as examples in the following figure .

[Figure: cosine signals corresponding to various scales]

Fortunately in practical applications, low scales (high frequencies) do not last for the

entire duration of the signal, unlike those shown in the figure, but they usually appear from time

to time as short bursts, or spikes. High scales (low frequencies) usually last for the entire

duration of the signal.

Page 40: Areial Image

Scaling, as a mathematical operation, either dilates or compresses a signal. Larger

scales correspond to dilated (or stretched out) signals and small scales correspond to compressed

signals. All of the signals given in the figure are derived from the same cosine signal, i.e., they

are dilated or compressed versions of the same function. In the above figure, s=0.05 is the

smallest scale, and s=1 is the largest scale.

In terms of mathematical functions, if f(t) is a given function f(st) corresponds to a

contracted (compressed) version of f(t) if s > 1 and to an expanded (dilated) version of f(t) if s <

1 .

However, in the definition of the wavelet transform, the scaling term is used in the

denominator, and therefore, the opposite of the above statements holds, i.e., scales s > 1 dilate the signal, whereas scales s < 1 compress the signal.

Discrete Wavelet Transform

  The foundations of the DWT go back to 1976 when Croiser, Esteban, and

Galand devised a technique to decompose discrete time signals. Crochiere, Weber, and Flanagan did similar work on coding of speech signals in the same year. They named their analysis scheme subband coding. In 1983, Burt defined a technique very similar to subband coding and named it pyramidal coding, which is also known as multiresolution analysis. Later, in 1989,

Vetterli and Le Gall made some improvements to the subband coding scheme, removing the

existing redundancy in the pyramidal coding scheme. Subband coding is explained below. A

detailed coverage of the discrete wavelet transform and theory of multiresolution analysis can be

found in a number of articles and books that are available on this topic; a full treatment is beyond the scope of this tutorial.

Page 41: Areial Image

 

Subband Coding and Multiresolution Analysis

 

The main idea is the same as it is in the CWT. A time-scale representation of a

digital signal is obtained using digital filtering techniques. Recall that the CWT is a correlation

between a wavelet at different scales and the signal with the scale (or the frequency) being used

as a measure of similarity. The continuous wavelet transform was computed by changing the

scale of the analysis window, shifting the window in time, multiplying by the signal, and

integrating over all times. In the discrete case, filters of different cutoff frequencies are used to

analyze the signal at different scales. The signal is passed through a series of high pass filters to

analyze the high frequencies, and it is passed through a series of low pass filters to analyze the

low frequencies.

The resolution of the signal, which is a measure of the amount of detail information

in the signal, is changed by the filtering operations, and the scale is changed by upsampling and

downsampling (subsampling) operations. Subsampling a signal corresponds to reducing the

sampling rate, or removing some of the samples of the signal. For example, subsampling by two

refers to dropping every other sample of the signal. Subsampling by a factor n reduces the

number of samples in the signal n times.

Upsampling a signal corresponds to increasing the sampling rate of a signal by

adding new samples to the signal. For example, upsampling by two refers to adding a new

sample, usually a zero or an interpolated value, between every two samples of the signal.

Upsampling a signal by a factor of n increases the number of samples in the signal by a factor of

n.
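As a concrete illustration of these two operations, the following short MATLAB fragment subsamples an arbitrary test sequence by two and then upsamples the result by two with zero insertion:

x = [1 4 9 16 25 36 49 64];        % arbitrary test sequence

% Subsampling by two: drop every other sample, halving the number of points.
x_down = x(1:2:end);               % -> [1 9 25 49]

% Upsampling by two: insert a zero between every two samples, doubling the length.
x_up          = zeros(1, 2*numel(x_down));
x_up(1:2:end) = x_down;            % -> [1 0 9 0 25 0 49 0]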

Although it is not the only possible choice, DWT coefficients are usually sampled

from the CWT on a dyadic grid, i.e., s0 = 2 and tau0 = 1, yielding s = 2^j and tau = k*2^j, as described

Page 42: Areial Image

in Part 3. Since the signal is a discrete time function, the terms function and sequence will be

used interchangeably in the following discussion. This sequence will be denoted by x[n], where n

is an integer.

The procedure starts with passing this signal (sequence) through a half band digital

lowpass filter with impulse response h[n]. Filtering a signal corresponds to the mathematical

operation of convolution of the signal with the impulse response of the filter. The convolution

operation in discrete time is defined as follows:

x[n] * h[n] = Σk x[k] · h[n − k]

A half band lowpass filter removes all frequencies that are above half of the highest

frequency in the signal. For example, if a signal has a maximum of 1000 Hz component, then

half band lowpass filtering removes all the frequencies above 500 Hz.

The unit of frequency is of particular importance at this time. In discrete signals,

frequency is expressed in terms of radians. Accordingly, the sampling frequency of the signal is

equal to 2π radians in terms of radial frequency. Therefore, the highest frequency component that exists in a signal will be π radians, if the signal is sampled at the Nyquist rate (which is twice the maximum frequency that exists in the signal); that is, the Nyquist rate corresponds to π rad/s in the discrete frequency domain. Therefore, using Hz is not appropriate for discrete signals.

However, Hz is used whenever it is needed to clarify a discussion, since it is very common to

think of frequency in terms of Hz. It should always be remembered that the unit of frequency for

discrete time signals is radians.

After passing the signal through a half band lowpass filter, half of the samples can

be eliminated according to Nyquist's rule, since the signal now has a highest frequency of π/2 radians instead of π radians. Simply discarding every other sample will subsample the

signal by two, and the signal will then have half the number of points. The scale of the signal is

now doubled. Note that the lowpass filtering removes the high frequency

information, but leaves the scale unchanged. Only the subsampling process changes the scale.

Page 43: Areial Image

Resolution, on the other hand, is related to the amount of information in the signal, and therefore,

it is affected by the filtering operations. Half band lowpass filtering removes half of the

frequencies, which can be interpreted as losing half of the information. Therefore, the resolution

is halved after the filtering operation. Note, however, the subsampling operation after filtering

does not affect the resolution, since removing half of the spectral components from the signal

makes half the number of samples redundant anyway. Half the samples can be discarded without

any loss of information. In summary, the lowpass filtering halves the resolution, but leaves the

scale unchanged. The signal is then subsampled by 2 since half of the number of samples are

redundant. This doubles the scale.

This procedure can mathematically be expressed as

y[n] = Σk h[k] · x[2n − k]

Having said that, we now look at how the DWT is actually computed: The DWT

analyzes the signal at different frequency bands with different resolutions by decomposing the

signal into a coarse approximation and detail information. DWT employs two sets of functions,

called scaling functions and wavelet functions, which are associated with low pass and highpass

filters, respectively. The decomposition of the signal into different frequency bands is simply

obtained by successive highpass and lowpass filtering of the time domain signal. The original

signal x[n] is first passed through a halfband highpass filter g[n] and a lowpass filter h[n]. After

the filtering, half of the samples can be eliminated according to the Nyquist’s rule, since the

signal now has a highest frequency of π/2 radians instead of π. The signal can therefore be subsampled by 2, simply by discarding every other sample. This constitutes one level of decomposition and can mathematically be expressed as follows:

yhigh[k] = Σn x[n] · g[2k − n]
ylow[k] = Σn x[n] · h[2k − n]

Page 44: Areial Image

where yhigh[k] and ylow[k] are the outputs of the highpass and lowpass filters, respectively, after

subsampling by 2.
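A minimal MATLAB sketch of one such decomposition level is given below. The Haar pair is used as an example of halfband lowpass/highpass filters; the filter choice and the random test signal are assumptions made only for illustration.

x = randn(1, 512);                 % example discrete-time signal

h = [1  1] / sqrt(2);              % halfband lowpass filter h[n] (Haar)
g = [1 -1] / sqrt(2);              % halfband highpass filter g[n] (Haar)

xl = conv(x, h);                   % lowpass filtering (convolution with h[n])
xh = conv(x, g);                   % highpass filtering (convolution with g[n])

ylow  = xl(2:2:end);               % subsample by two -> approximation (256 samples)
yhigh = xh(2:2:end);               % subsample by two -> level 1 detail coefficients (256 samples)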

This decomposition halves the time resolution since only half the number of samples

now characterizes the entire signal. However, this operation doubles the frequency resolution,

since the frequency band of the signal now spans only half the previous frequency band,

effectively reducing the uncertainty in the frequency by half. The above procedure, which is also

known as the subband coding, can be repeated for further decomposition. At every level, the

filtering and subsampling will result in half the number of samples (and hence half the time

resolution) and half the frequency band spanned (and hence double the frequency resolution).

The figure below illustrates this procedure, where x[n] is the original signal to be decomposed, and h[n]

and g[n] are lowpass and highpass filters, respectively. The bandwidth of the signal at every

level is marked on the figure as "f".

Page 45: Areial Image

[Figure: The Subband Coding Algorithm]

As an example, suppose that the original signal x[n] has 512 sample points, spanning a frequency band of zero to π rad/s. At the first

Page 46: Areial Image

decomposition level, the signal is passed through the highpass and lowpass filters, followed by

subsampling by 2. The output of the highpass filter has 256 points (hence half the time

resolution), but it only spans the frequencies π/2 to π rad/s (hence double the frequency

resolution). These 256 samples constitute the first level of DWT coefficients. The output of the

lowpass filter also has 256 samples, but it spans the other half of the frequency band, frequencies

from 0 to π/2 rad/s. This signal is then passed through the same lowpass and highpass filters for

further decomposition.

The output of the second lowpass filter followed by subsampling has 128

samples spanning a frequency band of 0 to π/4 rad/s, and the output of the second highpass filter followed by subsampling has 128 samples spanning a frequency band of π/4 to π/2 rad/s. The

second highpass filtered signal constitutes the second level of DWT coefficients. This signal has

half the time resolution, but twice the frequency resolution of the first level signal. In other

words, time resolution has decreased by a factor of 4, and frequency resolution has increased by

a factor of 4 compared to the original signal. The lowpass filter output is then filtered once again

for further decomposition. This process continues until two samples are left. For this specific

example there would be 8 levels of decomposition, each having half the number of samples of

the previous level. The DWT of the original signal is then obtained by concatenating all

coefficients starting from the last level of decomposition (remaining two samples, in this case).

The DWT will then have the same number of coefficients as the original signal.

The frequencies that are most prominent in the original signal will appear as high

amplitudes in that region of the DWT signal that includes those particular frequencies. The

difference of this transform from the Fourier transform is that the time localization of these

frequencies will not be lost. However, the time localization will have a resolution that depends

on which level they appear. If the main information of the signal lies in the high frequencies, as

happens most often, the time localization of these frequencies will be more precise, since they

are characterized by more number of samples. If the main information lies only at very low

frequencies, the time localization will not be very precise, since few samples are used to express

signal at these frequencies. This procedure in effect offers a good time resolution at high

frequencies, and good frequency resolution at low frequencies. Most practical signals

encountered are of this type.

Page 47: Areial Image

The frequency bands that are not very prominent in the original signal will have

very low amplitudes, and that part of the DWT signal can be discarded without any major loss of

information, allowing data reduction. Figure 4.2 illustrates an example of how DWT signals look

like and how data reduction is provided. Figure 4.2a shows a typical 512-sample signal that is

normalized to unit amplitude. The horizontal axis is the number of samples, whereas the vertical

axis is the normalized amplitude. Figure 4.2b shows the 8 level DWT of the signal in Figure

4.2a. The last 256 samples in this signal correspond to the highest frequency band in the signal,

the previous 128 samples correspond to the second highest frequency band and so on. It should

be noted that only the first 64 samples, which correspond to lower frequencies of the analysis,

carry relevant information and the rest of this signal has virtually no information. Therefore, all

but the first 64 samples can be discarded without any loss of information. This is how DWT

provides a very effective data reduction scheme.

 

Page 48: Areial Image

We will revisit this example, since it provides important insight into how the DWT

should be interpreted. Before that, however, we need to conclude our mathematical analysis of

the DWT.

One important property of the discrete wavelet transform is the relationship between

the impulse responses of the highpass and lowpass filters. The highpass and lowpass filters are

not independent of each other, and they are related by

g[L − 1 − n] = (−1)^n · h[n],

where g[n] is the highpass filter, h[n] is the lowpass filter, and L is the filter length (in number of points). Note that the two filters are odd-index alternated reversed versions of each other. Lowpass to highpass conversion is provided by the (−1)^n term. Filters satisfying this condition are commonly used in signal processing, and they are known as Quadrature Mirror Filters (QMF). The two filtering and subsampling operations can be expressed by

yhigh[k] = Σn x[n] · g[−n + 2k]
ylow[k] = Σn x[n] · h[−n + 2k]

The reconstruction in this case is very easy since halfband filters form orthonormal

bases. The above procedure is followed in reverse order for the reconstruction. The signals at

every level are upsampled by two, passed through the synthesis filters g’[n], and h’[n] (highpass

and lowpass, respectively), and then added. The interesting point here is that the analysis and

synthesis filters are identical to each other, except for a time reversal. Therefore, the

reconstruction formula becomes (for each layer)

x[n] = Σk ( yhigh[k] · g[−n + 2k] + ylow[k] · h[−n + 2k] )

However, if the filters are not ideal halfband, then perfect reconstruction cannot

be achieved. Although it is not possible to realize ideal filters, under certain conditions it is

Page 49: Areial Image

possible to find filters that provide perfect reconstruction. The most famous ones are the ones

developed by Ingrid Daubechies, and they are known as Daubechies’ wavelets.
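The following MATLAB sketch illustrates both the QMF relationship and one analysis/synthesis cycle with the Haar pair, for which perfect reconstruction does hold; the Haar choice and the random test signal are illustrative assumptions.

x = randn(1, 256);                          % example signal
h = [1 1] / sqrt(2);                        % halfband lowpass filter, length L = 2
L = numel(h);

g = zeros(1, L);
for n = 0:L-1
    g(L - n) = (-1)^n * h(n + 1);           % QMF relation: g[L-1-n] = (-1)^n * h[n]
end

% Analysis: filter and subsample by two.
ylow  = conv(x, h);   ylow  = ylow(2:2:end);
yhigh = conv(x, g);   yhigh = yhigh(2:2:end);

% Synthesis: upsample by two, filter with the time-reversed filters, and add.
up = @(y) reshape([y; zeros(1, numel(y))], 1, []);    % zero-insertion upsampler
xr = conv(up(ylow), fliplr(h)) + conv(up(yhigh), fliplr(g));
xr = xr(1:numel(x));                        % trim the convolution tail

err = max(abs(x - xr))                      % on the order of 1e-15, i.e., perfect reconstruction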

Note that due to successive subsampling by 2, the signal length must be a power

of 2, or at least a multiple of a power of 2, for this scheme to be efficient. The length of the

signal determines the number of levels that the signal can be decomposed to. For example, if the

signal length is 1024, ten levels of decomposition are possible.

Interpreting the DWT coefficients can sometimes be rather difficult because

the way DWT coefficients are presented is rather peculiar. To make a real long story real short,

DWT coefficients of each level are concatenated, starting with the last level. An example is in

order to make this concept clear:

Suppose we have a 256-sample-long signal sampled at 10 MHz and we

wish to obtain its DWT coefficients. Since the signal is sampled at 10 MHz, the highest

frequency component that exists in the signal is 5 MHz. At the first level, the signal is passed

through the lowpass filter h[n], and the highpass filter g[n], the outputs of which are subsampled

by two. The highpass filter output is the first level DWT coefficients. There are 128 of them, and

they represent the signal in the [2.5 5] MHz range. These 128 samples are the last 128 samples

plotted. The lowpass filter output, which also has 128 samples, but spanning the frequency band

of [0 2.5] MHz, are further decomposed by passing them through the same h[n] and g[n]. The

output of the second highpass filter is the level 2 DWT coefficients and these 64 samples precede

the 128 level 1 coefficients in the plot. The output of the second lowpass filter is further

decomposed, once again by passing it through the filters h[n] and g[n]. The output of the third

highpass filter is the level 3 DWT coefficients. These 32 samples precede the level 2 DWT

coefficients in the plot.

The procedure continues until only 1 detail coefficient can be computed, at level 8. This coefficient, together with the single remaining approximation coefficient, is the first to be plotted in the DWT plot. This is followed by the 2 level 7 coefficients, 4 level 6 coefficients, 8 level 5 coefficients, 16 level 4 coefficients, 32 level 3 coefficients, 64 level 2 coefficients and finally 128 level 1 coefficients, for a total of 256 values.

Note that fewer and fewer samples are used at lower frequencies; therefore, the time

Page 50: Areial Image

resolution decreases as frequency decreases, but since the frequency interval also decreases at

low frequencies, the frequency resolution increases. Obviously, the first few coefficients would

not carry a whole lot of information, simply due to the greatly reduced time resolution.
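The ordering described above can be reproduced with a small MATLAB sketch that keeps splitting the lowpass branch and prepends each new detail level to the coefficient vector (Haar filters are again assumed purely for illustration):

x = randn(1, 256);                             % example 256-sample signal
h = [1 1] / sqrt(2);   g = [-1 1] / sqrt(2);   % Haar lowpass/highpass (QMF pair)

a      = x;                                    % current approximation
coeffs = [];                                   % concatenated DWT coefficients
while numel(a) > 1
    lo = conv(a, h);   lo = lo(2:2:end);       % approximation at the next level
    hi = conv(a, g);   hi = hi(2:2:end);       % detail coefficients of this level
    coeffs = [hi, coeffs];                     % each new (coarser) level is prepended
    a = lo;
end
coeffs = [a, coeffs];                          % the final approximation coefficient comes first
% coeffs now holds [a8, d8 (1 sample), d7 (2), ..., d2 (64), d1 (128)] -- 256 values in total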

To illustrate this richly bizarre DWT representation, let us take a look at a

real world signal. Our original signal is a 256-sample long ultrasonic signal, which was sampled

at 25 MHz. This signal was originally generated by using a 2.25 MHz transducer, therefore the

main spectral component of the signal is at 2.25 MHz. The last 128 samples correspond to [6.25

12.5] MHz range. As seen from the plot, no information is available here, hence these samples

can be discarded without any loss of information. The preceding 64 samples represent the signal

in the [3.12 6.25] MHz range, which also does not carry any significant information. The little

glitches probably correspond to the high frequency noise in the signal. The preceding 32 samples

represent the signal in the [1.5 3.1] MHz range. As you can see, the majority of the signal’s

energy is focused in these 32 samples, as we expected to see. The previous 16 samples

correspond to [0.75 1.5] MHz and the peaks that are seen at this level probably represent the

lower frequency envelope of the signal. The previous samples probably do not carry any other

significant information. It is safe to say that we can get by with the 3rd and 4th level coefficients,

that is we can represent this 256 sample long signal with 16+32=48 samples, a significant data

reduction which would make your computer quite happy.

 One area that has benefited the most from this particular property of the wavelet

transforms is image processing. As you may well know, images, particularly high-resolution

images, claim a lot of disk space. As a matter of fact, if this tutorial is taking a long time to

download, that is mostly because of the images. DWT can be used to reduce the image size

without losing much of the resolution. Here is how:

For a given image, you can compute the DWT of, say, each row, and discard all values in the DWT that are less than a certain threshold. We then save only those DWT

coefficients that are above the threshold for each row, and when we need to reconstruct the

original image, we simply pad each row with as many zeros as the number of discarded

coefficients, and use the inverse DWT to reconstruct each row of the original image. We can also

analyze the image at different frequency bands, and reconstruct the original image by using only

Page 51: Areial Image

the coefficients that are of a particular band. I will try to put sample images hopefully soon, to

illustrate this point.
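A minimal sketch of this row-wise scheme is given below, using a single-level Haar DWT per row and a hard threshold; the image file name, the threshold value, the wavelet and the single decomposition level are all illustrative assumptions.

img = double(imread('cameraman.tif'));           % any grayscale image with an even number of columns
h   = [1 1] / sqrt(2);   g = [-1 1] / sqrt(2);   % Haar analysis filters
thr = 5;                                         % hard threshold (assumed value)
up  = @(y) reshape([y; zeros(1, numel(y))], 1, []);   % zero-insertion upsampler

rec = zeros(size(img));
for r = 1:size(img, 1)
    x  = img(r, :);
    lo = conv(x, h);   lo = lo(2:2:end);         % one-level DWT of the row
    hi = conv(x, g);   hi = hi(2:2:end);
    hi(abs(hi) < thr) = 0;                       % discard small detail coefficients ("pad with zeros")
    xr = conv(up(lo), fliplr(h)) + conv(up(hi), fliplr(g));   % inverse DWT of the row
    rec(r, :) = xr(1:numel(x));
end

subplot(1,2,1); imagesc(img); colormap gray; axis image; title('original')
subplot(1,2,2); imagesc(rec); colormap gray; axis image; title('reconstruction')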

Another issue that is receiving more and more attention is carrying out the

decomposition (subband coding) not only on the lowpass side but on both sides. In other words,

zooming into both low and high frequency bands of the signal separately. This can be visualized

as having both sides of the tree structure of Figure 4.1. What results is what is known as wavelet packet decomposition. We will not discuss wavelet packets here, since they are beyond the scope of this tutorial. Anyone who is interested in wavelet packets, or in more information on the DWT, can find it in any of the numerous texts available on the subject.

And this concludes our mini series of wavelet tutorial. If I could be of any

assistance to anyone struggling to understand the wavelets, I would consider the time and the

effort that went into this tutorial well spent. I would like to remind the reader that this tutorial is neither a complete nor a thorough coverage of the wavelet transforms. It is merely an overview of the

concept of wavelets and it was intended to serve as a first reference for those who find the

available texts on wavelets rather complicated. There might be many structural and/or technical

mistakes, and I would appreciate it if you could point those out to me. Your feedback is of utmost

importance for the success of this tutorial.

Page 52: Areial Image

Algorithm

The proposed enhancement algorithm consists of three stages: the first and the third stages are applied in the spatial domain and the second in the discrete wavelet domain.

Histogram Adjustment

Our motivation in making a histogram adjustment for minimizing the illumination

effect is based on some assumptions about image formation and human vision behavior. The

sensor signal S(x, y) incident upon an imaging system can be approximated as the product [8], [26]

S(x,y) = L(x,y)R(x,y), (1)

where R(x, y) is the reflectance and L(x, y) is the illuminance at each point (x, y). In

lightness algorithms, assuming that the sensors and filters used in artificial visual systems

possess the same nonlinear property as human photoreceptors, i.e., logarithmic responses to physical intensities incident on their photoreceptors [8], Equation 1 can be decomposed into

a sum of two components by using the transformation

I(x, y) = log(S(x, y)):

I(x,y) = log(L(x,y)) + log(R(x,y)), (2)

where I(x, y) is the intensity of the image at pixel location (x, y). Equation 2 implies that illumination has an effect on the image histogram as a linear shift. This shift, intrinsically, is not the same in different spectral bands.

Another assumption of the lightness algorithms is the grayworld assumption stating

that the average surface reflectance of each scene in each wavelength band is the same: gray [8].

From an image processing stance, this assumption indicates that images of natural scenes should

contain pixels having almost equal average gray levels in each spectral band.

Combining Equation 2 with the gray-world assumption, we perform histogram

adjustment as follows:

1. The amount of shift corresponding to illuminance is determined from the beginning of the lower tail of the histogram such that a predefined amount (typically

Page 53: Areial Image

0.5%) of image pixels is clipped.
2. The shift is subtracted from each pixel value.
3. This process is repeated separately for each color channel (a sketch follows below).
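A minimal MATLAB sketch of these three steps is given below; the input file name is a placeholder, and 0.5% is the typical clipping amount mentioned above.

rgb  = double(imread('aerial.jpg'));        % input color image (file name is a placeholder)
clip = 0.005;                               % clip 0.5% of the pixels from the lower tail

adjusted = zeros(size(rgb));
for c = 1:3                                 % repeat separately for each color channel
    ch    = rgb(:, :, c);
    v     = sort(ch(:));
    shift = v(max(1, round(clip * numel(v))));   % illumination shift taken from the lower tail
    adjusted(:, :, c) = max(ch - shift, 0);      % subtract the shift; ~0.5% of pixels clip to 0
end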

Wavelet Based Dynamic Range Compression

And Contrast Enhancement

Dynamic Range Compression

Dynamic range compression and the local contrast enhancement in WDRC are

performed on the luminance channel. For input color images, the intensity image I(x,y) can

be obtained with the following equation:

I(x, y) = max[Ii(x, y)], i ∈ {R, G, B}. (3)

The enhancement algorithm is applied on this intensity image. The luminance values are decomposed using an orthonormal wavelet transform as shown in (4):

I(x, y) = Σk,l aJ,k,l ΦJ,k,l(x, y) + Σj Σk,l dj,k,l Ψj,k,l(x, y),   (4)

where aJ,k,l are the approximation coefficients at scale J with corresponding scaling functions ΦJ,k,l(x, y), and dj,k,l are the detail coefficients at each scale j with corresponding wavelet functions Ψj,k,l(x, y). A raised hyperbolic sine function given by Equation 5 maps the normalized range [0,1] of aJ,k,l to the same range, and is used for compressing the dynamic range represented by the coefficients. The compressed coefficients at level J can be obtained by

Page 54: Areial Image

where a′J,k,l are the normalized approximation coefficients, and r is the curvature parameter which adjusts the shape of the hyperbolic sine function.

Applying the mapping operator to the coefficients and taking the inverse wavelet transform

would result in a compressed dynamic range with a significant loss of contrast. Thus, a

center/surround procedure that preserves/enhances the local contrast is applied to those

mapped coefficients.

Local Contrast Enhancement

The local contrast enhancement, which employs a center/surround approach, is carried out as follows. The surrounding intensity information related to each coefficient is obtained by filtering the normalized approximation coefficients with the Gaussian kernel in (7), where s is the surround space constant and k is a normalization constant determined under the constraint that the kernel coefficients sum to one. The local average image representing the surround is obtained by 2D convolution of (7) with the image A′, the elements of which are the normalized approximation coefficients a′J,k,l and are given

Page 55: Areial Image

by (6):

The contrast-enhanced coefficient matrix Anew, which will replace the original approximation coefficients aJ,k,l, is given by

where R is the centre/surround ratio; d is the enhancement strength constant with a default value of 1; and A is the matrix whose elements are the output of the hyperbolic sine function in (5).

A linear combination of three kernels with three different scales, the combined-scale Gaussian (Gc), is used for improved rendition and is given by

Page 56: Areial Image

Detail Coefficient Modification

The detail coefficients are modified using the ratio between the enhanced and

original approximation coefficients. This ratio is applied as an adaptive gain mask, element-wise:

Dnewh = (Anew / A) · Dh,   Dnewv = (Anew / A) · Dv,   Dnewd = (Anew / A) · Dd,

where A and Anew are the original and the enhanced approximation coefficient matrices at level 1; Dh, Dv, Dd are the detail coefficient matrices for the horizontal, vertical and diagonal details at the same level; and Dnewh, Dnewv, Dnewd are the corresponding

modified matrices, respectively. If the wavelet decomposition is carried out for more than one

level, this procedure is repeated for each level.
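In MATLAB terms, the modification amounts to an element-wise scaling of the three detail matrices by Anew./A. The sketch below builds a one-level separable Haar decomposition of a toy matrix simply to have same-sized A, Dh, Dv and Dd to work with; the Haar filters and the stand-in compressive mapping used for Anew are our assumptions and are not the algorithm's Equation 5.

I = double(magic(8));                          % toy image standing in for the intensity channel
h = [1 1] / sqrt(2);   g = [-1 1] / sqrt(2);   % Haar filters (illustrative choice)

% One level of a separable 2D DWT: filter/subsample the rows, then the columns.
L = conv2(I, h);   L = L(:, 2:2:end);          % lowpass along rows
H = conv2(I, g);   H = H(:, 2:2:end);          % highpass along rows
A  = conv2(L, h');  A  = A(2:2:end, :);        % approximation coefficients
Dh = conv2(L, g');  Dh = Dh(2:2:end, :);       % horizontal details
Dv = conv2(H, h');  Dv = Dv(2:2:end, :);       % vertical details
Dd = conv2(H, g');  Dd = Dd(2:2:end, :);       % diagonal details

Anew = max(A(:)) * sqrt(A / max(A(:)));        % stand-in compressive mapping (NOT Equation 5)

% Adaptive gain mask: scale every detail coefficient by the ratio Anew./A.
gain   = Anew ./ (A + eps);                    % eps guards against division by zero
Dh_new = gain .* Dh;
Dv_new = gain .* Dv;
Dd_new = gain .* Dd;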

Color Restoration

A linear color restoration process was used to obtain the final color image in our previous work. For WDRC with color restoration, a non-linear approach is employed. The RGB values of the enhanced color image, Ienh,i(x, y), along with the CR factor, are given as:

where Ii(x, y) are the RGB values of the input color image at the corresponding pixel

location and Ienh (x, y) is the resulting enhanced intensity image derived from the inverse

wavelet transform of the modified coefficients. Here, the non-linear gain factor

Page 57: Areial Image

has a canonical value and increases the color saturation, resulting in a

more appealing color rendition. Since the coefficients are normalized during the

enhancement process, the enhanced intensity image obtained by the inverse transform of the enhanced coefficients, along with the enhanced color image given by (15), spans almost only the lower half of the full range of the histogram. For the final display-domain output, the Ienh,i values in (15) are stretched to represent the full dynamic range. Histogram clipping from the upper tail of the histogram in each channel gives the best results in converting the output to the display domain.
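One simple way to carry out this final stretch is sketched below: a fixed fraction is clipped from the upper tail of each channel's histogram and the remaining range is mapped linearly to [0, 255]; the clip fraction, the stand-in input and the linear mapping are our assumptions.

Ienh_rgb = rand(256, 256, 3) * 0.5;         % stand-in for the enhanced RGB image from (15)
clip = 0.005;                               % fraction of pixels clipped from the upper tail (assumed)

out = zeros(size(Ienh_rgb));
for c = 1:3
    ch = Ienh_rgb(:, :, c);
    v  = sort(ch(:));
    hi = v(round((1 - clip) * numel(v)));   % upper-tail clip point for this channel
    lo = min(ch(:));
    out(:, :, c) = 255 * min(max((ch - lo) / (hi - lo + eps), 0), 1);   % stretch to [0, 255]
end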

Page 58: Areial Image

CONCLUSION

In this project, the application of the WDRC algorithm to aerial imagery is presented. The results obtained from a large variety of aerial images show strong robustness, high image quality, and improved visibility, indicating promise for aerial imagery during poor-visibility flight conditions. This algorithm can further be applied to real-time video streaming, and the enhanced video can be projected onto the pilot's head-up display for aviation safety.

Page 59: Areial Image

REFERENCES

[1] D. J. Jobson, Z. Rahman, G. A. Woodell, G.D.Hines, “A Comparison of Visual Statistics for

the Image Enhancement of FORESITE Aerial Images with Those of Major Image Classes,”

Visual Information Processing XV, Proc. SPIE 6246, (2006)

[2] S. M. Pizer, J. B. Zimmerman, and E. Staab, “Adaptive grey level assignment in CT scan

display,” Journal of Computer Assisted Tomography, vol. 8, pp. 300-305, (1984).

[3] J. B. Zimmerman, S. B. Cousins, K. M. Hartzell, M. E. Frisse, and M. G. Kahn, “A

psychophysical comparison of two methods for adaptive histogram equalization,” Journal of

Digital Imaging, vol. 2, pp. 82-91

(1989).

[4] S. M. Pizer and E. P. Amburn, “Adaptive histogram equalization and its variations,”

Computer Vision, Graphics, and Image Processing, vol. 39, pp. 355-368, (1987).

[5] K. Rehm and W. J. Dallas, “Artifact suppression in digital chest radiographs enhanced with

adaptive histogram equalization,” SPIE: Medical Imaging III, (1989).

[6] Y. Jin, L. M. Fayad, and A. F. Laine, “Contrast enhancement by multiscale adaptive

histogram equalization,” Proc. SPIE, vol. 4478, pp. 206-213, (2001).

[7] E. Land and J. McCann, “Lightness and Retinex theory,” Journal of the Optical Society of

America, vol. 61, pp. 1-11, (1971).

[8] A. Hurlbert, “Formal Connections Between Lightness Algorithms”, Journal of the Optical

Society of America, vol. 3, No 10 pp. 1684-1693,

(1986).

Page 60: Areial Image

[9] E. Land, “An alternative technique for the computation of the designator in the Retinex

theory of color vision,” Proc. of the National.

[10] J. McCann, “Lessons learned from mondrians applied to real images

and color gamuts,” Proc. IS&T/SID Seventh Color Imaging

Conference, pp. 1-8, (1999).

[11] R. Sobol, “Improving the Retinex algorithm for rendering wide dynamic range

photographs,” Proc. SPIE 4662, pp. 341–348, (2002).

[12] A. Rizzi, C. Gatta, and D. Marini, “From Retinex to ACE: Issues in developing a new

algorithm for unsupervised color equalization,” Journal of Electronic Imaging, vol. 13, pp. 75-

84, (2004).

[13] D. J. Jobson, Z. Rahman and G.A. Woodell, “Properties and performance of a

center/surround retinex,” IEEE Transactions on Image Processing: Special Issue on Color

Processing, No. 6, pp. 451-

462, (1997).

[14] Z. Rahman, D. J. Jobson, and G. A. Woodell, “Multiscale retinex for color image enhancement,” Proc. IEEE International Conference on Image Processing, (1996).

[15] D. J. Jobson, Z. Rahman, and G. A. Woodell, “A multi-scale retinex for bridging the gap

between color images and the human observation of scenes,” IEEE Transactions on Image

Processing, Vol. 6, pp. 965-976,

(1997).

[16] Z. Rahman, D. J. Jobson, and G. A. Woodell, “Retinex Processing for Automatic Image

Enhancement", Journal of Electronic Imaging, January (2004).

Page 61: Areial Image

Appendix A

MATLAB

A.1 Introduction

MATLAB is a high-performance language for technical computing. It integrates

computation, visualization, and programming in an easy-to-use environment where problems and

solutions are expressed in familiar mathematical notation. MATLAB stands for matrix

laboratory, and was written originally to provide easy access to matrix software developed by

the LINPACK (linear system package) and EISPACK (eigensystem package) projects. MATLAB is therefore built on a foundation of sophisticated matrix software in which the basic element is an array that does not require pre-dimensioning, which allows many technical computing problems, especially those with matrix and vector formulations, to be solved in a fraction of the time.

MATLAB features a family of application-specific solutions called toolboxes. Very

important to most users of MATLAB, toolboxes allow learning and applying specialized

technology. These are comprehensive collections of MATLAB functions (M-files) that extend

the MATLAB environment to solve particular classes of problems. Areas in which toolboxes are

available include signal processing, control system, neural networks, fuzzy logic, wavelets,

simulation and many others.

Typical uses of MATLAB include: Math and computation, Algorithm development, Data

acquisition, Modeling, simulation, prototyping, Data analysis, exploration, visualization,

Scientific and engineering graphics, Application development, including graphical user interface

building.

A.2 Basic Building Blocks of MATLAB

Page 62: Areial Image

The basic building block of MATLAB is the matrix. The fundamental data type is the array; vectors, scalars, real matrices and complex matrices are handled as specific classes of this basic data type. The built-in functions are optimized for vector operations. No dimension

statements are required for vectors or arrays.

A.2.1 MATLAB Window

MATLAB works with several windows: the command window, workspace window, current directory window, command history window, editor window, graphics (figure) window and online help window.

A.2.1.1 Command Window

The command window is where the user types MATLAB commands and expressions at

the prompt (>>) and where the output of those commands is displayed. It is opened when the

application program is launched. All commands including user-written programs are typed in

this window at MATLAB prompt for execution.

A.2.1.2 Work Space Window

MATLAB defines the workspace as the set of variables that the user creates in a work

session. The workspace browser shows these variables and some information about them.

Double clicking on a variable in the workspace browser launches the Array Editor, which can be

used to obtain information.

A.2.1.3 Current Directory Window

The current Directory tab shows the contents of the current directory, whose path is

shown in the current directory window. For example, in the windows operating system the path

might be as follows: C:\MATLAB\Work, indicating that directory “work” is a subdirectory of

the main directory “MATLAB”; which is installed in drive C. Clicking on the arrow in the

current directory window shows a list of recently used paths. MATLAB uses a search path to

find M-files and other MATLAB related files. Any file run in MATLAB must reside in the

current directory or in a directory that is on the search path.

A.2.1.4 Command History Window

Page 63: Areial Image

The Command History Window contains a record of the commands a user has entered in

the command window, including both current and previous MATLAB sessions. Previously

entered MATLAB commands can be selected and re-executed from the command history

window by right-clicking on a command or sequence of commands. This is useful for selecting various options in addition to executing the commands, and is a useful feature when experimenting with various commands in a work session.

A.2.1.5 Editor Window

The MATLAB editor is both a text editor specialized for creating M-files and a graphical

MATLAB debugger. The editor can appear in a window by itself, or it can be a sub window in

the desktop. In this window one can write, edit, create and save programs in files called M-files.

MATLAB editor window has numerous pull-down menus for tasks such as saving,

viewing, and debugging files. Because it performs some simple checks and also uses color to

differentiate between various elements of code, this text editor is recommended as the tool of

choice for writing and editing M-functions.

A.2.1.6 Graphics or Figure Window

The output of all graphic commands typed in the command window is seen in this

window.

A.2.1.7 Online Help Window

MATLAB provides online help for all its built-in functions and programming language

constructs. The principal way to get help online is to use the MATLAB help browser, opened as

a separate window either by clicking on the question mark symbol (?) on the desktop toolbar, or

by typing help browser at the prompt in the command window. The help Browser is a web

browser integrated into the MATLAB desktop that displays a Hypertext Markup Language

(HTML) documents. The Help Browser consists of two panes, the help navigator pane, used to

find information, and the display pane, used to view the information. Self-explanatory tabs other

than navigator pane are used to perform a search.

A.3 MATLAB Files

MATLAB has two types of files for storing information: M-files and MAT-files.

Page 64: Areial Image

A.3.1 M-Files

These are standard ASCII text files with a .m extension to the file name. You can create your own matrices using M-files, which are text files containing MATLAB code. The MATLAB editor or another text editor is used to create a file containing the same statements that would be typed at the MATLAB command line, and the file is saved under a name that ends in .m. There are two types of

M-files:

1. Script Files

It is an

M-file with a set of MATLAB commands in it and is executed by typing the name of the file on the

command line. These files work on global variables currently present in that environment.

2. Function Files

A function file is also an M-file except that the variables in a function file are all local.

This type of files begins with a function definition line.

A.3.2 MAT-Files

These are binary data files with a .mat extension that are created by MATLAB when data is saved. The data is written in a special format that only MATLAB can read. These files are loaded into MATLAB with the ‘load’ command.

A.4 The MATLAB System:

The MATLAB system consists of five main parts:

A.4.1 Development Environment:

 This is the set of tools and facilities that help you use MATLAB functions and files. Many

of these tools are graphical user interfaces. It includes the MATLAB desktop and Command

Page 65: Areial Image

Window, a command history, an editor and debugger, and browsers for viewing help, the

workspace, files, and the search path.

A.4.2 The MATLAB Mathematical Function Library:

This is a vast collection of computational algorithms ranging from elementary functions

like sum, sine, cosine, and complex arithmetic, to more sophisticated functions like matrix

inverse, matrix eigenvalues, Bessel functions, and fast Fourier transforms.

A.4.3 The MATLAB Language:

This is a high-level matrix/array language with control flow statements, functions, data

structures, input/output, and object-oriented programming features. It allows both "programming

in the small" to rapidly create quick and dirty throw-away programs, and "programming in the

large" to create complete large and complex application programs.

A.4.4 Graphics:

MATLAB has extensive facilities for displaying vectors and matrices as graphs, as well as

annotating and printing these graphs. It includes high-level functions for two-dimensional and

three-dimensional data visualization, image processing, animation, and presentation graphics. It

also includes low-level functions that allow you to fully customize the appearance of graphics as

well as to build complete graphical user interfaces on your MATLAB applications.

A.4.5 The MATLAB Application Program Interface (API):

This is a library that allows you to write C and FORTRAN programs that interact with

MATLAB. It includes facilities for calling routines from MATLAB (dynamic linking), calling

MATLAB as a computational engine, and for reading and writing MAT-files.

A.5 SOME BASIC COMMANDS:

pwd prints working directory

demo demonstrates what is possible in MATLAB

Page 66: Areial Image

who lists all of the variables in your MATLAB workspace

whos lists the variables and describes their matrix size

clear erases variables and functions from memory

clear x erases the matrix 'x' from your workspace

close by itself, closes the current figure window

figure creates an empty figure window

hold on holds the current plot and all axis properties so that subsequent graphing

commands add to the existing graph

hold off sets the next plot property of the current axes to "replace"

find find indices of nonzero elements e.g.:

d = find(x>100) returns the indices of the vector x that are greater than 100

break terminate execution of m-file or WHILE or FOR loop

for repeat statements a specific number of times, the general form of a FOR

statement is:

FOR variable = expr, statement, ..., statement END

for n = 1:cc/c
    magn(n,1) = nanmean(a((n-1)*c+1 : n*c, 1));   % mean of each block of c samples, ignoring NaNs
end

diff difference and approximate derivative e.g.:

DIFF(X) for a vector X, is [X(2)-X(1) X(3)-X(2) ... X(n)-X(n-1)].

NaN the arithmetic representation for Not-a-Number, a NaN is obtained as a

Page 67: Areial Image

result of mathematically undefined operations like 0.0/0.0

Inf the arithmetic representation for positive infinity, an infinity is also produced

by operations like dividing by zero, e.g. 1.0/0.0, or from overflow, e.g. exp(1000).

save saves all the matrices defined in the current session into the file,

matlab.mat, located in the current working directory

load loads contents of matlab.mat into current workspace

save filename x y z saves the matrices x, y and z into the file titled filename.mat

save filename x y z /ascii save the matrices x, y and z into the file titled filename.dat

load filename loads the contents of filename into current workspace; the file can

be a binary (.mat) file

load filename.dat loads the contents of filename.dat into the variable filename

xlabel(‘ ’) : Allows you to label x-axis

ylabel(‘ ‘) : Allows you to label y-axis

title(‘ ‘) : Allows you to give title for

plot

subplot() : Allows you to create multiple

plots in the same window

A.6 SOME BASIC PLOT COMMANDS:

Kinds of plots:

plot(x,y) creates a Cartesian plot of the vectors x & y

plot(y) creates a plot of y vs. the numerical values of the elements in the y-vector

Page 68: Areial Image

semilogx(x,y) plots y vs. x with a logarithmic scale for the x-axis

semilogy(x,y) plots y vs. x with a logarithmic scale for the y-axis

loglog(x,y) plots y vs. x with logarithmic scales for both axes

polar(theta,r) creates a polar plot of the vectors r & theta where theta is in radians

bar(x) creates a bar graph of the vector x. (Note also the command stairs(x))

bar(x, y) creates a bar-graph of the elements of the vector y, locating the bars

according to the vector elements of 'x'

Plot description:

grid creates a grid on the graphics plot

title('text') places a title at top of graphics plot

xlabel('text') writes 'text' beneath the x-axis of a plot

ylabel('text') writes 'text' beside the y-axis of a plot

text(x,y,'text') writes 'text' at the location (x,y)

text(x,y,'text','sc') writes 'text' at point x,y assuming lower left corner is (0,0)

and upper right corner is (1,1)

axis([xmin xmax ymin ymax]) sets scaling for the x- and y-axes on the current plot

A.7 ALGEBRAIC OPERATIONS IN MATLAB:

Scalar Calculations:

+ Addition

- Subtraction

* Multiplication

Page 69: Areial Image

/ Right division (a/b means a ÷ b)

\ left division (a\b means b ÷ a)

^ Exponentiation

For example 3*4 executed in 'matlab' gives ans=12

4/5 gives ans=0.8

Array products: Recall that addition and subtraction of matrices involved

addition or subtraction of the individual elements of the matrices. Sometimes it is desired to

simply multiply or divide each element of a matrix by the corresponding element of another matrix; these are called 'array operations'.

Array or element-by-element operations are executed when the operator is preceded by a '.'

(Period):

a .* b multiplies each element of a by the respective element of b

a ./ b divides each element of a by the respective element of b

a .\ b divides each element of b by the respective element of a

a .^ b raises each element of a to the power of the respective element of b
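A quick worked example of the difference between the matrix product and the element-by-element product (the values are arbitrary):

a = [1 2; 3 4];
b = [5 6; 7 8];

a * b      % matrix product:             [19 22; 43 50]
a .* b     % element-by-element product: [ 5 12; 21 32]
a ./ b     % element-by-element division of a by b
a .^ 2     % squares each element of a:  [1 4; 9 16]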

A.8 MATLAB WORKING ENVIRONMENT:

A.8.1 MATLAB DESKTOP

Matlab Desktop is the main Matlab application window. The desktop contains five sub

windows, the command window, the workspace browser, the current directory window, the

command history window, and one or more figure windows, which are shown only when the

user displays a graphic.

The command window is where the user types MATLAB commands and expressions at

the prompt (>>) and where the output of those commands is displayed. MATLAB defines the

workspace as the set of variables that the user creates in a work session.

Page 70: Areial Image

The workspace browser shows these variables and some information about them. Double

clicking on a variable in the workspace browser launches the Array Editor, which can be used to obtain information and, in some instances, edit certain properties of the variable.

The current Directory tab above the workspace tab shows the contents of the current

directory, whose path is shown in the current directory window. For example, in the windows

operating system the path might be as follows: C:\MATLAB\Work, indicating that directory

“work” is a subdirectory of the main directory “MATLAB”, which is installed in drive C. Clicking on the arrow in the current directory window shows a list of recently used paths. Clicking on the button to the right of the window allows the user to change the current

directory.

MATLAB uses a search path to find M-files and other MATLAB-related files, which are organized in directories in the computer file system. Any file run in MATLAB must reside in the current directory or in a directory that is on the search path. By default, the files supplied with MATLAB and MathWorks toolboxes are included in the search path. The easiest way to see which directories are on the search path, or to add or modify the search path, is to select Set Path from the File menu of the desktop, and then use the Set Path dialog box. It is good practice to add any commonly used directories to the search path to avoid repeatedly having to change the current directory.

The Command History Window contains a record of the commands a user has entered in

the command window, including both current and previous MATLAB sessions. Previously

entered MATLAB commands can be selected and re-executed from the command history

window by right clicking on a command or sequence of commands.

This action launches a menu from which to select various options in addition to executing the commands. This is a useful feature when experimenting with various commands in a work session.

A.8.2 Using the MATLAB Editor to create M-Files:

Page 71: Areial Image

The MATLAB editor is both a text editor specialized for creating M-files and a graphical

MATLAB debugger. The editor can appear in a window by itself, or it can be a sub window in

the desktop. M-files are denoted by the extension .m, as in pixelup.m.

The MATLAB editor window has numerous pull-down menus for tasks such as saving,

viewing, and debugging files. Because it performs some simple checks and also uses color to

differentiate between various elements of code, this text editor is recommended as the tool of

choice for writing and editing M-functions.

To open the editor, type edit filename at the prompt; this opens the M-file filename.m in an editor

window, ready for editing. As noted earlier, the file must be in the current directory, or in a

directory in the search path.

A.8.3 Getting Help:

The principal way to get help online is to use the MATLAB help browser, opened as a

separate window either by clicking on the question mark symbol (?) on the desktop toolbar, or by

typing help browser at the prompt in the command window. The help Browser is a web browser

integrated into the MATLAB desktop that displays Hypertext Markup Language (HTML)

documents. The Help Browser consists of two panes, the help navigator pane, used to find

information, and the display pane, used to view the information. Self-explanatory tabs other than

navigator pane are used to perform a search.

Appendix B

INTRODUCTION TO DIGITAL IMAGE PROCESSING

6.1 What is DIP?

An image may be defined as a two-dimensional function f(x, y), where x & y are

spatial coordinates, & the amplitude of f at any pair of coordinates (x, y) is called the intensity

or gray level of the image at that point. When x, y & the amplitude values of f are all finite

Page 72: Areial Image

discrete quantities, we call the image a digital image. The field of DIP refers to processing digital

image by means of digital computer. Digital image is composed of a finite number of elements,

each of which has a particular location & value. The elements are called pixels.

Vision is the most advanced of our senses, so it is not surprising that images play the single most important role in human perception. However, unlike humans, who are limited to the visual band of the EM spectrum, imaging machines cover almost the entire EM spectrum, ranging from gamma rays to radio waves. They can also operate on images generated by sources that humans are not accustomed to associating with images.

There is no general agreement among authors regarding where image processing stops &

other related areas such as image analysis & computer vision start. Sometimes a distinction is made by defining image processing as a discipline in which both the input & output of a process are images. This is a limiting & somewhat artificial boundary. The area of image analysis (image

understanding) is in between image processing & computer vision.

There are no clear-cut boundaries in the continuum from image processing at one end to

complete vision at the other. However, one useful paradigm is to consider three types of

computerized processes in this continuum: low-, mid-, & high-level processes. A low-level process involves primitive operations such as image preprocessing to reduce noise, contrast enhancement & image sharpening. A low-level process is characterized by the fact that both its inputs & outputs are images. A mid-level process on images involves tasks such as segmentation, description of those objects to reduce them to a form suitable for computer processing, & classification of individual objects. A mid-level process is characterized by the fact that its inputs generally are images but its outputs are attributes extracted from those images. Finally, higher-level processing involves “making sense” of an ensemble of recognized objects, as in image analysis, &, at the far end of the continuum, performing the cognitive functions normally associated with human vision.

Page 73: Areial Image

Digital image processing, as already defined, is used successfully in a broad range of

areas of exceptional social & economic value.

6.2 What is an image?

An image is represented as a two dimensional function f(x, y) where x and y are spatial co-

ordinates and the amplitude of ‘f’ at any pair of coordinates (x, y) is called the intensity of the

image at that point.

Gray scale image:

A grayscale image is a function I(x, y) of the two spatial coordinates of the image plane: I(x, y) is the intensity of the image at the point (x, y) on the image plane. I(x, y) takes non-negative values; assuming the image is bounded by a rectangle [0, a] × [0, b], we have I: [0, a] × [0, b] → [0, ∞).

Color image:

It can be represented by three functions, R(x, y) for red, G(x, y) for green and B(x, y) for blue.

An image may be continuous with respect to the x and y coordinates and also in

amplitude. Converting such an image to digital form requires that the coordinates as well as the amplitude be digitized. Digitizing the coordinate values is called sampling. Digitizing the

amplitude values is called quantization.

6.3 Coordinate convention:

The result of sampling and quantization is a matrix of real numbers. We use two principal

ways to represent digital images. Assume that an image f(x, y) is sampled so that the resulting

Page 74: Areial Image

image has M rows and N columns. We say that the image is of size M x N. The values of the coordinates (x, y) are discrete quantities. For notational clarity and convenience, we use integer values for these discrete coordinates. In many image processing books, the image origin is defined to be at (x, y) = (0, 0). The next coordinate values along the first row of the image are (x, y) = (0, 1). It is important to keep in mind that the notation (0, 1) is used to signify the second sample along the first row. It does not mean that these are the actual values of physical coordinates when the image was sampled. The following figure shows the coordinate convention.

Note that x ranges from 0 to M-1 and y from 0 to N-1 in integer increments.

The coordinate convention used in the toolbox to denote arrays differs from the preceding paragraph in two minor ways. First, instead of using (x, y), the toolbox uses the notation (r, c) to indicate rows and columns. Note, however, that the order of coordinates is the same as the order discussed in the previous paragraph, in the sense that the first element of a coordinate tuple, (a, b), refers to a row and the second to a column. The other difference is that the origin of the coordinate system is at (r, c) = (1, 1); thus, r ranges from 1 to M and c from 1 to N in integer increments. The IPT documentation refers to these as pixel coordinates. Less frequently, the toolbox also employs another coordinate convention, called spatial coordinates, which uses x to refer to columns and y to refer to rows. This is the opposite of our use of the variables x and y.
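The difference between the two conventions can be illustrated with a short, hypothetical snippet (the file name is assumed from the reading example later in this section): the pixel that the mathematical notation calls f(0, 0) is addressed as f(1, 1) in the toolbox.

>> f = imread('chestxray.jpg');   % assumed example file
>> top_left = f(1, 1)             % row 1, column 1: the image origin in MATLAB
>> f(1, 2)                        % second sample along the first row, i.e. f(0, 1)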

6.4 Image as Matrices:

The preceding discussion leads to the following representation for a digitized image

function:

f(x, y) = | f(0, 0)      f(0, 1)      ...   f(0, N-1)   |
          | f(1, 0)      f(1, 1)      ...   f(1, N-1)   |
          |   ...          ...                 ...       |
          | f(M-1, 0)    f(M-1, 1)    ...   f(M-1, N-1) |


The right side of this equation is a digital image by definition. Each element of this

array is called an image element, picture element, pixel or pel. The terms image and pixel are

used throughout the rest of our discussions to denote a digital image and its elements.

A digital image can be represented naturally as a MATLAB matrix:

f = | f(1, 1)   f(1, 2)   ...   f(1, N) |
    | f(2, 1)   f(2, 2)   ...   f(2, N) |
    |   ...       ...             ...   |
    | f(M, 1)   f(M, 2)   ...   f(M, N) |

where f(1, 1) = f(0, 0) (note the use of a monospace font to denote MATLAB quantities). Clearly, the two representations are identical except for the shift in origin. The notation f(p, q) denotes the element located in row p and column q. For example, f(6, 2) is the element in the sixth row and second column of the matrix f. Typically we use the letters M and N, respectively, to denote the number of rows and columns in a matrix. A 1xN matrix is called a row vector, whereas an Mx1 matrix is called a column vector. A 1x1 matrix is a scalar.

Matrices in MATLAB are stored in variables with names such as A, a, RGB, real_array, and so on. Variables must begin with a letter and contain only letters, numerals, and underscores. As noted in the previous paragraph, all MATLAB quantities are written using monospace characters. We use conventional Roman italic notation, such as f(x, y), for mathematical expressions.
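The matrix view can be tried directly at the command line; the small array below is purely illustrative:

>> A = [10 20 30; 40 50 60; 70 80 90];   % a 3x3 matrix treated as a tiny image
>> row2 = A(2, :)                        % 1x3 row vector (second row)
>> col3 = A(:, 3)                        % 3x1 column vector (third column)
>> s = A(1, 1)                           % 1x1 matrix, i.e. a scalar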

6.5 Reading Images:

Images are read into the MATLAB environment using function imread whose syntax is

imread(‘filename’)


Format name   Description                          Recognized extensions
TIFF          Tagged Image File Format             .tif, .tiff
JPEG          Joint Photographic Experts Group     .jpg, .jpeg
GIF           Graphics Interchange Format          .gif
BMP           Windows Bitmap                       .bmp
PNG           Portable Network Graphics            .png
XWD           X Window Dump                        .xwd

Here filename is a string containing the complete name of the image file (including any applicable extension). For example, the command line

>> f = imread('chestxray.jpg');

reads the JPEG image chestxray (see the table above) into image array f. Note the use of single quotes (') to delimit the string filename. The semicolon at the end of a command line is used by MATLAB to suppress output; if a semicolon is not included, MATLAB displays the results of the operation(s) specified in that line. The prompt symbol (>>) designates the beginning of a command line, as it appears in the MATLAB command window.

When, as in the preceding command line, no path is included in filename, imread reads the file from the current directory and, if that fails, tries to find the file in the MATLAB search path. The simplest way to read an image from a specified directory is to include a full or relative path to that directory in filename. For example,

>> f = imread('D:\myimages\chestxray.jpg');

reads the image from a folder called myimages on the D: drive, whereas


>> f = imread('.\myimages\chestxray.jpg');

reads the image from the myimages subdirectory of the current working directory. The Current Directory window on the MATLAB desktop toolbar displays MATLAB's current working directory and provides a simple, manual way to change it. The table above lists some of the most popular image/graphics formats supported by imread and imwrite.

Function size gives the row and column dimensions of an image:

>> size(f)

ans =
        1024        1024

This function is particularly useful in programming, when used in the following form to determine automatically the size of an image:

>> [M, N] = size(f);

This syntax returns the number of rows (M) and columns (N) in the image.
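For instance, M and N obtained this way can be used to preallocate an output array of the same size (a sketch, assuming f has already been read):

>> [M, N] = size(f);
>> g = zeros(M, N, 'uint8');   % preallocated output image, same size as f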

The whos function displays additional information about an array. For instance, the statement

>> whos f

gives

Name      Size           Bytes      Class
f         1024x1024      1048576    uint8 array

Grand total is 1048576 elements using 1048576 bytes

The uint8 entry shown refers to one of several MATLAB data classes. A semicolon at the end of a whos line has no effect, so normally one is not used.


6.6 Displaying Images:

Images are displayed on the MATLAB desktop using function imshow, which has the basic syntax

imshow(f, g)

where f is an image array and g is the number of intensity levels used to display it. If g is omitted, it defaults to 256 levels. Using the syntax

imshow(f, [low high])

displays as black all values less than or equal to low and as white all values greater than or equal to high. The values in between are displayed as intermediate intensity values using the default number of levels. Finally, the syntax

imshow(f, [ ])

sets variable low to the minimum value of array f and high to its maximum value. This form of imshow is useful for displaying images that have a low dynamic range or that have positive and negative values.
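As a hypothetical illustration of the display-range syntax (the threshold values 64 and 192 are arbitrary choices, not from the original text):

>> imshow(f, [64 192])   % values <= 64 shown as black, values >= 192 as white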

Function pixval is used frequently to display the intensity values of individual pixels interactively. This function displays a cursor overlaid on an image. As the cursor is moved over the image with the mouse, the coordinates of the cursor position and the corresponding intensity values are shown on a display that appears below the figure window. When working with color images, the coordinates as well as the red, green, and blue components are displayed. If the left button on the mouse is clicked and then held pressed, pixval displays the Euclidean distance between the initial and current cursor locations.

The syntax form of interest here is pixval on its own, which shows the cursor on the last image displayed. Clicking the X button on the cursor window turns it off.


The following statements read from disk an image called rose_512.tif, extract basic information about the image, and display it using imshow:

>> f = imread('rose_512.tif');
>> whos f

Name      Size         Bytes     Class
f         512x512      262144    uint8 array

Grand total is 262144 elements using 262144 bytes

>> imshow(f)

A semicolon at the end of an imshow line has no effect, so normally one is not used. If another image, g, is displayed using imshow, MATLAB replaces the image on the screen with the new image. To keep the first image and output a second image, we use function figure as follows:

>> figure, imshow(g)

Using the statement

>> imshow(f), figure, imshow(g)

displays both images. Note that more than one command can be written on a line, as long as the different commands are properly delimited by commas or semicolons. As mentioned earlier, a semicolon is used whenever it is desired to suppress screen output from a command line.

Suppose that we have just read an image h and find that displaying it with imshow produces a dim, low-contrast result. It is clear that this image has a low dynamic range, which can be remedied for display purposes by using the statement


>> imshow(h, [ ])
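A short sketch of the typical before-and-after comparison (assuming h is the low-dynamic-range image just mentioned):

>> figure, imshow(h)        % raw display: appears flat and low in contrast
>> figure, imshow(h, [ ])   % display scaled to the minimum and maximum of h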

6.7 Writing Images:

Images are written to disk using function imwrite, which has the following basic syntax:

imwrite(f, 'filename')

With this syntax, the string contained in filename must include a recognized file format extension. For example, the following command writes f to a TIFF file named patient10_run1.tif:

>> imwrite(f, 'patient10_run1.tif')

Alternatively, the desired format can be specified explicitly with a third input argument:

>> imwrite(f, 'patient10_run1', 'tif')

If filename contains no path information, then imwrite saves the file in the current working directory.

The imwrite function can have other parameters, depending on the file format selected. Most of the work in the following deals either with JPEG or TIFF images, so we focus attention here on these two formats.

A more general imwrite syntax, applicable only to JPEG images, is

imwrite(f, 'filename.jpg', 'quality', q)

where q is an integer between 0 and 100 (the lower the number, the higher the degradation due to JPEG compression). For example, for q = 25 the applicable syntax is


>> imwrite(f, 'bubbles25.jpg', 'quality', 25)

The image for q = 15 has false contouring that is barely visible, but this effect becomes quite pronounced for q = 5 and q = 0. Thus, an acceptable solution with some margin for error is to compress the images with q = 25. In order to get an idea of the compression achieved and to obtain other image file details, we can use function imfinfo, which has the syntax

imfinfo filename

where filename is the complete file name of the image stored on disk. For example,

>> imfinfo bubbles25.jpg

outputs the following information (note that some fields contain no information in this case):

Filename: 'bubbles25.jpg'
FileModDate: '04-jan-2003 12:31:26'
FileSize: 13849
Format: 'jpg'
FormatVersion: ''
Width: 714
Height: 682
BitDepth: 8
ColorType: 'grayscale'
FormatSignature: ''


Comment: { }

where FileSize is in bytes. The number of bytes in the original image is computed simply by multiplying Width by Height by BitDepth and dividing the result by 8, which gives 486948. Dividing this by the compressed file size gives the compression ratio: 486948/13849 = 35.16. This compression ratio was achieved while maintaining image quality consistent with the requirements of the application. In addition to the obvious advantages in storage space, this reduction allows the transmission of approximately 35 times the amount of uncompressed data per unit time.
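To get a feel for how the quality factor trades file size against degradation, a small, hypothetical sweep can be written with imwrite and imfinfo (the file names are illustrative only):

>> for q = [0 5 15 25 50 100]
       fname = sprintf('bubbles_q%d.jpg', q);
       imwrite(f, fname, 'quality', q);
       info = imfinfo(fname);
       fprintf('q = %3d  ->  %d bytes\n', q, info.FileSize);
   end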

The information fields displayed by imfinfo can be captured into a so-called structure variable that can be used for subsequent computations. Using the preceding image as an example and assigning the name K to the structure variable, we use the syntax

>> K = imfinfo('bubbles25.jpg');

to store into variable K all the information generated by command imfinfo. The information generated by imfinfo is accessed through the fields of the structure variable, separated from K by a dot. For example, the image height and width are now stored in the structure fields K.Height and K.Width.

As an illustration, consider the following use of structure variable K to compute the compression ratio for bubbles25.jpg:

>> K = imfinfo('bubbles25.jpg');
>> image_bytes = K.Width * K.Height * K.BitDepth / 8;
>> compressed_bytes = K.FileSize;
>> compression_ratio = image_bytes / compressed_bytes

compression_ratio =
   35.1612

Note that imfinfo was used in two different ways. The first was to type imfinfo bubbles25.jpg at the prompt, which resulted in the information being displayed on the screen. The second was to type K = imfinfo('bubbles25.jpg'), which resulted in the information generated by imfinfo being stored in K. These two different ways of calling imfinfo are an example of command-function duality, an important concept that is explained in more detail in the MATLAB online documentation.

A more general imwrite syntax, applicable only to TIFF images, has the form

imwrite(g, 'filename.tif', 'compression', 'parameter', 'resolution', [colres rowres])

where 'parameter' can have one of the following principal values: 'none' indicates no compression; 'packbits' indicates packbits compression (the default for nonbinary images); and 'ccitt' indicates ccitt compression (the default for binary images). The 1x2 array [colres rowres] contains two integers that give the column resolution and row resolution in dots per unit. For example, if the image dimensions are in inches, colres is the number of dots (pixels) per inch (dpi) in the vertical direction, and similarly for rowres in the horizontal direction. Specifying the resolution by a single scalar, res, is equivalent to writing [res res]. For example,

>> imwrite(f, 'sf.tif', 'compression', 'none', 'resolution', [300 300])

The values of the vector [colres rowres] were determined by multiplying 200 dpi by the ratio 2.25/1.5, which gives 300 dpi. Rather than do the computation manually, we could write

>> res = round(200*2.25/1.5);
>> imwrite(f, 'sf.tif', 'compression', 'none', 'resolution', res)

where function round rounds its argument to the nearest integer. It is important to note that the number of pixels was not changed by these commands; only the scale of the image changed. The original 450x450 image at 200 dpi is of size 2.25 x 2.25 inches. The new 300-dpi image is identical, except that its 450x450 pixels are distributed over a 1.5 x 1.5-inch area. Processes such as this are useful for controlling the size of an image in a printed document without sacrificing resolution.

Often it is necessary to export images to disk the way they appear on the MATLAB desktop. This is especially true with plots. The contents of a figure window can be exported to disk in two ways. The first is to use the File pull-down menu in the figure window and then choose Export; with this option the user can select a location, filename, and format. More control over export parameters is obtained by using the print command:

print -fno -dfileformat -rresno filename

where no refers to the number of the figure window of interest, fileformat refers to one of the file formats in the table above, resno is the resolution in dpi, and filename is the name we wish to assign to the file. If we simply type print at the prompt, MATLAB prints (to the default printer) the contents of the last figure window displayed. It is also possible to specify other options with print, such as a specific printing device.
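For example, a sketch of exporting figure 1 as a 300-dpi TIFF file (the output name hi_res_plot is assumed for illustration):

>> print -f1 -dtiff -r300 hi_res_plot   % writes hi_res_plot.tif at 300 dpi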