Remote Sensing-Image Interpretation


Remote sensing: Image interpretation

Data collected by the sensor onboard the space, air, or terrestrial platform is available in the form of digital images. This data is processed to derive useful information about earth features.

Interpretation of these images involves various steps after suitable corrections, enhancements, and classification techniques have been applied. A typical image interpretation may involve manual and digital (computer-assisted) procedures (Figure).

    Conceptual framework for image analysis procedures (Colwell, 1983)

    Image interpretation

It can be defined as the act of examining images for the purpose of identifying objects and judging their significance. Depending upon the instruments employed for data collection, one can interpret a variety of images such as aerial photographs, scanner, thermal and radar imagery. Even digitally processed imagery requires image interpretation.

    The success in image interpretation is a function of:

    o training and experience of interpreter

    o nature of objects being interpreted

    o image quality


    Basic principles

o An image is a pictorial representation of the pattern of a landscape, which is composed of elements: indicators of things and events that reflect the physical, biological, and cultural components of the landscape.

o Similar conditions in similar surroundings reflect similar patterns, and unlike conditions reflect unlike patterns.

o The type and nature of the extracted information is proportional to the knowledge, skill, and experience of the interpreter, the method used for interpretation, and the understanding of its limitations.

    Factors governing interpretability

    Sensor characteristics

    Season of the year

    Time of the day

    Atmospheric effects

Imaging system resolution

Image scale

Image motion

Stereoscopic parallax

    Visual and mental acuity of interpreter

    Equipment and techniques of interpretation

    Exposure and processing

    Interpretation keys

Elements of image interpretation

The following image characteristics allow the interpreter to detect, delineate, identify and evaluate objects:

1. Shape

The specific shape of the object under consideration; relates to the general form, configuration, or outline of an individual object. A railway is readily distinguishable from a road as its shape consists of long straight tangents and gentle curves, as opposed to the curved shape of a highway.

2. Size

Length, width, height, area, and volume of the object. It is a function of image scale.

3. Tone

Grey tone or type of colour of the object in the image representation, referring to its reflective and emissive properties.

4. Shadow

Characteristic shadow makes (possibly hidden) objects recognizable both in passive sensor systems with the sun as the illumination source and in active systems, as in the occurrence of radar shadow. Shadow may also provide height information. It is useful in two opposing ways: the outline of a shadow affords a profile view of objects, which aids interpretation; however, objects within shadow reflect little and pose difficulty in interpretation.


5. Pattern

A spatial phenomenon such as a noise pattern or a structural pattern (spatial repetition) of an object in an image may be characteristic of artificial as well as natural objects, such as parceling patterns, land use, geomorphology of tidal marshes or shallows, land reclamation, erosion gullies, tillage, plant direction, ridges of sea waves, lake districts, natural terrain, etc.

6. Texture

The spatial grey tone distribution of an object in the image may enable recognition: qualitatively described with terms like coarse, fine, regular, irregular, fibrous, smooth, rough; quantitatively described by mathematical texture measures, valid within a selected image window.

7. Site

The location of an object amidst certain terrain characteristics shown by the image may exclude incorrect conclusions, e.g., the site of an apartment building is not acceptable in a swamp or a jungle.

8. Association

The interrelationship of objects on the basis of general geographical knowledge, basic knowledge of physics, or specific professional knowledge strengthens the interpretation of parts of the image, e.g., the relationship between the flow of a river, its banks and the adjacent slopes; a power station discharging cooling water will be sited along a river; an industrial area may indicate the vicinity of an urban area.

9. Resolution

The spatial resolution of a sensor determines the size of the object detail just distinguishable, which obviously also depends on the radiometric resolution and the contrast in the surroundings of the detail. Objects of a size or repetition measure considerably smaller than these resolutions will not be recognized or designated in the image. Resolution as an interpretation element may also refer to the concept of phenomenological resolution, which means the extent of the surroundings of a detail necessary for recognition.

During the interpretation process, one uses a combination of various interpretation elements. Tone or colour is the most important and simplest of the interpretation elements. It is used as the first level of element for discrimination (Figure). Other elements, such as those characterizing the spatial arrangements in a scene, are used at secondary and higher levels, which are fairly complex for use during computational implementation.

    Primary ordering of image interpretation elements (Colwell, 1983)

Activities in image interpretation

Various activities can be grouped as:

Detection: Selectively picking up the object of importance for the particular kind of interpretation.

Recognition and identification: Classification of an object, by means of specific knowledge, within a known category upon its detection in the image.

Analysis: Process of separating a set of similar objects; it involves drawing boundary lines.

Deduction: Separation of different groups of objects and deducing their significance based on converging evidence.

Classification: Establishment of the identity of objects delineated by analysis.

Idealization: Standardization of the representation of what is actually seen in imagery.

    Digital technique for interpretation

Data collected by the sensor onboard the space or airborne system is available in the form of digital images.

Digital Image Processing (DIP) is concerned with the computer processing of pictures or images that have been converted into numeric form. The purpose of DIP is to enhance or improve the image in some way, or to extract information from it.

    Various advantages of DIP are:

    o Cost effective in terms of money and time.


    o Quantitative information is available.

    o Multidate, multispectral, and multisource data analysis is possible.

    o Various types of computations are possible: areal extent, statistics etc.

o Versatile and repeatable, hence precision is maintained.

    Some limitations of DIP are:

o Complementary to visual approach.

o Less accurate for subtle interpretation.

    DIP system

A DIP system may be considered as a unified collection of computer programs written in high-level languages and designed for processing remotely sensed data for a variety of applications.

A typical sequence of operations for image processing is given below:

    Typical sequence of operations in a DIP system


(a) Components of DIP system

A DIP system has two main parts: (a) hardware and (b) software.

Hardware: A typical hardware configuration is indicated in the figure (Hardware components of a DIP system).

Software: DIP-related software consists of mainly two parts:

(1) Operating system related software

(2) Image processing related software: (a) image processing related command language, (b) application programs

(b) Typical software functions in DIP

The following table lists a typical set of DIP functions (Jensen, 19)

1. Pre-processing
(A) Radiometric correction
(B) Geometric correction

2. Display and Enhancement
(C) Black and white display
(D) Colour composite display
(E) Density slicing
(F) Magnification and reduction
(G) Transects
(H) Contrast stretch


(I) Image algebra (band ratioing, differencing, etc.)
(J) Spatial filtering
(K) Edge enhancement
(L) Principal components
(M) Linear combinations (Kauth transform)
(N) Texture transforms
(O) Fourier transforms

3. Information Extraction
(P) Supervised classification
(Q) Unsupervised classification
(R) Contextual classification
(S) Incorporation of ancillary data in classification

4. Geographical Information System
(T) Raster- or image-based GIS
(U) Vector- or polygon-based GIS

5. Integrated system
(V) Complete image processing system (functions A to S)
(W) Complete image processing and GIS (functions A to S and T to U)

A few widely used DIP software packages are ERDAS Imagine, IDRISI, Geomatica, ERMapper, ILWIS, and ENVI.

Introduction to DIP techniques

Concept of digital image

A digital image is a file containing numbers that constitute gray level values or digital number (DN) values, and is usually stored in the computer as a two-dimensional array.

The upper figure shows a sample image of a simple geometric pattern, with its corresponding digital image in the lower figure. This digital image has eleven rows and eleven columns. Each DN in this digital image corresponds to one small area of the visual image and gives the level of darkness or lightness of the area. The higher the DN value, the lighter the area. Hence the zero value represents perfect black, the maximum value perfect white, and the intermediate values are shades of gray.
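To make the array idea concrete, here is a minimal Python/NumPy sketch of a toy digital image held as a two-dimensional DN array; the 11 x 11 size matches the example above, but the pixel values themselves are invented purely for illustration.

```python
import numpy as np

# A toy 11 x 11 digital image: each entry is a DN (grey level) value.
# 0 represents black, 255 white, intermediate values shades of grey.
dn = np.zeros((11, 11), dtype=np.uint8)   # 11 rows x 11 columns, all black
dn[3:8, 3:8] = 200                        # an invented lighter square in the middle

print(dn.shape)            # (11, 11) -> rows, columns
print(dn[5, 5])            # DN value of the pixel at row 5, column 5
print(dn.min(), dn.max())  # darkest and brightest DN in the image
```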


    Pixel

The term pixel is derived from two words, picture and element, and represents the smallest representative area to which a DN value is assigned. Each pixel has an associated DN value and coordinates in terms of rows and columns. These give its location and attribute in the image array. The origin of the coordinate system adopted and the corresponding gray level values are shown in the previous figures.

    Grey level value

The numeric value assigned to each pixel is called the grey level value (pixel or DN value). The minimum and maximum values assigned in an image depend on how the photograph is scanned. The scanner provides an option to select the range of these values during scanning of photographs. For example, scanning a photograph for a range of 0 to ng - 1 generates ng grey levels, with 0 as the minimum and ng - 1 as the maximum grey level value. Usually a scale of 0 to 255 is used; this is also called an 8-bit or one-byte image.

Introduction to image processing techniques

    Image pre-processing

    Image pre-processing operations aim to correct distorted or degraded image data to create a

    more faithful representation of the original scene.

    This typically involves the initial processing of raw image data to calibrate the data

    radiometrically and to correct for geometric distortions.

    These operations are called pre-processing because they normally precede further

    manipulation and analysis of image data to extract specific information.

The nature of any image pre-processing operation depends upon the characteristics of the sensor used to acquire the image data.

    Two stages in pre-processing:

o Radiometric correction

o Geometric correction

(A) Radiometric correction

The radiance measured by an RS system depends upon the following factors:

1. changes in scene illumination
2. atmospheric conditions

    3. viewing geometry

    4. instrument response characteristics

Radiometric errors are present in the form of noise, which is any unwanted disturbance in image data due to limitations in the sensing, signal digitization, and data recording processes. The potential sources of these errors are:

(a) periodic drift or malfunctioning of a detector
(b) electronic interference between sensor components
(c) intermittent hiccups in data transmission and recording

    Radiometric errors are of two types:

    o Internal errors

    Calibration source

    Detector response

    o External errors

    Atmospheric attenuation

Internal errors and corrections

Internal errors, which include errors of calibration and detector response, can be corrected at two levels:

(a) Nominal correction

These corrections or calibration techniques attempt to make the detector outputs correct. They are primarily applied by the agency responsible for maintaining the data quality.


    An onboard radiance calibration mechanism is provided to correct for drift of detector

    output from time to time and identify correct input/output values for each detector.

Occasional solar observations are used to correct for changes in the output of the calibration lamp.

    (b) Supplemental corrections

    Supplemental corrections are applied when the nominal correction methods fail to fully

compensate for differences in detector outputs. These provide only a cosmetic correction and attempt only to make the output from the detectors equal by statistical procedures at the user's end.

    (a) Nominal corrections

Some operational satellite systems have in-flight calibration facilities; others do not, or it is difficult to use this ancillary information.

The quantitative use of satellite radiometry needs ground verification of satellite-measured radiance values referring to ground areas of known reflectances. The large uniform area of gypsum sand at White Sands, New Mexico has been thoroughly studied as a calibration site for the Landsat 4/5 TM, SPOT HRV and NOAA AVHRR sensors for the following reasons:

o Extensive, flat area.

o High, uniform reflectance of this material in the visible and near IR.

o Close to being a Lambertian reflector.

o Situated at an elevation of about 1200 m in a region with low aerosol loading, hence a high chance of clear weather.

(b) Supplemental corrections: detector-related / detector response errors (Jensen)

(1) Line dropout errors

In this kind of error, a particular line may contain spurious DN values (zero). If one of the six detectors in Landsat MSS or one of the sixteen detectors in TM fails to function during a scan, this results in a brightness of zero for that scan line. This is often called line dropout and may appear as a completely black line in band k of the imagery. There is no way to restore this lost data.

    However, once the problem line is identified by using a simple thresholding algorithm that

    can flag any scan line having a mean brightness value at or near zero, it is possible to

    evaluate the output for the dropout line as the pixel-wise average of the preceding and

    succeeding lines which are not influenced by dropout errors.
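The following Python/NumPy sketch illustrates this simple correction, assuming isolated dropout lines; the function name and the near-zero threshold are assumptions for illustration, not taken from the original text.

```python
import numpy as np

def fix_line_dropout(band, threshold=1.0):
    """Replace dropout scan lines (mean brightness at or near zero) with the
    pixel-wise average of the neighbouring lines. Assumes dropouts are
    isolated, i.e. the neighbouring lines are good."""
    out = band.astype(np.float64).copy()
    line_means = out.mean(axis=1)
    bad = np.where(line_means <= threshold)[0]    # flag suspect scan lines
    for i in bad:
        above = i - 1 if i > 0 else i + 1         # fall back at the image edges
        below = i + 1 if i < out.shape[0] - 1 else i - 1
        out[i, :] = 0.5 * (out[above, :] + out[below, :])
    return out.round().astype(band.dtype)
```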

(2) Line striping/banding errors

Sometimes a detector does not fail completely, but simply goes out of adjustment (e.g. provides readings perhaps twice as great as the other detectors for the same band). This is referred to as n-line striping or banding. For example, Landsat MSS has 6 detectors per band. If operating perfectly, each detector would give the same output when receiving the same input. However, with the lapse of time, the radiometric response of one or more of the detectors tends to drift.

Such errors can be corrected by applying a linear model which assumes that the mean and the standard deviation of the data from each detector should be the same, i.e. the detector imbalance is considered to be the only factor producing differences in mean and standard deviation. To get rid of the striping effects of detector imbalance, the means and standard deviations are equalized, i.e. forced to be equal to a chosen value (the overall mean and the overall standard deviation of the image).
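A minimal sketch of this linear destriping model, assuming that detector d produced every n-th scan line starting at line d (as in Landsat MSS with 6 detectors per band); the function name and layout are illustrative assumptions.

```python
import numpy as np

def destripe(band, n_detectors=6):
    """Equalize the mean and standard deviation of each detector's lines
    to the overall image mean and standard deviation (linear model)."""
    out = band.astype(np.float64).copy()
    target_mean, target_std = out.mean(), out.std()
    for d in range(n_detectors):
        lines = out[d::n_detectors, :]            # all lines from detector d
        m, s = lines.mean(), lines.std()
        gain = target_std / s if s > 0 else 1.0
        out[d::n_detectors, :] = (lines - m) * gain + target_mean
    return out
```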

External errors / atmospheric corrections

The composite signal received at the sensor is given by:

Ltot = (ρ E T) / π + Lp

where
Ltot - total spectral radiance measured by the sensor
ρ - target reflectance
E - target irradiance
T - target transmission
Lp - path radiance

The first term in the above equation contains valid information about ground reflectance; the second term contains scattered path radiance, which causes haze in the image and reduces contrast.

Correction for atmospheric scattering is necessary if:

1. The scattering level is spatially variable. For example, an image covering a large urban area and the surrounding natural scene will have entirely different image contrast and spectral characteristics for the urban area compared with the non-urban area because of particulate and gaseous components in the air.

2. A multispectral image is to be analysed and the scattering level is temporally variant. The changing atmospheric conditions can prevent extension of class signatures from one date to another.

3. Certain analyses have to be performed on the data, such as spectral band ratios. The radiance bias, Lp, caused by atmospheric scattering is not removed by ratioing.

Various first-order atmospheric correction methods

(a) Haze correction

Two methods are available for haze correction. Both depend upon the fact that Landsat band 7 (0.8 to 1.1 µm) is essentially free of atmospheric effects. Deep clear water and dark shadows have DN values of 0-1 in band 7. Two methods can be used:

Method-1 (Regression adjustment)

This method requires that the analyst identifies an area in the image that is either in shadow or in homogeneous deep, non-turbid water. The pixel brightness values are then extracted from this representative area in each band. For MSS, band 7 is used as the base band free of scattering effects.

(a) plot, for each pixel, the DN in band 7 against band 4 (band 7 on the Y-axis and band 4 on the X-axis).


(b) fit a straight line using the least squares method.

(c) if there is no haze, the line will pass through the origin; otherwise the offset on the X-axis determines the haze correction, which is an additive effect.

(d) subtract this offset from each pixel value in band 4.

(e) repeat steps (a) to (d) with band 7 against bands 5 and 6.

For TM, band 6 (an infrared band) is used as the base band for correction.
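A minimal sketch of the regression adjustment in steps (a)-(d), assuming the dark-area pixel values for the base band (band 7) and the band to be corrected have already been extracted as 1-D arrays; the names are hypothetical.

```python
import numpy as np

def haze_offset(base_band, hazy_band):
    """Estimate the additive haze offset for `hazy_band` by regressing the
    haze-free base band (Y-axis) against the hazy band (X-axis) over a dark,
    homogeneous area such as deep clear water or shadow."""
    slope, intercept = np.polyfit(hazy_band, base_band, deg=1)
    # With no haze the fitted line passes through the origin; the X-axis
    # intercept (where the base band reaches zero) is the additive offset.
    return -intercept / slope

# Hypothetical usage:
# offset4 = haze_offset(band7_dark_area.ravel(), band4_dark_area.ravel())
# band4_corrected = band4 - offset4
```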

Method-2 (Histogram adjustment)

This method is applied to images containing steep topography. It is assumed that for shadows and deep water bodies the histogram would originate at a grey level value of 0. However, the method will fail if no steep topography is present in the image or there are no band 7 pixels with a DN value of 0.

    Steps:

    (a) draw histogram for each band.

    (b) determine offset for each band.

(c) subtract the offset.

The subtraction of bias or offset as described in these methods results in an image that is low in contrast. Therefore, these techniques are rarely used without also applying a gain (multiplicative) adjustment to the new brightness values. This amounts to first subtracting a bias from each GL value and then multiplying the resulting GL value by a constant (gain) to expand the values to fill the entire dynamic range of the output device (i.e. linear contrast stretching).
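A short sketch of this bias-then-gain adjustment (offset subtraction followed by a linear contrast stretch); the function and parameter names are illustrative assumptions.

```python
import numpy as np

def subtract_bias_and_stretch(band, offset, out_max=255):
    """Subtract the haze offset (bias) and apply a gain so the corrected
    values fill the 0..out_max output range (linear contrast stretch)."""
    corrected = band.astype(np.float64) - offset
    corrected = np.clip(corrected, 0, None)          # no negative grey levels
    gain = out_max / corrected.max() if corrected.max() > 0 else 1.0
    return (corrected * gain).round().astype(np.uint8)
```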

It should be noted that the histogram adjustment technique is useful if the data are to be used for ratioing or multispectral normalisation. However, if images are to be used only for visual analysis of single bands or colour composites, the global atmospheric correction is redundant because the same type of bias correction is usually a part of contrast enhancement.

    (B) Geometric correction

    Geometric correction is the process of rectification of geometric errors introduced in the

imagery during the process of its acquisition. It is the process of transformation of a remotely sensed image so that it has the scale and projection properties of a map.

    A related technique called registration is the fitting of the coordinate system of one image

    to that of a second image of the same area.

Geocoding and georeferencing are often-used terms in connection with the geometric correction process. The basic concept behind geocoding is the transformation of satellite images into a standard map projection so that image features can be accurately located on the earth's surface and the image can be compared directly with other sources of geographic information (such as maps).

Geometric corrections account for various geometric errors arising from the scanning of the sensor, movement of the platform, earth curvature, etc.

Types of geometric distortions

Geometric distortions in satellite images can be classified on the basis of the nature and source of errors as follows:

(a) Systematic distortions (stationary in nature)

The effect is constant and can be predicted in advance, hence these can be easily corrected by applying formulae derived by modelling the sources of distortion mathematically. Various types of errors in this category are:

(i) scan skew
(ii) scanner distortion/panoramic distortion
(iii) variations in scanner mirror velocity
(iv) perspective projection
(v) map projection

(b) Non-systematic distortions (non-stationary in nature)

Their effects are not constant because they result from variations in spacecraft altitude, velocity, and attitude, and hence are unpredictable. These can be corrected using satellite tracking data or well-distributed ground control points (GCPs) occurring in the image. These distortions are also of two types on the basis of the correction method:

1. distortions evaluated from the satellite tracking data:
   1. earth rotation correction
   2. spacecraft velocity correction
2. distortions evaluated from ground control:
   1. altitude variations
   2. attitude variations (pitch, roll, and yaw variations)

Error type | Source | Effects | Nature | Direction
Altitude | Platform | Deviation from nominal altitude of satellite | Non-systematic | Along/across scan
Attitude | Platform | Deviation of sensor axis from the normal to the earth ellipsoid surface | Non-systematic | Along/across scan
Scan skew | Platform | Scanned lines are not exactly perpendicular to the ground track | Systematic | Across scan
Spacecraft velocity | Platform | Change in along-track IFOV | Systematic | Across scan
Earth rotation | Scene | Westward shift of different scan lines of a scene | Systematic | Along scan
Map projection | Scene | Geometric error in projecting the image onto the 2D map plane | Systematic | Along/across scan
Terrain relief | Scene | Relative planimetric error between objects imaged at different heights | Systematic | Along/across scan
Earth curvature | Scene | Change in image pixel size from the actual one; negligible for small-IFOV sensors like IRS LISS-III and PAN | Systematic | Along/across scan
Optical | Sensor | Barrel and pincushion distortions in image pixels | Systematic | Along/across scan
Aspect ratio | Sensor | Image pixel size different in horizontal and vertical directions | Systematic | Along/across scan
Mirror velocity | Sensor | Compression or stretching of image pixels at various points along the scan line | Systematic | Along scan
Detector geometry and scanning sequence | Sensor | Misalignment of different band scan lines of multi-spectral sensors | Systematic | Along/across scan
Perspective projection | Scene and sensor | Enlargement and compression of the image scene close to and far from the nadir point, respectively | Systematic | Along scan
Panoramic | Scene and sensor | Introduces along-scan distortions | Systematic | Along scan


    Geometrical distortions in remotely sensed imagery (Colwell, 1983)

    Terms related to geometric correction (ERDAS User manual)

Rectification
Process of projecting the data onto a plane and making it conform to a map projection system.

Resampling
Process of extrapolating data values for the pixels on the new grid from the values of the source pixels.

Registration
Process of making image data conform to another image. A map coordinate system is not necessarily involved.

Georeferencing
The process of assigning map coordinates to image data. The image data may not need to be rectified; the data may already be projected on the desired plane, but not yet referenced to the proper coordinate system.

Geocoded data
Geocoded data are images that have been rectified to a particular map projection and pixel size, and have had radiometric correction applied. It is only necessary to rectify geocoded data if it must conform to a different projection system or be registered to other data.

1. Rectification, by definition, involves georeferencing, since all map projection systems are associated with map coordinates.

2. Image-to-image registration involves georeferencing only if the reference image is already georeferenced. Georeferencing, by itself, only involves changing the map coordinate information in the image file. The grid of the image does not change.

Reasons to Rectify

Rectification is necessary where the pixel grid of an image must be changed to fit a map projection system or a reference image. It is needed in the following cases:

1. For scene-to-scene comparison of individual pixels in applications such as change detection or thermal inertia mapping.
2. For GIS data for GIS modeling.
3. For identifying training samples according to map coordinates.
4. For creating accurate scaled photomaps.
5. To overlay an image with vector data such as ARC/INFO.
6. For extracting accurate area and distance measures.
7. For mosaicing.
8. To compare images that are originally at different scales.
9. Any other application where precise geographical location is needed.

    Disadvantages of rectification

During rectification, the data file values of rectified pixels must be resampled to fit into the new grid. This may result in loss of the spectral integrity of the data. If map coordinates are not needed in the application, then it is advisable not to rectify the data. An unrectified image is spectrally more correct than rectified data. It is recommended to classify before rectification because the classification will then be based on the original data values.

Correction of geometric distortions

Broadly, three methods are employed to correct geometric distortions:

(a) Parametric or model-based method

The image pixel is related to earth latitude and longitude in two stages:

First stage: The sub-satellite point (location and velocity) is established in relation to the earth.

Second stage: The image-viewing geometry is modeled using satellite ephemeris information.

The satellite position can be estimated with the help of the laws and theories of orbital mechanics, using various parameters related to the earth and the satellite orbit, namely the earth's ellipsoid axes and the satellite orbit's semi-major axis, eccentricity, inclination, argument of perigee, longitude of ascending node and true anomaly. In this way, a spatial transformation is established between the real world (map) and the image plane.

    (b) Non-parametric or GCP-based method

Geometric distortions are rectified by defining a spatial transformation, which establishes a spatial correspondence between ground control points (GCPs) of a reference map and the image to be corrected.

Since the method is dependent on the GCPs, it is also known as the GCP-based method.

    (c) Combination of Parametric and GCP-based methods

In the model-based geometric correction approach, the variations in the orbital parameters limit the accuracy attained. The two main sources of errors, altitude and attitude variations, can be rectified using some GCPs. Hence, the hybrid method utilizes a limited set of GCPs to improve the accuracy.

GCPs are points that can be easily identified on the map and on the image to be corrected for geometric distortion. There should be a sufficiently large number of well-distributed and temporally invariant GCPs for good geometric correction.

The GCP-based geometric correction involves two stages:

o Spatial interpolation stage: The unknown spatial relationship between the distorted image and the map can be defined using various techniques such as polynomial fitting by the least squares method, Delaunay triangulation, etc.

o Intensity interpolation stage: This stage fills pixel values in the corrected spatial grid. This process of interpolation from the sampled values of signals for image reconstruction is known as image resampling or intensity interpolation.

Various widely used methods of resampling in RS are nearest neighbor, bilinear, cubic convolution, B-spline, etc.

The nearest neighbor approach, also called zero-order interpolation, is the simplest of all methods. The linear interpolation method, when extended to two dimensions, is called bilinear or first-order interpolation. Higher-order interpolation involves fitting some curve to the interpolation function. For example, cubic convolution is an interpolation method using two cubic polynomials.

Spatial interpolation

It establishes the geometrical relationship between the image to be corrected and the correct reference map. A least squares polynomial function can be used to express the functional relationship between these coordinate systems (map: (X, Y) and distorted image: (C, R)) as follows:

(i) X as a function of C and R: X = f1(C, R)
(ii) Y as a function of C and R: Y = f2(C, R)
(iii) C as a function of X and Y: C = f3(X, Y)
(iv) R as a function of X and Y: R = f4(X, Y)

    Spatial interpolation (Mather, 1987)
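As an illustration of the spatial interpolation stage, the sketch below fits a first-order (affine) least squares polynomial from GCP coordinates; the function names are hypothetical, and higher-order polynomials would simply add more terms to the design matrix.

```python
import numpy as np

def fit_first_order(src, dst):
    """Least squares fit of a first-order (affine) polynomial mapping src
    coordinates to dst coordinates, e.g. image (C, R) -> map (X, Y) or the
    reverse. src and dst are (n, 2) arrays of GCP coordinates (n >= 3).
    Returns a 3 x 2 coefficient matrix."""
    A = np.column_stack([np.ones(len(src)), src[:, 0], src[:, 1]])
    coeffs, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return coeffs

def apply_first_order(coeffs, pts):
    """Evaluate the fitted polynomial at the points in `pts` ((n, 2) array)."""
    A = np.column_stack([np.ones(len(pts)), pts[:, 0], pts[:, 1]])
    return A @ coeffs

# Hypothetical usage with GCP coordinate arrays:
# forward = fit_first_order(gcp_image_cr, gcp_map_xy)    # f1, f2: (C, R) -> (X, Y)
# backward = fit_first_order(gcp_map_xy, gcp_image_cr)   # f3, f4: (X, Y) -> (C, R)
```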

In order to map the complete output image, the corner coordinates are transformed first by using the computed forward mapping function.


Using these coordinates a bounding box is prepared, and this box is further divided into a grid of the desired pixel size. After obtaining the image grid, for each output pixel location the corresponding input location is found by a backward transformation function.

The figure shows two rectangles, ABCD and PQRS, representing the uncorrected and corrected image boundaries respectively.
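A sketch of this backward-mapping step, assuming a first-order transformation has already been fitted (e.g. as in the earlier sketch) and is available as a function mapping (X, Y) to (C, R); all names and the regular output grid are illustrative assumptions.

```python
import numpy as np

def output_grid_to_input(corner_xy, backward, pixel_size=1.0):
    """Build the output (map) grid from the forward-transformed corner
    coordinates and map each output pixel centre back to input (column, row)
    coordinates with the backward transformation.
    corner_xy: (4, 2) array of transformed corner coordinates;
    backward: function (X, Y) -> (C, R) accepting arrays."""
    x_min, y_min = corner_xy.min(axis=0)            # bounding box of corners
    x_max, y_max = corner_xy.max(axis=0)
    x_coords = np.arange(x_min, x_max, pixel_size)  # output grid columns
    y_coords = np.arange(y_min, y_max, pixel_size)  # output grid rows
    grid_x, grid_y = np.meshgrid(x_coords, y_coords)
    cols, rows = backward(grid_x, grid_y)           # fractional input locations
    return cols, rows                               # handed to intensity interpolation
```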

Intensity interpolation

Intensity interpolation is the process of determining the pixel value at positions lying between the sampled values. There are three widely used methods of intensity interpolation:

(i) Nearest neighbor (NN)

It selects the intensity of the closest input element and assigns that value to the output element. This method is fast, and the pixel values in the output image are real (not fabricated) as they are directly copied from the input image. However, this method tends to produce a blocky picture appearance and introduces spatial shifts. The effect is negligible for most visual display applications, but may be important for subsequent numerical analyses. This method is also termed zero-order interpolation.

(ii) Bilinear interpolation

This method assumes that a surface fitted to the pixel values in the immediate neighbourhood is planar, like a roof tile.

The computational requirements of this resampling algorithm are higher than for NN, and it results in a smoother image. Thus there may be blurring of sharp boundaries in the picture. This method is also termed first-order interpolation.
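For illustration, minimal sketches of the nearest neighbor and bilinear interpolators at a fractional input location (r, c), assuming the location lies strictly inside the band array; these are not taken from the text's equations.

```python
import numpy as np

def nearest_neighbour(band, r, c):
    """Zero-order interpolation: take the DN of the closest input pixel."""
    return band[int(round(r)), int(round(c))]

def bilinear(band, r, c):
    """First-order interpolation: weight the four surrounding pixels by
    their distance to the fractional position (r, c)."""
    r0, c0 = int(np.floor(r)), int(np.floor(c))
    dr, dc = r - r0, c - c0
    top = (1 - dc) * band[r0, c0] + dc * band[r0, c0 + 1]
    bottom = (1 - dc) * band[r0 + 1, c0] + dc * band[r0 + 1, c0 + 1]
    return (1 - dr) * top + dr * bottom

# e.g. value = bilinear(band, 10.3, 57.8) for a back-transformed location
```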

(iii) Cubic convolution

It is also called bicubic because it is based on the fitting of a two-dimensional, third-degree polynomial surface to the region surrounding (i', j'). The 16 nearest pixels are used to estimate the value at (i', j').

The technique is more complex than NN or BIL, but tends to give a more natural-looking image without the blockiness of NN or the oversmoothing of BIL.

This interpolator is also essentially a low-pass filter and introduces some loss of high-frequency information. This method is also termed second-order interpolation.

The interpolated pixel value at (i', j'), f(i', j'), is given by the following set of equations:
