Prerequisites - ecestudy.files.wordpress.com


Prerequisites

Signals and systems

Since DIP is a subfield of signals and systems, it helps if you already have some knowledge of signals and systems, but it is not essential. You must, however, have some basic concepts of digital electronics.

Calculus and probability

A basic understanding of calculus, probability, and differential equations is also required.

Basic programming skills

Other than this, DIP requires some basic programming skills in any of the popular languages, such as C++, Java, or MATLAB.

Signal

1. In the physical world, any quantity measurable over time, space, or any higher dimension can be taken as a signal. A signal is a mathematical function, and it conveys some information.

2. A signal can be one dimensional, two dimensional, or higher dimensional. A one dimensional signal is a signal that is measured over time; the common example is a voice signal. Two dimensional signals are those that are measured over two other physical quantities; the common example of a two dimensional signal is a digital image.

3. Signal processing deals with the analysis and processing of signals (analog and digital), including storing, filtering, and other operations on signals. These signals include transmission signals, sound or voice signals, image signals, etc.

4. Out of all these signals, the field that deals with signals for which the input is an image and the output is also an image is image processing. As its name suggests, it deals with processing images.

5. Signal processing is an umbrella and image processing lies under it.

How a digital image is formed

6. Capturing an image with a camera is a physical process. Sunlight is used as the source of energy, and a sensor array is used for the acquisition of the image. When sunlight falls upon an object, the light reflected by that object passes through the lens of the camera and is sensed by the sensors, which generate a continuous voltage signal proportional to the amount of sensed light. To create a digital image, we need to convert this data into digital form, which involves sampling and quantization (discussed later on). The result of sampling and quantization is a two dimensional array, or matrix, of numbers, which is nothing but a digital image. This digital image is then manipulated in digital image processing.
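The sampling and quantization step just described can be sketched in a few lines of Python. This is a toy illustration, not an actual camera pipeline: the `sample_and_quantize` helper and the sine "voltage" signal are invented for this example.

```python
import math

def sample_and_quantize(signal, n_samples, levels=256):
    """Sample a continuous signal at n_samples points, then quantize
    each sample to one of `levels` integer gray levels."""
    samples = [signal(i / n_samples) for i in range(n_samples)]   # sampling
    lo, hi = min(samples), max(samples)
    scale = (levels - 1) / (hi - lo) if hi > lo else 0
    return [round((s - lo) * scale) for s in samples]             # quantization

# A continuous "voltage" signal standing in for the sensor output.
voltage = lambda t: math.sin(2 * math.pi * t)
digital = sample_and_quantize(voltage, 8)   # 8 discrete levels in [0, 255]
```

Sampling fixes the positions at which the signal is measured; quantization maps each measured amplitude onto a finite set of integer levels.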

What is an Image

7. An image is nothing more than a two dimensional signal. It is defined by the mathematical function f(x,y), where x and y are the two coordinates, horizontal and vertical.

The value of f(x,y) at any point gives the pixel value of the image at that point.

8. The figure above is an example of a digital image, like the one you are now viewing on your computer screen. In reality, this image is nothing but a two dimensional array of numbers ranging between 0 and 255.

128 230 123

232 123 221

123  77  89

 80 255 255

Each number represents the value of the function f(x,y) at that point. In this case, the values 128, 230, and 123 each represent an individual pixel value. The dimensions of the picture are actually the dimensions of this two dimensional array.
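A pixel matrix like the one above can be held directly as a two dimensional array. A minimal Python sketch (the `f` helper is just a convenience mirroring the f(x,y) notation, with values kept within the valid 0-255 range):

```python
# A tiny 4x3 greyscale image as a 2D list of intensities in [0, 255].
image = [
    [128, 230, 123],
    [232, 123, 221],
    [123,  77,  89],
    [ 80, 255, 255],
]

def f(x, y):
    """Return the pixel value f(x, y): x indexes columns, y indexes rows."""
    return image[y][x]

rows, cols = len(image), len(image[0])   # the dimensions of the image
print(f(0, 0))   # 128: the top-left pixel
```

Note that the array dimensions (here 4 rows by 3 columns) are exactly the dimensions of the picture.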

What's an image?

• An image refers to a 2D light intensity function f(x,y), where (x,y) denote spatial coordinates and the value of f at any point (x,y) is proportional to the brightness or grey level of the image at that point.

• A digital image is an image f(x,y) that has been discretized in both spatial coordinates and brightness.

• The elements of such a digital array are called image elements, or pixels.

9.An image is an array, or a matrix, of square pixels (picture elements) arranged in columns and

rows.

10. In an 8-bit greyscale image, each picture element has an assigned intensity that ranges from 0 to 255. A greyscale image is what people normally call a black and white image, but the name emphasizes that such an image also includes many shades of grey.

Figure 2: Each pixel has a value from 0 (black) to 255 (white). The possible range of pixel values depends on the colour depth of the image; here 8 bit = 256 tones or greyscales.

11. A normal greyscale image has an 8 bit colour depth = 256 greyscales. A "true colour" image has a 24 bit colour depth = 8 + 8 + 8 bits = 256 x 256 x 256 colours = ~16 million colours.
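The colour-depth arithmetic can be checked directly; a quick Python sketch (the `tones` helper is invented for illustration):

```python
# Number of distinct tones for a given colour depth: 2 ** bits.
def tones(bits):
    return 2 ** bits

grey_8bit   = tones(8)         # 256 grey levels in an 8-bit image
true_colour = tones(8) ** 3    # 8 bits each for R, G and B channels
print(grey_8bit, true_colour)  # 256 16777216 (~16 million colours)
```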

12. Images are produced by a variety of physical devices, including still and video cameras, X-ray devices, electron microscopes, radar, and ultrasound, and are used for a variety of purposes, including entertainment, medical, business (e.g. documents), industrial, military, civil (e.g. traffic), security, and scientific applications. The goal in each case is for an observer, human or machine, to extract useful information about the scene being imaged.

What is Image Processing?

13.It is a type of signal processing in which input is an image and output may be image or characteristics/features associated with that image.

14.Image processing is a method to perform some operations on an image, in order to get an enhanced image or to extract some useful information from it.

15. Image processing is a technique to enhance raw images received from cameras/sensors placed on satellites, space probes, and aircraft, or pictures taken in normal day-to-day life, for various applications.

16. Various techniques have been developed in image processing during the last four to five decades. Most of these techniques were developed for enhancing images obtained from unmanned spacecraft, space probes, and military reconnaissance flights.

Methods of Image Processing

17. There are two types of methods used for image processing: analog and digital image processing.

Analog image processing:

Analog image processing is done on analog signals. It includes processing of two dimensional analog signals. In this type of processing, the images are manipulated by electrical means, by varying the electrical signal. The common example is the television image.

18. The television signal is a voltage level which varies in amplitude to represent brightness across the image. By electrically varying the signal, the appearance of the displayed image is altered.

19.The brightness and contrast controls on a TV set serve to adjust the amplitude and reference of the video signal, resulting in the brightening, darkening and alteration of the brightness range of the displayed image.
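The digital counterpart of these brightness and contrast controls amounts to shifting and scaling pixel amplitudes. A minimal sketch in Python (the `adjust` helper and its clamping to the 8-bit range are illustrative assumptions, not a broadcast standard):

```python
def adjust(pixel, contrast=1.0, brightness=0):
    """Scale a pixel by `contrast`, shift it by `brightness`,
    and clamp the result to the valid 8-bit range [0, 255]."""
    return max(0, min(255, round(pixel * contrast + brightness)))

row = [10, 100, 200]
brighter = [adjust(p, brightness=40) for p in row]        # [50, 140, 240]
higher_contrast = [adjust(p, contrast=1.5) for p in row]  # [15, 150, 255]
```

Raising brightness shifts the whole amplitude range upward; raising contrast stretches it, with values clipped at the ends of the grey scale.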

Analog image processing can be used for hard copies like printouts and photographs.

Digital image processing

20. Digital image processing is a method to convert an image into digital form and perform some operations on it, in order to get an enhanced image or to extract some useful information from it. It is a type of signal processing in which the input is an image, such as a video frame or photograph, and the output may be an image or characteristics associated with that image.

21. It is a subfield of signals and systems but focuses particularly on images. DIP focuses on developing a computer system that is able to perform processing on an image. The input of that system is a digital image; the system processes that image using efficient algorithms and gives an image as output. The most common example is Adobe Photoshop, one of the most widely used applications for processing digital images.

22. Digital image processing techniques help in the manipulation of digital images by using computers. The three general phases that all types of data have to undergo while using digital techniques are pre-processing, enhancement and display, and information extraction.

23. Digital image processing deals with developing a digital system that performs operations on a digital image.

24. Digital processing techniques help in the manipulation of digital images by using computers. Raw data from imaging sensors on a satellite platform contains deficiencies; to get over such flaws and recover the original information, it has to undergo various phases of processing. The three general phases that all types of data have to undergo while using digital techniques are pre-processing, enhancement and display, and information extraction.

25. How it works

In the figure above, an image has been captured by a camera and sent to a digital system that removes all the other details and focuses on the water drop, zooming in on it in such a way that the quality of the image remains the same.

Applications and Usage

26. Digital image processing has dominated over analog image processing with the passage of time due to its wider range of applications.

27. Since digital image processing has very wide applications and almost all technical fields are impacted by DIP, we will just discuss some of the major applications of DIP.

28. Digital image processing is not just limited to adjusting the spatial resolution of everyday images captured by the camera, or to increasing the brightness of a photo, etc. Rather, it is far more than that.

29. Electromagnetic waves can be thought of as a stream of particles, each moving with the speed of light. Each particle contains a bundle of energy, and this bundle of energy is called a photon.

30. The electromagnetic spectrum, arranged according to the energy of the photon, is shown below.

In this electromagnetic spectrum, we are only able to see the visible spectrum. The visible spectrum mainly includes seven different colors, commonly termed VIBGYOR: violet, indigo, blue, green, yellow, orange, and red.

31. But that does not nullify the existence of the other radiation in the spectrum. The human eye can only see the visible portion, in which we see all objects, but a camera can see things that the naked eye is unable to see, for example X-rays, gamma rays, etc. Hence the analysis of all of that radiation is also done in digital image processing.

This discussion leads to another question:

32. Why do we need to analyze all of the other radiation in the EM spectrum too? The answer lies in the fact that this other radiation, such as X-rays, is widely used in the field of medicine. The analysis of gamma rays is necessary because they are widely used in nuclear medicine and astronomical observation. The same goes for the rest of the EM spectrum.

Digital image processing is among the most rapidly growing technologies today, with applications in various aspects of business. Image processing also forms a core research area within the engineering and computer science disciplines.

33. Applications of Digital Image Processing

Some of the major fields in which digital image processing is widely used are mentioned below:

Image sharpening and restoration

Medical field

Remote sensing

Transmission and encoding

Machine/Robot vision

Color processing

Pattern recognition

Video processing

Microscopic Imaging

Others

Image sharpening and restoration

Image sharpening and restoration here refers to processing images captured with a modern camera to make them better, or manipulating those images to achieve a desired result. It refers to doing what Photoshop usually does.

This includes zooming, blurring, sharpening, grey scale to color conversion and vice versa, edge detection, image retrieval, and image recognition. The common examples are:

The original image

The zoomed image

The blurred image

The sharpened image

The detected edges

Medical field

The common applications of DIP in the field of medicine are:

Gamma ray imaging

PET scan

X Ray Imaging

Medical CT

UV imaging

Remote sensing

In the field of remote sensing, an area of the earth is scanned by a satellite or from very high ground and then analyzed to obtain information about it. One particular application of digital image processing in the field of remote sensing is detecting infrastructure damage caused by an earthquake.

It takes a long time to grasp the extent of the damage, even when only serious damage is focused on. The area affected by an earthquake is sometimes so wide that it is not possible to examine it with the human eye in order to estimate the damage, and even when it is possible, the procedure is very hectic and time consuming. So a solution is found in digital image processing: an image of the affected area is captured from above the ground and then analyzed to detect the various types of damage done by the earthquake.

The key steps included in the analysis are:

The extraction of edges

Analysis and enhancement of various types of edges

Transmission and encoding

The very first image transmitted over the wire was sent from London to New York via a submarine cable. The picture that was sent is shown below; it took three hours to travel from one place to the other.

Now just imagine: today we are able to see live video feeds, or live CCTV footage, from one continent to another with a delay of just seconds. A lot of work has been done in this field too. The field does not only focus on transmission but also on encoding: many different formats have been developed for high or low bandwidth to encode photos and then stream them over the internet.

Machine/Robot vision

Apart from the many challenges that a robot faces today, one of the biggest challenges is still to improve the vision of the robot: to make the robot able to see things, identify them, identify obstacles, and so on. Much work has been contributed by this field, and a complete separate field of computer vision has been introduced to work on it.

Hurdle detection

Hurdle detection is one of the common tasks done through image processing, by identifying the different types of objects in the image and then calculating the distance between the robot and the hurdles.

Line follower robot

Most robots today work by following a line and are thus called line follower robots. This helps a robot move along its path and perform tasks, and has also been achieved through image processing.

Color processing

Color processing includes the processing of colored images and the different color spaces that are used, for example the RGB color model, YCbCr, and HSV. It also involves studying the transmission, storage, and encoding of these color images.
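As a small taste of color processing, converting an RGB pixel to a grey level is a weighted sum of the three channels. A sketch using the standard ITU-R BT.601 luma weights (the `rgb_to_grey` name is invented for this example):

```python
# ITU-R BT.601 luma weights for converting an RGB pixel to a grey level.
def rgb_to_grey(r, g, b):
    return round(0.299 * r + 0.587 * g + 0.114 * b)

print(rgb_to_grey(255, 255, 255))  # 255 (white stays white)
print(rgb_to_grey(255, 0, 0))      # 76  (pure red as a grey level)
```

The weights reflect the eye's differing sensitivity to red, green, and blue; green contributes the most to perceived brightness.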

Pattern recognition

Pattern recognition involves study from image processing and from various other fields, including machine learning (a branch of artificial intelligence). In pattern recognition, image processing is used to identify the objects in an image, and then machine learning is used to train the system for changes in pattern. Pattern recognition is used in computer aided diagnosis, recognition of handwriting, recognition of images, etc.

Video processing

A video is nothing but the very fast movement of pictures. The quality of the video depends on the number of frames/pictures per second and the quality of each frame being used. Video processing involves noise reduction, detail enhancement, motion detection, frame rate conversion, aspect ratio conversion, color space conversion, etc.

Artificial intelligence

Artificial intelligence is more or less the study of putting human intelligence into machines. Artificial intelligence has many applications in image processing, for example developing computer aided diagnosis systems that help doctors interpret images of X-rays, MRIs, etc., and then highlight conspicuous sections to be examined by the doctor.

Applications

1.      Intelligent Transportation Systems – This technique can be used in Automatic number plate

recognition and Traffic sign recognition.

 

2.      Remote Sensing – For this application, sensors capture pictures of the earth's surface, either on remote sensing satellites or with a multi-spectral scanner mounted on an aircraft. These pictures are processed by transmitting them to an Earth station. Techniques used to interpret the objects and regions are applied in flood control, city planning, resource mobilization, agricultural production monitoring, etc.

 

3.      Moving object tracking – This application enables measuring motion parameters and acquiring a visual record of the moving object. The different approaches to tracking an object are:

·            Motion based tracking

·            Recognition based tracking

 

4.      Defense surveillance – Aerial surveillance methods are used to continuously keep an eye on the land and oceans. This application is also used to locate the types and formations of naval vessels on the ocean surface. The important task is to segment the various objects present in the water body portion of the image. Different parameters, such as length, breadth, area, perimeter, and compactness, are set up to classify each of the segmented objects. It is important to recognize the distribution of these objects in the different directions (east, west, north, south, northeast, northwest, southeast, and southwest) to explain all possible formations of the vessels. We can interpret the entire oceanic scenario from the spatial distribution of these objects.

 

5.      Biomedical imaging techniques – For medical diagnosis, different types of imaging tools such as X-ray, ultrasound, and computed tomography (CT) are used. The diagrams of X-ray, MRI, and CT are given below.

 

 

Some of the applications of biomedical imaging are as follows:

·               Heart disease identification – Important diagnostic features, such as the size of the heart and its shape, are required in order to classify heart diseases. To improve the diagnosis of heart diseases, image analysis techniques are applied to radiographic images.

·               Lung disease identification – In X-rays, regions that appear dark contain air, while regions that appear lighter are solid tissues. Bones are more radio-opaque than tissues. The ribs, the heart, the thoracic spine, and the diaphragm that separates the chest cavity from the abdominal cavity are clearly seen on the X-ray film.

·               Digital mammograms – These are used to detect breast tumours. Mammograms can be analyzed using image processing techniques such as segmentation, shape analysis, contrast enhancement, feature extraction, etc.

 

6.      Automatic Visual Inspection Systems – This application improves the quality and productivity of products in industry.

·               Automatic inspection of incandescent lamp filaments – This involves examination of the bulb manufacturing process. Due to non-uniformity in the pitch of the wiring in the lamp, the filament of the bulb gets fused within a short duration. In this application, a binary image slice of the filament is created, from which the silhouette of the filament is produced. Silhouettes are analyzed to recognize non-uniformity in the pitch of the wiring in the lamp. This system is used by the General Electric Corporation.

 

·               Automatic surface inspection systems – In the metal industries it is essential to detect flaws on surfaces. For instance, it is essential to detect any kind of aberration on the rolled metal surface in the hot or cold rolling mills of a steel plant. Image processing techniques such as texture identification, edge detection, and fractal analysis are used for the detection.

 

·               Faulty component identification – This application identifies faulty components in electronic or electromechanical systems. A higher amount of thermal energy is generated by these faulty components. Infrared images are produced from the distribution of thermal energy in the assembly, and the faulty components can be identified by analyzing those infrared images.

Different dimensions of signals

1 dimensional signal

The common example of a 1 dimensional signal is a waveform. It can be mathematically represented as

F(x) = waveform

where x is an independent variable. Since it is a one dimensional signal, only the single variable x is used.

A pictorial representation of a one dimensional signal is given below:

The figure above shows a one dimensional signal.

This leads to another question: even though it is a one dimensional signal, why does it have two axes? The answer is that although it is a one dimensional signal, we are drawing it in a two dimensional space; the space in which we represent the signal is two dimensional. That is why it looks like a two dimensional signal.

Perhaps you can understand the concept of one dimension better by looking at the figure below.

Now refer back to our initial discussion on dimension. Consider the figure above as a real line with positive numbers from one point to the other. To specify the location of any point on this line, we need only one number, which means only one dimension.

2 dimensional signal

The common example of a two dimensional signal is an image, which has already been discussed above. As we have already seen, an image is a two dimensional signal, i.e. it has two dimensions. It can be mathematically represented as:

F(x, y) = Image

where x and y are two variables. The concept of two dimensions can also be explained in terms of mathematics:

In the figure above, label the four corners of the square A, B, C, and D. If we call one line segment in the figure AB and the other CD, then we can see that these two parallel segments join up and make a square. Each line segment corresponds to one dimension, so these two line segments correspond to 2 dimensions.

3 dimensional signal

A three dimensional signal, as the name suggests, is a signal with three dimensions. The most common example was discussed in the beginning: our world. We live in a three dimensional world, and that example has been discussed very elaborately. Another example of a three dimensional signal is a cube, volumetric data, or, most commonly, an animated 3D cartoon character.

The mathematical representation of a three dimensional signal is:

F(x, y, z) = animated character

Another axis or dimension, Z, is involved in three dimensions, which gives the illusion of depth. In a Cartesian co-ordinate system it can be viewed as:

4 dimensional signal

In a four dimensional signal, four dimensions are involved. The first three are the same as for a three dimensional signal (X, Y, Z), and the fourth one added to them is T (time). Time is often referred to as the temporal dimension, which is a way to measure change. Mathematically, a four dimensional signal can be stated as:

F(x, y, z, t) = animated movie

The common example of a 4 dimensional signal is an animated 3D movie: each character is a 3D character moved with respect to time, due to which we see the illusion of a three dimensional movie, more like the real world. So in reality, animated movies are 4 dimensional: the movement of 3D characters over the fourth dimension, time.
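This four dimensional view can be sketched as a nested array, with time as the outermost index. The toy sizes below are arbitrary, chosen only to make the four dimensions visible:

```python
# A toy "animated movie" as a 4-D structure f(x, y, z, t):
# T frames, each a z-stack of Y-by-X intensity grids.
T, Z, Y, X = 2, 3, 4, 5
movie = [[[[0 for _ in range(X)] for _ in range(Y)]
          for _ in range(Z)] for _ in range(T)]

frame0 = movie[0]   # one 3-D volume at time t = 0
print(len(movie), len(movie[0]), len(movie[0][0]), len(movie[0][0][0]))
# prints the four dimension sizes: 2 3 4 5 (t, z, y, x)
```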

II. Fundamental Steps of Digital Image Processing: There are some fundamental steps, and since they are fundamental, each of these steps may have sub-steps. The fundamental steps are described below with a neat diagram. There are two categories of steps involved in image processing: (1) methods whose inputs and outputs are images, and (2) methods whose outputs are attributes extracted from those images.

            (i)       Image Acquisition: This is the first step of the fundamental steps of digital image processing. Image acquisition could be as simple as being given an image that is already in digital form. Generally, the image acquisition stage involves preprocessing, such as scaling.

            (ii)      Image Enhancement: Image enhancement is among the simplest and most appealing areas of digital image processing. Basically, the idea behind enhancement techniques is to bring out detail that is obscured (hard to perceive, ambiguous, or vague), or simply to highlight certain features of interest in an image, such as by changing brightness and contrast. Image enhancement is a very subjective area of image processing.

            (iii)    Image Restoration: Image restoration is an area that also deals with improving the appearance of an image. However, unlike enhancement, which is subjective, image restoration is objective, in the sense that restoration techniques tend to be based on mathematical or probabilistic models of image degradation. Enhancement, on the other hand, is based on human subjective preferences regarding what constitutes a "good" enhancement result.

            (iv)     Color Image Processing: Color image processing is an area that has been gaining importance because of the significant increase in the use of digital images over the Internet. It may include color modeling and processing in a digital domain. Color image processing deals basically with color models and their implementation in image processing applications, and uses the colour of the image to extract features of interest.

            (v)      Wavelets and Multiresolution Processing: Wavelets are the foundation for representing images in various degrees of resolution. Images are subdivided successively into smaller regions for data compression and for pyramidal representation.

Wavelets have varying frequency and limited duration; they are an attempt to reveal both frequency and temporal information in the transform domain. Multiresolution analysis is signal (image) representation at multiple resolutions. Images are connected regions of similar texture combined to form different objects: small or low-contrast objects are examined at high resolution, while large or high-contrast objects are examined at a coarse view. Several resolutions are needed to distinguish between different objects. Mathematically, images are 2-D arrays of intensity values with locally varying statistics, resulting from edges, homogeneous regions, etc. (see Fig. 7.1).

            (vi)     Compression: Compression deals with techniques for reducing the storage required to save an image, or the bandwidth required to transmit it. Particularly for use on the internet, it is very necessary to compress data. It has two major approaches: a) lossless compression and b) lossy compression.

            (vii)   Morphological Processing: Morphological processing deals with tools for extracting image components that are useful in the representation and description of the shape and boundary of objects. It is mainly used in automated inspection applications. Morphological tools include:

– boundary extraction
– skeletons
– convex hull
– morphological filtering
– thinning
– pruning
– erosion: removal of structures of a certain shape and size, given by the structuring element (SE)
– dilation: filling of holes of a certain shape and size, given by the SE

Binary images may contain numerous imperfections. In particular, the binary regions produced by simple thresholding are distorted by noise and texture. Morphological image processing pursues the goals of removing these imperfections by accounting for the form and structure of the image. These techniques can be extended to greyscale images.

Morphological image processing is a collection of non-linear operations related to the shape or morphology of features in an image. According to Wikipedia, morphological operations rely only on the relative ordering of pixel values, not on their numerical values, and are therefore especially suited to the processing of binary images. Morphological operations can also be applied to greyscale images whose light transfer functions are unknown, so that their absolute pixel values are of no or minor interest.
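Erosion and dilation can be illustrated on a one dimensional binary "image" with a 3-pixel structuring element. This is a toy sketch; real implementations work on 2-D images with arbitrary structuring elements:

```python
# Minimal binary erosion and dilation on a 1-row image with a
# 3-pixel structuring element, padding the border with 0.
def erode(row):
    pad = [0] + row + [0]
    # a pixel survives only if its whole neighbourhood is 1
    return [1 if pad[i] and pad[i + 1] and pad[i + 2] else 0
            for i in range(len(row))]

def dilate(row):
    pad = [0] + row + [0]
    # a pixel turns on if any neighbour is 1
    return [1 if pad[i] or pad[i + 1] or pad[i + 2] else 0
            for i in range(len(row))]

row = [0, 1, 1, 1, 0, 1, 0]
print(erode(row))   # [0, 0, 1, 0, 0, 0, 0] -> thin structures removed
print(dilate(row))  # [1, 1, 1, 1, 1, 1, 1] -> gaps and holes filled
```

Note how erosion removes the isolated single pixel entirely (structures smaller than the SE vanish), while dilation closes the gaps between runs.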

        (viii)  Segmentation : Segmentation procedures partition an image into its constituent parts or objects. In general, autonomous segmentation is one of the most difficult tasks in digital image processing. A rugged segmentation procedure brings the process a long way toward successful solution of imaging problems that require objects to be identified individually.
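The simplest segmentation procedure is global thresholding, which partitions an image into object and background pixels. A minimal sketch (the `threshold` helper and the sample values are invented for this example):

```python
# Global thresholding: pixels above t become object (1), the rest
# become background (0).
def threshold(image, t):
    return [[1 if p > t else 0 for p in row] for row in image]

image = [[ 12,  40, 200],
         [ 30, 220, 210],
         [  8,  25, 190]]
mask = threshold(image, 128)   # a bright object against a dark background
# mask == [[0, 0, 1], [0, 1, 1], [0, 0, 1]]
```

As noted above, thresholded regions are easily distorted by noise and texture, which is exactly where the morphological clean-up operations come in.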

Important Tip: The more accurate the segmentation, the more likely recognition is to succeed.

            (ix)     Representation and Description: Representation and description almost always follow the output of a segmentation stage, which usually is raw pixel data, constituting either the boundary of a region or all the points in the region itself. Choosing a representation is only part of the solution for transforming raw data into a form suitable for subsequent computer processing.

- Representation: Decide whether the data should be represented as a boundary or as a complete region. This almost always follows the output of a segmentation stage.

- Boundary representation: Focuses on external shape characteristics, such as corners and inflections.

- Region representation: Focuses on internal properties, such as texture or skeletal shape.

Description deals with extracting attributes that result in some quantitative information of interest, or that are basic for differentiating one class of objects from another.

            (x)      Object Recognition: Recognition is the process that assigns a label, such as "vehicle", to an object based on its descriptors.

                (xi)     Knowledge Base : Knowledge may be as simple as detailing regions of an image where the information of interest is known to be located, thus limiting the search that has to be conducted in seeking that information. The knowledge base also can be quite complex, such as an interrelated list of all major possible defects in a materials inspection problem or an image database containing high-resolution satellite images of a region in connection with change-detection applications.

Components of Image Processing System.

i) Image Sensors

With reference to sensing, two elements are required to acquire a digital image. The first is a physical device that is sensitive to the energy radiated by the object we wish to image, and the second is specialized image processing hardware.

ii) Specialized image processing hardware

It consists of the digitizer just mentioned, plus hardware that performs other primitive operations, such as an arithmetic logic unit (ALU), which performs arithmetic (such as addition and subtraction) and logical operations in parallel on images.

iii) Computer

It is a general purpose computer and can range from a PC to a supercomputer, depending on the application. In dedicated applications, specially designed computers are sometimes used to achieve a required level of performance.

iv) Software

It consists of specialized modules that perform specific tasks. A well designed package also includes the capability for the user to write code that, as a minimum, utilizes the specialized modules. More sophisticated software packages allow the integration of these modules.

v) Mass storage

This capability is a must in image processing applications. An image of size 1024 x 1024 pixels, in which the intensity of each pixel is an 8-bit quantity, requires one megabyte of storage space if the image is not compressed. Image processing applications fall into three principal categories of storage:

i) Short term storage for use during processing
ii) Online storage for relatively fast retrieval
iii) Archival storage, such as magnetic tapes and disks

vi) Image displays

Image displays in use today are mainly color TV monitors. These monitors are driven by the outputs of image and graphics display cards that are an integral part of the computer system.

vii) Hardcopy devices

Devices for recording images include laser printers, film cameras, heat sensitive devices, inkjet units, and digital units such as optical and CD-ROM disks. Film provides the highest possible resolution, but paper is the obvious medium of choice for written applications.

viii) Networking

Networking is almost a default function in any computer system in use today. Because of the large amount of data inherent in image processing applications, the key consideration in image transmission is bandwidth.

A Simple Image Model

An image is denoted by a two dimensional function of the form f(x, y). The value or amplitude of f at spatial coordinates (x, y) is a positive scalar quantity whose physical meaning is determined by the source of the image. When an image is generated by a physical process, its values are proportional to the energy radiated by a physical source. As a consequence, f(x, y) must be nonzero and finite; that is,

0 < f(x, y) < infinity

The function f(x, y) may be characterized by two components:

– The amount of source illumination incident on the scene being viewed.
– The amount of the source illumination reflected back by the objects in the scene.

These are called the illumination and reflectance components and are denoted by i(x, y) and r(x, y) respectively. The functions combine as a product to form f(x, y):

f(x, y) = i(x, y) r(x, y)

We call the intensity of a monochrome image at any coordinates (x, y) the gray level l of the image at that point:

l = f(x, y), with Lmin <= l <= Lmax

Lmin is required to be positive and Lmax must be finite, where

Lmin = imin rmin
Lmax = imax rmax

The interval [Lmin, Lmax] is called the gray scale. Common practice is to shift this interval numerically to the interval [0, L-1], where l = 0 is considered black and l = L-1 is considered white on the gray scale. All intermediate values are shades of gray varying from black to white.
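The illumination-reflectance model and the shift onto the gray scale [0, L-1] can be sketched numerically. The `grey_level` helper and its normalization by a maximum illumination `i_max` are illustrative assumptions, not part of the formal model:

```python
# The image model f(x, y) = i(x, y) * r(x, y): illumination times
# reflectance, mapped onto the gray scale [0, L-1].
L = 256   # 8-bit gray scale

def grey_level(i, r, i_max=100.0):
    """i = illumination, r = reflectance in [0, 1]; returns l in [0, L-1]."""
    f = i * r
    return round(f / i_max * (L - 1))

print(grey_level(100.0, 1.0))  # 255: full illumination, perfect reflector
print(grey_level(100.0, 0.0))  # 0:   a perfectly absorbing (black) surface
print(grey_level( 50.0, 0.5))  # 64:  mid illumination, mid reflectance
```

The same physical scene thus maps to brighter or darker gray levels depending on both how strongly it is lit and how much each surface reflects.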