Image processing sw & hw


Image processing

is the study of any algorithm that takes an image as input and returns an image as output.

WHY DO WE NEED IMAGE PROCESSING?

Since the digital image is "invisible", it must be prepared for viewing on one or more output devices (laser printer, monitor, etc.).

The digital image can be optimized for the application by enhancing or altering the appearance of structures within it (based on body part, diagnostic task, viewing preferences, etc.).

It might be possible to analyze the image in the computer and provide cues to the radiologists to help detect important/suspicious structures (e.g. Computer-Aided Diagnosis, CAD).

Components of an Image Processing System

Sensors: Two elements are required to acquire digital images.

Physical device: sensitive to the energy radiated by the object we wish to image.

Digitizer: converts the output of the physical sensing device into digital form.

Specialized image processing hardware: the digitizer plus hardware that performs arithmetic and logic (ALU) operations on an entire image.

Computer: the image processing system can range from a PC to a supercomputer.

Image Processing Software: Specialized modules performing specific tasks

Mass Storage:

Short-term storage for use during processing.

Online storage for relatively fast recall.

Archival storage, characterized by infrequent access.

Image Displays: Flat screens, TVs, monitors, LCD, LED, 3D displays.

Hardcopy: Laser printers, camera films, heat-sensitive devices, inkjet units, and digital units such as optical disks and CD-ROMs.

Image processing software

1-KS400

One of the commercial image processing software packages used for some of these applications is the KONTRON Imaging System KS400.

It is very powerful and convenient to use.

However, the KS400 software has one essential disadvantage: it does not allow direct access from a macro to individual pixels in the image.

In practice this means that the functionality of the software is limited to its set of standard functions.

In principle, the problem can be overcome by using the free-programming KONTRON Software Development Kit, which is supplied with KS400.

The software developed for UNIX consists of a set of low-level image processing/analysis functions, including:

Sun raster file image (RAS) reading/writing;

automatic and manual image thresholding;

gray-scale and binary morphology;

fractal analysis of contours using 'hand and dividers' method;

fractal analysis of percolation networks;

image correlation.

2-Image processing/analysis software:

The MATLAB and MathCAD environments are ideally suited to image processing.

Both programs have image processing toolboxes which provide a powerful and flexible environment for image processing and analysis, and both were used to perform various calculations on images.

-There are several advantages. One of them is the ability to have direct access to any portion of the available information, which in general is not possible with many commercial image analysis systems.

3-Image processing with MathCAD and MATLAB.

Image processing algorithms

1. Start at the first pixel in the image.
2. Get the actual color of the pixel.
3. Find the nearest color to this pixel from the available palette.
4. Replace the pixel with the nearest color.
5. Have we reached the end of the image? If so, stop here.
6. Move on to the next pixel.
7. Go back to step 2.
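As a concrete illustration, here is a minimal Python sketch of this scan loop, assuming the image is held as a list of (R, G, B) tuples; nearest_colour here is a compact stand-in for the step-by-step search sketched under "1-FINDING THE NEAREST COLOUR" below.

```python
# A minimal sketch of the palette-reduction loop (steps 1-7), assuming the
# image is a flat list of (R, G, B) tuples and the palette is a list of
# (R, G, B) tuples with 0-255 components.
def nearest_colour(pixel, palette):
    r, g, b = pixel
    # pick the palette entry with the smallest squared colour distance
    return min(palette, key=lambda c: (r - c[0])**2 + (g - c[1])**2 + (b - c[2])**2)

def reduce_to_palette(pixels, palette):
    # visit every pixel and replace it with its nearest palette colour
    return [nearest_colour(p, palette) for p in pixels]
```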

1-FINDING THE NEAREST COLOUR:

There are a few methods for finding the nearest color.

The one that I am going to explain here is called the Euclidean distance method.

The algorithm for this is as follows:

1. Set the minimum distance to a value higher than the highest possible.
2. Start at the first color in the palette.
3. Find the difference between each RGB value of the actual color and the current palette color.
4. Calculate the Euclidean distance.
5. If the distance is smaller than the minimum distance, set the minimum distance to the smaller value and note the current palette color.
6. Have we reached the end of the palette? If so, stop here.
7. Move on to the next color in the palette.
8. Go back to step 3.
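A minimal Python sketch transcribing steps 1-8 above, assuming the pixel and the palette entries are (R, G, B) tuples with 0-255 components:

```python
import math

def find_nearest_colour(pixel, palette):
    min_distance = float("inf")        # step 1: higher than any possible distance
    nearest = palette[0]
    for colour in palette:             # steps 2, 6-8: walk the whole palette
        # step 3: difference of each RGB component
        dr = pixel[0] - colour[0]
        dg = pixel[1] - colour[1]
        db = pixel[2] - colour[2]
        # step 4: Euclidean distance
        distance = math.sqrt(dr * dr + dg * dg + db * db)
        # step 5: remember the closest palette colour seen so far
        if distance < min_distance:
            min_distance = distance
            nearest = colour
    return nearest
```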

-Error diffusion works by comparing the actual color of a pixel against its nearest color and taking the difference between them.

-This difference is known as the error.

- Portions of the error are split between neighbouring pixels, causing the error to be diffused; hence the name "error diffusion".

2-ERROR DIFFUSION:

The simplest form of error diffusion can be described as follows: half of the error from the current pixel is diffused to the pixel on the right and half to the pixel below.

-In a color image error diffusion should be applied to the red, green and blue channels separately.

- An important point to note here is that the total amount of error diffused should never exceed a value of 1.

-It is also important to ensure that when a portion of the error is diffused to neighbouring pixels it does not cause invalid values (e.g. go below 0 or above 255).

- Should a value go outside of the valid range then it should be truncated (e.g. -10 would be truncated to 0 and 260 would be truncated to 255).

-The quality of this type of error diffusion however is pretty poor and very few people would actually use it.

-A better one, and arguably the most famous, is Floyd-Steinberg error diffusion.

With Floyd-Steinberg error diffusion the error is distributed amongst more neighbouring pixels providing a much nicer overall dithering effect.

Performing the colour reduction with Floyd-Steinberg error diffusion will produce the following results:

images reduced to 8 colors with error diffusion
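A minimal Python sketch of Floyd-Steinberg dithering for a single channel, assuming the image is a 2-D list of 0-255 values and that quantisation is reduced to a nearest() function argument. The 7/16, 3/16, 5/16 and 1/16 weights are the standard Floyd-Steinberg distribution and, as required above, sum to 1; for a color image the same pass would be applied to each channel separately.

```python
def floyd_steinberg(img, nearest):
    h, w = len(img), len(img[0])
    for y in range(h):
        for x in range(w):
            old = img[y][x]
            new = nearest(old)                  # quantise to the nearest level
            img[y][x] = new
            err = old - new                     # the quantisation error
            # diffuse portions of the error to the unvisited neighbours,
            # truncating to the valid 0-255 range as described above
            for dx, dy, weight in ((1, 0, 7/16), (-1, 1, 3/16),
                                   (0, 1, 5/16), (1, 1, 1/16)):
                nx, ny = x + dx, y + dy
                if 0 <= nx < w and 0 <= ny < h:
                    img[ny][nx] = min(255, max(0, img[ny][nx] + err * weight))
    return img
```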

3-GREYSCALE CONVERSION:

To convert a particular color into greyscale we need to work out what the intensity or brightness of that color is. The formula for this, using the mean method, is:

intensity = (R + G + B) / 3

Another method of greyscale conversion, which takes into account the human perception of color, uses different weights for the red, green and blue components.

• To illustrate the above points, let's have a look at an image of some colored bars.

• Let's convert this image to greyscale using the mean method.

• Converting the image to greyscale using the weighted method will give us a different result.
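A minimal Python sketch of both conversion methods; the 0.299/0.587/0.114 weights shown are the commonly used ITU-R BT.601 luma coefficients, assumed here since the slide's own weights were not given.

```python
def grey_mean(r, g, b):
    # mean method: simple average of the three channels
    return (r + g + b) // 3

def grey_weighted(r, g, b):
    # weighted method: green contributes most, matching human perception
    return int(0.299 * r + 0.587 * g + 0.114 * b)
```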

4-BRIGHTNESS ADJUSTMENT:

Brightness adjustment works by adding the desired change in brightness to each of the red, green and blue color components.

The value of brightness will usually be in the range of -255 to +255 for a 24-bit palette.

Negative values will darken the image and, conversely, positive values will brighten the image.

Images which have had the brightness adjusted by -128 (darkened) and +128 (brightened):
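A minimal Python sketch of the adjustment, with truncation to the valid 0-255 range:

```python
def clamp(v):
    # truncate out-of-range values, e.g. -10 -> 0 and 260 -> 255
    return min(255, max(0, v))

def adjust_brightness(r, g, b, brightness):
    # brightness in -255..+255: negative darkens, positive brightens
    return clamp(r + brightness), clamp(g + brightness), clamp(b + brightness)
```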

5-CONTRAST ADJUSTMENT:

The first step is to calculate a contrast correction factor from the value C, the desired level of contrast.

The next step is to perform the actual contrast adjustment itself, applying the correction factor to the red, green and blue components of each color.

The value of contrast will be in the range of -255 to +255. Negative values will decrease the amount of contrast and, conversely, positive values will increase the amount of contrast.
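A minimal Python sketch of the adjustment. The slide's formula itself was not transcribed, so this sketch assumes the commonly quoted correction factor F = 259(C + 255) / (255(259 - C)), applied symmetrically around the mid-grey value 128, which matches the description above:

```python
def clamp(v):
    return min(255, max(0, int(v)))

def adjust_contrast(r, g, b, contrast):
    # contrast in -255..+255: negative flattens, positive increases contrast
    # NOTE: this correction factor is an assumption standing in for the
    # untranscribed slide formula
    factor = (259 * (contrast + 255)) / (255 * (259 - contrast))
    return tuple(clamp(factor * (c - 128) + 128) for c in (r, g, b))
```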

6-GAMMA CORRECTION:

Gamma can be described as the relationship between an input and the resulting output. For the scope of this article the input will be the RGB intensity values of an image.

The relationship in this case between the input and output is that the output is proportional to the input raised to the power of gamma:

output = input ^ gamma

For calculating gamma correction, the input value is raised to the power of the inverse of gamma:

output = input ^ (1 / gamma)

The range of values used for gamma will depend on the application, but personally I tend to use a range of 0.01 to 7.99.
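A minimal Python sketch of gamma correction for a single 0-255 value, normalising to the 0-1 range before applying the power:

```python
def gamma_correct(value, gamma):
    # raise the normalised input to the power 1/gamma, then rescale to 0-255
    return int(255 * (value / 255) ** (1 / gamma))

# e.g. gamma_correct(64, 2.2) brightens a dark value; gamma < 1 darkens
```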

7-COLOUR INVERSION AND SOLARISATION:

-Color inversion, also known as the negative effect, is one of the easiest effects to achieve in image processing.

-Color inversion is achieved by subtracting each RGB color value from the maximum possible value (usually 255).

Another effect related to color inversion is the solarise effect.

The difference between the solarise effect and color inversion is that with the solarise effect only color values above or below a set threshold are inverted.

The images below show the inversion of colours below a threshold of 128 in the first instance, and then the inversion of colours above a threshold of 128 in the second.
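A minimal Python sketch of inversion and both solarisation variants for a single 0-255 channel value:

```python
def invert(value):
    # negative effect: subtract from the maximum possible value
    return 255 - value

def solarise_below(value, threshold=128):
    # invert only the values below the threshold
    return 255 - value if value < threshold else value

def solarise_above(value, threshold=128):
    # invert only the values above the threshold
    return 255 - value if value > threshold else value
```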

Image processing applications mainly focus on improving the visual appearance of images to a human viewer and preparing them for measurement of the features and structures present.

The measurement of images generally requires that features be well defined, either by edges or unique brightness, colour, texture, or some combination of these factors.

Image processing functions/algorithms

Spatial filtering

An image can be filtered to remove a band of spatial frequencies, such as high frequencies or low frequencies. High frequencies are present where rapid brightness transitions occur.

Spatial filtering operations include high pass, low pass and edge detection filters.
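A minimal Python sketch of spatial filtering as a 3x3 neighbourhood operation on a 2-D numpy array of grey values; the choice of kernel determines whether the filter is low pass or high pass:

```python
import numpy as np

def convolve3x3(img, kernel):
    h, w = img.shape
    out = np.zeros_like(img, dtype=float)
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # weighted sum of the 3x3 neighbourhood around (y, x)
            out[y, x] = np.sum(img[y-1:y+2, x-1:x+2] * kernel)
    return out

low_pass = np.ones((3, 3)) / 9.0  # neighbourhood average: a low pass (blurring) kernel
```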

Sharpening

The main aim of image sharpening is to highlight fine detail in the image or to enhance detail that has been blurred due to noise or other effects.

The Laplacian is often used for this purpose; however, image sharpening can also be interpreted in the frequency domain.

Sharpening emphasizes edges in the image and makes them easier to see and recognize.

In addition, the differences between each pixel and its neighbours also influence the sharpening effect.
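A minimal Python sketch of Laplacian sharpening, reusing the convolve3x3 helper from the sketch above; the kernel shown is the standard 4-neighbour Laplacian:

```python
import numpy as np

laplacian = np.array([[0,  1, 0],
                      [1, -4, 1],
                      [0,  1, 0]], dtype=float)

def sharpen(img):
    # subtracting the Laplacian response emphasises brightness transitions
    edges = convolve3x3(img.astype(float), laplacian)
    return np.clip(img - edges, 0, 255)
```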

Blurring

The visual effect of a low pass filter is image blurring.

This is because the sharp brightness transitions have been attenuated to small brightness transitions, so the image has less detail and appears blurry.

Blurring aims to diminish the effects of camera noise, spurious pixel values or missing pixel values.

Two techniques are most commonly used for the blurring effect:

- neighbourhood averaging (Gaussian filters);

- edge preserving (median filters).

A Gaussian blur filter modifies each pixel by looking at its neighbours and computing a "weighted average" of their values, with more weight being given to closer neighbours.

The standard Gaussian blur filter found in most image processing programs is isotropic; it blends pixel values equally in all directions.

For median filtering on the other hand, the outcome is that pixels with outlying values are forced to become more like their neighbours but at the same time edges are preserved.
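A minimal Python sketch of both techniques, using scipy's standard filters on a 2-D numpy array of grey values:

```python
from scipy import ndimage

def gaussian_blur(img, sigma=2.0):
    # weighted neighbourhood average, with more weight to closer neighbours
    return ndimage.gaussian_filter(img, sigma=sigma)

def median_blur(img, size=3):
    # outlying values are pulled towards their neighbours; edges are preserved
    return ndimage.median_filter(img, size=size)
```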

Edge Detection

Edge detection is an image processing technique for finding the boundaries of objects within images. It works by detecting discontinuities in brightness.

Edge detection is used for image segmentation and data extraction in areas such as image processing, computer vision, and machine vision.

Edge Detection Techniques

Sobel operator: the Sobel operator estimates the image gradient using a pair of 3x3 convolution kernels, one responding to horizontal and one to vertical changes.

Roberts cross operator: the Roberts Cross operator performs a simple, quick to compute, 2-D spatial gradient measurement on an image.

Prewitt operator: the Prewitt operator [5] is similar to the Sobel operator and is used for detecting vertical and horizontal edges in images.

The Laplacian is a 2-D isotropic measure of the 2nd spatial derivative of an image.

The Laplacian is often applied to an image that has first been smoothed with something approximating a Gaussian Smoothing filter in order to reduce its sensitivity to noise.

The operator normally takes a single gray level image as input and produces another gray level image as output.
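A minimal Python sketch of Sobel edge detection on a 2-D numpy array, using scipy's built-in Sobel derivative and combining the two directional gradients into a magnitude image:

```python
import numpy as np
from scipy import ndimage

def sobel_edges(img):
    gx = ndimage.sobel(img.astype(float), axis=1)  # horizontal gradient
    gy = ndimage.sobel(img.astype(float), axis=0)  # vertical gradient
    return np.hypot(gx, gy)                        # gradient magnitude
```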


Canny Edge Detection Algorithm

Canny's intention was to enhance the many edge detectors already available at the time he started his work. The first and most obvious criterion is a low error rate. The second criterion is that the edge points be well localized. A third criterion is to have only one response to a single edge.

Hysteresis is used to track along the remaining pixels that have not been suppressed. Hysteresis uses two thresholds, T1 and T2. If the magnitude is below the low threshold T1, the pixel is set to zero (made a non-edge). If the magnitude is above the high threshold T2, it is made an edge. And if the magnitude is between the two thresholds, it is set to zero unless there is a path from this pixel to a pixel with a gradient above T2.
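A minimal Python sketch of this hysteresis step, assuming a 2-D gradient-magnitude array; connectivity to a strong pixel is tested with connected-component labelling rather than explicit path tracking, which gives the same result:

```python
import numpy as np
from scipy import ndimage

def hysteresis(magnitude, t1, t2):
    strong = magnitude > t2          # definite edges (above T2)
    weak = magnitude > t1            # candidate edges (above T1)
    # keep weak pixels only if their connected region touches a strong pixel
    labels, n = ndimage.label(weak)
    keep = np.zeros(n + 1, dtype=bool)
    keep[np.unique(labels[strong])] = True
    keep[0] = False                  # label 0 is the background
    return keep[labels]
```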

Advantages and Disadvantages of Edge Detector

Introduction: face recognition systems

• Due to increasing concern about security issues around the world, there is growing general interest in how accurately computer systems can identify faces. The number of security systems and applications in this area has therefore grown, the field has evolved significantly, and the algorithms used range from simple to complex.

• This raises many questions, among them: is the accuracy of a face recognition algorithm high enough to make it suitable for such applications?

Stages of face recognition systems in general

Step 1: Form the training set, a collection of photos submitted to the system in advance, and extract its features into the recognition memory.

Mechanism of automated face recognition system

Step 2: • Find the feature vector within the training set that most closely resembles the feature vector extracted from the image provided for testing, i.e. the image whose owner's identity we want the recognition system to determine.

A feature vector is an array derived from the arrays of the original picture; it represents the important and fundamental values of the original image, thereby reducing the amount of data needed to represent the image.

Step 3

Within this system, we want to know and distinguish the identity of a person, so we pass an image of that person to the system; in this case the image provided for recognition is called the test image.

In order to extract the feature vectors of the images within this project, we will rely on the PCA algorithm.

Principal Component Analysis (PCA) algorithm.

• The PCA algorithm is considered one of the most successful techniques that have been used in the fields of image recognition and image compression.

• This algorithm was implemented in the MATLAB program.

PCA algorithm

The main goal of the PCA algorithm lies in reducing the large dimensions of a data space to the dimensions of a smaller space. Usually the new space is a feature space (containing the basic and important features of the data in the original space).

Systems often deal with large-dimensional spaces. Many improvements can be made by transferring the existing data to a space with fewer dimensions; we thereby achieve a dimensionality reduction from the original space with large dimensions to a new space with smaller dimensions.

Identification problems and large-dimensional systems

Suppose, for example, that we have the following vector: x, a member of an N-dimensional space. We reduce the dimensionality by moving to another vector y in a space of K dimensions, where K < N.

Within this context, PCA calculates a linear transformation T which maps the existing data within the higher-dimensional space onto a lower-dimensional subspace that preserves its information, as in the expression below:

y = Tx

• In other words, the original vector x can be approximated by a reconstruction x̂ computed from its lower-dimensional representation y.

• The optimum transformation is the one for which the reconstruction error ||x - x̂|| is smallest.

Representation of each object within the training set in the lower-dimensional space using the PCA algorithm

Completing the task of face recognition using PCA

Assume that the face to be recognized is given as a one-dimensional face vector. The steps of facial recognition are:

Step 1: Represent the image in the form of a single one-dimensional vector; call it x.

Step 2: Normalize the vector by subtracting the average face from it.

Step 3: Find the nearest face within the training set to the face which is to be identified, taking er as the smallest difference (error) found.

Step 4: If er < Tr, where Tr is a threshold, the face has been identified as that training face.
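A minimal Python sketch of these steps using PCA (the eigenface approach), assuming faces is an (M, N) numpy array of M training images each flattened to an N-vector, and K is the number of principal components kept:

```python
import numpy as np

def train_pca(faces, K):
    mean = faces.mean(axis=0)
    centred = faces - mean                    # step 2: subtract the average face
    # principal components via SVD; rows of vt are the components
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    components = vt[:K]
    weights = centred @ components.T          # training-set feature vectors
    return mean, components, weights

def recognise(test_face, mean, components, weights, Tr):
    w = (test_face - mean) @ components.T     # feature vector of the test image
    errors = np.linalg.norm(weights - w, axis=1)
    best = int(np.argmin(errors))             # step 3: nearest training face
    # step 4: accept only if the smallest error er is below the threshold Tr
    return (best, errors[best]) if errors[best] < Tr else (None, errors[best])
```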

Motion detection

Motion detection is the first essential process in the extraction of information regarding moving objects, and makes use of stabilization in functional areas such as tracking, classification, recognition, and so on.

Motion Alarm

It is pretty easy to add a motion alarm feature to all of these motion detection algorithms.

Literature Survey

We review many classes of algorithms used in motion detection, including optical flow algorithms, the two complementary background estimation techniques, and the frame difference method.

Method of operation

Detection of Independent Motion

In this project we investigated two complementary methods for the detection of moving objects by a moving observer. The first is based on the fact that, in a rigid environment, the projected velocity at any point in the image is constrained to lie on a 1-D locus in velocity space, known as the constraint ray whose parameters depend only on the observer motion.

If the observer motion is known, an independently moving object can, in principle, be detected because its projected velocity is unlikely to fall on this locus. We show how this principle can be adapted to use partial information about the motion field and observer motion that can be rapidly computed from real image sequences.

The second method utilizes the fact that the apparent motion of a fixed point due to smooth observer motion changes slowly, while the apparent motion of many moving objects such as animals or maneuvering vehicles may change rapidly.

The motion field at a given time can thus be used to place constraints on the future motion field which, if violated, indicate the presence of an autonomously maneuvering object.

In both cases, the qualitative nature of the constraints allows the methods to be used with the inexact motion information typically available from real image sequences.

We have produced implementations of the methods that run in real time on a parallel pipelined image processing system.

The pictures show two examples of the real-time system in operation. In the first, the camera is mounted on a robot platform that is moving vertically with respect to the room. In the second, the camera is mounted on a vehicle that is being driven towards the road on which the detected vehicle is moving.

Motion Recognition

In this project, we investigated whether robustly computable motion features can be used directly as a means of recognition. We have designed, implemented, and tested a general framework for detecting and recognizing both distributed motion activity, on the basis of temporal texture, and complexly moving compact objects, on the basis of their activity.

Motion-Detection Steps

Motion detection is a two-step process.

Step 1

identifies the objects that have moved between the two frames (using difference and threshold filters). The difference between each corresponding pixel of the two frames is calculated, and the pixels with a difference greater than the specified threshold are marked in the foreground color (white). All other pixels are marked in the background color (black). So, the output of Step 1 will be a binary image with only two colors (black and white).

The intensity value (brightness level) of each pixel is used to calculate the difference. The intensity value of each pixel in a grayscale image will be in the range 0 to 255. If RGB images are used, then the grayscale intensity value can be calculated as (R + G + B) / 3.

Step 2

identifies the significant movements and filters out the noise that is wrongly identified as motion (using an erosion filter). The erosion filter removes pixels that are not surrounded by a sufficient number of neighbouring pixels. Essentially, this gives the ability to shrink objects, thereby removing noisy pixels, i.e. stand-alone pixels. The binary image (the output of Step 1) is scanned pixel by pixel, and if not enough pixels in the current window are turned on, the entire window is turned off, i.e. ignored as noise.
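A minimal Python sketch of the two-step process, assuming frame1 and frame2 are 2-D numpy arrays of grey intensities (0-255); the erosion-style step here counts the 'on' pixels in each window, which approximates the filter described above:

```python
import numpy as np
from scipy import ndimage

def detect_motion(frame1, frame2, threshold=25, window=3, min_count=5):
    # Step 1: difference + threshold -> binary image (True = foreground)
    diff = np.abs(frame1.astype(int) - frame2.astype(int))
    binary = diff > threshold
    # Step 2: drop windows that contain too few 'on' pixels (noise)
    counts = ndimage.uniform_filter(binary.astype(float), size=window) * window**2
    return binary & (counts >= min_count)
```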

Image processing in Hardware

Field-Programmable Gate Array (FPGA)

A Field-Programmable Gate Array is a large-scale integrated circuit (LSI) which is programmable.

Finite State Machine (FSM)

Finite State Machine is a behavioral model which consists of a finite number of states and transitions between the states.

Theories

Gray-level Co-occurrence Matrix Statistics Image Generation

GLCM statistics images are images which represent the statistical uniqueness of textures in an image. Many statistics images are generated.

Gray-level Co-occurrence Matrix (GLCM)

GLCM is a square matrix which contains the number of times that patterns of two scaled values are found while examining pairs of pixels throughout an image.
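A minimal Python sketch of building a GLCM in software, assuming img is a 2-D numpy array of grey levels already scaled to the range 0..levels-1 and that pixel pairs are taken at a fixed (dx, dy) offset; the hardware design's GLCM Builder module performs equivalent counting:

```python
import numpy as np

def glcm(img, levels, dx=1, dy=0):
    h, w = img.shape
    matrix = np.zeros((levels, levels), dtype=int)
    for y in range(max(0, -dy), h - max(0, dy)):
        for x in range(max(0, -dx), w - max(0, dx)):
            # count the co-occurrence of this pixel's value with its neighbour's
            matrix[img[y, x], img[y + dy, x + dx]] += 1
    return matrix
```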

Static Random Access Memory (SRAM)

SRAM is an electronic memory which is capable of storing data as long as there is a power supply for the device.


Timing Diagram of the SRAM Write Operation

Experiments

The main objective of this experiment is to find the best algorithm to be implemented in the GLCM statistics image generation operation.

Materials and Equipment

1. Executable files of each algorithm, with timer functions implemented in them.

2. An Open Dragon image whose size is 3822 pixels by 2560 pixels.

3. A computer with these specifications:
a. CPU: Intel Pentium 4 1.6 GHz
b. Motherboard: IBM, Intel i845
c. RAM: DDR 640 MB, 133 MHz

Hardware devices:

1. Prototyping Board: Design Gateway True PCI
2. SRAM: AMIC LP621024D
3. Bi-directional Buffer: 150Ω Resistors

System Components

There are 13 modules in the system. They are:

1. Memory Unit
2. Process Controller
3. Memory Controller
4. Arbiter
5. Center Indexer
6. Square Fetcher
7. Square Buffer
8. GLCM Builder
9. Address Decoder
10. Matrix Voter
11. Matrix Integrator
12. Clock Divider
13. pciif32

Designs

The Matrix Integrator was designed to calculate the three GLCM statistic values by summing all calculated positions of the GLCM matrix held in the Memory Unit.

Block structure showing ports of Matrix Integrator

Matrix Integrator

Clock Divider is responsible for dividing the frequency of a clock signal into another frequency.

Block structure showing ports of Clock Divider

Clock Divider

Image Processing in Hardware is a project concentrating on designing a coprocessor for the computer system to compute the computationally-intensive parts of operations in digital image processing.

Thank you for your attention

Done by:

Eng. Amal Ahmed Almathani
Eng. Ansam Mansour
Eng. Eftikhar Ali Alamri
Eng. Safia Moqbel
Eng. Somia Abdalhmeed