Project Report: Audio Video Processor


Chapter 1 Introduction


1.1 Introduction

In our project we take input from a hex keypad connected to a microcontroller, transmit this data serially to a computer, and use it to select different kinds of video processing. The following operations are performed:

1. Motion detection
2. Color detection
3. Motion tracking
4. Shape detection
5. Ghost mouse
6. Object counting
7. Audio playing
8. Optical character recognition

The input is taken from the hex keypad. The data from the keypad goes in parallel to the microcontroller (ATmega16A), which transmits it serially to the computer, where it is read by MATLAB. In MATLAB this data selects the type of video processing to carry out: the value of the data tells MATLAB which program to run.
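As an illustration, the following is a minimal MATLAB sketch of this read-and-dispatch step. The COM port name, baud rate, and routine names are assumptions for illustration only, not the exact code used in the project.

% Minimal sketch: read one keypad value sent by the ATmega16A over the
% serial link and dispatch to the matching processing routine.
% 'COM1', 9600 baud, and the routine names are assumed placeholders.
s = serial('COM1', 'BaudRate', 9600);   % pre-R2019b serial interface
fopen(s);
key = fread(s, 1, 'uint8');             % one byte per key press
fclose(s); delete(s);

switch key
    case 1, motionDetection();          % hypothetical routine names
    case 2, colorDetection();
    case 3, motionTracking();
    case 4, shapeDetection();
    case 5, ghostMouse();
    case 6, objectCounting();
    case 7, audioPlayer();
    case 8, characterRecognition();
    otherwise, disp('Unknown key value');
end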


For each value:

1. Motion detection: motion in front of the camera is detected, and output appears on the screen only when there is some movement; if there is no movement, no output is produced. This can be used for smart switching and movement detection (a sketch of this mode is given after this list).

2. Color detection: red regions are detected in the video input, a bounding box marks their location, and the centroid of the box shows the center point of the colored region. This makes it easy to detect a color and to estimate how much of that color component is present. It can be used in industrial automation, for example picking defective components out of bulk manufacturing or locating a faulty area on a manufactured part (see the sketch after this list).

3. Motion tracking: a reference node is chosen, which can be anything such as the nib of a pen, a person's head, or the tip of a finger; here we take the fingertip as the reference node. It is detected, its motion is traced, and the traced path is displayed on the screen. This can be used to keep an eye on a chosen node, for example a student moving around a library, or as a simple painting application.

4. Shape detection: any reference shape can be chosen, for example a circle or a rectangle; here we take the circle as the reference shape. Circles in the video input are highlighted by their boundaries and their centroids are displayed on the screen. This can be used to find a particular shape in the input data, for instance to locate a particular component in bulk raw material or among manufactured parts.

5. Ghost mouse: the screen pointer is controlled as with a mouse, except that it follows a moving reference object, which can be a finger, a laser beam, or an object of any shape; here we use a red object. Optical flow computed from the video input is used for motion detection, and the pointer position is calculated and set accordingly. This gives an interactive interface between the machine and the user, so that a computer could be placed like a TV in the home and operated without any wired device.

6. Object counting: objects of a reference shape are counted. The reference shape can be a circle, triangle, square, rectangle, or any other shape; here we take the rectangle. The shapes are filtered out of images captured from the video input and counted. Images are captured at 30 frames per second, so the frame-by-frame image processing is not apparent to the viewer. The number of objects is shown on the screen and all the objects are highlighted. This can be used for counting objects in industry, counting people in a seminar hall, and so on (see the sketch after this list).

7. Audio playing: this mode processes data in an audio format. An audio file is played, as in Windows Media Player, and media-player actions can be performed by interacting directly with the pointer.

8. Optical character recognition: characters such as a, b, c, d, 1, 4, A, and F are recognized from the video input, and the recognized character is displayed in a text reader. Frames from the video input are captured as images, and each captured image is convolved with reference patterns predefined in the program; a strong response to a particular pattern yields the corresponding character, which is then printed in the text editor. This can be used to read data written on moving components.
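A minimal MATLAB sketch of the motion-detection mode (value 1) by frame differencing is given below. It assumes the Image Acquisition and Image Processing Toolboxes, the 'winvideo' adaptor, and illustrative threshold values; it is not the project's exact implementation.

% Minimal sketch of motion detection by frame differencing.
vid = videoinput('winvideo', 1);        % assumed adaptor and device ID
previous = rgb2gray(getsnapshot(vid));
for k = 1:100                           % process 100 frames
    current = rgb2gray(getsnapshot(vid));
    d = imabsdiff(current, previous);   % pixel-wise frame difference
    changed = sum(d(:) > 25);           % changed pixels (assumed threshold)
    if changed > 500                    % assumed movement threshold
        disp('Movement detected');
        imshow(d > 25);                 % show the moving region
    end
    previous = current;
end
delete(vid);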
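Similarly, a minimal sketch of the red-color detection mode (value 2) is given below; it marks each detected red region with a bounding box and its centroid. The threshold and minimum-area values are assumptions.

% Minimal sketch of red-object detection with bounding box and centroid.
vid   = videoinput('winvideo', 1);                 % assumed adaptor
frame = getsnapshot(vid);
red   = imsubtract(frame(:,:,1), rgb2gray(frame)); % emphasize red regions
bw    = im2bw(red, 0.18);                          % assumed threshold
bw    = bwareaopen(bw, 300);                       % remove small specks
stats = regionprops(bw, 'BoundingBox', 'Centroid');
imshow(frame); hold on;
for i = 1:numel(stats)
    rectangle('Position', stats(i).BoundingBox, 'EdgeColor', 'g');
    plot(stats(i).Centroid(1), stats(i).Centroid(2), 'r+');
end
hold off;
delete(vid);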
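A minimal sketch of the object-counting mode (value 6) is also given. It assumes the objects are brighter than the background in a single captured frame; the minimum-area value is an assumption, and real counting would repeat this for every frame.

% Minimal sketch of object counting: binarize a frame, label connected
% regions, and report how many were found.
vid   = videoinput('winvideo', 1);        % assumed adaptor
frame = getsnapshot(vid);
gray  = rgb2gray(frame);
bw    = im2bw(gray, graythresh(gray));    % Otsu threshold
bw    = bwareaopen(bw, 200);              % ignore tiny regions
[labels, count] = bwlabel(bw);            % label connected objects
stats = regionprops(labels, 'BoundingBox');
imshow(frame); hold on;
for i = 1:count
    rectangle('Position', stats(i).BoundingBox, 'EdgeColor', 'y');
end
hold off;
title(sprintf('%d objects found', count));
delete(vid);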

1.2 Digital Image Processing

Images are produced by a variety of physical devices, including still and video cameras, scanners, X-ray devices, electron microscopes, radar, and ultrasound, and are used for a variety of purposes, including entertainment, medical, business, industrial, military, civil, security, and scientific applications. Interest in digital image processing stems from the improvement of pictorial information for human interpretation and the processing of scene data for autonomous machine perception.

Digital image analysis is a process that converts a digital image into something other than a digital image, such as a set of measurement data or a decision. Image digitization is a process that converts a pictorial form into numerical data. A digital image is an image f(x, y) that has been discretized in both spatial coordinates and brightness (intensity). The image is divided into small regions called picture elements, or pixels. Image digitization includes image sampling (i.e., digitization of the spatial coordinates (x, y)) and gray-level quantization (i.e., brightness amplitude digitization). An image is represented by a rectangular array of integers. The image size and the number of gray levels are usually integer powers of 2. The number at each pixel represents the brightness or darkness (generally called the intensity) of the image at that point.

In general, image processing operations can be categorized into four types:

o Pixel operations: the output at a pixel depends only on the input at that pixel, independent of all other pixels in the image. Thresholding, a process of making the input pixels above a certain threshold level white and the others black, is a simple pixel operation (see the sketch after this list). Other examples include brightness addition/subtraction, contrast stretching, image inverting, log, and power-law transformations.

o Local (neighborhood) operations: the output at a pixel depends on the input values in a neighborhood of that pixel. Examples are edge detection, smoothing filters (e.g., the averaging filter and the median filter), and sharpening filters (e.g., the Laplacian filter and the gradient filter). These operations can be adaptive, because the results depend on the particular pixel values encountered in each image region.

o Geometric operations: the output at a pixel depends only on the input levels at some other pixels defined by geometric transformations. Geometric operations differ from global operations in that the input comes only from specific pixels determined by the geometric transformation; they do not require input from all the pixels of the image.

o Global operations: the output at a pixel depends on all the pixels in the image. It may be independent of the pixel values, or it may reflect statistics calculated from all the pixels, but not from a local subset of them. The popular distance transformation of an image, which assigns to each object pixel the minimum distance from it to all the background pixels, is a global operation. Other examples include histogram equalization/specification, image warping, the Hough transform, and connected components.
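As a simple example of a pixel operation, the following MATLAB sketch thresholds a grayscale image; the sample image name and threshold value are illustrative assumptions.

% Minimal sketch of thresholding as a pixel operation: each output pixel
% depends only on the corresponding input pixel.
I  = imread('cameraman.tif');   % sample image shipped with the toolbox
T  = 128;                       % assumed threshold level
bw = I > T;                     % pixels above T become white, others black
imshow(bw);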

Nowadays, there is almost no area that is not impacted in some way by digital image processing. Its applications include:

o Remote sensing: images acquired by satellites and other spacecraft are useful in tracking Earth's resources, solar features, geographical mapping, and space imaging applications.

o Image transmission and storage for business: applications include broadcast television, teleconferencing, transmission of facsimile images for office automation, communication over computer networks, security monitoring systems, and military communications.

o Medical processing: applications include X-ray, cineangiogram, transaxial tomography, and nuclear magnetic resonance imaging. These images may be used for patient screening and monitoring or for the detection of tumors or other diseases.

o Radar, sonar, and acoustic image processing: for example, the detection and recognition of various types of targets and the maneuvering of aircraft.

o Robot/machine vision: applications include the identification or description