Thesis. Facial recognition security system


Description

Windows-based, built with Visual Studio 2010 (VS2010).


Page 1: Thesis. Facial recognition security system

Facial recognition security system

This report is presented in partial fulfillment of the requirements for the degree of Bachelor of Computer Engineering

At

COMSATS Institute of Information Technology Abbottabad, Pakistan (Session: 2009-2013)

Submitted by: Tallah Jamshaid Fa09-bce-051, Hashim Khan sp08-bee-153, Israr-ul-Haq sp08-bce-020

Supervised by: Engr. Nauman Tareen

Approved by:

Department of Electrical Engineering


COMSATS Institute of Information Technology Abbottabad

In the Name of ALLAH

The Most Gracious, The Most Merciful

Yesterday ALLAH was with us, today we are under HIS care,

and we should not worry about tomorrow:

ALLAH is already there. We just need to ask for HIS Mercy for all.


Acknowledgement

Almighty ALLAH is very kind, merciful, and compassionate. His benevolence and blessings enabled us to accomplish this task. I thank Almighty ALLAH, the most beneficent, the most merciful. I offer my humblest gratitude from the deepest core of my heart to the Holy Prophet Hazrat Muhammad (Peace Be Upon Him), who is, and forever will be, a model of guidance and knowledge for humanity as a whole.

We dedicate this project to our Parents and Family, whose love and affection have been inspirational throughout our lives,

To our Teachers, to whom we owe a lot for our success in our career, and

To all those who in any capacity have been helpful in the project.

We are thankful to our project supervisor Engr. Nauman Tareen, lecturer CIIT/EE, COMSATS University, Abbottabad. It is hard to find words of appropriate dimensions to express our gratitude to our worthy supervisor for his keen interest, suggestions, consistent encouragement, and support throughout the course of this project.


We are highly grateful to Engr. Asmat Ali Shah, without whose help and guidance this project could not have been a success.

Abstract

Comprehensive research has been undertaken to design and develop a system that will count, detect, and measure the speed of Faces for highways. This project describes a visual object detection framework that is capable of processing images extremely rapidly while achieving high detection rates, and that will count detected objects and measure the speed of detected objects. There are five key contributions.

The first is the introduction of a new image representation called the "Integral Image", which allows the features used by our detector to be computed very quickly. The second is a learning algorithm, based on AdaBoost, which selects a small number of critical visual features and yields extremely efficient classifiers. The third contribution is a method for combining classifiers in a "cascade" which allows background regions of the image to be quickly discarded while spending more computation on promising object-like regions. The fourth and fifth contributions are to count Faces and to measure the speed of Faces by processing the pixels of an image.

Facial recognition security systems have a wide range of applications: they are useful for security in many sensitive areas and help reduce security risks in many departments and institutes.

A set of experiments in the domain of facial recognition security systems is presented. The system yields performance comparable to the best previous systems.


Table of Contents:

Chapter 1 Introduction..........................................................................................8

1.1 Facial Detection:.................................................................................................................................8

1.2 Database:..........................................................................................................................................8

1.3 Microcontroller:.................................................................................................................................9

1.4 Objective:...........................................................................................................................................9

Chapter 2 Image Processing...........................................................................................................10

2.1 Background:.....................................................................................................................................10

2.2 Types of Processing:........................................................................................................................10

2.2.1 Low-level processing:................................................................................................................11

2.2.2 Mid-level processing:................................................................................................................11

2.2.3 Higher-level processing:............................................................................................................11

2.3 Components of an Image Processing System:.................................................................................11

2.4 Applications:....................................................................................................................................13

CHAPTER 3 Face Detection...........................................................................................................15

3.1 Introduction.....................................................................................................................................15

3.2 Rectangular features (Haar features):.............................................................................................15

3.3 An Integral Image for rapid feature computation:.........................................................................17

3.4 The AdaBoost Machine –learning method:.....................................................................................18

3.4.1 Working of Adaboost Algorithm:..............................................................................................19

3.5 The Cascaded Classifier:...................................................................................................................21

3.5.1 Training a cascade:....................................................................................................................22

3.5.2 Optimum Combination:............................................................................................................22

CHAPTER 4 Database .....................................................................................................25


4.1 Types of database:...........................................................................................................................26

4.2 Extensible Markup Language Database:..........................................................................................26

4.3 Storing Image in XML database.......................................................................................................26

4.3.1 :.................................................................................................................................................27

4.3.2 Feature count technologies:.....................................................................................................27

4.4 Applications to collect Feature data................................................................................................30

5.3 Image Processing:............................................................................................................................35

Chapter 6 Conclusion.................................................................................................37

6.1 DETECTION:.....................................................................................................................................37

Appendix A................................................................................................................................................41

Appendix B................................................................................................................................................47

APPENDIX C...............................................................................................................................................49

Table of Figures:

Figure 2.1 Components of general purpose image processing system.

Figure 3-1. Example rectangle features shown relative to the enclosing detection window. The sum of the pixels which lie within the white rectangles is subtracted from the sum of pixels in the grey rectangles. Two-rectangle features are shown in (A) and (B). Figure (C) shows a three-rectangle feature, and (D) a four-rectangle feature.

Figure 3-2 shows how an integral image is formed from the input image……………………………………18

Figure 3-3. The value of the integral image at point (x, y) is the sum of all the pixels above and to the left………………………………………………………………………………….……………………………….19

Figure.3-4.The sum of the pixels within rectangle D can be computed with four array references. The value of the integral image at location 1 is the sum of the pixels in rectangle A. The value at location 2 is A + B, at location 3 is A + C, and at location 4 is A + B + C + D. The sum within D can be computed as 4 + 1 − (2 + 3)…………………………………………………………………………………………………………………..19

Figure 3-5. This feature is selected by AdaBoost. The feature measures the difference in intensity between the region below the headlights and a region of shadow between the tyres. The feature capitalizes on the observation that the headlight region is often lighter than the shadow region.

Figure. 3.6. Schematic depiction of a detection cascade. A series of classifiers are applied to every sub-window. The initial classifier eliminates a large number of negative examples with very little processing. Subsequent layers eliminate additional negatives but require additional computation. After several stages of processing the numbers of sub-windows have been reduced radically. Further processing can take any form such as additional stages of the cascade (as in our detection system) or an alternative detection system……25

Figure. 3.7. Positive Samples used in training process…………………………………………………………..26


Figure. 3.8. Negative Samples used in training process…………………………………………………………27

Figure 4.1 shows the counting of Faces on the Detection Window…………………………………………………………29

Figure 4.2 shows different technologies of Face count…………………………………………………………………33

Figure 6.1 shows Face detection…………………………………………………………………………………41

Figure 6.2 shows the Face count………………………………………………………………………………42

Figure 6.3 shows the Face speed……………………………………………………………………………..43


Chapter 1 Introduction

1.1 Biometric Security Systems

Biometrics is used in the process of authenticating a person by verifying or identifying that a user requesting a network resource is who he, she, or it claims to be, and vice versa. It uses human traits associated with the person, such as the structure of the fingers or the details of the face. By comparing existing data with incoming data, we can verify the identity of a particular person. There are many types of biometric systems, such as fingerprint recognition, face detection and recognition, and iris recognition; these traits are used for human identification in surveillance systems and criminal identification. The advantage of using these traits for identification is that they cannot be forgotten or lost. They are unique features of a human being and are widely used.

1.2 Goal

The goals of this project are to enhance public safety, reduce congestion, improve traveler and transit information, generate cost savings to carriers and emergency operators, and reduce detrimental environmental impacts. This technology assists states, cities, and towns nationwide in fully meeting the increasing demands on the Facial recognition security system. The efficiency of the system is mainly based on the performance and comprehensiveness of the Face detection technology. Face detection and tracking are an integral part of any Face detection technology, since they gather all or part of the information that is used in an efficient way.

1.3 Face Detection

A Face detection system may be defined as a system capable of detecting faces and measuring Feature parameters such as count, speed, incidents, etc. Face detection by video cameras is one of the most promising non-intrusive technologies for large-scale data collection and for the implementation of advanced Feature control and management schemes. Face detection is also the basis for Face tracking: correct Face detection results in better tracking. Modern computer-controlled Feature systems have more complex Face detection requirements than those adopted for normal Feature-actuated controllers for Feature signals, for which many off-the-shelf Face detectors were designed. Many useful and comprehensive parameters, such as count, speed, Face classification, queue lengths, volume per lane, lane changes, and microscopic and macroscopic behaviors, can be evaluated through video-based Face detection and tracking.

1.4 Face Recognition

The face is a complex multidimensional structure and needs good computing techniques for recognition. The face is our primary and first focus of attention in social life, playing an important role in the identity of an individual. We can recognize a number of faces learned throughout our lifespan and identify those faces at a glance even after years. There may be variations in faces due to aging and distractions like a beard, glasses, or a change of hairstyle. Face recognition is an integral part of biometrics. In biometrics, basic human traits are matched against the existing data, and depending on the result of the matching, the identity of a human being is established. Facial features are extracted and implemented through algorithms which are efficient, and some modifications are made to improve the existing algorithm models.

Computers that detect and recognize faces could be applied to a wide variety of practical applications, including criminal identification, security systems, and identity verification. Face detection and recognition are used in many places nowadays, such as websites hosting images and social networking sites. Face recognition and detection can be achieved using technologies related to computer science. Features extracted from a face are processed and compared with similarly processed faces present in the database. If a face is recognized, it is known, or the system may show a similar face existing in the database; otherwise it is unknown. In surveillance systems, if an unknown face appears more than one time, it is stored in the database for further recognition. These steps are very useful in criminal identification. In general, face recognition techniques can be divided into two groups based on the face representation they use: appearance-based, which uses holistic texture features and is applied to either the whole face or specific regions in a face image; and feature-based, which uses permanent facial features (mouth, eyes, brows, cheeks, etc.) and the geometric relationships between them.
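The matching step described above, comparing an extracted feature vector against the vectors stored in the database, can be sketched in a few lines. This is an illustrative sketch only: the function names, the vector format, and the distance threshold below are invented for the example, not taken from the project's implementation.

```python
import math

def euclidean(a, b):
    # Euclidean distance between two face feature vectors.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def identify(probe, database, threshold=0.6):
    """Return the label of the closest enrolled face, or 'unknown'.
    `database` maps a label to a stored feature vector; the vector
    format and the threshold value are illustrative assumptions."""
    best_label, best_dist = None, float("inf")
    for label, stored in database.items():
        d = euclidean(probe, stored)
        if d < best_dist:
            best_label, best_dist = label, d
    return best_label if best_dist <= threshold else "unknown"

db = {"alice": [0.1, 0.9, 0.3], "bob": [0.8, 0.2, 0.5]}
print(identify([0.12, 0.88, 0.31], db))  # recognized as "alice"
print(identify([0.5, 0.5, 0.99], db))    # no close match: "unknown"
```

A face whose nearest match is still beyond the threshold is reported as unknown, which is the case the text says would be stored for further recognition.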

1.5 Video Image Detection

We present a Face detection and counting system based on digital image-processing techniques. The images can be taken by digital cameras installed on top of existing Feature lights. By using the proposed approach, it is possible to detect the number of Faces waiting on each side of the intersection, hence providing the necessary information for optimal Feature management. Results achieved after testing this methodology on three real intersections are promising, attaining high accuracy while counting several Faces at the same time. Hence, the system is equivalent to installing multiple inductive loops in all the streets of the intersection, but with lower installation and maintenance costs.

1.6 Microcontroller 89C51

The 89C51 is a microcontroller manufactured by the Atmel Corporation. It is considered a general-purpose microcontroller due to its low cost and its industry-standard instruction set.

1.7 Description

The AT89C51 is a low-power, high-performance CMOS 8-bit microcomputer with 4K bytes of Flash programmable and erasable read-only memory (PEROM). The device is manufactured using Atmel's high-density nonvolatile memory technology and is compatible with the industry-standard MCS-51 instruction set and pinout. The on-chip Flash allows the program memory to be reprogrammed in-system or by a conventional nonvolatile memory programmer. By combining a versatile 8-bit CPU with Flash on a monolithic chip, the Atmel AT89C51 is a powerful microcomputer which provides a highly flexible and cost-effective solution to many embedded control applications.

1.8 Features 4K Bytes of In-System Reprogrammable Flash Memory

Fully Static Operation: 0 Hz to 24 MHz

Three-level Program Memory Lock

128 x 8-bit Internal RAM


32 Programmable I/O Lines

Two 16-bit Timer/Counters

Six Interrupt Sources

1.9 Data Sheet 89c51


Chapter 2 Image Processing

The field of digital image processing refers to processing digital images by means of a digital computer. There is no general agreement among authors regarding where image processing stops and other related areas, such as image analysis and computer vision, start. Sometimes a distinction is made by defining image processing as a discipline in which both the input and output of a process are images. We believe this to be a limiting and somewhat artificial boundary. For example, under this definition, even the trivial task of computing the average intensity of an image (which yields a single number) would not be considered an image processing operation. On the other hand, there are fields such as computer vision whose ultimate goal is to use computers to emulate human vision, including learning and being able to make inferences and take actions based on visual inputs. This area itself is a branch of artificial intelligence (AI) whose objective is to emulate human intelligence. The field of AI is in its earliest stages of infancy in terms of development, with progress having been much slower than originally anticipated. The area of image analysis (also called image understanding) is in between image processing and computer vision.

2.1 Background:

One of the first applications of digital images was in the newspaper industry, when pictures were first sent by submarine cable between London and New York. Introduction of the Bartlane cable picture transmission system in the early 1920s reduced the time required to transport a picture across the Atlantic from more than a week to less than three hours. Specialized printing equipment coded pictures for cable transmission and then reconstructed them at the receiving end.

Some of the initial problems in improving the visual quality of these early digital pictures were related to the selection of printing procedures and the distribution of intensity levels. The printing method first used was abandoned toward the end of 1921 in favor of a technique based on photographic reproduction made from tapes perforated at the telegraph receiving terminal.

2.2 Types of Processing: There are no clear-cut boundaries in the continuum from image processing at one end to computer vision at the other. However, one useful paradigm is to consider three types of computerized processes in this continuum: low-, mid-, and high-level processes.


2.2.1 Low-level processing:

Low-level processing involves primitive operations such as image preprocessing to reduce noise, contrast enhancement, and image sharpening. A low-level process is characterized by the fact that both its inputs and outputs are images.
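The image-in, image-out character of a low-level process can be illustrated with a simple noise-reduction filter. This sketch is not from the project's code; the function name and the toy image are invented, and a real system would use a library routine over full-size frames.

```python
def mean_filter(img):
    """3x3 mean filter, a typical low-level operation: the input is an
    image (list of pixel rows) and the output is again an image.
    Border pixels are left unchanged for simplicity."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # Average the 3x3 neighbourhood around (x, y).
            s = sum(img[y + dy][x + dx]
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1))
            out[y][x] = s // 9
    return out

noisy = [
    [10, 10, 10, 10],
    [10, 90, 10, 10],   # 90 is an isolated noise spike
    [10, 10, 10, 10],
    [10, 10, 10, 10],
]
smoothed = mean_filter(noisy)
print(smoothed[1][1])  # the spike is pulled toward the neighbourhood mean
```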

2.2.2 Mid-level processing:

Mid-level processing on images involves tasks such as segmentation (partitioning an image into regions or objects), description of those objects to reduce them to a form suitable for computer processing, and classification (recognition) of individual objects. A mid-level process is characterized by the fact that its inputs generally are images, but its outputs are attributes extracted from those images (e.g., edges, contours, and the identity of individual objects).
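A minimal sketch of a mid-level process, under the simplifying assumptions that segmentation is plain thresholding and that "objects" are 4-connected bright regions: the input is an image, but the output is an attribute (a region count). The names and the toy image are invented for illustration.

```python
def segment_and_count(img, thresh):
    """Threshold an image into a binary mask (segmentation), then count
    4-connected foreground regions: image in, attributes out."""
    h, w = len(img), len(img[0])
    mask = [[1 if img[y][x] > thresh else 0 for x in range(w)]
            for y in range(h)]
    seen = [[False] * w for _ in range(h)]
    count = 0
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not seen[y][x]:
                count += 1
                stack = [(y, x)]          # flood-fill one region
                while stack:
                    cy, cx = stack.pop()
                    if (0 <= cy < h and 0 <= cx < w
                            and mask[cy][cx] and not seen[cy][cx]):
                        seen[cy][cx] = True
                        stack += [(cy + 1, cx), (cy - 1, cx),
                                  (cy, cx + 1), (cy, cx - 1)]
    return mask, count

img = [
    [0, 200, 0, 0],
    [0, 200, 0, 180],
    [0, 0, 0, 180],
]
mask, n = segment_and_count(img, 128)
print(n)  # two separate bright regions
```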

2.2.3 Higher-level processing:

Higher-level processing involves “making sense” of an ensemble of recognized objects, as in image analysis, and, at the far end of the continuum, performing the cognitive functions normally associated with vision.

2.3 Components of an Image Processing System:

As recently as the mid-1980s, numerous models of image processing systems being sold throughout the world were rather substantial peripheral devices that attached to equally substantial host computers. Late in the 1980s and early in the 1990s, the market shifted to image processing hardware in the form of single boards designed to be compatible with industry-standard buses and to fit into engineering workstation cabinets and personal computers. In addition to lowering costs, this market shift also served as a catalyst for a significant number of new companies specializing in the development of software written specifically for image processing.

Although large-scale image processing systems are still being sold for massive imaging applications, such as processing of satellite images, the trend continues toward the miniaturizing and blending of general-purpose small computers with specialized image processing hardware. Figure 2.1 shows the basic components comprising a typical general-purpose system used for digital image processing. The function of each component is discussed in the following paragraphs, starting with image sensing.

With reference to sensing, two elements are required to acquire digital images. The first is a physical device that is sensitive to the energy radiated by the object we wish to image. The second, called a digitizer, is a device for converting the output of the physical sensing device into digital form. For instance, in a digital video camera, the sensors produce an electrical output proportional to light intensity.


Figure 2.1 Components of general purpose image processing system.

Specialized image processing hardware usually consists of the digitizer just mentioned, plus hardware that performs other primitive operations, such as an arithmetic logic unit (ALU), which performs arithmetic and logical operations in parallel on entire images. One example of how an ALU is used is in averaging images as quickly as they are digitized, for the purpose of noise reduction. This type of hardware is sometimes called a front-end subsystem, and its most distinguishing characteristic is speed. In other words, this unit performs functions that require fast data throughputs (e.g., digitizing and averaging video images at 30 frames/s) that the typical main computer cannot handle.

The computer in an image processing system is a general-purpose computer and can range from a PC to a supercomputer. In dedicated applications, custom computers are sometimes used to achieve a required level of performance, but our interest here is in general-purpose image processing systems. In these systems, almost any well-equipped PC-type machine is suitable for off-line image processing tasks.

Software for image processing consists of specialized modules that perform specific tasks. A well-designed package also includes the capability for the user to write code that, as a minimum, utilizes the specialized modules. More sophisticated software packages allow the integration of those modules and general-purpose software commands from at least one computer language.

2.4 Applications:

Today, there is almost no area of technical endeavor that is not impacted in some way by digital image processing. We can cover only a few of these applications in the context and space of the current discussion. However, limited as it is, the material presented in this section will leave no doubt in your mind regarding the breadth and importance of digital image processing.

This section surveys numerous areas of application, each of which routinely utilizes digital image processing techniques. Many of the images described in this section are used later in one or more examples in this report. The areas of application of digital image processing are so varied that some form of organization is desirable in attempting to capture the breadth of this field.

One of the simplest ways to develop a basic understanding of the extent of image processing applications is to categorize images according to their source.

The principal energy source for images in use today is the electromagnetic energy spectrum. Other important sources of energy include acoustic, ultrasonic, and electronic. Synthetic images, used for modeling and visualization, are generated by computer.

Images based on radiation from the EM spectrum are the most familiar, especially images in the X-ray and visual bands of the spectrum. Electromagnetic waves can be conceptualized as propagating sinusoidal waves of varying wavelengths, or they can be thought of as a stream of massless particles, each traveling in a wavelike pattern and moving at the speed of light. Each massless particle contains a certain amount (or bundle) of energy. Each bundle of energy is called a photon.

Angiography is another major application in an area called contrast enhancement radiography. This procedure is used to obtain images of blood vessels.


Applications of ultraviolet light are varied. They include lithography, industrial inspection, microscopy, lasers, biological imaging, and astronomical observations. We illustrate imaging in this band with examples from microscopy and astronomy. Ultraviolet light is used in fluorescence microscopy, one of the fastest growing areas of microscopy. Fluorescence is a phenomenon discovered in the middle of the nineteenth century, when it was first observed that the mineral fluorspar fluoresces when ultraviolet light is directed upon it.

Considering that the visual band of the electromagnetic spectrum is the most familiar in all our activities, it is not surprising that imaging in this band outweighs by far all the others in terms of breadth of application. The infrared band often is used in conjunction with visual imaging, so we have grouped the visible and infrared bands in this section for the purpose of illustration. We consider in the following discussion applications in light microscopy, astronomy, remote sensing, industry, and law enforcement. Image processing has been applied to Feature analysis in recent years, with different goals. In this report, a new approach is presented for extracting vehicular speed information, given a sequence of real-time Feature images.


CHAPTER 3 Face Detection

3.1 Introduction

A lot of work has been done on the problem of Face detection, and several of the existing techniques can detect robustly with extremely high accuracy. However, due to various poses, different scales, different expressions, lighting conditions, complex backgrounds, and orientation, the detection rate and real-time performance of existing methods are not yet fully satisfactory. For Face detection, we use the Viola-Jones method, which is the most common method for Face detection. In this chapter we explain the Face detection process.

There are four key concepts in Face detection.

3.2. Computing simple rectangular features, called Haar features.

3.3. An Integral image is first created which is then used for rapid computation of features.

3.4. The AdaBoost machine-learning method is used to select and train weak classifiers from the feature set.

3.5. A cascaded classifier is constructed by combining many features (weak classifiers) efficiently.

Now we will explain these concepts in more detail and describe the Face detection procedure.
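Before the detailed explanations, the cascade idea (concept 4 above) can be sketched in outline: each image sub-window is passed through a sequence of increasingly selective stage classifiers and is rejected as soon as any stage says no. The stage functions and feature values below are invented stand-ins for trained AdaBoost stages, not the project's actual classifiers.

```python
def cascade_classify(window_features, stages):
    """Sketch of a cascaded classifier: each stage is a cheap test, and a
    sub-window is rejected as soon as any stage fails, so most background
    windows exit after very little computation."""
    for stage in stages:
        if not stage(window_features):
            return False          # rejected early, little work done
    return True                   # survived every stage: a likely detection

# Toy stages: each checks one precomputed feature value against a threshold.
stages = [
    lambda f: f[0] > 10,   # very cheap, permissive first stage
    lambda f: f[1] > 20,
    lambda f: f[2] > 30,   # more selective later stage
]
print(cascade_classify([15, 25, 35], stages))  # passes all stages
print(cascade_classify([15, 5, 35], stages))   # rejected at the second stage
```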

3.2 Rectangular features (Haar features):

The Face detection procedure classifies images based on the value of simple features. The motivations for using features rather than pixels directly are that features can act to encode ad-hoc domain knowledge that is difficult to learn using a finite quantity of training data, and that a feature-based system operates much faster than a pixel-based system. The most commonly used features are shown in Fig. 3-1. The value of a two-rectangle feature is the difference between the sums of the pixels within two rectangular regions. A three-rectangle feature computes the sum within two outside rectangles subtracted from the sum in a center rectangle. Finally, a four-rectangle feature computes the difference between diagonal pairs of rectangles. Given that the base resolution of the detector is 24 × 24, the exhaustive set of rectangle features is quite large: 160,000.

Figure 3-1. Example rectangle features shown relative to the enclosing detection window. The sum of the pixels which lie within the white rectangles is subtracted from the sum of pixels in the grey rectangles. Two-rectangle features are shown in (A) and (B). Figure (C) shows a three-rectangle feature, and (D) a four-rectangle feature.
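A two-rectangle feature value can be computed directly from pixel sums, as a brute-force illustration. The function names and the toy 4x4 patch below are invented for this sketch; a real detector computes these sums in constant time from the integral image described in the next section.

```python
def rect_sum(img, top, left, height, width):
    # Direct (slow) pixel sum over a rectangle; the integral image
    # makes this an O(1) operation instead.
    return sum(img[y][x]
               for y in range(top, top + height)
               for x in range(left, left + width))

def two_rect_feature(img, top, left, height, width):
    """Value of a horizontal two-rectangle (Haar) feature: the sum over
    the left half is subtracted from the sum over the right half."""
    half = width // 2
    left_sum = rect_sum(img, top, left, height, half)
    right_sum = rect_sum(img, top, left + half, height, half)
    return right_sum - left_sum

# A toy 4x4 patch, dark on the left and bright on the right:
# exactly the kind of contrast an edge-like feature responds to.
patch = [
    [10, 10, 200, 200],
    [10, 10, 200, 200],
    [10, 10, 200, 200],
    [10, 10, 200, 200],
]
print(two_rect_feature(patch, 0, 0, 4, 4))  # large positive response
```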

3.3 Feature Count:

A Feature count is a count of Feature along a particular Detection Window, done either electronically or by people counting at the side of the Detection Window. Feature counts can be used by local councils to identify which routes are used most, and either to improve that Detection Window or to provide an alternative if there is an excessive amount of Feature. Some geographic fieldwork also involves a Feature count. Counts are useful for comparing two or more Detection Windows.


Figure 4.1 shows the counting of Faces on the Detection Window

3.4 An Integral Image for rapid feature computation:

Rectangle features can be computed very rapidly using an integral image. The integral image at location (x, y) is the sum of the pixels above and to the left of (x, y), inclusive:

ii(x, y) = ∑_{x′ ≤ x, y′ ≤ y} i(x′, y′)    (3-1)

where ii(x, y) is the integral image and i(x, y) is the original image (see Figure 3-2). Using the following pair of recurrences:

s(x, y) = s(x, y − 1) + i(x, y)
ii(x, y) = ii(x − 1, y) + s(x, y)    (3-2)

where s(x, y) is the cumulative row sum, s(x, −1) = 0, and ii(−1, y) = 0.

In an integral image the value at pixel (x,y) is the sum of pixels above and to the left of (x,y).


Figure 3-2. How an integral image is formed from the input image (left: input image; right: integral image).

The integral image is computed in one pass over the original image. Once the integral image is computed, any rectangular sum can be computed in four array references (see Fig. 3-4), so the difference between two rectangular sums can be computed in eight references. Since the two-rectangle features defined above involve adjacent rectangular sums, they can be computed in six array references, eight in the case of the three-rectangle features, and nine for four-rectangle features.

Figure 3-3. The value of the integral image at point (x, y) is the sum of all the pixels above and to the left.


Figure 3-4. The sum of the pixels within rectangle D can be computed with four array references. The value of the integral image at location 1 is the sum of the pixels in rectangle A. The value at location 2 is A + B, at location 3 is A + C, and at location 4 is A + B + C + D. The sum within D can be computed as 4 + 1 − (2 + 3).

The figures above illustrate the integral-image concept used in the computation of rectangular features. We now discuss the AdaBoost machine-learning algorithm, first applied to real-time detection by Viola and Jones.

3.5 The AdaBoost Machine-Learning Method:

Viola and Jones developed the first real-time detection system using AdaBoost. AdaBoost is an algorithm for constructing a "strong" classifier as a linear combination of "weak" classifiers; it improves accuracy by combining a series of weak classifiers stage by stage.

Given a feature set and a training set of positive and negative images, a classification function can be developed. Recall that there are 160,000 rectangle features associated with each image sub-window, a number far larger than the number of pixels. A very small number of these features can be combined to form an effective classifier. A variant of AdaBoost is used both to select the features and to train the classifier. Basically, the AdaBoost learning algorithm is used to boost the classification performance of a simple learning algorithm (e.g., it might be used to boost the performance of a simple perceptron). It does this by combining a collection of weak classification functions to form a stronger classifier. In the language of boosting, the simple learning algorithm is called a weak learner. In order for the weak learner to be boosted, it is called upon to solve a sequence of learning problems. After the first round of learning, the examples are re-weighted in order to highlight those which were incorrectly classified by the previous weak classifier. The weak learning algorithm is designed to select the single rectangle feature which best separates the positive and negative examples. For each feature, the weak


learner determines the optimal threshold classification function, such that the minimum number of examples is misclassified. A weak classifier h(x, f, p, θ) thus consists of a feature (f), a threshold (θ) and a polarity (p) indicating the direction of the inequality:

h(x, f, p, θ) = 1 if p · f(x) < p · θ, and 0 otherwise    (3-3)

Here x is a 24 × 24 pixel sub-window of an image. In practice no single feature can perform the classification task with low error. Table 3.1 shows the learning algorithm.

3.5.1 Working of the AdaBoost Algorithm:

AdaBoost starts with a uniform distribution of weights over the training examples. It then repeats the following procedure: select the classifier with the lowest weighted error (a "weak" classifier); increase the weights on the training examples that were misclassified; repeat. At the end, carefully form a linear combination of the weak classifiers obtained at all iterations.

Table 3-1. The boosting algorithm for classifier learning. T hypotheses are constructed, each using a single feature. The final hypothesis is a weighted linear combination of the T hypotheses, where the weights are inversely proportional to the training errors.

Given example images (x1, y1), …, (xn, yn), where yi = 0, 1 for negative and positive examples respectively.

Initialize weights w1,i = 1/(2m) for yi = 0 and w1,i = 1/(2l) for yi = 1, where m and l are the number of negatives and positives respectively.

For t = 1, …, T:

1. Normalize the weights, wt,i ← wt,i / ∑j wt,j, so that wt is a probability distribution.

2. For each feature j, train a classifier hj which is restricted to using a single feature. The error is evaluated with respect to wt: εj = ∑i wt,i |hj(xi) − yi|.

3. Choose the classifier ht with the lowest error εt.

4. Update the weights: wt+1,i = wt,i βt^(1 − ei), where ei = 0 if example xi is classified correctly, ei = 1 otherwise, and βt = εt / (1 − εt).

5. The final strong classifier is: C(x) = 1 if ∑t αt ht(x) ≥ (1/2) ∑t αt, and 0 otherwise, where αt = log(1/βt).

The final classifier is thus a weighted combination of weak classifiers. A pictorial representation of the AdaBoost classification process is shown in Figure 3-4. The AdaBoost learning algorithm initially maintains a uniform distribution of weights over the training samples. In the first iteration, the algorithm trains a weak classifier using the one Haar-like feature that achieves the best recognition performance on the training samples. In the second iteration, the training samples that were misclassified by the first weak classifier receive higher weights, so that the newly selected Haar-like feature must focus more of its discriminative power on these misclassified samples. The iterations continue, and the final result is a cascade of linear combinations of the selected weak classifiers, i.e. a strong classifier, which achieves the required accuracy.

3.6 The Cascaded Classifier:

In practical implementations, an attention cascade is employed to speed up the learning algorithm's performance. In the first stage of the training process, the threshold of the weak classifier is set low enough that 100% of the target objects are detected, keeping the false negative rate close to zero. The trade-off of a low threshold is that a higher false positive rate accompanies it. A cascade of classifiers achieves increased detection performance while reducing computation time: small, efficient boosted classifiers can be constructed which reject many of the negative sub-windows while detecting almost all positive instances. Simpler classifiers reject most sub-windows before more complex classifiers are invoked to achieve low false positive rates. Stages in the cascade are constructed by training classifiers using AdaBoost. Starting with a two-feature strong classifier, an effective filter can be obtained by adjusting the strong classifier's threshold to minimize false negatives. The initial AdaBoost threshold is designed to yield a low error rate on the training data; a lower threshold yields higher detection rates and higher false positive rates. Based on performance measured using a validation training set, the two-feature classifier can be adjusted to detect


100% of the faces with a false positive rate of 50%. Fig. 3-5 shows a description of the features used in this classifier.

Figure 3-5. This feature is selected by AdaBoost. It measures the difference in intensity between the region below the headlights and a region of shadow between the tyres, capitalizing on the observation that the headlight region is often lighter than the shadow region.

3.6.1 Training a Cascade:

To design a cascade we must choose:

- the number of stages in the cascade (strong classifiers);
- the number of features of each strong classifier;
- the threshold of each strong classifier.

This poses an optimization problem: can we find the optimum combination?

3.6.2 Optimum Combination:

Finding the optimum combination is extremely difficult, so Viola and Jones suggested a heuristic algorithm for cascade training, driven by manual tweaking:

- select fi, the maximum acceptable false positive rate per stage;
- select di, the minimum acceptable true positive rate per stage;
- select Ftarget, the target overall false positive rate.

Until Ftarget is met, add a new stage; until the fi and di rates are met for this stage, keep adding features and train a new strong classifier with AdaBoost.

Table 3-2 shows pseudocode for building a cascade detector.

The user selects values for f, the maximum acceptable false positive rate per layer, and d, the minimum acceptable detection rate per layer.
The user selects the target overall false positive rate [1], Ftarget.
P = set of positive examples; N = set of negative examples.
F0 = 1.0; D0 = 1.0; i = 0.
While Fi > Ftarget:
    i ← i + 1
    ni = 0; Fi = Fi−1
    While Fi > f × Fi−1:
        ni ← ni + 1
        Use P and N to train a classifier with ni features using AdaBoost.
        Evaluate the current cascaded classifier on a validation set to determine Fi and Di.
        Decrease the threshold for the i-th classifier until the current cascaded classifier has a detection rate of at least d × Di−1 (this also affects Fi).
    N ← Ø
    If Fi > Ftarget, evaluate the current cascaded detector on the set of non-face images and put any false detections into the set N.

The overall form of the detection process is that of a degenerate decision tree, what is called a "cascade" (Quinlan, 1986) (see Fig. 3-6). A positive result from the first classifier triggers a second classifier, which has also been adjusted to achieve very high detection rates. A positive result


from the second classifier triggers evaluation of a third classifier, and so on. A negative outcome at any point means immediate rejection of the sub-window. The structure of the cascade exploits the fact that within any single image, an overwhelming majority of sub-windows are negative, so the cascade tries to reject as many negatives as possible at the earliest stage possible. We developed a 20-stage classifier; training took 30 hours to complete on a Pentium IV system with a 3 GHz processor and 2 GB of RAM.

Figure 3-6. Schematic depiction of a detection cascade. A series of classifiers is applied to every sub-window. The initial classifier eliminates a large number of negative examples with very little processing. Subsequent layers eliminate additional negatives but require additional computation. After several stages of processing, the number of sub-windows has been reduced radically. Further processing can take any form, such as additional stages of the cascade (as in our detection system) or an alternative detection system.

While a positive instance triggers the evaluation of every classifier in the cascade, this is an exceedingly rare event. Like a decision tree, subsequent classifiers are trained using those examples which pass through all the previous stages, so the second classifier performs a more difficult task than the first: the examples which pass the first stage are harder than typical examples. We collected 1000 positive samples of faces, of size 24 × 24, under varied conditions for training. To obtain a robust classifier we used good-quality positive samples. Figure 3-7 shows some of the positive samples used in our work. Similarly, 1500 negative images of size 640 × 480 were used. Figure 3-8 shows some of the negative images used in our training process.


Figure 3-7. Positive samples used in the training process.

Figure 3-8. Negative samples used in the training process.

The training algorithm, based on the AdaBoost learning algorithm, then takes the sets of positive and negative samples and generates a classifier that detects faces.


Chapter 6 Conclusion

In this work we developed an application for highways. Our application can robustly detect faces, count them, and measure their speed. We achieved precise and robust face detection, counting, and speed measurement. For face detection and counting we used Haar-like features, with the AdaBoost algorithm for both feature selection and classification. The face speed measurement system combined the AdaBoost algorithm with the optical flow method. For face recognition, general databases were used. The tested face videos include different lighting conditions, and we tested videos of different image sizes. The recognition system worked efficiently even in the most difficult scenario, a sample size of 5 × 5. The results show that the developed application is fully applicable in real-world environments.

6.1 DETECTION:

Figure 6.1 shows Face detection

Figure 6.1 shows the result of detection: only the respective face is detected, and nothing else in the surroundings.

6.2 SPEED MEASUREMENT

Face speed is measured with the distance-time formula:

speed = (distance2 − distance1) / time

where the two distances and the elapsed time are known.


REFERENCES:

1) Guillaume Leduc, "Road Traffic Data: Collection Methods and Applications," Working Papers on Energy, Transport and Climate Change.

2) Zahid Mahmood, "Automatic Player Detection and Recognition in Images Using AdaBoost," August 2011.

3) Rafael C. Gonzalez, Digital Image Processing, Third Edition, October 2007.

4) Paul Viola and Michael Jones, "Robust Real-time Object Detection," 13 July 2001.

Websites:

5) http://ntl.bts.gov/DOCS/arizona_report.html

6) http://en.wikipedia.org/wiki/OpenCV

7) http://en.wikipedia.org/wiki/Optical_flow


Appendix A

OpenCV (Open Source Computer Vision Library) is a library of programming functions mainly aimed at real-time computer vision, developed by Intel, and now supported by Willow Garage and Itseez. It is free for use under the open source BSD license. The library is cross-platform. It focuses mainly on real-time image processing. If the library finds Intel's Integrated Performance Primitives on the system, it will use these proprietary optimized routines to accelerate itself.

Officially launched in 1999, the OpenCV project was initially an Intel Research initiative to advance CPU-intensive applications, part of a series of projects including real-time ray tracing and 3D display walls. The main contributors to the project included a number of optimization experts in Intel Russia, as well as Intel’s Performance Library Team. In the early days of OpenCV, the goals of the project were described as

Advance vision research by providing not only open but also optimized code for basic vision infrastructure. No more reinventing the wheel.

Disseminate vision knowledge by providing a common infrastructure that developers could build on, so that code would be more readily readable and transferable.

Advance vision-based commercial applications by making portable, performance-optimized code available for free, with a license that did not require applications to be open or free themselves.

The first alpha version of OpenCV was released to the public at the IEEE Conference on Computer Vision and Pattern Recognition in 2000, and five betas were released between 2001 and 2005. The first 1.0 version was released in 2006. In mid-2008, OpenCV obtained corporate support from Willow Garage, and is now again under active development. A version 1.1 "pre-release" was released in October 2008.

The second major release of OpenCV was in October 2009. OpenCV 2 includes major changes to the C++ interface, aiming at easier, more type-safe patterns, new functions, and better implementations of existing ones in terms of performance (especially on multi-core systems). Official releases now occur every six months and development is now done by an independent Russian team supported by commercial corporations.

In August 2012, support for OpenCV was taken over by a non-profit foundation, OpenCV.org, which maintains a developer and user site.


APPLICATIONS:

OpenCV's applications areas include:

- 2D and 3D feature toolkits
- Egomotion estimation
- Facial recognition system
- Gesture recognition
- Mobile robotics
- Motion understanding
- Object identification
- Segmentation and recognition
- Stereopsis (stereo vision: depth perception from two cameras)
- Structure from motion (SFM)
- Motion tracking

OS support:

OpenCV runs on Windows, Android, Maemo, FreeBSD, OpenBSD, BlackBerry, Linux and OS X. Users can get official releases from SourceForge, or take the current snapshot under SVN from there. OpenCV uses CMake.

Description: The main object for all computer vision processes

Syntax: OpenCV(parent);

Fields:

BILATERAL                   Blur method
BLUR                        Blur method
BUFFER                      Type of image
CASCADE_FRONTAL_ALT         Standard Haar classifier cascade file used for object detection
CASCADE_FRONTAL_ALT2        Standard Haar classifier cascade file used for object detection
CASCADE_FRONTAL_ALT_TREE    Standard Haar classifier cascade file used for object detection
CASCADE_FRONTAL_DEFAULT     Standard Haar classifier cascade file used for object detection
CASCADE_FULLBODY            Standard Haar classifier cascade file used for object detection
CASCADE_UPPERBODY           Standard Haar classifier cascade file used for object detection
FLIP_BOTH                   Flip mode
FLIP_HORIZONTAL             Flip mode
FLIP_VERTICAL               Flip mode
GAUSSIAN                    Blur method
GRAY                        Color space of image
HAAR_DO_CANNY_PRUNING       Haar classifier flag
HAAR_DO_ROUGH_SEARCH        Haar classifier flag
HAAR_FIND_BIGGEST_OBJECT    Haar classifier flag
HAAR_SCALE_IMAGE            Haar classifier flag
INTER_AREA                  Interpolation method
INTER_CUBIC                 Interpolation method
INTER_LINEAR                Interpolation method
INTER_NN                    Interpolation method
MAX_VERTICES                The maximum number of contour points available to blob detection (by default)
MEDIAN                      Blur method
MEMORY                      Type of image
MOVIE_FRAMES                Movie info selector (not yet implemented)
MOVIE_MILLISECONDS          Movie info selector (not yet implemented)
MOVIE_RATIO                 Movie info selector (not yet implemented)
RGB                         Color space of image
SOURCE                      Type of image
THRESH_BINARY               Thresholding method
THRESH_BINARY_INV           Thresholding method
THRESH_OTSU                 Thresholding method
THRESH_TOZERO               Thresholding method
THRESH_TOZERO_INV           Thresholding method
THRESH_TRUNC                Thresholding method
height                      OpenCV image/buffer height
width                       OpenCV image/buffer width

Methods:

ROI()             Set image region of interest to the given rectangle.
absDiff()         Calculate the absolute difference between the image in memory and the current image.
allocate()        Allocate required buffer with the given size.
blobs()           Blob and contour detection.
blur()            Smooth the image in one of several ways.
brightness()      Adjust the image brightness with the specified value (in range -128 to 128).
capture()         Allocate and initialize resources for reading a video stream from a camera.
cascade()         Load into memory the descriptor file for a trained cascade classifier.
contrast()        Adjust the image contrast with the specified value (in range -128 to 128).
convert()         Convert the current image from one colorspace to another.
copy()            Copy the image (or a part of it) into the current OpenCV buffer (or a part of it).
detect()          Detect object(s) in the current image depending on the current cascade description.
flip()            Flip the current image around vertical, horizontal or both axes.
image()           Return the current (or specified) OpenCV image.
interpolation()   Set global interpolation method.
invert()          Invert image.
jump()            Jump to a specified movie frame.
loadImage()       Load an image from the specified file.
movie()           Allocate and initialize resources for reading a video file from the specified file name.
pixels()          Retrieve current (or specified) image data.
read()            Grab a new frame from the input camera or a movie file.
remember()        Place the image (original or current) in memory.
restore()         Revert to the original image.
stop()            Stop OpenCV process.
threshold()       Apply fixed-level threshold to the current image.

Appendix B

C++ language:

C++ (pronounced "see plus plus") is a statically typed, free-form, multi-paradigm, compiled, general-purpose programming language. It is regarded as an intermediate-level language, as it comprises both high-level and low-level language features. Developed by Bjarne Stroustrup starting in 1979 at Bell Labs, C++ was originally named C with Classes,


adding object-oriented features, such as classes, and other enhancements to the C programming language. The language was renamed C++ in 1983, as a pun involving the increment operator.

C++ is one of the most popular programming languages and is implemented on a wide variety of hardware and operating system platforms. As an efficient compiler to native code, its application domains include systems software, application software, device drivers, embedded software, high-performance server and client applications, and entertainment software such as video games. Several groups provide both free and proprietary C++ compiler software, including the GNU Project, LLVM, Microsoft, Intel and Embarcadero Technologies. C++ has greatly influenced many other popular programming languages, most notably C# and Java. Other successful languages such as Objective-C use a very different syntax and approach to adding classes to C.

C++ is also used for hardware design, where the design is initially described in C++, then analyzed, architecturally constrained, and scheduled to create a register-transfer level hardware description language via high-level synthesis.

The language began as enhancements to C, first adding classes, then virtual functions, operator overloading, multiple inheritance, templates and exception handling, among other features. After years of development, the C++ programming language standard was ratified in 1998 as ISO/IEC 14882:1998.

Philosophy:

In The Design and Evolution of C++ (1994), Bjarne Stroustrup describes some rules that he used for the design of C++.

- C++ is designed to be a statically typed, general-purpose language that is as efficient and portable as C.
- C++ is designed to directly and comprehensively support multiple programming styles (procedural programming, data abstraction, object-oriented programming, and generic programming).
- C++ is designed to give the programmer choice, even if this makes it possible for the programmer to choose incorrectly.
- C++ is designed to be compatible with C as much as possible, thereby providing a smooth transition from C.
- C++ avoids features that are platform-specific or not general-purpose.
- C++ does not incur overhead for features that are not used (the "zero-overhead principle").
- C++ is designed to function without a sophisticated programming environment.

Inside the C++ Object Model (Lippman, 1996) describes how compilers may convert C++ program statements into an in-memory layout. Compiler authors are, however, free to implement the standard in their own manner.


Operators and operator overloading

Operators that cannot be overloaded:

Operator                        Symbol
Scope resolution operator       ::
Conditional operator            ?:
Dot operator                    .
Member selection operator       .*
"sizeof" operator               sizeof
"typeid" operator               typeid

C++ provides more than 35 operators, covering basic arithmetic, bit manipulation, indirection, comparisons, logical operations and others. Almost all operators can be overloaded for user-defined types, with a few notable exceptions such as member access (. and .*) and the conditional operator. The rich set of overloadable operators is central to making user-defined types in C++ work as well, and as easily, as built-in types (so that a user cannot tell the difference). Overloadable operators are also an essential part of many advanced C++ programming techniques, such as smart pointers. Overloading an operator changes neither the precedence of calculations involving the operator nor the number of operands the operator uses (any operand may, however, be ignored by the operator, though it will be evaluated prior to execution). Overloaded "&&" and "||" operators lose their short-circuit evaluation property.


APPENDIX C

FALSE POSITIVE RATE:

In statistics, when performing multiple comparisons, the term false positive ratio, also known as the false alarm ratio, usually refers to the probability of falsely rejecting the null hypothesis for a particular test. The false positive rate usually refers to the expectancy of the false positive ratio.

