FACE DETECTION FOR ACCESS CONTROL. By Dmitri De Klerk. Supervisor: James Connan.
8/13/2019 Dmitri Warren de Klerk - 2653786 - Thesis
1/70
A FACE DETECTION SYSTEM USED FOR ACCESS CONTROL
by
Dmitri Warren De Klerk
A mini-thesis submitted in partial fulfillment of the requirements for the degree of
Bachelor of Science (Honours) in Computer Science
University of the Western Cape
Supervisor: Mr. J. Connan
November 2009
University of the Western Cape
Abstract
A FACE DETECTION SYSTEM USED FOR ACCESS CONTROL
by Dmitri Warren De Klerk
Supervisor: Mr. J. Connan
Department of Computer Science
In this project we develop a face detection system which is used for access control. The face detection system will accurately determine the locations and sizes of all possible human faces. The faces are then scaled to a recognizable size and passed to a face recognition system implemented by Desmond Eustin van Wyk [1] that can accurately determine the identity of a person and decide whether or not to grant them access to a facility.
TABLE OF CONTENTS
Table of Contents iii
Declaration v
Keywords vi
Acknowledgments vii
Glossary viii
CHAPTER 1 10
1.1 - User's view of the problem 10
1.2 - Problem domain 11
1.3 - Complete description of the problem 11
1.4 - Expectations from a system 12
CHAPTER 2 13
2.1 - User's requirements interpretation 13
2.2 - Existing solutions 14
2.3 - Suggested system 14
CHAPTER 3 16
3.1 - Complete user interface 16
3.1.1 - How the user interface behaves 19
3.1.2 - The Add User Dialog 20
3.1.3 - The Confirm User Delete dialog 22
3.1.4 - The File menu 23
3.1.5 - The Face Detection menu 24
3.1.6 - Face Detection Settings dialog 25
3.1.7 - The Help menu 26
3.1.8 - The User Guide dialog 27
3.1.9 - The About dialog 28
CHAPTER 4 30
Object Oriented Analysis (OOA) 30
4.1 - Data dictionary 30
4.2 - Class diagrams 33
4.3 - Relationship between objects 40
CHAPTER 5 41
Object Oriented Design (OOD) 41
5.1 - Inner details of class attributes and methods 41
CHAPTER 6 48
CHAPTER 7 49
CHAPTER 8 51
8.1 - System requirements 51
8.2 - The Face Recognition System project directory 52
8.3 - Running the Access Control system 54
8.4 - Complete user interface 54
8.4.1 - How the user interface behaves 57
8.4.1 - The Add User Dialog 58
8.4.2 - The Confirm User Delete dialog 61
8.4.3 - The File menu 62
8.4.4 - The Face Detection menu 63
8.4.5 - Face Detection Settings dialog 64
8.4.6 - The Help menu 65
8.4.7 - The User Guide dialog 66
8.4.8 - The About dialog 67
Chapter 9 69
CONCLUSION 69
DECLARATION
I declare that A FACE DETECTION SYSTEM USED FOR ACCESS CONTROL is
my own work, that it has not been submitted for any degree or examination in any other
university and that all sources I have used or quoted have been indicated and
acknowledged by complete references.
Dmitri Warren De Klerk November 2009
Signature: .
KEYWORDS
Haar Features
Integral Image
Cascade
AdaBoost
Boosting
Classifier
Weak Classifier
Strong Classifier
ACKNOWLEDGMENTS
I thank the Almighty God, for it is through His strength; I can do all things through Christ
who strengthens me. Thanks to my supervisor Mr. J. Connan for his patience, support,
guidance, and input in this project. Thanks to my family and girlfriend, who have
supported and motivated me.
Thanks to the staff members of the Department of Computer Science and my classmates
for their kind assistance during the year.
I would also like to acknowledge the work done by Desmond Eustin van Wyk. His
documentation proved extremely useful in documenting my work.
GLOSSARY
AdaBoost (Adaptive Boosting): An efficient machine learning boosting algorithm which
combines weak classifiers while significantly reducing not only the training error but
also the more elusive generalization error.
Boosting: A method of producing a very accurate prediction rule by combining rough
and moderately inaccurate rules of thumb.
Base Resolution: The resolution at which the detector starts to detect faces.
Weak Classifier: A classifier is called weak because it only needs to classify the
examples correctly in more than 50% of the cases.
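The weak-classifier definition above can be illustrated with a minimal single-feature decision stump; the class name and toy feature values below are illustrative assumptions, not the thesis implementation:

```java
// A minimal decision-stump weak classifier: it thresholds one feature
// value and only needs to beat 50% accuracy, as defined above.
public class WeakStump {
    private final double threshold; // decision boundary on the feature value
    private final int polarity;     // +1 or -1: which side of the threshold means "face"

    public WeakStump(double threshold, int polarity) {
        this.threshold = threshold;
        this.polarity = polarity;
    }

    /** Returns +1 (face) or -1 (non-face) for one feature value. */
    public int classify(double featureValue) {
        return polarity * (featureValue - threshold) >= 0 ? 1 : -1;
    }

    /** Fraction of labelled examples this stump gets right. */
    public double accuracy(double[] features, int[] labels) {
        int correct = 0;
        for (int i = 0; i < features.length; i++) {
            if (classify(features[i]) == labels[i]) correct++;
        }
        return (double) correct / features.length;
    }
}
```

Any stump whose accuracy on the training set exceeds 0.5 qualifies as a weak classifier in the boosting sense.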
INTRODUCTION
In this project a face detection system is implemented and integrated into an Access
Control system. Face detection systems locate the size and scale of human faces in images
and video sequences, if present. Face detection is the first step for face localization, face
tracking, facial expression recognition, and face recognition.
Face detection is in itself a challenging problem. The difficulty resides in the fact that faces
are non-rigid objects. Face appearance may vary between two photographs of the same
person, depending on the emotional state, lighting conditions, or pose [4]. This is why so
many methods have been developed over the past years.
The goal is to detect faces very quickly in cluttered backgrounds. This situation is found
in many applications, such as surveillance of public places and common Access Control
conditions. Thus far, learning-based approaches have been the most effective and have
therefore attracted a lot of attention in recent years. Viola and Jones [6][7] introduced an
impressive face detection system capable of detecting frontal-view faces in real time. This
is attributed to the AdaBoost learning algorithm presented by Freund and Schapire [8] and
the fast response of the simple features used by Viola and Jones [6]. Hundreds of features
can quickly be calculated by introducing a new image representation called the Integral
Image. The AdaBoost algorithm sequentially constructs a classifier as a linear combination
of weak classifiers. The classifiers are combined in a cascade, which allows background
regions to be quickly discarded while spending more computation on more promising,
object-like regions.
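The Integral Image just mentioned can be sketched in a few lines; the class and method names here are assumptions for illustration, not the thesis code:

```java
// Sketch of the integral image: ii(x, y) holds the sum of all pixels
// above and to the left of (x, y), inclusive, computed in one pass.
public class IntegralImage {
    public static long[][] compute(int[][] pixels) {
        int h = pixels.length, w = pixels[0].length;
        long[][] ii = new long[h][w];
        for (int y = 0; y < h; y++) {
            long rowSum = 0; // running sum of the current row
            for (int x = 0; x < w; x++) {
                rowSum += pixels[y][x];
                ii[y][x] = rowSum + (y > 0 ? ii[y - 1][x] : 0);
            }
        }
        return ii;
    }

    /** Sum over the rectangle [x0..x1] x [y0..y1] in four array references. */
    public static long rectSum(long[][] ii, int x0, int y0, int x1, int y1) {
        long a = (x0 > 0 && y0 > 0) ? ii[y0 - 1][x0 - 1] : 0;
        long b = (y0 > 0) ? ii[y0 - 1][x1] : 0;
        long c = (x0 > 0) ? ii[y1][x0 - 1] : 0;
        return ii[y1][x1] - b - c + a;
    }
}
```

rectSum is why a Haar-like feature, a difference of a few rectangle sums, costs constant time at any scale or location.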
C H A P T E R 1
USER'S REQUIREMENTS DOCUMENT (URD)
This document describes the problem from the user's point of view. It briefly describes
the problem domain of face detection. The document then delivers a simple and exact
problem description, wherein the user states exactly what he/she would like the face
detection system to do. We focus on the tasks to be solved rather than the interface
required to solve them.
1.1 - User's view of the problem
The user requires a face detection system: given an image, the goal of the face detection
system is to determine whether or not there are any faces in the image and, if present,
return the face locations. The system should operate in real time to make for a passive and
fully automatic Access Control system. A user would only be required to stand in front of
the camera in order to be recognized. The main motivation for a face detection system is
that the user wouldn't be required to position his/her face into a fixed-size box in order to
be recognized by the face recognition system.
Other reasons for a face detection system are that it is the first step for:
Face localization seeks to determine the position of a single face within an image; the
detection problem is simplified since the input image contains only one face.
Facial feature detection seeks to detect the presence and location of features, such as the
mouth, nose, eyes, lips, ears, etc.; the detection problem is simplified since the input image
contains only one face.
Facial expression recognition identifies the emotional states of humans, e.g. happy, sad,
angry.
Face tracking methods estimate the location, and possibly the orientation, of a face in an
image on a continuous basis in real time.
As shown above, face detection is the first step in any fully automated system which solves
the above problems; therefore a robust and accurate face detection system is critical.
1.2 - Problem domain
Given an image, the goal of a face detection system is to identify all face regions regardless
of position and scale [1]. This problem is challenging as faces are non-rigid objects. Face
appearance may vary between two different persons, and also between two images of the
same person, depending on the lighting conditions, emotional state, and pose of the
subject [4].
Face detection difficulties:
Global face attributes. Every face shares some common attributes: a face can globally
be approximated by a kind of ellipse, but humans have thin faces, round faces, etc.
Skin color also differs from one person to another.
Facial expression. Face appearance depends highly on the emotional state of a person.
The facial features of a smiling face are far from those of an indifferent or sad face.
Presence or absence of structural components. Face detection must handle objects that
can be found on a face: glasses, which change one of the main characteristics of the face
(the darkness of the eyes), and natural facial features such as beards and mustaches,
which can occlude part of the face.
1.3 - Complete description of the problem
Determine whether or not there are any faces in the camera output and, if present, return
the face locations in the images. The biggest face detected, being the user closest to the
camera, is scaled to a recognizable scale. This detection window is then passed to the face
recognition system implemented by Desmond Eustin van Wyk [3] for recognition.
1.4 - Expectations from a system
The first and foremost expectation of a face recognition system is that it must have a high
degree of accuracy when recognizing people. The next highest expectation is that the
system should indicate to people who they are when it recognizes them. It must also be
possible to add and remove the people that the system should recognize.
The system is to accurately identify and locate human faces under the following conditions
and circumstances:
upright, frontal faces
minor variations in lighting conditions
minor variations in facial expression
minor variations in illumination
a big enough scale to perform face recognition
any position
1.5 - Not expected from a system
A solution system is not expected to detect a human face under the following conditions
and circumstances:
non-frontal face poses
rotated faces
extreme lighting conditions (darkness, too much light)
C H A P T E R 2
REQUIREMENTS ANALYSIS DOCUMENT (RAD)
This document takes the user requirements as a starting point and looks at face detection
from the designer's view. The analysis focuses on the system and software needed to
implement the user requirements. We take the user's requirements and clearly identify all
of the details and mitigating factors that will affect the solution the user wants. The RAD
then identifies the software system and paradigm that will best fit the user requirements.
2.1 - User's requirements interpretation
Access control systems are considered to be mission-critical, real-time systems and
thus must operate correctly under many different situations and circumstances. For a fully
automatic face recognition system, face detection and face localization are very important
and the very first steps to developing such a system [3]. The background composition is
one of the main factors for explaining the difficulties in face detection [4]. Face detection
in access control systems needs to detect faces in any background, meaning the background
can be textured and have great variability [4]. The single most dominant problem with
face recognition and other biometric systems is accuracy: they do not perform well under
the many different situations and circumstances that are encountered in day-to-day
life [3].
The two most important characteristics of a face detector are its detection and error rates.
The detection rate of a face detector is defined as the ratio between the number of
correctly detected faces and the number of actual faces. The error can be broken down
into two types, namely:
False positives: an image sub-region is declared to be a face, but is not.
False negatives: an image sub-region is not declared to be a face, when it is a face.
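Given labelled sub-regions, the detection rate and the two error types defined above can be counted directly; this is a hedged sketch with assumed names, not part of the described system:

```java
// Counts detector outcomes over scanned sub-regions:
// declared[i] = detector says "face", actual[i] = ground truth says "face".
public class DetectorMetrics {
    /** Declared a face, but is not one. */
    public static int falsePositives(boolean[] declared, boolean[] actual) {
        int n = 0;
        for (int i = 0; i < declared.length; i++)
            if (declared[i] && !actual[i]) n++;
        return n;
    }

    /** Not declared a face, but actually is one. */
    public static int falseNegatives(boolean[] declared, boolean[] actual) {
        int n = 0;
        for (int i = 0; i < declared.length; i++)
            if (!declared[i] && actual[i]) n++;
        return n;
    }

    /** Correctly detected faces divided by the number of actual faces. */
    public static double detectionRate(boolean[] declared, boolean[] actual) {
        int hits = 0, faces = 0;
        for (int i = 0; i < declared.length; i++) {
            if (actual[i]) {
                faces++;
                if (declared[i]) hits++;
            }
        }
        return faces == 0 ? 0.0 : (double) hits / faces;
    }
}
```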
2.2 - Existing solutions
Face detection is the first step in any full face recognition, face localization, or facial
expression recognition system. It has therefore been highly researched over the past
years, and there are many different techniques and algorithms for performing face
detection. All of these techniques fall under the following main methods:
Knowledge-based methods:
Based on what constitutes a typical face, e.g. the relationship between facial
features.
Feature invariant approaches:
Find structural features of a face that exist even when the viewpoint, lighting, or
pose vary.
Template matching methods:
Uses several standard patterns to describe the face as a whole or the facial features
separately.
Appearance-based methods (classifiers/learning-based):
The models are learned from a set of training images that capture the
representative variability of facial appearance.
2.3 - Suggested system
Detecting faces in black and white still images with unconstrained, complex backgrounds
is a complicated task [1]. Thus far, learning-based approaches have been the most effective
and have therefore attracted a lot of attention over the last years. In 2001 Viola and Jones
[6][7] published an impressive face detection system capable of detecting frontal-view faces
in real time. The properties of the detector are partly attributed to the AdaBoost learning
algorithm.
AdaBoost (Adaptive Boosting) rapidly became popular in the machine learning
community when it was presented by Freund and Schapire [8].
As we want to detect faces in the various backgrounds found in Access Control systems, it
would be improper to use purely geometrical methods. In fact, the main advantage of these
geometrical methods is their geometric invariance properties, and we are not interested in
them since we are staying in the context of frontal face detection. So it is quite natural that
we oriented our choice towards learning algorithms. Boosting is a powerful iterative
procedure that builds efficient classifiers by selecting and combining very simple classifiers.
The suggested system uses Boosting and Haar-like features.
The first step is to compute an image representation called an integral image, which allows
for very fast feature evaluation at many scales. The integral image can be computed from
an image using a few operations per pixel. Once computed, any one of these Haar-like
features can be computed at any scale or location in constant time [7].
The second contribution of the Viola-Jones approach is a method for constructing a
classifier by selecting a small number of important features using AdaBoost [6]. AdaBoost
is used to both select the features and train the classifier, boosting the performance of the
weak classifiers.
The third major contribution of their approach is a method for combining successively more
complex classifiers in a cascade structure, which dramatically increases the speed of the
detector by focusing attention on promising regions of the image [7].
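The cascade described above behaves like a degenerate decision tree: every stage must accept, and the first rejection discards the sub-window. A minimal sketch, with stage internals stubbed out as predicates (an assumption for illustration):

```java
import java.util.List;
import java.util.function.Predicate;

public class CascadeSketch {
    /**
     * Evaluates a cascade over one sub-window. A negative result at any
     * stage leads to immediate rejection, which is what makes background
     * regions cheap to discard.
     */
    public static <W> boolean isFace(List<Predicate<W>> stages, W window) {
        for (Predicate<W> stage : stages) {
            if (!stage.test(window)) return false; // early rejection
        }
        return true; // survived every stage
    }
}
```

Because most sub-windows fail an early, cheap stage, average cost per window stays far below the cost of the full classifier chain.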
2.4 - Testing the suggested system
The system can be tested on the MIT+CMU frontal face test set [6]. This set consists of
130 images with 507 labeled frontal faces.
The image dataset is used by the CMU Face Detection Project and is provided for
evaluating algorithms for detecting frontal views of human faces.
http://www.ri.cmu.edu/projects/project_320.html
C H A P T E R 3
USER INTERFACE SPECIFICATION
This chapter describes exactly what the user interface is going to do, what it looks like, and
how the user interacts with the program. The UIS however does not describe how the
interface is implemented. Nor does it describe what the program does behind the
interface. Rather, the UIS focuses in detail specifically on the user interface itself.
3.1 - Complete user interface
Figure 3.1 below displays the main graphical user interface frame for our system titled
Access Control.
Figure 3.1: Complete user interface
User Image panel
The User Image panel displays the user image of the user currently selected in the User List
panel.
User List panel
The User List panel lists the authorized users the Access Control system can recognize.
Search field
The search field can be used to search for a user in the face recognition system. This
feature can be very useful when the face recognition system has hundreds of users.
The search function searches for the substring of the search text in the username and
lists all the matched usernames in alphabetical order, displaying the first matched user's
image in the User Image panel.
Camera output
The camera output displays the camera's output. It is used for capturing images when adding
users to the face recognition system, and for monitoring current user login activity.
Add user button
The add user button adds an authorized user to the Access Control system. On execution
of the Add User button the Add User dialog as shown in Figure 3.5 is displayed.
Remove user button
The remove button is only enabled when there are users in the system. The remove button
removes a user from the Access Control system. On execution of the Remove User
button the Confirm User Delete dialog in Figure 3.8 is displayed.
Recognition output
The Access Control system displays the output of the recognized user below the camera
output. When a user is not recognized by the Access Control system, the system displays
"Who are you?"; otherwise the system displays the user ID of the recognized user and the
rate of the face recognition.
Log panel
When a user is detected by the face detection system and recognized by the face
recognition system, the Access Control system logs the user's information to the log panel.
The log panel logs the user and the time at which the user logged in; the log file gets saved
with the current date every time the user terminates the system.
Acceptance threshold
The face recognition system has a threshold at which users should be recognized, with 0
being the lowest (most lenient) threshold value and 1 being the highest (most strict) value
at which the recognition operates. The Acceptance threshold spinner is used to adjust the
acceptance threshold value with which the ANN's output is compared [1].
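The acceptance-threshold comparison above reduces to a single check against the ANN output from [1]; a hypothetical sketch with assumed names, not the system's actual code:

```java
public class AcceptanceCheck {
    /** Accepts a recognition only when the network output reaches the threshold. */
    public static boolean accept(double annOutput, double threshold) {
        return annOutput >= threshold; // threshold in [0, 1]; 1 is strictest
    }
}
```

Raising the spinner value therefore trades more rejected genuine users for fewer falsely accepted ones.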
3.1.1 - How the user interface behaves
When the face detection system does not detect any faces, detects a false positive, or the
face recognition system does not recognize the user, the system displays "Who are you?"
as shown in Figure 3.3 below.
Figure 3.3: How the user interface behaves with detection and no recognition.
When the user's face is detected and recognized, the system will display the user ID and
the recognition rate as shown in Figure 3.4 below.
Figure 3.4: How the user interface behaves with detection and recognition.
3.1.2 - The Add User Dialog
The Add User Dialog is displayed in Figure 3.5 below. This frame is displayed when the
Add User button in the Access Control GUI is pressed, or when the Add User menu item
from the File menu is selected. The location of the Add User Dialog is such that the
camera's output can be clearly viewed. Each interface component of the Add User Dialog,
with its purpose or action, is described in Table 3.1.
Figure 3.5: The Add User dialog.
Add User dialog
Interface component Purpose/Action
User Image panel Displays the captured image.
Capture Image button Captures an image.
ID label and text field The text field where a user identifier should
be entered.
Ok button Adds the user with the specified identifier
that should be recognized. If the ID text
field is empty, the error dialog in Figure 3.6
is displayed. If no image was captured, the
error dialog in Figure 3.7 is displayed.
Cancel button Do not add the user, discard the captured
image and close the Add User dialog.
Table 3.1: The Add User dialog interface components described.
From the Add User dialog, if a user clicks Ok without entering a user ID or capturing an
image, the dialog in Figure 3.6 below is displayed.
Figure 3.6: Error dialog displayed for empty ID text field or no captured image.
From the Add User dialog, if a user clicks Ok without capturing an image, the dialog in
Figure 3.7 below is displayed.
Figure 3.7: Error dialog displayed when no image was captured.
3.1.3 - The Confirm User Delete dialog
The Confirm User Delete dialog is displayed in Figure 3.8 below. This dialog is displayed
when selecting a user in the User List panel and clicking the Remove User button. When
removing a user from the system, the default option is set to No, so that the
administrator doesn't blindly remove a user from the Access Control system.
Figure 3.8: Confirm User Delete dialog displayed when clicking the Remove User button.
Confirm User Delete dialog
Interface component Purpose/Action
Yes button Completely removes the user from the
Access Control system.
No button Do not remove the user, close the
confirmation dialog.
Table 3.2: The Confirm User Delete dialog interface components described.
3.1.4 - The File menu
The complete File menu with the menu items it contains is displayed in Figure 3.9 below.
Each menu item of the File menu with its associated action is described in Table 3.3.
Figure 3.9: The File menu.
The File menu
Menu item Action
Add User Adds a user to the Access Control system.
Exit Exits the Access Control system.
Table 3.3: The File menu items described.
3.1.5 - The Face Detection menu
The complete Face Detection menu with the menu items it contains is displayed in Figure
3.10 below. Each menu item of the Face Detection menu with its associated action is
described in Table 3.4.
Figure 3.10: The Face Detection menu.
The Face Detection menu
Menu item Action
Settings Opens the Face Detection Settings dialog as
displayed in Figure 3.11.
Table 3.4: The Face Detection menu items described.
3.1.7 - The Help menu
The complete Help menu with the menu items it contains is displayed in Figure 3.12. Each
menu item of the Help menu with its associated action is described in Table 3.5.
Figure 3.12: The Help menu.
The Help menu
Menu item Action
User Guide Opens the User's Guide dialog displayed
in Figure 3.13.
About Opens the About dialog displayed in
Figure 3.14.
Table 3.5: The Help menu items described.
3.1.8 - The User Guide dialog
The User's Guide dialog is a modal dialog, displayed in Figure 3.13. This dialog contains
the User's Guide for the Administrator. It is opened when the User Guide menu item from
the Help menu is selected. The User's Guide dialog can be closed by either the close
button at the top right corner or the Ok button at the bottom of the dialog [1].
Figure 3.13: The User's Guide dialog.
3.1.9 - The About dialog
The About dialog is a modal dialog, displayed in Figure 3.14. This dialog contains a
message about the Access Control system. The About dialog can be closed by either the
close button at the top right corner or the OK button at the bottom of the dialog.
Figure 3.14: The About dialog.
Figure 3.15 below displays the webcam that users interact with and that is also used to
capture face images.
Figure 3.15: The webcam that users interact with and that is also used to capture face images.
C H A P T E R 4
OBJECT ORIENTED ANALYSIS (OOA)
In this chapter we apply an object-oriented view to the face detection system. We begin by
providing a detailed description of the objects in the form of a data dictionary. In addition
we provide detailed class diagrams, identifying class attributes and methods. Finally, we
present the relationships between objects.
4.1 - Data dictionary
The data dictionary describes the face detection system objects in detail; each object is
documented. We provide a clear understanding of each object, in terms of the functions
it performs, in the form of a detailed description.

adaboostTrain: This class trains a weak classifier using the AdaBoost machine learning
algorithm presented by Freund & Schapire [8]. Once trained, the weak classifier is saved
to the database of weak classifiers.

buildCascadeClassifier: This class builds a cascade classifier, which achieves increased
performance while radically improving computation time. The key insight is that smaller,
more efficient classifiers are constructed which reject many of the negative sub-windows
while detecting all positive image instances. Simple classifiers are used to reject the
majority of sub-windows before more complex classifiers are called upon to decrease the
false positives [6][7]. The stages in the cascade classifier are constructed by training
classifiers using AdaBoost and then adjusting the threshold to minimize the false
negatives [6][7].

cascadeClassifier: This class represents a complete cascade classifier with all its stages.
The cascade classifier is a chain/array of cascade stages. Recall that each stage in the
cascade classifier is a smaller, more efficient boosted classifier. The form of the cascade
classifier is that of a degenerate decision tree: a positive result from the first classifier
triggers the evaluation of a second classifier, and so on, while a negative result at any
point leads to immediate rejection of the sub-window [6][7].

cascadeStage: This class represents a cascade classifier stage used in the
cascadeClassifier class. The cascade classifier stage is an AdaBoost strong classifier with
a fairly small number of weak classifiers, such that the detector can quickly distinguish
whether an image sub-region is a "Face" or a "Non-face". Each stage in the cascade
classifier is trained by adding weak classifiers until the target detection and false positive
rates are met. The stage threshold is adjusted to accept all face examples in the training
set, while minimizing the false negatives.

feature: This class represents simple features; our detector classifies images based on the
value of simple features. These features, which have also been used by Viola and Jones,
are known as Haar features, or Haar-like features. We use 5 kinds of features. Given that
the base resolution of the detector is 19x19, our detector has an exhaustive set of 67209
features.

integralImage: This class determines the integral image of a given input PGMImage. The
integral image can be considered as a means to quickly compute the rectangle features.
This intermediate representation for the input PGMImage at location (x, y) is the sum of
all the pixels above and to the left of (x, y). Making use of the integral image, any
rectangular sum can be computed in four array references.

integrateMultipleDetections: This class represents a final detection; since the final
detector is insensitive to small changes in translation and scale, multiple detections will
normally occur around each face in a scanned image. This class returns one final face
detection per face by combining clusters of overlapping detections into a single
detection. In addition, this class returns the biggest face detection, which is used by the
face recognition system.

scanDetector: The final detector is scanned across the 320x240 image sequences at
multiple scales and locations. The detector itself is scaled, rather than the image. The
detector is scanned across locations by shifting it by two pixels horizontally and
vertically. The choice of this shift affects both the speed and the accuracy of the detector;
the two-pixel shift gave good results in experiments. In addition, this class passes the
biggest detection to the face recognition system.

trainImage: This class represents a training image for the AdaBoost machine learning
algorithm. Each training image has an integral image that represents the object of
interest, and a type, being positive or negative. The weight of the training image is used
for training the face detector; the weight is used by AdaBoost and is a measure of how
important the training image is.

weakClassifier: This class represents a weak classifier; the classifier is called weak
because it is only expected to classify slightly more than 50% of the training set images
correctly. The features are used to build the weak classifiers: a weak classifier is a
feature, together with a classifier error (how bad the classifier is, tested on a validation
set) and a classifier weight (how good the classifier is, tested on the positive training set).

Table 4-1: Data Dictionary - the objects combined with a brief description of each.
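The scanDetector behaviour in Table 4-1, scaling the detector rather than the image and stepping two pixels at a time, can be sketched as a window enumerator; the class name and the 1.25 scale factor in the example are illustrative assumptions, not the thesis code:

```java
import java.util.ArrayList;
import java.util.List;

public class ScanSketch {
    /** One candidate sub-window: top-left corner plus side length. */
    public record Window(int x, int y, int size) {}

    /**
     * Enumerates the sub-windows a scaled detector visits in a frame:
     * starting from baseSize, the window grows by scaleFactor per level
     * and is shifted `step` pixels horizontally and vertically.
     */
    public static List<Window> windows(int width, int height,
                                       int baseSize, double scaleFactor, int step) {
        List<Window> out = new ArrayList<>();
        for (int size = baseSize; size <= Math.min(width, height);
             size = (int) Math.ceil(size * scaleFactor)) {
            for (int y = 0; y + size <= height; y += step) {
                for (int x = 0; x + size <= width; x += step) {
                    out.add(new Window(x, y, size));
                }
            }
        }
        return out;
    }
}
```

A larger step or scale factor trades accuracy for speed, which matches the remark above that the two-pixel shift affects both.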
4.2 - Class diagrams
The class diagrams contain the name of the class; its attributes as well as the associated
methods of the class. The type of the class attributes, the return types of the methods as
well as the class method parameters.
Figure 4.1: The adaboostTrain class, which trains a weak classifier using the AdaBoost algorithm.
Figure 4.2: The buildCascadeClassifier class, which trains the cascade classifier used by the face detector for face detection.
Figure 4.3: The cascadeClassifier class: the trained cascade classifier used by the face detector for face detection.
Figure 4.4: The cascadeStage class: a smaller, more efficient boosted classifier used by the cascade classifier.
Figure 4.5: The feature class: our detector classifies images based on these features.
Figure 4.6: The integralImage class, which determines the integral image in order to quickly compute feature response values.
Figure 4.8: The scanDetector class, which scans the detector at multiple scales and locations and passes the biggest detection to the face recognition system.
4.3 - Relationship between objects
Figure 4.11 below represents the relationships between the objects, indicating how the
objects interact with each other and how they are related to each other.
Figure 4.11: Relationship between classes
C H A P T E R 5
OBJECT ORIENTED DESIGN (OOD)

This chapter is as close to coding as one can get without actually writing code. It takes the classes from the Object Oriented Analysis deeper into the realm of pseudo-code.
5.1 - Inner details of class methods

This section covers the inner details of the class methods; each method is documented to give a more detailed description of the class object.
Class: adaboostTrain

determineNumPosAndNeg() - Determines the number of negative and positive images the training set consists of.
getTrainingSet() - Gets the AdaBoost training set.
getWeakClassifierCounter() - Returns the number of weak classifiers trained thus far in the weak classifier database.
initializeWeights() - Initializes the weights uniformly over the training data; the sum of the weights of all images in the training set equals 1.
normalizeWeights(java.util.Vector trainingSet) - Normalizes the weights of the training set so that they form a probability distribution: Sum(all weights) = 1.
printWeights(java.util.Vector trainingSet) - Used for debugging.
setupTrainingSet() - Sets the positive and negative images of the training set.
setWeakClassifierCounter(int weakClassifierCounter) - Sets the weak classifier counter.
trainClassifierWithAdaBoost() - Trains a weak classifier using AdaBoost and writes it to the database of weak classifiers.
updateNegativeTrainingImages(java.util.Vector newNegatives) - Updates the negative training images by adding the input vector of negatives to the training set.
updateWeight(weakClassifier weakClassifiers, java.util.Vector trainingSet) - Updates the weights of the training set.

Class: baseLearner

allFeatures(int scale) - Calculates all possible features that can fit into an image of the given width and height.
baseLearner(java.util.Vector trainingSet) - Returns the weak classifier with the lowest training error on the training set.
calculateOptimalThresholdValues(java.util.Vector trainingSet) - Determines the optimal threshold values for all the features.
evaluateError(int featureOptThreshold, double lowestError) - Evaluates the weighted error of a feature over the training set, so that the weak classifier with the lowest error can be chosen.
getAllFeatures() - Gets the features from which a weak classifier is chosen.
getFeatureOptimalThreshold() - Gets the feature optimal threshold values for all the features.
initializeValues() - Initializes the feature response values as well as the feature optimal threshold values.
optimalThreshold() - Determines the optimal threshold over the training set.
removeFeatures(int featureNum) - Removes a feature from being used.
setAllFeatures(java.util.Vector allFeatures) - Sets the features from which a weak classifier is chosen.
setFeatureOptimalThreshold(java.util.Vector<Integer> featureOptimalThreshold) - Sets the feature optimal threshold values for all the features.
totalHScales(int imageWidth, feature evaluateFeature) - Determines the total number of scales at which the input feature can be applied within a given imageWidth.

Class: buildCascadeClassifier

buildCascadeClassifier() - Builds a cascade classifier using the AdaBoost machine learning algorithm.
calculateTotalWeightForStage() - Calculates the total weight for the current stage, from where the stage starts to where the stage ends.
evaluateD() - Calculates the value D, the detection rate of the cascade classifier on the positive training images.
evaluateF(boolean getThresholdValues) - Calculates the value F, the false positive rate of the cascade classifier on a validation set.
initializeThresholdValues() - Initializes the thresholdValues array.
smartTrainWrite(java.util.Vector trainingSet, java.lang.String filename)
stageThreshold() - The initial AdaBoost threshold is designed to yield a low error rate on the training data.

Class: cascadeClassifier

addStage(cascadeStage stage) - Adds a stage to the cascade classifier.
getStage(int stageNum) - Returns the cascade classifier stage at the given stageNum.
readCascade(java.lang.String filename) - Reads the cascade classifier from the file system.
stageEdit(int stageNum, cascadeStage newStage)
toFile() - Used by the writeCascade method.
toString() - Prints the cascade classifier to a string.
totalStages() - Returns the total number of stages in the cascade classifier.
writeCascade(cascadeClassifier cascade, java.lang.String filename) - Saves the cascade classifier to the file system.

Class: cascadeStage

getThreshold() - Gets the threshold of the stage.
getTotalClassifiers() - Gets the total number of weak classifiers in this stage.
getweakClassifierStart() - Gets the start of this stage's weak classifiers in the weak classifier database.
increaseTotalClassifiers() - Increases the total number of weak classifiers in this stage.
setThreshold(double threshold) - Sets the threshold of the stage.
setTotalClassifiers(int weakClassifierTotal) - Sets the total number of weak classifiers in this stage.
setweakClassifierStart(int weakClassifierStart) - Sets the start of this stage's weak classifiers in the weak classifier database.

Class: feature

calculateFeature(int[][] integralImage, int initialScale, int currentScale, int x, int y) - Calculates the feature response value of this feature on the input integral image.
getHeight() - Gets the height of the feature.
getHeightScale() - Gets the heightScale of the feature.
getOptimalThreshold() - Gets the optimal threshold of the feature.
getWidth() - Gets the width of the feature.
getWidthScale() - Gets the widthScale of the feature.
getX() - Gets the x (top-left) column location of the feature.
getY() - Gets the y (top-left) row location of the feature.
I(int[][] integralImage, int xCoordinate, int yCoordinate) - Returns 0 when x = -1 or y = -1, following the condition I(-1, y) = I(x, -1) = I(-1, -1) = 0; otherwise returns the integral image value at x and y.
setHeight(int height) - Sets the height of the feature.
setHeightScale(int heightScale) - Sets the heightScale of the feature.
setOptimalThreshold(int optimalThreshold) - Sets the optimal threshold of the feature.
setWidth(int width) - Sets the width of the feature.
setWidthScale(int widthScale) - Sets the widthScale of the feature.
setX(int xCoordinate) - Sets the x (top-left) column location of the feature.
setY(int yCoordinate) - Sets the y (top-left) row location of the feature.
toString() - Prints the feature to a string.

Class: integralImage

I(int x, int y) - Returns 0 when x = -1 or y = -1, following the condition I(-1, y) = I(x, -1) = I(-1, -1) = 0; otherwise returns the integral image value at x and y.
integralImage(PGMImage pgm) - Takes a PGM image as input and determines the corresponding integral image.
printIntegralImage(int[][] integralImage) - Used for debugging.

Class: integrateMultipleDetections

biggestDetection() - Returns the biggest detection of all the detections.
clusterizeDetections() - Puts the detections into clusters; a cluster is defined as a set of detections which overlap.
integratedDetections(boolean print) - Each partition yields a single final detection; the corners of the final bounding region are the average of the corners of all detections in the set.
integrateMultipleDetections(java.util.Vector multipleDetections, byte[] outData, int cy, int cY, int cx, int cX, int lineStride, int pixelStride) - Takes as input a Vector of face detections.
printDetection(detectionWindow detectWindow) - Takes a detection as input and prints a white border around the detected face.
Class: scanDetector

loadCascade() - Loads the cascade classifier from the file system.
scanDetector(PGMImage inputImage, byte[] outData, int cy, int cY, int cx, int cX, int lineStride, int pixelStride) - Runs the detector (cascade classifier) over the camera output images.

Class: trainImage

getIntegralImage() - Gets the integral image for this trainImage.
getType() - Gets the type of an image: whether it is a positive or a negative image.
getWeight() - Gets the weight of an image in the training database over the training set.
setIntegralImage(int[][] image) - Sets the integral image for this trainImage.
setType(int type) - Sets the type of an image: positive or negative.
setWeight(double weight) - Sets the weight of an image in the training database over the training set.

Class: weakClassifier

determineAndSetClassifierWeight(double error) - Determines and sets the weight of the classifier, since hypothesis weight = 1/2 ln((1 - error)/error).
getClassifierError() - Gets the error of this classifier.
getClassifierFeature() - Gets the feature of the weak classifier.
getClassifierWeight() - Gets the weight of the classifier.
readWeakClassifier(java.lang.String filename) - Reads the weakClassifier from a file.
setClassifierError(double classifierError) - Sets the error for this classifier.
setClassifierFeature(feature classifierFeature) - Sets the feature of the weak classifier.
weakClassifiertoString() - Prints the weak classifier to a string.
writeWeakClassifier(weakClassifier weakclassifier, java.lang.String filename) - Writes the weakClassifier to a file.

Table 5-1: Inner details of the system classes.
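The integralImage class in Table 5-1 exists to make feature responses cheap to compute. The following sketch is hypothetical illustration code, not the project's implementation: once the integral image is built, the sum of any rectangular region costs only four lookups, which is what allows feature response values to be computed quickly at any scale.

```java
// Hypothetical sketch: building an integral image and summing any
// rectangle of pixels in O(1), the operation behind fast feature evaluation.
public class IntegralImageSketch {

    // ii[y][x] = sum of all pixels at or above-left of (x, y).
    static int[][] build(int[][] img) {
        int h = img.length, w = img[0].length;
        int[][] ii = new int[h][w];
        for (int y = 0; y < h; y++)
            for (int x = 0; x < w; x++)
                ii[y][x] = img[y][x]
                        + (x > 0 ? ii[y][x - 1] : 0)
                        + (y > 0 ? ii[y - 1][x] : 0)
                        - (x > 0 && y > 0 ? ii[y - 1][x - 1] : 0);
        return ii;
    }

    // Boundary convention from the I() method above:
    // I(-1, y) = I(x, -1) = I(-1, -1) = 0.
    static int at(int[][] ii, int x, int y) {
        return (x < 0 || y < 0) ? 0 : ii[y][x];
    }

    // Sum of pixels in the rectangle with top-left (x0, y0)
    // and bottom-right (x1, y1), inclusive: four lookups.
    static int rectSum(int[][] ii, int x0, int y0, int x1, int y1) {
        return at(ii, x1, y1) - at(ii, x0 - 1, y1)
             - at(ii, x1, y0 - 1) + at(ii, x0 - 1, y0 - 1);
    }
}
```

A two-rectangle feature response, for example, is then just the difference of two such rectSum calls.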
5.2 - Pseudo Code

5.2.1 - Scan Detector

Recall that the detector is scanned at all scales and locations across the image, rather than rescaling the image itself. The pseudo-code below scans the 320x240 image with the detector at its base (initial) resolution of 19x19 at all locations in the image; the scale of the detector is then increased and the process is repeated until the image has been scanned at all scales.
windowWidth = 320 ; windowHeight = 240 ;
width = 19 ; height = 19 ;

for (all possible scales at which a width*height sub-window can fit into windowWidth*windowHeight) {
    scanDetectorWithSubwindow(width, height) ;
    width++ ;
    height++ ;
}

function scanDetectorWithSubwindow(width, height)
    for (h = 0 to h < windowHeight)
        for (w = 0 to w < windowWidth)
            if (w + width < windowWidth AND h + height < windowHeight)
                classifySubWindow(w + width, h + height)
            endIf
        endFor
    endFor
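The pseudo-code above could be rendered in Java roughly as follows. This is a sketch under stated assumptions, not the project's actual code: classifySubWindow is a placeholder for the cascade evaluation, and the sub-window counting is added purely for illustration.

```java
// Hypothetical Java rendering of the scanning pseudo-code: the detector
// window grows from its 19x19 base size and is slid over every location
// of the 320x240 frame at each scale.
public class ScanDetectorSketch {
    static final int WINDOW_WIDTH = 320, WINDOW_HEIGHT = 240;

    // Placeholder for the cascade classifier evaluation on one sub-window.
    static void classifySubWindow(int right, int bottom) { /* cascade goes here */ }

    // Scans every scale; returns the number of sub-windows examined.
    static long scanAllScales() {
        long examined = 0;
        for (int width = 19, height = 19;
             width <= WINDOW_WIDTH && height <= WINDOW_HEIGHT;
             width++, height++) {
            examined += scanDetectorWithSubwindow(width, height);
        }
        return examined;
    }

    // Slides one width x height sub-window over all valid locations.
    static long scanDetectorWithSubwindow(int width, int height) {
        long count = 0;
        for (int h = 0; h < WINDOW_HEIGHT; h++)
            for (int w = 0; w < WINDOW_WIDTH; w++)
                if (w + width < WINDOW_WIDTH && h + height < WINDOW_HEIGHT) {
                    classifySubWindow(w + width, h + height);
                    count++;
                }
        return count;
    }
}
```

Counting the examined sub-windows makes it clear why the cascade structure matters: millions of windows are tested per frame, so most must be rejected after only a few feature evaluations.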
C H A P T E R 6
CODE DOCUMENTATION
The full code documentation is not contained in this document, due to the number of pages it covers. It can, however, be found on the accompanying compact disc (CD).
In the code documentation:

Every class and class method is described using in-line comments or a brief description of the algorithm and its workings. The javadoc web pages make it easy to browse the code documentation. Where applicable, we note any caveats: things that could go wrong or things that the code does not address.
C H A P T E R 7
TESTING DOCUMENT
This chapter describes how we tested our system. The system has been tested on the
MIT+CMU frontal face test set. The results of the face detector are shown below.
Test set: MIT+CMU frontal face test set (images collected at CMU and MIT).
Correct detections: 275 out of 472 images.
Detection rate: 58%.
Figure 7.1 below displays a false positive: the system detects the following sub-window as a face when it is not one.

Figure 7.1: False positive detection
C H A P T E R 8
USER'S GUIDE

This chapter tells a user how to use the Access Control system. It may also be used by a programmer as a guide to improve or edit the system. It describes the system requirements for the Access Control system, the project directory structure, and how to run the system.
8.1 - System requirements
All the requirements to set up, run and edit the Access Control system are contained in
Table 8.1.
System requirements
Hardware:
- A personal computer (PC) that can satisfy the software and webcam requirements.
- A webcam.
Software:
- A Microsoft Windows based operating system, available from Microsoft Corporation.
- Sun Java Runtime Environment (JRE) and Java Development Kit (JDK) 6u2 (Update 2), available from Sun Microsystems.
- Java Media Framework 2.1.1e, available from Sun Microsystems.
- NetBeans IDE 6.5, available from www.netbeans.org.
Table 8.1: The system requirements to set up, run and edit the Access Control system.
8.2 - The Access Control System project directory
The Access Control System project directory is displayed in Figure 8.1 below. Table 8.2
explains the directory and its contents.
Figure 8.1: The Access Control System's project directory structure.
Directory: Contents/Description
Access_Control_System: The main project directory, which contains all project directories. It also contains two files: AccessControl.jar, an executable jar file that runs the Access Control system, and AccessControlJavadocs.html, a link to the javadoc index.html file that documents the source code.
Access_Control_System\UserGuide: Contains the HTML User Guide file used by the Access Control system.
Access_Control_System\log: Logs all Access Control activity. It contains a directory named YEAR-MONTH-DAY for each day, which holds that day's log files. The log files are named by the hour and minute at which the system was terminated, e.g. HOUR-MINUTE.log.
Access_Control_System\train: Contains the training images used for training the face detector, as well as the weak classifiers and cascade classifier the detector uses for face detection.
Access_Control_System\dist\javadoc: Contains the generated javadoc files.
Access_Control_System\src: Contains all the source code used by the Access Control system.
Access_Control_System\src\accessControlSystem\resources: Contains all the Access Control resources.
Access_Control_System\src\faceDetection: The source code directory for the faceDetection package.
Table 8.2: The Access Control system project directory and contents explained.
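The log-file naming described in Table 8.2 can be sketched as follows; the method name and the use of the modern java.time API are assumptions made for this illustration, not the actual implementation.

```java
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;

// Hypothetical sketch of how the log-file path described in Table 8.2
// could be derived: a YEAR-MONTH-DAY directory holding an
// HOUR-MINUTE.log file for each time the system was terminated.
public class LogPathSketch {

    static String logPath(LocalDateTime terminatedAt) {
        // Daily directory, e.g. "2009-11-02"
        String dir = terminatedAt.format(DateTimeFormatter.ofPattern("yyyy-MM-dd"));
        // Per-termination file, e.g. "14-30.log"
        String file = terminatedAt.format(DateTimeFormatter.ofPattern("HH-mm")) + ".log";
        return "log/" + dir + "/" + file;
    }

    public static void main(String[] args) {
        // prints log/2009-11-02/14-30.log
        System.out.println(logPath(LocalDateTime.of(2009, 11, 2, 14, 30)));
    }
}
```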
8.3 - Running the Access Control system
Table 8.3 below describes the steps necessary to run the Access Control system.
Running the Access Control system
Step 1: Make sure that the system requirements in Table 8.1 are met.
Step 2: Make sure that a video capture device is connected to the PC.
Step 3: Double-click the executable jar file named AccessControl.jar to run the Access Control system.
Table 8.3: Running the Access Control system.
8.4 - Complete user interface
Figure 8.2 below displays the main graphical user interface frame for our system titled
Access Control.
Figure 8.2: Complete user interface
User Image panel
The User Image panel displays the user image of the user currently selected in the User List
panel.
User List panel
The User List panel lists the authorized users the Access Control system can recognize.
Search field
The search field can be used to search for a user in the face recognition system. This
feature is very useful when the face recognition system has hundreds of users.
The search function matches the search text as a substring of the username and
lists all matched usernames in alphabetical order, displaying the first matched user's
image in the User Image panel.
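The search behaviour described above can be sketched as follows; this is an illustrative sketch under the stated behaviour (substring match, alphabetical order), not the actual implementation, and the class and method names are assumptions.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.Collections;
import java.util.List;

// Illustrative sketch of the search function: usernames containing the
// search text as a substring are returned in alphabetical order.
public class UserSearchSketch {

    static List<String> search(List<String> usernames, String query) {
        List<String> matches = new ArrayList<String>();
        for (String name : usernames) {
            // Case-insensitive substring match on the username
            if (name.toLowerCase().contains(query.toLowerCase())) {
                matches.add(name);
            }
        }
        Collections.sort(matches); // list matches alphabetically
        return matches;
    }

    public static void main(String[] args) {
        List<String> users = Arrays.asList("dmitri", "desmond", "james", "sandile");
        // prints [desmond, dmitri, sandile]
        System.out.println(search(users, "d"));
    }
}
```

The first entry of the returned list would then be used to select the image shown in the User Image panel.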
Camera output
The camera output displays the camera's output. It is used for capturing images when
adding users to the face recognition system, and for monitoring current user login activity.
Add user button
The Add User button adds an authorized user to the Access Control system. When the
Add User button is clicked, the Add User dialog shown in Figure 8.6 is displayed.
Remove user button
The Remove User button is only enabled when there are users in the system. It
removes a user from the Access Control system. When the Remove User button is
clicked, the Confirm User Delete dialog in Figure 8.9 is displayed.
Recognition output
The Access Control system displays the output of the recognized user below the camera
output. When a user is not recognized by the Access Control system, the system displays
"Who are you?"; otherwise the system displays the user ID of the recognized user and the
rate of the face recognition.
Log panel
When a user is detected by the face detection system and recognized by the face
recognition system, the Access Control system logs the user's information to the log panel.
The log panel records the user and the time at which the user logged in; the log file is
saved with the current date every time the user terminates the system.
Acceptance threshold
The face recognition system has a threshold at which users are recognized, ranging
from 0, the lowest and most lenient value, to 1, the highest and most strict value at
which the recognition operates. The Acceptance threshold spinner is used to adjust the
acceptance threshold value against which the ANN's output is compared [1].
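The threshold check described above amounts to a single comparison; the sketch below illustrates it under the assumption that the ANN's output lies in [0, 1], with method names invented for the illustration.

```java
// Minimal sketch of the acceptance-threshold check: the ANN's output
// (assumed to be in [0, 1]) is compared against the administrator-set
// acceptance threshold.
public class ThresholdSketch {

    static boolean isRecognized(double annOutput, double acceptanceThreshold) {
        // Higher thresholds are stricter: the ANN must be more confident.
        return annOutput >= acceptanceThreshold;
    }

    public static void main(String[] args) {
        System.out.println(isRecognized(0.82, 0.7)); // lenient: recognized
        System.out.println(isRecognized(0.82, 0.9)); // strict: "Who are you?"
    }
}
```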
8.4.1 - How the user interface behaves
When the face detection system does not detect any faces, detects a false positive, or the
face recognition system does not recognize the user, the system displays "Who are
you?", as displayed in Figure 8.4 below.
Figure 8.4: How the user interface behaves with detection and no recognition.
When the user's face is detected and recognized, the system displays the user ID and
the recognition rate, as displayed in Figure 8.5 below.
Figure 8.5: How the user interface behaves with detection and recognition.
8.4.2 - The Add User dialog
The Add User dialog is displayed in Figure 8.6 below. This dialog is displayed when the
Add User button in the Access Control GUI is pressed or when the Add User menu item
from the File menu is selected. The Add User dialog is positioned so that the
camera's output can be clearly viewed. Each interface component of the Add User dialog,
with its purpose or action, is described in Table 8.4.
Figure 8.6: The Add User dialog.
Add User dialog
Interface component: Purpose/Action
User Image panel: Displays the captured image.
Capture Image button: Captures an image.
ID label and text field: The text field where a user identifier should be entered.
Ok button: Adds the user with the specified identifier so that they can be recognized. If the ID text field is empty, the error dialog in Figure 8.7 is displayed. If no image was captured, the error dialog in Figure 8.8 is displayed.
Cancel button: Does not add the user; discards the captured image and closes the Add User dialog.
Table 8.4: The Add User dialog interface components described.
From the Add User dialog, if a user clicks Ok without entering a user ID or capturing an
image, the dialog in Figure 8.7 below is displayed.
Figure 8.7: Error dialog displayed for an empty ID text field or no captured image.
If a user clicks Ok without capturing an image, the dialog in Figure 8.8 below is
displayed.
Figure 8.8: Error dialog displayed when no image was captured.
8.4.3 - The Confirm User Delete dialog
The Confirm User Delete dialog is displayed in Figure 8.9 below. This dialog is displayed
when a user is selected in the User List panel and the Remove User button is clicked. When
removing a user from the system, the default option is set to No, so that the
administrator does not blindly remove a user from the Access Control system.
Figure 8.9: Confirm User Delete dialog displayed when clicking the Remove User button.
Confirm User Delete dialog
Interface component: Purpose/Action
Yes button: Completely removes the user from the Access Control system.
No button: Does not remove the user; closes the confirmation dialog.
Table 8.5: The Confirm User Delete dialog interface components described.
8.4.4 - The File menu
The complete File menu with its menu items is displayed in Figure 8.10 below.
Each menu item of the File menu, with its associated action, is described in Table 8.6.
Figure 8.10: The File menu.
The File menu
Menu item: Action
Add User: Adds a user to the Access Control system.
Exit: Exits the Access Control system.
Table 8.6: The File menu items described.
8.4.5 - The Face Detection menu
The complete Face Detection menu with its menu items is displayed in Figure
8.11 below. Each menu item of the Face Detection menu, with its associated action, is
described in Table 8.7.
Figure 8.11: The Face Detection menu.
The Face Detection menu
Menu item: Action
Settings: Opens the Face Detection Settings dialog as displayed in Figure 8.12.
Table 8.7: The Face Detection menu items described.
8.4.6 - The Face Detection Settings dialog
The Face Detection Settings dialog is displayed in Figure 8.12 below. This dialog is
displayed when the Settings menu item from the Face Detection menu is selected.
In this dialog the administrator can set the scales at which face detection should
operate. The face detection system tries to locate faces at the starting scale, then
increases the scale by the step size until it reaches the final scale. The default starting
scale is 19, so the face detection system first tries to locate all 19x19 faces in the camera
output. The system then increases this resolution by the step size, which is 5 by
default, and tries to locate all 24x24 faces in the camera output, and so on until it
reaches the final scale, which is set to 240x240 by default.
Choosing a large starting scale dramatically improves the performance of the face
detection system, and thus of the Access Control system, because a 320x240 window
contains many more small scales than large ones.
Important
These settings strongly affect the performance of the Access Control system. The more
scales the face detection system has to cover, the slower the system runs.
The starting and final scales also determine how close to the camera a user must stand
in order to be detected by the face detection system, so much care should be taken
when adjusting these settings.
Figure 8.12: The Face Detection Settings dialog.
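The scale schedule described above can be sketched as follows; the class and method names are assumptions made for this illustration, but the arithmetic follows the defaults stated above (start 19, step 5, final 240).

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the scale schedule described above: starting at 19x19 and
// growing by the step size until the final scale is reached, so the
// detector scans 19x19, 24x24, 29x29, ... windows.
public class ScaleSchedule {

    static List<Integer> scales(int start, int step, int finalScale) {
        List<Integer> result = new ArrayList<Integer>();
        for (int s = start; s <= finalScale; s += step) {
            result.add(s);
        }
        return result;
    }

    public static void main(String[] args) {
        List<Integer> s = scales(19, 5, 240);
        System.out.println(s.subList(0, 3)); // [19, 24, 29]
        System.out.println(s.size() + " scales to cover");
    }
}
```

Raising the starting scale shortens this list, which is why it improves performance so markedly.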
8.4.7 - The Help menu
The complete Help menu with its menu items is displayed in Figure 8.13. Each
menu item of the Help menu, with its associated action, is described in Table 8.8.
Figure 8.13: The Help menu.
The Help menu
Menu item: Action
User Guide: Opens the User's Guide dialog displayed in Figure 8.14.
About: Opens the About dialog displayed in Figure 8.15.
Table 8.8: The Help menu items described.
8.4.8 - The User's Guide dialog
The User's Guide dialog is a modal dialog, displayed in Figure 8.14, that contains the
User's Guide for the administrator. It is opened when the User's Guide menu item
from the Help menu is selected. The User's Guide dialog can be closed by either the
close button at the top right corner or the Ok button at the bottom of the dialog [1].
Figure 8.14: The User's Guide dialog.
8.4.9 - The About dialog
The About dialog is a modal dialog, displayed in Figure 8.15, that contains a
message about the Access Control system. The About dialog can be closed by either the
close button at the top right corner or the OK button at the bottom of the dialog.
Figure 8.15: The About dialog.
Figure 8.16 below displays the webcam that users interact with and that is also used to
capture face images.
Figure 8.16: The webcam that users interact with and that is also used to capture face images.
CHAPTER 9
CONCLUSION
In this mini-thesis, we discussed the implementation of a face detection system to be
used for access control. The focus was thus on implementing a face detection system good
enough to be used for access control; access control systems normally use video cameras
that deliver image data of poor quality and that also contain much noise [3]. The focus was
also on developing a real-time face detection system. The Viola and Jones approach was
best suited to our requirements, so we implemented a detector strongly based on the Viola
and Jones detector.
BIBLIOGRAPHY
[1] A. Jorgensen. AdaBoost and Histograms for Fast Face Detection, 2006.
[2] E. Hjelmås and B.K. Low. Face Detection: A Survey. Computer Vision and Image
Understanding, vol. 83, no. 3, pp. 236-274, Sept. 2001.
[3] D. van Wyk. http://www.cs.uwc.ac.za/index.php/Honours-2006/Desmond-Van-Wyk.html
[online], November 2006.
[4] J. Meynet. Fast Face Detection Using AdaBoost, July 2003.
[5] R. Lienhart and J. Maydt. An Extended Set of Haar-like Features for Rapid Object
Detection. In IEEE ICIP 2002, vol. 1, pp. 900-903.
[6] P. Viola and M. Jones. Rapid Object Detection Using a Boosted Cascade of
Simple Features. In Proc. IEEE Conference on Computer Vision and Pattern Recognition, pages
511-518, Dec 2001.
[7] P. Viola and M. Jones. Robust Real-time Object Detection. IEEE ICCV Workshop on
Statistical and Computational Theories of Vision, July 2001.
[8] Y. Freund and R.E. Schapire. A Decision-theoretic Generalization of On-line Learning
and an Application to Boosting. In Proceedings of the Second European Conference on Computational
Learning Theory, pages 23-37. Springer-Verlag, 1995.