EMOTION RECOGNITION USING FACIAL EXPRESSIONS WITH ACTIVE APPEARANCE MODELS

Matthew S. Ratliff
Department of Computer Science

University of North Carolina Wilmington
601 South College Road
Wilmington, NC, USA
[email protected]

Eric Patterson
Department of Computer Science

University of North Carolina Wilmington
601 South College Road
Wilmington, NC, USA
[email protected]

ABSTRACT
Recognizing emotion using facial expressions is a key element in human communication. In this paper we discuss a framework for the classification of emotional states based on still images of the face. The technique we present involves the creation of an active appearance model (AAM) trained on face images from a publicly available database to represent the shape and texture variation key to expression recognition. Parameters from the AAM are used as features for a classification scheme that is able to successfully identify faces related to the six universal emotions. The results of our study demonstrate the effectiveness of AAMs in capturing the important facial structure for expression identification and also help suggest a framework for future development.

KEY WORDS
Emotion, Facial Expression, Expression Recognition, Active Appearance Model

1 Introduction

Facial expressions provide a key mechanism for understanding and conveying emotion. Even the term “interface” suggests the primary role of the face in communication between two entities. Studies have shown that interpreting facial expressions can significantly alter the interpretation of what is spoken as well as control the flow of a conversation [31]. Mehrabian has suggested that the ability of humans to interpret emotions is very important to effective communication, accounting for up to 93% of the communication in a normal conversation [23]. For ideal human-computer interfaces (HCI), we would desire that machines have this capability as well. Computer applications could communicate better by changing responses according to the emotional state of human users in various interactions.

In order to work toward these capabilities, efforts have recently been devoted to integrating affect recognition into human-computer applications [20]. Applications exist in both emotion recognition and agent-based emotion generation [17]. The work presented in this paper explores the recognition of expressions, although the same research can be useful for synthesizing facial expressions to convey emotion [17] [18]. By creating machines that can understand emotion, we enhance the communication that exists between humans and computers. This would open a variety of possibilities in robotics and human-computer interfaces, such as devices that warn a drowsy driver, attempt to placate an angry customer, or better meet user needs in general. The field of psychology has played an important role in understanding human emotion and in developing concepts that may aid these HCI technologies. Ekman and Friesen have been pioneers in this area, helping to identify six basic emotions (anger, fear, disgust, joy, surprise, sadness) that appear to be universal across humanity [12]. In addition, they developed a scoring system used to systematically categorize the physical expression of emotions, known as the Facial Action Coding System (FACS) [13]. FACS has been used in a variety of studies and applications and has found its way into many face-based computer technologies. The study of the facial muscle movements classified by FACS in creating certain expressions was used to inform the choice of landmarks for active appearance model (AAM) shape parameters in our work.

Our work thus far has focused on developing a framework for emotion recognition based on facial expressions. Facial images representing the six universal emotions mentioned previously, as well as a neutral expression, were labeled in a manner intended to capture expressions. An AAM was built using training data and tested on a separate dataset. Test face images were then classified as one of the six emotion-based expressions or a neutral expression using the AAM parameters as classification features. The technique achieved a high level of performance in classifying these different facial expressions based on still images. This paper presents a summary of current contributions to this area of research, discusses our approach to the problem, and details techniques we plan to pursue for this work.

2 Previous Work

Facial expressions provide the building blocks with which to understand emotion. In order to use facial expressions effectively, it is necessary to understand how to interpret them, and it is also important to study what others have done in the past. Fasel and Luettin performed an in-depth study in an attempt to understand the sources that drive expressions [15]. Their results indicate that using FACS may incorporate other sources of emotional stimulus, including non-emotional mental and physiological aspects involved in generating emotional expressions.

Much of expression research to date has focused on understanding how underlying muscles move to create expressions [15] [18]. For example, studies have shown that movement of the nasolabial furrow, in addition to movement of the eyes and eyebrows, is a primary indicator of disgust [3]. FACS, as well as EMFACS, goes into great detail about the poses of the face, and both have served as important references and systems of study for other work [13] [16]. Though it was originally used for the analysis of facial movement by human observers, FACS has been adopted by the animation and recognition communities. (EMFACS is a further simplification that focuses only on the facial action units that contribute emotional information.) Cohn et al. report that facial expressions can differ somewhat from culture to culture for a particular emotion, but the similarities in expression for an emotion are usually strong enough to overcome these cultural differences [9].

Much previous work has used FACS as a framework for classification. In addition, previous studies have traditionally taken two approaches to emotion classification, according to Fasel and Luettin [15]: a judgment-based approach and a sign-based approach. The judgment-based approach defines the categories of emotion in advance, such as the traditional six universal emotions. The sign-based approach uses a FACS-style system, encoding action units in order to categorize an expression based on its constituents. This approach assumes no categories but rather assigns an emotional value to a face using a combination of the key action units that create the expression.

2.1 Data Collection

One challenge of research in expression and emotion recognition is the collection of suitable data for training and testing systems. Several databases have been developed for facial expressions. Some are small and have limited accessibility, and most have used actors to portray emotions in video recordings [27] [1] [24] [18]. The use of professional actors may fail to capture expression that completely and accurately represents underlying emotional content. There is likely a difference between artificially posed expressions and those based on true underlying emotion; one feature that suggests this is the lack of contraction of the orbicularis oculi during artificial smiles [12]. Fasel and Luettin recognized that posed facial expressions tend to be exaggerated and easier to recognize than spontaneous expressions [15]. Walhoff, however, has developed and released a database constructed in an effort to elicit genuine responses [30]. This is the database that we have used so far, and it is discussed in further detail in the next section of this paper. Direct comparison of results on specific databases has also been somewhat limited, and it would be useful to encourage comparisons of techniques on the same data sets.

2.2 Feature Extraction Methods

In order to recognize expressions of the face, a useful feature scheme and extraction method must be chosen. One of the most famous techniques used in face recognition and related areas is the ‘eigenfaces’ approach developed by Turk and Pentland [29]. An average face and a set of basis functions for ‘face-space’ are constructed using principal components analysis. Although a successful method for simple face recognition, this technique lacks the feature specificity of underlying muscle movements appropriate to facial expressions.
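
For readers unfamiliar with the eigenface construction, it reduces to a principal components analysis of vectorized face images. The following is a minimal illustrative sketch, not the cited authors' implementation; the random array standing in for a face set, the image size, and the number of components are assumptions made only for the example.

```python
import numpy as np
from sklearn.decomposition import PCA

# Illustrative stand-in for a training set: 100 grayscale faces of 64x64 pixels,
# each flattened into a 4096-dimensional row vector (a real system would load images).
rng = np.random.default_rng(0)
faces = rng.random((100, 64 * 64))

# PCA yields the average face plus a set of orthogonal basis images
# ("eigenfaces") spanning the principal modes of variation.
pca = PCA(n_components=20)
weights = pca.fit_transform(faces)        # projection of each face into face-space
mean_face = pca.mean_.reshape(64, 64)     # the average face
eigenfaces = pca.components_.reshape(-1, 64, 64)

# A new face is described (and approximately reconstructed) by its weights.
new_face = rng.random((1, 64 * 64))
coords = pca.transform(new_face)
reconstruction = pca.inverse_transform(coords).reshape(64, 64)
```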

Other feature extraction methods have been explored, including image-processing techniques such as Gabor filters and wavelets [21]. Bartlett used a similar approach for feature extraction, employing a cascade of classifiers to locate the best filters [1]. Michel and el Kaliouby use a method similar to our approach for extracting features [24]. Their method employs a feature-point tracking system similar to active shape models. According to their research, Cohen suggests that feature-point tracking shows on average 92% agreement with manual FACS coding by professionals [7]. Shape information of some kind is likely one of the most important types of data to include in any feature method.
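
As an illustration of the Gabor-filter style of feature extraction mentioned above, a generic filter-bank sketch follows. It is not the specific cascade used in the cited work; the kernel size, orientations, wavelengths, and pooling are arbitrary choices for the example.

```python
import numpy as np
import cv2

# Illustrative face crop; a real pipeline would use an aligned grayscale face image.
rng = np.random.default_rng(0)
face = (rng.random((64, 64)) * 255).astype(np.uint8)

# Bank of Gabor kernels at several orientations and wavelengths.
features = []
for theta in np.linspace(0, np.pi, 4, endpoint=False):   # 4 orientations
    for lambd in (4.0, 8.0):                              # 2 wavelengths
        kernel = cv2.getGaborKernel((15, 15), sigma=3.0, theta=theta,
                                    lambd=lambd, gamma=0.5)
        response = cv2.filter2D(face, cv2.CV_32F, kernel)
        # Pool each response map into a small, fixed-length descriptor.
        features.append(response.mean())
        features.append(response.std())

feature_vector = np.asarray(features)  # 4 orientations x 2 wavelengths x 2 statistics
```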

Image-based methods have been applied in many areas of facial computing. One of the most successful recent techniques, though, incorporates both shape and texture information from facial images. The AAM, developed initially by Cootes and Taylor [11], has shown strong potential in a variety of facial recognition technologies but, to our knowledge, has yet to be used in recognizing emotions. It has the ability to aid in initial face-search algorithms and in extracting important information from both the shape and the texture (wrinkles, nasolabial lines, etc.) of the face that may be useful for communicating emotion.
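
The core of the AAM parameterization can be summarized as two linear models (shape and texture) tied together by a third. A minimal sketch of that construction follows; synthetic arrays stand in for aligned landmarks and shape-normalized textures, and the component counts and shape weighting are illustrative assumptions rather than values from this paper.

```python
import numpy as np
from sklearn.decomposition import PCA

# Illustrative stand-ins: per-image landmark coordinates (already aligned) and
# shape-normalized texture samples.
rng = np.random.default_rng(0)
n_images, n_landmarks, n_pixels = 120, 113, 2000
shapes = rng.random((n_images, n_landmarks * 2))    # x,y coordinates, flattened
textures = rng.random((n_images, n_pixels))         # sampled gray levels

# Separate linear models of shape and texture variation.
shape_pca = PCA(n_components=15).fit(shapes)
tex_pca = PCA(n_components=30).fit(textures)
b_s = shape_pca.transform(shapes)
b_g = tex_pca.transform(textures)

# Weight shape parameters so shape and texture are commensurate (one common
# heuristic), then learn a combined "appearance" model over the concatenation.
w = np.sqrt(tex_pca.explained_variance_.sum() / shape_pca.explained_variance_.sum())
combined = np.hstack([w * b_s, b_g])
app_pca = PCA(n_components=25).fit(combined)

# The appearance parameters c describe each face's shape and texture jointly;
# parameters of this kind are what serve as classification features.
c = app_pca.transform(combined)
```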

2.3 Classification Schemes

Several classification schemes have been used thus far, including support vector machines, fuzzy-logic systems, and neural networks. For instance, Eckschlager et al. used an ANN to identify a user’s emotional state based on certain pre-defined criteria [10]. Their NEmESys system attempts to predict the emotional state of a user by obtaining knowledge about things that commonly cause changes in behavior. By giving the computer prior information such as eating habits, stress levels, and sleep habits, the ANN predicts the emotional state of the user and can change its responses accordingly. (One weakness of this approach is that it requires the user to fill out a questionnaire providing the system with the information in advance.) While this system is unique, it does not incorporate any interpretation of facial emotion, which has been identified as one of the key sources of emotional content [12] [18] [9]. Another approach used a fuzzy, rule-based system to match facial expressions and returned a probable emotion based on the rules of the system [25].

Several groups have used support vector machines (SVMs) as a classification mechanism [21] [1] [24]. In most cases SVMs yield good separation of the clusters by projecting the data into a higher dimension. Michel and el Kaliouby indicated 93.3% successful classification when aided with AdaBoost for optimal filter selection [24]. Sebe, Lew, Cohen, Garg, and Huang [27] offer a naive Bayes approach to emotion recognition based on a probability model of facial features given a corresponding emotional class. The work presented in this paper does not focus on classification schemes and uses a simple Euclidean-distance classification.
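
For comparison with the Euclidean scheme used later in this paper, an SVM over the same kind of parameter vectors takes only a few lines with a standard library. The sketch below is illustrative only; the feature dimensionality, kernel, and synthetic labels are assumptions, not a reproduction of any cited system.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Synthetic stand-ins for AAM-style parameter vectors and their emotion labels.
rng = np.random.default_rng(0)
X = rng.random((200, 25))            # 200 faces, 25 appearance parameters each
y = rng.integers(0, 7, size=200)     # 7 classes: six emotions plus neutral

# An RBF-kernel SVM; the score is meaningless for random data and shown only
# to illustrate the evaluation pattern.
clf = SVC(kernel="rbf", C=1.0, gamma="scale")
scores = cross_val_score(clf, X, y, cv=5)
print("mean cross-validated accuracy:", scores.mean())
```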

3 Techniques in Our Work

In this work our main goal was to study the effectiveness of using AAMs to build a robust framework for recognizing expressions indicative of emotion in still images of the human face. We present a method for feature extraction and classification that yields successful results and builds a framework for future development.

3.1 Background

In order to develop an emotion classification system using still images, several issues must be resolved. One of the first and most important challenges is acquiring appropriate data for both training and testing. As discussed earlier, we chose to use the facial expression database developed by Walhoff [30], known as “FEEDTUM.” This database contains still images and video sequences of eighteen test subjects, both male and female, of varying age. Rather than hiring actors to artificially create or mimic emotional responses, this database was developed with the intent of actually eliciting the required emotions. Using a camera mounted on a computer screen, subjects were shown various movie clips intended to trigger an emotional response. No prior information about what was to be shown was given, in an attempt to elicit genuine responses. The database is organized by category using the six basic emotions [12]. In our experiment we create classification states for each of these basic emotions and also for neutral facial expressions.

In addition to acquiring training data, a method for feature extraction from the training data is also needed. AAMs are well suited to the task of handling various poses and expressions and were thus chosen for this work.

Building an appearance model entails choosing images to be used as training data and then properly labeling those images using a pre-defined format based on the nature of the experiment. The following subsection discusses the selection of data, landmark labeling, and AAM creation.

Subject   Sincerity   Clarity   Movement   Score
1         3 of 10     3 of 10   No         7.0
2         4 of 10     7 of 10   No         7.5
3         8 of 10     4 of 10   Yes        6.5
4         7 of 10     9 of 10   No         6.0
5         9 of 10     5 of 10   No         3.5
...       ...         ...       ...        ...
500       9 of 10     9 of 10   No         9.5

Table 1. Test subject scoring.

3.2 The Experiment

Upon evaluation of the facial expression database [30], several subjects were removed from the data set used for this work. Occlusions such as eyeglasses and hair, as well as inconsistencies in expression, were the main factors that contributed to the removal of these subjects; in particular, subjects 7, 14, and 17 were removed for these reasons. Facial images and their representative expressions in the database were categorized based on emotion clarity, sincerity, and head movement. Emotion clarity ranks an image based on the clarity of its emotional content. Sincerity was also chosen as a measure to help determine how well the subject conveys the intended emotion. Head movement is not included in the experiment, and images exhibiting certain levels of head movement were excluded from the training and test sets. Overall, the database stills were evaluated and each image was given an overall score, as shown in Table 1. A benchmark was set marking the minimum score required for inclusion in this initial experiment.

Figure 1. Points used in training the AAM.

Once the scoring process was complete, subjects with very low scores were omitted from the training and testing sets for this initial experiment. The facial landmarks used in this work are shown in Figure 1. They were chosen in an attempt to capture pose information of the underlying muscles of the face that create expressions. FACS helps provide insight into which parts of the face correspond to certain emotional expressions [12]. This guided the landmark selection process, which resulted in a total of 113 landmarks. Key areas were chosen to capture the movement of the brow, eyes, mouth, and nasolabial region as formed by the underlying muscles expected in facial expression. Once an initial AAM had been trained on several subjects, its search function helped automate the labeling process. Using this technique we labeled over 500 images (4 images per subject x 18 subjects x the six emotions plus neutral).
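
A sketch of how such labeled landmarks typically become the shape vectors behind an AAM follows. It is illustrative only: the random array stands in for the manually placed 113-point annotations, and a single fixed reference shape is used instead of full generalized Procrustes analysis.

```python
import numpy as np
from scipy.spatial import procrustes

# Illustrative stand-in for labeled faces: each face is a set of 113 (x, y)
# landmarks; a real pipeline would load the manually placed points.
rng = np.random.default_rng(0)
landmarks = rng.random((50, 113, 2))   # 50 labeled faces

# Align every labeled shape to a common reference (here, simply the first face)
# to remove translation, scale, and rotation before building the shape model.
reference = landmarks[0]
aligned = []
for shape in landmarks:
    _, shape_aligned, _ = procrustes(reference, shape)
    aligned.append(shape_aligned.reshape(-1))   # flatten to a 226-dim shape vector

aligned = np.asarray(aligned)   # these rows feed the shape PCA of the appearance model
```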

Figure 2. Sample of faces used in training.

In this experiment we used a leave-one-out approach to improve testing with relatively few subjects. Stills from each of the fifteen retained subjects were used as testing data after an AAM and per-class mean parameter vectors were computed using the other subjects’ stills as training data. This work uses the simple Euclidean distance from a face’s parameter vector to the mean parameter vector of each emotion as the classification scheme. Vectors for both the training and test data were extracted from the appearance model and loaded into MATLAB code to create the mean parameter vectors and compute the distances for classification.
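
A compact sketch of this leave-one-subject-out, nearest-class-mean scheme is given below, written in Python rather than the MATLAB used for the original experiments. The random vectors stand in for AAM parameters, and rebuilding the AAM for each fold is assumed to happen upstream; subject count, stills per class, and dimensionality are illustrative assumptions.

```python
import numpy as np

def classify_nearest_mean(train_params, train_labels, test_params):
    """Assign each test vector the label of the closest class-mean parameter vector."""
    classes = sorted(set(train_labels))
    labels_arr = np.asarray(train_labels)
    means = np.array([train_params[labels_arr == c].mean(axis=0) for c in classes])
    # Euclidean distance from every test vector to every class mean.
    dists = np.linalg.norm(test_params[:, None, :] - means[None, :, :], axis=2)
    return [classes[i] for i in dists.argmin(axis=1)]

# Illustrative leave-one-subject-out loop with synthetic AAM parameter vectors.
rng = np.random.default_rng(0)
subjects = list(range(15))
emotions = ["anger", "fear", "disgust", "joy", "surprise", "sadness", "neutral"]
params = {s: rng.random((len(emotions) * 4, 25)) for s in subjects}   # 4 stills per class
labels = {s: [e for e in emotions for _ in range(4)] for s in subjects}

correct = total = 0
for held_out in subjects:
    train_X = np.vstack([params[s] for s in subjects if s != held_out])
    train_y = [l for s in subjects if s != held_out for l in labels[s]]
    predictions = classify_nearest_mean(train_X, train_y, params[held_out])
    correct += sum(p == t for p, t in zip(predictions, labels[held_out]))
    total += len(predictions)

print("overall accuracy:", correct / total)
```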

Figure 3. Viewing AAM generation using training data.

One mode of the AAM based on these parameter vectors is shown in Figure 3. The two faces on either side represent variation from the mean within the model.

             Percentage Correct
Subject 1    80.0%
Subject 2    74.0%
Subject 3    90.5%
Subject 4    90.9%
Subject 5    96.3%
Subject 6    79.2%
Subject 8    83.3%
Subject 9    100%
Subject 10   60.0%
Subject 11   100%
Subject 12   75%
Subject 13   100%
Subject 15   83.3%
Subject 16   89.7%
Subject 18   100%

Total Average Correct   91.7%

Table 2. Classification results by subject.

The center face represents the average face created using the combination of all images for all emotions. The variability of the data is adjusted by modifying certain parameters of the AAM to ensure that a high percentage of the data is represented by the model, so that it can handle subtle changes in facial features.
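
One common way to set that retained variability, assuming the model’s modes come from PCA as sketched earlier, is to keep just enough modes to explain a target fraction of the variance. The 95% threshold and synthetic data below are illustrative choices, not values reported in this paper.

```python
import numpy as np
from sklearn.decomposition import PCA

# Synthetic stand-in for the combined appearance parameters of the training set.
rng = np.random.default_rng(0)
data = rng.random((120, 60))

# Fit a full model, then keep the smallest number of modes whose cumulative
# explained variance reaches the target (e.g. 95%).
target = 0.95
full = PCA().fit(data)
cumulative = np.cumsum(full.explained_variance_ratio_)
n_modes = int(np.searchsorted(cumulative, target) + 1)

model = PCA(n_components=n_modes).fit(data)
print(f"retaining {n_modes} modes covers "
      f"{model.explained_variance_ratio_.sum():.1%} of the variance")
```

(scikit-learn also accepts a fractional n_components, which performs this selection in a single step.)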

4 Experimental Results

An analysis of the results in Table 2 shows that the system correctly classified between 60% and 100% of expressions for each individual using only still images. Most subjects were in the 80% to 90% range, but a few subjects showed poor recognition performance. It is difficult with these early results to say whether that is due to subject expression, the feature method, or the classification method. It may be that more sophisticated classifiers could achieve better separation and thus better results for these individuals. Table 3 shows that anger and sadness generated the largest margin of error, with only 63.9% average correct for each. Possibilities for low performance on these also include limited training data and poor separation due to the simple Euclidean classifier. Based on the scoring scheme mentioned earlier, an evaluation of the database also suggests that the subjects had difficulty expressing negative emotions. The overall success of this first classification approach leaves room for future development, particularly in some areas. Overall, though, AAM parameters achieved significant success using only a Euclidean distance measure and produced results that compare well with other techniques.


           Percentage Correct
Fear       90.0%
Joy        93.3%
Surprise   79.7%
Anger      63.9%
Disgust    93.3%
Sadness    63.9%
Neutral    93.3%

Table 3. Classification results by emotion.

5 Conclusions

Using the AAM as a feature method has proven successful even with a simple Euclidean-distance classification scheme. The capability of AAMs to model both the shape and texture of faces makes them a strong tool for deriving feature sets for emotion-based expression classification. It is certainly likely that more sophisticated classifiers such as SVMs would provide better results on this data set. Overall, though, this initial work has shown the potential of AAMs as a feature set for expression classification.

5.1 Future Work

We are currently expanding this initial data set and considering other classification schemes to use in conjunction with AAM parameter features, beginning with Bayesian classifiers and SVMs. We also plan to explore dynamic expression recognition, as levels of sophistication can be improved with temporal information. This also has the potential to strengthen methods based on FACS and automatically generated scoring schemes.

A current weakness in this area of facial study, though, is still the lack of comparable databases. We plan to consider others in our future work [2] but would also like to encourage the creation and use of common data sets in this area as a means to strengthen comparison and fine-tuning of techniques.

6 Acknowledgements

Special thanks to Dr. Eric Patterson for guidance and direction with the project, as well as to Dr. Curry Guinn for assistance with project scope and data collection methods. Gratitude is also extended to Frank Walhoff for providing a freely accessible database.

References

[1] Marian Stewart Bartlett, Gwen Littlewort, Ian Fasel, and Javier R. Movellan. Real time face detection and facial expression recognition: Development and applications to human computer interaction. In Proceedings of the 2003 Conference on Computer Vision and Pattern Recognition Workshop, 2003.

[2] Alberto Battocchi, Fabio Pianesi, and Dina Goren-Bar. A first evaluation study of a database of kinetic facial expressions (DaFEx). In ICMI '05: Proceedings of the 7th International Conference on Multimodal Interfaces, pages 214–221, New York, NY, USA, 2005. ACM.

[3] Ying-li Tian, Takeo Kanade, and Jeffrey F. Cohn. Recognizing action units for facial expression analysis. In 2000 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'00), Volume 1, June 2000.

[4] George Caridakis, Lori Malatesta, Loic Kessous, Noam Amir, Amaryllis Raouzaiou, and Kostas Karpouzis. Modeling naturalistic affective states via facial and vocal expressions recognition. In ICMI '06: Proceedings of the 8th International Conference on Multimodal Interfaces, pages 146–154, New York, NY, USA, 2006. ACM.

[5] C. Izard. The maximally discriminative facial movement coding system (MAX). Available from Instructional Resource Center, 1979.

[6] C. Izard, L. M. Dougherty, and E. A. Hembree. A System for Identifying Affect Expressions by Holistic Judgments. University of Delaware, 1983.

[7] Ira Cohen, Nicu Sebe, Fabio G. Cozman, and Thomas S. Huang. Semi-supervised learning for facial expression recognition. In Proceedings of the 5th ACM SIGMM, 2003.

[8] Jeffrey F. Cohn. Foundations of human computing: facial expression and emotion. In ICMI '06: Proceedings of the 8th International Conference on Multimodal Interfaces, pages 233–238, New York, NY, USA, 2006. ACM.

[9] Jeffrey F. Cohn, Karen Schmidt, Ralph Gross, and Paul Ekman. Individual differences in facial expression: Stability over time, relation to self-reported emotion, and ability to inform person identification. In IEEE International Conference on Multimodal Interfaces (ICMI 2002), 2002.

[10] Manfred Eckschlager, Regina Bernhaupt, and Manfred Tscheligi. NEmESys - neural emotion eliciting system. In CHI '05 Extended Abstracts on Human Factors in Computing Systems, 2005.

[11] G. J. Edwards, T. F. Cootes, and C. J. Taylor. Face recognition using active appearance models. In Proceedings of the European Conference on Computer Vision, 1998.


[12] P. Ekman. Emotions Revealed: Recognizing Faces and Feelings to Improve Communication and Emotional Life. Holt, 2003.

[13] P. Ekman and W. Friesen. Facial Action Coding System: A Technique for the Measurement of Facial Movement. Consulting Psychologists Press, Palo Alto, 1978.

[14] Paul Ekman, Wallace Friesen, and Joseph Hager. Emotional Facial Action Coding System Manual. 2002.

[15] B. Fasel and J. Luettin. Automatic facial expression analysis: A survey. Pattern Recognition, 2003.

[16] W. V. Friesen and P. Ekman. EMFACS-7: Emotional facial action coding system. Unpublished manuscript, University of California at San Francisco, 1983. http://citeseer.comp.nus.edu.sg/context/1063041/0.

[17] Lisa Gralewski, Neill Campbell, Barry Thomas, Colin Dalton, and David Gibson. Statistical synthesis of facial expressions for the portrayal of emotion. In Proceedings of the 2nd International Conference on Computer Graphics and Interactive Techniques in Australasia and South East Asia, 2004.

[18] Rita T. Griesser, Douglas W. Cunningham, Christian Wallraven, and Heinrich H. Bülthoff. Psychophysical investigation of facial expressions using computer animated faces. In Proceedings of the 4th Symposium on Applied Perception in Graphics and Visualization (APGV '07), July 2007.

[19] Soumya Hamlaoui and Franck Davoine. Facial action tracking using particle filters and active appearance models. In Proceedings of the 2005 Joint Conference on Smart Objects and Ambient Intelligence, 2005.

[20] Diane J. Litman and Kate Forbes-Riley. Predicting student emotions in computer-human tutoring dialogues. In Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics (ACL 2004), 2004.

[21] Gwen Littlewort, Marian Stewart Bartlett, Ian Fasel, Joshua Susskind, and Javier Movellan. Dynamics of facial expression extracted automatically from video. In IEEE Conference on Computer Vision and Pattern Recognition: Workshop on Face Processing in Video, 2004.

[22] Juwei Lu, Konstantinos N. Plataniotis, and Anastasios N. Venetsanopoulos. Face recognition using kernel direct discriminant analysis algorithms. IEEE Transactions on Neural Networks, 2003.

[23] A. Mehrabian. Communication without words. Psychology Today, 1968.

[24] Philipp Michel and Rana el Kaliouby. Real time facial expression recognition in video using support vector machines, 2003.

[25] Muid Mufti and Assia Khanam. Fuzzy rule based facial expression recognition. In International Conference on Computational Intelligence for Modelling Control and Automation, and International Conference on Intelligent Agents, IEEE, 2006.

[26] Maja Pantic, Nicu Sebe, Jeffrey F. Cohn, and Thomas Huang. Affective multimodal human-computer interaction. In Proceedings of the 13th Annual ACM International Conference on Multimedia (MULTIMEDIA '05), 2005.

[27] N. Sebe, I. Cohen, A. Garg, M. Lew, and T. Huang. Emotion recognition using a Cauchy naive Bayes classifier, 2002.

[28] Takeo Kanade, Yingli Tian, and Jeffrey F. Cohn. Comprehensive database for facial expression analysis. In Proceedings of the 4th IEEE International Conference on Automatic Face and Gesture Recognition, March 2000.

[29] Matthew A. Turk and Alex P. Pentland. Face recognition using eigenfaces. Pattern Recognition, 1991.

[30] Frank Walhoff. Facial expression and emotion database from Technical University of Munich.

[31] Christian Wallraven, Heinrich H. Bülthoff, Douglas W. Cunningham, Jan Fischer, and Dirk Bartz. Evaluation of real-world and computer-generated stylized facial expressions. ACM Transactions on Applied Perception, volume 4, page 16, New York, NY, USA, 2007. ACM.