
Welcome to the FaceSense Project

Emotions are a form of nonverbal communication that we, as human beings, typically use to reflect our physiological and mental state. We express emotions when dealing with everything around us, even our computers. As we become more dependent on computers in our lives, we design the systems we use to be more interactive; in other words, we adapt computers to our behavior and our needs. Imagine a computer that is emotionally intelligent, i.e. one that can detect your mood and make decisions based on it. For example, it could alert you if you are falling asleep while driving, detect that you are frustrated when you can't find what you are googling for, or switch to a music genre that will cheer you up when you are sad.

The FaceSense API enables the real-time analysis, tagging and inference of cognitive-affective mental states from facial video. The API builds on Rana el Kaliouby's doctoral research, which presents a computational model of mind reading as a framework for machine perception and mental state recognition. This framework combines bottom-up, vision-based processing of the face (e.g. a head nod or smile) with top-down predictions of mental state models (e.g. interest and confusion) to interpret the meaning underlying head and facial signals over time. A multilevel, probabilistic architecture (using Dynamic Bayesian Networks) models the hierarchical way in which people perceive facial and other human behavior, and handles the uncertainty inherent in attributing mental states to others. The output probabilities form a rich modality that technology can use to represent a person's state and respond accordingly. The API does not include a facial feature tracker, so that users can easily integrate it with their own tracker.
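As a rough illustration of the "respond accordingly" idea, the Python sketch below assumes that each processed frame yields a mapping from mental-state names to probabilities. The function name, state labels, probability values and threshold are illustrative assumptions, not part of the actual FaceSense API.

{{{#!python
# Illustrative only: the real FaceSense API exposes different types and calls.
# Assumption: each processed frame yields a dict of mental-state probabilities.

def respond_to_state(frame_probabilities, threshold=0.7):
    """React to the most probable mental state if it is confident enough.

    frame_probabilities: dict mapping mental-state name -> probability,
    e.g. {"interested": 0.12, "confused": 0.81, "bored": 0.07}.
    """
    state, p = max(frame_probabilities.items(), key=lambda kv: kv[1])
    if p < threshold:
        return None  # not confident enough to act on
    if state == "confused":
        print("Offering contextual help...")
    elif state == "bored":
        print("Switching to a livelier playlist...")
    return state

# Hypothetical per-frame output (numbers are made up for illustration):
respond_to_state({"interested": 0.12, "confused": 0.81, "bored": 0.07})
}}}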

http://img175.imageshack.us/img175/2750/modulesen4.jpg

The API is divided into three modules, each level depending on its predecessor. The first level is the Facial Action Units (AUs) module, which uses the facial coordinates from the face tracker to geometrically compute changes in facial expression across several frames, such as eyebrow, lip and head movement. These measurements are then fed into the Face Gestures module, which uses Hidden Markov Models (HMMs) to estimate facial gestures such as a smile, head nod or head shake. The API comes with a training set for the HMMs, which is essentially a collection of examples of valid Action Unit sequences. New face gestures beyond the currently supported set can easily be defined, as long as the API supports the Action Units that indicate the gesture and training examples are provided for the HMM. Finally, Dynamic Bayesian Networks (DBNs) are used to estimate the person's current mental state from the history of face gestures detected across several frames. Like the Gestures module, this module ships with a training set for the supported mental states, and new mental states can be defined as well.
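To make the first stage more concrete, the sketch below shows one way per-frame geometric measurements could be derived from tracker landmarks and compared against a neutral baseline. The landmark names, the two measurements and the normalization are illustrative assumptions, not the API's actual Action Unit definitions.

{{{#!python
# A minimal sketch (not the FaceSense implementation) of the AU stage:
# turning raw tracker landmarks into simple geometric measurements whose
# changes across frames could then be fed to the gesture HMMs.
import math

def distance(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

def action_units(frame, baseline):
    """Compare one frame's landmarks against a neutral baseline frame."""
    # Brow raise: the inner brow moves away from the eye centre.
    brow_now = distance(frame["inner_brow"], frame["eye_center"])
    brow_base = distance(baseline["inner_brow"], baseline["eye_center"])
    # Lip-corner pull (smile): the mouth corners move apart.
    mouth_now = distance(frame["mouth_left"], frame["mouth_right"])
    mouth_base = distance(baseline["mouth_left"], baseline["mouth_right"])
    return {
        "brow_raise": (brow_now - brow_base) / brow_base,
        "lip_corner_pull": (mouth_now - mouth_base) / mouth_base,
    }

# Hypothetical (x, y) landmarks from your own face tracker:
baseline = {"inner_brow": (48, 40), "eye_center": (50, 55),
            "mouth_left": (40, 90), "mouth_right": (60, 90)}
frame = {"inner_brow": (48, 36), "eye_center": (50, 55),
         "mouth_left": (37, 89), "mouth_right": (63, 89)}

print(action_units(frame, baseline))
# A sequence of these relative changes over frames is what the HMMs consume.
}}}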



Roadmap: the upcoming release dates and planned features
TODO list: Our idea tank!


API Modules



Obtaining the API and its Sample Applications


Other API-based solutions:

MR-Cam-Cap
Get AutismUI

If you encounter any issues, please submit a ticket; make sure to check the open tickets before submitting a new one.

Projects by our Collaborators



Documentation

For End Users:

How To Open a .MOV file with MR-UI-dotNET
Mindreader in Action
Guide to successful mental state training
How To Train a New Mental State using MR-UI-dotNET
How To Train a New Mental State using FFTTEST
How To Train a Gesture

Developers' Section:

How To Get the API
How To Use the API
How To Use the Sample Workspace


Contributors

Dr. Rana El Kaliouby
Abdelrahman Mahmoud
Youssef Kashef


Useful Third-Party Tools

Medialooks DirectShow Quicktime Filter