Framework For Behaviour Tracking

Riyadh Almutiry and Tim Cootes

The aim of this project is to build an extensible framework for tracking facial behaviour based on features found in the face image. The task of the framework is to provide robust and accurate generic measures of gaze and facial expression that can be used across multiple domains for further analysis.

The development of the project consists of two main stages:

  1. Choosing a reliable facial feature detector
  2. Selecting and processing features to produce reliable measures for the framework

We are using the robust Facial Feature Detector developed by Cootes et al. (2012) as the basis for our framework.

Currently, we aim to build a robust, non-intrusive gaze detector using only a single camera.
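As a rough illustration of how a gaze measure might be derived from 2D facial landmarks such as those produced by a feature detector, the sketch below computes the iris position along the eye-corner axis. All names, coordinates, and the function itself are illustrative assumptions, not the project's actual API.

```python
# Hypothetical gaze measure from 2D eye landmarks (illustrative only).

def horizontal_gaze_ratio(eye_outer, eye_inner, iris_center):
    """Return the iris position along the eye axis, clamped to [0, 1].

    0.0 places the iris at the outer corner, 1.0 at the inner corner;
    a value near 0.5 suggests the eye is looking roughly straight ahead.
    """
    ex, ey = eye_outer
    ix, iy = eye_inner
    cx, cy = iris_center
    # Project the iris centre onto the outer->inner corner axis.
    axis = (ix - ex, iy - ey)
    rel = (cx - ex, cy - ey)
    axis_len_sq = axis[0] ** 2 + axis[1] ** 2
    if axis_len_sq == 0:
        raise ValueError("eye corner landmarks coincide")
    t = (rel[0] * axis[0] + rel[1] * axis[1]) / axis_len_sq
    return max(0.0, min(1.0, t))

# Example: iris midway between the corners -> roughly central gaze.
print(horizontal_gaze_ratio((100.0, 50.0), (140.0, 50.0), (120.0, 50.0)))  # 0.5
```

A real system would combine such per-eye measures with head-pose estimates before reporting a gaze direction, since eye-only measures are confounded by head rotation.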

Why detect gaze?

Many safety-critical tasks demand a high level of visual attention; in car driving, for instance, a moment of lost attention can result in serious consequences such as a collision. Visual attention remains an active topic in many areas, including education, psychology, neuroscience, cognitive neuroscience, and neuropsychology. The number of visual attention studies grew by more than 50% between 2000 and 2010; while some of these studies focus on neurophysiology, the vast majority are behavioural (Carrasco, 2011). A driving simulation experiment showed that a driver's visual attention can be predicted accurately by tracking eye gaze (Xu et al., 2000). In psychology, Downing et al. (2004) investigated how the gaze of others shifts subjects' visual attention. Moreover, in business, a record of a customer's gaze fixations and saccades may reveal individual interests, which could improve web search or help marketers personalise advertisements for individual customers.


References

Cai, H., & Lin, Y. (2012). An integrated head pose and eye gaze tracking approach to non-intrusive visual attention measurement for wide FOV simulators. Virtual Reality, 16, 25-32. doi:10.1007/s10055-010-0171-9

Carrasco, M. (2011). Visual attention: The past 25 years. Vision Research. doi:10.1016/j.visres.2011.04.012

Cootes, T. F., Cooper, D., Taylor, C. J., & Graham, J. (1995). Active shape models - their training and application. Computer Vision and Image Understanding, 61(1), 38-59.

Cootes, T. F., Ionita, M., Lindner, C., & Sauer, P. (2012). Robust and accurate shape model fitting using random forest regression voting. ECCV 2012.

Downing, P., Dodds, C., & Bray, D. (2004). Why does the gaze of others direct visual attention? Visual Cognition, 11(1), 71-79.

Xu, F., Xu, B., Ding, H., & Qian, W. (2000). Virtual automobile driver training simulator. International Journal of Virtual Reality, 4(4).