The estimation of human attention has recently been addressed in the context of human-robot interaction. Joint workspaces already exist today and challenge cooperating systems to focus jointly on common objects, scenes, and work niches. With the advent of Google Glass and increasingly affordable wearable eye-tracking, the monitoring of human attention will soon become ubiquitous. The presented work describes, for the first time, a method for estimating human fixations in 3D environments that requires no artificial landmarks in the field of view and enables attention mapping in 3D models. It recovers the full human view frustum and the gaze pointer within a
previously acquired 3D model of the environment in real time.
A study of the method's precision reports a mean projection error of ≈1.1 cm and a mean angle error of ≈0.6° within the chosen 3D model; the overall precision remains bounded by that of the eye-tracking device itself (≈1°). This methodology opens new opportunities for joint-attention studies and brings new potential to automated processing in human factors technologies.
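The core geometric step implied by the abstract, intersecting a recovered gaze ray with a previously acquired 3D model and measuring the angular deviation from a reference direction, can be sketched as follows. This is a minimal illustration under assumed names and a toy single-triangle scene, not the authors' implementation; it uses the standard Möller-Trumbore ray-triangle intersection.

```python
import numpy as np

def intersect_ray_triangle(origin, direction, v0, v1, v2, eps=1e-9):
    """Möller-Trumbore ray-triangle intersection.
    Returns the 3D hit point, or None if the ray misses the triangle."""
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(direction, e2)
    det = np.dot(e1, p)
    if abs(det) < eps:               # ray parallel to triangle plane
        return None
    inv_det = 1.0 / det
    t_vec = origin - v0
    u = np.dot(t_vec, p) * inv_det
    if u < 0.0 or u > 1.0:
        return None
    q = np.cross(t_vec, e1)
    v = np.dot(direction, q) * inv_det
    if v < 0.0 or u + v > 1.0:
        return None
    t = np.dot(e2, q) * inv_det
    if t < eps:                      # intersection behind the eye
        return None
    return origin + t * direction

def angle_error_deg(gaze_dir, ref_dir):
    """Angle between estimated and reference gaze directions, in degrees."""
    cos_a = np.dot(gaze_dir, ref_dir) / (
        np.linalg.norm(gaze_dir) * np.linalg.norm(ref_dir))
    return np.degrees(np.arccos(np.clip(cos_a, -1.0, 1.0)))

# Toy scene: one wall triangle one metre in front of the eye.
eye  = np.array([0.0, 0.0, 0.0])
gaze = np.array([0.0, 0.0, 1.0])     # estimated gaze direction (unit vector)
v0 = np.array([-1.0, -1.0, 1.0])
v1 = np.array([ 1.0, -1.0, 1.0])
v2 = np.array([ 0.0,  1.0, 1.0])
hit = intersect_ray_triangle(eye, gaze, v0, v1, v2)   # 3D gaze pointer
```

In a full pipeline, the ray would be tested against all triangles of the acquired model and the nearest hit kept; the reported projection error would then be the distance between such hit points and ground-truth fixation markers.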
© 2013 Society of Photo-Optical Instrumentation Engineers (SPIE).
Lucas Paletta, Katrin Santner, Gerald Fritz, and Heinz Mayer, "3D recovery of human gaze in natural environments," Proc. SPIE 8662, Intelligent Robots and Computer Vision XXX: Algorithms and Techniques, 86620K (February 4, 2013); doi:10.1117/12.2008539; http://dx.doi.org/10.1117/12.2008539