Driven by recent technical progress and the miniaturization of electronic devices, mobile eye tracking has become increasingly popular. Whereas the presentation software of eye trackers is flexible and robust, permitting both indoor and outdoor studies, software and techniques for analyzing the recorded scene video and gaze data are still in their infancy. Existing software provides various analysis functions for still images, but lacks solutions for complex dynamic scenes, which require a cumbersome installation of markers in the scene. To analyze the data, the scene video has to be annotated manually, frame by frame: a very time-consuming and error-prone process that suffers from subjectivity, annotator fatigue, and changes in the annotation schema.