Publications

Explore a collection of publications and projects from diverse fields that cite Pupil Labs and use Pupil Labs eye tracking hardware and software in their research.

A selection from the 252 publications:
The limits of color awareness during active, real-world vision
2020
Psychology, Cognitive Science, Neuroscience
Michael A. Cohen, Thomas L. Botch, and Caroline E. Robertson
PNAS June 16, 2020 117 (24) 13821-13827; first published June 8, 2020
Color ignites visual experience, imbuing the world with meaning, emotion, and richness. As soon as an observer opens their eyes, they have the immediate impression of a rich, colorful experience that encompasses their entire visual world. Here, we show that this impression is surprisingly inaccurate. We used head-mounted virtual reality (VR) to place observers in immersive, dynamic real-world environments, which they naturally explored via saccades and head turns. Meanwhile, we monitored their gaze with in-headset eye tracking and then systematically altered the visual environments such that only the parts of the scene they were looking at were presented in color and the rest of the scene (i.e., the visual periphery) was entirely desaturated. We found that observers were often completely unaware of these drastic alterations to their visual world. In the most extreme case, almost a third of observers failed to notice when less than 5% of the visual display was presented in color. This limitation on perceptual awareness could not be explained by retinal neuroanatomy or previous studies of peripheral visual processing using more traditional psychophysical approaches. In a second study, we measured color detection thresholds using a staircase procedure while a set of observers intentionally attended to the periphery. Still, we found that observers were unaware when a large portion of their field of view was desaturated. Together, these results show that during active, naturalistic viewing conditions, our intuitive sense of a rich, colorful visual world is largely incorrect.
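A minimal sketch of the gaze-contingent desaturation described in this abstract is shown below. It is not the authors' VR pipeline: it is a 2D approximation, assuming OpenCV and NumPy are available, a video frame is given as a BGR array, and the current gaze position is known in pixel coordinates; colour is kept only inside a circular window around gaze and the periphery is converted to grey.

```python
import cv2
import numpy as np

def desaturate_periphery(frame_bgr, gaze_xy, radius_px):
    """Keep colour inside a circular window around gaze; grey out the rest.

    frame_bgr : HxWx3 uint8 image (e.g., one video frame)
    gaze_xy   : (x, y) gaze position in pixel coordinates
    radius_px : radius of the full-colour window in pixels
    """
    grey = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    grey_bgr = cv2.cvtColor(grey, cv2.COLOR_GRAY2BGR)

    # Binary mask: 255 inside the gaze-centred window, 0 in the periphery.
    mask = np.zeros(frame_bgr.shape[:2], dtype=np.uint8)
    cv2.circle(mask, (int(gaze_xy[0]), int(gaze_xy[1])), radius_px, 255, -1)
    mask3 = cv2.merge([mask, mask, mask]) / 255.0

    # Blend: original colour where the mask is set, desaturated elsewhere.
    out = frame_bgr * mask3 + grey_bgr * (1.0 - mask3)
    return out.astype(np.uint8)

if __name__ == "__main__":
    frame = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)  # stand-in frame
    result = desaturate_periphery(frame, gaze_xy=(320, 240), radius_px=80)
    print(result.shape)
```

In the study the manipulation was applied inside a VR headset with in-headset eye tracking; the hard-edged circular window here stands in for whatever window shape and blending the experiment actually used.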
Assessing Cognitive Load via Pupillometry
2020
Cognitive Science
Pavel Weber, Franca Rupprecht, Stefan Wiesen, Bernd Hamann, Achim Ebert
A fierce search is called for a reliable, non-intrusive, and real-time capable method for assessing a person’s experienced cognitive load. Software systems capable of adapting their complexity to the mental demand of their users would be beneficial in a variety of domains. The only disclosed algorithm that seems to reliably detect cognitive load in pupillometry signals – the Index of Pupillary Activity (IPA) – has not yet been sufficiently validated. We take a first step in validating the IPA by applying it to a working memory experiment with finely granulated levels of difficulty, and comparing the results to traditional pupillometry metrics analyzed in cognitive research. Our findings confirm the significant positive correlation between task difficulty and IPA that the authors stated.
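The IPA itself is specified in the publication it cites; the snippet below is only a rough sketch of its core idea (wavelet-decompose the pupil diameter signal, suppress small detail coefficients with a universal threshold, and count the surviving high-frequency events per second), assuming NumPy and PyWavelets. The wavelet choice, threshold estimate, and absence of blink handling are simplifications, not the validated implementation.

```python
import numpy as np
import pywt

def ipa_approx(pupil_diameter, sample_rate_hz):
    """Rough approximation of the core IPA idea (not the reference implementation).

    pupil_diameter : 1-D array of pupil diameter samples (blinks already removed/interpolated)
    sample_rate_hz : sampling rate of the signal
    Returns an estimate of high-frequency pupil events per second.
    """
    x = np.asarray(pupil_diameter, dtype=float)

    # Two-level discrete wavelet transform with a symlet-16 mother wavelet.
    _, d2, _d1 = pywt.wavedec(x, "sym16", level=2)

    # Local modulus maxima of the level-2 detail coefficients.
    mag = np.abs(d2)
    maxima = (mag[1:-1] > mag[:-2]) & (mag[1:-1] > mag[2:])

    # Universal threshold to suppress small, noise-like coefficients.
    lam = np.std(d2) * np.sqrt(2.0 * np.log(len(d2)))
    count = int(np.sum(maxima & (mag[1:-1] > lam)))

    duration_s = len(x) / sample_rate_hz
    return count / duration_s

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    signal = 3.5 + 0.05 * rng.standard_normal(120 * 60)  # 2 min of fake 60 Hz pupil data
    print(ipa_approx(signal, sample_rate_hz=60))
```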
Tracking visual search demands and memory load through pupil dilation
2020
Cognitive Science
Moritz Stolte, Benedikt Gollan, Ulrich Ansorge
Journal of Vision June 2020, Vol.20, 21
Continuously tracking cognitive demands via pupil dilation is a desirable goal for the monitoring and investigation of cognitive performance in applied settings where the exact time point of mental engagement in a task is often unknown. Yet, hitherto no experimentally validated algorithm exists for continuously estimating cognitive demands based on pupil size. Here, we evaluated the performance of a continuously operating algorithm that is agnostic of the onset of the stimuli and derives them by way of retrospectively modeling attentional pulses (i.e., onsets of processing). We compared the performance of this algorithm to a standard analysis of stimulus-locked pupil data. The pupil data were obtained while participants performed visual search (VS) and visual working memory (VWM) tasks with varying cognitive demands. In Experiment 1, VS was performed during the retention interval of the VWM task to assess interactive effects between search and memory load on pupil dilation. In Experiment 2, the tasks were performed separately. The results of the stimulus-locked pupil data demonstrated reliable increases in pupil dilation due to high VWM load. VS difficulty only affected pupil dilation when simultaneous memory demands were low. In the single task condition, increased VS difficulty resulted in increased pupil dilation. Importantly, online modeling of pupil responses was successful on three points. First, there was good correspondence between the modeled and stimulus locked pupil dilations. Second, stimulus onsets could be approximated from the derived attentional pulses to a reasonable extent. Third, cognitive demands could be classified above chance level from the modeled pupil traces in both tasks.
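The stimulus-locked analysis used as the reference here can be sketched generically: assuming arrays of pupil samples, their timestamps, and known stimulus onsets, each onset gets an epoch that is corrected by its pre-stimulus baseline before averaging. This is an illustration of the standard approach, not the authors' code.

```python
import numpy as np

def stimulus_locked_dilation(pupil, timestamps, onsets,
                             baseline_s=0.5, window_s=3.0):
    """Baseline-corrected, stimulus-locked pupil traces.

    pupil      : 1-D array of pupil size samples
    timestamps : matching 1-D array of sample times in seconds
    onsets     : stimulus onset times in seconds
    Returns an (n_onsets, n_samples) array of baseline-corrected epochs.
    """
    pupil = np.asarray(pupil, float)
    timestamps = np.asarray(timestamps, float)
    dt = np.median(np.diff(timestamps))          # approximate sampling interval
    n_base = int(round(baseline_s / dt))
    n_win = int(round(window_s / dt))

    epochs = []
    for onset in onsets:
        i = int(np.searchsorted(timestamps, onset))
        if i - n_base < 0 or i + n_win > len(pupil):
            continue                              # skip incomplete epochs
        baseline = pupil[i - n_base:i].mean()     # mean pupil size before onset
        epochs.append(pupil[i:i + n_win] - baseline)
    return np.vstack(epochs)

if __name__ == "__main__":
    t = np.arange(0, 60, 1 / 120)                            # 60 s at 120 Hz
    p = 3.0 + 0.1 * np.sin(t) + 0.02 * np.random.randn(len(t))
    traces = stimulus_locked_dilation(p, t, onsets=[5, 15, 25, 35, 45])
    print(traces.shape, traces.mean(axis=0)[:5])
```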
Eye movements in real-life search are guided by task-irrelevant working-memory content
2020
Cognitive Science
Cherie Zhou, Monicque M. Lorist, Sebastiaan Mathot
bioRxiv preprint
Attention is automatically guided towards stimuli that match the contents of working memory. This has been studied extensively using simplified computer tasks, but it has never been investigated whether (yet often assumed that) memory-driven guidance also affects real-life search. Here we tested this open question in a naturalistic environment that closely resembles real life. In two experiments, participants wore a mobile eye-tracker, and memorized a color, prior to a search task in which they looked for a target word among book covers on a bookshelf. The memory color was irrelevant to the search task. Nevertheless, we found that participants' gaze was strongly guided towards book covers that matched the memory color. Crucially, this memory-driven guidance was evident from the very start of the search period. These findings support that attention is guided towards working-memory content in real-world search, and that this is fast and therefore likely reflecting an automatic process.
SteadEye-Head—Improving MARG-Sensor Based Head Orientation Measurements Through Eye Tracking Data
2020
Medical
Lukas Wöhle, Marion Gebhard
Sensors 20, no. 10 (2020)
This paper presents the use of eye tracking data in Magnetic Angular Rate Gravity (MARG)-sensor based head orientation estimation. The approach presented here can be deployed in any motion measurement that includes MARG and eye tracking sensors (e.g., rehabilitation robotics or medical diagnostics). The challenge in these mostly indoor applications is the presence of magnetic field disturbances at the location of the MARG-sensor. In this work, eye tracking data (visual fixations) are used to enable zero orientation change updates in the MARG-sensor data fusion chain. The approach is based on a MARG-sensor data fusion filter, an online visual fixation detection algorithm as well as a dynamic angular rate threshold estimation for low latency and adaptive head motion noise parameterization. In this work we use an adaptation of Madgwick’s gradient descent filter for MARG-sensor data fusion, but the approach could be used with any other data fusion process. The presented approach does not rely on additional stationary or local environmental references and is therefore self-contained. The proposed system is benchmarked against a Qualisys motion capture system, a gold standard in human motion analysis, showing improved heading accuracy for the MARG-sensor data fusion up to a factor of 0.5 while magnetic disturbance is present.
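The central mechanism, gating orientation updates with visual fixations, can be sketched independently of the full filter. In the sketch below, madgwick_update is a hypothetical placeholder for any MARG fusion step, each sample carries a fixation flag from the eye tracker, and a fixed angular-rate threshold replaces the paper's dynamic threshold estimation; during a low-rate fixation the previous quaternion is simply held (a zero orientation change update).

```python
import numpy as np

def fuse_with_fixation_gating(samples, madgwick_update, rate_thresh_rad_s=0.1):
    """Hold head orientation during visual fixations with low angular rate.

    samples         : iterable of dicts with keys 'gyr', 'acc', 'mag' (numpy arrays)
                      and 'fixation' (bool from the eye tracker's fixation detector)
    madgwick_update : placeholder for any MARG fusion step,
                      q_new = madgwick_update(q, gyr, acc, mag)
    Returns the list of orientation quaternions, one per sample.
    """
    q = np.array([1.0, 0.0, 0.0, 0.0])   # initial orientation (w, x, y, z)
    trajectory = []
    for s in samples:
        gyro_rate = np.linalg.norm(s["gyr"])
        if s["fixation"] and gyro_rate < rate_thresh_rad_s:
            # Zero orientation change update: during a fixation with negligible
            # angular rate, keep q fixed, suppressing drift from magnetic disturbance.
            pass
        else:
            q = madgwick_update(q, s["gyr"], s["acc"], s["mag"])
        trajectory.append(q.copy())
    return trajectory

if __name__ == "__main__":
    # Trivial stand-in fusion step, for demonstration only.
    def fake_update(q, gyr, acc, mag):
        return q / np.linalg.norm(q)

    data = [{"gyr": np.zeros(3), "acc": np.array([0.0, 0.0, 9.81]),
             "mag": np.array([0.2, 0.0, 0.4]), "fixation": True}] * 5
    print(len(fuse_with_fixation_gating(data, fake_update)))
```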
The impact of slippage on the data quality of head-worn eye trackers
2020
Eye Tracking Algorithms
Diederick C. Niehorster, Thiago Santini, Roy S. Hessels, Ignace TC Hooge, Enkelejda Kasneci, Marcus Nyström
Behavior Research Methods (2020)
Mobile head-worn eye trackers allow researchers to record eye-movement data as participants freely move around and interact with their surroundings. However, participant behavior may cause the eye tracker to slip on the participant’s head, potentially strongly affecting data quality. To investigate how this eye-tracker slippage affects data quality, we designed experiments in which participants mimic behaviors that can cause a mobile eye tracker to move. Specifically, we investigated data quality when participants speak, make facial expressions, and move the eye tracker. Four head-worn eye-tracking setups were used: (i) Tobii Pro Glasses 2 in 50 Hz mode, (ii) SMI Eye Tracking Glasses 2.0 60 Hz, (iii) Pupil-Labs’ Pupil in 3D mode, and (iv) Pupil-Labs’ Pupil with the Grip gaze estimation algorithm as implemented in the EyeRecToo software. Our results show that whereas gaze estimates of the Tobii and Grip remained stable when the eye tracker moved, the other systems exhibited significant errors (0.8–3.1° increase in gaze deviation over baseline) even for the small amounts of glasses movement that occurred during the speech and facial expressions tasks. We conclude that some of the tested eye-tracking setups may not be suitable for investigating gaze behavior when high accuracy is required, such as during face-to-face interaction scenarios. We recommend that users of mobile head-worn eye trackers perform similar tests with their setups to become aware of their characteristics. This will enable researchers to design experiments that are robust to the limitations of their particular eye-tracking setup.
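Gaze-deviation numbers like the 0.8–3.1° above are typically computed as the angle between the measured gaze direction and the known target direction, compared against a pre-movement baseline. The snippet below is a generic NumPy illustration of that computation on made-up unit vectors, not the analysis code from the paper.

```python
import numpy as np

def angular_deviation_deg(gaze_dirs, target_dirs):
    """Per-sample angle (degrees) between gaze and target direction vectors.

    gaze_dirs, target_dirs : (N, 3) arrays of direction vectors.
    """
    g = gaze_dirs / np.linalg.norm(gaze_dirs, axis=1, keepdims=True)
    t = target_dirs / np.linalg.norm(target_dirs, axis=1, keepdims=True)
    cos = np.clip(np.sum(g * t, axis=1), -1.0, 1.0)
    return np.degrees(np.arccos(cos))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    target = np.tile([0.0, 0.0, 1.0], (100, 1))
    gaze = target + 0.02 * rng.standard_normal((100, 3))      # noisy gaze directions
    baseline = angular_deviation_deg(gaze, target).mean()
    after = angular_deviation_deg(gaze + [0.03, 0.0, 0.0], target).mean()
    print(f"baseline {baseline:.2f} deg, after slippage {after:.2f} deg, "
          f"increase {after - baseline:.2f} deg")
```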
Use of eye tracking device to evaluate the driver’s behaviour and the infrastructures quality in relation to road safety
2020
Transportation Safety, Automotive
David Vetturi, Michela Tiboni, Giulio Maternini, Michela Bonera
Transportation Research Procedia Volume 45, 2020, Pages 587-595
Eye tracking makes it possible to obtain important information about drivers’ behaviour during the driving activity, by employing a device that monitors the movements of the eye and therefore the user’s point of observation. This paper explains how analysing drivers’ behaviour through their eye movements permits an evaluation of infrastructure quality in terms of road safety. Driver behaviour analyses were conducted in urban areas, examining the observation targets (cars, pedestrians, road signs, distraction elements) in quantitative terms (time spent fixating each individual target). In particular, roundabout intersections and rectilinear segments of urban arterials were examined, and records of seven drivers’ behaviour were collected in order to obtain significant statistical variability. Only young people were considered in this study. The analyses carried out made it possible to assess how different types of infrastructure influence the behaviour of road users, in terms of the safety performance given by their design. In particular, quantitative analyses were carried out on the driving time dedicated to observing attention targets rather than distraction targets. From a statistical point of view, the relationship between driver characteristics, weather conditions, and infrastructure on the one hand, and driving behaviour (travelling speed and attention/inattention time) on the other, was analysed with an ANOVA method.
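The ANOVA mentioned at the end of the abstract can be illustrated with a generic one-way example using SciPy. The infrastructure types, group sizes, and attention-time values below are placeholders, not data from the study; the actual analysis combined driver, weather, and infrastructure factors, for which a multi-factor ANOVA (e.g., via statsmodels) would be closer.

```python
import numpy as np
from scipy import stats

# Hypothetical fixation-on-attention-target times (seconds) per infrastructure type.
rng = np.random.default_rng(42)
roundabout = rng.normal(loc=4.5, scale=0.8, size=7)   # seven drivers, placeholder values
straight = rng.normal(loc=3.6, scale=0.8, size=7)

# One-way ANOVA: does mean attention time differ between infrastructure types?
f_stat, p_value = stats.f_oneway(roundabout, straight)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
```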
Privacy-Preserving Eye Videos using Rubber Sheet Model
2020
HCI, Privacy
Aayush K. Chaudhary, Jeff B. Pelz
ETRA 2020 Preprint: June 2-5, 2020
Video-based eye trackers estimate gaze based on eye images/videos. As security and privacy concerns loom over technological advancements, tackling such challenges is crucial. We present a new approach to handle privacy issues in eye videos by replacing the current identifiable iris texture with a different iris template in the video capture pipeline based on the Rubber Sheet Model. We extend to image blending and median-value representations to demonstrate that videos can be manipulated without significantly degrading segmentation and pupil detection accuracy.
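The Rubber Sheet Model referenced here (Daugman-style iris normalization) maps the iris region into a fixed-size polar rectangle, which is what allows a different iris template to be swapped in and warped back. The sketch below shows only a simplified unwrapping step with OpenCV's warpPolar, assuming the pupil/iris centre and an approximate iris radius are already known; a full normalization maps between the pupil boundary and the limbus, and none of this is the authors' pipeline.

```python
import cv2
import numpy as np

def unwrap_iris(eye_gray, center_xy, iris_radius_px, out_size=(360, 64)):
    """Unwrap the region around the iris centre into an (angle x radius) strip.

    eye_gray       : grayscale eye image
    center_xy      : (x, y) pupil/iris centre in pixels
    iris_radius_px : outer iris radius in pixels
    out_size       : (angular samples, radial samples) of the unwrapped strip
    """
    polar = cv2.warpPolar(
        eye_gray,
        (out_size[1], out_size[0]),          # dsize is (radial bins, angular bins)
        center_xy,
        iris_radius_px,
        cv2.INTER_LINEAR + cv2.WARP_POLAR_LINEAR,
    )
    return polar

if __name__ == "__main__":
    eye = np.random.randint(0, 256, (240, 320), dtype=np.uint8)   # stand-in eye image
    strip = unwrap_iris(eye, center_xy=(160.0, 120.0), iris_radius_px=60)
    print(strip.shape)   # (angles, radii)
```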
Gaze Tracking for Eye-Hand Coordination Training Systems in Virtual Reality
2020
HCI
Aunnoy K Mutasim, Anil Ufuk Batmaz, Wolfgang Stuerzlinger
CHI EA '20: Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems, April 2020 Pages 1–8
Eye-hand coordination training systems are used to improve user performance during fast movements in sports training. In this work, we explored gaze tracking in a Virtual Reality (VR) sports training system with a VR headset. Twelve subjects performed a pointing study with or without passive haptic feedback. Results showed that subjects spent an average of 0.55 s to visually find and another 0.25 s before their finger selected a target. We also identified that passive haptic feedback did not increase the performance of the user. Moreover, gaze tracker accuracy significantly deteriorated when subjects looked below their eye level. Our results also point out that practitioners/trainers should focus on reducing the time spent on searching for the next target to improve their performance through VR eye-hand coordination training systems. We believe that current VR eye-hand coordination training systems are ready to be evaluated with athletes.
Attention-Aware Brain Computer Interface to avoid Distractions in Augmented Reality
2020
HCI
Lisa-Marie Vortmann, Felix Putze
CHI EA '20: Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems, April 2020 Pages 1–8
Recently, the idea of using BCIs in Augmented Reality settings to operate systems has emerged. One problem of such head-mounted displays is the distraction caused by an unavoidable display of control elements even when focused on internal thoughts. In this project, we reduced this distraction by including information about the current attentional state. A multimodal smart-home environment was altered to adapt to the user’s state of attention. The system only responded if the attentional orientation was classified as "external". The classification was based on multimodal EEG and eye tracking data. Seven users tested the attention-aware system in comparison to the unaware system. We show that the adaptation of the interface improved the usability of the system. We conclude that more systems would benefit from awareness of the user’s ongoing attentional state.
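A minimal sketch of the kind of attention-state classification described here, assuming scikit-learn and pre-computed per-window features (EEG band power plus eye-tracking statistics); the feature set, window definition, and classifier choice are assumptions, not the authors' setup. On the random placeholder data the accuracy stays near chance; real labelled EEG and gaze features would be required.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

# Hypothetical per-window features: e.g. EEG alpha/theta band power plus
# eye-tracking statistics (fixation duration, saccade rate, pupil size).
rng = np.random.default_rng(7)
n_windows = 200
X = rng.standard_normal((n_windows, 6))
# Labels: 1 = externally directed attention, 0 = internally directed attention.
y = rng.integers(0, 2, size=n_windows)

clf = LinearDiscriminantAnalysis()
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean cross-validated accuracy: {scores.mean():.2f}")
```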
  • ...
  • ...