Publications

Explore a collection of publications and projects from diverse fields that cite Pupil Labs and use Pupil Labs eye tracking hardware and software in their research.

255 publications
Measuring cognitive load: heart-rate variability and pupillometry assessment
2020
Robotics, HRI
David St-Onge, N. U. Anguiozar
MSECP'20 Workshop ICMI '20 Companion, October 25–29, 2020, Virtual Event, Netherlands
Cognitive load covers a wide field of study that has attracted the interest of many disciplines, such as neuroscience, psychology, and computer science, for decades. With the growing impact of human factors in robotics, many more are diving into the topic, looking in particular for a way to adapt the control of an autonomous system to the cognitive load of its operator. Theoretically, this can be achieved from heart-rate variability measurements, brain-wave monitoring, pupillometry, or even skin conductivity. This work introduces some recent algorithms to analyze data from the first two and assesses some of their limitations.
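The heart-rate variability measurements mentioned in this abstract are commonly summarized with time-domain statistics such as RMSSD, the root mean square of successive differences between inter-beat (RR) intervals. The sketch below is a generic illustration of that standard statistic, not the authors' implementation; the function name is ours.

```python
import math

def rmssd(rr_intervals_ms):
    """Root mean square of successive differences of RR (inter-beat) intervals.

    A standard time-domain heart-rate variability statistic; changes in it
    are often used as a proxy for changes in cognitive load.
    """
    diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))
```

For example, `rmssd([800, 810, 790])` averages the squared successive differences (10 ms and -20 ms) before taking the square root.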
Feeling Uncertain—Effects of a Vibrotactile Belt that Communicates Vehicle Sensor Uncertainty
2020
Transportation Safety, Automotive, HCI
Matti Krüger, Tom Driessen, Christiane B. Wiebel-Herboth, Joost C. F. de Winter, Heiko Wersing
Information 2020
With the rise of partially automated cars, drivers are more and more required to judge the degree of responsibility that can be delegated to vehicle assistant systems. This can be supported by utilizing interfaces that intuitively convey real-time reliabilities of system functions such as environment sensing. We designed a vibrotactile interface that communicates spatiotemporal information about surrounding vehicles and encodes a representation of spatial uncertainty in a novel way. We evaluated this interface in a driving simulator experiment with high and low levels of human and machine confidence, respectively caused by simulated degraded vehicle sensor precision and limited human visibility range. Thereby we were interested in whether drivers (i) could perceive and understand the vibrotactile encoding of spatial uncertainty, (ii) would subjectively benefit from the encoded information, (iii) would be disturbed in cases of information redundancy, and (iv) would gain objective safety benefits from the encoded information. To measure subjective understanding and benefit, a custom questionnaire, Van der Laan acceptance ratings and NASA TLX scores were used. To measure the objective benefit, we computed the minimum time-to-contact as a measure of safety and gaze distributions as an indicator for attention guidance. Results indicate that participants were able to understand the encoded uncertainty and spatiotemporal information and purposefully utilized it when needed. The tactile interface provided meaningful support despite sensory restrictions. By encoding spatial uncertainties, it successfully extended the operating range of the assistance system.
Assessment of Implicit and Explicit Measures of Mental Workload in Working Situations: Implications for Industry 4.0
2020
Psychology, Cognitive Science
Mingardi, Michele, Patrik Pluchino, Davide Bacchin, Chiara Rossato, and Luciano Gamberini
Applied Sciences 10, no. 18
Nowadays, in the context of Industry 4.0, advanced working environments aim at achieving a high degree of human–machine collaboration. This phenomenon occurs, on the one hand, through the correct interpretation of operators’ data by machines that can adapt their functioning to support workers, and on the other hand, by ensuring the transparency of the actions of the system itself. This study used an ad hoc system that allowed the co-registration of a set of participants’ implicit and explicit (I/E) data in two experimental conditions that varied in the level of mental workload (MWL). Findings showed that the majority of the considered I/E measures were able to discriminate the different task-related mental demands and some implicit measures were capable of predicting task performance in both tasks. Moreover, self-reported measures showed that participants were aware of such differences in MWL. Finally, the paradigm’s ecology highlights that task and environmental features may affect the reliability of the various I/E measures. Thus, these factors have to be considered in the design and development of advanced adaptive systems within the industrial context.
The limits of color awareness during active, real-world vision
2020
Psychology, Cognitive Science, Neuroscience
Michael A. Cohen, Thomas L. Botch, and Caroline E. Robertson
PNAS June 16, 2020 117 (24) 13821-13827; first published June 8, 2020
Color ignites visual experience, imbuing the world with meaning, emotion, and richness. As soon as an observer opens their eyes, they have the immediate impression of a rich, colorful experience that encompasses their entire visual world. Here, we show that this impression is surprisingly inaccurate. We used head-mounted virtual reality (VR) to place observers in immersive, dynamic real-world environments, which they naturally explored via saccades and head turns. Meanwhile, we monitored their gaze with in-headset eye tracking and then systematically altered the visual environments such that only the parts of the scene they were looking at were presented in color and the rest of the scene (i.e., the visual periphery) was entirely desaturated. We found that observers were often completely unaware of these drastic alterations to their visual world. In the most extreme case, almost a third of observers failed to notice when less than 5% of the visual display was presented in color. This limitation on perceptual awareness could not be explained by retinal neuroanatomy or previous studies of peripheral visual processing using more traditional psychophysical approaches. In a second study, we measured color detection thresholds using a staircase procedure while a set of observers intentionally attended to the periphery. Still, we found that observers were unaware when a large portion of their field of view was desaturated. Together, these results show that during active, naturalistic viewing conditions, our intuitive sense of a rich, colorful visual world is largely incorrect.
Assessing Cognitive Load via Pupillometry
2020
Cognitive Science
Pavel Weber, Franca Rupprecht, Stefan Wiesen, Bernd Hamann, Achim Ebert
A fierce search is underway for a reliable, non-intrusive, and real-time capable method for assessing a person’s experienced cognitive load. Software systems capable of adapting their complexity to the mental demand of their users would be beneficial in a variety of domains. The only disclosed algorithm that seems to reliably detect cognitive load in pupillometry signals – the Index of Pupillary Activity (IPA) – has not yet been sufficiently validated. We take a first step in validating the IPA by applying it to a working memory experiment with finely granulated levels of difficulty, and comparing the results to traditional pupillometry metrics analyzed in cognitive research. Our findings confirm the significant positive correlation between task difficulty and the IPA reported by its authors.
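The published IPA is computed from a wavelet decomposition of the pupil-diameter signal. Purely as an illustration of the underlying idea (counting high-frequency pupil oscillations per unit time), here is a much-simplified proxy; this is not the IPA algorithm itself, and the function name is ours.

```python
def pupil_activity_rate(diameters, sample_rate_hz):
    """Count rapid oscillations in a pupil-diameter trace, per second.

    Approximates "high-frequency pupillary activity" by counting sign
    changes in the first difference of the signal. The real IPA instead
    counts wavelet modulus maxima after thresholding.
    """
    diffs = [b - a for a, b in zip(diameters, diameters[1:])]
    sign_changes = sum(1 for d0, d1 in zip(diffs, diffs[1:]) if d0 * d1 < 0)
    duration_s = len(diameters) / sample_rate_hz
    return sign_changes / duration_s
```

A perfectly steady trace yields a rate of zero, while a trace that flips direction every sample yields the maximum rate for its length.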
Tracking visual search demands and memory load through pupil dilation
2020
Cognitive Science
Moritz Stolte, Benedikt Gollan, Ulrich Ansorge
Journal of Vision June 2020, Vol.20, 21
Continuously tracking cognitive demands via pupil dilation is a desirable goal for the monitoring and investigation of cognitive performance in applied settings where the exact time point of mental engagement in a task is often unknown. Yet, hitherto no experimentally validated algorithm exists for continuously estimating cognitive demands based on pupil size. Here, we evaluated the performance of a continuously operating algorithm that is agnostic of the onset of the stimuli and derives them by way of retrospectively modeling attentional pulses (i.e., onsets of processing). We compared the performance of this algorithm to a standard analysis of stimulus-locked pupil data. The pupil data were obtained while participants performed visual search (VS) and visual working memory (VWM) tasks with varying cognitive demands. In Experiment 1, VS was performed during the retention interval of the VWM task to assess interactive effects between search and memory load on pupil dilation. In Experiment 2, the tasks were performed separately. The results of the stimulus-locked pupil data demonstrated reliable increases in pupil dilation due to high VWM load. VS difficulty only affected pupil dilation when simultaneous memory demands were low. In the single task condition, increased VS difficulty resulted in increased pupil dilation. Importantly, online modeling of pupil responses was successful on three points. First, there was good correspondence between the modeled and stimulus-locked pupil dilations. Second, stimulus onsets could be approximated from the derived attentional pulses to a reasonable extent. Third, cognitive demands could be classified above chance level from the modeled pupil traces in both tasks.
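Stimulus-locked pupil analyses like the one described above typically subtract a pre-stimulus baseline from each trial's trace before comparing conditions. The following is a minimal sketch of that standard preprocessing step, not the authors' retrospective modeling algorithm; the function name is ours.

```python
def baseline_correct(trace, n_baseline):
    """Subtractive baseline correction for one trial's pupil trace.

    Subtracts the mean of the first n_baseline samples (the pre-stimulus
    window) from every sample, so dilations are expressed relative to the
    pupil size just before stimulus onset.
    """
    baseline = sum(trace[:n_baseline]) / n_baseline
    return [sample - baseline for sample in trace]
```

After correction, the pre-stimulus window averages zero and any subsequent dilation reads directly as a change from baseline.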
Eye movements in real-life search are guided by task-irrelevant working-memory content
2020
Cognitive Science
Cherie Zhou, Monicque M. Lorist, Sebastiaan Mathot
bioRxiv preprint
Attention is automatically guided towards stimuli that match the contents of working memory. This has been studied extensively using simplified computer tasks, but it has never been investigated whether (yet often assumed that) memory-driven guidance also affects real-life search. Here we tested this open question in a naturalistic environment that closely resembles real life. In two experiments, participants wore a mobile eye-tracker, and memorized a color, prior to a search task in which they looked for a target word among book covers on a bookshelf. The memory color was irrelevant to the search task. Nevertheless, we found that participants' gaze was strongly guided towards book covers that matched the memory color. Crucially, this memory-driven guidance was evident from the very start of the search period. These findings support that attention is guided towards working-memory content in real-world search, and that this is fast and therefore likely reflecting an automatic process.
SteadEye-Head—Improving MARG-Sensor Based Head Orientation Measurements Through Eye Tracking Data
2020
Medical
Lukas Wöhle, Marion Gebhard
Sensors 20, no. 10 (2020)
This paper presents the use of eye tracking data in Magnetic, Angular Rate, and Gravity (MARG)-sensor based head orientation estimation. The approach presented here can be deployed in any motion measurement that includes MARG and eye tracking sensors (e.g., rehabilitation robotics or medical diagnostics). The challenge in these mostly indoor applications is the presence of magnetic field disturbances at the location of the MARG-sensor. In this work, eye tracking data (visual fixations) are used to enable zero orientation change updates in the MARG-sensor data fusion chain. The approach is based on a MARG-sensor data fusion filter, an online visual fixation detection algorithm as well as a dynamic angular rate threshold estimation for low latency and adaptive head motion noise parameterization. In this work we use an adaptation of Madgwick's gradient descent filter for MARG-sensor data fusion, but the approach could be used with any other data fusion process. The presented approach does not rely on additional stationary or local environmental references and is therefore self-contained. The proposed system is benchmarked against a Qualisys motion capture system, a gold standard in human motion analysis, showing improved heading accuracy for the MARG-sensor data fusion up to a factor of 0.5 while magnetic disturbance is present.
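The core idea of a "zero orientation change update" can be sketched very simply: while the wearer is fixating and the measured angular rate is below a threshold, the head is assumed stationary and the (possibly magnetically disturbed) orientation update is skipped. The toy single-axis function below illustrates only this gating idea; it is not the authors' filter, and names and the default threshold are ours.

```python
def update_heading(heading_deg, gyro_dps, dt_s, fixating, rate_threshold_dps=2.0):
    """Integrate gyro yaw rate into a heading estimate, unless a visual
    fixation plus a low angular rate imply zero head-orientation change,
    in which case the current heading is held (a zero-change update)."""
    if fixating and abs(gyro_dps) < rate_threshold_dps:
        return heading_deg  # hold: fixation + low rate => assume no rotation
    return heading_deg + gyro_dps * dt_s  # otherwise integrate the rate
```

In the paper's full system this gate sits inside a Madgwick-style quaternion filter and the threshold is estimated adaptively rather than fixed.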
The impact of slippage on the data quality of head-worn eye trackers
2020
Eye Tracking Algorithms
Diederick C. Niehorster, Thiago Santini, Roy S. Hessels, Ignace TC Hooge, Enkelejda Kasneci, Marcus Nyström
Behavior Research Methods (2020)
Mobile head-worn eye trackers allow researchers to record eye-movement data as participants freely move around and interact with their surroundings. However, participant behavior may cause the eye tracker to slip on the participant’s head, potentially strongly affecting data quality. To investigate how this eye-tracker slippage affects data quality, we designed experiments in which participants mimic behaviors that can cause a mobile eye tracker to move. Specifically, we investigated data quality when participants speak, make facial expressions, and move the eye tracker. Four head-worn eye-tracking setups were used: (i) Tobii Pro Glasses 2 in 50 Hz mode, (ii) SMI Eye Tracking Glasses 2.0 60 Hz, (iii) Pupil-Labs’ Pupil in 3D mode, and (iv) Pupil-Labs’ Pupil with the Grip gaze estimation algorithm as implemented in the EyeRecToo software. Our results show that whereas gaze estimates of the Tobii and Grip remained stable when the eye tracker moved, the other systems exhibited significant errors (0.8–3.1° increase in gaze deviation over baseline) even for the small amounts of glasses movement that occurred during the speech and facial expressions tasks. We conclude that some of the tested eye-tracking setups may not be suitable for investigating gaze behavior when high accuracy is required, such as during face-to-face interaction scenarios. We recommend that users of mobile head-worn eye trackers perform similar tests with their setups to become aware of their characteristics. This will enable researchers to design experiments that are robust to the limitations of their particular eye-tracking setup.
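Gaze-deviation figures like the 0.8–3.1° reported above are angles between 3D gaze direction vectors (for example, baseline versus post-slippage gaze). A small generic helper for computing such an angle (our own naming, not code from the paper):

```python
import math

def angular_deviation_deg(v1, v2):
    """Angle in degrees between two 3D gaze direction vectors."""
    dot = sum(a * b for a, b in zip(v1, v2))
    norm1 = math.sqrt(sum(a * a for a in v1))
    norm2 = math.sqrt(sum(b * b for b in v2))
    # Clamp to [-1, 1] to guard against floating-point overshoot in acos.
    cos_angle = max(-1.0, min(1.0, dot / (norm1 * norm2)))
    return math.degrees(math.acos(cos_angle))
```

Identical directions give 0°, and orthogonal directions give 90°; a slippage-induced error of a few degrees corresponds to nearly parallel vectors.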
Use of eye tracking device to evaluate the driver’s behaviour and the infrastructures quality in relation to road safety
2020
Transportation Safety, Automotive
David Vetturi, Michela Tiboni, Giulio Maternini, Michela Bonera
Transportation Research Procedia Volume 45, 2020, Pages 587-595
Eye tracking makes it possible to obtain important information about drivers’ behaviour during the driving activity, by employing a device that monitors the movements of the eye and therefore the user’s point of observation. This paper explains how analysing drivers’ behaviour through their eye movements permits an evaluation of infrastructure quality in terms of road safety. Driver behaviour analyses were conducted in urban areas, examining the observation targets (cars, pedestrians, road signs, distraction elements) in quantitative terms (time spent fixating each target). In particular, roundabout intersections and rectilinear segments of urban arterials were examined, and records of seven drivers’ behaviour were collected, in order to have significant statistical variability. Only young people were considered in this study. The analyses carried out made it possible to assess how different types of infrastructure influence the behaviour of road users, in terms of the safety performance given by their design. In particular, quantitative analyses were carried out on driving time dedicated to observing attention rather than distraction targets. From a statistical point of view, the relationship between driver characteristics, weather conditions, and infrastructure on the one hand, and driving behaviour (travelling speed and attention/inattention time) on the other, was analysed with the ANOVA method.