Explore a collection of publications and projects, from diverse fields, that cite Pupil Labs and use Pupil Labs eye tracking hardware and software in their research. You can also find this collection on Zotero.
467 publications
Eye-gaze behaviour of expert and novice surfers in a simulated surf environment
Sports Science
Ian M. Luke; David L. Neumann; Matthew J. Stainer; Leigh Ellen Potter; Robyn L. Moffitt
Psychology of Sport and Exercise
Skilled performance in sport often relies on looking at the right place at the right time. Differences in visual behaviour can thus characterise expertise. The current study examined visual attention associated with surfing expertise. Expert (n = 12) and novice (n = 12) surfers viewed 360-degree surfing videos in a head-mounted display. Eye-gaze, presence, and engagement were measured. Experts were faster to detect approaching high and low waves, spent more time overall attending to high-performance-value areas-of-interest (AOIs; pocket, shoulder, lip), and were more physically engaged. Group differences were not found for presence or simulator sickness. Outcomes show that surfing expertise is associated with more effective visual attention to cues informing wave approach and wave dynamics. Experts look at these areas earlier than novices, and for more time overall. The findings suggest performance advantages from early planning of motor actions, along with moment-to-moment adjustments while surfing.
Distance effects on visual search and visually guided freehand interaction on large displays
Xiaolong Lou; Lili Fu; Lamei Yan; Xiangdong Li; Preben Hansen
International Journal of Industrial Ergonomics
Different from mouse-based and touch-based interactions at a static distance, motion-sensing interaction on a large display is typically performed at varying distances ranging from an arm's length to several metres. To investigate the effect of distance on visual search and freehand interaction performance, an empirical experiment was conducted; 30 participants were recruited to complete a series of target search and freehand selection tasks on large displays, which were 1.6 and 2.4 m wide, respectively. The results indicated that (1) the user-preferred viewing distance was positively related to the physical size of the display: a larger display size corresponded to a larger viewing distance. (2) The viewing distance had a two-sided effect on visual search time efficiency: at close range, increasing distance improved search time efficiency, but at a farther range, efficiency decreased. (3) An optimal field of view at which visual search was most efficient was found; (4) however, increasing the distance lowered freehand interaction efficiency and accuracy. Changing the distance also caused variations in performance across divided large display areas: (5) visual search efficiency on the upper area was higher than on the lower area, and increasing the distance reduced the difference; (6) freehand interaction efficiency and accuracy on the lower area were higher than on the upper area, and increasing the distance also reduced this difference. Implications were discussed for building more efficient and user-friendly large-display-based user interfaces.
The Effect of Visualisation Level and Situational Visibility in Co-located Digital Musical Ensembles
Cognitive Psychology
Florent Berthaut; Luke Dahl
NIME 2022
Digital Musical Instruments (DMIs) offer new opportunities for collaboration, such as exchanging sounds or sharing controls between musicians. However, in the context of spontaneous and heterogeneous orchestras, such as jam sessions, collective music-making may become challenging due to the diversity and complexity of the DMIs and the musicians’ unfamiliarity with the others’ instruments. In particular, the potential lack of visibility into each musician’s respective contribution to the sound they hear, i.e. who is playing what, might impede their capacity to play together. In this paper, we propose to augment each instrument in a digital orchestra with visual feedback extracted in real-time from the instrument’s activity, in order to increase this awareness. We present the results of a user study in which we investigate the influence of visualisation level and situational visibility during short improvisations by groups of three musicians. Our results suggest that internal visualisations of all instruments displayed close to each musician’s instrument provide the best awareness.
Coeliac consumers’ expectations and eye fixations on commercial gluten-free bread packages
P. Puerta; E. Carrillo; C. Badia-Olmos; L. Laguna; C. M. Rosell; A. Tárrega
The aim of this work was to investigate coeliac consumers' expected acceptability and trust in commercial bread packages showing different brands and gluten-free claims, in relation to their gaze fixations when observing the package. For that, ten commercial gluten-free breads were used (varying in brand and presence of a certification logo). Eighty-six coeliac consumers or relatives rated the expected acceptability and trust of each bread, and eye-tracking was used to register the number of fixations on different elements of the packages. Brand affected expected acceptability, which was higher for breads from specific gluten-free brands. Trust conferred was high for all breads. The certification logo did not affect consumers' trust, but it conditioned their fixations: when the logo was not present, they looked more at the ingredients or nutritional facts. Both factors (brand and certification logo) were shown to affect coeliac consumers' response to gluten-free food, including expected acceptability, trust, and how they looked at the different package elements.
Personality trait prediction by machine learning using physiological data and driving behaviour
Machine Learning
Morgane Evin; Antonio Hidalgo-Munoz; Adolphe James Béquet; Fabien Moreau; Helène Tattegrain; Catherine Berthelon; Alexandra Fort; Christophe Jallais
Machine Learning with Applications
This article explores the influence of personality on physiological data while driving in reaction to near crashes and risky situations using Machine Learning (ML). The objective is to improve driving assistance systems by considering drivers' characteristics. Methods: Physiological and behavioral data were recorded in sixty-three healthy volunteers during risky urban situations and analyzed using 5 ML algorithms to discriminate the driver's personality according to the Big Five Inventory and STAI trait. A seven-step process was performed, including data pre-processing, Electrodermal Activity (EDA) time-window selection (comparison of a one-by-one backward and forward approach with a pseudo-wrapper), personality trait assessment, algorithm input parameter optimization, algorithm comparison, and personality trait cluster prediction. ROC Area Under the Curve (AUC) was used to describe improvement. Results/discussion: The pseudo-wrapper/all-possibilities method comparison resulted in 8.3% on average for all personality traits and all algorithms (% of ROC AUC of the backward and forward approach). The ROC AUC for detection of personality ranged from 0.968 to 0.974, with better detection of Openness, Agreeableness, and Neuroticism. Use of the association between Neuroticism, Extraversion, and Conscientiousness previously defined in the literature slightly improved personality detection (maximum ROC AUC of 0.961 to 0.993 for clusters). Results are discussed in terms of contribution to driving aids. Conclusion: This study is one of the first to use machine learning techniques to detect personality traits using behavioral and physiological measures in a driving context. Additionally, it questions the input parameter optimization approach, time-window selection, as well as clustering and association of personality traits for detection improvement.
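The abstract above reports classifier performance as ROC AUC. For readers unfamiliar with the metric, here is a minimal, self-contained sketch of how ROC AUC can be computed from scores and binary labels using its rank interpretation (the probability that a randomly chosen positive outranks a randomly chosen negative). The data below is hypothetical and has no relation to the study's actual values or pipeline.

```python
def roc_auc(labels, scores):
    """ROC AUC via the pairwise-comparison (Mann-Whitney U) formulation.

    labels: iterable of 0/1 class labels.
    scores: iterable of classifier scores, aligned with labels.
    Ties between a positive and a negative score count as half a win.
    """
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical example: two positives, two negatives.
print(roc_auc([1, 1, 0, 0], [0.9, 0.8, 0.3, 0.1]))  # perfect ranking -> 1.0
print(roc_auc([1, 1, 0, 0], [0.9, 0.2, 0.8, 0.1]))  # one misranked pair -> 0.75
```

An AUC of 0.5 corresponds to chance-level ranking, which is why the values near 0.97 reported above indicate strong discrimination.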
Smartphone gaming induces dry eye symptoms and reduces blinking in school-aged children
Ngozi Charity Chidi-Egboka; Isabelle Jalbert; Blanka Golebiowski
Smartphone use by children is rising rapidly, but its ocular surface impact is unknown. This study examined the effect of smartphone use on blinking, symptoms, and tear function in children.
Deep-SAGA: a deep-learning-based system for automatic gaze annotation from eye-tracking data
Eye tracking methods
Oliver Deane; Eszter Toth; Sang-Hoon Yeo
Behavior Research Methods
With continued advancements in portable eye-tracker technology liberating experimenters from the restraints of artificial laboratory designs, research can now collect gaze data from real-world, natural navigation. However, the field lacks a robust method for achieving this, as past approaches relied upon time-consuming manual annotation of eye-tracking data, while previous attempts at automation lack the necessary versatility for in-the-wild navigation trials consisting of complex and dynamic scenes. Here, we propose a system capable of informing researchers of where and what a user’s gaze is focused upon at any one time. The system achieves this by first running footage recorded on a head-mounted camera through a deep-learning-based object detection algorithm called Masked Region-based Convolutional Neural Network (Mask R-CNN). The algorithm’s output is combined with frame-by-frame gaze coordinates measured by an eye-tracking device synchronized with the head-mounted camera to detect and annotate, without any manual intervention, what a user looked at for each frame of the provided footage. The effectiveness of the presented methodology was assessed by comparing the system output with that of manual coders. High levels of agreement between the two validated the system as a preferable data collection technique, as it was capable of processing data at a significantly faster rate than its human counterpart. Support for the system’s practicality was then further demonstrated via a case study exploring the mediatory effects of gaze behaviors on an environment-driven attentional bias.
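The core step the abstract describes, combining per-frame gaze coordinates with object detections to label what the user looked at, can be sketched as a simple point-in-box lookup. This is a hypothetical illustration, not the authors' Deep-SAGA implementation: the function names, data layout, and bounding-box format below are assumptions, and a real pipeline would use Mask R-CNN's segmentation masks rather than plain boxes.

```python
def annotate_gaze(gaze_points, detections):
    """Label each frame's gaze point with the detected object containing it.

    gaze_points: dict mapping frame index -> (x, y) gaze coordinate.
    detections: dict mapping frame index -> list of
        (label, x0, y0, x1, y1) bounding boxes (e.g. from an object detector).
    Returns a dict mapping frame index -> object label, or None if the
    gaze point falls inside no detection for that frame.
    """
    annotations = {}
    for frame, (gx, gy) in gaze_points.items():
        label = None
        for name, x0, y0, x1, y1 in detections.get(frame, []):
            if x0 <= gx <= x1 and y0 <= gy <= y1:
                label = name
                break  # take the first matching detection
        annotations[frame] = label
    return annotations

# Hypothetical two-frame example.
gaze = {0: (50, 50), 1: (200, 200)}
boxes = {0: [("sign", 0, 0, 100, 100)], 1: [("door", 0, 0, 100, 100)]}
print(annotate_gaze(gaze, boxes))  # {0: 'sign', 1: None}
```

In practice, overlapping detections would need a tie-break rule (e.g. smallest box or highest detector confidence) rather than the first-match shortcut used here.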
Wide field topographical choroidal thickness measurement following binocular, eye-tracking mapped, short-term optical defocus
Peter Wagner; Arthur Ho; Juno Kim
Investigative Ophthalmology & Visual Science
Changes in choroidal thickness observed after short-term optical defocus support the theory that eye growth is optically guided. Most previous studies used a monocular optical appliance to manipulate the potential myogenic factors, which may introduce confounders by disrupting the natural functionality of the visual system. We explored whether optical defocus generated by an ecological environment can predict choroidal thickness changes while habitual binocular vision is maintained. A custom 3D eye tracker (Pupil Labs Core, Pupil Labs GmbH, Germany) monitored compliance with fixation on the hand-held device (at ~3D, 30 min) to estimate dioptric demand over a wide field of view (with magnitudes of ~0.5D up to 4.5D) using a miniature time-of-flight camera (Pico Flexx, PMD Technologies AG, Germany). The spatially diverse peripheral environment around the hand-held device generated optical defocus at the retinal periphery. Temporal accumulation of dioptric demand was mapped onto a wide field of choroidal locations using gaze direction estimates from the 3D eye tracker. Intra-participant (3 repeats per participant, pre and post intervention) and inter-participant topographical choroidal maps were landmark-matched and subtended a congruent visual field. Local and regional choroidal thickness exposed to different dioptric defocus were evaluated for change with a global, sensitivity-enhanced model. The average intra-participant random error of choroidal thickness estimation (1×s.d. at each pixel within the topographical map, n ~100k) could be reduced by ~20% to 5.2 ± 2.1 µm by truncation of outliers. No significant difference in thickness change was found for choroidal regions exposed to different optical defocus (GLM n=21, eye p=0.62, area p=0.21, eye*area p=0.46). Some apparently randomly positioned loci showed significant differences (Fig. 1). A change in topographical choroidal thickness consistent with local relative dioptric defocus was not found.
The proposed topographical choroidal thickness model showed a sensitivity comparable to theoretical longitudinal resolution of swept-source OCT (wavelength 1060 nm). The presented methods and device can contribute to a better understanding of the ecologically valid dioptric landscape, and with further development, may provide further insight into short-term myopia models. This abstract was presented at the 2022 ARVO Annual Meeting, held in Denver, CO, May 1-4, 2022, and virtually.
Anesthesia personnel’s visual attention regarding patient monitoring in simulated non-critical and critical situations, an eye-tracking study
Tadzio R. Roche; Elise J. C. Maas; Sadiq Said; Julia Braun; Carl Machado; Donat R. Spahn; Christoph B. Noethiger; David W. Tscholl
BMC Anesthesiology
Cognitive ergonomics design of patient monitoring may reduce human-factor errors in high-stress environments. Eye-tracking is a suitable tool to gain insight into the distribution of visual attention of healthcare professionals interacting with patient monitors, which may facilitate their further development.
The influence of visitor-based social contextual information on visitors’ museum experience
Taeha Yi; Hao-yun Lee; Joosun Yum; Ji-Hyun Lee
Visitor-centered approaches have been widely discussed in the museum experience research field. One notable approach was suggested by Falk and Dierking, who defined museum visitor experience as having a physical, personal, and social context. Many studies have been conducted based on this approach, yet the interactions between personal and social contexts have not been fully researched. Since previous studies related to these interactions have focused on the face-to-face conversation of visitor groups, attempts to provide the social information contributed by visitors have not progressed. To fill this gap, we examined such interactions in collaboration with the Lee-Ungno Art Museum in South Korea. Specifically, we investigated the influence of individual visitors’ social contextual information about their art museum experience. This data, which we call “visitor-based social contextual information” (VSCI), is the social information individuals provide—feedback, reactions, or behavioral data—that can be applied to facilitate interactions in a social context. The study included three stages: In Stage 1, we conducted an online survey for a preliminary investigation of visitors’ requirements for VSCI. In Stage 2, we designed a mobile application prototype. Finally, in Stage 3, we used the prototype in an experiment to investigate the influence of VSCI on museum experience based on visitors’ behaviors and reactions. Our results indicate that VSCI positively impacts visitors’ museum experiences. Using VSCI enables visitors to compare their thoughts with others and gain insights about art appreciation, thus allowing them to experience the exhibition from new perspectives. The results of this novel examination of a VSCI application suggest that it may be used to guide strategies for enhancing the experience of museum visitors.