Publications

Explore a collection of publications and projects from diverse fields that cite Pupil Labs and use Pupil Labs eye tracking hardware and software in their research. You can also find this collection on Zotero.

Extracting Decision-Making Features from the Unstructured Eye Movements of Clinicians on Glaucoma OCT Reports and Developing AI Models to Classify Expertise
2023
Artificial Intelligence, Clinical, Ophthalmology
M. Akerman; S. Choudhary; J.M. Liebmann; G.A. Cioffi; R.W. Chen; K.A. Thakoor
Frontiers in Medicine
This study aimed to investigate the eye movement patterns of ophthalmologists with varying expertise levels during the assessment of optical coherence tomography (OCT) reports for glaucoma detection. Objectives included evaluating eye gaze metrics and patterns as a function of ophthalmic education, deriving novel features from eye tracking, and developing binary classification models for disease detection and expertise differentiation. Thirteen ophthalmology residents, fellows, and clinicians specializing in glaucoma participated in the study. Junior residents had less than 1 year of experience, while senior residents had 2–3 years of experience. The expert group consisted of fellows and faculty with 3 to 30+ years of experience. Each participant was presented with a set of 20 Topcon OCT reports (10 healthy and 10 glaucomatous) and was asked to determine the presence or absence of glaucoma and rate their diagnostic confidence. The eye movements of each participant were recorded as they diagnosed the reports using a Pupil Labs Core eye tracker. Expert ophthalmologists exhibited more refined and focused eye fixations, particularly on specific regions of the OCT reports, such as the retinal nerve fiber layer (RNFL) probability map and the circumpapillary RNFL B-scan. The binary classification models developed using the derived features demonstrated accuracy of up to 94.0% in differentiating between expert and novice clinicians. The derived features and trained binary classification models hold promise for improving the accuracy of glaucoma detection and distinguishing between expert and novice ophthalmologists. These findings have implications for enhancing ophthalmic education and for the development of effective diagnostic tools.
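As a minimal sketch of the kind of pipeline this abstract describes, the snippet below derives simple per-reader fixation features and trains a binary expert-vs-novice classifier. The feature set, the logistic-regression model, and all data here are illustrative assumptions, not the authors' exact method; synthetic arrays stand in for real gaze recordings.

```python
# Illustrative sketch (not the authors' pipeline): derive simple per-reader
# fixation features and train a binary expert-vs-novice classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def reader_features(fixations):
    """fixations: (n, 3) array of (x, y, duration_s) for one reader/report."""
    durations = fixations[:, 2]
    return np.array([
        len(fixations),                        # fixation count
        durations.mean(),                      # mean fixation duration
        durations.sum(),                       # total dwell time
        fixations[:, :2].std(axis=0).mean(),   # spatial spread of fixations
    ])

# synthetic stand-in data: 40 reader/report pairs with random "fixations"
rng = np.random.default_rng(0)
fixation_sets = [rng.random((rng.integers(20, 60), 3)) for _ in range(40)]
labels = rng.integers(0, 2, size=40)           # 1 = expert, 0 = novice

X = np.stack([reader_features(f) for f in fixation_sets])
clf = LogisticRegression(max_iter=1000)
print(cross_val_score(clf, X, labels, cv=5).mean())  # held-out accuracy
```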
Eye Tracking Study of Visual Attention of Children with Hearing Impairments in a Learning Situation
2023
Developmental Psychology
Yana K. Smirnova
Experimental Psychology (Russia)
Potential mechanisms underlying atypical joint attention that impede effective learning are analyzed using the example of hearing impairment. A sample of preschool children with hearing impairment after cochlear implantation (sensorineural hearing loss, ICD-10 class H90) was studied. An experimental situation was created to trace the learning difficulties associated with joint attention skills in children with hearing impairments. While the children completed a training task jointly with an adult, their eye movements were recorded with a portable Pupil Headset eye tracker. The study identified and visualized markers of oculomotor activity that impede effective learning: visual attention distributed over a wide area of the visual field rather than in a focused mode, and eye movement trajectories and fixation positions showing a preference for non-social signals and non-target objects, along with difficulties in switching attention from one object to another. The main manifestation of joint attention deficit in preschoolers with hearing impairment is a decrease in the volume and duration of stably maintained synchronous perception during learning. Parameters of oculomotor activity, namely the duration and number of fixations, can serve as an indicator of the ability of a child with hearing impairment to maintain attention to the sample form and as a diagnostic indicator of the likely number of errors during learning. The role of multimodal means of maintaining visual attention to the sample in teaching children with hearing impairment is shown. Attention of a child with hearing impairment to the face of an adult was found to be part of joint attention to the object; it improves the effectiveness of learning and is associated with longer visual attention to the object (the teaching pattern).
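The two indicators this abstract names, fixation count and fixation duration, are typically obtained with a dispersion-based fixation detector. Below is a sketch of a standard I-DT-style detector; the dispersion and duration thresholds are illustrative assumptions, not values from the study, and the synthetic gaze trace stands in for real recordings.

```python
# Sketch of a dispersion-threshold (I-DT-style) fixation detector, yielding
# fixation count and durations. Thresholds are illustrative assumptions.
import numpy as np

def idt_fixations(t, x, y, max_dispersion=0.02, min_duration=0.1):
    """t: timestamps (s); x, y: normalized gaze; returns fixation durations (s)."""
    fixations, start = [], 0
    while start < len(t):
        end = start
        # grow the window while gaze stays within the dispersion threshold
        while end + 1 < len(t):
            wx, wy = x[start:end + 2], y[start:end + 2]
            if (wx.max() - wx.min()) + (wy.max() - wy.min()) > max_dispersion:
                break
            end += 1
        if t[end] - t[start] >= min_duration:
            fixations.append(t[end] - t[start])
            start = end + 1
        else:
            start += 1
    return fixations

# synthetic demo: 2 s of stable gaze sampled at 100 Hz
t = np.arange(0, 2, 0.01)
x = np.full_like(t, 0.5) + np.random.default_rng(1).normal(0, 0.002, t.size)
y = np.full_like(t, 0.5)
durs = idt_fixations(t, x, y)
print(len(durs), "fixations, mean duration", np.mean(durs))
```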
Task-related gaze behaviour in face-to-face dyadic collaboration: Toward an interactive theory?
2023
Social Psychology
Roy S. Hessels; Martin K. Teunisse; Diederick C. Niehorster; Marcus Nyström; Jeroen S. Benjamins; Atsushi Senju; Ignace T. C. Hooge
Visual Cognition
Visual routines theory posits that vision is critical for guiding sequential actions in the world. Most studies on the link between vision and sequential action have considered individual agents, while substantial human behaviour is characterized by multi-party interaction. Here, the actions of each person may affect what the other can subsequently do. We investigated task execution and gaze allocation of 19 dyads completing a Duplo-model copying task together, while wearing the Pupil Invisible eye tracker. We varied whether all blocks were visible to both participants, and whether verbal communication was allowed. For models in which not all blocks were visible, participants seemed to coordinate their gaze: The distance between the participants' gaze positions was smaller and dyads looked longer at the model concurrently than for models in which all blocks were visible. This was most pronounced when verbal communication was allowed. We conclude that the way the collaborative task was executed depended both on whether visual information was available to both persons, and how communication took place. Modelling task structure and gaze allocation for human-human and human-robot collaboration thus requires more than the observable behaviour of either individual. We discuss whether an interactive visual routines theory ought to be pursued.
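A sketch of the dyadic coordination measure this abstract describes: the moment-to-moment distance between the two participants' gaze points, assuming both gaze streams have already been mapped into a shared (model) coordinate frame and resampled onto common timestamps. All names and data below are assumptions for illustration.

```python
# Sketch of inter-gaze distance and concurrent looking for a dyad, assuming
# gaze from both partners is in a shared coordinate frame on common timestamps.
import numpy as np

def intergaze_distance(gaze_a, gaze_b):
    """gaze_a, gaze_b: (N, 2) arrays of gaze positions on a shared surface."""
    return np.linalg.norm(gaze_a - gaze_b, axis=1)

rng = np.random.default_rng(2)
gaze_a, gaze_b = rng.random((500, 2)), rng.random((500, 2))
print("median inter-gaze distance:", np.median(intergaze_distance(gaze_a, gaze_b)))

# fraction of samples where both partners look at the model concurrently,
# given per-sample boolean "on model" flags (synthetic here)
on_a, on_b = rng.random(500) < 0.6, rng.random(500) < 0.6
print("concurrent looking fraction:", np.mean(on_a & on_b))
```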
Utility of Gaze-Control for Real-Time Surgical Training During Minimally Invasive Surgery - a Feasibility Study of Eye-Tracking Technology
2023
Clinical, HCI
H.B.T Le
British Journal of Surgery
Aim: Training in laparoscopic surgery is limited to observing what the trainee is doing on the screen and relies on verbal coaching. The idea of a physical pointer onto the screen in real time, allowing more focused supervision and improved communication, was conceptualised. The study aims to demonstrate the feasibility, and identify the challenges, of using eye-tracking glasses to produce this digital pointer on a screen. Method: Pupil Core (Pupil Labs GmbH) eye-tracking glasses were used to calibrate the edges of a computer monitor to the pupil-tracking cameras (200 Hz) and the point-of-view (POV) cameras. Using the accompanying open-source Pupil Core software, a red-dot cursor was programmed to display in real time on an accompanying screen. A video of a simulated laparoscopic cholecystectomy and geometric shapes were displayed on the screen to demonstrate the technology. Results: A red-dot pointer was successfully displayed on a screen as an overlay controlled entirely by gaze. It was able to track the edges of shapes and was used to point out anatomical structures in a video in real time. Even when the tester looked away from the screen and left the seated position, the glasses were able to recalibrate automatically and work seamlessly afterwards. The saccadic nature of eye movements meant that the tracing lines were not always smooth. Conclusions: This study demonstrates that the technology exists to make this feasible. Further optimisation in software design is required to mitigate the effects of saccadic eye movements. Further studies collecting objective data are required to demonstrate a minimum viable product.
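The mechanism a real-time gaze cursor like this can be built on is Pupil Core's Network API, which publishes surface-mapped gaze over ZeroMQ as msgpack messages. The sketch below subscribes to that stream; the surface name "screen" and the confidence threshold are assumptions, and drawing the red dot is left to the GUI layer.

```python
# Sketch: receive screen-mapped gaze from the Pupil Core Network API
# (ZeroMQ + msgpack). Surface name "screen" is an assumed configuration.
import zmq
import msgpack

ctx = zmq.Context()
pupil_remote = ctx.socket(zmq.REQ)
pupil_remote.connect("tcp://127.0.0.1:50020")  # Pupil Capture default port
pupil_remote.send_string("SUB_PORT")
sub_port = pupil_remote.recv_string()

sub = ctx.socket(zmq.SUB)
sub.connect(f"tcp://127.0.0.1:{sub_port}")
sub.setsockopt_string(zmq.SUBSCRIBE, "surfaces.screen")  # assumed surface name

while True:
    topic, payload = sub.recv_multipart()
    msg = msgpack.loads(payload)
    for gaze in msg.get("gaze_on_surfaces", []):
        if gaze["on_surf"] and gaze["confidence"] > 0.8:
            x, y = gaze["norm_pos"]  # normalized surface coords, origin bottom-left
            # hand (x, 1 - y) to the GUI layer to draw the red-dot cursor
            print(f"cursor at ({x:.2f}, {1 - y:.2f})")
```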
Effects of emotional content on social inhibition of gaze in live social and non-social situations
2023
Cognitive Psychology, Social Psychology
Laura Pasqualette; Louisa Kulke
Scientific Reports
In real-life interactions, it is crucial that humans respond adequately to others' emotional expressions. Emotion perception has so far mainly been studied in highly controlled laboratory tasks. However, recent research suggests that attention and gaze behaviour differ significantly between watching a person on a controlled laboratory screen and interacting in the real world. The current study therefore aimed to investigate effects of emotional expression on participants' gaze in social and non-social situations. We compared looking behaviour towards a confederate showing positive, neutral or negative facial expressions between live social and non-social waiting room situations. Participants looked more often and longer at the confederate on the screen than when the confederate was physically present in the room. Expressions displayed by the confederate and individual traits (social anxiety and autistic traits) of participants did not reliably relate to gaze behaviour. Indications of covert attention also occurred more often and for longer during the non-social than during the social condition. Findings indicate that social norms are a strong factor modulating gaze behaviour in social contexts. Protocol registration: The stage 1 protocol for this Registered Report was accepted in principle on September 13, 2021. The protocol, as accepted by the journal, can be found at: https://doi.org/10.6084/m9.figshare.16628290
Real-World Variation in Pupil Size during Activities of Daily Living across the Lifespan
2023
Ophthalmology
Rafael Lazar; Manuel Spitschan
34th Annual Meeting of the Society for Light Treatment and Biological Rhythms (SLTBR), 30 May–1 June
Human visual perception begins with light entering the eyes and subsequently stimulating the receptors in the retina. The pupil has the crucial task of regulating the amount of incident light by dilating or constricting (between approx. 8 and 2 mm in diameter, with the average pupil diameter decreasing with age). Pupil size not only influences the quality of the retinal image but also modulates non-visual effects of light (e.g., melatonin suppression by light). In controlled laboratory experiments, it has been established that steady-state pupil diameter under photopic conditions is primarily influenced by the activity of the intrinsically photosensitive retinal ganglion cells (ipRGCs) expressing the short-wavelength-sensitive photopigment melanopsin (λmax = 480 nm). However, there is a lack of investigations of pupil size regulation under dynamic, real-life lighting conditions across the lifespan.
Integrating gaze, image analysis, and body tracking: Foothold selection during locomotion
2023
Motor Control
Karl Muller; Dan Panfili; Jonathan S. Matthis; Kathryn Bonnen; Mary Hayhoe
Neuroscience
Relatively little is known about the way vision is used to guide locomotion in the natural world. What visual features are used to choose paths in natural complex terrain? How do walkers trade off different costs, such as getting to the goal, minimizing energy, and satisfying stability constraints? To answer these questions, it is necessary to monitor not only the eyes and the body, but also to represent the three-dimensional structure of the terrain. We used photogrammetry techniques to do this, and found substantial regularities in the choice of paths. Walkers avoid paths that involve changes in height and choose more circuitous and flatter paths. This stable tradeoff is related to the walker's leg length and reflects both energetic and stability constraints. Gaze data and path choices suggest that subjects take into account the terrain approximately 5 steps ahead, and so are planning routes as well as particular footplants. Such planning ahead allows the minimization of energetic costs. Thus, locomotor behavior in natural environments is controlled by decision mechanisms that attempt to optimize for multiple factors in the context of well-calibrated sensory and motor internal models.
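One way to formalize the trade-off this abstract reports (longer, flatter paths preferred over shorter ones with height changes) is a least-cost path search over a terrain height grid, where each step pays its planar distance plus a penalty on elevation change. This is an illustrative formalization, not the paper's fitted model; the weight `lam` and the synthetic terrain are assumptions, and raising `lam` yields flatter, more circuitous routes.

```python
# Illustrative least-cost path on a height grid: step cost = planar distance
# + lam * |height change|. Dijkstra over 8-connected grid neighbors.
import heapq
import numpy as np

def least_cost_path(height, start, goal, lam=5.0):
    rows, cols = height.shape
    dist, prev = {start: 0.0}, {}
    pq = [(0.0, start)]
    while pq:
        d, node = heapq.heappop(pq)
        if node == goal:
            break
        if d > dist[node]:
            continue  # stale queue entry
        r, c = node
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                nr, nc = r + dr, c + dc
                if (dr or dc) and 0 <= nr < rows and 0 <= nc < cols:
                    step = np.hypot(dr, dc) + lam * abs(height[nr, nc] - height[r, c])
                    if d + step < dist.get((nr, nc), np.inf):
                        dist[(nr, nc)] = d + step
                        prev[(nr, nc)] = node
                        heapq.heappush(pq, (d + step, (nr, nc)))
    path, node = [], goal
    while node != start:
        path.append(node)
        node = prev[node]
    return path[::-1]

terrain = np.random.default_rng(3).random((30, 30))  # synthetic height map
print(len(least_cost_path(terrain, (0, 0), (29, 29))), "steps in chosen path")
```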
Depth and direction effects in the prediction of static and shifted reaching goals from kinematics
2023
Motor Control
A. Bosco; M. Filippini; D. Borra; E. A. Kirchner; P. Fattori
Scientific Reports
The kinematic parameters of reach-to-grasp movements are modulated by action intentions. However, when an unexpected change in the visual target goal occurs during reaching execution, it is still unknown whether the action intention changes with the target goal modification and what the temporal structure of the target goal prediction is. We recorded the kinematics of the pointing finger and wrist during the execution of reaching movements in 23 naïve volunteers, where the targets could be located at different directions and depths with respect to the body. During movement execution, the targets could remain static for the entire duration of the movement or shift, with different timings, to another position. We performed temporal decoding of the final goals and of the intermediate trajectory from the past kinematics using a recurrent neural network. We observed a progressive increase in classification performance from the onset to the end of movement in both the horizontal and sagittal dimensions, as well as in decoding shifted targets. The classification accuracy in decoding horizontal targets was higher than that for sagittal targets. These results are useful for establishing how human and artificial agents could take advantage of the observed kinematics to optimize their cooperation in three-dimensional space.
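A minimal sketch of temporal decoding with a recurrent network, as this abstract describes: a GRU reads finger/wrist trajectories and emits a target prediction at every time step, so accuracy can be traced from movement onset to end. The architecture, layer sizes, channel count, and the 9-class target layout are illustrative assumptions, not the authors' network, and the data below are synthetic.

```python
# Minimal sketch (not the authors' architecture): per-time-step decoding of
# reach targets from kinematics with a GRU.
import torch
import torch.nn as nn

class TemporalDecoder(nn.Module):
    def __init__(self, n_features=6, n_targets=9, hidden=64):
        super().__init__()
        self.rnn = nn.GRU(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_targets)

    def forward(self, kin):            # kin: (batch, time, features)
        h, _ = self.rnn(kin)           # hidden state at every time step
        return self.head(h)            # per-time-step target logits

# synthetic batch: 8 trials, 200 samples, 6 kinematic channels
# (e.g., x/y/z of finger and wrist)
model = TemporalDecoder()
kin = torch.randn(8, 200, 6)
logits = model(kin)                    # (8, 200, 9)

# accuracy as a function of time within the movement (dummy labels here)
labels = torch.zeros(8, 200, dtype=torch.long)
acc_over_time = (logits.argmax(-1) == labels).float().mean(0)
print(acc_over_time.shape)             # torch.Size([200])
```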