Publications

Explore a collection of publications and projects from diverse fields that cite Pupil Labs and use Pupil Labs eye tracking hardware and software in their research.

358 publications
A Field Dependence-Independence Perspective on Eye Gaze Behavior within Affective Activities
2021
Cognitive Science, Eye Tracking, Psychology
Christos Fidas, Marios Belk, Christodoulos Constantinides, Argyris Constantinides, Andreas Pitsillides
IFIP Conference on Human-Computer Interaction
Evidence suggests that human cognitive differences affect users’ visual behavior within various tasks and activities. However, a human cognitive processing perspective on the interplay between visual and affective aspects remains understudied to date. In this paper, we aim to investigate this relationship by adopting an accredited cognitive style framework (Field Dependence-Independence – FD-I) and provide empirical evidence on the main interaction effects between human cognition and emotional processing on eye gaze behavior. To do so, we designed and implemented an eye tracking study (n = 22) in which participants were initially classified according to their FD-I cognitive processing characteristics, and were further exposed to a series of images, which triggered specific emotional valence. Analysis of the results shows that affective images had a different effect on FD and FI users in terms of visual information exploration time and comprehension, which was reflected in eye gaze metrics. Findings highlight a hidden and rather unexplored effect between human cognition and emotions on eye gaze behavior, which could lead to a more holistic and comprehensive approach in affective computing.
Impression evaluation of robot’s behavior when assisting human in a cooking task
2021
Robotics, Psychology, Pupillometry
Marie Yamamoto; Yue Hu; Enrique Coronado; Gentiane Venture
30th IEEE International Conference on Robot & Human Interactive Communication (RO-MAN)
Studies have shown that, in contrast to the recent rise of the smart speaker market, the appearance and movements of home robots may play key roles in the impression and engagement of users. In this research, we conduct a user experiment with the aim of clarifying the elements required to evaluate the human impression of a robot's movements, based on the hypothesis that adequate movements may lead to better impressions and engagement. We compare the impressions of participants who interacted with a robot with movements (behavior robot) and a robot without movements (non-behavior robot). Results show that when using the behavior robot, participants showed significantly higher values in their impressions of cheerfulness and sociability. Questionnaires about the interaction revealed that personalization is also an important function for robots to make a good impression on humans.
Visual attention reveals affordances during Lower Palaeolithic stone tool exploration
2021
Cognitive Science, Eye Tracking, Archaeology
María Silva-Gago, Annapaola Fedato, Timothy Hodgson, Marcos Terradillos-Bernal, Rodrigo Alonso-Alcalde & Emiliano Bruner
Archaeological and Anthropological Sciences
Tools, which have a cognitive background rooted in our phylogenetic history, are essential for humans to interact with their environment. One of the characteristics of human beings is the coordination between the eyes and hands, which is associated with a skilled visuospatial system. Vision is the first input of an action that influences interaction with tools, and tools have affordances, known as behavioural possibilities, which indicate their possible uses and potentialities. The aim of the present study is to investigate body–tool interaction from a cognitive perspective, focusing on visual affordances during interaction with early stone tools. We analyse visual attention, applying eye tracking technology, during free visual exploration and during haptic manipulation of Lower Palaeolithic stone tools. The central area of the tool is the most observed region, followed by the top and the base, while knapped areas trigger more attention than the cortex. There are differences between stone tool types, but visual exploration does not differ when aided by haptic exploration. The results suggest that visual behaviour is associated with the perception of affordances, possibly from the beginning of the brain–body–tool interaction associated with the Lower Palaeolithic culture.
Gaze-angle dependency of pupil-size measurements in head-mounted eye tracking
2021
Pupillometry, Cognitive Science
Petersch, B., Dierkes, K.
Behavior Research Methods
Pupillometry - the study of temporal changes in pupil diameter as a function of external light stimuli or cognitive processing - requires the accurate and gaze-angle independent measurement of pupil dilation. Expected response amplitudes often are only a few percent relative to a pre-stimulus baseline, thus demanding sub-millimeter accuracy. Video-based approaches to pupil-size measurement aim at inferring pupil dilation from eye images alone. Eyeball rotation in relation to the recording camera as well as optical effects due to refraction at corneal interfaces can, however, induce so-called pupil foreshortening errors (PFE), i.e. systematic gaze-angle dependent changes of apparent pupil size that are on a par with typical response amplitudes. While PFE and options for its correction have been discussed for remote eye trackers, for head-mounted eye trackers such an assessment is still lacking. In this work, we therefore gauge the extent of PFE in three measurement techniques, all based on eye images recorded with a single near-eye camera. We present both real-world experimental data as well as results obtained on synthetically generated eye images. We discuss PFE effects at three different levels of data aggregation: the sample, subject, and population level. In particular, we show that a recently proposed refraction-aware approach employing a mathematical 3D eye model is successful in providing pupil-size measurements which are gaze-angle independent at the population level.
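To build intuition for the pupil foreshortening error this abstract describes, a first-order geometric model treats the pupil as a disk whose image width shrinks with the cosine of the angle between gaze and camera axis. This is a deliberate simplification for illustration only (it ignores corneal refraction, which the paper shows also matters, and is not the refraction-aware 3D eye model the authors evaluate); the function names are hypothetical.

```python
import math

def apparent_pupil_diameter(true_diameter_mm: float, gaze_angle_deg: float) -> float:
    """First-order foreshortening: a circular pupil viewed off-axis projects
    to an ellipse whose minor axis scales with cos(gaze angle)."""
    return true_diameter_mm * math.cos(math.radians(gaze_angle_deg))

def corrected_pupil_diameter(apparent_mm: float, gaze_angle_deg: float) -> float:
    """Invert the cosine model to recover a gaze-angle-independent estimate,
    assuming the gaze angle relative to the camera is known."""
    return apparent_mm / math.cos(math.radians(gaze_angle_deg))

# A 3 mm pupil viewed 30 degrees off-axis appears ~2.6 mm wide, a ~13% drop -
# far larger than the few-percent response amplitudes typical in pupillometry.
apparent = apparent_pupil_diameter(3.0, 30.0)
recovered = corrected_pupil_diameter(apparent, 30.0)
```

Even this toy model makes the paper's motivation concrete: uncorrected gaze-angle effects can dwarf the cognitive or luminance responses one is trying to measure.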
Dyslexics’ Fragile Oculomotor Control Is Further Destabilized by Increased Text Difficulty
2021
Reading, Dyslexia
Ward LM, Kapoula Z
MDPI: Brain Sciences
Dyslexic adolescents demonstrate deficits in word decoding, recognition, and oculomotor coordination as compared to healthy controls. Our lab recently showed intrinsic deficits in large saccades and vergence movements with a Remobi device independent from reading. This shed new light on the field of dyslexia, as it has been debated in the literature whether the deficits in eye movements are a cause or consequence of reading difficulty. The present study investigates how these oculomotor problems are compensated for or aggravated by text difficulty. A total of 46 dyslexic and 41 non-dyslexic adolescents’ eye movements were analyzed while reading L’Alouette, a dyslexia screening test, and 35 Kilos D’Espoir, a children’s book with a reading age of 10 years. While reading the more difficult text, dyslexics made more mistakes, read slower, and made more regressive saccades; moreover, they made smaller amplitude saccades with abnormal velocity profiles (e.g., higher peak velocity but lower average velocity) and significantly higher saccade disconjugacy. While reading the simpler text, these differences persisted; however, the difference in saccade disconjugacy, although present, was no longer significant, nor was there a significant difference in the percentage of regressive saccades. We propose that intrinsic eye movement abnormalities in dyslexics such as saccade disconjugacy, abnormal velocity profiles, and cognitively associated regressive saccades can be particularly exacerbated if the reading text relies heavily on word decoding to extract meaning; an increased number of regressive saccades is a manifestation of reading difficulty and not a problem of eye movement per se. These interpretations are in line with the motor theory of visual attention and our previous research describing the relationship between binocular motor control, attention, and cognition that exists outside of the field of dyslexia.
Target position and avoidance margin effects on path planning in obstacle avoidance
2021
Locomotion, Motor Control, Psychology, Cognitive Neuroscience, Eye Tracking
Mohammad R. Saeedpour-Parizi, Shirin E. Hassan, Ariful Azad, Kelly J. Baute, Tayebeh Baniasadi & John B. Shea
Nature: Scientific Reports
This study examined how people choose their path to a target, and the visual information they use for path planning. Participants avoided stepping outside an avoidance margin between a stationary obstacle and the edge of a walkway as they walked to a bookcase and picked up a target from different locations on a shelf. We provided an integrated explanation for path selection by combining avoidance margin, deviation angle, and distance to the obstacle. We found that the combination of right and left avoidance margins accounted for 26%, deviation angle accounted for 39%, and distance to the obstacle accounted for 35% of the variability in decisions about the direction taken to circumvent an obstacle on the way to a target. Gaze analysis findings showed that participants directed their gaze to minimize the uncertainty involved in successful task performance and that gaze sequence changed with obstacle location. In some cases, participants chose to circumvent the obstacle on a side for which the gaze time was shorter, and the path was longer than for the opposite side. Our results of a path selection judgment test showed that the threshold for participants abandoning their preferred side for circumventing the obstacle was a target location of 15 cm to the left of the bookcase shelf center.
Multimodal Attention Creates the Visual Input for Infant Word Learning
2021
Child Development, Learning, Eye Tracking, Cognitive Science
S. E. Schroer and C. Yu
2021 IEEE International Conference on Development and Learning
Infant language acquisition is fundamentally an embodied process, relying on the body to select information from the learning environment. Infants show their attention to an object not merely by gazing at the object, but also through orienting their body towards the object and generating various types of manual actions on the object, such as holding, touching, and shaking. The goal of the present study was to examine how multimodal attention shapes infant word learning in real-time. Infants and their parents played in a home-like lab with unfamiliar objects with assigned labels. While playing, participants wore wireless head-mounted eye trackers to capture visual attention. Infants were then tested on their knowledge of the new words. We identified all the utterances in which parents labeled the learned or not learned objects and analyzed infant multimodal attention during and around labeling. We found that the proportion of time spent in hand-eye coordination predicted learning outcomes. To understand the learning advantage hand-eye coordination creates, we compared the size of objects in the infant's field of view. Although there were no differences in object size between learned and not learned labeling utterances, hand-eye coordination created the most informative views. Together, these results suggest that in-the-moment word learning may be driven by the greater access to informative object views that hand-eye coordination affords.
Physical and cognitive demands associated with police in-vehicle technology use: an on-road case study
2021
Driving, Cognitive Science, Eye Tracking, Police
Maryam Zahabi, Farzaneh Shahini, Wei Yin, Xudong Zhang
Ergonomics
Motor vehicle crashes are a leading cause of police officers' deaths in the line of duty. These crashes have been mainly attributed to officers' driving distraction caused by the use of in-vehicle technologies while driving. This paper presents a 3-h ride-along study of 20 police officers to assess the physical and cognitive demands associated with using in-vehicle technologies. The findings suggested that the mobile computer terminal (MCT) was the most frequently used in-vehicle system for the officers. In addition, officers perceived the MCT to significantly increase their visual, cognitive, and physical demands compared to other in-vehicle technologies. Evidence from electromyography and eye-tracking measures suggested that officers with more experience as a patrol officer and those who were working in more congested areas experienced higher cognitive workload. Furthermore, it was found that as the ride-along duration increased, there were indications of muscle fatigue in the medial deltoid and triceps brachii muscles. Practitioner summary: This study assessed the impact of police in-vehicle technology use in an on-road case study. The findings provide new data and knowledge for police agencies and vehicle manufacturers to develop administrative measures and in-vehicle technology innovations to improve police officers' health and safety.
Visual anticipation of the future path: Predictive gaze and steering
2021
Motor Control, Eye Tracking, Driving
Samuel Tuhkanen; Jami Pekkanen; Richard M. Wilkie; Otto Lappi
Journal of Vision
Skillful behavior requires the anticipation of future action requirements. This is particularly true during high-speed locomotor steering where solely detecting and correcting current error is insufficient to produce smooth and accurate trajectories. Anticipating future steering requirements could be supported using “model-free” prospective signals from the scene ahead or might rely instead on model-based predictive control solutions. The present study generated conditions whereby the future steering trajectory was specified using a breadcrumb trail of waypoints, placed at regular intervals on the ground to create a predictable course (a repeated series of identical “S-bends”). The steering trajectories and gaze behavior relative to each waypoint were recorded for each participant (N = 16). To investigate the extent to which drivers predicted the location of future waypoints, “gaps” were included (20% of waypoints) whereby the next waypoint in the sequence did not appear. Gap location was varied relative to the S-bend inflection point to manipulate the chances that the next waypoint indicated a change in direction of the bend. Gaze patterns did indeed change according to gap location, suggesting that participants were sensitive to the underlying structure of the course and were predicting the future waypoint locations. The results demonstrate that gaze and steering both rely upon anticipation of the future path consistent with some form of internal model.