Gaze-Guided Robots: A Leap Forward for Enhanced Assisted Living
Research Digest
August 27, 2025

Figure: On the left is the testing environment where Neon and the robot worked together to navigate the space. On the right, the robot’s final path is shown, illustrating how it carefully moved around obstacles to reach the target indicated by the participant’s gaze. Source: Joseph, P., Plozza, D., Pascarella, L., & Magno, M. (2024, July). Gaze-Guided Semi-Autonomous Quadruped Robot for Enhanced Assisted Living. In 2024 IEEE Sensors Applications Symposium (SAS) (pp. 1-6). IEEE.
The Challenge: Bringing Robots into Our Homes
While robots have revolutionized industries like manufacturing, their use in everyday home assistance, especially for people with reduced mobility, is limited. Typical controls like manual input, voice commands, or bio-signals (such as brain or muscle activity) often require effort, supervision, or physical ability that not all users have. Moreover, many assistive robots are fixed in place, restricting their usefulness in dynamic home environments.
A New Approach: Guiding Robots with Your Gaze
To address these challenges, researchers Paul Joseph, Davide Plozza, Luca Pascarella, and Michele Magno from the D-ITET Center for Project-Based Learning (PBL) at ETH Zürich have developed a system that lets users control a mobile robot simply by looking at their target. This gaze-guided robot combines Neon eye tracking glasses with an agile four-legged Unitree Go1 robot, enabling hands-free control that works in complex spaces.
Gaze-to-Robot Translation
The Neon glasses wirelessly stream gaze data and live video of the wearer’s field of view to the robot’s onboard computer. To align the user’s gaze with what the robot “sees”, an image-matching algorithm transfers the gaze point from the wearer’s scene camera view onto the robot’s camera view. The researchers found SURF to be the best fit for their CPU-only setup thanks to its low latency and power consumption. Once the gaze point is mapped, the robot uses its depth camera to determine the precise 3D location of the target.
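To make this mapping step concrete, here is a minimal Python sketch of the idea. It is not the authors’ implementation: it assumes the wearer’s scene frame, the gaze pixel, the robot’s color and depth frames, and the robot camera’s intrinsics are already available (for example, streamed via Neon’s Real-Time API and the robot’s SDK), and it uses ORB from stock OpenCV as a stand-in for SURF, which requires a non-free OpenCV contrib build.

```python
import cv2
import numpy as np

def map_gaze_to_robot_view(scene_frame, gaze_px, robot_frame):
    """Transfer a gaze point from the wearer's scene image into the robot's
    camera image via feature matching (ORB here; the paper uses SURF)."""
    gray_scene = cv2.cvtColor(scene_frame, cv2.COLOR_BGR2GRAY)
    gray_robot = cv2.cvtColor(robot_frame, cv2.COLOR_BGR2GRAY)

    orb = cv2.ORB_create(nfeatures=2000)
    kp1, des1 = orb.detectAndCompute(gray_scene, None)
    kp2, des2 = orb.detectAndCompute(gray_robot, None)
    if des1 is None or des2 is None:
        return None

    # Match descriptors and keep the strongest correspondences.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:200]
    if len(matches) < 8:
        return None

    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)

    # Estimate a homography between the two views and project the gaze point through it.
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    if H is None:
        return None
    mapped = cv2.perspectiveTransform(np.float32([[gaze_px]]), H)
    return tuple(mapped[0, 0])  # (u, v) in the robot camera image

def pixel_to_3d(u, v, depth_m, fx, fy, cx, cy):
    """Back-project a pixel and its depth reading into a 3D point in the
    robot camera frame using the standard pinhole model."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return np.array([x, y, depth_m])
```

The second helper illustrates how a depth reading at the matched pixel can be turned into a 3D target point; the paper leaves the exact implementation to its released code.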
Using this information, the robot’s navigation system autonomously plans the best path to the target. This system dynamically avoids both static obstacles, like furniture, and moving obstacles, such as people, allowing safe and efficient movement in complex environments.
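As a rough illustration of how such a target could be handed to an off-the-shelf planner, the sketch below publishes the 3D point as a navigation goal in a ROS 1 setup. The topic, frame name, and use of ROS here are assumptions for illustration, not details taken from the paper.

```python
#!/usr/bin/env python
# Hypothetical example: hand the gaze-selected target to a ROS 1 navigation
# stack (e.g., move_base) as a goal pose. Topic and frame names are assumptions.
import rospy
from geometry_msgs.msg import PoseStamped

def publish_goal(x, y, frame_id="map"):
    pub = rospy.Publisher("/move_base_simple/goal", PoseStamped, queue_size=1)
    rospy.sleep(0.5)  # give the publisher time to connect to subscribers

    goal = PoseStamped()
    goal.header.stamp = rospy.Time.now()
    goal.header.frame_id = frame_id   # target expressed in the map frame
    goal.pose.position.x = x
    goal.pose.position.y = y
    goal.pose.orientation.w = 1.0     # identity orientation: keep current heading
    pub.publish(goal)

if __name__ == "__main__":
    rospy.init_node("gaze_goal_publisher")
    # x, y would come from transforming the camera-frame 3D point into the map frame.
    publish_goal(1.2, 0.8)
```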
Testing the Design: Promising Results
The system was put to the test in a realistic, obstacle-filled environment designed to reflect daily living conditions. The results were encouraging:
High Accuracy: The system guided the quadrupedal robot to the target indicated by the user's gaze with a mean positional error of 19.7 cm (standard deviation 8.1 cm), i.e., under 20 cm on average. This is close enough to the target for practical assistive applications.
Low Latency: The entire pipeline, from gaze input to path planning, ran in approximately 0.2 seconds.
Robust Performance: The combination of gaze-tracking and autonomous navigation proved effective in complex, obstacle-filled environments.
Implications for Assisted Living and Beyond
This research represents a leap forward in human-robot interaction and assistive technology. It has the potential to break down barriers for individuals with motor disabilities and enable greater independence.
The ability to guide a mobile robot to a precise point in space using gaze opens up exciting future possibilities. The researchers envision extending this functionality by incorporating data from Neon’s IMU sensors, alongside the robot’s own, to build a shared spatial map and enable continuous localization of both user and robot, making the system more versatile and robust. They also plan to mount a robotic arm onto the quadrupedal robot, thereby enabling fully immersive interactions with the environment, such as remote object retrieval. This could allow someone to simply look at an object across the room, and the robot could autonomously navigate to it, grasp, and deliver it.
Further Resources
Full article: https://ieeexplore.ieee.org/abstract/document/10636523
GitHub repository: https://github.com/paulijosey/gaze_goal_position
Research Center: Center for Project-Based Learning D-ITET, ETH Zürich, Zürich, Switzerland
Interested in building your own gaze-guided robotic system with Neon? Explore our Real-Time API to learn how to access Neon’s data live and synchronize it with other devices.
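As a starting point, a minimal sketch using the pupil-labs-realtime-api Python client might look like the following; the method names are taken from the client's simple interface and should be verified against the current documentation.

```python
# Minimal sketch: receive live gaze data from a Neon device on the local network.
# Assumes the pupil-labs-realtime-api package is installed and a Neon is connected.
from pupil_labs.realtime_api.simple import discover_one_device

device = discover_one_device()        # find a Neon companion device via mDNS
gaze = device.receive_gaze_datum()    # block until the next gaze sample arrives
print(f"Gaze in scene camera pixels: ({gaze.x:.1f}, {gaze.y:.1f})")
device.close()
```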