Adapting Rendering to the User's Current Visual Field

Community Stories

Author(s): Pupil Dev Team

January 25, 2017

Advances in head-mounted displays toward greater immersion increase the need for performance optimizations to handle the rendering workload. Daniel Pohl et al. propose using the user's visual field at the current eye gaze to limit rendering to the area of the virtual scene the user can actually see.

The researchers developed a calibration routine using Pupil Labs' Oculus Rift DK2 eye tracking add-on. Their routine determines an individual user's visual field inside the headset. The calibration demonstrates that users can actually see more of the VR environment when fixating on the center of the calibration area than when fixating on its outer areas (due to lens defects).
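
Purely as an illustration (this is not the routine from the paper), a visual-field calibration of this kind could work roughly as follows: while the user fixates a central point, probe stimuli are stepped outward along several directions, and the last position the user still reports as visible traces the boundary of the visible region. In the sketch below, the `user_reports_visible` callback and the normalized coordinate convention are assumptions standing in for real headset rendering and user input.

```python
import math

def calibrate_visual_field(user_reports_visible, num_directions=16,
                           step=0.02, max_radius=1.0):
    """Trace the boundary of the user's visible region.

    While the user fixates the screen center, probe points are stepped
    outward along evenly spaced directions. The last position the user
    still reports as visible approximates the visual-field boundary in
    that direction. Coordinates are normalized to [-1, 1].
    """
    boundary = []
    for i in range(num_directions):
        angle = 2 * math.pi * i / num_directions
        last_visible = (0.0, 0.0)
        r = step
        while r <= max_radius:
            point = (r * math.cos(angle), r * math.sin(angle))
            if not user_reports_visible(point):
                break
            last_visible = point
            r += step
        boundary.append(last_visible)
    return boundary

if __name__ == "__main__":
    # Stand-in for real user input: a circular visible region of radius 0.8.
    fake_user = lambda p: math.hypot(*p) < 0.8
    print(calibrate_visual_field(fake_user))
```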

By knowing a user's visual field, one can optimize the rendering pipeline to skip areas that cannot be seen. This enables faster frame rates (the authors report up to 2x performance gains) and lower perceived latency, and therefore a more immersive VR experience.
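
To make the idea concrete, here is a hedged sketch (not the authors' implementation): given boundary points from a calibration like the one above, a per-pixel visibility mask can be precomputed once, and the renderer can skip shading wherever the mask is false, for example via an early stencil test in a real pipeline. The NumPy-based angular interpolation below is an assumption chosen for illustration.

```python
import numpy as np

def visual_field_mask(boundary, width, height):
    """Per-pixel visibility mask from calibrated boundary points.

    `boundary` is a list of (x, y) points in normalized [-1, 1] coords,
    as produced by the calibration sketch above. Pixels whose radius
    exceeds the angle-interpolated boundary radius are marked invisible,
    so the renderer can skip shading them.
    """
    angles = np.array([np.arctan2(y, x) for x, y in boundary])
    radii = np.array([np.hypot(x, y) for x, y in boundary])
    order = np.argsort(angles)
    angles, radii = angles[order], radii[order]

    # Pixel grid in normalized device coordinates.
    xs = np.linspace(-1, 1, width)
    ys = np.linspace(-1, 1, height)
    px, py = np.meshgrid(xs, ys)
    pixel_angle = np.arctan2(py, px)
    pixel_radius = np.hypot(px, py)

    # Interpolate the boundary radius at each pixel's angle,
    # wrapping around the full circle.
    limit = np.interp(pixel_angle, angles, radii, period=2 * np.pi)
    return pixel_radius <= limit

if __name__ == "__main__":
    # Fake circular visual field of radius 0.8, 16 directions,
    # matching the calibration sketch above.
    pts = [(0.8 * np.cos(a), 0.8 * np.sin(a))
           for a in np.linspace(0, 2 * np.pi, 16, endpoint=False)]
    mask = visual_field_mask(pts, 64, 64)
    print(f"{mask.mean():.0%} of pixels need shading")
```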

Check out their full research paper here.

If you use Pupil in your research and have published work, please send us a note. We would love to include your work here on the blog and in a list of work that cites Pupil.