Adapting Rendering to the User's Current Visual Field

January 25, 2017

Advances in head-mounted displays deliver further immersion but also increase the rendering workload, creating a need for performance optimizations. Daniel Pohl et al. propose using the user's visual field, measured via eye tracking, to limit rendering to the area of the virtual scene the user can actually see.

Image Source: [Concept for Using Eye Tracking in a Head-Mounted Display to Adapt Rendering to the User's Current Visual Field](https://perceptual.mpi-inf.mpg.de/wp-content/blogs.dir/12/files/2016/11/pohl2016_vrst.pdf)

The researchers developed a calibration routine using Pupil Labs' Oculus Rift DK2 add-on cup. The routine determines an individual user's visual field. The calibration demonstrates that users can actually see more of the VR environment when fixating on the center of a calibration area than when fixating on outer areas (due to lens defects).

By knowing a user's visual field, one can optimize the rendering pipeline to skip areas that are not seen. This enables faster frame rates (up to 2x performance gains) and lower perceived latency, and therefore a more immersive VR experience.
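As a rough illustration of the idea (not the authors' implementation), the calibrated visual field can be approximated as a per-eye mask, and only pixels inside that mask are shaded. The sketch below assumes a hypothetical elliptical field boundary with center and radii in pixels; a real renderer would apply such a mask on the GPU (e.g., via stencil testing).

```python
import numpy as np

def visual_field_mask(width, height, cx, cy, rx, ry):
    """Boolean mask: True for pixels inside a hypothetical
    elliptical visual field obtained from calibration."""
    ys, xs = np.mgrid[0:height, 0:width]
    return ((xs - cx) / rx) ** 2 + ((ys - cy) / ry) ** 2 <= 1.0

# Assumed per-eye calibration result: center (cx, cy) and radii (rx, ry).
mask = visual_field_mask(1920, 1080, cx=960, cy=540, rx=700, ry=500)

# Fraction of the framebuffer that actually needs shading;
# pixels where mask is False would be skipped entirely.
visible_fraction = mask.mean()
```

Skipping the masked-out pixels is where the rendering savings come from: the smaller the measured visual field, the larger the fraction of the framebuffer that never has to be shaded.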

Check out their full research paper here.

If you use Pupil in your research and have published work, please send us a note. We would love to include your work here on the blog and in a list of work that cites Pupil.