This is the biggest release of Pupil to date!
Some details below:
Limits of regression-based calibration: It assumes the headset is "screwed to the head", so any movement of the headset must be compensated for. Additionally, a model-less search for the pupil ellipse yields poor results when the pupil is partially obstructed by reflections, eyelashes, or eyelids. Using a 3D model of the eye and a pinhole model of the camera based on Swirski's work ["A fully-automatic, temporal approach to single camera, glint-free 3D eye model fitting", PETMEI 2013], we can model the eyeball as a sphere and the pupil as a disk on that sphere. The sphere is based on an average human eyeball diameter of 24 mm. The state of the model is the position of the sphere in eye camera space and two rotation vectors that describe the location of the pupil on the sphere.
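The sketch below illustrates the model state described above: a fixed-radius eyeball sphere positioned in eye camera space, with the pupil as a disk on that sphere located by two angles. All names and values here are illustrative assumptions, not Pupil's actual detector code.

```python
import numpy as np

EYE_RADIUS_MM = 12.0  # half of the assumed 24 mm average human eyeball diameter

def pupil_circle(sphere_center, theta, phi, pupil_radius_mm=2.0):
    """Return the 3D center, outward normal, and radius of the pupil disk.

    sphere_center : (3,) eyeball center in eye-camera coordinates (mm)
    theta, phi    : angles locating the pupil on the sphere surface
    """
    # Unit vector from the eyeball center toward the pupil (spherical coordinates).
    normal = np.array([
        np.sin(theta) * np.cos(phi),
        np.sin(theta) * np.sin(phi),
        np.cos(theta),
    ])
    center = np.asarray(sphere_center, dtype=float) + EYE_RADIUS_MM * normal
    return center, normal, pupil_radius_mm

def project_point(p, focal_length_px, principal_point):
    """Pinhole projection of a 3D point (eye-camera coords, mm) to pixel coordinates."""
    x, y, z = p
    u = focal_length_px * x / z + principal_point[0]
    v = focal_length_px * y / z + principal_point[1]
    return np.array([u, v])

# Example: pupil slightly off the camera axis, eyeball ~35 mm from the eye camera.
center, normal, r = pupil_circle(sphere_center=(2.0, 1.0, 35.0), theta=0.2, phi=1.0)
print(project_point(center, focal_length_px=620.0, principal_point=(320.0, 240.0)))
```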
Using temporal constraints and competing eye models, we can detect and compensate for slippage events when the 2D pupil evidence is strong. When the 2D evidence is weak, we can use constraints from existing models to robustly fit pupils with considerably less evidence than before.
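A hedged sketch of the "competing eye models" idea: each candidate model is scored by how well it explains recent high-confidence 2D pupil observations, and a persistent drop in support for the active model in favor of a fresher one is treated as a slippage event. The class names, scoring, and threshold below are assumptions for illustration, not the actual implementation.

```python
from collections import deque

class CandidateModel:
    def __init__(self, name):
        self.name = name
        self.residuals = deque(maxlen=30)  # recent fit errors against strong 2D evidence

    def observe(self, residual):
        self.residuals.append(residual)

    def support(self):
        # Lower average residual -> higher support; no history -> no support.
        if not self.residuals:
            return 0.0
        return 1.0 / (1.0 + sum(self.residuals) / len(self.residuals))

def pick_model(active, challenger, margin=0.1):
    """Keep the active model unless the challenger clearly explains the data better."""
    if challenger.support() > active.support() + margin:
        return challenger  # slippage assumed: the newer geometry fits the evidence better
    return active
```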
With a 3D location of the eye and 3D gaze vectors, we no longer have to rely on polynomials for gaze mapping. Instead we use a geometric gaze mapping approach: we model the world camera as a pinhole camera with distortion and project pupil line-of-sight vectors onto the world image. For this we need to know the rotation and translation between the world and eye cameras. This rigid transformation is obtained in a 3-point calibration routine. At the time of writing we simply assume that the targets are equally far away and minimize the distance between the obtained point pairs. This will be extended to infer distances during calibration.
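A minimal sketch of the geometric mapping step under the assumptions above: gaze directions from the eye model (in eye-camera coordinates) are carried into the world camera frame by the rigid transform (R, t) and projected into the world image with OpenCV's pinhole-plus-distortion model. The function name, variable names, and fixed gaze distance are illustrative, mirroring the equal-target-distance assumption rather than reproducing Pupil's exact code.

```python
import numpy as np
import cv2

def map_gaze(gaze_dirs_eye, R_eye_to_world, t_eye_to_world,
             camera_matrix, dist_coeffs, assumed_distance_mm=500.0):
    """Project unit gaze direction vectors onto the world camera image.

    gaze_dirs_eye  : (N, 3) unit line-of-sight vectors in eye-camera coordinates
    R_eye_to_world : (3, 3) rotation from eye camera to world camera
    t_eye_to_world : (3,) translation from eye camera to world camera (mm)
    """
    gaze_dirs_eye = np.asarray(gaze_dirs_eye, dtype=np.float64)
    # Pick a point along each line of sight at a fixed assumed distance,
    # then express it in world-camera coordinates.
    pts_world = (R_eye_to_world @ (gaze_dirs_eye * assumed_distance_mm).T).T + t_eye_to_world
    # Points are already in the world camera frame, so rvec and tvec are zero.
    img_pts, _ = cv2.projectPoints(
        pts_world.reshape(-1, 1, 3),
        np.zeros(3), np.zeros(3),
        camera_matrix, dist_coeffs,
    )
    return img_pts.reshape(-1, 2)
```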