We are pleased to announce the release of Pupil v1.15!
Download the latest bundle and let us know what you think via the #pupil channel on Discord 😄
Apriltags support for Surface Tracking
Starting with v1.15, the Surface Tracker detects apriltags instead of the previous "square markers".
We highly recommend replacing "square markers" with apriltags in your setup: their improved detection performance translates to gaze being mapped to surfaces with higher accuracy and surfaces being detected more reliably, especially during movement.
You can still use the "square markers" by setting the Marker Detector Mode to Legacy square markers. This is especially useful if you want to process recordings that were made prior to this release.
Apriltags version 3 and support for Windows
This change allows us to finally enable the Head Pose Tracker feature (initially released in
v1.12) on Windows!
Freeze the current 3d eye model
You now have the option to freeze the current eye model. Doing so will prevent any changes to the current eye model and discard any alternative models that might have built up in the background.
Warning: Freezing the eye model will disable its ability to compensate for any kind of slippage.
Drop invalid frame after disconnect - #1573
Revert change that caused eye process to crash when minimizing
Please install apriltags via
A fixation is based on gaze data that fulfills the maximum-dispersion, minimum-duration criterion. We use it to infer e.g. the fixation's position. When mapping a fixation to a surface, we use the surface's homography to calculate the fixation's position in surface coordinates. The surface's homography is calculated from the surface markers detected in a given world frame. In most cases, the fixation appears during multiple world frames, i.e. there are multiple surface homographies that could be used to map the fixation to the surface.

Until now, we only exported fixations for the first frame during which the fixation appeared. This is not a problem as long as the surface does not move in relation to the world camera during the fixation. Since this assumption does not always hold true, we will export all fixation mappings starting with v1.15. This means that if a fixation spans multiple world frames during which a surface was detected, we will export the fixation and its position for each of these world frames.
Therefore, we added two new columns, including
world_index, which identifies the world frame that was used to map the fixation.
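As a consequence, a single fixation can now occupy several export rows, one per world frame. The sketch below shows how you might collapse these back to the old one-row-per-fixation view by keeping the earliest world frame. The column name "fixation_id" is an assumption for illustration; check the header row of your own export.

```python
# Collapse multi-frame fixation mappings to one row per fixation.
# Assumes rows come from a "fixations on surface" CSV export (e.g. via
# csv.DictReader) and that a "fixation_id" column groups the mappings.
from itertools import groupby

def first_mapping_per_fixation(rows):
    """Keep only the earliest world frame mapping for each fixation."""
    rows = sorted(rows, key=lambda r: (r["fixation_id"], int(r["world_index"])))
    return [next(group) for _, group in groupby(rows, key=lambda r: r["fixation_id"])]

# Example: one fixation mapped during three consecutive world frames.
rows = [
    {"fixation_id": "7", "world_index": "103", "norm_pos_x": "0.52"},
    {"fixation_id": "7", "world_index": "104", "norm_pos_x": "0.51"},
    {"fixation_id": "7", "world_index": "105", "norm_pos_x": "0.50"},
]
print(first_mapping_per_fixation(rows))  # single row, world_index "103"
```

Whether you want the first mapping, the last, or an average over all frames depends on your analysis; the per-frame export gives you the freedom to choose.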
Eye model freezing
You can un/freeze the 3d eye models via a notification:
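A minimal sketch of such a notification, sent over Pupil Remote (default port 50020). The subject "pupil_detector.set_property.3d" and the property name "model_is_frozen" are assumptions based on the Pupil Detector Network API example linked below; verify them against that example before relying on this.

```python
# Sketch: build and send a notification to un/freeze the 3d eye model.
# Property name "model_is_frozen" and subject "pupil_detector.set_property.3d"
# are assumptions; check the Pupil Detector Network API example.

def freeze_model_notification(target, frozen=True):
    """Return (topic, payload) for un/freezing the 3d model of one eye."""
    subject = "pupil_detector.set_property.3d"
    payload = {
        "subject": subject,
        "name": "model_is_frozen",
        "value": frozen,  # False unfreezes the model again
        "target": target,  # "eye0" or "eye1"
    }
    return "notify." + subject, payload

def send_notification(topic, payload, address="tcp://127.0.0.1:50020"):
    """Send a notification to Pupil Remote (requires pyzmq and msgpack)."""
    import msgpack  # lazy imports: only needed when actually sending
    import zmq

    ctx = zmq.Context.instance()
    remote = ctx.socket(zmq.REQ)
    remote.connect(address)
    remote.send_string(topic, flags=zmq.SNDMORE)
    remote.send(msgpack.packb(payload, use_bin_type=True))
    return remote.recv_string()  # Pupil Remote confirms receipt

topic, payload = freeze_model_notification("eye0", frozen=True)
# send_notification(topic, payload)  # uncomment with Pupil Capture running
```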
See this example of how to use the Pupil Detector Network API.
Change pupil detection ROI by notification - #1576
You can change the pupil detector's region of interest via a notification:
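A sketch of what that notification might look like. The subject "pupil_detector.set_property" and the property name "roi" (with the value as min/max pixel coordinates in the eye image) are assumptions; verify them against the Pupil Detector Network API example and PR #1576.

```python
# Sketch: build the notification payload for changing the eye0 detection ROI.
# Subject "pupil_detector.set_property" and property name "roi" are
# assumptions; check the Pupil Detector Network API example.

def roi_notification(target, min_x, min_y, max_x, max_y):
    """Return (topic, payload) for a set-ROI notification."""
    subject = "pupil_detector.set_property"
    payload = {
        "subject": subject,
        "name": "roi",
        "value": (min_x, min_y, max_x, max_y),  # eye-image pixel coordinates
        "target": target,  # "eye0" or "eye1"
    }
    return "notify." + subject, payload

topic, payload = roi_notification("eye0", 0, 0, 192, 192)
# Send `payload` msgpack-encoded over a REQ socket connected to Pupil Remote
# (tcp://127.0.0.1:50020 by default), with `topic` as the first frame.
```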
HMD-Eyes Streaming - #1501
This new video backend allows you to stream a virtual Unity camera to Pupil Capture.
HMD-Eyes Streaming is an experimental feature for the hmd-eyes project and can only be activated via a network notification.
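Pupil's generic "start_plugin" notification is the usual way to enable a plugin over the network; a sketch of activating the streaming backend this way is below. The plugin class name "HMD_Streaming_Source" is an assumption; check the hmd-eyes project for the exact name and any required arguments.

```python
# Sketch: activate a Capture plugin via the generic "start_plugin"
# notification. The plugin name "HMD_Streaming_Source" is an assumption;
# consult the hmd-eyes project for the exact name and arguments.

def start_plugin_notification(plugin_name, args=None):
    """Return (topic, payload) for starting a plugin by class name."""
    subject = "start_plugin"
    payload = {"subject": subject, "name": plugin_name, "args": args or {}}
    return "notify." + subject, payload

topic, payload = start_plugin_notification("HMD_Streaming_Source")
# Send `payload` msgpack-encoded over a REQ socket connected to Pupil Remote,
# with `topic` as the first frame, as with any other notification.
```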
We are hiring Python developers!
Hey - you're reading the developer notes, so this is for you! We're looking to hire developers to contribute to Pupil source code. If you love Python and enjoy writing code that is a joy to read, get in touch. Experience with the scientific Python stack is a plus, but not required. We have a lot of exciting projects in the pipeline.
We are also looking for Dev Ops engineers that have experience with kubernetes, docker, and server-side Python.
Send an email to firstname.lastname@example.org with a CV to start a discussion. We look forward to hearing from you.
v1.15-65 and higher include multiple fixes for surface tracking. Please update Pupil if you have been using an earlier version.
Update 2019-08-27 10:07: Please find the macOS release attached below.
Update 2019-09-05 12:05: We updated the Windows bundle to include fixes for the HMD Streaming backend.