We have been working hard to significantly reduce the memory usage of Pupil Capture and Player. See incremental data serialization and deferred deserialization for details.
Auto-exposure mode – #1210
We have added an auto-exposure mode for the 200Hz Pupil cameras. You can enable it in the
UVC Source menu of the eye windows.
Incremental data serialization – #1141
Prior to this release, data was cached in memory during recordings and written to disk after the recording had finished. This resulted in large memory consumption during recording.
Starting with v1.8, Pupil Capture stores data directly to disk as it becomes available during the recording. This reduces the memory footprint and improves the reliability of Pupil Capture. See the New Recording Format section in our documentation.
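The difference can be sketched in a few lines: append each datum to disk as it arrives instead of buffering the whole recording in memory. This is an illustrative sketch using newline-delimited JSON — Pupil itself serializes with msgpack, and `IncrementalWriter` is a hypothetical name, not the actual class:

```python
import json


class IncrementalWriter:
    """Append each datum to disk as soon as it arrives, so memory
    usage stays flat regardless of recording length."""

    def __init__(self, path):
        self._file = open(path, "a")

    def append(self, datum):
        # One serialized datum per line; nothing accumulates in memory.
        self._file.write(json.dumps(datum) + "\n")
        self._file.flush()

    def close(self):
        self._file.close()
```

Compared to the pre-v1.8 approach, a crash mid-recording loses at most the datum currently being written rather than the entire session.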
Automatic recording stop on low disk space
The recorder will show a warning if less than 5GB of disk space is available to the user. Recordings will be stopped gracefully as soon as less than 1GB of disk space is available.
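The two thresholds can be checked with the standard library alone; the sketch below is illustrative and not Pupil's actual implementation (the function name is hypothetical):

```python
import shutil

GB = 1024 ** 3


def disk_space_state(free_bytes):
    """Map available disk space to the recorder behavior described above."""
    if free_bytes < 1 * GB:
        return "stop recording"  # graceful stop below 1GB
    if free_bytes < 5 * GB:
        return "warn user"       # warning below 5GB
    return "ok"


# The free-space value itself comes straight from the standard library:
free = shutil.disk_usage(".").free
```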
Fingertip Calibration – #1218
We introduced a proof-of-concept fingertip calibration with Pupil Capture
v1.6. It was based on traditional computer vision approaches and the fingertip detection was not very stable.
Now we are releasing a revised version that uses convolutional neural networks (CNNs) for hand and fingertip detection. For details, check out our documentation.
Note - The current bundle supports CPU inference only. If you install from source and have an NVIDIA GPU with CUDA 9.0 drivers installed, you can install PyTorch and our fingertip detector will use your GPU!
Deferred deserialization
Prior to v1.8, opening a recording in Pupil Player would read the entire
pupil_data file and deserialize the data into Python objects. This enabled fast processing of the data but also used excessive amounts of memory, leading to software instabilities for users processing recordings of long duration.
Starting with v1.8, Pupil Player only deserializes data when required. This reduces memory consumption dramatically and improves software stability.
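The core idea can be sketched as follows: keep the serialized bytes around and deserialize a datum only when it is first accessed. This is an illustrative sketch using JSON — Pupil serializes with msgpack, and `LazyDatum` is a hypothetical name:

```python
import json


class LazyDatum:
    """Hold serialized bytes; deserialize only on first access."""

    def __init__(self, raw):
        self._raw = raw
        self._obj = None

    def get(self):
        if self._obj is None:  # deserialize at most once, on demand
            self._obj = json.loads(self._raw)
        return self._obj
```

Data that is never accessed is never deserialized, so opening a long recording no longer materializes every datum as a Python object up front.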
Please be aware that the initial upgrade of a recording to the new format can take some time; be patient while the recording is converted.
Temporarily disabled features
We had to disable the following features due to changes in how we handle data within Pupil Player:
- Vis Scan Path
- Manual gaze correction for Gaze From Recording
We are working on a solution and hope to be able to re-enable these within the next release.
- Fixed a bug where Player crashed if
info.csv included non-ASCII characters – #1224
- Correctly reset the last known reference location after stopping the manual marker calibration in Capture – #1206
We have added PyTorch to our dependencies. If you want to make use of GPU acceleration you will have to run Pupil Capture from source and install the GPU version of PyTorch. We will work on bundling GPU supported versions in the future.
New recording format
We had to make changes to our recording format in order to make the incremental serialization and deferred deserialization features possible. Please see our documentation for more details on the New Recording Format.
zmq_tools.Msg_Streamer.send() has been reworked. The previous version required two arguments: topic and payload. The new version only accepts the payload argument and expects the payload to have a topic field instead.
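For code migrating to the new API, the old separate topic argument simply moves into the payload dict. A small helper along these lines (a hypothetical name, shown for illustration) adapts old-style call sites:

```python
def to_v18_payload(topic, payload):
    """Merge the old separate topic argument into the payload dict,
    matching the new single-argument send() contract."""
    return {"topic": topic, **payload}


# Old style: streamer.send("pupil.0", {"timestamp": 1.0})
# New style: streamer.send(to_v18_payload("pupil.0", {"timestamp": 1.0}))
```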
Real-time fixation format changes – #1231
Online fixations are published at a high frequency. Each fixation has a
base_data field that includes the gaze data related to this fixation. In turn, each gaze datum has a
base_data field of its own that includes pupil data. As a result, recordings grew unreasonably fast in size if the
Online Fixation Detector was enabled. For example, in an eleven-minute recording, the
pupil_data file grew to 1.4GB, of which 1.1GB were fixations alone.
As a consequence, we are replacing each gaze datum in the
base_data field of online fixations with
(topic, timestamp) tuples. These uniquely identify their corresponding gaze datum within the gaze data stream.
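Consumers that need the full gaze datums can resolve these tuples against a gaze buffer of their own, keyed by (topic, timestamp). This is a sketch under the assumption that you buffer gaze data yourself; the function and variable names are hypothetical:

```python
def resolve_base_data(fixation, gaze_by_key):
    """Replace each (topic, timestamp) reference in a fixation's
    base_data with the full gaze datum from a local buffer."""
    return [gaze_by_key[tuple(ref)] for ref in fixation["base_data"]]


# While receiving gaze data, build the buffer like so:
# gaze_by_key[(gaze["topic"], gaze["timestamp"])] = gaze
```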