Pupil Cloud releases

Here you will find a log of all features, changes, bug fixes, and developer notes for Pupil Cloud software.

Pupil Cloud

July 8, 2022

Big updates for Pupil Cloud! A Demo Workspace with sample recordings, projects, and enrichments is now available for everyone to explore, and fixation scanpaths can now be visualized in all video playback in Cloud.

Demo Workspace

Every Pupil Cloud user now has access to our new Demo Workspace. It contains recordings and an example project with enrichments. We encourage everyone to explore it to get familiar with Cloud features and best practices, and to get hands-on with a real-world dataset recorded with Pupil Invisible. We will continue to add more projects over time. Have a use case you'd like to see as a demo? Get in touch!

Learn more about the demo workspace here.

Fixation Scanpath Visualization

We have added a new visualization for fixation scanpaths to all video playback in Pupil Cloud. It shows the sequence of fixations from the last two seconds of the recording. The visualization compensates for head movements to ensure fixations remain in the right location even when the viewpoint changes. (We've got a white paper on the new fixation detection algorithm coming soon - stay tuned!)
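If you work with exported data, a scanpath like this can also be approximated offline. Below is a minimal sketch that draws the fixations from the last two seconds onto a scene-video frame. The column names are assumptions based on the Cloud fixation export, and unlike the Cloud visualization this sketch does not compensate for head movement.

```python
# Illustrative sketch only: unlike the Cloud visualization, this does not
# compensate for head movement. Column names follow the Pupil Cloud fixation
# export format; adjust them if your fixations.csv differs.
import cv2
import pandas as pd

fixations = pd.read_csv("fixations.csv")

def draw_scanpath(frame, fixations, now_ns, window_s=2.0):
    """Draw circles for recent fixations and lines connecting them in order."""
    window_ns = int(window_s * 1e9)
    recent = fixations[
        (fixations["end timestamp [ns]"] >= now_ns - window_ns)
        & (fixations["start timestamp [ns]"] <= now_ns)
    ]
    points = [
        (int(row["fixation x [px]"]), int(row["fixation y [px]"]))
        for _, row in recent.iterrows()
    ]
    for a, b in zip(points, points[1:]):
        cv2.line(frame, a, b, (0, 255, 255), 2)   # connect consecutive fixations
    for p in points:
        cv2.circle(frame, p, 12, (0, 0, 255), 2)  # mark each fixation
    return frame
```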

Pupil Cloud

March 24, 2022

We are excited to announce our latest update for Pupil Cloud, including a new blink detector for Pupil Invisible, visualizations for the Reference Image Mapper enrichment, and quicker access to data downloads in convenient formats!

Blink Detection

We built a brand-new blink detection algorithm and are making it available in Pupil Cloud. Blinks are calculated automatically for recordings on upload to Pupil Cloud and are available in exports. The algorithm analyzes motion patterns in the eye videos to robustly detect blink events, including blink duration. We are planning to open-source the blink detection algorithm in the near future.

Learn more about the algorithm here.

Check out the export format of blink data here.
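As a quick illustration of working with the exported data, here is a minimal sketch that summarizes blinks from a blinks.csv export. The column names are assumptions, so check them against your own file.

```python
# Minimal sketch: summarize exported blink data. Column names are assumed
# to match the Pupil Cloud blink export; verify against your blinks.csv header.
import pandas as pd

blinks = pd.read_csv("blinks.csv")

n_blinks = len(blinks)
mean_duration_ms = blinks["duration [ms]"].mean()

# Blink rate over the spanned interval (timestamps are in nanoseconds).
span_s = (blinks["end timestamp [ns]"].max()
          - blinks["start timestamp [ns]"].min()) / 1e9
print(f"{n_blinks} blinks, mean duration {mean_duration_ms:.0f} ms, "
      f"rate {60 * n_blinks / span_s:.1f} blinks/min")
```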

Reference Image Mapper Visualizations

The Reference Image Mapper enables you to automatically map gaze from the scene video onto a reference image. We want to give users a glimpse into how the algorithm works and a way to inspect the results.

To facilitate that, we added two new visualizations for the Reference Image Mapper to the project editor.

If you select a Reference Image Mapper enrichment in the project editor sidebar, you can enable a side-by-side view of the reference image and the scene video. You can play back gaze on both simultaneously and verify the correctness of the mapping.

Internally, the Reference Image Mapper generates a 3D point-cloud representation of the recorded environment. You can now enable a visualization of this point cloud projected onto the scene video. This allows you to verify that the scene camera's 3D motion has been estimated correctly.
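If you would like to inspect mapped gaze outside of Cloud as well, the following sketch overlays the enrichment's exported gaze onto the reference image. The file and column names are assumptions based on the enrichment's CSV export, so verify them against your own download.

```python
# Sketch: overlay mapped gaze from a Reference Image Mapper export onto the
# reference image. Column names are assumptions; check your export's header.
import cv2
import pandas as pd

image = cv2.imread("reference_image.jpeg")
gaze = pd.read_csv("gaze.csv")

# Keep only samples that were successfully mapped onto the reference image.
mapped = gaze[gaze["gaze detected in reference image"] == True]

for _, row in mapped.iterrows():
    x = int(row["gaze position in reference image x [px]"])
    y = int(row["gaze position in reference image y [px]"])
    cv2.circle(image, (x, y), 8, (0, 0, 255), -1)  # filled gaze dot

cv2.imwrite("gaze_overlay.png", image)
```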

CSV Data Downloads In Drive

You can already use the Raw Data Exporter enrichment to download recordings in convenient formats like CSV and MP4 files. Now you can download these convenient formats directly from Drive! This helps speed up data exploration for those who don't need to create a project.

In the Drive view, clicking the Download button now presents two options. "Download binary recording data" downloads the raw data as recorded on the Companion Device (plus 200 Hz gaze data). The new "Download Recording" option provides CSV data files and videos in convenient formats (including 200 Hz gaze, fixation, and blink data).

Pupil Cloud

December 10, 2021

We are excited to announce our latest release for Pupil Cloud, including a novel fixation detector, advanced privacy features in workspaces, and more! All recordings uploaded after this release will automatically have fixation data.

Fixation Detection

We have developed a novel fixation detector for Pupil Invisible! It was designed to cope with the challenges of head-mounted eye tracking and is one of the most robust algorithms out there!

Traditional fixation detection algorithms are designed for stationary settings and assume minimal head movement. Our algorithm actively compensates for head movements and can detect fixations more reliably in dynamic scenarios. This includes robustness to vestibulo-ocular reflex (VOR) movements.

Fixations are calculated automatically on upload to Pupil Cloud and are available in all exports. We plan to open-source our new fixation detection algorithm in the near future, along with a white paper and an integration into Pupil Player for offline support.

Check out documentation on fixations exported for enrichments in the docs.
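To give a flavor of what head-motion compensation means in practice, here is a toy sketch. It is not our algorithm (the white paper is still to come); it simply illustrates the idea of estimating scene motion with optical flow and subtracting it from the gaze displacement before applying a velocity threshold. All function and parameter choices here are illustrative assumptions.

```python
# Toy illustration of head-motion-compensated fixation detection, NOT the
# Pupil Labs algorithm. Scene motion is estimated with sparse optical flow
# and subtracted from the gaze displacement before thresholding velocity.
import cv2
import numpy as np

def compensated_gaze_velocity(prev_frame, frame, prev_gaze, gaze, dt):
    """Gaze velocity (px/s) with camera/head motion subtracted."""
    prev_gray = cv2.cvtColor(prev_frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Track background features to estimate how the scene moved
    # (assumes trackable features exist in the frame).
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=50,
                                  qualityLevel=0.01, minDistance=10)
    new_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
    flow = (new_pts - pts)[status.flatten() == 1]
    scene_motion = np.median(flow.reshape(-1, 2), axis=0)  # robust estimate
    gaze_motion = np.asarray(gaze) - np.asarray(prev_gaze)
    return np.linalg.norm(gaze_motion - scene_motion) / dt

# Samples below some velocity threshold (e.g. ~100 px/s; tuning required)
# would then be grouped with their neighbors into fixations.
```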

Advanced Workspace Privacy Settings

We are introducing additional privacy settings for workspaces to cover a few specialized use cases that were requested by the community.

You can now disable scene video upload for the entire workspace. This allows users to make use of Pupil Cloud features like the calculation of 200 Hz gaze or fixations, while complying with strict privacy policies that would not allow scene video uploads to our servers.

In a future release we will introduce the ability to automatically blur faces on upload of a recording to a workspace, so that the potentially sensitive original version is never stored in Pupil Cloud.

“Created by” Column in Recordings List

You can now see who uploaded each recording in the new "Created by" column in the Drive view. We hope this makes collaboration within your workspace easier.

Optimized Project Editor Layout

We made more space for the video player at the center and added a toolbar above it. The toolbar contains contextual, enrichment-related functions. These are currently available only for the Marker Mapper enrichment, with more to come in the near future!

Pupil Cloud

September 30, 2021

Big release! We added two powerful new features: Workspaces, which enable collaboration with your colleagues, and Face Mapper, which automatically detects faces in recordings!

Workspaces

Workspaces enable you to collaborate with colleagues through role-based access, from data collection with the Pupil Invisible Companion app to data annotation and enrichment in Pupil Cloud. Workspaces act as isolated spaces that contain recordings, wearers, templates, labels, projects, and enrichments.

Note: Update the Pupil Invisible Companion app to the most recent version (v1.3.0) to make use of workspaces.

Learn more about Workspaces

Face Mapper

Face Mapper is a new enrichment that automatically and robustly detects faces in the scene video and maps gaze data onto the detected faces. This enables you to easily determine when a subject was looking at a face and to compute aggregate statistics (for example: how much time was spent looking at faces?).

Learn more about Face Mapper
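As a simple example of such an aggregate statistic, the sketch below estimates the fraction of gaze samples that landed on a face using the enrichment's gaze export. The "gaze on face" column name is an assumption, so verify it against your export's header.

```python
# Sketch: fraction of time spent looking at faces, from a Face Mapper gaze
# export. The "gaze on face" column name is an assumption; check your file.
import pandas as pd

gaze = pd.read_csv("gaze.csv")
fraction = gaze["gaze on face"].mean()  # boolean column: mean == fraction True
print(f"{100 * fraction:.1f}% of gaze samples were on a face")
```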

Share your thoughts

Have feedback, questions, or feature requests? Send us your thoughts!

Pupil Cloud

July 22, 2021

Head Pitch & Roll Estimates are now available!

The Inertial Measurement Unit (IMU) sensor embedded in the Pupil Invisible Glasses frame provides measurements that tell us how the wearer's head is moving. This can be valuable if, for example, you want to measure when your subject is looking downwards vs. upwards.

In this update we have made the outputs of the IMU much easier to work with. Instead of only reporting the raw outputs of the IMU (rotational speed and translational acceleration), we now include estimates of the absolute pitch and roll of the head.

This is part of the Raw Data Exporter. You can find more details about how the estimates are calculated in the docs!
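As an example of how you might use these estimates, the sketch below flags samples where the head is pitched downward. The file and column names are assumptions based on the Raw Data Exporter output, so check them against your own imu.csv.

```python
# Sketch: find when the wearer's head was pitched downward, using the
# pitch/roll estimates from the Raw Data Exporter. Column names such as
# "pitch [deg]" are assumptions; verify against your imu.csv header.
import pandas as pd

imu = pd.read_csv("imu.csv")

looking_down = imu[imu["pitch [deg]"] < -20]  # threshold is arbitrary
print(f"Head pitched below -20 deg in {len(looking_down)} of {len(imu)} samples")
```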

Information on when the Glasses are worn now included in Raw Data Export

The gaze.csv file included in the Raw Data Export now contains a new column called worn that indicates whether the Pupil Invisible Glasses have been worn by a subject at the respective time point (1.0 = worn, 0.0 = not worn).

This data has previously only been available as part of the binary recording data.
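Filtering on this column is straightforward. For example, the following sketch keeps only the gaze samples recorded while the glasses were worn, using the worn values described above.

```python
# Sketch: keep only gaze samples recorded while the glasses were worn
# (worn column: 1.0 = worn, 0.0 = not worn).
import pandas as pd

gaze = pd.read_csv("gaze.csv")
worn_gaze = gaze[gaze["worn"] == 1.0]
print(f"{len(worn_gaze)} of {len(gaze)} samples recorded while worn")
```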

Improved Export Format of Marker Mapper

We have updated the export format of the Marker Mapper to be easier to use:

- We added a new detected markers column to the aoi_positions.csv file, which contains the IDs of all the markers detected in the respective frame.
- We split the corner coordinates in image column of the aoi_positions.csv file into a separate column per coordinate to make it easier to parse.
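A short sketch of reading the reworked file is below. The exact column names and the delimiter inside the detected markers column are assumptions, so check them against your own export.

```python
# Sketch: inspect the reworked aoi_positions.csv. Exact column names and the
# "detected markers" delimiter are assumptions; check your export's header.
import pandas as pd

aoi = pd.read_csv("aoi_positions.csv")
print(aoi.columns.tolist())  # one column per corner coordinate, plus "detected markers"

# Count how many markers were detected in each frame.
n_markers = aoi["detected markers"].astype(str).str.split(";").str.len()
print(n_markers.describe())
```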

Do you have feedback you would like to share?

Please do so!