Gaze-Tracking Calibration

Gaze-Tracking Model

Here you can see the HMD components in 3D, which form the underlying model for the gaze tracking. We use this model in a physically accurate ray tracer.

Gaze model components.

At this point the locations of the lenses and the eyes are still unknown. Since the user can freely adjust the lenses, these parameters are derived during the user calibration.

We use a physical eye model based on related work by Deering et al. and Adler et al. It includes a single eyeball diameter, a single iris diameter, and several material parameters. However, the relative locations of the two eyes have to be estimated, since anatomy varies considerably between users.
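As a rough illustration, such a model might be parameterized like this (a minimal sketch; the field names are our own and the defaults are typical anatomical averages, not necessarily the values used in the paper):

```python
from dataclasses import dataclass
from typing import Optional, Tuple


@dataclass
class EyeModel:
    """Physical eye model used by the ray tracer (illustrative sketch).

    Defaults are typical anatomical averages standing in for the
    parameters taken from Deering et al. and Adler et al.
    """
    ball_diameter_mm: float = 24.0          # single eyeball diameter
    iris_diameter_mm: float = 12.0          # single iris diameter
    cornea_refractive_index: float = 1.376  # example material parameter
    # Per-user parameter: eyeball center in HMD coordinates (mm).
    # Unknown until the user calibration estimates it.
    center_mm: Optional[Tuple[float, float, float]] = None
```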

For eye tracking, our system robustly tracks the pupil and then maps the lens-distorted pupil positions to undistorted gaze directions using our calibrated model.

We extract the pupil ellipse at 60 fps for both eyes. Based on the ellipse we compute the pupil position and the pupil size.
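A minimal sketch of such an ellipse-extraction step using OpenCV (the threshold and contour selection here are placeholders; our actual algorithm is more robust, as described below):

```python
import cv2
import numpy as np


def extract_pupil_ellipse(eye_image: np.ndarray):
    """Fit an ellipse to the dark pupil region of a grayscale eye image.

    Illustrative only: threshold and blob selection are placeholders
    for the more robust pipeline described in the paper.
    """
    # The pupil is the darkest region under IR illumination.
    _, mask = cv2.threshold(eye_image, 40, 255, cv2.THRESH_BINARY_INV)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_NONE)
    if not contours:
        return None
    # Take the largest dark blob as the pupil candidate.
    pupil = max(contours, key=cv2.contourArea)
    if len(pupil) < 5:  # cv2.fitEllipse needs at least 5 points
        return None
    (cx, cy), (major, minor), angle = cv2.fitEllipse(pupil)
    position = (cx, cy)                        # pupil position (px)
    size = np.pi * (major / 2) * (minor / 2)   # pupil area (px^2)
    return position, size, angle
```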

Ok, so how does it work? For pupil tracking we implemented a novel tracking algorithm, since we have to deal with several challenges in our HMD setup.
We have to get rid of the glints while tracking; this is solved by several filtering steps.
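One conceivable filter step (a sketch, not the paper's exact filter chain): mask the saturated specular glints and fill them in before segmentation.

```python
import cv2
import numpy as np


def mask_glints(eye_image: np.ndarray, glint_threshold: int = 230):
    """Suppress bright specular glints before pupil segmentation.

    A sketch of one possible filter step; the threshold is a
    placeholder and the paper combines several filters.
    """
    # Glints saturate under IR illumination, so a high threshold finds them.
    _, glints = cv2.threshold(eye_image, glint_threshold, 255,
                              cv2.THRESH_BINARY)
    # Grow the mask slightly so glint halos are covered too.
    glints = cv2.dilate(glints, np.ones((5, 5), np.uint8))
    # Fill masked pixels from their neighborhood so contours stay intact.
    return cv2.inpaint(eye_image, glints, 3, cv2.INPAINT_TELEA)
```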

We also have to be robust against lens distortion as well as strong occlusion caused by the user's eyelashes. We therefore compute an occlusion value and proceed with different algorithms, each optimized for a different degree of occlusion.
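Conceptually, that dispatch could look like this (a sketch: the occlusion measure, the thresholds, and the regime names are our own stand-ins for the algorithms described in the paper):

```python
import numpy as np


def occlusion_value(pupil_mask: np.ndarray, ellipse) -> float:
    """Estimate the occluded fraction of the pupil (sketch).

    Compares the segmented pupil area with the fitted ellipse area;
    eyelashes and eyelids shrink the visible blob. The paper's actual
    occlusion measure may differ.
    """
    (_, _), (major, minor), _ = ellipse
    ellipse_area = np.pi * (major / 2.0) * (minor / 2.0)
    visible_area = float(np.count_nonzero(pupil_mask))
    return float(np.clip(1.0 - visible_area / ellipse_area, 0.0, 1.0))


def select_tracker(occlusion: float) -> str:
    """Pick the tracking variant for the current occlusion regime.

    Thresholds are illustrative placeholders, not the paper's values.
    """
    if occlusion < 0.2:
        return "full-ellipse fit"   # most of the contour is visible
    if occlusion < 0.6:
        return "partial-arc fit"    # fit only on unoccluded arcs
    return "temporal prediction"    # pupil mostly hidden
```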

In the end we get the pupil size and pupil position in camera coordinates. We then map the lens-distorted pupil position to an undistorted screen position, using the data from the calibrated model.
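Conceptually this is a precomputed mapping: the ray tracer produces, for sampled camera-space pupil positions, the corresponding undistorted screen positions, and at runtime we only look up the tracked position. A minimal sketch (the table layout and names are assumptions, and nearest-neighbor lookup stands in for proper interpolation):

```python
import numpy as np


def undistort_pupil(pupil_xy, grid_xy: np.ndarray, screen_xy: np.ndarray):
    """Map a lens-distorted camera-space pupil position to an
    undistorted screen position via a precomputed table.

    `grid_xy` (N, 2) holds the camera positions sampled by the ray
    tracer; `screen_xy` (N, 2) holds the screen positions it computed
    for them. Sketch only: table layout and names are assumptions.
    """
    distances = np.linalg.norm(grid_xy - np.asarray(pupil_xy), axis=1)
    return screen_xy[np.argmin(distances)]
```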

Details on the algorithm are in the research paper.

User Calibration

The user is calibrated with the following procedure. First, the user adjusts the lens controllers to get a comfortable and clear view of the screen. Our algorithm then estimates the lens controller positions automatically by exploiting the visible white rings, and the information is updated in the model.
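Detecting those rings could look roughly like this (a sketch using a Hough circle transform; all parameters are placeholders, and the paper's detection method may differ):

```python
import cv2
import numpy as np


def detect_lens_rings(camera_image: np.ndarray):
    """Detect the white calibration rings on the lens controllers.

    Sketch: a Hough circle transform with placeholder parameters,
    standing in for the estimation described in the paper.
    """
    blurred = cv2.medianBlur(camera_image, 5)
    circles = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1.5,
                               minDist=30, param1=120, param2=40,
                               minRadius=10, maxRadius=80)
    if circles is None:
        return []
    # Each detected ring (x, y, radius) constrains one controller position.
    return [(x, y, r) for x, y, r in np.round(circles[0]).astype(int)]
```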

The red components are user-specific.

We created a guidance algorithm that steers the user toward gaze directions that are characteristic for our setup. We exploit the visible reflections on the cornea and pupil, which result in unique glint configurations, shown here as red circles in the camera images. These gaze directions enable estimating the locations of the user's eyeballs relative to the remaining precalibrated components. To this end, we have ray-traced every possible glint configuration in advance, so at calibration time we just need one lookup for the user's derived glints. Details on that are in the research paper.
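The final lookup step might be sketched as a nearest match over the precomputed table (names, table layout, and the matching metric are assumptions for this sketch):

```python
import numpy as np


def lookup_eyeball_position(observed_glints: np.ndarray,
                            table_glints: np.ndarray,
                            table_positions: np.ndarray):
    """Return the eyeball position whose ray-traced glint configuration
    best matches the observed one.

    `table_glints` (N, K, 2) holds the K glint image positions
    ray-traced for each of N candidate eyeball positions
    (`table_positions`, (N, 3)); `observed_glints` is (K, 2).
    A nearest-neighbor match in glint space stands in for the lookup
    described in the paper.
    """
    errors = np.linalg.norm(table_glints - observed_glints,
                            axis=2).sum(axis=1)
    return table_positions[np.argmin(errors)]
```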

One advantage of this calibration procedure is that the interpupillary distance (IPD) is derived automatically from our calibration model. The correct IPD can then be used in every VR application and helps to avoid motion sickness.
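With both eyeball centers calibrated, deriving the IPD is a single distance computation (sketch, assuming both centers live in the same HMD coordinate frame in millimetres):

```python
import numpy as np


def interpupillary_distance(left_center_mm, right_center_mm) -> float:
    """IPD in mm: the distance between the calibrated eyeball centers."""
    return float(np.linalg.norm(np.asarray(right_center_mm) -
                                np.asarray(left_center_mm)))
```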

We are then able to run the real-time tracking of the pupil based on the eye tracking camera data.

