In this last part of the post I want to show you some applications of our HMD.
This is an example approach to foveated rendering, which saves render time. We render the left and right views at different resolutions; only the foveal region gets the highest acuity. We then blend between the regions to achieve a smooth image.
Here we also lower the saturation in the non-foveal region. This does not save render time, but it shows that you can apply any perceptual effect you need, for example to simulate functional defects of the eye.
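A minimal sketch of the blending idea, assuming we already have a high-resolution foveal sample and a low-resolution peripheral sample for each pixel (the radii and the saturation factor here are illustrative, not our actual parameters):

```python
import math

def blend_weight(px, py, gaze, r_inner=0.10, r_outer=0.25):
    """Smoothstep from 1 (fully foveal) to 0 (fully peripheral)
    based on distance to the gaze point, in normalized coordinates."""
    d = math.hypot(px - gaze[0], py - gaze[1])
    t = min(max((d - r_inner) / (r_outer - r_inner), 0.0), 1.0)
    return 1.0 - (3 * t * t - 2 * t * t * t)  # inverted smoothstep

def shade_pixel(hi_rgb, lo_rgb, px, py, gaze, peripheral_saturation=0.4):
    """Blend high- and low-res samples; optionally desaturate the periphery."""
    w = blend_weight(px, py, gaze)
    rgb = [w * h + (1 - w) * l for h, l in zip(hi_rgb, lo_rgb)]
    # Perceptual effect: pull peripheral colors toward their gray value.
    gray = sum(rgb) / 3.0
    s = w + (1 - w) * peripheral_saturation
    return [gray + s * (c - gray) for c in rgb]
```

In a real renderer this would of course run per fragment on the GPU; the sketch only shows the falloff and blending logic.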
The next application is accommodation simulation. We achieve this by adding depth of field focused at the distance computed from the gaze vector. To make the effect visible here, we exaggerated the depth of field; a realistic experience would use a weaker setting.
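One common way to drive such an effect is the thin-lens circle-of-confusion model, with the focus distance taken from the gaze depth. A sketch under that assumption (the focal length and aperture values are illustrative, and enlarging the aperture is one way to exaggerate the effect as in the demo):

```python
def coc_diameter(obj_dist, focus_dist, focal_len=0.05, aperture=0.025):
    """Thin-lens circle-of-confusion diameter (in meters on the sensor)
    for an object at obj_dist when focused at focus_dist (the gaze depth).
    All distances in meters; larger aperture exaggerates the blur."""
    return (aperture * focal_len * abs(obj_dist - focus_dist)
            / (obj_dist * (focus_dist - focal_len)))
```

The resulting diameter would then be mapped to a per-pixel blur radius in the depth-of-field pass.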
This is another accommodation example, where we additionally visualize the gaze vector to show where the user is looking. As you can see, the user can adapt focus to arbitrary distances. This creates an interesting effect that would not be possible without gaze tracking.
Avatar Animation and Telepresence
Here you can see real-time avatar animation with gaze-control.
This enables deeper immersion and a higher degree of self-expression, and could be used for realistic or social games, for training applications, and for telepresence.
The guy looks a bit sleepy, but that is probably just because I'm not a good artist when it comes to facial animation 🙂
User Studies and Gaze Analysis in Panoramic Videos
Gaze tracking of course also enables more traditional tasks such as user studies, but now in an immersive context. In this example we recorded gaze data while a user watched a video in the HMD. In real time, we can visualize the foveal region using a color-coded heat map.
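The accumulation behind such a heat map can be sketched as splatting each gaze sample into a grid with a Gaussian footprint (the grid size and sigma here are illustrative):

```python
import math

def accumulate_heatmap(gaze_points, width=64, height=64, sigma=2.0):
    """Splat gaze samples (normalized x, y in [0, 1]) into a 2D grid
    with a Gaussian footprint approximating the foveal region."""
    grid = [[0.0] * width for _ in range(height)]
    radius = int(3 * sigma)  # cut the kernel off at 3 sigma
    for gx, gy in gaze_points:
        cx, cy = gx * (width - 1), gy * (height - 1)
        for y in range(max(0, int(cy) - radius), min(height, int(cy) + radius + 1)):
            for x in range(max(0, int(cx) - radius), min(width, int(cx) + radius + 1)):
                d2 = (x - cx) ** 2 + (y - cy) ** 2
                grid[y][x] += math.exp(-d2 / (2 * sigma * sigma))
    return grid
```

For display, the grid values are mapped through a color map and alpha-blended over the video frame.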
Based on the previous idea, we implemented a new framework for gaze analysis of immersive video, which helps to gain insight into attention in VR videos.
This is work from our group by my colleague Thomas Löwe, which is currently being presented at the eye-tracking and visualization workshop in Chicago. For the 360° video shown, we recorded head-tracking and gaze data, and from that we derive a multidimensional scaling (MDS) view, shown as the curves in the lower video. When the curves are close together, the users were looking at the same things; when they are far apart, at different things. This tool can thus help to analyze and improve storytelling in VR video.
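I won't reproduce the framework's exact metric here, but the general recipe for such an MDS view is to compute a pairwise dissimilarity between users' gaze traces and then embed that matrix with standard multidimensional scaling. A sketch of the dissimilarity step, assuming gaze directions are given per frame as (yaw, pitch) angles in radians:

```python
import math

def angular_dist(a, b):
    """Great-circle angle between two gaze directions (yaw, pitch)."""
    ay, ap = a
    by, bp = b
    cos_d = (math.sin(ap) * math.sin(bp)
             + math.cos(ap) * math.cos(bp) * math.cos(ay - by))
    return math.acos(max(-1.0, min(1.0, cos_d)))  # clamp for float safety

def dissimilarity_matrix(traces):
    """Mean angular distance between every pair of users' gaze traces;
    this symmetric matrix would then be embedded with classical MDS."""
    n = len(traces)
    d = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            total = sum(angular_dist(p, q) for p, q in zip(traces[i], traces[j]))
            d[i][j] = d[j][i] = total / min(len(traces[i]), len(traces[j]))
    return d
```

Two users who watch the same regions of the panorama end up close together in the embedding, which is exactly what the curves in the lower video convey.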
You have seen some applications, but I think we have barely scratched the surface.
- Gaze can also serve as an interaction metaphor, for example by supporting the focus area when selecting objects or navigating.
- Gaze-contingent tone mapping enables many new perceptual effects.
- Adaptive visualization can reduce the user's cognitive load while analyzing data.
- Foveated video rendering makes it possible to reduce the bandwidth needed for very high-quality video.
- Augmented virtual reality, as you can see on the right, would enable new interfaces in VR applications.
And this is just a short list for stationary VR. With mobile systems, many more applications become conceivable.
Future Work / ToDo
- full autocalibration
- creation of a mobile system
- testing different cameras (Raspberry Pi Cam)
- testing other mobile systems (e.g. XU3 Droid)
- more applications