How Would a Response in VR Seem Intelligent?

In his Responsive Environments paper, Krueger argued that for an interactive medium to respond intelligently, “it must know as much as possible about what the participant is doing.” The computing machine therefore needs to gather as much multi-sensory information about the user’s inputs as possible, so that its algorithms can produce a corresponding response. The way in which the medium responds also reflects its intelligence, whether it reacts to the user’s position, velocity, or a change of shape.

Extending that concept to VR environments, I believe that capturing users’ inputs with high resolution and accuracy is essential for intelligent responses. These inputs can include the headset’s position, rotation, velocity, and acceleration. They can also come from the controllers (all of the data above, plus click detection, drag detection…). For more premium headsets that support spatial tracking, the user’s position both inside the virtual environment and in the physical play space can also be used.
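As an illustration, here is a minimal sketch of per-frame input collection using the WebXR Device API (assuming a browser or runtime that supports immersive-vr sessions and, for TypeScript, the @types/webxr typings). Head velocity is approximated here by differencing poses between frames rather than read directly from the device.

```ts
// Minimal sketch: poll headset and controller state each frame with WebXR.
// Assumes an immersive-vr capable runtime; types via @types/webxr.

async function startTracking(): Promise<void> {
  if (!navigator.xr) throw new Error("WebXR not available");

  const session = await navigator.xr.requestSession("immersive-vr");
  const refSpace = await session.requestReferenceSpace("local-floor");

  let lastPos: DOMPointReadOnly | null = null;
  let lastTime = 0;

  const onFrame = (time: number, frame: XRFrame) => {
    const viewerPose = frame.getViewerPose(refSpace);
    if (viewerPose) {
      const { position, orientation } = viewerPose.transform;

      // Approximate head velocity by finite-differencing positions across frames.
      if (lastPos && lastTime) {
        const dt = (time - lastTime) / 1000;
        const vx = (position.x - lastPos.x) / dt;
        const vy = (position.y - lastPos.y) / dt;
        const vz = (position.z - lastPos.z) / dt;
        console.log("head velocity (m/s):", vx, vy, vz);
      }
      lastPos = position;
      lastTime = time;
      console.log("head position:", position, "rotation:", orientation);
    }

    // Controllers: pose plus button/axis state (clicks, drags, etc.).
    for (const source of session.inputSources) {
      if (source.gripSpace) {
        const gripPose = frame.getPose(source.gripSpace, refSpace);
        if (gripPose) {
          console.log(source.handedness, "controller at:", gripPose.transform.position);
        }
      }
      if (source.gamepad) {
        const triggerPressed = source.gamepad.buttons[0]?.pressed ?? false;
        console.log(source.handedness, "trigger pressed:", triggerPressed);
      }
    }

    session.requestAnimationFrame(onFrame);
  };

  session.requestAnimationFrame(onFrame);
}
```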

One possibility that promises to be a game changer is eye tracking, which opens many doors for novel interactions and responses because it mimics how we visually perceive in real life. One example of an intelligent response built on eye tracking is foveated rendering. Foveated rendering renders the region the user is looking at in full resolution and the periphery at a lower resolution (where it appears blurrier), which keeps the VR environment feeling realistic while saving rendering bandwidth and thus achieving faster response times. As the user moves their eyes, the focal area shifts accordingly and in a timely manner, making the response an intelligent one.
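To make the idea concrete, here is a hypothetical sketch (not any particular headset’s API) that maps the angular distance between a screen tile and the tracked gaze direction to a resolution scale, so tiles near the fovea render at full resolution and peripheral tiles at progressively lower ones. The angular thresholds are illustrative, not values from the original post.

```ts
// Hypothetical foveated-rendering sketch: choose a resolution scale per screen tile
// based on its angular distance from the gaze direction. Thresholds are illustrative.

interface Vec3 { x: number; y: number; z: number }

function normalize(v: Vec3): Vec3 {
  const len = Math.hypot(v.x, v.y, v.z);
  return { x: v.x / len, y: v.y / len, z: v.z / len };
}

/** Angle (radians) between the gaze direction and the direction to a tile's center. */
function angularDistance(gazeDir: Vec3, tileDir: Vec3): number {
  const a = normalize(gazeDir);
  const b = normalize(tileDir);
  const dot = Math.min(1, Math.max(-1, a.x * b.x + a.y * b.y + a.z * b.z));
  return Math.acos(dot);
}

/** Resolution scale: full detail in the fovea, progressively coarser toward the periphery. */
function resolutionScale(gazeDir: Vec3, tileDir: Vec3): number {
  const deg = (angularDistance(gazeDir, tileDir) * 180) / Math.PI;
  if (deg < 5) return 1.0;   // foveal region: native resolution
  if (deg < 15) return 0.5;  // near periphery: half resolution
  if (deg < 30) return 0.25; // mid periphery: quarter resolution
  return 0.125;              // far periphery: heavily downscaled
}

// Example: a tile 20 degrees off-gaze would be rendered at quarter resolution.
const gaze: Vec3 = { x: 0, y: 0, z: -1 };
const tile: Vec3 = { x: Math.sin((20 * Math.PI) / 180), y: 0, z: -Math.cos((20 * Math.PI) / 180) };
console.log(resolutionScale(gaze, tile)); // 0.25
```

As the gaze point moves each frame, the scale for every tile is recomputed, which is what keeps the high-resolution region locked to where the user is actually looking.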
