Sunday, August 17, 2014

The Sensory Explosion

At last week's SIGGRAPH conference, I had the pleasure of contributing a "Sensory Explosion" presentation on the "Sight, Sounds and Sensors" panel.

Below are the key slides and some speaker notes. Enjoy!


The panel featured several contributors:


Sensics has been doing VR for a long time. Historically, it has mostly focused on enterprise applications (government, academia, corporate), and it is now considering how best to leverage its technologies and know-how into larger markets such as medical devices and consumer VR.



Traditionally, head-mounted displays had three key components: a display (or multiple displays), adaptation optics and, most often, an orientation sensor. Most of the effort went into increasing resolution, improving field of view and designing better optics. The orientation sensor was necessary, but it was not the critical component.


Recently, we have seen the HMD evolve into a sensory platform. On top of the traditional core, new types of sensors are emerging: position trackers, eye trackers, cameras (for augmented reality and/or depth sensing), biometric sensors, haptic feedback, sensors that determine hand and finger position in real time, and more. Increasingly, innovation is shifting toward making these sensors deliver maximum performance, the lightest possible weight (after all, they sit on the head) and the utmost power efficiency (both for heat dissipation and for battery life in portable systems).
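
To make the "sensory platform" idea a little more concrete, here is a minimal C++ sketch of how a platform might describe its on-board sensors and tally the head-borne weight and power budget mentioned above. The type names, fields and numbers are entirely hypothetical illustrations, not any actual Sensics interface.

#include <cstdio>
#include <string>
#include <vector>

// Hypothetical categories of on-board HMD sensors discussed above.
enum class SensorKind {
    OrientationTracker, PositionTracker, EyeTracker,
    Camera, DepthSensor, Biometric, HandTracker, Haptics
};

// Hypothetical descriptor: the attributes that matter when budgeting
// performance, weight and power for head-worn hardware.
struct SensorDescriptor {
    SensorKind  kind;
    std::string name;
    double      updateRateHz;
    double      weightGrams;      // it all sits on the user's head
    double      powerMilliwatts;  // heat dissipation and battery life
};

int main() {
    // Illustrative numbers only, not measurements of any real device.
    std::vector<SensorDescriptor> onboard = {
        { SensorKind::OrientationTracker, "IMU",          1000.0,  2.0,  30.0 },
        { SensorKind::PositionTracker,    "optical pose",  100.0, 15.0, 200.0 },
        { SensorKind::EyeTracker,         "eye tracker",    60.0, 10.0, 150.0 },
    };

    double totalWeight = 0.0, totalPower = 0.0;
    for (const auto& s : onboard) {
        totalWeight += s.weightGrams;
        totalPower  += s.powerMilliwatts;
    }
    std::printf("head-borne sensor budget: %.0f g, %.0f mW\n", totalWeight, totalPower);
    return 0;
}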



Above and beyond these on-board sensors, VR applications can now access sensors that are external to the HMD platform. For instance, most users carry a phone that has its own set of sensors, such as a GPS. Some might wear a fitness tracker, or be in a room where a Kinect or some other camera can provide additional information. These sensors give application developers an opportunity to know even more about what the user is doing.



Integrating all these sensors can become complex very quickly. Above is a block diagram of the SmartGoggles(tm) prototype that Sensics built a few years ago. These days there is an even greater variety of sensors, so how do we get a handle on them?



I feel that getting a handle on the explosion of sensors requires a few things:
1. A way to abstract sensors, just like VRPN abstracted motion trackers (see the sketch after this list).
2. A standardized way to discover which sensors are connected to the system.
3. An easy way to configure all these sensors, as well as store the configuration for quick retrieval.
4. A way to map the various sensor events into high-level application events. Just like you might change the mapping of the buttons on a gamepad, you should be able to decide what impact a particular gesture, for instance, has on the application.

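As a concrete illustration of points 1 and 4, here is a minimal VRPN client sketch in C++. VRPN itself does work roughly this way, but the device names ("Tracker0@localhost", "Button0@localhost") and the button-to-action mapping are placeholders chosen for illustration, not code from any specific product.

#include <cstdio>
#include <vrpn_Tracker.h>
#include <vrpn_Button.h>
#include <vrpn_Shared.h>

// Point 1: the application sees an abstract "tracker" and "button" device,
// regardless of which physical sensor sits behind them.
void VRPN_CALLBACK handle_pose(void* /*userData*/, const vrpn_TRACKERCB t)
{
    std::printf("sensor %ld pose: (%.3f, %.3f, %.3f)\n",
                static_cast<long>(t.sensor), t.pos[0], t.pos[1], t.pos[2]);
}

// Point 4, in miniature: map a low-level button event to a named
// application-level action.
void VRPN_CALLBACK handle_button(void* /*userData*/, const vrpn_BUTTONCB b)
{
    if (b.button == 0 && b.state == 1) {
        std::printf("button 0 pressed -> 'grab object' action\n");
    }
}

int main()
{
    // Placeholder device names; a real system would discover these (point 2)
    // and load them from a stored configuration (point 3).
    vrpn_Tracker_Remote tracker("Tracker0@localhost");
    vrpn_Button_Remote  buttons("Button0@localhost");

    tracker.register_change_handler(nullptr, handle_pose);
    buttons.register_change_handler(nullptr, handle_button);

    while (true) {          // pump the VRPN connections
        tracker.mainloop();
        buttons.mainloop();
        vrpn_SleepMsecs(1);
    }
}
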
But beyond this "plumbing", what is really needed is a way to figure out the context of the user, to turn data from various sensors into higher-level information: for instance, to turn the motion data from two hands into the realization that the user is clapping, or to determine that a user is sitting down, or is excited, happy or exhausted.
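
As a toy illustration of that kind of inference (my own sketch, not an algorithm from the talk), the C++ snippet below turns two streams of hand positions into a "clap" event when the hands approach each other quickly and nearly meet. The thresholds are arbitrary illustrative values.

#include <cmath>
#include <cstdio>

struct Vec3 { double x, y, z; };

static double distance(const Vec3& a, const Vec3& b)
{
    const double dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}

// Toy context detector: reports a "clap" when the hands close rapidly
// and nearly touch. Thresholds are illustrative, not tuned values.
class ClapDetector {
public:
    // Returns true when a clap is detected at this sample.
    bool update(const Vec3& leftHand, const Vec3& rightHand, double dtSeconds)
    {
        const double d = distance(leftHand, rightHand);
        const double closingSpeed = (lastDistance_ - d) / dtSeconds; // m/s
        const bool clap = hasLast_ && d < 0.05 && closingSpeed > 1.0 && !wasClose_;
        wasClose_ = d < 0.05;
        lastDistance_ = d;
        hasLast_ = true;
        return clap;
    }

private:
    double lastDistance_ = 0.0;
    bool   hasLast_ = false;
    bool   wasClose_ = false;
};

int main()
{
    ClapDetector detector;
    // Synthetic hand-tracking samples at 100 Hz: hands start 60 cm apart and close.
    for (int i = 0; i <= 20; ++i) {
        const double halfGap = 0.30 - 0.015 * i;   // meters from the midline
        const Vec3 left  { -halfGap, 1.2, 0.4 };
        const Vec3 right {  halfGap, 1.2, 0.4 };
        if (detector.update(left, right, 0.01)) {
            std::printf("clap detected at sample %d\n", i);
        }
    }
    return 0;
}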

We live in exciting times, with significant developments in display technologies, goggles and sensors. I look forward to seeing what the future holds, as well as to making my own contribution to shaping it.
