Stereo rendering

Stereo rendering offers two different images of a virtual 3D scene to the observer, one for each eye, in order to simulate the two images that we get from the real world through our eyes. Compared to a single image, stereo improves depth perception in the virtual scene.

Requirements

To achieve stereo rendering, a system must:

  • know the position of the two eyes with respect to the 3D virtual scene,
  • generate two images, the results of the perspective projection of the scene from each of the two eyes' viewpoints,
  • arrange for the user's left eye to see the left image and not the right image, and vice versa.

Eyes positions

The position of the eyes is sometimes assumed to be static (e.g. 3D movies in theaters). A better depth perception is created if the eyes' positions are tracked in real time. This is called head coupling, because only the head position needs to be tracked: each eye position is fixed with respect to the head, hence it can be computed from the current head position. Tracking the head usually requires specific equipment, such as an optical tracker.
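Since each eye sits at a fixed offset from the head, both eye positions can be derived from the tracked head position. A minimal sketch in C, assuming the tracker gives us the head position and a unit vector pointing to the user's right; the Vec3 type, the function name and the inter-pupillary distance parameter "ipd" (roughly 0.065 m for an average adult) are illustrative, not from the original text:

```c
#include <assert.h>

typedef struct { double x, y, z; } Vec3;

/* Derive one eye position from the tracked head position.
   "right" is a unit vector pointing to the user's right, taken
   from the tracked head orientation; "ipd" is the inter-pupillary
   distance. The left eye is at -ipd/2 along "right", the right
   eye at +ipd/2. */
static Vec3 eye_position(Vec3 head, Vec3 right, double ipd, int left_eye)
{
    double s = left_eye ? -0.5 * ipd : 0.5 * ipd;
    Vec3 e = { head.x + s * right.x,
               head.y + s * right.y,
               head.z + s * right.z };
    return e;
}
```

Each new head report from the tracker yields two eye positions, which then feed the two perspective projections described below.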

Two perspective projections

At first glance, generating the two perspective projections for both eyes is easy: just ask your graphics library (e.g. OpenGL) to project the scene from one particular viewpoint, then change the camera position a little bit and ask for a second projection. This works quite well, but better stereo is achieved if the projection is adapted to the particular configuration of the two eyes, using an asymmetric frustum for each eye.
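An asymmetric (off-axis) frustum can be computed by projecting the edges of the physical screen onto the near plane, as seen from each eye. A sketch in C, assuming the screen is modeled as a rectangle of size (width, height) at distance "dist" in front of the eye, with (ex, ey) the eye's offset from the screen center; the Frustum struct and all names are illustrative:

```c
#include <assert.h>

/* Parameters for glFrustum(l, r, b, t, near, far). */
typedef struct { double l, r, b, t; } Frustum;

static Frustum off_axis_frustum(double width, double height,
                                double dist, double ex, double ey,
                                double near_plane)
{
    /* Scale the screen edges (expressed relative to the eye)
       down to the near plane. */
    double k = near_plane / dist;
    Frustum f = { (-0.5 * width  - ex) * k,
                  ( 0.5 * width  - ex) * k,
                  (-0.5 * height - ey) * k,
                  ( 0.5 * height - ey) * k };
    return f;
}
```

Note that the eye offset shifts the frustum the opposite way: the right eye (positive ex) gets a frustum skewed to the left, so that both frusta converge on the same physical screen rectangle.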

Showing different images to the left eye and the right eye

This is often the most challenging technical problem in interactive stereo. There are various approaches, such as using passive glasses and polarizing the projected light (the movie theater approach). Here, we discuss the use of active shutter glasses.

The user wears active shutter glasses: glasses that can quickly switch between blocking and letting the light through each lens, independently. The challenge is to arrange for the left image to be shown while the right lens of the glasses is blocking, and vice versa: there has to be a synchronization mechanism between the display and the glasses. Stereo systems thus use a small device that is connected to the computer and sends synchronization signals to the shutter glasses. Also, as each eye sees the scene only half of the time, the display must be able to refresh at a higher frame rate than non-stereo systems. Usually, stereo displays update at 120Hz, which offers 60Hz per eye, the equivalent of non-stereo systems.

Professional systems (i.e. expensive)

Professional graphics cards (e.g. NVIDIA Quadro) usually have a stereo port in addition to the display ports. The glasses' synchronization device is plugged in there, and the graphics card and driver are in charge of sending the correct synchronization signal. From the programmer's point of view, the driver offers a quad-buffered rendering context: the program can take as much time as necessary to render into the left and right back buffers. When both images are ready, the program signals it to the driver (a call to swapbuffer) and the card swaps both the left and right back and front buffers.
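With a quad-buffered context, a display callback might look like the sketch below. It assumes a stereo-capable visual was requested at window creation (e.g. with the GLUT_STEREO flag); draw_scene() and the projection setup are placeholders for the application's own code:

```c
#include <GL/glut.h>

extern void draw_scene(void);  /* application's scene rendering (placeholder) */

void display(void)
{
    glDrawBuffer(GL_BACK_LEFT);            /* target the left back buffer  */
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    /* set up the left-eye projection and modelview here */
    draw_scene();

    glDrawBuffer(GL_BACK_RIGHT);           /* target the right back buffer */
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
    /* set up the right-eye projection and modelview here */
    draw_scene();

    glutSwapBuffers();   /* both front/back pairs are swapped together */
}
```

The driver and card then take care of presenting GL_FRONT_LEFT and GL_FRONT_RIGHT in alternation and of driving the glasses through the stereo port.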

Mainstream systems

Mainstream systems (e.g. NVIDIA GeForce) typically don't offer stereo rendering. However, NVIDIA has released the "3D Vision" system for its mainstream graphics cards. It uses a synchronization device that plugs into a USB port, hence it does not require the stereo port of professional cards. The problem is that 3D Vision is a closed system that only runs on MS Windows with the DirectX API (i.e. not OpenGL). But some people have reverse-engineered the device, so that any program can send a message to the synchronization device in order to flip the eyes on the shutter glasses.

The problem becomes: how do we know when to send the "flip eye" signal on the USB bus? We could try just after "swap buffer": in theory, OpenGL has just swapped the buffers and a new image is displayed. But OpenGL's "swap buffer" does not wait for the actual physical refresh, i.e. the new image is not yet displayed in the physical world when we send the signal on the bus. There is a way to wait for the actual swapping of the OpenGL buffers:

 GLsync sync_obj;
 GLenum wait_res;

 glutSwapBuffers();
 // Insert a fence after the swap command, then block until the GPU
 // has actually executed everything before it (timeout: 1 second,
 // expressed in nanoseconds).
 sync_obj = glFenceSync(GL_SYNC_GPU_COMMANDS_COMPLETE, 0);
 wait_res = glClientWaitSync(sync_obj, GL_SYNC_FLUSH_COMMANDS_BIT,
                             (GLuint64)1 * 1000 * 1000 * 1000);
 glDeleteSync(sync_obj);

 // send "swap eye" signal to synchronization device.
Page last modified on November 14, 2016, at 09:59 AM