Optitrack Calibration Tutorial with Motive

Right-click on the images in this page and open them in a new window to get a larger version.

Hardware setup

The OptiTrack system is based on infrared (IR) cameras connected through USB (sometimes Ethernet) to an OptiHub USB hub, itself connected to a dedicated PC running the Motive Tracker tracking software.

The lenses of the cameras are surrounded by IR light-emitting diodes (LEDs) which send IR light in the direction of the lens's optical axis. This light is reflected back in the opposite direction by retro-reflective markers and captured by the cameras. Hence, the cameras only see a set of small dots in their 2D images. The system needs to know the relative position and field of view of each camera so that it can compute the 3D position of the dots from their 2D positions in the camera images.
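The reconstruction idea can be sketched as follows. This is a hypothetical illustration, not Motive's actual algorithm: once two calibrated cameras each turn a 2D dot into a 3D ray (camera center plus direction), the marker position can be estimated as the midpoint of the shortest segment between the two rays.

```python
# Minimal triangulation sketch (assumed simplification of what a multi-camera
# tracker does): estimate a 3D point from two rays that nearly intersect.

def closest_point_between_rays(c1, d1, c2, d2):
    """c1, c2: camera centers; d1, d2: ray directions (3-tuples)."""
    def dot(u, v):
        return sum(ui * vi for ui, vi in zip(u, v))
    w0 = tuple(p - q for p, q in zip(c1, c2))
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w0), dot(d2, w0)
    denom = a * c - b * b          # 0 when the rays are parallel
    t = (b * e - c * d) / denom    # parameter along ray 1
    s = (a * e - b * d) / denom    # parameter along ray 2
    p1 = tuple(ci + t * di for ci, di in zip(c1, d1))
    p2 = tuple(ci + s * di for ci, di in zip(c2, d2))
    return tuple((u + v) / 2 for u, v in zip(p1, p2))  # midpoint

# Two rays that intersect at (0, 0, 1):
print(closest_point_between_rays((-1, 0, 0), (1, 0, 1),
                                 (1, 0, 0), (-1, 0, 1)))  # → (0.0, 0.0, 1.0)
```

With real, noisy cameras the two rays never exactly intersect, which is why the midpoint (and, in Motive, a residual expressed in mm) is used.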

Physical setup checklist

  1. Connect at least 2 cameras to the OptiHub with USB cables. We have 3m and 5m cables.
  2. Plug the OptiHub's power supply into the OptiHub.
  3. Connect the OptiHub to the OptiTrack PC with a USB uplink cable (it has a square plug on one side).
  4. Plug the USB hardware key into the PC (the key should be stored in the camera box).

Motive Tracker software

Double-click on the icon (see on the right), which should be on the desktop of the OptiTrack PC.

The application displays a "Quick start" window (see Figure 1). For a first use, this window can simply be closed. In later sessions, you can "Open Existing Project" to recover the various settings that you saved previously.

A general view of the software is shown in Figure 2. The screen is organized in window panes that can be closed. Window panes are re-opened by selecting them in the "View" menu in the menu bar. In Figure 2, the screen displays the "Cameras", "Perspective View", "Camera Preview" and "Camera Calibration" panes.


Figure 1

Figure 2

Aiming cameras at the center of the tracking volume

The "Cameras" pane is shown in Figure 3. Any setting of this pane can be applied to all cameras at the same time by first selecting the camera group (called "Group 1 (2) (Master)" in Figure 3), or to one particular camera by first selecting that camera under the group.

The first step is to point all cameras towards the center of the desired tracking volume. As this would be difficult in the IR spectrum, the cameras are switched to the visible spectrum by clicking on the Visible/IR switch (see Figure 3). Figure 4 shows two dark images in the visible spectrum. We can distinguish a computer monitor. Three bright disks are visible: they are markers. There are also some reflections on the sides of the monitor; they would cause problems if treated as markers, so they will be handled by masking (see below). The brightness of the image can be temporarily increased to make aiming the cameras easier, for example by increasing the "Exposure time" (see Figure 3).

There is another way to aim each camera at the center of the tracking volume: each camera has a button on its back. When this button is pressed, the camera is automatically switched to the visible spectrum, its brightness is increased, and its image in the Motive software is set to fill the window. Press the button a second time to set the camera back to IR tracking mode.


Figure 3

Figure 4

Adjusting brightness, masking

When in IR mode, the goal is that only the markers are visible. There are many ways to control the illumination of the scene. You can adjust the camera's LED power or exposure time (Figure 3). The camera's gain can be set to high/medium/low in the "Camera Settings" area of the Cameras pane (Figure 3). In the same area, the "Illumination type" can be set to either "Strobed IR" (the LEDs are flashed before each camera capture) or "Continuous IR" (the LEDs provide constant illumination). Strobed IR is more powerful and can be used when tracking far-away markers.

As a rule of thumb, you can:

  1. set the video type to "Precision mode"
  2. set the "Gain" (Range) depending on your setup
  3. set the "Illumination type" to "Continuous IR"
  4. set the LED power to maximum
  5. decrease the "Exposure time" until all markers in the scene are clearly visible, and everything else is not. Place the most "challenging" markers in the scene: small and far from the cameras. It may not be possible to remove all non-markers from the camera images; this will be handled by "masking" (see below).
  6. increase the threshold to a value where the markers are always visible, while removing noise. The effect of the threshold is only visible when the cameras are in IR tracking mode, not in visible mode.
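Conceptually, the threshold step turns each camera image into a handful of bright blobs (the candidate markers). The sketch below is a hypothetical illustration of that idea, not Motive's internal code: pixels below the threshold are discarded, and the remaining bright pixels are grouped into 4-connected blobs.

```python
# Hypothetical sketch of what a brightness threshold does to an IR image:
# keep only pixels above the threshold, then count connected bright blobs.

def detect_dots(image, threshold):
    """image: 2D list of brightness values (0-255). Returns the blob count."""
    h, w = len(image), len(image[0])
    seen = [[False] * w for _ in range(h)]
    blobs = 0
    for y in range(h):
        for x in range(w):
            if image[y][x] >= threshold and not seen[y][x]:
                blobs += 1
                stack = [(y, x)]  # flood-fill this blob
                while stack:
                    cy, cx = stack.pop()
                    if (0 <= cy < h and 0 <= cx < w
                            and image[cy][cx] >= threshold and not seen[cy][cx]):
                        seen[cy][cx] = True
                        stack += [(cy + 1, cx), (cy - 1, cx),
                                  (cy, cx + 1), (cy, cx - 1)]
    return blobs

frame = [[10, 10, 200, 10],
         [10, 10, 210, 10],
         [40, 10, 10, 220],
         [10, 10, 10, 230]]
print(detect_dots(frame, 100))  # → 2 (the dim pixel of value 40 is noise)
print(detect_dots(frame, 30))   # → 3 (too low: the noise pixel becomes a dot)
```

This is why the threshold is tuned until the markers are always detected while dim reflections fall below it.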

There can be specular objects in the scene which reflect light very well towards the cameras. Specular objects are misinterpreted as markers and make marker reconstruction difficult. To remove specular reflections from the camera image:

  1. identify specular objects and remove them from the scene, or, if removal is not possible as in Figure 4,
  2. mask the specular objects: remove any markers from the scene and press the "Mask visible regions" button (see Figure 4). At this moment, Motive considers everything that is visible as "noise" and creates masks in every camera image: these areas will then be considered unsuitable for tracking. Figure 5 shows the red masks on the specular reflections on the sides of the monitor in Figure 4, left.

Figure 5

Calibration of the camera positions

Now the system needs to know the relative position of all cameras. This is done by moving the OptiWand in front of the cameras (see Figure 7). The system knows the spatial configuration of the markers on the wand. From the projections of the wand's markers in all camera images, for several positions of the wand in the tracking volume, the system can compute the relative position of the cameras.

  1. Open the "Camera Calibration" pane (from the "View" menu).
  2. Check that the size of your OptiWand corresponds to the value in the "Wand Options" area (see Figure 6, right).
  3. Click "Start Wanding" and move the wand in front of the cameras. When the wand is seen by all cameras, Motive displays a line in the camera images (see Figure 6, left). Move the wand to achieve a good coverage of the surface of all camera images.
  4. After some time, when the number of wand samples is sufficient, the "Wanding" rectangle turns green (see Figure 6, bottom-right). Click "Calculate".
  5. The system optimizes the camera positions. After some time, when the "Calibration Engine" shows a sufficiently high "Overall Quality" (Figure 8), press "Apply Result". The relative position of the cameras is now calibrated. Motive offers to save the wanding data; you don't need to save it.
  6. The last step is to give an origin to the world coordinate system. Place the Calibration Square (Figure 9) in the scene, and click "Set Ground Plane". Motive offers to save the camera calibration data; you don't need to save it.
  7. Save the overall project (<File><Save Project>): this will save the whole camera setup (illumination settings, etc.) and the camera calibration. In fact, every time you fix something in Motive, remember to save it in the project. This way, you will not have to redo the settings the next time you launch Motive.

Figure 6

Figure 7

Figure 8

Figure 9

At this point, you can switch to the "Perspective View" (see Figure 4) to check that the camera position calibration went well. Figure 10 illustrates a two-camera calibration. The cameras are represented at their computed positions. The 3 markers of the calibration square have been selected with a rectangular mouse selection, and Motive shows the 3 projection rays for both cameras. In the perspective view, use the left mouse button to select objects (such as markers), the right mouse button to rotate the scene, the scroll wheel to zoom in and out, and press the scroll wheel to translate the scene.


Figure 10

Network streaming, reconstruction properties

In the "View" menu, activate various panes for the final settings. In the "Data Streaming" pane, check that everything is as in Figure 11: broadcast data frame checked, markers and rigid bodies streamed, port and addresses correct. In the "Reconstruction" pane, check that "Enable point cloud reconstruction" is checked. You can set the "Residual (mm)" property to 1.0 if you are working on accurate tracking (cameras at close range).


Figure 11
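On the receiving side, the streamed data can be read with the NatNet SDK or a small custom client. The sketch below makes several assumptions that you should check against your Motive version and the NatNet SDK documentation: data is sent by UDP multicast on 239.255.42.99, port 1511, and each packet starts with a 2-byte little-endian message ID followed by a 2-byte payload size, with ID 7 marking a frame of tracking data.

```python
import socket
import struct

# Hypothetical NatNet receiver sketch; verify the multicast address, port,
# and packet layout against your Motive version and the NatNet SDK.

NAT_FRAMEOFDATA = 7  # assumed message ID for a frame of tracking data

def parse_header(packet):
    """Return (message_id, payload_size) from a raw NatNet packet."""
    message_id, payload_size = struct.unpack('<HH', packet[:4])
    return message_id, payload_size

def listen(group='239.255.42.99', port=1511):
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(('', port))
    # Join the multicast group Motive streams to.
    mreq = struct.pack('4sl', socket.inet_aton(group), socket.INADDR_ANY)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    while True:
        packet, _ = sock.recvfrom(65535)
        message_id, size = parse_header(packet)
        if message_id == NAT_FRAMEOFDATA:
            print('frame of data,', size, 'payload bytes')
```

Decoding the payload itself (marker positions, rigid body poses) depends on the NatNet protocol version; the SDK's sample depacketization client is the reference for that.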

Rigid bodies

If you need to track the 6 degrees of freedom (DoF) of an object (i.e. 3 DoF for 3D position, 3 DoF for 3D rotation), you need to attach a rigid body to the object. A rigid body is a set of 3 or more markers whose relative positions in the physical world remain constant. Then you need to teach Motive about the existence of the rigid body and the relative positions of its markers. Select the rigid body's markers in the "Perspective View". Then, in the "Rigid Bodies" pane, click "Create From Selection" (see Figure 11). You can create as many rigid bodies as you need, and give them specific names.

At this point, Motive should be sending over the network the position (3 DoF) of all visible markers and the position+rotation (6 DoF) of all visible rigid bodies.
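The defining property of a rigid body, that the pairwise distances between its markers stay constant as the object moves, can be illustrated with a short sketch. This is a hypothetical example, not Motive's matching algorithm: it compares the sorted pairwise-distance "signature" of a learned marker set against the markers seen in a new frame.

```python
import itertools
import math

# Hypothetical rigid-body check: pairwise marker distances are invariant
# under rotation and translation, so they can identify a learned body.

def distance_signature(markers):
    """Sorted pairwise distances between 3D marker positions."""
    return sorted(math.dist(p, q)
                  for p, q in itertools.combinations(markers, 2))

def matches(body, markers, tol=1e-6):
    sig_a, sig_b = distance_signature(body), distance_signature(markers)
    return len(sig_a) == len(sig_b) and all(
        abs(a - b) <= tol for a, b in zip(sig_a, sig_b))

learned = [(0, 0, 0), (1, 0, 0), (0, 2, 0)]
# The same body after a 90° rotation about the z axis plus a translation:
seen = [(5, 3, 1), (5, 4, 1), (3, 3, 1)]
print(matches(learned, seen))  # → True
```

This also explains why marker placement matters: two rigid bodies with identical marker spacings would be indistinguishable, so asymmetric, unique arrangements are used.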

Page last modified on October 17, 2016, at 01:48 PM