Optitrack Calibration Tutorial with Motive

Right-click on the images in this page and open them in a new window to get a larger version.

Hardware setup

The Optitrack system is based on infrared (IR) cameras connected through USB (sometimes Ethernet) to an OptiHub USB hub, itself connected to a dedicated PC running the Motive Tracker software. The lenses of the cameras are surrounded by IR light-emitting diodes (LEDs) which emit IR light along the optical axis of each lens. This light is reflected back in the opposite direction by retro-reflective markers and captured by the cameras. Hence, the cameras only see a set of small dots in their 2D images. The system needs to know the relative position and field of view of each camera, so that it can compute the 3D position of the dots from their 2D positions in the camera images.

Physical setup checklist
Motive Tracker software

Double-click on the icon (see on the right), which should be on the desktop of the Optitrack PC. The application displays a "Quick start" window (see Figure 1). For a first use, this window can simply be closed. When coming back to work, you can "Open Existing Project" to recover the various settings that you saved in a previous session. A general view of the software is illustrated in Figure 2. The screen is organized in window panes that can be closed. Window panes are re-opened by selecting them under the "View" option in the menu bar. In Figure 2, the screen displays the "Cameras", "Perspective View", "Camera Preview" and "Camera Calibration" panes.

Aiming cameras at the center of the tracking volume

The "Cameras" pane is illustrated in Figure 3. Any setting in the Cameras pane can be applied to all cameras at the same time by first selecting the camera group (called "Group 1 (2) (Master)" in Figure 3), or to one particular camera by first selecting that camera under the group. The first step is to point all cameras towards the center of the desired tracking volume. As this would be difficult in the IR spectrum, the cameras are switched to the visible spectrum by clicking on the Visible/IR switch (see Figure 3). Figure 4 shows two dark images in the visible spectrum, in which we can distinguish a computer monitor. Three bright disks are visible: they are markers. There are also some reflections on the side of the monitor; they would cause problems if treated as markers, and will be handled by masking (see below). The brightness of the image can be temporarily increased to facilitate aiming the cameras, for example by increasing the "exposure time" (see Figure 3). There is another way to aim each camera at the center of the tracking volume: each camera has a button on its back.
When this button is pressed, the camera automatically switches to the visible spectrum, its brightness is increased, and its image in the Motive software is set to fill the window. Press the button a second time to set the camera back to IR tracking mode.

Adjusting brightness, masking

When in IR mode, the goal is that only the markers are visible. There are many ways to control the illumination of the scene. You can adjust the camera's LED power or exposure time (Figure 3). The camera's gain can be set to high/medium/low in the "Camera Settings" area of the Cameras pane (Figure 3). In the same area, the "Illumination Type" can be set to either "Strobed IR" (the LEDs are flashed before each camera capture) or "Continuous IR" (the LEDs provide constant illumination). Strobed IR is more powerful and can be used when tracking far-away markers. As a rule of thumb, you can:
There can be specular objects in the scene which reflect light very well towards the cameras. Specular objects are misinterpreted as markers and make marker reconstruction difficult. To remove these spurious reflections from the camera image:
Calibration of the camera positions

Now the system needs to know the relative position of all cameras. This is done by moving the OptiWand in front of the cameras (see Figure 7). The system knows the spatial configuration of the markers on the wand. From the projections of the wand's markers in all camera images, for several positions of the wand in the tracking volume, the system can compute the relative position of the cameras.
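Motive's internal solver is not documented here, but the idea behind turning 2D dots into 3D points once the cameras are calibrated can be sketched. The snippet below is a minimal, illustrative two-view linear triangulation (the classic DLT method), assuming each camera is described by a 3x4 projection matrix, which is exactly the kind of information calibration provides. The matrices and pixel coordinates used are hypothetical.

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one marker seen by two cameras.

    P1, P2: 3x4 camera projection matrices (from calibration).
    x1, x2: 2D image coordinates of the same marker in each camera.
    Returns the 3D point that best reprojects onto both observations.
    """
    # Each observation contributes two linear constraints on the
    # homogeneous 3D point X: x * (P[2] . X) = P[0] . X, etc.
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The solution is the right singular vector with the smallest
    # singular value (the null space of A in the noise-free case).
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]   # de-homogenize

# Hypothetical setup: camera 1 at the origin, camera 2 shifted along x.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
print(triangulate(P1, P2, (0.1, 0.15), (-0.4, 0.15)))  # -> [0.2 0.3 2. ]
```

With more than two cameras, more rows are simply stacked into A, which is why seeing a marker from many cameras improves reconstruction.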
At this point, you can switch to the "Perspective View" (see Figure 4) to check that the camera position calibration went well. Figure 10 illustrates a two-camera calibration. The positions of the camera representations are correct. The three markers of the calibration square have been selected with a rectangular mouse selection, and Motive shows the three projection rays for both cameras. In the Perspective View, use the left mouse button to select objects (such as markers), the right mouse button to rotate the scene, the scroll wheel to zoom in and out, and press the scroll wheel to translate the scene.

Network streaming, reconstruction properties

In the "View" menu, activate various panes for the final settings. In the "Data Streaming" pane, check that everything is as in Figure 11: "Broadcast data frame" checked, markers and rigid bodies streamed, port and addresses correct. In the "Reconstruction" pane, check that "Enable point cloud reconstruction" is checked. You can set the "Residual (mm)" property to 1.0 in case you are working on accurate tracking (cameras at close range).

Rigid bodies

If you need to track the 6 degrees of freedom (DoF) of an object (i.e. 3 DoF for 3D position, 3 DoF for 3D rotation), you need to attach a rigid body to the object. A rigid body is a set of 3 or more markers whose relative positions in the physical world remain constant. Then, you need to teach Motive about the existence of the rigid body and the relative positions of its markers. Select the rigid body's markers in the "Perspective View". Then, in the "Rigid Bodies" pane, click "Create From Selection" (see Figure 11). You can create as many rigid bodies as you need, and give them specific names. At this point, Motive should be sending over the network the positions (3 DoF) of all visible markers, and the position + rotation (6 DoF) of all visible rigid bodies.
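The data configured in the "Data Streaming" pane is sent with OptiTrack's NatNet protocol; the official way to receive it is the NatNet SDK, but a quick sanity check can be done with a plain UDP multicast listener. The sketch below assumes Motive's usual NatNet defaults (multicast address 239.255.42.99, data port 1511, and message ID 7 for a frame of data); use whatever values your Data Streaming pane actually shows.

```python
import socket
import struct

# Assumed Motive/NatNet defaults; match these to the Data Streaming pane.
MULTICAST_ADDR = "239.255.42.99"
DATA_PORT = 1511

def parse_packet_header(data):
    """A NatNet packet starts with a little-endian uint16 message ID
    followed by a uint16 payload size."""
    message_id, payload_len = struct.unpack("<HH", data[:4])
    return message_id, payload_len

def open_data_socket():
    """Join the NatNet multicast group and return a bound UDP socket."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", DATA_PORT))
    mreq = struct.pack("4sl", socket.inet_aton(MULTICAST_ADDR),
                       socket.INADDR_ANY)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    return sock

if __name__ == "__main__":
    sock = open_data_socket()
    data, _ = sock.recvfrom(32768)
    # Message ID 7 is a frame of data (marker and rigid-body positions).
    print("message id:", parse_packet_header(data)[0])
```

If packets arrive at all, the broadcast checkbox, port and addresses are correct; decoding the full payload (marker sets, rigid bodies) is best left to the NatNet SDK, whose format depends on the Motive version.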
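The 6-DoF pose that Motive reports for a rigid body is, conceptually, the rotation and translation that best map the taught marker layout onto the markers currently seen. A standard way to compute such a pose from two marker sets is the Kabsch algorithm; the sketch below (with hypothetical marker coordinates) illustrates the idea, not Motive's actual implementation.

```python
import numpy as np

def rigid_body_pose(template, observed):
    """Least-squares rotation R and translation t such that
    observed ~= template @ R.T + t (Kabsch algorithm via SVD).

    template: Nx3 marker positions taught to the system.
    observed: Nx3 marker positions currently reconstructed.
    """
    ct = template.mean(axis=0)          # centroids
    co = observed.mean(axis=0)
    H = (template - ct).T @ (observed - co)   # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Guard against an improper rotation (reflection).
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = co - R @ ct
    return R, t

# Hypothetical 4-marker rigid body, rotated 90 degrees about z and moved.
template = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0],
                     [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
R_true = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
t_true = np.array([1.0, 2.0, 3.0])
observed = template @ R_true.T + t_true
R, t = rigid_body_pose(template, observed)
```

This also shows why at least 3 non-collinear markers are required: with fewer, the cross-covariance matrix is rank-deficient and the rotation is not uniquely determined.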