Simultaneous Localization and Mapping (SLAM) is a computer vision technique for tracking the motion of a host object in a physical environment. The algorithm works by building a map of the surroundings using on-board sensors, and then estimating the pose (translation, rotation) of the host object relative to that environment. Meta’s SLAM modifies the position of the MetaCameraRig GameObject in Unity based on an estimate of the headset’s position in the physical world. This feature is enabled by default.
When running an application with SLAM tracking, note that there is an initialization period during which the software maps the environment so that it can track motion robustly.
The best results for SLAM tracking are achieved when:
1. SLAM tracking is used in a static environment with distinct features or patterns which break up the homogeneity of the space.
In other words, the more features (clutter) there are to track, the better the tracking quality will be.
2. During the initialization phase, the user should turn their head slowly, first to the left, then to the right, to capture the greatest area of the room that they are facing.
3. The user should refrain from moving to a new environment or location where their surroundings change significantly from where SLAM was initialized.
4. The user maintains a 10–20 inch buffer distance between the camera and the closest objects.
5. Avoid extreme lighting conditions, either too bright (e.g. direct sunlight from a window) or too dark (e.g. only the light from your monitor).
SLAM may still work in these conditions, but if users experience issues it is worthwhile to try adjusting the lighting.
6. Avoid extremely dynamic environments. SLAM discards features detected on moving objects, so scenes dominated by motion leave fewer stable features to track.
Note: To re-initialize environment mapping at any time, use the F4 key.
To get started using SLAM in your own scene:
Begin with a scene which has the MetaCameraRig prefab included in the hierarchy panel of your scene.
1. Select the MetaCameraRig GameObject in the Hierarchy window.
2. Select the localizer method named ‘Meta.SlamLocalizer’ from the dropdown menu.
Your scene is now configured for SLAM tracking.
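The configuration above can be verified at runtime. The sketch below assumes only that the SlamLocalizer component described in this guide is attached to the MetaCameraRig GameObject; it requires the Meta SDK and Unity to run.

```csharp
using UnityEngine;
using Meta; // namespace assumed to contain SlamLocalizer

// Attach to the MetaCameraRig to confirm SLAM tracking is configured.
public class SlamConfigCheck : MonoBehaviour
{
    void Start()
    {
        // SlamLocalizer is the localizer selected from the dropdown in step 2.
        var localizer = GetComponent<SlamLocalizer>();
        if (localizer == null)
        {
            Debug.LogError("SlamLocalizer not found on MetaCameraRig; " +
                           "re-check the localizer dropdown setting.");
        }
        else
        {
            Debug.Log("SLAM tracking is configured.");
        }
    }
}
```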
Digging a Little Deeper
1. Ensure that the component’s current Localizer is set to Meta.SlamLocalizer.
2. Ensure that the Slam Localizer component’s script is set to SlamLocalizer.
There are several callbacks that you can use to know when SLAM has finished initializing.
- “On Slam Sensors Ready” is the initialization callback. Initialization begins almost immediately; moving your head slightly starts the initial mapping.
- “On Slam Mapping in Progress” is the scale-estimation callback. During this phase, the IMU (inertial measurement unit) tracks head rotation, with the gyroscope providing smooth visualization. Tracking then transitions to full visual-inertial SLAM (VISLAM), which fuses angular velocities and translational accelerations from the IMU with the visual SLAM output to produce low-latency pose estimates.
- “On Slam Mapping Complete” is the SLAM-is-working callback: SLAM has finished initializing.
- “On Slam Tracking Lost” is the callback to let you know that the sensors have been blocked and tracking can no longer match the current view against the map.
- “On Slam Tracking Relocalized” is the callback to let you know that the sensors are no longer blocked and tracking has relocalized against the map built during initialization.
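The callbacks above can be wired up from code as well as through the Inspector. This is a minimal sketch; the event field names below (e.g. onSlamSensorsReady) are assumptions based on the Inspector labels and may differ in your SDK version, so treat them as placeholders and check the SlamLocalizer source.

```csharp
using UnityEngine;
using Meta; // namespace assumed to contain SlamLocalizer

// Logs SLAM initialization progress by subscribing to SlamLocalizer events.
public class SlamStatusLogger : MonoBehaviour
{
    [SerializeField] private SlamLocalizer slamLocalizer; // assign in the Inspector

    void OnEnable()
    {
        // Event names are assumptions mirroring the Inspector labels.
        slamLocalizer.onSlamSensorsReady.AddListener(
            () => Debug.Log("SLAM sensors ready; begin slow head movement."));
        slamLocalizer.onSlamMappingComplete.AddListener(
            () => Debug.Log("SLAM finished initializing; tracking active."));
        slamLocalizer.onSlamTrackingLost.AddListener(
            () => Debug.LogWarning("SLAM tracking lost; sensors may be blocked."));
        slamLocalizer.onSlamTrackingRelocalized.AddListener(
            () => Debug.Log("SLAM tracking relocalized."));
    }
}
```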
Map Saving and Loading
The SLAM map itself is not exposed; however, 3D Reconstruction adds saving and loading interfaces to the SlamInterop module, along with options on the SlamLocalizer script to save a map at application shutdown and to load a map at startup.
In any scene with a MetaCameraRig, you can enable Save Map at End on the SlamLocalizer and provide a name for your map in the required text box. After you have run the scan, you should see <your_map_name>.mmf in the Unity project folder. You can then enable Load Map At Init with the same filename.
If the map loads successfully, you should see the MetaCameraRig rotate, along with the SLAM UI telling you to hold still.
Note: If you do not see the MetaCameraRig rotate for an extended period after loading the map, relocalization is unable to find a pose, and you may need to create a new map.
Note: When using this feature for collaboration, the scale estimate of each headset can be different, which means objects may not appear in exactly the same place (there may be up to a ~5 cm difference between two users’ perceived positions of objects).
Note: If you scan a dynamic object (e.g. hands) that later leaves the scene, it will not disappear from the reconstruction even if you continue to scan the same area. This is a known issue.
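The save/load options above can also be set from a script before the localizer initializes. The property names used here (saveMapOnExit, loadMapOnStart, mapName) are hypothetical stand-ins for the Inspector fields Save Map at End and Load Map At Init; check the SlamLocalizer source for the actual identifiers.

```csharp
using UnityEngine;
using Meta; // namespace assumed to contain SlamLocalizer

// Configures SLAM map persistence before the localizer initializes.
public class SlamMapConfig : MonoBehaviour
{
    [SerializeField] private SlamLocalizer slamLocalizer; // assign in the Inspector
    [SerializeField] private string mapName = "office_map"; // produces office_map.mmf

    void Awake()
    {
        // Property names are assumptions mirroring the Inspector labels.
        slamLocalizer.saveMapOnExit = true;   // "Save Map at End"
        slamLocalizer.loadMapOnStart = true;  // "Load Map At Init"
        slamLocalizer.mapName = mapName;
    }
}
```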
When mapping is finished, the environment map is saved along with the denser 3D Reconstruction map, so that when you re-enter a scene, you are placed within those same coordinates.
SLAM produces a pose in 3D space that is passed to 3D Reconstruction.