# Introduction

The ZED is a camera that reproduces the way human vision works. Using its two “eyes” and triangulation, the ZED builds a three-dimensional understanding of the scene it observes, allowing your application to become space and motion aware.

This guide will show you how to get started. The best way to use the guide is:

1. Read the Getting Started section
2. Read more about the Camera and Sensors features of your camera
3. Learn how to use the Depth, Tracking, Mapping and Spatial AI modules
4. Check out the different Integrations with the ZED
5. Explore the Tutorials and Samples to get started with application development

## Stereo Capture

The ZED is a camera with dual lenses. It captures high-definition 3D video with a wide field of view and outputs two synchronized left and right video streams side-by-side over USB 3.0. Read more about 3D Video Capture.

## Depth Perception

Depth perception is the ability to determine distances between objects and see the world in three dimensions. Until now, depth sensors have been limited to perceiving depth at short range and indoors, restricting their application to gesture control and body tracking. Using stereo vision, the ZED is the first universal depth sensor:

- Depth can be captured at longer ranges, up to 20 m.
- Depth can be captured at frame rates as high as 100 FPS.
- The field of view is much larger, up to 110° (H) x 70° (V).
- The camera works both indoors and outdoors, unlike active sensors such as structured-light or time-of-flight cameras.

Read more about Depth Perception.

## Positional Tracking

Using computer vision and stereo SLAM technology, the ZED also understands its position and orientation in space, offering full 6DoF positional tracking. In VR/AR, this means you can walk around freely and the camera will track your movements anywhere. In robotics, you can reliably determine your robot’s position, orientation, and velocity and have it navigate autonomously to coordinates of your choice on a map. You can access 6DoF motion tracking data through the ZED SDK or its plugins: Unity, ROS…

Read more about Positional Tracking.

## Spatial Mapping

Spatial mapping is the ability to capture a digital model of a scene or an object in the physical world. By merging the real world with the virtual world, it is possible to create convincing mixed reality experiences or robots that understand their environment. The ZED continuously scans its surroundings to reconstruct a 3D map of the real world, refining its understanding over time by fusing new depth and position data. Spatial mapping is available either through the ZEDfu application or the ZED SDK.

Read more about Spatial Mapping.
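
To give a concrete feel for how stereo capture and depth perception are exposed in the ZED SDK, here is a minimal sketch using the SDK’s Python API (pyzed). The enum values and parameters shown (resolution, depth mode, units) are typical choices rather than requirements from this page, and exact names can vary between SDK versions.

```python
# Minimal sketch, assuming the ZED SDK Python API (pyzed):
# open the camera, grab one frame, and read the left image plus the depth map.
import pyzed.sl as sl

zed = sl.Camera()

init_params = sl.InitParameters()
init_params.camera_resolution = sl.RESOLUTION.HD720  # stereo pair captured side-by-side
init_params.depth_mode = sl.DEPTH_MODE.ULTRA         # depth computed from the stereo pair
init_params.coordinate_units = sl.UNIT.METER

if zed.open(init_params) != sl.ERROR_CODE.SUCCESS:
    raise RuntimeError("Could not open the ZED camera")

left = sl.Mat()
depth = sl.Mat()
runtime = sl.RuntimeParameters()

if zed.grab(runtime) == sl.ERROR_CODE.SUCCESS:
    zed.retrieve_image(left, sl.VIEW.LEFT)         # rectified left image
    zed.retrieve_measure(depth, sl.MEASURE.DEPTH)  # per-pixel depth in meters
    # Read the depth of the center pixel
    err, center_depth = depth.get_value(depth.get_width() // 2, depth.get_height() // 2)
    print("Depth at image center: {:.2f} m".format(center_depth))

zed.close()
```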
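
Positional tracking follows the same pattern: enable the module, then query the camera pose after each grab. The sketch below assumes the same pyzed API; the loop length and the WORLD reference frame are illustrative choices.

```python
# Minimal sketch, assuming the ZED SDK Python API (pyzed):
# enable 6DoF positional tracking and read the camera pose each frame.
import pyzed.sl as sl

zed = sl.Camera()
init_params = sl.InitParameters()
init_params.coordinate_units = sl.UNIT.METER
if zed.open(init_params) != sl.ERROR_CODE.SUCCESS:
    raise RuntimeError("Could not open the ZED camera")

zed.enable_positional_tracking(sl.PositionalTrackingParameters())

pose = sl.Pose()
runtime = sl.RuntimeParameters()
for _ in range(100):  # arbitrary number of frames for illustration
    if zed.grab(runtime) == sl.ERROR_CODE.SUCCESS:
        state = zed.get_position(pose, sl.REFERENCE_FRAME.WORLD)
        if state == sl.POSITIONAL_TRACKING_STATE.OK:
            t = pose.get_translation(sl.Translation()).get()  # x, y, z in meters
            q = pose.get_orientation(sl.Orientation()).get()  # orientation quaternion
            print("Position:", t, "Orientation:", q)

zed.disable_positional_tracking()
zed.close()
```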
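
Spatial mapping builds on tracking: depth and pose are fused into a map while you grab frames, and the result can be extracted as a mesh. This is a sketch under the same pyzed assumption; the number of scanned frames and the output filename are hypothetical, and the mapping parameters are left at their defaults.

```python
# Minimal sketch, assuming the ZED SDK Python API (pyzed):
# fuse depth and pose over time into a mesh, then save it.
# Spatial mapping requires positional tracking to be enabled first.
import pyzed.sl as sl

zed = sl.Camera()
init_params = sl.InitParameters()
init_params.coordinate_units = sl.UNIT.METER
if zed.open(init_params) != sl.ERROR_CODE.SUCCESS:
    raise RuntimeError("Could not open the ZED camera")

zed.enable_positional_tracking(sl.PositionalTrackingParameters())
zed.enable_spatial_mapping(sl.SpatialMappingParameters())

runtime = sl.RuntimeParameters()
for _ in range(500):       # scan for a few hundred frames (illustrative)
    zed.grab(runtime)      # depth and pose are fused into the map internally

mesh = sl.Mesh()
zed.extract_whole_spatial_map(mesh)  # blocking call: retrieve the fused mesh
mesh.save("scene.obj")               # hypothetical output path

zed.disable_spatial_mapping()
zed.disable_positional_tracking()
zed.close()
```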