# Tutorial - Using Depth Perception

This tutorial shows how to retrieve depth and point cloud data from the stereo images and print the distance to a given pixel in the terminal. The program loops until 50 frames have been grabbed. We assume that you have followed the previous tutorials: Hello ZED and Image Capture.

## Getting Started

- First, download the latest version of the ZED SDK.
- Download the Depth Sensing sample code in C++, Python or C#.

## Code Overview

### Open the camera

As in previous tutorials, we create, configure and open the ZED camera. The camera opens in its default HD720 mode at 60 fps; we enable depth in PERFORMANCE mode and set the coordinate units to millimeters. For more information on this parameter, see Depth Modes.

**C++**

```cpp
// Create a ZED camera
Camera zed;

// Create configuration parameters
InitParameters init_params;
init_params.sdk_verbose = true; // Enable verbose logging
init_params.depth_mode = DEPTH_MODE::PERFORMANCE; // Set the depth mode to performance (fastest)
init_params.coordinate_units = UNIT::MILLIMETER; // Use millimeter units

// Open the camera
ERROR_CODE err = zed.open(init_params);
if (err != ERROR_CODE::SUCCESS) {
    cout << "Error " << err << ", exit program.\n"; // Display the error
    return -1;
}
```

**Python**

```python
# Create a ZED camera
zed = sl.Camera()

# Create configuration parameters
init_params = sl.InitParameters()
init_params.sdk_verbose = True  # Enable verbose logging
init_params.depth_mode = sl.DEPTH_MODE.PERFORMANCE  # Set the depth mode to performance (fastest)
init_params.coordinate_units = sl.UNIT.MILLIMETER  # Use millimeter units

# Open the camera
err = zed.open(init_params)
if err != sl.ERROR_CODE.SUCCESS:
    print("Error {}, exit program".format(err))  # Display the error
    exit()
```

**C#**

```csharp
// Create a ZED camera
Camera zed = new Camera();

// Create configuration parameters
InitParameters init_params = new InitParameters();
init_params.depthMode = DEPTH_MODE.PERFORMANCE; // Set the depth mode to performance (fastest)
init_params.coordinateUnits = UNIT.MILLIMETER; // Use millimeter units

// Open the camera
ERROR_CODE err = zed.Open(ref init_params);
if (err != ERROR_CODE.SUCCESS) {
    Console.WriteLine("Error " + err + ", exit program."); // Display the error
    Environment.Exit(-1);
}
```

Note: By default, depth perception runs in ULTRA mode. If you want to use ULTRA mode, you do not need to set the depth mode in InitParameters.

### Capture Image and Depth

Now that the ZED is opened, we can capture images and depth. Here we loop until we have successfully captured 50 images. Retrieving the depth map is as simple as retrieving an image:

- We create a Mat to store the depth map.
- We call retrieveMeasure() to get the depth map.

**C++**

```cpp
// Capture 50 images and depth, then stop
int i = 0;
sl::Mat image, depth, point_cloud;
sl::RuntimeParameters runtime_parameters; // Use default runtime parameters

while (i < 50) {
    // Grab an image
    if (zed.grab(runtime_parameters) == ERROR_CODE::SUCCESS) {
        // A new image is available if grab() returns ERROR_CODE::SUCCESS
        zed.retrieveImage(image, VIEW::LEFT); // Get the left image
        zed.retrieveMeasure(depth, MEASURE::DEPTH); // Retrieve depth matrix. Depth is aligned on the left RGB image.
        zed.retrieveMeasure(point_cloud, MEASURE::XYZRGBA); // Retrieve colored point cloud
        i++;
    }
}
```

**Python**

```python
# Capture 50 images and depth, then stop
i = 0
image = sl.Mat()
depth = sl.Mat()
point_cloud = sl.Mat()
runtime_parameters = sl.RuntimeParameters()

while i < 50:
    # Grab an image
    if zed.grab(runtime_parameters) == sl.ERROR_CODE.SUCCESS:
        # A new image is available if grab() returns sl.ERROR_CODE.SUCCESS
        zed.retrieve_image(image, sl.VIEW.LEFT)  # Get the left image
        zed.retrieve_measure(depth, sl.MEASURE.DEPTH)  # Retrieve depth matrix. Depth is aligned on the left RGB image.
        zed.retrieve_measure(point_cloud, sl.MEASURE.XYZRGBA)  # Retrieve colored point cloud
        i = i + 1
```

**C#**

```csharp
// Capture 50 images and depth, then stop
int i = 0;
Mat image = new Mat();
Mat depth = new Mat();
Mat point_cloud = new Mat();
uint mWidth = (uint)zed.ImageWidth;
uint mHeight = (uint)zed.ImageHeight;
image.Create(mWidth, mHeight, MAT_TYPE.MAT_8U_C4, MEM.CPU); // Mat needs to be created before use.
depth.Create(mWidth, mHeight, MAT_TYPE.MAT_32F_C1, MEM.CPU); // Mat needs to be created before use.
point_cloud.Create(mWidth, mHeight, MAT_TYPE.MAT_32F_C4, MEM.CPU); // Mat needs to be created before use.
RuntimeParameters runtime_parameters = new RuntimeParameters(); // Use default runtime parameters

while (i < 50) {
    // Grab an image
    if (zed.Grab(ref runtime_parameters) == ERROR_CODE.SUCCESS) {
        // A new image is available if Grab() returns ERROR_CODE.SUCCESS
        zed.RetrieveImage(image, VIEW.LEFT); // Get the left image
        zed.RetrieveMeasure(depth, MEASURE.DEPTH); // Retrieve depth matrix. Depth is aligned on the left RGB image.
        zed.RetrieveMeasure(point_cloud, MEASURE.XYZRGBA); // Retrieve colored point cloud
        i++;
    }
}
```

For more information on depth and point cloud parameters, read Using the Depth API.
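Beyond reading single pixels, the retrieved measures can be processed as whole arrays. The snippet below is a minimal sketch, not part of the official sample: it assumes the `depth` and `point_cloud` variables from the Python loop above and uses `sl.Mat.get_data()` to view a measure as a NumPy array (dtype and layout may vary slightly between SDK versions).

```python
import numpy as np

# Assumes `depth` and `point_cloud` are the sl.Mat objects retrieved in the loop above.
depth_np = depth.get_data()        # float array of shape (height, width), in the chosen units (mm here)
cloud_np = point_cloud.get_data()  # float array of shape (height, width, 4): X, Y, Z and packed RGBA

# Pixels where stereo matching fails come back as NaN or +/-inf, so mask them before computing statistics.
valid = np.isfinite(depth_np)
print("Valid depth pixels: {} / {}".format(int(valid.sum()), depth_np.size))
if valid.any():
    print("Median depth: {:.1f} mm".format(float(np.median(depth_np[valid]))))
```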
### Measure Distance in Point Cloud

Now that we have retrieved the point cloud, we can extract the distance to a specific pixel. In this example, we take the point at the center of the image (width/2, height/2) and compute its Euclidean distance to the camera.

**C++**

```cpp
// Get and print distance value in mm at the center of the image
// We measure the distance camera - object using Euclidean distance
int x = image.getWidth() / 2;
int y = image.getHeight() / 2;
sl::float4 point_cloud_value;
point_cloud.getValue(x, y, &point_cloud_value);

float distance = sqrt(point_cloud_value.x * point_cloud_value.x +
                      point_cloud_value.y * point_cloud_value.y +
                      point_cloud_value.z * point_cloud_value.z);

printf("Distance to Camera at (%d, %d): %f mm\n", x, y, distance);
```

**Python**

```python
# Get and print distance value in mm at the center of the image
# We measure the distance camera - object using Euclidean distance
x = round(image.get_width() / 2)
y = round(image.get_height() / 2)
err, point_cloud_value = point_cloud.get_value(x, y)

distance = math.sqrt(point_cloud_value[0] * point_cloud_value[0] +
                     point_cloud_value[1] * point_cloud_value[1] +
                     point_cloud_value[2] * point_cloud_value[2])

print("Distance to Camera at ({0}, {1}): {2} mm".format(x, y, distance), end="\r")
```

**C#**

```csharp
// Get and print distance value in mm at the center of the image
// We measure the distance camera - object using Euclidean distance
int x = image.ImageWidth / 2;
int y = image.ImageHeight / 2;
point_cloud.GetValue(x, y, out float4 point_cloud_value);

float distance = (float)Math.Sqrt(point_cloud_value.x * point_cloud_value.x +
                                  point_cloud_value.y * point_cloud_value.y +
                                  point_cloud_value.z * point_cloud_value.z);

Console.WriteLine("Distance to Camera at (" + x + ", " + y + "): " + distance + " mm");
```
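Note that stereo depth is not defined everywhere: at occluded, reflective or texture-less pixels the point cloud contains NaN or infinite values, so the computed distance should be validated before use. Here is a minimal sketch of such a check in Python (not part of the original sample), reusing the `x`, `y` and `point_cloud` variables from the snippet above:

```python
import math

err, point_cloud_value = point_cloud.get_value(x, y)
if err == sl.ERROR_CODE.SUCCESS:
    distance = math.sqrt(point_cloud_value[0] ** 2 +
                         point_cloud_value[1] ** 2 +
                         point_cloud_value[2] ** 2)
    if math.isfinite(distance):
        print("Distance to Camera at ({0}, {1}): {2:.1f} mm".format(x, y, distance))
    else:
        print("No valid depth at ({0}, {1})".format(x, y))
```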
We can also use the depth map directly, instead of the point cloud, to read the depth value at a specific pixel.

**C++**

```cpp
int x = image.getWidth() / 2;
int y = image.getHeight() / 2;
float depth_value;
depth.getValue(x, y, &depth_value);

printf("Depth to Camera at (%d, %d): %f mm\n", x, y, depth_value);
```

**Python**

```python
x = round(image.get_width() / 2)
y = round(image.get_height() / 2)
err, depth_value = depth.get_value(x, y)

print("Depth to Camera at ({0}, {1}): {2} mm".format(x, y, depth_value), end="\r")
```

**C#**

```csharp
int x = image.ImageWidth / 2;
int y = image.ImageHeight / 2;
depth.GetValue(x, y, out float depth_value);

Console.WriteLine("Depth to Camera at (" + x + ", " + y + "): " + depth_value + " mm");
```

### Close the camera

Once 50 frames have been grabbed, we close the camera.

**C++**

```cpp
// Close the camera
zed.close();
```

**Python**

```python
# Close the camera
zed.close()
```

**C#**

```csharp
// Close the camera
zed.Close();
```

## Advanced Example

To learn how to retrieve and display the live point cloud from the camera and adjust the depth confidence filters, check the Point Cloud Viewer sample code.

## Next Steps

At this point, you know how to retrieve image and depth data from ZED stereo cameras. Depending on your application, you have multiple paths forward:

- Read the Sensors tutorial to learn how to retrieve onboard sensor data.
- Read the Camera Tracking tutorial to learn how to track the motion of your depth camera in 3D space.
- Read the Object Detection tutorial to see how to detect and track people in 3D.
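For reference, the snippets in this tutorial can be assembled into one minimal Python script. The sketch below is not the official Depth Sensing sample; it simply strings together the steps shown above (open the camera, grab 50 frames, print the distance at the image center, close), assuming the `pyzed` package installed with the ZED SDK.

```python
import math
import pyzed.sl as sl

def main():
    zed = sl.Camera()

    # Configure depth perception as in the tutorial above
    init_params = sl.InitParameters()
    init_params.depth_mode = sl.DEPTH_MODE.PERFORMANCE
    init_params.coordinate_units = sl.UNIT.MILLIMETER

    if zed.open(init_params) != sl.ERROR_CODE.SUCCESS:
        print("Failed to open the camera, exit program")
        return

    image = sl.Mat()
    depth = sl.Mat()
    point_cloud = sl.Mat()
    runtime_parameters = sl.RuntimeParameters()

    frames = 0
    while frames < 50:
        if zed.grab(runtime_parameters) != sl.ERROR_CODE.SUCCESS:
            continue
        zed.retrieve_image(image, sl.VIEW.LEFT)
        zed.retrieve_measure(depth, sl.MEASURE.DEPTH)
        zed.retrieve_measure(point_cloud, sl.MEASURE.XYZRGBA)

        # Distance from the camera to the object seen at the image center
        x = image.get_width() // 2
        y = image.get_height() // 2
        err, p = point_cloud.get_value(x, y)
        distance = math.sqrt(p[0] * p[0] + p[1] * p[1] + p[2] * p[2])
        if math.isfinite(distance):
            print("Distance to Camera at ({}, {}): {:.1f} mm".format(x, y, distance))
        frames += 1

    # Close the camera once 50 frames have been grabbed
    zed.close()

if __name__ == "__main__":
    main()
```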