The ZED SDK v2.0 has been completely redesigned and restructured into distinct modules: Video, Depth, Positional Tracking and Spatial Mapping. Each module can be configured through dedicated parameters. Support functions have also been introduced and memory management has been simplified. The new API is much easier to use and integrate.
Since there are a lot of changes compared to ZED SDK v1.2, this tutorial will help you switch easily from 1.2 to 2.0.
The philosophy of the ZED SDK is still the same (you still have a camera object to handle your ZED) and most of the functions of v1.2 are still available. However, most of them have either been renamed or had their parameters modified.
New image containers (Mat) have been introduced to simplify data handling and memory management: you will find the new definitions in Core.hpp or Types.hpp. The ‘zed’ namespace has been removed and merged into the single namespace ‘sl’. Samples have been refactored as well, and each module now has a dedicated sample that demonstrates the usage of the new API.
The ZED SDK is still installed in Program Files/ZED SDK/ on Windows and /usr/local/zed/ on Linux. The content of each folder (include, lib and samples) has changed:
The include folder now contains an sl folder with:

- Camera.hpp: the default file to include in your project
- Mesh.hpp: new file to handle the mesh output of the spatial mapping API. Automatically included by Camera.hpp
- Core.hpp: new file that contains the basic classes and structures used in the ZED SDK. Automatically included by Camera.hpp
- types.hpp: new file that contains the generic types used in the ZED SDK. Automatically included by Camera.hpp
- defines.hpp: previously called GlobalDefines.hpp in 1.2. Automatically included by Camera.hpp

Note: To protect any code you may have stored in the ZED SDK directory, the previous ‘ZED SDK’ directory is preserved and renamed ‘ZED SDK.old’ during the installation of 2.0. Note however that it is not recommended to keep your own code in this directory.
Enumerate values are now prefixed with the name of their enum, following the pattern ENUM_NAME_ENUM_VALUE. For example, in the CAMERA_SETTINGS enum, the saturation value is now called CAMERA_SETTINGS_SATURATION.
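As a quick illustration, here is a minimal sketch of the new naming in use; it simply combines enum values and functions that appear elsewhere in this guide (the saturation value 4 is a placeholder):

// Sketch of the ENUM_NAME_ENUM_VALUE convention (assumes 'zed' is an opened camera)
sl::Mat left;
zed->retrieveImage(left, sl::VIEW_LEFT, sl::MEM_CPU);      // VIEW and MEM values carry their enum prefix
zed->setCameraSettings(sl::CAMERA_SETTINGS_SATURATION, 4); // CAMERA_SETTINGS values as well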
Each module now takes a dedicated xxxParameters class as input:

- sl::InitParameters: contains the parameters that must be chosen during initialization. (Previously called sl::InitParams)
- sl::TrackingParameters: contains the parameters to enable and configure the Motion Tracking module in the ZED SDK. (Previously called sl::TrackingParams)
- sl::SpatialMappingParameters: contains the parameters to enable and configure the spatial mapping module.
- sl::RuntimeParameters: contains the parameters that can be changed at any time during use when calling grab(). (Previously called GrabParams)

Functions now return a status (SPATIAL_MAPPING_STATUS, ERROR_CODE, …) and not an object anymore. Outputs (such as an sl::Mat, for example) are now passed by reference.
For example, the retrieveImage function was previously called with the following code:

sl::Mat imageLeft_cpu = zed->retrieveImage(sl::LEFT);
sl::Mat imageLeft_gpu = zed->retrieveImage_gpu(sl::LEFT);

It is now called this way:

sl::Mat imageLeft_cpu, imageLeft_gpu;
ERROR_CODE err = zed->retrieveImage(imageLeft_cpu, sl::VIEW_LEFT, sl::MEM_CPU);
ERROR_CODE err2 = zed->retrieveImage(imageLeft_gpu, sl::VIEW_LEFT, sl::MEM_GPU);

This means you can now check that a function has succeeded before using the Mat result.
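The dedicated parameter classes follow the same pattern of being passed to the module they configure. Here is a minimal sketch; the enableTracking() and enableSpatialMapping() function names are assumptions based on the modules these classes configure, and all members are left at their defaults:

// Sketch: how each xxxParameters class is handed to its module
// (names of the enable functions are assumed for illustration)
sl::Camera zed;
sl::InitParameters init_params;                   // configured once, passed to open()
if (zed.open(init_params) == sl::SUCCESS) {
    sl::TrackingParameters tracking_params;       // passed when enabling positional tracking
    zed.enableTracking(tracking_params);
    sl::SpatialMappingParameters mapping_params;  // passed when enabling spatial mapping
    zed.enableSpatialMapping(mapping_params);
    sl::RuntimeParameters runtime_params;         // can be adjusted at every grab() call
    zed.grab(runtime_params);
}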
We have refactored sl::Mat
to provide a simpler way to handle images both on CPU and GPU. Our goal is to allow any developer to switch easily between CPU and GPU functions during development.
sl::Mat is now defined in sl/Core.hpp. It can handle float or unsigned char values and up to 4 channels. The MAT_FORMAT enum sets the format type: for example, you can choose to create an sl::Mat with the format MAT_FORMAT_1_FLOAT (a float Mat with 1 channel, typically a depth map) or MAT_FORMAT_4_UCHAR (a uchar Mat with 4 channels, typically an RGBA image). Compared to the previous 1.2 SDK, the channel number and the DATA_TYPE are now merged.

sl::Mat can now handle both CPU and GPU memory. You can easily copy data from GPU to CPU and from CPU to GPU using the updateCPUfromGPU() and updateGPUfromCPU() functions.

sl::Mat memory allocation is now managed on both CPU and GPU through the alloc() function: its MEM_TYPE parameter defines whether the Mat belongs to CPU or GPU memory.

Example:
sl::Mat gpuImage_;
sl::Mat cpuImage_;
sl::Mat cpuMeasure_;
gpuImage_.alloc(w, h, sl::MAT_FORMAT_4_UCHAR, sl::MEM_GPU);   // Create an RGBA image on the GPU
cpuImage_.alloc(w, h, sl::MAT_FORMAT_4_UCHAR, sl::MEM_CPU);   // Create an RGBA image on the CPU
cpuMeasure_.alloc(w, h, sl::MAT_FORMAT_1_FLOAT, sl::MEM_CPU); // Create a 32-bit measure (depth) on the CPU
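To copy data between the two memory spaces of a Mat, a minimal sketch using the synchronization functions mentioned above could look like this (it assumes they take no arguments and that both sides are allocated as needed):

// Sketch: keep the CPU and GPU copies of the same Mat in sync
sl::Mat image;
image.alloc(w, h, sl::MAT_FORMAT_4_UCHAR, sl::MEM_GPU); // allocated on the GPU first
// ... GPU processing fills the image ...
image.updateCPUfromGPU();  // copy the GPU buffer to the CPU side for reading or saving
// ... CPU-side modifications ...
image.updateGPUfromCPU();  // push the CPU changes back to the GPU buffer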
Compared to 1.2, there is now only a single default constructor. You can declare a zed (sl::Camera) object and the default constructor is called.
sl::Camera* pzed = new sl::Camera(); // initialization
sl::Camera zed; // same behavior
To initialize the ZED, you still need to call a function that opens the camera with the sl::InitParameters structure. This function has been renamed open(...) but behaves in a similar way to the previous init(...) function. A close() function is now available to close the camera; the camera can be reopened by calling open() again.
Example:
sl::InitParameters param;
param.coordinate_units = sl::UNIT_METER;
ERROR_CODE res = zed.open(param); // check if initialization succeeded
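Since close() is now available, a minimal sketch of closing and later reopening the camera could be:

zed.close();                        // release the camera
ERROR_CODE res2 = zed.open(param);  // reopen it later with the same (or new) parameters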
About SVO and Live mode: the choice between SVO and Live mode was previously made in the constructor. It is now controlled by parameters of the InitParameters class:

- sl::RESOLUTION camera_resolution: selects the camera Live resolution.
- int camera_fps: selects the camera Live fps.
- sl::String svo_input_filename: if you want to work in SVO/offline mode, just enter an SVO file name. Leave it empty if you are working in Live mode.

Leaving the InitParameters empty will provide default parameters in Live mode.
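For example, a minimal sketch of both configurations (the RESOLUTION value name and the SVO file name are placeholders for illustration):

// Live mode: pick a resolution and framerate, leave the SVO path empty
sl::InitParameters live_params;
live_params.camera_resolution = sl::RESOLUTION_HD720; // assumed enum value name
live_params.camera_fps = 60;

// SVO/offline mode: just provide the recorded file name
sl::InitParameters svo_params;
svo_params.svo_input_filename = "recording.svo";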
Members of InitParameters
have been renamed according to the modules they are attached to.
SDK parameters:

v1.2 | v2.0 |
--- | --- |
verbose | sdk_verbose |
device | sdk_gpu_id |

Camera parameters:

v1.2 | v2.0 |
--- | --- |
disableSelfCalib | camera_disable_self_calib |
vflip | camera_image_flip |
- | camera_buffer_count_linux (NEW) |
- | camera_resolution (NEW) |
- | camera_fps (NEW) |

Depth parameters:

v1.2 | v2.0 |
--- | --- |
mode | depth_mode |
minimumDistance | depth_minimum_distance |

Coordinate parameters:

v1.2 | v2.0 |
--- | --- |
unit | coordinate_units |
coordinate | coordinate_system |
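As an illustration, a minimal sketch setting a few of the renamed members (the DEPTH_MODE value name is assumed, the other values are arbitrary examples):

sl::InitParameters param;
param.sdk_verbose = true;                      // was 'verbose' in 1.2
param.camera_image_flip = true;                // was 'vflip' in 1.2
param.depth_mode = sl::DEPTH_MODE_PERFORMANCE; // was 'mode' in 1.2; value name assumed for illustration
param.coordinate_units = sl::UNIT_METER;       // was 'unit' in 1.2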
Retrieving an image or a measure through the ZED SDK has been simplified: normalizeMeasure() and getView() have been merged into the retrieveImage() function. retrieveImage() now takes a VIEW parameter instead of a SIDE (as in 1.2).
A single function can now get images or measures from CPU or GPU memory. You just need to specify the MEM_TYPE
in the last parameter of the retrieve function.
In the following example, we retrieve a left image, a side by side image, a point cloud and a displayable depth image (8-bit, for visualization only) from both CPU and GPU memory.
With ZED SDK 1.2:
// Left image (CPU and GPU)
sl::zed::Mat left_image_cpu_ = zed->retrieveImage(sl::zed::SIDE::LEFT);              // left image on CPU
sl::zed::Mat left_image_gpu_ = zed->retrieveImage_gpu(sl::zed::SIDE::LEFT);          // left image on GPU
// Side by Side image (CPU and GPU)
sl::zed::Mat sbs_image_cpu_ = zed->getView(sl::zed::VIEW_MODE::STEREO_SBS);          // side by side image on CPU
sl::zed::Mat sbs_image_gpu_ = zed->getView_gpu(sl::zed::VIEW_MODE::STEREO_SBS);      // side by side image on GPU
// Point Cloud XYZRGBA
sl::zed::Mat point_cloud_cpu_ = zed->retrieveMeasure(sl::zed::MEASURE::XYZRGBA);     // point cloud on CPU
sl::zed::Mat point_cloud_gpu_ = zed->retrieveMeasure_gpu(sl::zed::MEASURE::XYZRGBA); // point cloud on GPU
// Depth map image (only for display)
sl::zed::Mat image_depth_cpu_ = zed->normalizeMeasure(sl::zed::MEASURE::DEPTH);      // displayable depth image on CPU
sl::zed::Mat image_depth_gpu_ = zed->normalizeMeasure_gpu(sl::zed::MEASURE::DEPTH);  // displayable depth image on GPU
With ZED SDK 2.0:
sl::Mat left_image_cpu_, left_image_gpu_, sbs_image_cpu_, sbs_image_gpu_, point_cloud_cpu_, point_cloud_gpu_, image_depth_cpu_, image_depth_gpu_;
ERROR_CODE res;
// Left image (CPU and GPU)
res = zed->retrieveImage(left_image_cpu_, sl::VIEW_LEFT, sl::MEM_CPU);
res = zed->retrieveImage(left_image_gpu_, sl::VIEW_LEFT, sl::MEM_GPU);
// Side by Side image (CPU and GPU)
res = zed->retrieveImage(sbs_image_cpu_, sl::VIEW_SIDE_BY_SIDE, sl::MEM_CPU);
res = zed->retrieveImage(sbs_image_gpu_, sl::VIEW_SIDE_BY_SIDE, sl::MEM_GPU);
// Point Cloud XYZRGBA
res = zed->retrieveMeasure(point_cloud_cpu_, sl::MEASURE_XYZRGBA, sl::MEM_CPU);
res = zed->retrieveMeasure(point_cloud_gpu_, sl::MEASURE_XYZRGBA, sl::MEM_GPU);
// Depth map image (only for display)
res = zed->retrieveImage(image_depth_cpu_, sl::VIEW_DEPTH, sl::MEM_CPU);
res = zed->retrieveImage(image_depth_gpu_, sl::VIEW_DEPTH, sl::MEM_GPU);
Note: Mat memory can be allocated with the alloc() function before calling a retrieve function. However, if the memory is not allocated (Mat default constructor) or is incorrectly allocated (wrong MAT_FORMAT, size or MEM_TYPE), the API will correctly reallocate the Mat memory during the first call of the retrieve function. Since reallocation takes some time, you should expect the first call to take longer than subsequent ones.
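A minimal sketch of pre-allocating the destination Mat before the grab loop, so the first retrieve call does not pay that reallocation cost (the resolution values are placeholders and grab() is assumed to use default runtime parameters):

// Sketch: pre-allocate the destination Mat with the expected format, size and memory type
sl::Mat left_image;
left_image.alloc(1280, 720, sl::MAT_FORMAT_4_UCHAR, sl::MEM_CPU); // placeholder resolution
if (zed->grab() == sl::SUCCESS)
    zed->retrieveImage(left_image, sl::VIEW_LEFT, sl::MEM_CPU);   // no reallocation needed here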
Camera information (available through the CameraInformation structure) now regroups all the information related to the ZED camera: calibration parameters, firmware version and serial number. As a result, three functions of ZED SDK 1.2 have been merged into a single one in 2.0.
With ZED SDK 1.2:
sl::zed::StereoParameters* calibration_parameters = zed->getParameters();
float focale_x = calibration_parameters->left_cam.fx; // Focal length in pixels of the left camera (after image alignment/rectification)
float baseline = calibration_parameters->baseline;    // Distance between the left and right cameras
int zed_fw_version = zed->getZEDFirmware();            // Current firmware version inside the ZED
int zed_serial_number = zed->getZEDSerial();            // ZED serial number
With ZED SDK 2.0:
sl::CameraInformation infos = zed->getCameraInformation();
float focale_x = infos.calibration_parameters.left_cam.fx; // Focal length in pixels of the left camera (after image alignment/rectification)
float baseline = infos.calibration_parameters.T.x;         // Distance between the left and right cameras
int zed_fw_version = infos.firmware_version;               // Current firmware version inside the ZED
int zed_serial_number = infos.serial_number;               // ZED serial number
The camera framerate can now be controlled using setCameraFPS() and getCameraFPS(). In ZED SDK 1.2, those functions were setFPS() and getCurrentFPS().

getCurrentFPS() returns the effective framerate of the ZED, computed from the callback time (the time between two successful grab() calls) and based on the camera timestamp. Since it takes computation time into account, getCurrentFPS() will return an fps lower than getCameraFPS().
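A minimal sketch of the two getters side by side (it assumes an opened camera object named zed and that both functions return a numeric fps value):

zed.setCameraFPS(60);                      // request 60 fps from the camera
if (zed.grab() == sl::SUCCESS) {
    float requested = zed.getCameraFPS();  // the configured camera fps (60 here)
    float effective = zed.getCurrentFPS(); // measured fps; includes computation time, so usually lower
}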
In ZED SDK 2.0 we have introduced the possibility to set the camera white balance to a specific color temperature. Two new entries have been added to the CAMERA_SETTINGS enum: CAMERA_SETTINGS_WHITEBALANCE and CAMERA_SETTINGS_AUTO_WHITEBALANCE.
To access camera settings (contrast, hue, brightness, exposure…), get and set functions are now available: setCameraSettings() and getCameraSettings().
Example with ZED SDK 1.2:
zed->setCameraSettingsValue(sl::CAMERA_SETTINGS::CONTRAST, 5);
int current_hue = zed->getCameraSettingsValue(sl::CAMERA_SETTINGS::HUE);
With ZED SDK 2.0:
zed->setCameraSettings(sl::CAMERA_SETTINGS::CAMERA_SETTINGS_CONTRAST, 5);
int current_hue = zed->getCameraSettings(sl::CAMERA_SETTINGS::CAMERA_SETTINGS_HUE);
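The two new white balance entries mentioned above can be used with the same setCameraSettings() function. Here is a minimal sketch; the need to disable the automatic mode first and the color temperature value are assumptions for illustration:

zed->setCameraSettings(sl::CAMERA_SETTINGS_AUTO_WHITEBALANCE, 0); // assumed: disable automatic white balance first
zed->setCameraSettings(sl::CAMERA_SETTINGS_WHITEBALANCE, 4600);   // placeholder color temperature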
Depth range control functions have been renamed but have the same behavior as in 1.2.
v1.2 | v2.0 |
--- | --- |
setDepthClampValue(float) | setDepthMaxRangeValue(float) |
float getDepthClampValue() | float getDepthMaxRangeValue() |
float getClosestDepthValue() | float getDepthMinRangeValue() |
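For instance, a minimal sketch using the renamed functions (the distance values are placeholders, expressed in the coordinate units chosen at initialization):

zed.setDepthMaxRangeValue(10.f);               // clamp depth estimation at 10 units (placeholder)
float max_range = zed.getDepthMaxRangeValue(); // read back the current maximum range
float min_range = zed.getDepthMinRangeValue(); // read the minimum depth that can be estimated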
Moving from ZED SDK 1.2 to 2.0 should be quick and easy, bringing many performance improvements along with additional features. For more information, have a look at our new online documentation or contact our Support team. We’ll be happy to help you make the switch to 2.0.
Start building exciting new applications that recognize and understand your environment.