This class is the main interface with the camera and the SDK features, such as video, depth, tracking, and mapping.
Find more information in the detailed description below. A standard program will use the Camera class like this:
Functions

Camera ()
    Default constructor which creates an empty Camera object. Parameters will be set when calling open(init_param) with the desired InitParameters.
~Camera ()
    The Camera destructor will call the close() function and clear the memory previously allocated by the object.
ERROR_CODE open (InitParameters init_parameters=InitParameters())
    Opens the ZED camera from the provided InitParameters. This function will also check the hardware requirements and run a self-calibration.
InitParameters getInitParameters ()
    Returns the init parameters used. Corresponds to the structure sent when the open() function was called.
bool isOpened ()
    Reports if the camera has been successfully opened. It has the same behavior as checking if open() returns SUCCESS.
void close ()
    If open() has been called, this function will close the connection to the camera (or the SVO file) and free the corresponding memory.
ERROR_CODE grab (RuntimeParameters rt_parameters=RuntimeParameters())
    This function will grab the latest images from the camera, rectify them, and compute the measurements based on the RuntimeParameters provided (depth, point cloud, tracking, etc.). As measures are created in this function, its execution can last a few milliseconds, depending on your parameters and your hardware.
RuntimeParameters getRuntimeParameters ()
    Returns the runtime parameters used. Corresponds to the structure sent when the grab() function was called.
CameraInformation getCameraInformation (Resolution image_size=Resolution(0, 0))
    Returns the calibration parameters, serial number and other information about the camera being used.
void updateSelfCalibration ()
    Performs a new self-calibration process.
CUcontext getCUDAContext ()
    Gets the Camera-created CUDA context for sharing it with other CUDA-capable libraries. This can be useful for sharing GPU memory. If you're looking for the opposite mechanism, where an existing CUDA context is given to the Camera, please check InitParameters::sdk_cuda_ctx.
CUstream getCUDAStream ()
ERROR_CODE findPlaneAtHit (sl::uint2 coord, sl::Plane &plane)
    Checks the plane at the given left image coordinates.
ERROR_CODE findFloorPlane (sl::Plane &floorPlane, sl::Transform &resetTrackingFloorFrame, float floor_height_prior=INVALID_VALUE, sl::Rotation world_orientation_prior=sl::Matrix3f::zeros(), float floor_height_prior_tolerance=INVALID_VALUE)
    Detects the floor plane of the scene.
ERROR_CODE getCurrentMinMaxDepth (float &min, float &max)
    Gets the current range of perceived depth.
Camera (const Camera &)=delete
    The Camera object cannot be copied. Therefore, its copy constructor is disabled. If you need to share a Camera instance across several threads or objects, please consider using a pointer.
Video

ERROR_CODE retrieveImage (Mat &mat, VIEW view=VIEW::LEFT, MEM type=MEM::CPU, Resolution image_size=Resolution(0, 0))
    Retrieves images from the camera (or SVO file).
int getCameraSettings (VIDEO_SETTINGS settings)
    Returns the current value of the requested camera setting (gain, brightness, hue, exposure, etc.).
ERROR_CODE getCameraSettings (VIDEO_SETTINGS settings, Rect &roi, sl::SIDE side=sl::SIDE::BOTH)
    Overloaded function for VIDEO_SETTINGS::AEC_AGC_ROI which takes a Rect as parameter.
void setCameraSettings (VIDEO_SETTINGS settings, int value=VIDEO_SETTINGS_VALUE_AUTO)
    Sets the value of the requested camera setting (gain, brightness, hue, exposure, etc.).
ERROR_CODE setCameraSettings (VIDEO_SETTINGS settings, Rect roi, sl::SIDE side=sl::SIDE::BOTH, bool reset=false)
    Overloaded function for VIDEO_SETTINGS::AEC_AGC_ROI which takes a Rect as parameter.
float getCurrentFPS ()
    Returns the current framerate at which the grab() method is successfully called. The returned value is based on the difference of camera timestamps between two successful grab() calls.
Timestamp getTimestamp (sl::TIME_REFERENCE reference_time)
    Returns the timestamp in the requested TIME_REFERENCE.
unsigned int getFrameDroppedCount ()
    Returns the number of frames dropped since grab() was called for the first time.
int getSVOPosition ()
    Returns the current playback position in the SVO file.
void setSVOPosition (int frame_number)
    Sets the playback cursor to the desired frame number in the SVO file.
int getSVONumberOfFrames ()
    Returns the number of frames in the SVO file.
Depth Sensing

ERROR_CODE retrieveMeasure (Mat &mat, MEASURE measure=MEASURE::DEPTH, MEM type=MEM::CPU, Resolution image_size=Resolution(0, 0))
    Computed measures, like depth, point cloud, or normals, can be retrieved using this method.
ERROR_CODE setRegionOfInterest (sl::Mat &roi_mask)
    Defines a region of interest to focus on for all the SDK, discarding other parts.
Positional Tracking

ERROR_CODE enablePositionalTracking (PositionalTrackingParameters tracking_parameters=PositionalTrackingParameters())
    Initializes and starts the positional tracking processes.
POSITIONAL_TRACKING_STATE getPosition (Pose &camera_pose, REFERENCE_FRAME reference_frame=REFERENCE_FRAME::WORLD)
    Retrieves the estimated position and orientation of the camera in the specified reference frame.
ERROR_CODE saveAreaMap (String area_file_path)
    Saves the current area learning file. The file will contain spatial memory data generated by the tracking.
AREA_EXPORTING_STATE getAreaExportState ()
    Returns the state of the spatial memory export process.
ERROR_CODE resetPositionalTracking (const Transform &path)
    Resets the tracking and re-initializes the position with the given transformation matrix.
void disablePositionalTracking (String area_file_path="")
    Disables the positional tracking.
bool isPositionalTrackingEnabled ()
    Tells if the tracking module is enabled.
PositionalTrackingParameters getPositionalTrackingParameters ()
    Returns the positional tracking parameters used. Corresponds to the structure sent when the enablePositionalTracking() function was called.
ERROR_CODE getSensorsData (SensorsData &data, TIME_REFERENCE reference_time)
    Retrieves the sensors data (IMU, magnetometer, barometer) at a specific time reference.
ERROR_CODE setIMUPrior (const sl::Transform &transform)
    Sets an optional IMU orientation hint that will be used to assist the tracking during the next grab().
Spatial Mapping

ERROR_CODE enableSpatialMapping (SpatialMappingParameters spatial_mapping_parameters=SpatialMappingParameters())
    Initializes and starts the spatial mapping processes.
SPATIAL_MAPPING_STATE getSpatialMappingState ()
    Returns the current spatial mapping state.
void requestSpatialMapAsync ()
    Starts the spatial map generation process in a non-blocking thread from the spatial mapping process.
ERROR_CODE getSpatialMapRequestStatusAsync ()
    Returns the spatial map generation status. This status allows you to know if the mesh can be retrieved by calling retrieveSpatialMapAsync().
ERROR_CODE retrieveSpatialMapAsync (Mesh &mesh)
    Retrieves the current generated spatial map only if SpatialMappingParameters::map_type was set to SPATIAL_MAP_TYPE::MESH.
ERROR_CODE retrieveSpatialMapAsync (FusedPointCloud &fpc)
    Retrieves the current generated spatial map only if SpatialMappingParameters::map_type was set to SPATIAL_MAP_TYPE::FUSED_POINT_CLOUD.
ERROR_CODE extractWholeSpatialMap (Mesh &mesh)
    Extracts the current spatial map from the spatial mapping process only if SpatialMappingParameters::map_type was set to SPATIAL_MAP_TYPE::MESH.
ERROR_CODE extractWholeSpatialMap (FusedPointCloud &fpc)
    Extracts the current spatial map from the spatial mapping process only if SpatialMappingParameters::map_type was set to SPATIAL_MAP_TYPE::FUSED_POINT_CLOUD.
void pauseSpatialMapping (bool status)
    Pauses or resumes the spatial mapping processes.
void disableSpatialMapping ()
    Disables the spatial mapping process.
SpatialMappingParameters getSpatialMappingParameters ()
    Returns the spatial mapping parameters used. Corresponds to the structure sent when the enableSpatialMapping() function was called.
Recording

ERROR_CODE enableRecording (RecordingParameters recording_parameters)
    Creates an SVO file to be filled by record().
RecordingStatus getRecordingStatus ()
    Gets the recording information.
void pauseRecording (bool status)
    Pauses or resumes the recording.
void disableRecording ()
    Disables the recording initiated by enableRecording() and closes the generated file.
RecordingParameters getRecordingParameters ()
    Returns the recording parameters used. Corresponds to the structure sent when the enableRecording() function was called.
Streaming

ERROR_CODE enableStreaming (StreamingParameters streaming_parameters=StreamingParameters())
    Creates a streaming pipeline.
void disableStreaming ()
    Disables the streaming initiated by enableStreaming().
bool isStreamingEnabled ()
    Tells if the streaming is running (true) or still initializing (false).
StreamingParameters getStreamingParameters ()
    Returns the streaming parameters used. Corresponds to the structure sent when the enableStreaming() function was called.
Object Detection

ERROR_CODE enableObjectDetection (ObjectDetectionParameters object_detection_parameters=ObjectDetectionParameters())
    Initializes and starts the Deep Learning detection module. The object detection module currently supports two types of detection.
void pauseObjectDetection (bool status)
    Pauses or resumes the object detection processes.
void disableObjectDetection ()
    Disables the Object Detection process.
ERROR_CODE ingestCustomBoxObjects (std::vector< CustomBoxObjectData > &objects_in)
    Feeds the 3D object tracking function with your own 2D bounding boxes from your own detection algorithm.
ERROR_CODE retrieveObjects (Objects &objects, ObjectDetectionRuntimeParameters parameters=ObjectDetectionRuntimeParameters())
    Retrieves objects detected by the object detection module.
ERROR_CODE getObjectsBatch (std::vector< sl::ObjectsBatch > &trajectories)
    Gets a batch of detected objects.
bool isObjectDetectionEnabled ()
    Tells if the object detection module is enabled.
ObjectDetectionParameters getObjectDetectionParameters ()
    Returns the object detection parameters used. Corresponds to the structure sent when the enableObjectDetection() function was called.
Static Functions

static String getSDKVersion ()
    Returns the version of the currently installed ZED SDK.
static void getSDKVersion (int &major, int &minor, int &patch)
    Returns the version of the currently installed ZED SDK.
static std::vector< sl::DeviceProperties > getDeviceList ()
    Lists all the connected devices with their associated information.
static std::vector< sl::StreamingProperties > getStreamingDeviceList ()
    Lists all the streaming devices with their associated information.
static sl::ERROR_CODE reboot (int sn, bool fullReboot=true)
    Performs a hardware reset of the ZED 2 and the ZED 2i.
static AI_Model_status checkAIModelStatus (AI_MODELS model, int gpu_id=0)
    Checks if a corresponding optimized engine is found for the requested model based on your rig configuration.
static ERROR_CODE optimizeAIModel (sl::AI_MODELS model, int gpu_id=0)
    Optimizes the requested model, possibly downloading it if it is not present on the host.
This class is the main interface with the camera and the SDK features, such as video, depth, tracking, and mapping.
Find more information in the detailed description below. A standard program will use the Camera class like this:
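A minimal sketch of such a program is shown below. It assumes the ZED SDK headers are installed and a camera (or SVO file) is available; the frame count is an arbitrary illustrative value.

```cpp
#include <sl/Camera.hpp>

int main() {
    sl::Camera zed;

    // Open the camera with default parameters.
    sl::InitParameters init_parameters;
    if (zed.open(init_parameters) != sl::ERROR_CODE::SUCCESS)
        return EXIT_FAILURE;

    // Grab 50 frames and retrieve the left image each time.
    sl::Mat image;
    int frames = 0;
    while (frames < 50) {
        if (zed.grab() == sl::ERROR_CODE::SUCCESS) {
            zed.retrieveImage(image, sl::VIEW::LEFT);
            frames++;
        }
    }

    zed.close();
    return EXIT_SUCCESS;
}
```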
Camera ( )
Default constructor which creates an empty Camera object. Parameters will be set when calling open(init_param) with the desired InitParameters.
The Camera object can be created either on the stack or on the heap.
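For illustration, the two forms might look like this (a sketch; the heap variant uses a smart pointer because the copy constructor is deleted):

```cpp
#include <sl/Camera.hpp>
#include <memory>

// On the stack: the destructor (which calls close()) runs at end of scope.
sl::Camera zed;

// On the heap: useful to share one instance across several threads
// or objects, since a Camera cannot be copied.
auto zed_ptr = std::make_unique<sl::Camera>();
```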
~Camera ( )
The Camera destructor will call the close() function and clear the memory previously allocated by the object.
ERROR_CODE open (InitParameters init_parameters = InitParameters())
Opens the ZED camera from the provided InitParameters.
This function will also check the hardware requirements and run a self-calibration.
init_parameters : a structure containing all the initial parameters. Default: a preset of InitParameters.
Here is the proper way to call this function:
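The following sketch checks the returned ERROR_CODE before using the camera; the resolution and depth-mode values are illustrative choices, not required settings:

```cpp
#include <sl/Camera.hpp>
#include <iostream>

int main() {
    sl::Camera zed;

    sl::InitParameters init_parameters;
    init_parameters.camera_resolution = sl::RESOLUTION::HD720; // example value
    init_parameters.depth_mode = sl::DEPTH_MODE::ULTRA;        // example value

    // open() checks hardware requirements and runs a self-calibration.
    sl::ERROR_CODE err = zed.open(init_parameters);
    if (err != sl::ERROR_CODE::SUCCESS) {
        std::cout << "Opening the camera failed: " << err << std::endl;
        return EXIT_FAILURE;
    }

    // ... use the camera ...
    zed.close();
    return EXIT_SUCCESS;
}
```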
InitParameters getInitParameters ( )
Returns the init parameters used. Corresponds to the structure sent when the open() function was called.
void close ( )
If open() has been called, this function will close the connection to the camera (or the SVO file) and free the corresponding memory.
If open() wasn't called or failed, this function won't have any effect.
ERROR_CODE grab (RuntimeParameters rt_parameters = RuntimeParameters())
This function will grab the latest images from the camera, rectify them, and compute the measurements based on the RuntimeParameters provided (depth, point cloud, tracking, etc.).
As measures are created in this function, its execution can last a few milliseconds, depending on your parameters and your hardware.
The exact duration will mostly depend on the following parameters:
This function is meant to be called frequently in the main loop of your application.
rt_parameters : a structure containing all the runtime parameters. Default: a preset of RuntimeParameters.
RuntimeParameters getRuntimeParameters ( )
Returns the runtime parameters used. Corresponds to the structure sent when the grab() function was called.
CameraInformation getCameraInformation (Resolution image_size = Resolution(0, 0))
Returns the calibration parameters, serial number and other information about the camera being used.
As calibration parameters depend on the image resolution, you can provide a custom resolution as a parameter to get scaled information.
When reading an SVO file, the parameters will correspond to the camera used for recording.
image_size : you can specify a size different from the default image size to get the scaled camera information. Default: (0,0), meaning the original image size (given by .camera_configuration.resolution).
void updateSelfCalibration ( )
Perform a new self calibration process.
In some cases, due to temperature changes or strong vibrations, the stereo calibration becomes less accurate. Use this function to update the self-calibration data and get more reliable depth values.
CUcontext getCUDAContext ( )
Gets the Camera-created CUDA context for sharing it with other CUDA-capable libraries. This can be useful for sharing GPU memories. If you're looking for the opposite mechanism, where an existing CUDA context is given to the Camera, please check InitParameters::sdk_cuda_ctx.
CUstream getCUDAStream ( )
ERROR_CODE retrieveImage (Mat &mat, VIEW view = VIEW::LEFT, MEM type = MEM::CPU, Resolution image_size = Resolution(0, 0))
Retrieves images from the camera (or SVO file).
Multiple images are available along with a view of various measures for display purposes.
Available images and views are listed here.
As an example, VIEW::DEPTH can be used to get a gray-scale version of the depth map, but the actual depth values can be retrieved using retrieveMeasure().
Pixels
Most VIEW modes output images with 4 channels as BGRA (Blue, Green, Red, Alpha); for more information, see the VIEW enum.
Memory
By default, images are copied from GPU memory to CPU memory (RAM) when this function is called.
If your application can use GPU images, using the type parameter can increase performance by avoiding this copy.
If the provided Mat object is already allocated and matches the requested image format, memory won't be re-allocated.
Image size
By default, images are returned in the resolution provided by .camera_configuration.resolution.
However, you can request custom resolutions. For example, requesting a smaller image can help you speed up your application.
mat : the Mat to store the image. The function will create the Mat if necessary at the proper resolution. If already created, it will just update its data (CPU or GPU depending on the MEM_TYPE).
view : defines the image you want (see VIEW). Default: VIEW::LEFT.
type : whether the image should be provided in CPU or GPU memory. Default: MEM::CPU.
image_size : if specified, defines the resolution of the output mat. If set to Resolution(0,0), the ZED resolution will be taken. Default: (0,0).
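As a sketch, the snippet below retrieves the left image at full resolution and a half-resolution depth view for display; it assumes open() succeeded and the latest grab() returned SUCCESS:

```cpp
// Retrieve images after a successful grab().
sl::Mat left_image, depth_view;

// Query the native resolution and derive a smaller one for display.
sl::Resolution full = zed.getCameraInformation().camera_configuration.resolution;
sl::Resolution half(full.width / 2, full.height / 2);

zed.retrieveImage(left_image, sl::VIEW::LEFT, sl::MEM::CPU);
zed.retrieveImage(depth_view, sl::VIEW::DEPTH, sl::MEM::CPU, half);
```

Passing sl::MEM::GPU instead of sl::MEM::CPU would skip the GPU-to-CPU copy, at the cost of having to consume the data with CUDA-aware code.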
int getCameraSettings (VIDEO_SETTINGS settings)
Returns the current value of the requested camera setting (gain, brightness, hue, exposure, etc.).
Possible values (ranges) of each setting are available here.
settings : the requested setting.
ERROR_CODE getCameraSettings (VIDEO_SETTINGS settings, Rect &roi, sl::SIDE side = sl::SIDE::BOTH)
Overloaded function for VIDEO_SETTINGS::AEC_AGC_ROI which takes a Rect as parameter.
settings : must be set to VIDEO_SETTINGS::AEC_AGC_ROI, otherwise the function will have no impact.
roi : the Rect that defines the current target applied for AEC/AGC.
void setCameraSettings (VIDEO_SETTINGS settings, int value = VIDEO_SETTINGS_VALUE_AUTO)
Sets the value of the requested camera setting (gain, brightness, hue, exposure, etc.).
Possible values (ranges) of each setting are available here.
settings : the setting to be set.
value : the value to set. Default: auto mode.
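A short sketch of manual and automatic modes (the brightness value is an arbitrary example within the setting's range; assumes an opened camera):

```cpp
// Set brightness manually, then put exposure back in automatic mode.
zed.setCameraSettings(sl::VIDEO_SETTINGS::BRIGHTNESS, 4);
zed.setCameraSettings(sl::VIDEO_SETTINGS::EXPOSURE, VIDEO_SETTINGS_VALUE_AUTO);

// Read back the current value of a setting.
int brightness = zed.getCameraSettings(sl::VIDEO_SETTINGS::BRIGHTNESS);
```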
ERROR_CODE setCameraSettings (VIDEO_SETTINGS settings, Rect roi, sl::SIDE side = sl::SIDE::BOTH, bool reset = false)
Overloaded function for VIDEO_SETTINGS::AEC_AGC_ROI which takes a Rect as parameter.
settings : must be set to VIDEO_SETTINGS::AEC_AGC_ROI, otherwise the function will have no impact.
roi : the Rect that defines the target to be applied for AEC/AGC computation. Must be given according to the camera resolution.
float getCurrentFPS ( )
Returns the current framerate at which the grab() method is successfully called. The returned value is based on the difference of camera timestamps between two successful grab() calls.
Timestamp getTimestamp (sl::TIME_REFERENCE reference_time)
Returns the timestamp in the requested TIME_REFERENCE.
This function can also be used when playing back an SVO file.
reference_time : the selected TIME_REFERENCE.
unsigned int getFrameDroppedCount ( )
Returns the number of frames dropped since grab() was called for the first time.
A dropped frame corresponds to a frame that never made it to the grab function.
This can happen if two frames were extracted from the camera when grab() is called. The older frame will be dropped so as to always use the latest (which minimizes latency).
int getSVOPosition ( )
Returns the current playback position in the SVO file.
The position corresponds to the number of frames already read from the SVO file, starting from 0 to n.
Each grab() call increases this value by one (except when using InitParameters::svo_real_time_mode).
void setSVOPosition (int frame_number)
Sets the playback cursor to the desired frame number in the SVO file.
This function allows you to move around within a played-back SVO file. After calling, the next call to grab() will read the provided frame number.
frame_number : the number of the desired frame to be decoded.
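A sketch of seeking within a recording (assumes the camera was opened with InitParameters pointing at an SVO file):

```cpp
// Jump to the middle of the SVO file and resume reading from there.
int nb_frames = zed.getSVONumberOfFrames();
zed.setSVOPosition(nb_frames / 2);

if (zed.grab() == sl::ERROR_CODE::SUCCESS) {
    // The next grab() decoded the requested frame; the cursor has advanced.
    int current = zed.getSVOPosition();
}
```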
int getSVONumberOfFrames ( )
Returns the number of frames in the SVO file.
ERROR_CODE retrieveMeasure (Mat &mat, MEASURE measure = MEASURE::DEPTH, MEM type = MEM::CPU, Resolution image_size = Resolution(0, 0))
Computed measures, like depth, point cloud, or normals, can be retrieved using this method.
Multiple measures are available after a grab() call. A full list is available here.
Memory
By default, images are copied from GPU memory to CPU memory (RAM) when this function is called.
If your application can use GPU images, using the type parameter can increase performance by avoiding this copy.
If the provided Mat object is already allocated and matches the requested image format, memory won't be re-allocated.
Measure size
By default, measures are returned in the resolution provided by .camera_configuration.resolution .
However, custom resolutions can be requested. For example, requesting a smaller measure can help you speed up your application.
mat : the Mat to store the measure. The function will create the Mat if necessary at the proper resolution. If already created, it will just update its data (CPU or GPU depending on the MEM_TYPE).
measure : defines the measure you want (see MEASURE). Default: MEASURE::DEPTH.
type : the type of memory of the provided mat that should be used. Default: MEM::CPU.
image_size : if specified, defines the resolution of the output mat. If set to Resolution(0,0), the ZED resolution will be taken. Default: (0,0).
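For instance, after a successful grab() the depth map and point cloud can be read like this (a sketch; the pixel coordinates are arbitrary and must lie within the image):

```cpp
// Retrieve the depth map and the colored point cloud on CPU memory.
sl::Mat depth, point_cloud;
zed.retrieveMeasure(depth, sl::MEASURE::DEPTH);
zed.retrieveMeasure(point_cloud, sl::MEASURE::XYZRGBA);

// Query the depth at one pixel; the value is expressed in the unit
// chosen in InitParameters when the camera was opened.
float depth_value = 0.f;
depth.getValue(640, 360, &depth_value);
```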
ERROR_CODE setRegionOfInterest (sl::Mat &roi_mask)
Defines a region of interest to focus on for all the SDK, discarding other parts.
roi_mask : the Mat defining the requested region of interest; all pixels set to 0 will be discarded. If empty, all pixels are set as valid. Otherwise, it should fit the resolution of the current instance and its type should be U8_C1.
ERROR_CODE enablePositionalTracking (PositionalTrackingParameters tracking_parameters = PositionalTrackingParameters())
Initializes and starts the positional tracking processes.
This function allows you to enable the position estimation of the SDK. It only has to be called once in the camera's lifetime.
When enabled, the position will be updated at each grab() call.
Tracking-specific parameters can be set by providing PositionalTrackingParameters to this function.
tracking_parameters : a structure containing all the PositionalTrackingParameters. Default: a preset of PositionalTrackingParameters.
POSITIONAL_TRACKING_STATE getPosition (Pose &camera_pose, REFERENCE_FRAME reference_frame = REFERENCE_FRAME::WORLD)
Retrieves the estimated position and orientation of the camera in the specified reference frame.
Using REFERENCE_FRAME::WORLD, the returned pose relates to the initial position of the camera. (PositionalTrackingParameters::initial_world_transform )
Using REFERENCE_FRAME::CAMERA, the returned pose relates to the previous position of the camera.
If the tracking has been initialized with PositionalTrackingParameters::enable_area_memory to true (default), this function can return POSITIONAL_TRACKING_STATE::SEARCHING.
This means that the tracking lost its link to the initial referential and is currently trying to relocate the camera. However, it will keep on providing position estimations.
camera_pose : [out] the pose containing the position of the camera and other information (timestamp, confidence).
reference_frame : defines the reference frame from which you want the pose to be expressed. Default: REFERENCE_FRAME::WORLD.
Extract the rotation matrix: camera_pose.getRotation();
Extract the translation vector: camera_pose.getTranslation();
Convert to orientation / quaternion: camera_pose.getOrientation();
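The accessors above fit together as in this sketch, which enables tracking once and then queries the pose after each grab() (assumes an opened camera):

```cpp
// Enable positional tracking once in the camera's lifetime.
sl::PositionalTrackingParameters tracking_parameters;
zed.enablePositionalTracking(tracking_parameters);

sl::Pose camera_pose;
if (zed.grab() == sl::ERROR_CODE::SUCCESS) {
    sl::POSITIONAL_TRACKING_STATE state =
        zed.getPosition(camera_pose, sl::REFERENCE_FRAME::WORLD);
    if (state == sl::POSITIONAL_TRACKING_STATE::OK) {
        sl::Translation t = camera_pose.getTranslation(); // position
        sl::Orientation q = camera_pose.getOrientation(); // quaternion
    }
}
```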
ERROR_CODE saveAreaMap (String area_file_path)
Saves the current area learning file. The file will contain spatial memory data generated by the tracking.
If the tracking has been initialized with PositionalTrackingParameters::enable_area_memory to true (default), the function allows you to export the spatial memory.
Reloading the exported file in a future session with PositionalTrackingParameters::area_file_path initializes the tracking within the same referential.
This function is asynchronous, and only triggers the file generation. You can use getAreaExportState() to get the export state. The positional tracking keeps running while exporting.
area_file_path : saves the spatial memory database in an '.area' file.
AREA_EXPORTING_STATE getAreaExportState ( )
Returns the state of the spatial memory export process.
As saveAreaMap() only starts the exportation, this function allows you to know when the exportation finished or if it failed.
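Since the export is asynchronous, a typical pattern is to trigger it and then poll the state, as in this sketch (the file name and polling interval are arbitrary; assumes tracking was enabled with area memory):

```cpp
// Trigger the asynchronous area export; tracking keeps running meanwhile.
zed.saveAreaMap("myArea.area");

// Poll until the export is no longer running.
while (zed.getAreaExportState() == sl::AREA_EXPORTING_STATE::RUNNING)
    sl::sleep_ms(100);

if (zed.getAreaExportState() != sl::AREA_EXPORTING_STATE::SUCCESS) {
    // The export failed; check the returned state for the reason.
}
```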
ERROR_CODE resetPositionalTracking (const Transform &path)
Resets the tracking and re-initializes the position with the given transformation matrix.
path : the position of the camera in the world frame when the function is called. By default, it is set to identity.
void disablePositionalTracking (String area_file_path = "")
Disables the positional tracking.
The positional tracking is immediately stopped. If a file path is given, saveAreaMap(area_file_path) will be called asynchronously. See getAreaExportState() to get the exportation state.
If the tracking has been enabled, this function will automatically be called by close() .
area_file_path : if set, saves the spatial memory into an '.area' file. Default: (empty). area_file_path is the name and path of the database, e.g. "path/to/file/myArea1.area".
bool isPositionalTrackingEnabled ( )
Tells if the tracking module is enabled.
PositionalTrackingParameters getPositionalTrackingParameters ( )
Returns the positional tracking parameters used. Corresponds to the structure sent when the enablePositionalTracking() function was called.
ERROR_CODE getSensorsData (SensorsData &data, TIME_REFERENCE reference_time)
Retrieves the sensors data (IMU, magnetometer, barometer) at a specific time reference.
Calling getSensorsData with TIME_REFERENCE::CURRENT gives you the latest sensors data received. Getting all the data requires calling this function at a high frame rate in a dedicated thread.
Calling getSensorsData with TIME_REFERENCE::IMAGE gives you the sensors data at the time of the latest image grabbed.
SensorsData object contains the previous IMUData structure that was used in ZED SDK v2.X:
For IMU data, the values are provided in 2 ways :
Time-fused pose estimation that can be accessed using:
Raw values from the IMU sensor:
Both the gyroscope and the accelerometer are synchronized. The delta time between the previous and current values can be calculated using data.imu.timestamp.
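Both access paths are shown in this sketch (assumes an opened camera; field names follow the SensorsData structure described above):

```cpp
// Read the latest IMU values, independently of image grabbing.
sl::SensorsData sensors_data;
if (zed.getSensorsData(sensors_data, sl::TIME_REFERENCE::CURRENT)
        == sl::ERROR_CODE::SUCCESS) {
    // Time-fused orientation estimated from the IMU.
    sl::Orientation imu_orientation = sensors_data.imu.pose.getOrientation();

    // Raw, synchronized values from the accelerometer and gyroscope.
    sl::float3 acceleration = sensors_data.imu.linear_acceleration;
    sl::float3 angular_velocity = sensors_data.imu.angular_velocity;
}
```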
ERROR_CODE setIMUPrior (const sl::Transform &transform)
Sets an optional IMU orientation hint that will be used to assist the tracking during the next grab().
This function can be used to assist the positional tracking rotation while using a ZED Mini or a ZED 2.
transform : the sl::Transform to be ingested into the IMU fusion. Note that only the rotation is used.
ERROR_CODE enableSpatialMapping (SpatialMappingParameters spatial_mapping_parameters = SpatialMappingParameters())
Initializes and starts the spatial mapping processes.
The spatial mapping will create a geometric representation of the scene based on both tracking data and 3D point clouds.
The resulting output can be a Mesh or a FusedPointCloud. It can be obtained by calling extractWholeSpatialMap() or retrieveSpatialMapAsync(). Note that retrieveSpatialMapAsync() should be called after requestSpatialMapAsync().
spatial_mapping_parameters : the structure containing all the specific parameters for the spatial mapping. Default: a balanced parameter preset between geometric fidelity and output file size. For more information, see the SpatialMappingParameters documentation.
SPATIAL_MAPPING_STATE getSpatialMappingState ( )
Returns the current spatial mapping state.
As the spatial mapping runs asynchronously, this function allows you to get reported errors or status info.
void requestSpatialMapAsync ( )
Starts the spatial map generation process in a non-blocking thread from the spatial mapping process.
The spatial map generation can take a long time depending on the mapping resolution and covered area. This function will trigger the generation of a mesh without blocking the program. You can get info about the current generation using getSpatialMapRequestStatusAsync(), and retrieve the mesh using retrieveSpatialMapAsync(...) .
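The asynchronous workflow might look like this sketch (assumes spatial mapping was enabled with map_type set to MESH, and that grab() keeps being called elsewhere):

```cpp
// Trigger mesh generation; this returns immediately.
zed.requestSpatialMapAsync();

// ... keep calling grab() or do other work while the map is built ...

// Once the status reports SUCCESS, the mesh can be retrieved.
sl::Mesh mesh;
if (zed.getSpatialMapRequestStatusAsync() == sl::ERROR_CODE::SUCCESS)
    zed.retrieveSpatialMapAsync(mesh);
```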
ERROR_CODE getSpatialMapRequestStatusAsync ( )
Returns the spatial map generation status. This status allows you to know if the mesh can be retrieved by calling retrieveSpatialMapAsync().
See requestSpatialMapAsync() for an example.
ERROR_CODE retrieveSpatialMapAsync (Mesh &mesh)
Retrieves the current generated spatial map only if SpatialMappingParameters::map_type was set to SPATIAL_MAP_TYPE::MESH.
After calling requestSpatialMapAsync(), this function allows you to retrieve the generated mesh. The mesh will only be available when getMeshRequestStatusAsync() returns SUCCESS.
mesh : [out] the mesh to be filled with the generated spatial map.
ERROR_CODE retrieveSpatialMapAsync (FusedPointCloud &fpc)
Retrieves the current generated spatial map only if SpatialMappingParameters::map_type was set to SPATIAL_MAP_TYPE::FUSED_POINT_CLOUD. After calling requestSpatialMapAsync(), this function allows you to retrieve the generated fused point cloud. The fused point cloud will only be available when getMeshRequestStatusAsync() returns SUCCESS.
fpc : the fused point cloud to be filled with the generated spatial map.
See requestSpatialMapAsync() for an example.
ERROR_CODE extractWholeSpatialMap (Mesh &mesh)
Extracts the current spatial map from the spatial mapping process only if SpatialMappingParameters::map_type was set to SPATIAL_MAP_TYPE::MESH.
If the object to be filled already contains a previous version of the mesh, only changes will be updated, optimizing performance.
mesh : the mesh to be filled with the generated spatial map.
ERROR_CODE extractWholeSpatialMap (FusedPointCloud &fpc)
Extracts the current spatial map from the spatial mapping process only if SpatialMappingParameters::map_type was set to SPATIAL_MAP_TYPE::FUSED_POINT_CLOUD.
If the object to be filled already contains a previous version of the fused point cloud, only changes will be updated, optimizing performance.
fpc : the fused point cloud to be filled with the generated spatial map.
void pauseSpatialMapping (bool status)
Pauses or resumes the spatial mapping processes.
As spatial mapping runs asynchronously, using this function can pause its computation to free some processing power, and resume it again later.
For example, it can be used to avoid mapping a specific area or to pause the mapping when the camera is static.
status : if true, the integration is paused. If false, the spatial mapping is resumed.
void disableSpatialMapping ( )
Disables the spatial mapping process.
The spatial mapping is immediately stopped.
If the mapping has been enabled, this function will automatically be called by close().
SpatialMappingParameters getSpatialMappingParameters ( )
Returns the spatial mapping parameters used. Corresponds to the structure sent when the enableSpatialMapping() function was called.
ERROR_CODE findPlaneAtHit | ( | sl::uint2 | coord, |
sl::Plane & | plane | ||
) |
Checks the plane at the given left image coordinates.
This function gives the 3D plane corresponding to a given pixel in the latest left image grabbed.
The pixel coordinates are expected to be contained in x=[0; width-1] and y=[0; height-1], where width/height are defined by the input resolution.
coord | : [in] The image coordinate. The coordinate must be taken from the full-size image |
plane | : [out] The detected plane if the function succeeded |
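A sketch of probing the plane under the image center, assuming a connected camera and SDK 3.x accessors (camera_configuration.resolution); the resolution is queried so the coordinate stays within the full-size image bounds described above.

```cpp
#include <sl/Camera.hpp>
#include <iostream>
using namespace sl;

int main() {
    Camera zed;
    if (zed.open() != ERROR_CODE::SUCCESS) return -1;

    // Coordinates must index the full-size left image:
    // x in [0; width-1], y in [0; height-1]
    Resolution res = zed.getCameraInformation().camera_configuration.resolution;

    if (zed.grab() == ERROR_CODE::SUCCESS) {
        uint2 coord((unsigned int)(res.width / 2), (unsigned int)(res.height / 2));
        Plane plane;
        if (zed.findPlaneAtHit(coord, plane) == ERROR_CODE::SUCCESS) {
            float4 eq = plane.getPlaneEquation(); // ax + by + cz + d = 0
            std::cout << "plane offset d = " << eq.w << std::endl;
        }
    }
    zed.close();
    return 0;
}
```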
ERROR_CODE findFloorPlane | ( | sl::Plane & | floorPlane, |
sl::Transform & | resetTrackingFloorFrame, | ||
float | floor_height_prior = INVALID_VALUE , |
||
sl::Rotation | world_orientation_prior = sl::Matrix3f::zeros() , |
||
float | floor_height_prior_tolerance = INVALID_VALUE |
||
) |
Detects the floor plane of the scene.
This function analyzes the latest image and depth to estimate the floor plane of the scene.
It expects the floor plane to be visible and bigger than other candidate planes, like a table.
floorPlane | : [out] The detected floor plane if the function succeeded |
resetTrackingFloorFrame | : [out] The transform to align the tracking with the floor plane. The initial position will then be at ground height, with the axes aligned with gravity. The positional tracking needs to be reset/enabled with this transform as a parameter (PositionalTrackingParameters.initial_world_transform) |
floor_height_prior | : [in] Prior set to locate the floor plane depending on the known camera distance to the ground, expressed in the same sl::UNIT as the camera. If the prior is too far from the detected floor plane, the function will return ERROR_CODE::PLANE_NOT_FOUND |
world_orientation_prior | : [in] Prior set to locate the floor plane depending on the known camera orientation to the ground. If the prior is too far from the detected floor plane, the function will return ERROR_CODE::PLANE_NOT_FOUND |
floor_height_prior_tolerance | : [in] Prior height tolerance, absolute value. |
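A minimal sketch, assuming a connected camera, of the workflow the resetTrackingFloorFrame parameter suggests: detect the floor once, then restart positional tracking anchored to it via PositionalTrackingParameters.initial_world_transform.

```cpp
#include <sl/Camera.hpp>
using namespace sl;

int main() {
    Camera zed;
    InitParameters init_params;
    init_params.coordinate_units = UNIT::METER;
    if (zed.open(init_params) != ERROR_CODE::SUCCESS) return -1;

    Plane floor_plane;
    Transform reset_transform;
    if (zed.grab() == ERROR_CODE::SUCCESS &&
        zed.findFloorPlane(floor_plane, reset_transform) == ERROR_CODE::SUCCESS) {
        // Re-anchor tracking at ground height, axes aligned with gravity
        PositionalTrackingParameters tracking_params;
        tracking_params.initial_world_transform = reset_transform;
        zed.enablePositionalTracking(tracking_params);
    }
    zed.close();
    return 0;
}
```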
ERROR_CODE enableRecording | ( | RecordingParameters | recording_parameters | ) |
Creates an SVO file to be filled by record().
SVO files are custom video files containing the un-rectified images from the camera along with some meta-data like timestamps or IMU orientation (if applicable).
They can be used to simulate a live ZED and test a sequence with various SDK parameters.
Depending on the application, various compression modes are available. See SVO_COMPRESSION_MODE.
recording_parameters | : Recording parameters such as filename and compression mode |
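A minimal recording sketch, assuming a connected camera; the output path is illustrative. Once enableRecording() succeeds, each successful grab() appends a frame to the SVO file until disableRecording() is called.

```cpp
#include <sl/Camera.hpp>
using namespace sl;

int main() {
    Camera zed;
    if (zed.open() != ERROR_CODE::SUCCESS) return -1;

    RecordingParameters rec_params;
    rec_params.video_filename = "myVideo.svo";                // illustrative path
    rec_params.compression_mode = SVO_COMPRESSION_MODE::H264;
    if (zed.enableRecording(rec_params) != ERROR_CODE::SUCCESS) return -1;

    // Each successful grab() records one frame into the SVO
    for (int i = 0; i < 100; ) {
        if (zed.grab() == ERROR_CODE::SUCCESS) i++;
    }
    zed.disableRecording();
    zed.close();
    return 0;
}
```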
RecordingStatus getRecordingStatus | ( | ) |
Gets the current recording status.
void pauseRecording | ( | bool | status | ) |
Pauses or resumes the recording.
status | : if true, the recording is paused. If false, the recording is resumed. |
void disableRecording | ( | ) |
Disables the recording initiated by enableRecording() and closes the generated file.
See enableRecording() for an example.
RecordingParameters getRecordingParameters | ( | ) |
Returns the recording parameters used. Corresponds to the structure sent when the enableRecording() function was called.
ERROR_CODE enableStreaming | ( | StreamingParameters | streaming_parameters = StreamingParameters() | ) |
Creates a streaming pipeline.
streaming_parameters | : The structure containing all the specific parameters for the streaming. |
#include <sl/Camera.hpp>
using namespace sl;
int main(int argc, char **argv) {
// Create a ZED camera object
Camera zed;
// Set initial parameters
InitParameters init_params;
init_params.camera_resolution = RESOLUTION::HD720; // Use HD720 video mode (default fps: 60)
// Open the camera
ERROR_CODE err = zed.open(init_params);
if (err != ERROR_CODE::SUCCESS) {
std::cout << toString(err) << std::endl;
exit(-1);
}
// Enable streaming
sl::StreamingParameters stream_params;
stream_params.port = 30000;
stream_params.bitrate = 8000;
err = zed.enableStreaming(stream_params);
if (err != ERROR_CODE::SUCCESS) {
std::cout << toString(err) << std::endl;
exit(-1);
}
// Grab data during 500 frames
int i = 0;
while (i < 500) {
// Grab a new frame
if (zed.grab() == ERROR_CODE::SUCCESS) {
i++;
}
}
zed.disableStreaming();
zed.close();
return 0;
}
void disableStreaming | ( | ) |
Disables the streaming initiated by enableStreaming()
bool isStreamingEnabled | ( | ) |
Tells if the streaming is running (true) or still initializing (false)
StreamingParameters getStreamingParameters | ( | ) |
Returns the streaming parameters used. Corresponds to the structure sent when the enableStreaming() function was called.
ERROR_CODE enableObjectDetection | ( | ObjectDetectionParameters | object_detection_parameters = ObjectDetectionParameters() | ) |
Initializes and starts the Deep Learning detection module.
The object detection module currently supports two types of detection:
Detected objects can be retrieved using the retrieveObjects() function.
As detecting and tracking the objects is CPU and GPU-intensive, the module can be used synchronously or asynchronously using ObjectDetectionParameters::image_sync.
object_detection_parameters | : Structure containing all specific parameters for object detection. For more information, see the ObjectDetectionParameters documentation. |
void pauseObjectDetection | ( | bool | status | ) |
Pauses or resumes the object detection processes.
If the object detection has been enabled with ObjectDetectionParameters::image_sync set to false (running asynchronously), this function will pause processing.
While in pause, calling this function with status = false will resume the object detection. The retrieveObjects function will keep on returning the last objects detected while in pause.
status | : If true, object detection is paused. If false, object detection is resumed. |
void disableObjectDetection | ( | ) |
Disables the Object Detection process.
The object detection module immediately stops and frees its memory allocations. If the object detection has been enabled, this function will automatically be called by close().
ERROR_CODE ingestCustomBoxObjects | ( | std::vector< CustomBoxObjectData > & | objects_in | ) |
Feeds the 3D object tracking module with 2D bounding boxes from your own detection algorithm.
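A sketch of ingesting one external detection, assuming a connected camera and a detection model that accepts external boxes (DETECTION_MODEL::CUSTOM_BOX_OBJECTS in recent SDK versions); the label, confidence and corner values are placeholders for your own detector's output.

```cpp
#include <sl/Camera.hpp>
#include <vector>
using namespace sl;

int main() {
    Camera zed;
    if (zed.open() != ERROR_CODE::SUCCESS) return -1;

    // A model accepting external 2D boxes is required for ingestion
    ObjectDetectionParameters det_params;
    det_params.detection_model = DETECTION_MODEL::CUSTOM_BOX_OBJECTS;
    if (zed.enableObjectDetection(det_params) != ERROR_CODE::SUCCESS) return -1;

    while (zed.grab() == ERROR_CODE::SUCCESS) {
        CustomBoxObjectData box;
        box.unique_object_id = generate_unique_id();
        box.label = 0;           // your detector's class id (placeholder)
        box.probability = 0.9f;  // your detector's confidence (placeholder)
        box.is_grounded = true;  // object moves on the floor plane
        // 2D corners in full-size left-image pixels (placeholder values)
        box.bounding_box_2d = {uint2(100, 100), uint2(200, 100),
                               uint2(200, 300), uint2(100, 300)};

        std::vector<CustomBoxObjectData> boxes = {box};
        zed.ingestCustomBoxObjects(boxes);

        Objects objects;
        zed.retrieveObjects(objects); // 3D boxes with persistent tracking ids
    }
    zed.close();
    return 0;
}
```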
ERROR_CODE retrieveObjects | ( | Objects & | objects, |
ObjectDetectionRuntimeParameters | parameters = ObjectDetectionRuntimeParameters() |
||
) |
Retrieves objects detected by the object detection module.
This function returns the result of the object detection, whether the module is running synchronously or asynchronously.
It is recommended to keep the same Objects instance as the input of all calls to this function. This enables the identification and tracking of every object detected.
objects | : The detected objects will be saved into this object. If the object already contains data from a previous detection, it will be updated, keeping a unique ID for the same object. |
parameters | : Object detection runtime settings, can be changed at each detection. In async mode, the parameters update is applied on the next iteration. |
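A retrieval-loop sketch, assuming a connected camera; the confidence threshold is illustrative. Reusing one Objects instance across calls, as recommended above, is what keeps object ids stable.

```cpp
#include <sl/Camera.hpp>
using namespace sl;

int main() {
    Camera zed;
    if (zed.open() != ERROR_CODE::SUCCESS) return -1;

    ObjectDetectionParameters det_params;
    det_params.enable_tracking = true; // keep a persistent id per object
    if (zed.enableObjectDetection(det_params) != ERROR_CODE::SUCCESS) return -1;

    ObjectDetectionRuntimeParameters rt_params;
    rt_params.detection_confidence_threshold = 40; // illustrative threshold

    Objects objects; // reuse the same instance so ids stay consistent
    while (zed.grab() == ERROR_CODE::SUCCESS) {
        if (zed.retrieveObjects(objects, rt_params) == ERROR_CODE::SUCCESS) {
            for (auto &obj : objects.object_list) {
                (void)obj; // obj.id is stable across frames while tracking is on
            }
        }
    }
    zed.disableObjectDetection();
    zed.close();
    return 0;
}
```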
ERROR_CODE getObjectsBatch | ( | std::vector< sl::ObjectsBatch > & | trajectories | ) |
Gets a batch of detected objects.
trajectories | : A std::vector of sl::ObjectsBatch that will be filled by the batching queue process. |
bool isObjectDetectionEnabled | ( | ) |
Tells if the object detection module is enabled.
ObjectDetectionParameters getObjectDetectionParameters | ( | ) |
Returns the object detection parameters used. Corresponds to the structure sent when the enableObjectDetection() function was called.
ERROR_CODE getCurrentMinMaxDepth | ( | float & | min, |
float & | max | ||
) |
Gets the current range of perceived depth.
min | : [out] Minimum depth detected (in selected sl::UNIT) |
max | : [out] Maximum depth detected (in selected sl::UNIT) |
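A small sketch, assuming a connected camera; the returned range is only meaningful after a successful grab(), and the values are expressed in the sl::UNIT chosen in InitParameters.

```cpp
#include <sl/Camera.hpp>
#include <iostream>
using namespace sl;

int main() {
    Camera zed;
    if (zed.open() != ERROR_CODE::SUCCESS) return -1;

    if (zed.grab() == ERROR_CODE::SUCCESS) {
        float min_depth = 0.f, max_depth = 0.f;
        // Values use the sl::UNIT selected in InitParameters
        if (zed.getCurrentMinMaxDepth(min_depth, max_depth) == ERROR_CODE::SUCCESS)
            std::cout << "depth range: [" << min_depth << ", "
                      << max_depth << "]" << std::endl;
    }
    zed.close();
    return 0;
}
```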
static String getSDKVersion | ( | ) |
Returns the version of the currently installed ZED SDK.
static void getSDKVersion | ( | int & major, int & minor, int & patch | ) |
Returns the version of the currently installed ZED SDK.
major | : major int of the version filled |
minor | : minor int of the version filled |
patch | : patch int of the version filled |
static std::vector< DeviceProperties > getDeviceList | ( | ) |
Lists all the connected devices with their associated information.
This function lists all the available cameras and provides their serial numbers, models and other information.
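A short sketch of enumerating cameras; since this is a static member, no Camera instance or open() call is needed, so it is typically used before choosing which device to open.

```cpp
#include <sl/Camera.hpp>
#include <iostream>
#include <vector>

int main() {
    // Static call: no Camera instance (and no open()) is required
    std::vector<sl::DeviceProperties> devices = sl::Camera::getDeviceList();
    for (auto &dev : devices)
        std::cout << "camera id " << dev.id
                  << " serial " << dev.serial_number << std::endl;
    return 0;
}
```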
static std::vector< StreamingProperties > getStreamingDeviceList | ( | ) |
Lists all the streaming devices with their associated information.
static ERROR_CODE reboot | ( | int sn, bool fullReboot = true | ) |
Performs a hardware reset of the ZED 2 and ZED 2i.
sn | : Serial number of the camera to reset, or 0 to reset the first camera detected. |
fullReboot | : Perform a full reboot (Sensors and Video modules) if true, otherwise only the Video module will be rebooted. |