This class serves as the primary interface between the camera and the various features provided by the SDK.
Functions

None close(self)
    Close an opened camera.
ERROR_CODE open(self, py_init=InitParameters())
    Opens the ZED camera from the provided InitParameters.
bool is_opened(self)
    Reports if the camera has been successfully opened.
ERROR_CODE grab(self, RuntimeParameters py_runtime=RuntimeParameters())
    Grabs the latest images from the camera, rectifies them, and computes the measurements based on the provided RuntimeParameters (depth, point cloud, tracking, etc.).
ERROR_CODE retrieve_image(self, Mat py_mat, view=VIEW.LEFT, type=MEM.CPU, resolution=Resolution(0, 0))
    Retrieves images from the camera (or SVO file).
ERROR_CODE retrieve_measure(self, Mat py_mat, measure=MEASURE.DEPTH, type=MEM.CPU, resolution=Resolution(0, 0))
    Retrieves computed measures, like depth, point cloud, or normals.
ERROR_CODE set_region_of_interest(self, Mat py_mat, modules=[MODULE.ALL])
    Defines a region of interest for the whole SDK to focus on, discarding other parts.
ERROR_CODE get_region_of_interest(self, Mat py_mat, resolution=Resolution(0, 0), module=MODULE.ALL)
    Gets the previously set or computed region of interest.
ERROR_CODE start_region_of_interest_auto_detection(self, roi_param=RegionOfInterestParameters())
    Starts the auto detection of a region of interest for the whole SDK to focus on, discarding other parts.
REGION_OF_INTEREST_AUTO_DETECTION_STATE get_region_of_interest_auto_detection_status(self)
    Returns the status of the automatic region of interest detection enabled by start_region_of_interest_auto_detection().
ERROR_CODE start_publishing(self, CommunicationParameters communication_parameters)
    Sets this camera as a data provider for the Fusion module.
ERROR_CODE stop_publishing(self)
    Sets this camera back to a normal camera (without data providing).
None set_svo_position(self, int frame_number)
    Sets the playback cursor to the desired frame number in the SVO file.
int get_svo_position(self)
    Returns the current playback position in the SVO file.
int get_svo_number_of_frames(self)
    Returns the number of frames in the SVO file.
ERROR_CODE ingest_data_into_svo(self, SVOData data)
    Ingests an SVOData object into the SVO file.
list get_svo_data_keys(self)
    Gets the external channels that can be retrieved from the SVO file.
ERROR_CODE retrieve_svo_data(self, str key, dict data, Timestamp ts_begin, Timestamp ts_end)
    Retrieves SVO data from the SVO file at the given channel key and in the given timestamp range.
ERROR_CODE set_camera_settings_range(self, VIDEO_SETTINGS settings, min=-1, max=-1)
    Sets the value of a camera setting that supports two values (min/max).
ERROR_CODE set_camera_settings_roi(self, VIDEO_SETTINGS settings, Rect roi, eye=SIDE.BOTH, reset=False)
    Overloaded method for VIDEO_SETTINGS.AEC_AGC_ROI which takes a Rect as parameter.
(ERROR_CODE, int) get_camera_settings(self, VIDEO_SETTINGS setting)
    Returns the current value of the requested camera setting (gain, brightness, hue, exposure, etc.).
(ERROR_CODE, int, int) get_camera_settings_range(self, VIDEO_SETTINGS setting)
    Returns the values of a camera setting that supports two values (min/max).
ERROR_CODE get_camera_settings_roi(self, VIDEO_SETTINGS setting, Rect roi, eye=SIDE.BOTH)
    Returns the ROI currently used for the AEC_AGC_ROI camera setting.
bool is_camera_setting_supported(self, VIDEO_SETTINGS setting)
    Returns whether the video setting is supported by the camera.
float get_current_fps(self)
    Returns the current framerate at which the grab() method is successfully called.
Timestamp get_timestamp(self, TIME_REFERENCE time_reference)
    Returns the timestamp in the requested TIME_REFERENCE.
int get_frame_dropped_count(self)
    Returns the number of frames dropped since grab() was first called.
(ERROR_CODE, float, float) get_current_min_max_depth(self)
    Gets the current range of perceived depth.
CameraInformation get_camera_information(self, resizer=Resolution(0, 0))
    Returns the CameraInformation associated with the camera being used.
RuntimeParameters get_runtime_parameters(self)
    Returns the RuntimeParameters used.
InitParameters get_init_parameters(self)
    Returns the InitParameters associated with the Camera object.
PositionalTrackingParameters get_positional_tracking_parameters(self)
    Returns the PositionalTrackingParameters used.
SpatialMappingParameters get_spatial_mapping_parameters(self)
    Returns the SpatialMappingParameters used.
ObjectDetectionParameters get_object_detection_parameters(self, instance_module_id=0)
    Returns the ObjectDetectionParameters used.
BodyTrackingParameters get_body_tracking_parameters(self, instance_id=0)
    Returns the BodyTrackingParameters used.
StreamingParameters get_streaming_parameters(self)
    Returns the StreamingParameters used.
ERROR_CODE enable_positional_tracking(self, py_tracking=PositionalTrackingParameters())
    Initializes and starts the positional tracking processes.
None update_self_calibration(self)
    Performs a new self-calibration process.
ERROR_CODE enable_body_tracking(self, BodyTrackingParameters body_tracking_parameters=BodyTrackingParameters())
    Initializes and starts the body tracking module.
None disable_body_tracking(self, int instance_id=0, bool force_disable_all_instances=False)
    Disables the body tracking process.
ERROR_CODE retrieve_bodies(self, Bodies bodies, BodyTrackingRuntimeParameters body_tracking_runtime_parameters=BodyTrackingRuntimeParameters(), int instance_id=0)
    Retrieves body tracking data from the body tracking module.
bool is_body_tracking_enabled(self, int instance_id=0)
    Tells if the body tracking module is enabled.
ERROR_CODE get_sensors_data(self, SensorsData py_sensors_data, time_reference=TIME_REFERENCE.CURRENT)
    Retrieves the SensorsData (IMU, magnetometer, barometer) at a specific time reference.
ERROR_CODE set_imu_prior(self, Transform transform)
    Sets an optional IMU orientation hint that will be used to assist the tracking during the next grab().
POSITIONAL_TRACKING_STATE get_position(self, Pose py_pose, reference_frame=REFERENCE_FRAME.WORLD)
    Retrieves the estimated position and orientation of the camera in the specified reference frame.
PositionalTrackingStatus get_positional_tracking_status(self)
    Returns the current status of the positional tracking module.
AREA_EXPORTING_STATE get_area_export_state(self)
    Returns the state of the spatial memory export process.
ERROR_CODE save_area_map(self, area_file_path="")
    Saves the current area learning file.
None disable_positional_tracking(self, area_file_path="")
    Disables the positional tracking.
bool is_positional_tracking_enabled(self)
    Tells if the tracking module is enabled.
ERROR_CODE reset_positional_tracking(self, Transform path)
    Resets the tracking and re-initializes the position with the given transformation matrix.
ERROR_CODE enable_spatial_mapping(self, py_spatial=SpatialMappingParameters())
    Initializes and starts the spatial mapping processes.
None pause_spatial_mapping(self, bool status)
    Pauses or resumes the spatial mapping processes.
SPATIAL_MAPPING_STATE get_spatial_mapping_state(self)
    Returns the current spatial mapping state.
None request_spatial_map_async(self)
    Starts the spatial map generation process in a non-blocking thread from the spatial mapping process.
ERROR_CODE get_spatial_map_request_status_async(self)
    Returns the spatial map generation status.
ERROR_CODE retrieve_spatial_map_async(self, py_mesh)
    Retrieves the current generated spatial map.
ERROR_CODE extract_whole_spatial_map(self, py_mesh)
    Extracts the current spatial map from the spatial mapping process.
ERROR_CODE find_plane_at_hit(self, coord, Plane py_plane, parameters=PlaneDetectionParameters())
    Checks the plane at the given left image coordinates.
ERROR_CODE find_floor_plane(self, Plane py_plane, Transform reset_tracking_floor_frame, floor_height_prior=float('nan'), world_orientation_prior=Rotation(Matrix3f().zeros()), floor_height_prior_tolerance=float('nan'))
    Detects the floor plane of the scene.
None disable_spatial_mapping(self)
    Disables the spatial mapping process.
ERROR_CODE enable_streaming(self, streaming_parameters=StreamingParameters())
    Creates a streaming pipeline.
None disable_streaming(self)
    Disables the streaming initiated by enable_streaming().
bool is_streaming_enabled(self)
    Tells if the streaming is running.
ERROR_CODE enable_recording(self, RecordingParameters record)
    Creates an SVO file to be filled by enable_recording() and disable_recording().
None disable_recording(self)
    Disables the recording initiated by enable_recording() and closes the generated file.
RecordingStatus get_recording_status(self)
    Gets the recording information.
None pause_recording(self, value=True)
    Pauses or resumes the recording.
RecordingParameters get_recording_parameters(self)
    Returns the RecordingParameters used.
ERROR_CODE enable_object_detection(self, object_detection_parameters=ObjectDetectionParameters())
    Initializes and starts the object detection module.
None disable_object_detection(self, instance_module_id=0, force_disable_all_instances=False)
    Disables the object detection process.
ERROR_CODE retrieve_objects(self, Objects py_objects, ObjectDetectionRuntimeParameters object_detection_parameters=ObjectDetectionRuntimeParameters(), instance_module_id=0)
    Retrieves objects detected by the object detection module.
ERROR_CODE get_objects_batch(self, list[ObjectsBatch] trajectories, instance_module_id=0)
    Gets a batch of detected objects.
ERROR_CODE ingest_custom_box_objects(self, list[CustomBoxObjectData] objects_in, instance_module_id=0)
    Feeds the 3D object tracking with your own 2D bounding boxes from your own detection algorithm.
ERROR_CODE ingest_custom_mask_objects(self, list[CustomMaskObjectData] objects_in, instance_module_id=0)
    Feeds the 3D object tracking with your own 2D bounding boxes and masks from your own detection algorithm.
bool is_object_detection_enabled(self, int instance_id=0)
    Tells if the object detection module is enabled.

Static Functions

str get_sdk_version()
    Returns the version of the currently installed ZED SDK.
list[DeviceProperties] get_device_list()
    Lists all the connected devices with their associated information.
list[StreamingProperties] get_streaming_device_list()
    Lists all the streaming devices with their associated information.
ERROR_CODE reboot(int sn, bool full_reboot=True)
    Performs a hardware reset of the ZED 2 and the ZED 2i.
ERROR_CODE reboot_from_input(INPUT_TYPE input_type)
    Performs a hardware reset of all devices matching the InputType.
This class serves as the primary interface between the camera and the various features provided by the SDK.
It enables seamless integration and access to a wide array of capabilities, including video streaming, depth sensing, object tracking, mapping, and much more.
A standard program will use the Camera class like this:
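For instance, a minimal sketch of the usual open/grab/retrieve/close cycle (this assumes the `pyzed` package is installed and a ZED camera, or an SVO file, is available; the loop length is an illustrative choice):

```python
import pyzed.sl as sl

# Create and open the camera with default parameters.
zed = sl.Camera()
init_params = sl.InitParameters()
if zed.open(init_params) != sl.ERROR_CODE.SUCCESS:
    exit(1)

# Capture loop: grab() computes a new frame, retrieve_* then fetches results.
image = sl.Mat()
runtime_params = sl.RuntimeParameters()
for _ in range(100):
    if zed.grab(runtime_params) == sl.ERROR_CODE.SUCCESS:
        zed.retrieve_image(image, sl.VIEW.LEFT)

# Close the camera and free the associated memory.
zed.close()
```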
None close(self)
Close an opened camera.
If open() has been called, this method will close the connection to the camera (or the SVO file) and free the corresponding memory.
If open() wasn't called or failed, this method won't have any effect.
ERROR_CODE open(self, py_init=InitParameters())
Opens the ZED camera from the provided InitParameters.
The method will also check the hardware requirements and run a self-calibration.
py_init : A structure containing all the initial parameters. Default: a preset of InitParameters.
Here is the proper way to call this function:
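A minimal sketch (the resolution and framerate values below are illustrative choices, not defaults):

```python
import pyzed.sl as sl

zed = sl.Camera()
init_params = sl.InitParameters()
init_params.camera_resolution = sl.RESOLUTION.HD720  # illustrative choice
init_params.camera_fps = 60                          # illustrative choice

# open() checks the hardware requirements, runs a self-calibration and
# returns the reason for failure as an ERROR_CODE.
err = zed.open(init_params)
if err != sl.ERROR_CODE.SUCCESS:
    print("Camera open failed:", err)
    exit(1)
```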
bool is_opened(self)
Reports if the camera has been successfully opened.
It has the same behavior as checking if open() returns ERROR_CODE.SUCCESS.
ERROR_CODE grab(self, RuntimeParameters py_runtime=RuntimeParameters())
This method will grab the latest images from the camera, rectify them, and compute the measurements based on the RuntimeParameters provided (depth, point cloud, tracking, etc.)
As measures are created in this method, its execution can last a few milliseconds, depending on your parameters and your hardware.
The exact duration will mostly depend on the following parameters:
This method is meant to be called frequently in the main loop of your application.
py_runtime : A structure containing all the runtime parameters. Default: a preset of RuntimeParameters.
ERROR_CODE retrieve_image(self, Mat py_mat, view=VIEW.LEFT, type=MEM.CPU, resolution=Resolution(0, 0))
Retrieves images from the camera (or SVO file).
Multiple images are available along with a view of various measures for display purposes.
Available images and views are listed here.
As an example, VIEW.DEPTH can be used to get a gray-scale version of the depth map, but the actual depth values can be retrieved using retrieve_measure().
Pixels
Most VIEW modes output images with 4 channels as BGRA (blue, green, red, alpha); for more information, see the VIEW enum.
Memory
By default, images are copied from GPU memory to CPU memory (RAM) when this function is called.
If your application can use GPU images, using the type parameter can increase performance by avoiding this copy.
If the provided sl.Mat object is already allocated and matches the requested image format, memory won't be re-allocated.
Image size
By default, images are returned in the resolution provided by get_camera_information().camera_configuration.resolution.
However, you can request custom resolutions. For example, requesting a smaller image can help you speed up your application.
py_mat[out] : The sl.Mat to store the image.
view[in] : Defines the image you want (see VIEW). Default: VIEW.LEFT.
type[in] : Defines on which memory the image should be allocated. Default: MEM.CPU (you cannot change this default value).
resolution[in] : If specified, defines the Resolution of the output sl.Mat. If set to Resolution(0, 0), the camera resolution will be taken. Default: (0, 0).
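A short sketch of a typical call sequence (assumes an opened camera `zed`; the preview resolution is an illustrative choice):

```python
image = sl.Mat()
if zed.grab() == sl.ERROR_CODE.SUCCESS:
    # Copies the rectified left image into `image` (CPU memory by default).
    zed.retrieve_image(image, sl.VIEW.LEFT)
    np_image = image.get_data()  # numpy array of BGRA pixels
    # A smaller output can be requested, e.g. for a preview window:
    zed.retrieve_image(image, sl.VIEW.LEFT, sl.MEM.CPU, sl.Resolution(640, 360))
```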
ERROR_CODE retrieve_measure(self, Mat py_mat, measure=MEASURE.DEPTH, type=MEM.CPU, resolution=Resolution(0, 0))
Computed measures, like depth, point cloud, or normals, can be retrieved using this method.
Multiple measures are available after a grab() call. A full list is available here.
Memory
By default, measures are copied from GPU memory to CPU memory (RAM) when this function is called.
If your application can use GPU images, using the type parameter can increase performance by avoiding this copy.
If the provided Mat object is already allocated and matches the requested image format, memory won't be re-allocated.
Measure size
By default, measures are returned in the resolution provided by get_camera_information().camera_configuration.resolution.
However, custom resolutions can be requested. For example, requesting a smaller measure can help you speed up your application.
py_mat[out] : The sl.Mat to store the measures.
measure[in] : Defines the measure you want (see MEASURE). Default: MEASURE.DEPTH.
type[in] : Defines on which memory the image should be allocated. Default: MEM.CPU (you cannot change this default value).
resolution[in] : If specified, defines the Resolution of the output sl.Mat. If set to Resolution(0, 0), the camera resolution will be taken. Default: (0, 0).
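A sketch of retrieving the depth map and point cloud after a grab (assumes an opened camera `zed`; the pixel coordinate is an illustrative value):

```python
depth = sl.Mat()
point_cloud = sl.Mat()
if zed.grab() == sl.ERROR_CODE.SUCCESS:
    zed.retrieve_measure(depth, sl.MEASURE.DEPTH)
    zed.retrieve_measure(point_cloud, sl.MEASURE.XYZRGBA)
    # Read the metric depth value at an illustrative pixel coordinate.
    err, distance = depth.get_value(320, 240)
```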
ERROR_CODE set_region_of_interest(self, Mat py_mat, modules=[MODULE.ALL])
Defines a region of interest for the whole SDK to focus on, discarding other parts.
roi_mask : The Mat defining the requested region of interest; pixels lower than 127 will be discarded from all modules (depth, positional tracking, etc.). If empty, all pixels are set as valid. The mask can be at a lower or higher resolution than the current images.
ERROR_CODE get_region_of_interest(self, Mat py_mat, resolution=Resolution(0, 0), module=MODULE.ALL)
Gets the previously set or computed region of interest.
roi_mask : The Mat returned.
image_size : The optional size of the returned mask.
ERROR_CODE start_region_of_interest_auto_detection(self, roi_param=RegionOfInterestParameters())
Starts the auto detection of a region of interest for the whole SDK to focus on, discarding other parts.
This detection is based on the general motion of the camera combined with the motion in the scene. The camera must be moving for this process; an internal motion detector based on the positional tracking module is used. It requires a few hundred frames of motion to compute the mask.
roi_param : The RegionOfInterestParameters defining parameters for the detection.
REGION_OF_INTEREST_AUTO_DETECTION_STATE get_region_of_interest_auto_detection_status(self)
Returns the status of the automatic region of interest detection, which is enabled by calling start_region_of_interest_auto_detection().
ERROR_CODE start_publishing(self, CommunicationParameters communication_parameters)
Sets this camera as a data provider for the Fusion module.
Metadata is exchanged with the Fusion.
communication_parameters : A structure containing all the initial parameters. Default: a preset of CommunicationParameters.
ERROR_CODE stop_publishing(self)
Sets this camera back to a normal camera (without data providing).
Stops sending camera data to the Fusion module.
None set_svo_position(self, int frame_number)
Sets the playback cursor to the desired frame number in the SVO file.
This method allows you to move around within a played-back SVO file. After calling, the next call to grab() will read the provided frame number.
frame_number : The number of the desired frame to be decoded.
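A minimal playback sketch (the SVO file name and target frame are illustrative):

```python
import pyzed.sl as sl

init_params = sl.InitParameters()
init_params.set_from_svo_file("recording.svo")  # illustrative file name

zed = sl.Camera()
if zed.open(init_params) == sl.ERROR_CODE.SUCCESS:
    zed.set_svo_position(100)  # jump to frame 100
    if zed.grab() == sl.ERROR_CODE.SUCCESS:
        # grab() decoded frame 100; the cursor has now advanced past it.
        print("cursor:", zed.get_svo_position())
    zed.close()
```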
int get_svo_position(self)
Returns the current playback position in the SVO file.
The position corresponds to the number of frames already read from the SVO file, starting from 0 to n.
Each grab() call increases this value by one (except when using InitParameters.svo_real_time_mode).
See set_svo_position() for an example.
int get_svo_number_of_frames(self)
Returns the number of frames in the SVO file.
The method works only if the camera is open in SVO playback mode.
ERROR_CODE ingest_data_into_svo(self, SVOData data)
Ingests an SVOData object into the SVO file.
The method works only if the camera is open in SVO recording mode.
list get_svo_data_keys(self)
Get the external channels that can be retrieved from the SVO file.
The method works only if the camera is open in SVO playback mode.
ERROR_CODE retrieve_svo_data(self, str key, dict data, Timestamp ts_begin, Timestamp ts_end)
Retrieves SVO data from the SVO file at the given channel key and in the given timestamp range.
key : The channel key.
data : The dict to be filled with SVOData objects, with timestamps as keys.
ts_begin : The beginning of the range.
ts_end : The end of the range.
The method works only if the camera is open in SVO playback mode.
ERROR_CODE set_camera_settings_range(self, VIDEO_SETTINGS settings, min=-1, max=-1)
Sets the value of the requested camera setting that supports two values (min/max).
This method only works with the following VIDEO_SETTINGS:
settings : The setting to be set.
min : The minimum value that can be reached (-1 or 0 gives full range).
max : The maximum value that can be reached (-1 or 0 gives full range).
ERROR_CODE set_camera_settings_roi(self, VIDEO_SETTINGS settings, Rect roi, eye=SIDE.BOTH, reset=False)
Overloaded method for VIDEO_SETTINGS.AEC_AGC_ROI which takes a Rect as parameter.
settings : Must be set to VIDEO_SETTINGS.AEC_AGC_ROI, otherwise the method will have no impact.
roi : Rect that defines the target to be applied for AEC/AGC computation. Must be given according to camera resolution.
eye : SIDE on which to be applied for AEC/AGC computation. Default: SIDE.BOTH.
reset : Cancel the manual ROI and reset it to the full image. Default: False.
(ERROR_CODE, int) get_camera_settings(self, VIDEO_SETTINGS setting)
Returns the current value of the requested camera setting (gain, brightness, hue, exposure, etc.).
Possible values (range) of each setting are available here.
setting : The requested setting.
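A sketch of querying settings (assumes an opened camera `zed`; the settings queried are illustrative):

```python
# The method returns a (status, value) tuple.
err, gain = zed.get_camera_settings(sl.VIDEO_SETTINGS.GAIN)
if err == sl.ERROR_CODE.SUCCESS:
    print("Current gain:", gain)

err, brightness = zed.get_camera_settings(sl.VIDEO_SETTINGS.BRIGHTNESS)
if err == sl.ERROR_CODE.SUCCESS:
    print("Current brightness:", brightness)
```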
(ERROR_CODE, int, int) get_camera_settings_range(self, VIDEO_SETTINGS setting)
Returns the values of the requested camera setting, for VIDEO_SETTINGS that support two values (min/max).
This method only works with the following VIDEO_SETTINGS:
Possible values (range) of each setting are available here.
setting : The requested setting.
ERROR_CODE get_camera_settings_roi(self, VIDEO_SETTINGS setting, Rect roi, eye=SIDE.BOTH)
Returns the ROI currently used for the AEC_AGC_ROI camera setting.
setting[in] : Must be set to VIDEO_SETTINGS.AEC_AGC_ROI, otherwise the method will have no impact.
roi[out] : Roi that will be filled.
eye[in] : The requested side. Default: SIDE.BOTH.
bool is_camera_setting_supported(self, VIDEO_SETTINGS setting)
Returns whether the video setting is supported by the camera.
setting[in] : The video setting to test.
float get_current_fps(self)
Returns the current framerate at which the grab() method is successfully called.
The returned value is based on the difference of camera timestamps between two successful grab() calls.
Timestamp get_timestamp(self, TIME_REFERENCE time_reference)
Returns the timestamp in the requested TIME_REFERENCE.
This function can also be used when playing back an SVO file.
time_reference : The selected TIME_REFERENCE.
int get_frame_dropped_count(self)
Returns the number of frames dropped since grab() was called for the first time.
A dropped frame corresponds to a frame that never made it to the grab method.
This can happen if two frames were extracted from the camera when grab() is called. The older frame will be dropped so as to always use the latest (which minimizes latency).
(ERROR_CODE, float, float) get_current_min_max_depth(self)
Gets the current range of perceived depth.
min[out] : Minimum depth detected (in the selected sl.UNIT).
max[out] : Maximum depth detected (in the selected sl.UNIT).
CameraInformation get_camera_information(self, resizer=Resolution(0, 0))
Returns the CameraInformation associated with the camera being used.
To ensure accurate calibration, it is possible to specify a custom resolution as a parameter when obtaining scaled information, as calibration parameters are resolution-dependent.
When reading an SVO file, the parameters will correspond to the camera used for recording.
resizer : You can specify a size different from the default image size to get the scaled camera information. Default: (0, 0), meaning original image size (given by get_camera_information().camera_configuration.resolution).
RuntimeParameters get_runtime_parameters(self)
Returns the RuntimeParameters used.
It corresponds to the structure given as argument to the grab() method.
InitParameters get_init_parameters(self)
Returns the InitParameters associated with the Camera object.
It corresponds to the structure given as argument to the open() method.
PositionalTrackingParameters get_positional_tracking_parameters(self)
Returns the PositionalTrackingParameters used.
It corresponds to the structure given as argument to the enable_positional_tracking() method.
SpatialMappingParameters get_spatial_mapping_parameters(self)
Returns the SpatialMappingParameters used.
It corresponds to the structure given as argument to the enable_spatial_mapping() method.
ObjectDetectionParameters get_object_detection_parameters(self, instance_module_id=0)
Returns the ObjectDetectionParameters used.
It corresponds to the structure given as argument to the enable_object_detection() method.
BodyTrackingParameters get_body_tracking_parameters(self, instance_id=0)
Returns the BodyTrackingParameters used.
It corresponds to the structure given as argument to the enable_body_tracking() method.
StreamingParameters get_streaming_parameters(self)
Returns the StreamingParameters used.
It corresponds to the structure given as argument to the enable_streaming() method.
ERROR_CODE enable_positional_tracking(self, py_tracking=PositionalTrackingParameters())
Initializes and starts the positional tracking processes.
This method allows you to enable the position estimation of the SDK. It only has to be called once in the camera's lifetime.
When enabled, the position will be updated at each grab() call.
Tracking-specific parameters can be set by providing PositionalTrackingParameters to this method.
py_tracking : A structure containing all the specific parameters for the positional tracking. Default: a preset of PositionalTrackingParameters.
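A sketch of enabling tracking and polling the pose (assumes an opened camera `zed`; the loop length is illustrative):

```python
tracking_params = sl.PositionalTrackingParameters()
if zed.enable_positional_tracking(tracking_params) != sl.ERROR_CODE.SUCCESS:
    exit(1)

pose = sl.Pose()
for _ in range(100):
    if zed.grab() == sl.ERROR_CODE.SUCCESS:
        # The pose is expressed in the world frame here.
        state = zed.get_position(pose, sl.REFERENCE_FRAME.WORLD)
        if state == sl.POSITIONAL_TRACKING_STATE.OK:
            tx, ty, tz = pose.get_translation(sl.Translation()).get()
            print("Camera position:", tx, ty, tz)
```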
None update_self_calibration(self)
Performs a new self-calibration process.
In some cases, due to temperature changes or strong vibrations, the stereo calibration becomes less accurate.
Use this method to update the self-calibration data and get more reliable depth values.
ERROR_CODE enable_body_tracking(self, BodyTrackingParameters body_tracking_parameters=BodyTrackingParameters())
Initializes and starts the body tracking module.
The body tracking module currently supports multiple classes of human skeleton detection with BODY_TRACKING_MODEL.HUMAN_BODY_FAST, BODY_TRACKING_MODEL.HUMAN_BODY_MEDIUM, or BODY_TRACKING_MODEL.HUMAN_BODY_ACCURATE.
This model only detects humans but provides a full skeleton map for each person.
Detected bodies can be retrieved using the retrieve_bodies() method.
body_tracking_parameters : A structure containing all the specific parameters for the body tracking. Default: a preset of BodyTrackingParameters.
None disable_body_tracking(self, int instance_id=0, bool force_disable_all_instances=False)
Disables the body tracking process.
The body tracking module immediately stops and frees its memory allocations.
instance_id : Id of the body tracking instance. Used when multiple instances of the body tracking module are enabled at the same time.
force_disable_all_instances : Should disable all instances of the body tracking module or just instance_id.
ERROR_CODE retrieve_bodies(self, Bodies bodies, BodyTrackingRuntimeParameters body_tracking_runtime_parameters=BodyTrackingRuntimeParameters(), int instance_id=0)
Retrieves body tracking data from the body tracking module.
This method returns the result of the body tracking, whether the module is running synchronously or asynchronously.
It is recommended to keep the same Bodies object as the input of all calls to this method. This will enable the identification and the tracking of every detected person.
bodies : The detected bodies will be saved into this object. If the object already contains data from a previous tracking, it will be updated, keeping a unique ID for the same person.
body_tracking_runtime_parameters : Body tracking runtime settings; can be changed at each call. In async mode, the parameters update is applied on the next iteration.
instance_id : Id of the body tracking instance. Used when multiple instances of the body tracking module are enabled at the same time.
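A sketch of a body tracking loop (assumes an opened camera `zed`; positional tracking is enabled first because body tracking builds on it, and the loop length is illustrative):

```python
zed.enable_positional_tracking(sl.PositionalTrackingParameters())
zed.enable_body_tracking(sl.BodyTrackingParameters())

bodies = sl.Bodies()
body_runtime_params = sl.BodyTrackingRuntimeParameters()
for _ in range(100):
    if zed.grab() == sl.ERROR_CODE.SUCCESS:
        # Reusing the same Bodies object keeps per-person IDs stable.
        zed.retrieve_bodies(bodies, body_runtime_params)
        for body in bodies.body_list:
            print(body.id, body.position)
```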
ERROR_CODE get_sensors_data(self, SensorsData py_sensors_data, time_reference=TIME_REFERENCE.CURRENT)
Retrieves the SensorsData (IMU, magnetometer, barometer) at a specific time reference.
The SensorsData object contains the IMUData structure that was used in ZED SDK v2.X.
For IMU data, the values are provided in two ways.
The delta time between the previous and current values can be calculated using data.imu.timestamp.
py_sensors_data[out] : The SensorsData variable to store the data.
time_reference[in] : Defines the time reference from which you want the data. Default: TIME_REFERENCE.CURRENT.
ERROR_CODE set_imu_prior(self, Transform transform)
Set an optional IMU orientation hint that will be used to assist the tracking during the next grab().
This method can be used to assist the positional tracking rotation.
transform : Transform to be ingested into IMU fusion. Note that only the rotation is used.
POSITIONAL_TRACKING_STATE get_position(self, Pose py_pose, reference_frame=REFERENCE_FRAME.WORLD)
Retrieves the estimated position and orientation of the camera in the specified reference frame.
If the tracking has been initialized with PositionalTrackingParameters.enable_area_memory set to True (default), this method can return POSITIONAL_TRACKING_STATE.SEARCHING. This means that the tracking lost its link to the initial referential and is currently trying to relocate the camera. However, it will keep on providing position estimations.
py_pose[out] : The pose containing the position of the camera and other information (timestamp, confidence).
reference_frame[in] : Defines the reference from which you want the pose to be expressed. Default: REFERENCE_FRAME.WORLD.
PositionalTrackingStatus get_positional_tracking_status(self)
Returns the current status of the positional tracking module.
AREA_EXPORTING_STATE get_area_export_state(self)
Returns the state of the spatial memory export process.
As Camera.save_area_map() only starts the export, this method allows you to know when the export has finished or if it failed.
ERROR_CODE save_area_map(self, area_file_path="")
Saves the current area learning file.
The file will contain spatial memory data generated by the tracking.
If the tracking has been initialized with PositionalTrackingParameters.enable_area_memory set to True (default), the method allows you to export the spatial memory.
Reloading the exported file in a future session with PositionalTrackingParameters.area_file_path initializes the tracking within the same referential.
This method is asynchronous, and only triggers the file generation. You can use get_area_export_state() to get the export state. The positional tracking keeps running while exporting.
area_file_path : Path of an '.area' file to save the spatial memory database in.
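A sketch of triggering the export and polling its state (assumes an opened camera `zed` with positional tracking and area memory enabled; the file name and the polling interval are illustrative):

```python
import time

# save_area_map() only triggers the export; it returns immediately.
zed.save_area_map("my_area.area")  # illustrative file name

# Poll until the export leaves the RUNNING state; tracking keeps running.
while zed.get_area_export_state() == sl.AREA_EXPORTING_STATE.RUNNING:
    time.sleep(0.1)

print("Export finished with state:", zed.get_area_export_state())
```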
None disable_positional_tracking(self, area_file_path="")
Disables the positional tracking.
The positional tracking is immediately stopped. If a file path is given, save_area_map() will be called asynchronously. See get_area_export_state() to get the export state. If the tracking has been enabled, this method will automatically be called by close().
area_file_path | : If set, saves the spatial memory into an '.area' file. Default: (empty) area_file_path is the name and path of the database, e.g. "path/to/file/myArea1.area". |
ERROR_CODE reset_positional_tracking(self, Transform path)
Resets the tracking, and re-initializes the position with the given transformation matrix.
path | : Position of the camera in the world frame when the method is called. |
ERROR_CODE enable_spatial_mapping(self, py_spatial=SpatialMappingParameters())
Initializes and starts the spatial mapping processes.
The spatial mapping will create a geometric representation of the scene based on both tracking data and 3D point clouds. The resulting output can be a Mesh or a FusedPointCloud. It can be obtained by calling extract_whole_spatial_map() or retrieve_spatial_map_async(). Note that retrieve_spatial_map_async() should be called after request_spatial_map_async().
py_spatial | : A structure containing all the specific parameters for the spatial mapping. Default: a balanced parameter preset between geometric fidelity and output file size. For more information, see the SpatialMappingParameters documentation. |
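The synchronous flow is: enable mapping, grab frames so they get integrated, extract the whole map, then disable mapping. A sketch of that sequence; `build_spatial_map` and `_StubCamera` are illustrative names, and the stub stands in for an opened sl.Camera (with the real SDK, positional tracking must be enabled first and `spatial_map` is an sl.Mesh or sl.FusedPointCloud):

```python
def build_spatial_map(zed, spatial_map, n_frames):
    """Enable mapping, integrate n_frames grabbed frames, then extract the map."""
    if zed.enable_spatial_mapping() != 0:   # 0 stands in for ERROR_CODE.SUCCESS
        return None
    for _ in range(n_frames):
        zed.grab()                          # mapping integrates each grabbed frame
    err = zed.extract_whole_spatial_map(spatial_map)
    zed.disable_spatial_mapping()
    return err

# Minimal stand-in camera so the sketch is self-contained.
class _StubCamera:
    def enable_spatial_mapping(self, params=None): return 0
    def grab(self): return 0
    def extract_whole_spatial_map(self, spatial_map): return 0
    def disable_spatial_mapping(self): pass

map_err = build_spatial_map(_StubCamera(), spatial_map=None, n_frames=5)
```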
None pause_spatial_mapping(self, bool status)
Pauses or resumes the spatial mapping processes.
As spatial mapping runs asynchronously, using this method can pause its computation to free some processing power, and resume it again later.
For example, it can be used to avoid mapping a specific area or to pause the mapping when the camera is static.
status | : If True, the integration is paused. If False, the spatial mapping is resumed. |
SPATIAL_MAPPING_STATE get_spatial_mapping_state(self)
Returns the current spatial mapping state.
As the spatial mapping runs asynchronously, this method allows you to get reported errors or status info.
See also SPATIAL_MAPPING_STATE
None request_spatial_map_async(self)
Starts the spatial map generation in a non-blocking background thread of the spatial mapping process.
The spatial map generation can take a long time depending on the mapping resolution and covered area. This function will trigger the generation of a mesh without blocking the program. You can get info about the current generation using get_spatial_map_request_status_async(), and retrieve the mesh using retrieve_spatial_map_async().
ERROR_CODE get_spatial_map_request_status_async(self)
Returns the spatial map generation status.
This status allows you to know if the mesh can be retrieved by calling retrieve_spatial_map_async().
ERROR_CODE retrieve_spatial_map_async(self, py_mesh)
Retrieves the current generated spatial map.
After calling request_spatial_map_async(), this method allows you to retrieve the generated mesh or fused point cloud.
The Mesh or FusedPointCloud will only be available when get_spatial_map_request_status_async() returns ERROR_CODE.SUCCESS.
py_mesh[out] | : The Mesh or FusedPointCloud to be filled with the generated spatial map. |
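The three asynchronous calls combine into a request / poll / retrieve pattern. A sketch with a stand-in camera so it runs without the SDK (with the real SDK, the status and return values are sl.ERROR_CODE members; 0 stands in for ERROR_CODE.SUCCESS):

```python
def fetch_spatial_map(zed, spatial_map, max_polls=100):
    """Request the map asynchronously, poll the status, then retrieve it."""
    zed.request_spatial_map_async()                      # non-blocking
    for _ in range(max_polls):
        if zed.get_spatial_map_request_status_async() == 0:
            return zed.retrieve_spatial_map_async(spatial_map)
    return None                                          # generation never finished

class _StubCamera:
    """Stand-in whose map generation 'finishes' on the second poll."""
    def __init__(self):
        self._polls = 0
    def request_spatial_map_async(self):
        self._polls = 0
    def get_spatial_map_request_status_async(self):
        self._polls += 1
        return 0 if self._polls >= 2 else 1
    def retrieve_spatial_map_async(self, spatial_map):
        return 0

fetch_err = fetch_spatial_map(_StubCamera(), spatial_map=None)
```

In a real application the polling would typically happen inside the main grab loop rather than in a tight loop, since grabbing continues while the map is generated.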
ERROR_CODE extract_whole_spatial_map(self, py_mesh)
Extract the current spatial map from the spatial mapping process.
If the object to be filled already contains a previous version of the mesh / fused point cloud, only changes will be updated, optimizing performance.
py_mesh[out] | : The Mesh or FusedPointCloud to be filled with the generated spatial map. |
ERROR_CODE find_plane_at_hit(self, coord, Plane py_plane, parameters=PlaneDetectionParameters())
Checks the plane at the given left image coordinates.
This method gives the 3D plane corresponding to a given pixel in the latest left image grabbed.
The pixel coordinates are expected to be contained in x=[0; width-1] and y=[0; height-1], where width/height are defined by the input resolution.
coord[in] | : The image coordinate. The coordinate must be taken from the full-size image |
plane[out] | : The detected plane if the method succeeded. |
parameters[in] | : A structure containing all the specific parameters for the plane detection. Default: a preset of PlaneDetectionParameters. |
ERROR_CODE find_floor_plane(self, Plane py_plane, Transform reset_tracking_floor_frame, floor_height_prior=float('nan'), world_orientation_prior=Rotation(Matrix3f().zeros()), floor_height_prior_tolerance=float('nan'))
Detect the floor plane of the scene.
This method analyses the latest image and depth to estimate the floor plane of the scene.
It expects the floor plane to be visible and bigger than other candidate planes, like a table.
py_plane[out] | : The detected floor plane if the method succeeded. |
reset_tracking_floor_frame[out] | : The transform to align the tracking with the floor plane. The initial position will then be at ground height, with the axes aligned with gravity. The positional tracking needs to be reset/enabled with this transform as a parameter (PositionalTrackingParameters.initial_world_transform). |
floor_height_prior[in] | : Prior set to locate the floor plane depending on the known camera distance to the ground, expressed in the same unit as the ZED. If the prior is too far from the detected floor plane, the method will return ERROR_CODE.PLANE_NOT_FOUND. |
world_orientation_prior[in] | : Prior set to locate the floor plane depending on the known camera orientation to the ground. If the prior is too far from the detected floor plane, the method will return ERROR_CODE.PLANE_NOT_FOUND. |
floor_height_prior_tolerance[in] | : Prior height tolerance, absolute value. |
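A common use of the returned transform is to re-initialize tracking so the world origin sits on the floor. A sketch of that flow; `align_tracking_to_floor` is an illustrative name and the stub stands in for an opened sl.Camera (with the real SDK, `plane` is an sl.Plane and `floor_frame` an sl.Transform passed on to reset_positional_tracking()):

```python
def align_tracking_to_floor(zed, plane, floor_frame):
    """Detect the floor plane, then reset tracking with the returned transform."""
    err = zed.find_floor_plane(plane, floor_frame)
    if err != 0:   # e.g. ERROR_CODE.PLANE_NOT_FOUND if the floor is not visible
        return err
    return zed.reset_positional_tracking(floor_frame)

# Minimal stand-in camera so the sketch is self-contained.
class _StubCamera:
    def find_floor_plane(self, plane, floor_frame): return 0
    def reset_positional_tracking(self, transform): return 0

floor_err = align_tracking_to_floor(_StubCamera(), plane=None, floor_frame=None)
```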
None disable_spatial_mapping(self)
Disables the spatial mapping process.
The spatial mapping is immediately stopped.
If the mapping has been enabled, this method will automatically be called by close().
ERROR_CODE enable_streaming(self, streaming_parameters=StreamingParameters())
Creates a streaming pipeline.
streaming_parameters | : A structure containing all the specific parameters for the streaming. Default: a preset of StreamingParameters. |
None disable_streaming(self)
Disables the streaming initiated by enable_streaming().
See enable_streaming() for an example.
bool is_streaming_enabled(self)
Tells if the streaming is running.
ERROR_CODE enable_recording(self, RecordingParameters record)
Creates an SVO file that will be filled with the frames grabbed between enable_recording() and disable_recording().
SVO files are custom video files containing the un-rectified images from the camera along with some meta-data like timestamps or IMU orientation (if applicable).
They can be used to simulate a live ZED and test a sequence with various SDK parameters.
Depending on the application, various compression modes are available. See SVO_COMPRESSION_MODE.
record | : A structure containing all the specific parameters for the recording such as filename and compression mode. Default: a preset of RecordingParameters. |
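A recording session wraps the normal grab loop: enable recording, grab (each successful grab appends a frame to the SVO file), then disable recording to close the file. Sketch with a stand-in camera so it runs without the SDK; `record_svo` is an illustrative name:

```python
def record_svo(zed, record_params, n_frames):
    """Record n_frames grabbed frames into the SVO file set up by record_params."""
    err = zed.enable_recording(record_params)
    if err != 0:                 # 0 stands in for ERROR_CODE.SUCCESS
        return err
    frames = 0
    for _ in range(n_frames):
        if zed.grab() == 0:      # each successful grab appends a frame
            frames += 1
    zed.disable_recording()      # closes the generated file
    return frames

# Minimal stand-in camera so the sketch is self-contained.
class _StubCamera:
    def enable_recording(self, params): return 0
    def grab(self): return 0
    def disable_recording(self): pass

recorded = record_svo(_StubCamera(), record_params=None, n_frames=10)
```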
None disable_recording(self)
Disables the recording initiated by enable_recording() and closes the generated file.
See enable_recording() for an example.
RecordingStatus get_recording_status(self)
Get the recording information.
None pause_recording(self, value=True)
Pauses or resumes the recording.
value | : If True, the recording is paused. If False, the recording is resumed. |
RecordingParameters get_recording_parameters(self)
Returns the RecordingParameters used.
It corresponds to the structure given as argument to the enable_recording() method.
ERROR_CODE enable_object_detection(self, object_detection_parameters=ObjectDetectionParameters())
Initializes and starts object detection module.
The object detection module currently supports multiple classes of objects with the OBJECT_DETECTION_MODEL.MULTI_CLASS_BOX or OBJECT_DETECTION_MODEL.MULTI_CLASS_BOX_ACCURATE models.
The full list of detectable objects is available through OBJECT_CLASS and OBJECT_SUBCLASS.
Detected objects can be retrieved using the retrieve_objects() method; when the module runs synchronously, retrieve_objects() blocks until the detection is finished.
object_detection_parameters | : A structure containing all the specific parameters for the object detection. Default: a preset of ObjectDetectionParameters. |
None disable_object_detection(self, instance_module_id=0, force_disable_all_instances=False)
Disables the object detection process.
The object detection module immediately stops and frees its memory allocations.
instance_module_id | : Id of the object detection instance. Used when multiple instances of the object detection module are enabled at the same time. |
force_disable_all_instances | : Should disable all instances of the object detection module or just instance_module_id. |
ERROR_CODE retrieve_objects(self, Objects py_objects, ObjectDetectionRuntimeParameters object_detection_parameters=ObjectDetectionRuntimeParameters(), instance_module_id=0)
Retrieve objects detected by the object detection module.
This method returns the result of the object detection, whether the module is running synchronously or asynchronously.
It is recommended to keep the same Objects object as the input of all calls to this method. This will enable the identification and tracking of every object detected.
py_objects[out] | : The detected objects will be saved into this object. If the object already contains data from a previous detection, it will be updated, keeping a unique ID for the same person. |
object_detection_parameters[in] | : Object detection runtime settings, can be changed at each detection. In async mode, the parameters update is applied on the next iteration. |
instance_module_id | : Id of the object detection instance. Used when multiple instances of the object detection module are enabled at the same time. |
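The recommendation to reuse one Objects container is what keeps object IDs stable across frames. A sketch of that loop; `track_objects` and the stub classes are illustrative names standing in for an opened sl.Camera and an sl.Objects instance so the snippet runs without hardware:

```python
def track_objects(zed, objects, n_frames):
    """Retrieve detections each frame, reusing one container so IDs persist."""
    counts = []
    for _ in range(n_frames):
        if zed.grab() == 0:
            zed.retrieve_objects(objects)        # same container every call
            counts.append(len(objects.object_list))
    return counts

# Minimal stand-ins so the sketch is self-contained.
class _StubObjects:
    def __init__(self):
        self.object_list = []

class _StubCamera:
    def grab(self): return 0
    def retrieve_objects(self, objects, *args):
        objects.object_list = ["person"]          # pretend one detection per frame

object_counts = track_objects(_StubCamera(), _StubObjects(), n_frames=3)
```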
ERROR_CODE get_objects_batch(self, list[ObjectsBatch] trajectories, instance_module_id=0)
Get a batch of detected objects.
trajectories | : List of sl.ObjectsBatch that will be filled by the batching queue process. An empty list should be passed to the method. |
instance_module_id | : Id of the object detection instance. Used when multiple instances of the object detection module are enabled at the same time. |
ERROR_CODE ingest_custom_box_objects(self, list[CustomBoxObjectData] objects_in, instance_module_id=0)
Feed the 3D Object tracking function with your own 2D bounding boxes from your own detection algorithm.
objects_in | : List of CustomBoxObjectData to feed the object detection. |
instance_module_id | : Id of the object detection instance. Used when multiple instances of the object detection module are enabled at the same time. |
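The per-frame flow is: grab, run your own 2D detector on the latest left image, ingest the boxes, then retrieve the tracked 3D objects. A sketch of one iteration; `ingest_and_track` and `detector` are illustrative names, and the stub stands in for an sl.Camera whose object detection module was enabled with a custom-boxes model:

```python
def ingest_and_track(zed, detector, objects):
    """One frame of the custom-detector flow: grab, detect in 2D, ingest, retrieve.

    `detector` is your own 2D detector returning CustomBoxObjectData-like items
    computed on the latest left image; the SDK lifts them to tracked 3D objects.
    """
    if zed.grab() != 0:
        return False
    zed.ingest_custom_box_objects(detector())   # feed your 2D boxes
    zed.retrieve_objects(objects)               # read back tracked 3D objects
    return True

# Minimal stand-in camera so the sketch is self-contained.
class _StubCamera:
    def grab(self): return 0
    def ingest_custom_box_objects(self, boxes): self.boxes = boxes
    def retrieve_objects(self, objects): objects.append("tracked")

tracked = []
ingest_ok = ingest_and_track(_StubCamera(), detector=lambda: ["box"], objects=tracked)
```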
ERROR_CODE ingest_custom_mask_objects(self, list[CustomMaskObjectData] objects_in, instance_module_id=0)
Feed the 3D Object tracking function with your own 2D bounding boxes with masks from your own detection algorithm.
objects_in | : List of CustomMaskObjectData to feed the object detection. |
instance_module_id | : Id of the object detection instance. Used when multiple instances of the object detection module are enabled at the same time. |
str get_sdk_version() [static]
Returns the version of the currently installed ZED SDK.
list[DeviceProperties] get_device_list() [static]
List all the connected devices with their associated information.
This method lists all the cameras available and provides their serial number, models and other information.
list[StreamingProperties] get_streaming_device_list() [static]
Lists all the streaming devices with their associated information.
ERROR_CODE reboot(sn, full_reboot=True) [static]
Performs a hardware reset of the ZED 2 and the ZED 2i.
sn | : Serial number of the camera to reset, or 0 to reset the first camera detected. |
full_reboot | : Perform a full reboot (sensors and video modules) if True, otherwise only the video module will be rebooted. |
ERROR_CODE reboot_from_input(input_type) [static]
Performs a hardware reset of all devices matching the InputType.
input_type | : Input type of the devices to reset. |