ObjectDetectionParameters Class Reference

Class containing a set of parameters for the object detection module. More...

Functions

ObjectDetectionParameters __cinit__ (self, enable_tracking=True, enable_segmentation=False, detection_model=OBJECT_DETECTION_MODEL.MULTI_CLASS_BOX_FAST, max_range=-1.0, batch_trajectories_parameters=BatchParameters(), filtering_mode=OBJECT_FILTERING_MODE.NMS3D, prediction_timeout_s=0.2, allow_reduced_precision_inference=False, instance_module_id=0, fused_objects_group_name="", custom_onnx_file="", custom_onnx_dynamic_input_shape=Resolution(512, 512))
 Default constructor. More...
 
bool enable_tracking (self)
 Whether the object detection system includes object tracking capabilities across a sequence of images. More...
 
bool enable_segmentation (self)
 Whether the object masks will be computed. More...
 
OBJECT_DETECTION_MODEL detection_model (self)
 sl.OBJECT_DETECTION_MODEL to use. More...
 
str fused_objects_group_name (self)
 In a multi-camera setup, specify which group this model belongs to. More...
 
str custom_onnx_file (self)
 Path to the YOLO-like ONNX file for custom object detection run in the ZED SDK. More...
 
sl.Resolution custom_onnx_dynamic_input_shape (self)
 Input resolution of the YOLO-like ONNX file for custom object detection run in the ZED SDK. More...
 
float max_range (self)
 Upper depth range for detections. More...
 
BatchParameters batch_parameters (self)
 Batching system parameters. More...
 
OBJECT_FILTERING_MODE filtering_mode (self)
 Filtering mode that should be applied to raw detections. More...
 
float prediction_timeout_s (self)
 Prediction duration of the ZED SDK when an object is not detected anymore before switching its state to sl.OBJECT_TRACKING_STATE.SEARCHING. More...
 
bool allow_reduced_precision_inference (self)
 Whether to allow inference to run at a lower precision to improve runtime and memory usage. More...
 
int instance_module_id (self)
 Id of the module instance. More...
 

Detailed Description

Class containing a set of parameters for the object detection module.

The default constructor sets all parameters to their default settings.

Note
Parameters can be adjusted by the user.
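
A minimal usage sketch is shown below. The sl.Camera calls (open(), enable_positional_tracking(), enable_object_detection()) belong to the wider SDK API and are assumed here rather than documented on this page; positional tracking is enabled first since, in typical samples, object tracking relies on it.

import pyzed.sl as sl

# Open the camera with default initialization parameters.
cam = sl.Camera()
if cam.open(sl.InitParameters()) != sl.ERROR_CODE.SUCCESS:
    exit(1)

# Object tracking builds on positional tracking, so enable it first.
cam.enable_positional_tracking(sl.PositionalTrackingParameters())

# Adjust the object detection parameters documented on this page.
obj_param = sl.ObjectDetectionParameters()
obj_param.enable_tracking = True
obj_param.enable_segmentation = False
obj_param.detection_model = sl.OBJECT_DETECTION_MODEL.MULTI_CLASS_BOX_FAST

if cam.enable_object_detection(obj_param) != sl.ERROR_CODE.SUCCESS:
    exit(1)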

Functions

◆ __cinit__()

ObjectDetectionParameters __cinit__ (   self,
  enable_tracking = True,
  enable_segmentation = False,
  detection_model = OBJECT_DETECTION_MODEL.MULTI_CLASS_BOX_FAST,
  max_range = -1.0,
  batch_trajectories_parameters = BatchParameters(),
  filtering_mode = OBJECT_FILTERING_MODE.NMS3D,
  prediction_timeout_s = 0.2,
  allow_reduced_precision_inference = False,
  instance_module_id = 0,
  fused_objects_group_name = "",
  custom_onnx_file = "",
  custom_onnx_dynamic_input_shape = Resolution(512, 512) 
)

Default constructor.

All the parameters are set to their default values.

Parameters
enable_tracking: Activates enable_tracking
enable_segmentation: Activates enable_segmentation
detection_model: Chosen detection_model
max_range: Chosen max_range
batch_trajectories_parameters: Chosen batch_parameters
filtering_mode: Chosen filtering_mode
prediction_timeout_s: Chosen prediction_timeout_s
allow_reduced_precision_inference: Activates allow_reduced_precision_inference
instance_module_id: Chosen instance_module_id
fused_objects_group_name: Chosen fused_objects_group_name
custom_onnx_file: Chosen custom_onnx_file
custom_onnx_dynamic_input_shape: Chosen custom_onnx_dynamic_input_shape
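
For example, parameters can be overridden directly at construction time through keyword arguments matching the signature above (a short sketch with illustrative values):

import pyzed.sl as sl

# Illustrative values; every keyword matches an argument of __cinit__ above.
obj_param = sl.ObjectDetectionParameters(
    enable_tracking=True,
    enable_segmentation=False,
    detection_model=sl.OBJECT_DETECTION_MODEL.MULTI_CLASS_BOX_FAST,
    max_range=10.0,
    prediction_timeout_s=0.2)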

◆ enable_tracking()

bool enable_tracking (   self)

Whether the object detection system includes object tracking capabilities across a sequence of images.

Default: True

◆ enable_segmentation()

bool enable_segmentation (   self)

Whether the object masks will be computed.

Default: False

◆ detection_model()

OBJECT_DETECTION_MODEL detection_model (   self)

sl.OBJECT_DETECTION_MODEL to use.

Default: sl.OBJECT_DETECTION_MODEL.MULTI_CLASS_BOX_FAST

◆ fused_objects_group_name()

str fused_objects_group_name (   self)

In a multi-camera setup, specify which group this model belongs to.

In a multi-camera setup, multiple cameras can be used to detect objects, and several detectors with a similar output layout may see the same object. Fusion will therefore fuse together the outputs received from multiple detectors only if they share the same fused_objects_group_name.

Note
This parameter is not used in a single-camera setup, but it must be set in a multi-camera setup.
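
A minimal sketch for a multi-camera setup is given below; only the group name is shown, as the surrounding Fusion configuration belongs to the wider SDK and is not covered on this page. The group name itself is a hypothetical example.

import pyzed.sl as sl

# Use the same string for every detector whose outputs should be fused together.
GROUP_NAME = "people_detectors"

obj_param_cam1 = sl.ObjectDetectionParameters()
obj_param_cam1.fused_objects_group_name = GROUP_NAME

obj_param_cam2 = sl.ObjectDetectionParameters()
obj_param_cam2.fused_objects_group_name = GROUP_NAME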

◆ custom_onnx_file()

str custom_onnx_file (   self)

Path to the YOLO-like ONNX file for custom object detection run in the ZED SDK.

When detection_model is sl.OBJECT_DETECTION_MODEL.CUSTOM_YOLOLIKE_BOX_OBJECTS, an ONNX model must be passed so that the ZED SDK can optimize it for your GPU and run inference on it.

The resulting optimized model will be saved for re-use in the future.

Attention
- The model must be a YOLO-like model.
- The caching uses the custom_onnx_file string along with your GPU specs to decide whether to use the cached optimized model or to optimize the passed ONNX model. If you want to use a different model (i.e., an ONNX file with different weights), you must use a different custom_onnx_file string or delete the cached optimized model in <ZED Installation path>/resources.
Note
This parameter is ignored when detection_model is not sl.OBJECT_DETECTION_MODEL.CUSTOM_YOLOLIKE_BOX_OBJECTS.

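A short sketch of enabling a custom YOLO-like model (the ONNX path is a placeholder):

import pyzed.sl as sl

obj_param = sl.ObjectDetectionParameters()
obj_param.detection_model = sl.OBJECT_DETECTION_MODEL.CUSTOM_YOLOLIKE_BOX_OBJECTS
# Placeholder path. If the weights change, change this string too (or delete the
# cached optimized model) so that a stale cached engine is not reused.
obj_param.custom_onnx_file = "/path/to/my_yolo_model.onnx"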

◆ custom_onnx_dynamic_input_shape()

sl.Resolution custom_onnx_dynamic_input_shape (   self)

Input resolution of the YOLO-like ONNX file for custom object detection run in the ZED SDK.

This resolution defines the input tensor size for dynamic-shape ONNX models only. The batch and channel dimensions are handled automatically; the input is assumed to be a color image, as with default YOLO models.

Note
This parameter is only used when detection_model is sl.OBJECT_DETECTION_MODEL.CUSTOM_YOLOLIKE_BOX_OBJECTS and the provided ONNX file uses dynamic shapes.
Attention
- Multiple models only support square input images.

Default: 512x512 square images (the input tensor will be 1x3x512x512)

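A short sketch for a dynamic-shape ONNX file (the path is a placeholder and 640x640 is only an illustrative square size):

import pyzed.sl as sl

obj_param = sl.ObjectDetectionParameters()
obj_param.detection_model = sl.OBJECT_DETECTION_MODEL.CUSTOM_YOLOLIKE_BOX_OBJECTS
obj_param.custom_onnx_file = "/path/to/my_dynamic_yolo_model.onnx"  # placeholder path
# Only used for dynamic-shape models; the input tensor becomes 1x3x640x640.
obj_param.custom_onnx_dynamic_input_shape = sl.Resolution(640, 640)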

◆ max_range()

float max_range (   self)

Upper depth range for detections.

Default: -1 (value set in sl.InitParameters.depth_maximum_distance)

Note
The value cannot be greater than sl.InitParameters.depth_maximum_distance and its unit is defined in sl.InitParameters.coordinate_units.
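
For instance (values are illustrative; the unit follows sl.InitParameters.coordinate_units):

import pyzed.sl as sl

init_params = sl.InitParameters()
init_params.coordinate_units = sl.UNIT.METER   # detections will be expressed in meters
init_params.depth_maximum_distance = 20.0

obj_param = sl.ObjectDetectionParameters()
obj_param.max_range = 15.0   # must not exceed depth_maximum_distance above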

◆ batch_parameters()

BatchParameters batch_parameters (   self)

Batching system parameters.

The batching system (introduced in 3.5) performs short-term re-identification with deep learning and trajectory filtering.
sl.BatchParameters.enable must be set to True to use this feature (disabled by default).
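
A minimal sketch of enabling the batching system through the constructor's batch_trajectories_parameters argument:

import pyzed.sl as sl

batch_param = sl.BatchParameters()
batch_param.enable = True  # disabled by default

obj_param = sl.ObjectDetectionParameters(
    enable_tracking=True,  # batching performs trajectory filtering, so tracking stays enabled
    batch_trajectories_parameters=batch_param)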

◆ filtering_mode()

OBJECT_FILTERING_MODE filtering_mode (   self)

Filtering mode that should be applied to raw detections.

Default: sl.OBJECT_FILTERING_MODE.NMS3D (same behavior as previous ZED SDK versions)

Note
This parameter is only used with the sl.OBJECT_DETECTION_MODEL.MULTI_CLASS_BOX_XXX and sl.OBJECT_DETECTION_MODEL.CUSTOM_BOX_OBJECTS detection models.
For custom objects, it is recommended to use sl.OBJECT_FILTERING_MODE.NMS3D_PER_CLASS or sl.OBJECT_FILTERING_MODE.NONE.
In the latter case, you might need to apply your own NMS filter before ingesting the boxes into the object detection module.
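
For example, when ingesting custom box detections (a sketch; choose NONE instead if you already apply your own NMS before ingestion):

import pyzed.sl as sl

obj_param = sl.ObjectDetectionParameters()
obj_param.detection_model = sl.OBJECT_DETECTION_MODEL.CUSTOM_BOX_OBJECTS
# Recommended for custom objects, as noted above.
obj_param.filtering_mode = sl.OBJECT_FILTERING_MODE.NMS3D_PER_CLASS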

◆ prediction_timeout_s()

float prediction_timeout_s (   self)

Prediction duration of the ZED SDK when an object is not detected anymore before switching its state to sl.OBJECT_TRACKING_STATE.SEARCHING.

It prevents the object state from jittering when a detection is briefly missed.
The user can define their own prediction time duration.
Default: 0.2

Note
During this time, the object will have sl.OBJECT_TRACKING_STATE.OK state even if it is not detected.
The duration is expressed in seconds.
Warning
prediction_timeout_s will be clamped to 1 second, as the prediction quality degrades over time.
Setting this parameter to 0 disables the ZED SDK predictions.

◆ allow_reduced_precision_inference()

bool allow_reduced_precision_inference (   self)

Whether to allow inference to run at a lower precision to improve runtime and memory usage.

It might increase the initial optimization time, may involve downloading calibration data or a calibration cache, and can slightly reduce accuracy.

Note
FP16 is automatically enabled if the GPU is compatible; it provides a speed-up of almost 2x and reduces memory usage by almost half, with no precision loss.
This setting allows INT8 precision, which can provide another 2x speed-up (compared to FP16, or 4x compared to FP32) and halves the FP16 memory usage; however, some accuracy may be lost.
The accuracy loss should not exceed 1-2% on the compatible models.
The currently compatible models are all sl.AI_MODELS.HUMAN_BODY_XXXX.

◆ instance_module_id()

int instance_module_id (   self)

Id of the module instance.

This is used to identify which object detection module instance is used.
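
A sketch of running two detection module instances side by side. The retrieve_objects() call and its instance_module_id argument come from the wider sl.Camera API and are assumed here rather than documented on this page.

import pyzed.sl as sl

cam = sl.Camera()
cam.open(sl.InitParameters())
cam.enable_positional_tracking(sl.PositionalTrackingParameters())

# First instance: built-in multi-class detector.
param_a = sl.ObjectDetectionParameters()
param_a.detection_model = sl.OBJECT_DETECTION_MODEL.MULTI_CLASS_BOX_FAST
param_a.instance_module_id = 0
cam.enable_object_detection(param_a)

# Second instance: custom box ingestion, identified by a different id.
param_b = sl.ObjectDetectionParameters()
param_b.detection_model = sl.OBJECT_DETECTION_MODEL.CUSTOM_BOX_OBJECTS
param_b.instance_module_id = 1
cam.enable_object_detection(param_b)

objects = sl.Objects()
if cam.grab() == sl.ERROR_CODE.SUCCESS:
    # Retrieve the results of the second instance by passing its id.
    cam.retrieve_objects(objects, sl.ObjectDetectionRuntimeParameters(), instance_module_id=1)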
