Structure containing a set of parameters for the object detection module.

Data Fields

unsigned int instance_module_id
    Id of the module instance.
bool enable_tracking
    Whether the object detection system includes object tracking capabilities across a sequence of images.
bool enable_segmentation
    Whether the object masks will be computed.
enum SL_OBJECT_DETECTION_MODEL detection_model
    SL_OBJECT_DETECTION_MODEL to use.
char * fused_objects_group_name
    In a multi-camera setup, specifies which group this model belongs to.
char * custom_onnx_file
    Path to the YOLO-like ONNX file for custom object detection run in the ZED SDK.
struct SL_Resolution custom_onnx_dynamic_input_shape
    Input tensor resolution of the YOLO-like ONNX file for custom object detection, used for dynamic-shape ONNX models only.
float max_range
    Upper depth range for detections.
struct SL_BatchParameters batch_parameters
    Batching system parameters.
enum SL_OBJECT_FILTERING_MODE filtering_mode
    Filtering mode that should be applied to raw detections.
float prediction_timeout_s
    Duration during which the ZED SDK keeps predicting an object that is no longer detected, before switching its state to SL_OBJECT_TRACKING_STATE_SEARCHING.
bool allow_reduced_precision_inference
    Whether to allow inference to run at a lower precision to improve runtime and memory usage.
Structure containing a set of parameters for the object detection module.
The default constructor sets all parameters to their default settings.
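A minimal usage sketch filling the struct with the defaults documented on this page. The header path, the enum value SL_OBJECT_DETECTION_MODEL_MULTI_CLASS_BOX_FAST, and the sl_enable_object_detection() call are assumptions about the C wrapper and should be checked against your SDK version; the field names and default values are the ones listed here.

    #include <sl/c_api/zed_interface.h>   // assumed header path for the ZED C API
    #include <stdbool.h>
    #include <string.h>

    // Fill SL_ObjectDetectionParameters with the documented defaults and enable the module.
    void enable_object_detection_with_defaults(int camera_id)
    {
        struct SL_ObjectDetectionParameters od_params;
        memset(&od_params, 0, sizeof(od_params));

        od_params.instance_module_id = 0;
        od_params.enable_tracking = true;                       // default: true
        od_params.enable_segmentation = false;                  // default: false
        od_params.detection_model = SL_OBJECT_DETECTION_MODEL_MULTI_CLASS_BOX_FAST; // assumed enum value
        od_params.max_range = -1.f;                             // use SL_InitParameters.depth_maximum_distance
        od_params.filtering_mode = SL_OBJECT_FILTERING_MODE_NMS_3D;
        od_params.prediction_timeout_s = 0.2f;
        od_params.allow_reduced_precision_inference = false;
        od_params.batch_parameters.enable = false;              // batching disabled by default

        // Assumed wrapper call; check the exact name, signature and return code in your SDK version.
        sl_enable_object_detection(camera_id, &od_params);
    }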
unsigned int SL_ObjectDetectionParameters::instance_module_id
Id of the module instance.
This is used to identify which object detection module instance is used.
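As an illustrative sketch (the split between the two instances is hypothetical), distinct instance_module_id values let several object detection instances run on the same camera and be addressed independently:

    #include <sl/c_api/zed_interface.h>   // assumed header path
    #include <string.h>

    // Two independent detection instances, told apart only by instance_module_id.
    // The same id is later used to retrieve or disable that specific instance
    // (exact wrapper signatures depend on the SDK version).
    void configure_two_instances(struct SL_ObjectDetectionParameters* od_first,
                                 struct SL_ObjectDetectionParameters* od_second)
    {
        memset(od_first, 0, sizeof(*od_first));
        memset(od_second, 0, sizeof(*od_second));

        od_first->instance_module_id = 0;    // e.g. a built-in multi-class box model
        od_second->instance_module_id = 1;   // e.g. a custom YOLO-like model

        // ... set detection_model and the remaining fields for each instance,
        // then enable each one with its own parameter struct.
    }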
bool SL_ObjectDetectionParameters::enable_tracking
Whether the object detection system includes object tracking capabilities across a sequence of images.
Default: true
bool SL_ObjectDetectionParameters::enable_segmentation
Whether the object masks will be computed.
Default: false
enum SL_OBJECT_DETECTION_MODEL SL_ObjectDetectionParameters::detection_model
SL_OBJECT_DETECTION_MODEL to use.
char* SL_ObjectDetectionParameters::fused_objects_group_name
In a multi-camera setup, specifies which group this model belongs to.
In a multi-camera setup, multiple cameras can be used to detect objects, and multiple detectors with a similar output layout can see the same object. Fusion will therefore fuse together the outputs received from multiple detectors only if they are part of the same fused_objects_group_name.
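A hedged sketch of how the group name might be set on each sender in a Fusion setup; the group string is illustrative and the enum value name is an assumption following the C API naming scheme.

    #include <sl/c_api/zed_interface.h>   // assumed header path
    #include <string.h>

    // Every camera whose detector should be fused together uses the same group name.
    void configure_for_fusion(struct SL_ObjectDetectionParameters* od_params)
    {
        memset(od_params, 0, sizeof(*od_params));
        od_params->detection_model = SL_OBJECT_DETECTION_MODEL_CUSTOM_YOLOLIKE_BOX_OBJECTS; // assumed enum value
        od_params->enable_tracking = true;

        // Illustrative group name: detections from every detector configured with this
        // exact string are fused together by Fusion; detectors with a different string
        // (or a different output layout) are kept separate.
        od_params->fused_objects_group_name = "vehicle_detectors";
    }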
char* SL_ObjectDetectionParameters::custom_onnx_file
Path to the YOLO-like ONNX file for custom object detection run in the ZED SDK.
When detection_model is OBJECT_DETECTION_MODEL::CUSTOM_YOLOLIKE_BOX_OBJECTS, an ONNX model must be passed so that the ZED SDK can optimize it for your GPU and run inference on it.
The resulting optimized model will be saved for re-use in the future.
The ZED SDK uses the custom_onnx_file string along with your GPU specs to decide whether to use the cached optimized model or to optimize the passed ONNX model. If you want to use a different model (i.e. an ONNX file with different weights), you must use a different custom_onnx_file string or delete the cached optimized model in <ZED Installation path>/resources.
struct SL_Resolution SL_ObjectDetectionParameters::custom_onnx_dynamic_input_shape
Resolution of the input tensor for the YOLO-like ONNX file for custom object detection run in the ZED SDK. This resolution defines the input tensor size for dynamic-shape ONNX models only. The batch and channel dimensions are handled automatically; the input is assumed to be color images, as with the default YOLO models.
Default: squared images 512x512 (the input tensor will be 1x3x512x512)
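A sketch of a custom-model configuration, assuming the enum value SL_OBJECT_DETECTION_MODEL_CUSTOM_YOLOLIKE_BOX_OBJECTS and the width/height fields of SL_Resolution; the ONNX path is illustrative.

    #include <sl/c_api/zed_interface.h>   // assumed header path
    #include <string.h>

    void configure_custom_yolo(struct SL_ObjectDetectionParameters* od_params)
    {
        memset(od_params, 0, sizeof(*od_params));
        od_params->detection_model = SL_OBJECT_DETECTION_MODEL_CUSTOM_YOLOLIKE_BOX_OBJECTS; // assumed enum value

        // Illustrative path; the optimized engine produced on the first run is cached
        // under <ZED Installation path>/resources, keyed on this string and the GPU specs.
        od_params->custom_onnx_file = "/path/to/custom_yolo.onnx";

        // Only relevant for dynamic-shape ONNX models: the input tensor becomes 1x3x512x512.
        od_params->custom_onnx_dynamic_input_shape.width = 512;   // assumed SL_Resolution field names
        od_params->custom_onnx_dynamic_input_shape.height = 512;
    }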
float SL_ObjectDetectionParameters::max_range
Upper depth range for detections.
Default: -1.f (value set in SL_InitParameters.depth_maximum_distance)
struct SL_BatchParameters SL_ObjectDetectionParameters::batch_parameters
Batching system parameters.
The batching system (introduced in 3.5) performs short-term re-identification with deep learning and trajectory filtering.
SL_BatchParameters.enable must be set to true to use this feature (it is disabled by default).
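A minimal sketch for turning the batching system on. Only the enable flag is taken from this page; the tracking requirement and any further SL_BatchParameters tuning fields are assumptions to verify against your SDK version.

    #include <sl/c_api/zed_interface.h>   // assumed header path
    #include <stdbool.h>

    // Enable batched trajectory output on top of an already filled parameter struct.
    void enable_batching(struct SL_ObjectDetectionParameters* od_params)
    {
        od_params->enable_tracking = true;          // re-identification assumes tracked objects
        od_params->batch_parameters.enable = true;  // disabled by default
        // SL_BatchParameters also exposes tuning fields in recent SDKs (e.g. retention
        // time / latency); leave them at their defaults unless re-identification needs tuning.
    }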
enum SL_OBJECT_FILTERING_MODE SL_ObjectDetectionParameters::filtering_mode
Filtering mode that should be applied to raw detections.
Default: SL_OBJECT_FILTERING_MODE_NMS_3D (same behavior as previous ZED SDK version)
float SL_ObjectDetectionParameters::prediction_timeout_s
Duration during which the ZED SDK keeps predicting an object that is no longer detected, before switching its state to SL_OBJECT_TRACKING_STATE_SEARCHING.
It prevents the object state from jittering when there is a short misdetection.
The user can define their own prediction duration.
Default: 0.2f
bool SL_ObjectDetectionParameters::allow_reduced_precision_inference
Whether to allow inference to run at a lower precision to improve runtime and memory usage.
It might increase the initial optimization time, may involve downloading calibration data or a calibration cache, and can slightly reduce accuracy.