# Adding Object Detection in ROS2

The ROS2 wrapper offers full support for the Object Detection module of the ZED SDK. The Object Detection module is available only with a ZED2 or a ZED2i camera.

The Object Detection module can be configured to use one of six different detection models:

- MULTI_CLASS_BOX: bounding boxes of objects of seven different classes (persons, vehicles, bags, animals, electronic devices, fruits and vegetables). Real-time performance even on Jetson or low-end GPU cards.
- MULTI_CLASS_BOX_MEDIUM: bounding boxes of objects of seven different classes (persons, vehicles, bags, animals, electronic devices, fruits and vegetables). A compromise between accuracy and speed.
- MULTI_CLASS_BOX_ACCURATE: bounding boxes of objects of seven different classes (persons, vehicles, bags, animals, electronic devices, fruits and vegetables). State-of-the-art accuracy, requires a powerful GPU.
- HUMAN_BODY_FAST: keypoint-based, specific to the human skeleton. Real-time performance even on Jetson or low-end GPU cards.
- HUMAN_BODY_MEDIUM: keypoint-based, specific to the human skeleton. A compromise between accuracy and speed.
- HUMAN_BODY_ACCURATE: keypoint-based, specific to the human skeleton. State-of-the-art accuracy, requires a powerful GPU.

The detection results are published using a custom message of type zed_interfaces/ObjectsStamped, defined in the package zed_interfaces.

## Enable Object Detection

Object Detection can be started automatically when the ZED Wrapper node starts by setting the parameter object_detection.od_enabled to true in the file zed2.yaml or zed2i.yaml.

It is also possible to start the Object Detection processing manually by calling the service ~/enable_obj_det with parameter True. In both cases, the Object Detection processing can be stopped by calling the service ~/enable_obj_det with parameter False. See the services documentation for more info. A minimal client example is shown below, after the RVIZ2 parameters list.

## Object Detection results in RVIZ2

To visualize the results of the Object Detection processing in RVIZ2, the new ZedOdDisplay plugin is required. The plugin is available in the zed-ros2-examples GitHub repository and can be installed following the online instructions.

Note: the source code of the plugin is a valid example of how to process the data of topics of type zed_interfaces/ObjectsStamped.

Parameters:

- Topic: selects the object detection topic to visualize from the list of available topics in the combo box.
- Depth: the depth of the incoming message queue.
- History Policy: sets the QoS history policy. Keep Last is suggested for performance and compatibility.
- Reliability Policy: sets the QoS reliability policy. Best Effort is suggested for performance and compatibility.
- Durability Policy: sets the QoS durability policy. Volatile is suggested for compatibility.
- Transparency: the transparency level of the structures composing the detected objects.
- Show Skeleton: enables/disables the visualization of the skeleton of the detected persons (if available).
- Show Labels: enables/disables the visualization of the object labels.
- Show Bounding Boxes: enables/disables the visualization of the bounding boxes of the detected objects.
- Link Size: the size of the bounding box corner lines and skeleton link lines.
- Joint Radius: the radius of the spheres placed on the corners of the bounding boxes and on the skeleton joint points.
- Label Scale: the scale of the object labels.
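The following is a minimal sketch of an rclpy client that toggles Object Detection at runtime through the ~/enable_obj_det service. It assumes the service uses the standard std_srvs/srv/SetBool interface and that the wrapper node runs with the default /zed2/zed_node name; adapt both to your launch configuration.

```python
# Minimal sketch: toggling Object Detection via the ~/enable_obj_det service.
# The fully qualified service name and the std_srvs/SetBool interface are
# assumptions based on the default zed2 launch configuration.
import sys

import rclpy
from rclpy.node import Node
from std_srvs.srv import SetBool


def main():
    rclpy.init()
    node = Node('od_switch')

    # '/zed2/zed_node' is the assumed default node name; adapt if needed
    client = node.create_client(SetBool, '/zed2/zed_node/enable_obj_det')
    if not client.wait_for_service(timeout_sec=5.0):
        node.get_logger().error('enable_obj_det service not available')
        sys.exit(1)

    # Pass True to start Object Detection, False to stop it
    request = SetBool.Request()
    request.data = (len(sys.argv) < 2) or (sys.argv[1].lower() == 'true')

    future = client.call_async(request)
    rclpy.spin_until_future_complete(node, future)
    result = future.result()
    node.get_logger().info(
        f'success: {result.success}, message: "{result.message}"')

    node.destroy_node()
    rclpy.shutdown()


if __name__ == '__main__':
    main()
```

Run it with `true` or `false` as the first argument; without arguments it enables the module.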
## Detected Objects message

The zed_interfaces/ObjectsStamped message is defined as:

```
# Standard Header
std_msgs/Header header

# Array of `object_stamped` topics
zed_interfaces/Object[] objects
```

where zed_interfaces/Object is defined as:

```
# Object label
string label

# Object label ID
int16 label_id

# Object sublabel
string sublabel

# Object confidence level (1-99)
float32 confidence

# Object centroid position
float32[3] position

# Position covariance
float32[6] position_covariance

# Object velocity
float32[3] velocity

# Tracking state
# 0 -> OFF (object not valid)
# 1 -> OK
# 2 -> SEARCHING (occlusion occurred, trajectory is estimated)
int8 tracking_state

# Action state
# 0 -> IDLE
# 2 -> MOVING
int8 action_state

# 2D Bounding box projected to Camera image
zed_interfaces/BoundingBox2Di bounding_box_2d

# 3D Bounding box in world frame
zed_interfaces/BoundingBox3D bounding_box_3d

# 3D dimensions (width, height, length)
float32[3] dimensions_3d

# Is skeleton available?
bool skeleton_available

# 2D Bounding box projected to Camera image of the person head
zed_interfaces/BoundingBox2Df head_bounding_box_2d

# 3D Bounding box in world frame of the person head
zed_interfaces/BoundingBox3D head_bounding_box_3d

# 3D position of the centroid of the person head
float32[3] head_position

# 2D Person skeleton projected to Camera image
zed_interfaces/Skeleton2D skeleton_2d

# 3D Person skeleton in world frame
zed_interfaces/Skeleton3D skeleton_3d
```

All the sub-messages are defined as follows:

zed_interfaces/BoundingBox2Df:

```
# 0 ------- 1
# |         |
# |         |
# |         |
# 3 ------- 2

zed_interfaces/Keypoint2Df[4] corners
```

zed_interfaces/BoundingBox2Di:

```
# 0 ------- 1
# |         |
# |         |
# |         |
# 3 ------- 2

zed_interfaces/Keypoint2Di[4] corners
```

zed_interfaces/BoundingBox3D:

```
#      1 ------- 2
#     /.        /|
#    0 ------- 3 |
#    | .       | |
#    | 5.......| 6
#    |.        |/
#    4 ------- 7

zed_interfaces/Keypoint3D[8] corners
```

zed_interfaces/Keypoint2Df:

```
float32[2] kp
```

zed_interfaces/Keypoint2Di:

```
uint32[2] kp
```

zed_interfaces/Keypoint3D:

```
float32[3] kp
```

zed_interfaces/Skeleton2D:

```
# Skeleton joints indices
#        16-14   15-17
#             \ /
#              0
#              |
#       2------1------5
#       |     | |     |
#       |     | |     |
#       3     | |     6
#       |     | |     |
#       |     | |     |
#       4     8 11    7
#             |  |
#             |  |
#             |  |
#             9  12
#             |  |
#             |  |
#             |  |
#            10  13

zed_interfaces/Keypoint2Df[18] keypoints
```

zed_interfaces/Skeleton3D:

```
# Skeleton joints indices
#        16-14   15-17
#             \ /
#              0
#              |
#       2------1------5
#       |     | |     |
#       |     | |     |
#       3     | |     6
#       |     | |     |
#       |     | |     |
#       4     8 11    7
#             |  |
#             |  |
#             |  |
#             9  12
#             |  |
#             |  |
#             |  |
#            10  13

zed_interfaces/Keypoint3D[18] keypoints
```
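The ZedOdDisplay plugin mentioned above is the reference example for consuming these messages. As a quicker starting point, here is a minimal sketch of an rclpy subscriber that prints the main fields of each detected object. The topic name /zed2/zed_node/obj_det/objects and the Best Effort sensor-data QoS profile are assumptions based on the default ZED2 configuration; check the actual topic name with `ros2 topic list` and adapt as needed.

```python
# Minimal sketch of a subscriber that consumes zed_interfaces/ObjectsStamped
# messages. The topic name below is an assumption based on the default zed2
# launch files.
import rclpy
from rclpy.node import Node
from rclpy.qos import qos_profile_sensor_data

from zed_interfaces.msg import ObjectsStamped

TRACKING_STATES = {0: 'OFF', 1: 'OK', 2: 'SEARCHING'}


class ObjectListener(Node):
    def __init__(self):
        super().__init__('object_listener')
        # Keep Last / Best Effort / Volatile QoS, matching the settings
        # suggested for the RVIZ2 plugin above
        self.create_subscription(
            ObjectsStamped,
            '/zed2/zed_node/obj_det/objects',  # assumed default topic name
            self.objects_callback,
            qos_profile_sensor_data)

    def objects_callback(self, msg):
        self.get_logger().info(f'Received {len(msg.objects)} object(s)')
        for obj in msg.objects:
            x, y, z = obj.position
            state = TRACKING_STATES.get(obj.tracking_state, 'UNKNOWN')
            self.get_logger().info(
                f'  {obj.label} (id {obj.label_id}) '
                f'conf: {obj.confidence:.1f} '
                f'pos: [{x:.2f}, {y:.2f}, {z:.2f}] '
                f'tracking: {state}')


def main():
    rclpy.init()
    node = ObjectListener()
    rclpy.spin(node)
    node.destroy_node()
    rclpy.shutdown()


if __name__ == '__main__':
    main()
```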