This class serves as the primary interface between the camera and the various features provided by the SDK. More...
Functions | |
float | GetRequestedCameraFPS () |
Desired FPS from the ZED camera. More... | |
Camera (int id) | |
Default constructor. More... | |
ERROR_CODE | Open (ref InitParameters initParameters) |
Opens the ZED camera from the provided InitParameters. More... | |
void | Close () |
Closes the camera. More... | |
sl.ERROR_CODE | Grab (ref sl.RuntimeParameters runtimeParameters) |
This method will grab the latest images from the camera, rectify them, and compute the measurements based on the RuntimeParameters provided (depth, point cloud, tracking, etc.). More... | |
ERROR_CODE | StartPublishing (ref CommunicationParameters commParams) |
Set this camera as a data provider for the Fusion module. More... | |
ERROR_CODE | StopPublishing () |
Sets this camera back to a normal camera (no longer providing data to the Fusion module). More... | |
sl.INPUT_TYPE | GetInputType () |
Return the sl.INPUT_TYPE currently used. More... | |
CalibrationParameters | GetCalibrationParameters (bool raw=false) |
Return the calibration parameters of the camera. More... | |
sl.MODEL | GetCameraModel () |
Gets the camera model (sl.MODEL). More... | |
int | GetCameraFirmwareVersion () |
Gets the camera firmware version. More... | |
int | GetSensorsFirmwareVersion () |
Gets the sensors firmware version. More... | |
int | GetZEDSerialNumber () |
Gets the camera's serial number. More... | |
float | GetFOV () |
Returns the camera's vertical field of view in radians. More... | |
void | UpdateSelfCalibration () |
Perform a new self calibration process. More... | |
uint | GetFrameDroppedCount () |
Gets the number of frames dropped since Grab() was called for the first time. More... | |
sl.ERROR_CODE | SaveCurrentImageInFile (sl.VIEW view, String filename) |
Save current image (specified by view) in a file defined by filename. More... | |
sl.ERROR_CODE | SaveCurrentDepthInFile (SIDE side, String filename) |
Save the current depth in a file defined by filename. More... | |
sl.ERROR_CODE | SaveCurrentPointCloudInFile (SIDE side, String filename) |
Save the current point cloud in a file defined by filename. More... | |
Depth Sensing | |
sl.ERROR_CODE | RetrieveMeasure (sl.Mat mat, sl.MEASURE measure, sl.MEM mem=sl.MEM.CPU, sl.Resolution resolution=new sl.Resolution()) |
Retrieves a measure texture from the ZED SDK and loads it into a sl.Mat. More... | |
int | GetConfidenceThreshold () |
Gets the current confidence threshold value for the disparity map (and by extension the depth map). More... | |
float | GetDepthMinRangeValue () |
Gets the closest measurable distance by the camera, according to the camera type and depth map parameters. More... | |
sl.ERROR_CODE | GetCurrentMixMaxDepth (ref float min, ref float max) |
Gets the current range of perceived depth. More... | |
float | GetDepthMaxRangeValue () |
Returns the current maximum distance of depth/disparity estimation. More... | |
sl.ERROR_CODE | EnablePositionalTracking (ref PositionalTrackingParameters positionalTrackingParameters) |
Initializes and starts the positional tracking processes. More... | |
void | DisablePositionalTracking (string path="") |
Disables the positional tracking. More... | |
bool | IsPositionalTrackingEnabled () |
Tells if the tracking module is enabled. More... | |
ERROR_CODE | SaveAreaMap (string areaFilePath) |
Saves the current area learning file. More... | |
AREA_EXPORT_STATE | GetAreaExportState () |
Returns the state of the spatial memory export process. More... | |
sl.ERROR_CODE | ResetPositionalTracking (Quaternion rotation, Vector3 translation) |
Resets the tracking, and re-initializes the position with the given translation vector and rotation quaternion. More... | |
SensorsConfiguration | GetSensorsConfiguration () |
Returns the sensor configuration of the camera. More... | |
CameraInformation | GetCameraInformation (Resolution resolution=new Resolution()) |
Returns the CameraInformation associated with the camera being used. More... | |
POSITIONAL_TRACKING_STATE | GetPosition (ref Quaternion rotation, ref Vector3 position, REFERENCE_FRAME referenceType=REFERENCE_FRAME.WORLD) |
Gets the position of the camera and the current state of the camera Tracking. More... | |
PositionalTrackingStatus | GetPositionalTrackingStatus () |
Returns the current status of positional tracking module. More... | |
POSITIONAL_TRACKING_STATE | GetPosition (ref Quaternion rotation, ref Vector3 translation, ref Quaternion targetQuaternion, ref Vector3 targetTranslation, REFERENCE_FRAME referenceFrame=REFERENCE_FRAME.WORLD) |
Gets the current position of the camera and state of the tracking, with an optional offset to the tracking frame. More... | |
POSITIONAL_TRACKING_STATE | GetPosition (ref Quaternion rotation, ref Vector3 translation, TRACKING_FRAME trackingFrame, REFERENCE_FRAME referenceFrame=REFERENCE_FRAME.WORLD) |
Gets the current position of the camera and state of the tracking, with a defined tracking frame. More... | |
POSITIONAL_TRACKING_STATE | GetPosition (ref Pose pose, REFERENCE_FRAME referenceType=REFERENCE_FRAME.WORLD) |
Gets the current position of the camera and state of the tracking, filling a Pose struct useful for AR pass-through. More... | |
ERROR_CODE | SetIMUOrientationPrior (ref Quaternion rotation) |
Sets a prior to the IMU orientation (not for MODEL.ZED). More... | |
ERROR_CODE | GetIMUOrientation (ref Quaternion rotation, TIME_REFERENCE referenceTime=TIME_REFERENCE.IMAGE) |
Gets the rotation given by the IMU. More... | |
ERROR_CODE | GetSensorsData (ref SensorsData data, TIME_REFERENCE referenceTime=TIME_REFERENCE.IMAGE) |
Retrieves the SensorsData (IMU, magnetometer, barometer) at a specific time reference. More... | |
ERROR_CODE | SetRegionOfInterest (sl.Mat roiMask, bool[] module) |
Defines a region of interest to focus on for all the SDK, discarding other parts. More... | |
ERROR_CODE | GetRegionOfInterest (sl.Mat roiMask, sl.Resolution resolution=new sl.Resolution(), MODULE module=MODULE.ALL) |
Get the previously set or computed region of interest. More... | |
ERROR_CODE | StartRegionOfInterestAutoDetection (RegionOfInterestParameters roiParams) |
Starts the automatic detection of a region of interest to focus on for all the SDK, discarding other parts. The detection is based on the general motion of the camera combined with the motion in the scene. The camera must be moving during this process; an internal motion detector based on the Positional Tracking module is used, and a few hundred frames of motion are required to compute the mask. More... | |
REGION_OF_INTEREST_AUTO_DETECTION_STATE | GetRegionOfInterestAutoDetectionStatus () |
Returns the status of the automatic region of interest detection, which is enabled with StartRegionOfInterestAutoDetection(). More... | |
Recording | |
ERROR_CODE | EnableRecording (string videoFileName, SVO_COMPRESSION_MODE compressionMode=SVO_COMPRESSION_MODE.H264_BASED, uint bitrate=0, int targetFPS=0, bool transcode=false) |
Creates an SVO file to be filled by EnableRecording() and DisableRecording(). More... | |
ERROR_CODE | EnableRecording (RecordingParameters recordingParameters) |
Creates an SVO file to be filled by EnableRecording() and DisableRecording(). More... | |
sl.RecordingStatus | GetRecordingStatus () |
Get the recording information. More... | |
sl.RecordingParameters | GetRecordingParameters () |
Returns the RecordingParameters used. More... | |
void | PauseRecording (bool status) |
Pauses or resumes the recording. More... | |
void | DisableRecording () |
Disables the recording initiated by EnableRecording() and closes the generated file. More... | |
ERROR_CODE | IngestDataIntoSVO (ref SVOData data) |
Ingest SVOData in a SVO file. More... | |
ERROR_CODE | RetrieveSVOData (string key, ref List< SVOData > data, ulong tsBegin, ulong tsEnd) |
Retrieves SVO data from the SVO file at the given channel key and in the given timestamp range. More... | |
List< string > | GetSVODataKeys () |
Gets the external channels that can be retrieved from the SVO file. More... | |
Streaming | |
ERROR_CODE | EnableStreaming (STREAMING_CODEC codec=STREAMING_CODEC.H264_BASED, uint bitrate=8000, ushort port=30000, int gopSize=-1, bool adaptativeBitrate=false, int chunkSize=32768, int targetFPS=0) |
Creates a streaming pipeline. More... | |
ERROR_CODE | EnableStreaming (ref StreamingParameters streamingParameters) |
Creates a streaming pipeline. More... | |
bool | IsStreamingEnabled () |
Tells if the streaming is running. More... | |
void | DisableStreaming () |
Disables the streaming initiated by EnableStreaming(). More... | |
sl.StreamingParameters | GetStreamingParameters () |
Returns the StreamingParameters used. More... | |
Static Functions | |
static void | UnloadPlugin () |
static void | UnloadInstance (int id) |
static string | GenerateUniqueID () |
Generate a UUID like unique id to help identify and track AI detections. More... | |
static string | GetSDKVersion () |
Gets the version of the currently installed ZED SDK. More... | |
static sl.ERROR_CODE | ConvertCoordinateSystem (ref Quaternion rotation, ref Vector3 translation, sl.COORDINATE_SYSTEM coordinateSystemSrc, sl.COORDINATE_SYSTEM coordinateSystemDest) |
Change the coordinate system of a transform matrix. More... | |
static void | GetSDKVersion (ref int major, ref int minor, ref int patch) |
Gets the version of the currently installed ZED SDK. More... | |
static sl.DeviceProperties[] | GetDeviceList (out int nbDevices) |
List all the connected devices with their associated information. More... | |
static sl.StreamingProperties[] | GetStreamingDeviceList (out int nbDevices) |
List all the streaming devices with their associated information. More... | |
static sl.ERROR_CODE | Reboot (int serialNumber, bool fullReboot=true) |
Performs a hardware reset of the ZED 2 and the ZED 2i. More... | |
Attributes | |
int | CameraID = 0 |
Camera ID (for multiple cameras) More... | |
Properties | |
int | ImageWidth [get] |
Width of the images returned by the camera in pixels. More... | |
int | ImageHeight [get] |
Height of the images returned by the camera in pixels. More... | |
float | Baseline [get] |
Baseline of the camera (distance between the cameras). More... | |
float | HorizontalFieldOfView [get] |
Current horizontal field of view in degrees of the camera. More... | |
float | VerticalFieldOfView [get] |
Current vertical field of view in degrees of the camera. More... | |
SensorsConfiguration | SensorsConfiguration [get] |
CalibrationParameters | CalibrationParametersRaw [get] |
Stereo parameters for current ZED camera prior to rectification (distorted). More... | |
CalibrationParameters | CalibrationParametersRectified [get] |
Stereo parameters for current ZED camera after rectification (undistorted). More... | |
sl.MODEL | CameraModel [get] |
Model of the camera. More... | |
bool | IsCameraReady [get] |
Whether the camera has been successfully initialized. More... | |
Video | |
sl.ERROR_CODE | RetrieveImage (sl.Mat mat, sl.VIEW view, sl.MEM mem=sl.MEM.CPU, sl.Resolution resolution=new sl.Resolution()) |
Retrieves an image texture from the ZED SDK and loads it into a sl.Mat. More... | |
InitParameters | GetInitParameters () |
Returns the InitParameters associated with the Camera object. More... | |
RuntimeParameters | GetRuntimeParameters () |
Returns the RuntimeParameters used. More... | |
PositionalTrackingParameters | GetPositionalTrackingParameters () |
Returns the PositionalTrackingParameters used. More... | |
bool | IsCameraSettingSupported (VIDEO_SETTINGS setting) |
Test if the video setting is supported by the camera. More... | |
void | SetCameraSettings (VIDEO_SETTINGS settings, int minvalue, int maxvalue) |
Sets the min and max range values of the requested camera setting (used for settings with a range). More... | |
sl.ERROR_CODE | GetCameraSettings (VIDEO_SETTINGS settings, ref int minvalue, ref int maxvalue) |
Returns the current range of the requested camera setting. More... | |
void | SetCameraSettings (VIDEO_SETTINGS settings, int value) |
Sets the value of the requested camera setting (gain, brightness, hue, exposure, etc.). More... | |
int | GetCameraSettings (VIDEO_SETTINGS settings) |
Returns the current value of the requested camera setting (gain, brightness, hue, exposure, etc.). More... | |
ERROR_CODE | SetCameraSettings (VIDEO_SETTINGS settings, SIDE side, Rect roi, bool reset=false) |
Overloaded method for VIDEO_SETTINGS.AEC_AGC_ROI which takes a Rect as parameter. More... | |
ERROR_CODE | GetCameraSettings (VIDEO_SETTINGS settings, SIDE side, ref Rect roi) |
Overloaded method for VIDEO_SETTINGS.AEC_AGC_ROI which takes a Rect as parameter. More... | |
void | ResetCameraSettings () |
Reset camera settings to default. More... | |
ulong | GetCameraTimeStamp () |
Gets the timestamp at the time the latest grabbed frame was extracted from the USB stream. More... | |
ulong | GetCurrentTimeStamp () |
Gets the current timestamp at the time the method is called. More... | |
int | GetSVOPosition () |
Returns the current playback position in the SVO file. More... | |
int | GetSVOPositionAtTimestamp (ulong timestamp) |
Retrieves the frame index within the SVO file corresponding to the provided timestamp. More... | |
int | GetSVONumberOfFrames () |
Returns the number of frames in the SVO file. More... | |
void | SetSVOPosition (int frame) |
Sets the position of the SVO file currently being read to a desired frame. More... | |
float | GetCameraFPS () |
Returns the current camera FPS. More... | |
bool | IsOpened () |
Reports if the camera has been successfully opened. More... | |
static sl.Resolution | GetResolution (RESOLUTION resolution) |
Gets the corresponding sl.Resolution from an sl.RESOLUTION. More... | |
Spatial Mapping | |
sl.ERROR_CODE | EnableSpatialMapping (ref SpatialMappingParameters spatialMappingParameters) |
Initializes and begins the spatial mapping processes. More... | |
sl.ERROR_CODE | EnableSpatialMapping (SPATIAL_MAP_TYPE type=SPATIAL_MAP_TYPE.MESH, MAPPING_RESOLUTION mappingResolution=MAPPING_RESOLUTION.MEDIUM, MAPPING_RANGE mappingRange=MAPPING_RANGE.MEDIUM, bool saveTexture=false) |
Initializes and begins the spatial mapping processes. More... | |
SpatialMappingParameters | GetSpatialMappingParameters () |
Returns the SpatialMappingParameters used. More... | |
void | DisableSpatialMapping () |
Disables the spatial mapping process. More... | |
sl.ERROR_CODE | UpdateMesh (int[] nbVerticesInSubmeshes, int[] nbTrianglesInSubmeshes, ref int nbUpdatedSubmesh, int[] updatedIndices, ref int nbVertices, ref int nbTriangles, int nbSubmeshMax) |
Updates the internal version of the mesh and returns the sizes of the meshes. More... | |
sl.ERROR_CODE | UpdateMesh (ref Mesh mesh) |
Updates the internal version of the mesh and returns the sizes of the meshes. More... | |
sl.ERROR_CODE | RetrieveMesh (Vector3[] vertices, int[] triangles, byte[] colors, int nbSubmeshMax, Vector2[] uvs, IntPtr textures) |
Retrieves all chunks of the current generated mesh. More... | |
sl.ERROR_CODE | RetrieveMesh (ref Mesh mesh) |
Retrieves all chunks of the current generated mesh. More... | |
sl.ERROR_CODE | RetrieveChunks (ref Mesh mesh) |
Retrieve all chunks of the generated mesh. More... | |
sl.ERROR_CODE | RetrieveSpatialMap (ref Mesh mesh) |
Retrieves the current generated mesh. More... | |
sl.ERROR_CODE | RetrieveSpatialMap (ref FusedPointCloud fusedPointCloud) |
Retrieves the current fused point cloud. More... | |
sl.ERROR_CODE | UpdateFusedPointCloud (ref int nbVertices) |
Updates the fused point cloud (if spatial map type was FUSED_POINT_CLOUD). More... | |
sl.ERROR_CODE | RetrieveFusedPointCloud (Vector4[] vertices) |
Retrieves all points of the fused point cloud. More... | |
ERROR_CODE | ExtractWholeSpatialMap () |
Extracts the current spatial map from the spatial mapping process. More... | |
void | RequestSpatialMap () |
Starts the mesh generation process in a thread that does not block the spatial mapping process. More... | |
void | PauseSpatialMapping (bool status) |
Pauses or resumes the spatial mapping processes. More... | |
sl.ERROR_CODE | GetMeshRequestStatus () |
Returns the mesh generation status. More... | |
bool | SaveMesh (string filename, MESH_FILE_FORMAT format) |
Saves the scanned mesh in a specific file format. More... | |
bool | SavePointCloud (string filename, MESH_FILE_FORMAT format) |
Saves the scanned point cloud in a specific file format. More... | |
bool | LoadMesh (string filename, int[] nbVerticesInSubmeshes, int[] nbTrianglesInSubmeshes, ref int nbSubmeshes, int[] updatedIndices, ref int nbVertices, ref int nbTriangles, int nbSubmeshMax, int[] textureSize=null) |
Loads a saved mesh file. More... | |
bool | FilterMesh (MESH_FILTER filterParameters, int[] nbVerticesInSubmeshes, int[] nbTrianglesInSubmeshes, ref int nbSubmeshes, int[] updatedIndices, ref int nbVertices, ref int nbTriangles, int nbSubmeshMax) |
Filters a mesh to remove triangles while still preserving its overall shape (though less accurate). More... | |
bool | FilterMesh (MESH_FILTER filterParameters, ref Mesh mesh) |
Filters a mesh to remove triangles while still preserving its overall shape (though less accurate). More... | |
bool | ApplyTexture (int[] nbVerticesInSubmeshes, int[] nbTrianglesInSubmeshes, ref int nbSubmeshes, int[] updatedIndices, ref int nbVertices, ref int nbTriangles, int[] textureSize, int nbSubmeshMax) |
Applies the scanned texture onto the internal scanned mesh. More... | |
bool | ApplyTexture (ref Mesh mesh) |
Applies the texture on a mesh. More... | |
SPATIAL_MAPPING_STATE | GetSpatialMappingState () |
Returns the current spatial mapping state. More... | |
Vector3 | GetGravityEstimate () |
Gets a vector pointing toward the direction of gravity. More... | |
void | MergeChunks (int numberFaces, int[] nbVerticesInSubmeshes, int[] nbTrianglesInSubmeshes, ref int nbSubmeshes, int[] updatedIndices, ref int nbVertices, ref int nbTriangles, int nbSubmesh) |
Consolidates the chunks from a scan. More... | |
Plane Detection | |
sl.ERROR_CODE | findFloorPlane (ref PlaneData plane, out float playerHeight, Quaternion priorQuat, Vector3 priorTrans) |
Detect the floor plane of the scene. More... | |
sl.ERROR_CODE | FindFloorPlane (ref PlaneData plane, out float playerHeight, Quaternion priorQuat, Vector3 priorTrans) |
Detect the floor plane of the scene. More... | |
int | convertFloorPlaneToMesh (Vector3[] vertices, int[] triangles, out int numVertices, out int numTriangles) |
Using data from a detected floor plane, updates supplied vertex and triangle arrays with data needed to make a mesh that represents it. More... | |
int | ConvertFloorPlaneToMesh (Vector3[] vertices, int[] triangles, out int numVertices, out int numTriangles) |
Using data from a detected floor plane, updates supplied vertex and triangle arrays with data needed to make a mesh that represents it. More... | |
sl.ERROR_CODE | findPlaneAtHit (ref PlaneData plane, Vector2 coord, ref PlaneDetectionParameters planeDetectionParameters) |
Checks the plane at the given left image coordinates. More... | |
sl.ERROR_CODE | FindPlaneAtHit (ref PlaneData plane, Vector2 coord, ref PlaneDetectionParameters planeDetectionParameters) |
Checks the plane at the given left image coordinates. More... | |
int | convertHitPlaneToMesh (Vector3[] vertices, int[] triangles, out int numVertices, out int numTriangles) |
Using data from a detected hit plane, updates supplied vertex and triangle arrays with data needed to make a mesh that represents it. More... | |
int | ConvertHitPlaneToMesh (Vector3[] vertices, int[] triangles, out int numVertices, out int numTriangles) |
Using data from a detected hit plane, updates supplied vertex and triangle arrays with data needed to make a mesh that represents it. More... | |
static float | ConvertRangePreset (MAPPING_RANGE rangePreset) |
Updates the range to match the specified preset. More... | |
static float | ConvertResolutionPreset (MAPPING_RESOLUTION resolutionPreset) |
Updates the resolution to match the specified preset. More... | |
Object Detection | |
sl.ERROR_CODE | EnableObjectDetection (ref ObjectDetectionParameters od_params) |
Initializes and starts object detection module. More... | |
sl.ERROR_CODE | EnableBodyTracking (ref BodyTrackingParameters bt_params) |
Initializes and starts body tracking module. More... | |
void | DisableObjectDetection (uint instanceID=0, bool disableAllInstance=false) |
Disable object detection module and release the resources. More... | |
void | DisableBodyTracking (uint instanceID=0, bool disableAllInstance=false) |
Disable body tracking module and release the resources. More... | |
sl.ObjectDetectionParameters | GetObjectDetectionParameters () |
Returns the ObjectDetectionParameters used. More... | |
sl.BodyTrackingParameters | GetBodyTrackingParameters () |
Returns the BodyTrackingParameters used. More... | |
sl.ERROR_CODE | IngestCustomBoxObjects (List< CustomBoxObjectData > objects_in) |
Feed the 3D Object tracking method with your own 2D bounding boxes from your own detection algorithm. More... | |
sl.ERROR_CODE | RetrieveObjects (ref Objects objs, ref ObjectDetectionRuntimeParameters od_params, uint instanceID=0) |
Retrieve objects detected by the object detection module. More... | |
sl.ERROR_CODE | RetrieveBodies (ref Bodies bodies, ref BodyTrackingRuntimeParameters bt_params, uint instanceID=0) |
Retrieve bodies detected by the body tracking module. More... | |
sl.ERROR_CODE | UpdateObjectsBatch (out int nbBatches) |
Update the batch trajectories and retrieve the number of batches. More... | |
sl.ERROR_CODE | GetObjectsBatch (int batch_index, ref ObjectsBatch objectsBatch) |
Retrieve a batch of objects. More... | |
static AI_Model_status | CheckAIModelStatus (AI_MODELS model, int gpu_id=0) |
Check if a corresponding optimized engine is found for the requested model based on your rig configuration. More... | |
static sl.ERROR_CODE | OptimizeAIModel (AI_MODELS model, int gpu_id=0) |
Optimizes the requested model, downloading it if it is not present on the host. More... | |
This class serves as the primary interface between the camera and the various features provided by the SDK.
It enables seamless integration and access to a wide array of capabilities, including video streaming, depth sensing, object tracking, mapping, and much more.
|
inline |
Desired FPS from the ZED camera.
This is the maximum FPS for the camera's current resolution unless a lower setting was specified in Open().
Maximum values are bound by the camera's output, not system performance.
|
inlinestatic |
|
inlinestatic |
|
inlinestatic |
Generate a UUID like unique id to help identify and track AI detections.
|
inline |
Opens the ZED camera from the provided InitParameters.
The method will also check the hardware requirements and run a self-calibration.
initParameters | A structure containing all the initial parameters. Default: a preset of InitParameters. |
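A minimal open/close sketch, assuming the default InitParameters preset and the sl.ERROR_CODE.SUCCESS value used across the SDK:

// Open the first detected camera (CameraID 0) with default parameters.
sl.Camera zed = new sl.Camera(0);
sl.InitParameters initParameters = new sl.InitParameters();
sl.ERROR_CODE err = zed.Open(ref initParameters);
if (err != sl.ERROR_CODE.SUCCESS)
{
    // Opening failed (camera not detected, USB bandwidth, incompatible settings, etc.).
    return;
}
// ... Grab() / Retrieve*() calls go here ...
zed.Close();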
|
inline |
Closes the camera.
Once the camera is closed, you need to create a new camera instance to use it again.
|
inline |
This method will grab the latest images from the camera, rectify them, and compute the measurements based on the RuntimeParameters provided (depth, point cloud, tracking, etc.).
The grabbing method is typically called in the main loop in a separate thread.
runtimeParameters | A structure containing all the runtime parameters. Default: a preset of RuntimeParameters. |
|
inline |
Set this camera as a data provider for the Fusion module.
commParams | The communication parameters used to publish this camera's data to the Fusion module. |
|
inline |
Sets this camera back to a normal camera (no longer providing data to the Fusion module).
|
inline |
Return the sl.INPUT_TYPE currently used.
|
inline |
Retrieves an image texture from the ZED SDK and loads it into a sl.Mat.
Use this to get an individual texture from the last grabbed frame in a human-viewable format. Image textures are intended for display, such as the direct RGB image from the camera or a greyscale rendering of the depth. However, they lose accuracy if used to read measurements like depth or confidence; use measure textures for that.
mat | sl.Mat to fill with the new texture. |
view | Image type (left RGB, right depth map, etc.) |
mem | Whether the image should be on CPU or GPU memory. |
resolution | Resolution of the texture. |
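A short sketch grabbing a frame and retrieving the left view, assuming an opened camera zed as in the Open() sketch above; the sl.Mat allocation call (Create) and the MAT_TYPE value are assumptions not listed on this page:

sl.RuntimeParameters runtimeParameters = new sl.RuntimeParameters();
sl.Mat leftImage = new sl.Mat();
// Assumed allocation: a CPU image buffer matching the camera output resolution.
leftImage.Create(new sl.Resolution((uint)zed.ImageWidth, (uint)zed.ImageHeight), sl.MAT_TYPE.MAT_8U_C4);

if (zed.Grab(ref runtimeParameters) == sl.ERROR_CODE.SUCCESS)
{
    // Human-viewable left RGB view of the last grabbed frame.
    zed.RetrieveImage(leftImage, sl.VIEW.LEFT, sl.MEM.CPU);
}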
|
inline |
Returns the InitParameters associated with the Camera object.
It corresponds to the structure given as argument to Open() method.
|
inline |
Returns the RuntimeParameters used.
It corresponds to the structure given as argument to the Grab() method.
|
inline |
Returns the PositionalTrackingParameters used.
It corresponds to the structure given as argument to the EnablePositionalTracking() method.
|
inlinestatic |
Gets the corresponding sl.Resolution from an sl.RESOLUTION.
resolution | The wanted sl.RESOLUTION. |
|
inline |
Test if the video setting is supported by the camera.
setting | The video setting to test. |
|
inline |
Sets the min and max range values of the requested camera setting (used for settings with a range).
settings | The setting to be set. |
minvalue | The min value of the range to set. |
maxvalue | The max value of the range to set. |
Referenced by Camera.ResetCameraSettings().
|
inline |
Returns the current range of the requested camera setting.
settings | Setting to be retrieved (setting with range value). |
minvalue | Will be set to the value of the lower bound of the range of the setting. |
maxvalue | Will be set to the value of the higher bound of the range of the setting. |
|
inline |
Sets the value of the requested camera setting (gain, brightness, hue, exposure, etc.).
settings | The setting to be set. |
value | The value to set. Default: auto mode |
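For example, a sketch that sets a manual exposure and reads it back; the VIDEO_SETTINGS.EXPOSURE member is assumed from the SDK's usual enum:

if (zed.IsCameraSettingSupported(sl.VIDEO_SETTINGS.EXPOSURE))
{
    zed.SetCameraSettings(sl.VIDEO_SETTINGS.EXPOSURE, 50);            // manual value, camera-dependent range
    int exposure = zed.GetCameraSettings(sl.VIDEO_SETTINGS.EXPOSURE); // read the applied value back
}
zed.ResetCameraSettings();                                            // restore defaults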
|
inline |
Returns the current value of the requested camera setting (gain, brightness, hue, exposure, etc.).
settings | Setting to be retrieved (brightness, contrast, gain, exposure, etc.). |
|
inline |
Overloaded method for VIDEO_SETTINGS.AEC_AGC_ROI which takes a Rect as parameter.
settings | Must be set at VIDEO_SETTINGS.AEC_AGC_ROI, otherwise the method will have no impact. |
side | sl.SIDE on which to be applied for AEC/AGC computation. |
roi | Rect that defines the target to be applied for AEC/AGC computation. Must be given according to camera resolution. |
reset | Cancel the manual ROI and reset it to the full image. |
|
inline |
Overloaded method for VIDEO_SETTINGS.AEC_AGC_ROI which takes a Rect as parameter.
settings | Must be set at VIDEO_SETTINGS.AEC_AGC_ROI, otherwise the method will have no impact. |
side | sl.SIDE on which to get the ROI from. |
roi | Roi that will be filled. |
|
inline |
Reset camera settings to default.
|
inline |
Gets the timestamp at the time the latest grabbed frame was extracted from the USB stream.
This is the closest timestamp you can get from when the image was taken.
|
inline |
Gets the current timestamp at the time the method is called.
Can be compared to the camera timestamp for synchronization, since they have the same reference (the computer's start time).
|
inline |
Returns the current playback position in the SVO file.
|
inline |
Retrieves the frame index within the SVO file corresponding to the provided timestamp.
timestamp | The target timestamp for which the frame index is to be determined. |
|
inline |
Returns the number of frames in the SVO file.
|
inline |
Sets the position of the SVO file currently being read to a desired frame.
frame | Index of the desired frame to be decoded. |
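A playback sketch, assuming the camera was opened on an SVO input and runtimeParameters as above:

// Jump to the middle of the SVO file and decode from there.
int nbFrames = zed.GetSVONumberOfFrames();
if (nbFrames > 0)
{
    zed.SetSVOPosition(nbFrames / 2);
    zed.Grab(ref runtimeParameters);   // decodes the frame at the new position
}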
|
inline |
Returns the current camera FPS.
This is limited primarily by resolution but can also be lower if a lower desired FPS was set in Open(), or because of USB connection/bandwidth issues.
|
inline |
Reports if the camera has been successfully opened.
|
inline |
Return the calibration parameters of the camera.
raw | Whether to return the raw or rectified calibration parameters. |
Referenced by Camera.GetFOV(), and Camera.Open().
|
inline |
Gets the camera model (sl.MODEL).
Referenced by Camera.Open().
|
inline |
Gets the camera firmware version.
|
inline |
Gets the sensors firmware version.
|
inline |
Gets the camera's serial number.
|
inline |
Returns the camera's vertical field of view in radians.
|
inline |
Perform a new self calibration process.
In some cases, due to temperature changes or strong vibrations, the stereo calibration becomes less accurate.
Use this method to update the self-calibration data and get more reliable depth values.
|
inline |
|
inlinestatic |
Gets the version of the currently installed ZED SDK.
|
inlinestatic |
Change the coordinate system of a transform matrix.
rotation | [In, Out] : rotation to transform |
translation | [In, Out] : translation to transform |
coordinateSystemSrc | The current coordinate system of the translation/rotation |
coordinateSystemDest | The destination coordinate system for the translation/rotation. |
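A sketch converting a pose between coordinate systems, assuming the plugin's Quaternion/Vector3 types; the COORDINATE_SYSTEM members used here are assumed from the SDK's usual enum:

Quaternion rot = Quaternion.identity;
Vector3 trans = new Vector3(0.0f, 0.0f, 1.0f);
// Convert a pose expressed in the IMAGE frame into a left-handed, Y-up frame.
sl.Camera.ConvertCoordinateSystem(ref rot, ref trans, sl.COORDINATE_SYSTEM.IMAGE, sl.COORDINATE_SYSTEM.LEFT_HANDED_Y_UP);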
|
inlinestatic |
Gets the version of the currently installed ZED SDK.
|
inlinestatic |
List all the connected devices with their associated information.
This method lists all the cameras available and provides their serial number, models and other information.
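A quick enumeration sketch:

int nbDevices = 0;
sl.DeviceProperties[] devices = sl.Camera.GetDeviceList(out nbDevices);
for (int i = 0; i < nbDevices; i++)
{
    sl.DeviceProperties props = devices[i];   // serial number, model, etc. (see sl.DeviceProperties)
}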
|
inlinestatic |
List all the streaming devices with their associated information.
This method lists all the cameras available and provides their serial number, models and other information.
|
inlinestatic |
Performs a hardware reset of the ZED 2 and the ZED 2i.
serialNumber | Serial number of the camera to reset, or 0 to reset the first camera detected. |
fullReboot | Perform a full reboot (sensors and video modules) if true, otherwise only the video module will be rebooted. |
|
inline |
Retrieves a measure texture from the ZED SDK and loads it into a sl.Mat.
Use this to get an individual texture from the last grabbed frame with measurements in every pixel - such as a depth map, confidence map, etc. Measure textures are not human-viewable but don't lose accuracy, unlike image textures.
mat | sl.Mat to fill with the new texture. |
measure | Measure type (depth, confidence, xyz, etc.). |
mem | Whether the image should be on CPU or GPU memory. |
resolution | Resolution of the texture. |
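A sketch retrieving the depth map of the last grabbed frame, assuming an opened camera and runtimeParameters as above; MEASURE.DEPTH and the 32-bit float MAT_TYPE are assumed enum members:

sl.Mat depth = new sl.Mat();
depth.Create(new sl.Resolution((uint)zed.ImageWidth, (uint)zed.ImageHeight), sl.MAT_TYPE.MAT_32F_C1);

if (zed.Grab(ref runtimeParameters) == sl.ERROR_CODE.SUCCESS)
{
    // Full-accuracy depth values, in the unit selected at Open().
    zed.RetrieveMeasure(depth, sl.MEASURE.DEPTH, sl.MEM.CPU);
}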
|
inline |
Gets the current confidence threshold value for the disparity map (and by extension the depth map).
Values below the given threshold are removed from the depth map.
|
inline |
Gets the closest measurable distance by the camera, according to the camera type and depth map parameters.
|
inline |
Gets the current range of perceived depth.
min | Minimum depth detected (in selected sl.UNIT). |
max | Maximum depth detected (in selected sl.UNIT). |
|
inline |
Returns the current maximum distance of depth/disparity estimation.
|
inline |
Initializes and starts the positional tracking processes.
positionalTrackingParameters | A structure containing all the specific parameters for the positional tracking. Default: a preset of PositionalTrackingParameters. |
|
inline |
Disables the positional tracking.
path | If set, saves the spatial memory into an '.area' file. Default: (empty). path is the name and path of the file, e.g. "path/to/file/myArea1.area". |
|
inline |
Tells if the tracking module is enabled.
|
inline |
Saves the current area learning file.
The file will contain spatial memory data generated by the tracking.
areaFilePath | Path of an '.area' file to save the spatial memory database in. |
|
inline |
Returns the state of the spatial memory export process.
|
inline |
Resets the tracking, and re-initializes the position with the given translation vector and rotation quaternion.
rotation | Rotation to set the positional tracking to. |
translation | Translation to set the positional tracking to. |
|
inline |
Returns the sensor configuration of the camera.
|
inline |
Returns the CameraInformation associated with the camera being used.
To ensure accurate calibration, it is possible to specify a custom resolution as a parameter when obtaining scaled information, as calibration parameters are resolution-dependent.
When reading an SVO file, the parameters will correspond to the camera used for recording.
Referenced by Camera.Open().
|
inline |
Gets the position of the camera and the current state of the camera Tracking.
rotation | Quaternion filled with the current rotation of the camera depending on its reference frame. |
position | Vector filled with the current position of the camera depending on its reference frame. |
referenceType | Reference frame for setting the rotation/position. REFERENCE_FRAME.CAMERA gives movement relative to the last pose. REFERENCE_FRAME.WORLD gives cumulative movements since tracking started. |
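A tracking-loop sketch, assuming the plugin's Quaternion/Vector3 types and an opened camera with runtimeParameters as above:

sl.PositionalTrackingParameters trackingParams = new sl.PositionalTrackingParameters();
if (zed.EnablePositionalTracking(ref trackingParams) == sl.ERROR_CODE.SUCCESS)
{
    Quaternion rotation = Quaternion.identity;
    Vector3 position = Vector3.zero;
    if (zed.Grab(ref runtimeParameters) == sl.ERROR_CODE.SUCCESS)
    {
        // Cumulative camera pose since tracking started.
        sl.POSITIONAL_TRACKING_STATE state = zed.GetPosition(ref rotation, ref position, sl.REFERENCE_FRAME.WORLD);
    }
    zed.DisablePositionalTracking();
}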
|
inline |
Returns the current status of positional tracking module.
|
inline |
Gets the current position of the camera and state of the tracking, with an optional offset to the tracking frame.
rotation | Quaternion filled with the current rotation of the camera depending on its reference frame. |
translation | Vector filled with the current position of the camera depending on its reference frame. |
targetQuaternion | Rotational offset applied to the tracking frame. |
targetTranslation | Positional offset applied to the tracking frame. |
referenceFrame | Reference frame for setting the rotation/position. REFERENCE_FRAME.CAMERA gives movement relative to the last pose. REFERENCE_FRAME.WORLD gives cumulative movements since tracking started. |
|
inline |
Gets the current position of the camera and state of the tracking, with a defined tracking frame.
A tracking frame defines what part of the camera is its center for tracking purposes. See sl.TRACKING_FRAME.
rotation | Quaternion filled with the current rotation of the camera depending on its reference frame. |
translation | Vector filled with the current position of the camera depending on its reference frame. |
trackingFrame | Center of the camera for tracking purposes (left eye, center, right eye). |
referenceFrame | Reference frame for setting the rotation/position. REFERENCE_FRAME.CAMERA gives movement relative to the last pose. REFERENCE_FRAME.WORLD gives cumulative movements since tracking started. |
|
inline |
Gets the current position of the camera and state of the tracking, filling a Pose struct useful for AR pass-through.
pose | Current pose. |
referenceType | Reference frame for setting the rotation/position. REFERENCE_FRAME.CAMERA gives movement relative to the last pose. REFERENCE_FRAME.WORLD gives cumulative movements since tracking started. |
|
inline |
Sets a prior to the IMU orientation (not for MODEL.ZED).
The prior must come from an external IMU, such as the HMD orientation, and should be in a time frame as close as possible to the camera's.
rotation | Prior rotation. |
|
inline |
Gets the rotation given by the IMU.
rotation | Rotation from the IMU. |
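For instance (a sketch, assuming the plugin's Quaternion type):

Quaternion imuRotation = Quaternion.identity;
if (zed.GetIMUOrientation(ref imuRotation, sl.TIME_REFERENCE.IMAGE) == sl.ERROR_CODE.SUCCESS)
{
    // imuRotation now holds the IMU orientation closest in time to the last grabbed image.
}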
|
inline |
Retrieves the SensorsData (IMU, magnetometer, barometer) at a specific time reference.
data | The SensorsData variable to store the data. |
referenceTime | Defines the time reference from which you want the data to be expressed. Default: TIME_REFERENCE.IMAGE. |
|
inline |
Defines a region of interest to focus on for all the SDK, discarding other parts.
roiMask | The Mat defining the requested region of interest, pixels lower than 127 will be discarded from all modules: depth, positional tracking, etc. If empty, set all pixels as valid. The mask can be either at lower or higher resolution than the current images. |
module | List of SDK modules to apply the ROI to (all by default). Must be of size sl.MODULE.LAST. |
|
inline |
Get the previously set or computed region of interest.
roiMask | The Mat to be filled with the region of interest mask. |
resolution | The optional resolution of the returned mask. |
module | Specifies the module from which the ROI is to be obtained. |
|
inline |
Start the auto detection of a region of interest to focus on for all the SDK, discarding other parts. This detection is based on the general motion of the camera combined with the motion in the scene. The camera must move for this process, an internal motion detector is used, based on the Positional Tracking module. It requires a few hundreds frames of motion to compute the mask.
roiParams |
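A polling sketch; the REGION_OF_INTEREST_AUTO_DETECTION_STATE.RUNNING member used below is an assumption not listed on this page:

sl.RegionOfInterestParameters roiParams = new sl.RegionOfInterestParameters();
if (zed.StartRegionOfInterestAutoDetection(roiParams) == sl.ERROR_CODE.SUCCESS)
{
    // Keep grabbing while moving the camera so the motion detector can accumulate data.
    while (zed.GetRegionOfInterestAutoDetectionStatus() == sl.REGION_OF_INTEREST_AUTO_DETECTION_STATE.RUNNING)
    {
        zed.Grab(ref runtimeParameters);
    }
    sl.Mat roiMask = new sl.Mat();
    zed.GetRegionOfInterest(roiMask, new sl.Resolution(), sl.MODULE.ALL);
}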
|
inline |
Return the status of the automatic Region of Interest Detection. The automatic Region of Interest Detection is enabled by using StartRegionOfInterestAutoDetection
|
inline |
Initializes and begins the spatial mapping processes.
spatialMappingParameters | Spatial mapping parameters. |
|
inline |
Initializes and begins the spatial mapping processes.
type | Type of spatial map to generate (mesh or fused point cloud). |
mappingResolution | Resolution preset of the spatial mapping. |
mappingRange | Maximum scanning range preset. |
saveTexture | True to scan surface textures in addition to geometry. |
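A condensed mapping sketch using the preset-based overload, assuming an opened camera with runtimeParameters as above; the Mesh construction and the MESH_FILE_FORMAT.OBJ member are assumptions:

if (zed.EnableSpatialMapping(sl.SPATIAL_MAP_TYPE.MESH, sl.MAPPING_RESOLUTION.MEDIUM, sl.MAPPING_RANGE.MEDIUM, false) == sl.ERROR_CODE.SUCCESS)
{
    for (int i = 0; i < 500; i++)          // scan while moving the camera
        zed.Grab(ref runtimeParameters);

    zed.RequestSpatialMap();               // non-blocking mesh generation
    while (zed.GetMeshRequestStatus() != sl.ERROR_CODE.SUCCESS)
        zed.Grab(ref runtimeParameters);   // keep grabbing until the mesh is ready

    sl.Mesh mesh = new sl.Mesh();
    zed.RetrieveSpatialMap(ref mesh);
    zed.SaveMesh("scan.obj", sl.MESH_FILE_FORMAT.OBJ);
    zed.DisableSpatialMapping();
}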
|
inline |
Returns the SpatialMappingParameters used.
It corresponds to the structure given as argument to the EnableSpatialMapping() method.
|
inline |
Disables the spatial mapping process.
|
inline |
Updates the internal version of the mesh and returns the sizes of the meshes.
nbVerticesInSubmeshes | Array of the number of vertices in each sub-mesh. |
nbTrianglesInSubmeshes | Array of the number of triangles in each sub-mesh. |
nbUpdatedSubmesh | Number of updated sub-meshes. |
updatedIndices | List of all sub-meshes updated since the last update. |
nbVertices | Total number of updated vertices in all sub-meshes. |
nbTriangles | Total number of updated triangles in all sub-meshes. |
nbSubmeshMax | Maximum number of sub-meshes that can be handled. |
Referenced by Camera.RetrieveSpatialMap(), and Camera.UpdateMesh().
|
inline |
Updates the internal version of the mesh and returns the sizes of the meshes.
mesh | The mesh to be filled with the generated spatial map. |
|
inline |
Retrieves all chunks of the current generated mesh.
Vertex and triangle arrays must be at least of the sizes returned by UpdateMesh (nbVertices and nbTriangles).
vertices | Vertices of the mesh. |
triangles | Triangles, formatted as the index of each triangle's three vertices in the vertices array. |
colors | (b, g, r) colors of the vertices. |
nbSubmeshMax | Maximum number of sub-meshes that can be handled. |
Referenced by Camera.RetrieveMesh(), and Camera.RetrieveSpatialMap().
|
inline |
Retrieves all chunks of the current generated mesh.
Vertex and triangle arrays must be at least of the sizes returned by UpdateMesh (nbVertices and nbTriangles).
mesh | The mesh to be filled with the generated spatial map. |
|
inline |
Retrieve all chunks of the generated mesh.
mesh | The mesh to be filled with the generated spatial map. |
|
inline |
Retrieves the current generated mesh.
mesh | The mesh to be filled with the generated spatial map. |
|
inline |
Retrieves the current fused point cloud.
fusedPointCloud | The Fused Point Cloud to be filled with the generated spatial map. |
|
inline |
Updates the fused point cloud (if spatial map type was FUSED_POINT_CLOUD).
Referenced by Camera.RetrieveSpatialMap().
|
inline |
Retrieves all points of the fused point cloud.
Vertex arrays must be at least of the sizes returned by UpdateFusedPointCloud().
vertices | Points of the fused point cloud. |
Referenced by Camera.RetrieveSpatialMap().
|
inline |
Extracts the current spatial map from the spatial mapping process.
If the object to be filled already contains a previous version of the mesh, only changes will be updated, optimizing performance.
This is a blocking method. You should either call it in a thread or at the end of the mapping process.
|
inline |
Starts the mesh generation process in a thread that does not block the spatial mapping process.
ZEDSpatialMappingHelper calls this each time it has finished applying the last mesh update.
|
inline |
Pauses or resumes the spatial mapping processes.
status | If true, the integration is paused. If false, the spatial mapping is resumed. |
|
inline |
Returns the mesh generation status.
Useful for knowing when to update and retrieve the mesh.
|
inline |
Saves the scanned mesh in a specific file format.
filename | Path and filename of the mesh. |
format | File format (extension). Can be .obj, .ply or .bin. |
|
inline |
Saves the scanned point cloud in a specific file format.
filename | Path and filename of the point cloud. |
format | File format (extension). Can be .obj, .ply or .bin. |
|
inline |
Loads a saved mesh file.
ZEDSpatialMapping then configures itself as if the loaded mesh was just scanned.
filename | Path and filename of the mesh. Should include the extension (.obj, .ply or .bin). |
nbVerticesInSubmeshes | Array of the number of vertices in each sub-mesh. |
nbTrianglesInSubmeshes | Array of the number of triangles in each sub-mesh. |
nbSubmeshes | Number of sub-meshes. |
updatedIndices | List of all sub-meshes updated since the last update. |
nbVertices | Total number of updated vertices in all sub-meshes. |
nbTriangles | Total number of updated triangles in all sub-meshes. |
nbSubmeshMax | Maximum number of sub-meshes that can be handled. |
textureSize | Array containing the sizes of all the textures (width, height) if applicable. |
|
inline |
Filters a mesh to remove triangles while still preserving its overall shape (though less accurate).
filterParameters | Filter level. Higher settings remove more triangles. |
nbVerticesInSubmeshes | Array of the number of vertices in each sub-mesh. |
nbTrianglesInSubmeshes | Array of the number of triangles in each sub-mesh. |
nbSubmeshes | Number of sub-meshes. |
updatedIndices | List of all sub-meshes updated since the last update. |
nbVertices | Total number of updated vertices in all sub-meshes. |
nbTriangles | Total number of updated triangles in all sub-meshes. |
nbSubmeshMax | Maximum number of sub-meshes that can be handled. |
|
inline |
Filters a mesh to remove triangles while still preserving its overall shape (though less accurate).
filterParameters | Filter level. Higher settings remove more triangles. |
mesh | The mesh to be filled with the generated spatial map. |
|
inline |
Applies the scanned texture onto the internal scanned mesh.
nbVerticesInSubmeshes | Array of the number of vertices in each sub-mesh. |
nbTrianglesInSubmeshes | Array of the number of triangles in each sub-mesh. |
nbSubmeshes | Number of sub-meshes. |
updatedIndices | List of all sub-meshes updated since the last update. |
nbVertices | Total number of updated vertices in all sub-meshes. |
nbTriangles | Total number of updated triangles in all sub-meshes. |
textureSize | Array containing the sizes of all the textures (width, height). |
nbSubmeshMax | Maximum number of sub-meshes that can be handled. |
|
inline |
Applies the texture on a mesh.
mesh | Mesh with a texture to apply. |
|
inline |
Returns the current spatial mapping state.
As the spatial mapping runs asynchronously, this method allows you to get reported errors or status info.
|
inline |
Gets a vector pointing toward the direction of gravity.
This is estimated from a 3D scan of the environment, and as such, a scan must be started/finished for this value to be calculated. If using a camera other than MODEL.ZED, this is not required thanks to its IMU.
|
inline |
Consolidates the chunks from a scan.
This is used to turn lots of small meshes (which are efficient for the scanning process) into several large meshes (which are more convenient to work with).
numberFaces | |
nbVerticesInSubmeshes | Array of the number of vertices in each sub-mesh. |
nbTrianglesInSubmeshes | Array of the number of triangles in each sub-mesh. |
nbSubmeshes | Number of sub-meshes. |
updatedIndices | List of all sub-meshes updated since the last update. |
nbVertices | Total number of updated vertices in all sub-meshes. |
nbTriangles | Total number of updated triangles in all sub-meshes. |
|
inline |
Detect the floor plane of the scene.
Use ZEDPlaneDetectionManager.DetectFloorPlane for a higher-level version that turns planes into GameObjects.
plane | Data on the detected plane. |
playerHeight | Height of the camera from the newly-detected floor. |
priorQuat | Prior rotation. |
priorTrans | Prior position. |
|
inline |
Detect the floor plane of the scene.
Use ZEDPlaneDetectionManager.DetectFloorPlane for a higher-level version that turns planes into GameObjects.
plane | Data on the detected plane. |
playerHeight | Height of the camera from the newly-detected floor. |
priorQuat | Prior rotation. |
priorTrans | Prior position. |
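A sketch, assuming the plugin's Quaternion/Vector3 types; when no external hint (such as an HMD pose) is available, the priors can be left at identity/zero:

sl.PlaneData floorPlane = new sl.PlaneData();
float playerHeight;
if (zed.FindFloorPlane(ref floorPlane, out playerHeight, Quaternion.identity, Vector3.zero) == sl.ERROR_CODE.SUCCESS)
{
    // playerHeight is the camera height above the detected floor.
}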
|
inline |
Using data from a detected floor plane, updates supplied vertex and triangle arrays with data needed to make a mesh that represents it.
These arrays are updated directly from the wrapper.
vertices | Array to be filled with mesh vertices. |
triangles | Array to be filled with mesh triangles, stored as indexes of each triangle's points. |
numVertices | Total vertices in the mesh. |
numTriangles | Total triangle indexes (3x number of triangles). |
|
inline |
Using data from a detected floor plane, updates supplied vertex and triangle arrays with data needed to make a mesh that represents it.
These arrays are updated directly from the wrapper.
vertices | Array to be filled with mesh vertices. |
triangles | Array to be filled with mesh triangles, stored as indexes of each triangle's points. |
numVertices | Total vertices in the mesh. |
numTriangles | Total triangle indexes (3x number of triangles). |
|
inline |
Checks the plane at the given left image coordinates.
plane | The detected plane if the method succeeded. |
coord | The image coordinate. The coordinate must be taken from the full-size image. |
parameters | A structure containing all the specific parameters for the plane detection. Default: a preset of PlaneDetectionParameters. |
|
inline |
Checks the plane at the given left image coordinates.
plane | The detected plane if the method succeeded. |
coord | The image coordinate. The coordinate must be taken from the full-size image. |
parameters | A structure containing all the specific parameters for the plane detection. Default: a preset of PlaneDetectionParameters. |
|
inline |
Using data from a detected hit plane, updates supplied vertex and triangle arrays with data needed to make a mesh that represents it.
These arrays are updated directly from the wrapper.
vertices | Array to be filled with mesh vertices. |
triangles | Array to be filled with mesh triangles, stored as indexes of each triangle's points. |
numVertices | Total vertices in the mesh. |
numTriangles | Total triangle indexes (3x number of triangles). |
|
inline |
Using data from a detected hit plane, updates supplied vertex and triangle arrays with data needed to make a mesh that represents it.
These arrays are updated directly from the wrapper.
vertices | Array to be filled with mesh vertices. |
triangles | Array to be filled with mesh triangles, stored as indexes of each triangle's points. |
numVertices | Total vertices in the mesh. |
numTriangles | Total triangle indexes (3x number of triangles). |
|
inlinestatic |
Updates the range to match the specified preset.
Referenced by Camera.EnableSpatialMapping().
|
inlinestatic |
Updates the resolution to match the specified preset.
Referenced by Camera.EnableSpatialMapping().
|
inline |
Creates an SVO file to be filled by EnableRecording() and DisableRecording().
videoFileName | Filename of the recording. Whether it ends with .svo or .avi defines its file type. |
compressionMode | The compression to use for recording. |
bitrate | Override default bitrate with a custom bitrate (Kbits/s). |
targetFPS | Use another FPS than the camera FPS. Must respect camera_fps % targetFPS == 0. |
transcode | If input is in streaming mode, dump directly into SVO file (transcode=false) or decode/encode (transcode=true). |
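A short recording sketch, assuming an opened camera with runtimeParameters as above; frames are written only while Grab() is being called:

if (zed.EnableRecording("myRecording.svo", sl.SVO_COMPRESSION_MODE.H264_BASED) == sl.ERROR_CODE.SUCCESS)
{
    for (int i = 0; i < 200; i++)
    {
        // Each successful Grab() appends a frame to the SVO file.
        zed.Grab(ref runtimeParameters);
    }
    zed.DisableRecording();
}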
|
inline |
Creates an SVO file to be filled by EnableRecording() and DisableRecording().
recordingParameters | A structure containing all the specific parameters for the recording. Default: a preset of RecordingParameters. |
|
inline |
Get the recording information.
|
inline |
Returns the RecordingParameters used.
It corresponds to the structure given as argument to the EnableRecording() method.
|
inline |
Pauses or resumes the recording.
status | If true, the recording is paused. If false, the recording is resumed. |
|
inline |
Disables the recording initiated by EnableRecording() and closes the generated file.
|
inline |
Ingest SVOData in a SVO file.
data | Data to ingest into the SVO file. |
Note: The method works only if the camera is recording.
|
inline |
Retrieves SVO data from the SVO file at the given channel key and in the given timestamp range.
key | The key of the SVOData that is going to be retrieved. |
data | The list to be filled with the retrieved SVOData objects. |
tsBegin | The beginning of the range. |
tsEnd | The end of the range. |
|
inline |
Gets the external channels that can be retrieved from the SVO file.
|
inline |
Creates a streaming pipeline.
codec | Defines the codec used for streaming. |
bitrate | Defines the streaming bitrate in Kbits/s. |
port | Defines the port used for streaming. |
gopSize | Defines the gop size in number of frames. |
adaptativeBitrate | Enable/Disable adaptive bitrate. |
chunkSize | Defines a single chunk size. |
targetFPS | Defines the target framerate for the streaming output. |
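A minimal sender-side sketch using the defaults shown above, assuming an opened camera with runtimeParameters as in the earlier sketches:

if (zed.EnableStreaming(sl.STREAMING_CODEC.H264_BASED, 8000, 30000) == sl.ERROR_CODE.SUCCESS)
{
    for (int i = 0; i < 1000 && zed.IsStreamingEnabled(); i++)
    {
        // Every grabbed frame is encoded and sent to connected receivers.
        zed.Grab(ref runtimeParameters);
    }
    zed.DisableStreaming();
}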
|
inline |
Creates a streaming pipeline.
streamingParameters | A structure containing all the specific parameters for the streaming. Default: a preset of StreamingParameters. |
|
inline |
Tells if the streaming is running.
|
inline |
Disables the streaming initiated by EnableStreaming().
|
inline |
Returns the StreamingParameters used.
It corresponds to the structure given as argument to the EnableStreaming() method.
|
inline |
Save current image (specified by view) in a file defined by filename.
Supported formats are JPEG and PNG.
Filename must end with either .jpg or .png.
view | sl.VIEW of the image to save. |
filename | Filename must end with .jpg or .png. |
|
inline |
Save the current depth in a file defined by filename.
Supported formats are PNG, PFM and PGM.
side | sl.SIDE on which to save the depth. |
filename | Filename must end with .png, .pfm or .pgm. |
|
inline |
Save the current point cloud in a file defined by filename.
Supported formats are PLY, VTK, XYZ and PCD.
side | sl.SIDE on which to save the point cloud. |
filename | Filename must end with .ply, .xyz, .vtk or .pcd. |
|
inlinestatic |
Check if a corresponding optimized engine is found for the requested model based on your rig configuration.
model | AI model to check. |
gpu_id | ID of the gpu. |
|
inlinestatic |
Optimizes the requested model, downloading it if it is not present on the host.
model | AI model to optimize. |
gpu_id | ID of the gpu to optimize on. |
|
inline |
Initializes and starts object detection module.
od_params | A structure containing all the specific parameters for the object detection. Default: a preset of ObjectDetectionParameters. |
|
inline |
Initializes and starts body tracking module.
bt_params | A structure containing all the specific parameters for the body tracking. Default: a preset of BodyTrackingParameters. |
|
inline |
Disable object detection module and release the resources.
instanceID | Id of the object detection instance. Used when multiple instances of the object detection module are enabled at the same time. |
disableAllInstance | Should disable all instances of the object detection module or just instanceID. |
|
inline |
Disable body tracking module and release the resources.
instanceID | Id of the body tracking module instance. Used when multiple instances of the body tracking module are enabled at the same time. |
disableAllInstance | Should disable all instances of the body tracking module or just instanceID. |
|
inline |
Returns the ObjectDetectionParameters used.
It corresponds to the structure given as argument to the EnableObjectDetection() method.
|
inline |
Returns the BodyTrackingParameters used.
It corresponds to the structure given as argument to the EnableBodyTracking() method.
|
inline |
Feed the 3D Object tracking method with your own 2D bounding boxes from your own detection algorithm.
objects_in | List of CustomBoxObjectData to feed the object detection. |
instanceID | Id of the object detection instance. Used when multiple instances of the object detection module are enabled at the same time. |
|
inline |
Retrieve objects detected by the object detection module.
objs | Retrieved objects. |
od_params | Object detection runtime parameters |
instanceID | Id of the object detection instance. Used when multiple instances of the object detection module are enabled at the same time. |
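A detection-loop sketch, assuming an opened camera with runtimeParameters as above; positional tracking is generally enabled beforehand, and the parameter structures are used with their default presets:

sl.ObjectDetectionParameters odParams = new sl.ObjectDetectionParameters();
if (zed.EnableObjectDetection(ref odParams) == sl.ERROR_CODE.SUCCESS)
{
    sl.Objects objects = new sl.Objects();
    sl.ObjectDetectionRuntimeParameters odRuntimeParams = new sl.ObjectDetectionRuntimeParameters();

    if (zed.Grab(ref runtimeParameters) == sl.ERROR_CODE.SUCCESS)
    {
        // Objects detected in the last grabbed frame, for instance 0 of the module.
        zed.RetrieveObjects(ref objects, ref odRuntimeParams, 0);
    }
    zed.DisableObjectDetection();
}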
|
inline |
Retrieve bodies detected by the body tracking module.
bodies | Retrieved bodies. |
bt_params | Body tracking runtime parameters |
instanceID | Id of the body tracking instance. Used when multiple instances of the body tracking module are enabled at the same time. |
|
inline |
Update the batch trajectories and retrieve the number of batches.
nbBatches | Numbers of batches. |
|
inline |
Retrieve a batch of objects.
batch_index | Index of the batch retrieved. |
objectsBatch | Trajectory that will be filled by the batching queue process. |
int CameraID = 0 |
Camera ID (for multiple cameras)
Referenced by Camera.ApplyTexture(), Camera.Camera(), Camera.Close(), Camera.convertFloorPlaneToMesh(), Camera.ConvertFloorPlaneToMesh(), Camera.convertHitPlaneToMesh(), Camera.ConvertHitPlaneToMesh(), Camera.DisableBodyTracking(), Camera.DisableObjectDetection(), Camera.DisablePositionalTracking(), Camera.DisableRecording(), Camera.DisableSpatialMapping(), Camera.DisableStreaming(), Camera.EnableBodyTracking(), Camera.EnableObjectDetection(), Camera.EnablePositionalTracking(), Camera.EnableRecording(), Camera.EnableSpatialMapping(), Camera.EnableStreaming(), Camera.ExtractWholeSpatialMap(), Camera.FilterMesh(), Camera.findFloorPlane(), Camera.FindFloorPlane(), Camera.findPlaneAtHit(), Camera.FindPlaneAtHit(), Camera.GetAreaExportState(), Camera.GetBodyTrackingParameters(), Camera.GetCalibrationParameters(), Camera.GetCameraFirmwareVersion(), Camera.GetCameraFPS(), Camera.GetCameraInformation(), Camera.GetCameraModel(), Camera.GetCameraSettings(), Camera.GetCameraTimeStamp(), Camera.GetConfidenceThreshold(), Camera.GetCurrentMixMaxDepth(), Camera.GetCurrentTimeStamp(), Camera.GetDepthMaxRangeValue(), Camera.GetDepthMinRangeValue(), Camera.GetFrameDroppedCount(), Camera.GetGravityEstimate(), Camera.GetIMUOrientation(), Camera.GetInitParameters(), Camera.GetInputType(), Camera.GetMeshRequestStatus(), Camera.GetObjectDetectionParameters(), Camera.GetObjectsBatch(), Camera.GetPosition(), Camera.GetPositionalTrackingParameters(), Camera.GetPositionalTrackingStatus(), Camera.GetRecordingParameters(), Camera.GetRecordingStatus(), Camera.GetRegionOfInterest(), Camera.GetRegionOfInterestAutoDetectionStatus(), Camera.GetRuntimeParameters(), Camera.GetSensorsConfiguration(), Camera.GetSensorsData(), Camera.GetSensorsFirmwareVersion(), Camera.GetSpatialMappingParameters(), Camera.GetSpatialMappingState(), Camera.GetStreamingParameters(), Camera.GetSVODataKeys(), Camera.GetSVONumberOfFrames(), Camera.GetSVOPosition(), Camera.GetSVOPositionAtTimestamp(), Camera.GetZEDSerialNumber(), Camera.Grab(), Camera.IngestCustomBoxObjects(), Camera.IngestDataIntoSVO(), Camera.IsCameraSettingSupported(), Camera.IsOpened(), Camera.IsPositionalTrackingEnabled(), Camera.IsStreamingEnabled(), Camera.LoadMesh(), Camera.MergeChunks(), Camera.Open(), Camera.PauseRecording(), Camera.PauseSpatialMapping(), Camera.RequestSpatialMap(), Camera.ResetPositionalTracking(), Camera.RetrieveBodies(), Camera.RetrieveChunks(), Camera.RetrieveFusedPointCloud(), Camera.RetrieveImage(), Camera.RetrieveMeasure(), Camera.RetrieveMesh(), Camera.RetrieveObjects(), Camera.RetrieveSVOData(), Camera.SaveAreaMap(), Camera.SaveCurrentDepthInFile(), Camera.SaveCurrentImageInFile(), Camera.SaveCurrentPointCloudInFile(), Camera.SaveMesh(), Camera.SavePointCloud(), Camera.SetCameraSettings(), Camera.SetIMUOrientationPrior(), Camera.SetRegionOfInterest(), Camera.SetSVOPosition(), Camera.StartPublishing(), Camera.StartRegionOfInterestAutoDetection(), Camera.StopPublishing(), Camera.UpdateFusedPointCloud(), Camera.UpdateMesh(), Camera.UpdateObjectsBatch(), and Camera.UpdateSelfCalibration().
|
get |
Width of the images returned by the camera in pixels.
It corresponds to the camera's current resolution setting.
|
get |
Height of the images returned by the camera in pixels.
It corresponds to the camera's current resolution setting.
|
get |
Baseline of the camera (distance between the cameras).
Extracted from calibration files.
Referenced by Camera.GetPosition().
|
get |
Current horizontal field of view in degrees of the camera.
|
get |
Current vertical field of view in degrees of the camera.
Referenced by Camera.GetSensorsConfiguration().
|
get |
Stereo parameters for current ZED camera prior to rectification (distorted).
|
get |
Stereo parameters for current ZED camera after rectification (undistorted).
|
get |
Model of the camera.
|
get |
Whether the camera has been successfully initialized.