Automation in healthcare using spatial mapping and positional tracking features from 3D stereo cameras
COVID-19 presented employers with a simple choice: find ways for workers to do their jobs safely, or shut down. These efforts are representative of a broader shift amid the pandemic towards automation, artificial intelligence, and autonomous robots. “City Robotics, along with ZED, Stereolabs’ 3D sensors, is doing just that!”
Robots were first envisioned as a literary device by the writers and filmmakers of the early 20th century, who used the medium to explore their hopes and fears about technology as the era of the automobile, the telephone, and the aircraft picked up its reckless jazz-age speed. From Isaac Asimov’s “I, Robot” to “WALL-E” and the “Terminator” films, and countless iterations in between, these depictions have succeeded admirably in their task.
The fourth industrial revolution, built on autonomous robotics and computing, is moving from science fiction and R&D to real-life deployment. Enabled by vast increases in computing capacity, burgeoning data harvested through powerful algorithms embedded in digital platforms, advanced material development, and urban connectivity, the capability of machines is expanding across all facets of the economy and everyday life.
Robotic 3D stereovision eyes by Stereolabs
A robot approaches an empty hospital room previously occupied by a COVID-19 patient. The robot is “Robo-UV” from City Robotics, and it is scanning the length and breadth of the room with a 3D stereovision sensor from Stereolabs. After a pause for thought, it moves around the room, disinfecting it of viruses, bacteria, and other pathogens using UV-C radiation, which destroys the outer protein coating of the SARS novel coronavirus. Beyond the coronavirus, the robot can also rid the room of other airborne pathogens, making it safe and secure.
Robo-UV: UV-C disinfection robot by City Robotics
City Robotics has already deployed these robots in several hospitals in Poland. Robo-UV reduces human effort and increases the safety of both hospital staff and patients by scanning the room, creating a 3D map of the environment with the ZED sensors, and then navigating the space to optimize the targeted doses of UV-C disinfection light.
Robo-UV workflow
The Robo-UV robot is an autonomous, flexible, modular disinfection robot. Its autonomous drive system means that cleaning personnel can easily call the robot via a tablet or phone; it navigates to the desired position while the staff performs the initial cleaning. After the initial cleaning, the robot is sent into the room and activated, and the staff can continue their planned work while disinfection is ongoing. Once disinfection is complete, the robot signals the personnel and is ready to move on to the next room.
The robot is equipped with a stereo sensor that replicates the functioning of human eyes, enabling it to generate a 3D map of the environment in real time. Using neural depth sensing, the robot perceives its surroundings in 3D up to a distance of 20 m, while recognizing and tracking 3D objects in real time. Unlike LIDAR or IR sensors, the ZED camera is robust to changing light conditions and strong sunlight, making it well suited to the challenging hospital environment.
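While the ZED’s neural depth sensing is proprietary, the underlying stereo geometry is standard: for a rectified stereo pair, depth falls off inversely with the pixel disparity between the left and right views. Below is a minimal sketch of that relationship; the focal length and baseline values are made up for illustration, not real ZED calibration data.

```python
# Illustrative pinhole stereo model: Z = f * B / d, where f is the focal
# length in pixels, B the baseline between the two lenses in metres, and
# d the disparity (horizontal pixel shift of a point between the views).
# The numbers below are hypothetical, not actual ZED parameters.

def depth_from_disparity(disparity_px: float, focal_px: float, baseline_m: float) -> float:
    """Depth of a point from its disparity in a rectified stereo pair."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

# Example: 700 px focal length, 12 cm baseline, 4.2 px disparity
z = depth_from_disparity(4.2, 700.0, 0.12)
print(round(z, 2))  # 20.0 -> a point 20 m away shifts by only ~4 px
```

This also shows why long-range stereo is hard: at 20 m the disparity is only a few pixels, so sub-pixel matching accuracy matters.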
Robo-UV in action
When used with the Stereolabs SDK, the ZED camera can also generate depth maps of the environment. A depth map captured by the ZED stores a distance value (Z) for each pixel (X, Y) in the image, and can equivalently be represented as a 3D point cloud.
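As a rough illustration of how a depth map turns into a point cloud, the pinhole back-projection below lifts each depth pixel into a 3D point. The intrinsics (fx, fy, cx, cy) are hypothetical values for the sketch, not actual ZED calibration data.

```python
import numpy as np

# Back-project an (H, W) depth map into an (H*W, 3) point cloud using a
# pinhole camera model. fx, fy are focal lengths in pixels; cx, cy is the
# principal point. All values here are illustrative, not ZED calibration.

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Return an (H*W, 3) array of (X, Y, Z) points from a depth map."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.stack([x, y, depth], axis=-1).reshape(-1, 3)

depth = np.full((4, 4), 2.0)  # a flat wall 2 m in front of the camera
cloud = depth_to_point_cloud(depth, fx=350.0, fy=350.0, cx=2.0, cy=2.0)
print(cloud.shape)  # (16, 3)
```

Every point keeps its measured Z, while X and Y spread out according to how far the pixel sits from the image centre.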
The Stereolabs spatial mapping module performs real-time 3D reconstruction. It iteratively fuses depth maps using the motion estimates provided by the SDK’s positional tracking, collecting all information from the motion tracking module in a separate thread to build an understanding of the spatial configuration of the overall scene. It can detect when the camera revisits a scene it has already seen, and then correct all accumulated drift by comparing the “known” position stored in spatial memory with the current position given by motion tracking.
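The drift-correction idea can be sketched with plain homogeneous transforms: when the camera recognizes a place whose pose is stored in spatial memory, a rigid correction maps the drifted estimate back onto the stored pose. This is a simplified illustration of the concept, not the SDK’s actual algorithm.

```python
import numpy as np

def make_pose(x, y, yaw):
    """4x4 homogeneous pose in the ground plane (illustrative helper)."""
    c, s = np.cos(yaw), np.sin(yaw)
    T = np.eye(4)
    T[:3, :3] = [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]
    T[:3, 3] = [x, y, 0.0]
    return T

def correct_drift(current, known):
    """Rigid transform that maps the drifted estimate onto the stored pose."""
    return known @ np.linalg.inv(current)

known = make_pose(0.0, 0.0, 0.0)      # pose stored in spatial memory
drifted = make_pose(0.3, -0.1, 0.05)  # same place, with accumulated drift
correction = correct_drift(drifted, known)

# Applying the correction snaps the drifted estimate back onto the map.
print(np.allclose(correction @ drifted, known))  # True
```

In a real system the same correction would be propagated along the whole trajectory, so the reconstructed map stays globally consistent.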
3D point cloud using Stereolabs SDK
Embedded with a comprehensive sensor stack, including an IMU, a barometer, and a temperature sensor, the stereo camera is capable of positional tracking: estimating the device’s position relative to the world around it. This is used to track the movement of a camera or user in 3D space with six degrees of freedom (6DoF).
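A 6DoF pose is simply three translational plus three rotational degrees of freedom. The sketch below, a toy dead-reckoning loop rather than the SDK’s tracker, shows such a pose being updated step by step.

```python
import math
from dataclasses import dataclass, replace

@dataclass
class Pose6DoF:
    # Three translational + three rotational degrees of freedom.
    x: float = 0.0
    y: float = 0.0
    z: float = 0.0
    roll: float = 0.0
    pitch: float = 0.0
    yaw: float = 0.0

def integrate_planar_step(pose: Pose6DoF, forward_m: float, turn_rad: float) -> Pose6DoF:
    """Toy dead reckoning: turn in place, then move along the new heading."""
    yaw = pose.yaw + turn_rad
    return replace(pose,
                   x=pose.x + forward_m * math.cos(yaw),
                   y=pose.y + forward_m * math.sin(yaw),
                   yaw=yaw)

pose = Pose6DoF()
for _ in range(4):  # four 90-degree turns of 1 m each trace a closed square
    pose = integrate_planar_step(pose, 1.0, math.pi / 2)
# The loop closes: the pose returns (numerically) to the origin.
```

Pure dead reckoning like this accumulates error over time, which is exactly the drift that the spatial-memory correction described above exists to remove.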
The positional tracking in the Stereolabs SDK is composed of two modules: motion tracking, which estimates the camera’s movement from frame to frame, and spatial memory, which stores previously visited positions so that accumulated drift can be corrected.
In addition to positional tracking, depth sensing, and spatial mapping, the Stereolabs’ SDK is also capable of object detection, which uses AI and neural networks to determine which objects are present in both the left and right images.
The Stereolabs SDK supports integrations with multiple third-party libraries and environments, such as OpenCV, ROS, PyTorch, Docker, TensorFlow, YOLO, Unity, Unreal, and many more.
Here is what Deepjyoti Nath, CEO of City Robotics, had to say about using the ZED stereo cameras:
ZED 2 is the only camera we considered from the very beginning. The build quality is amazing, and it is very easy to integrate with off-the-shelf AI capabilities. Their support is amazing, too. The decision to continue using ZED cameras was straightforward to make.
Interested in exploring the ZED stereo cameras for your project? Reach out to us at support@stereolabs.com and gift your robot the power of true vision.