News
Mar 20, 2025.

Stereolabs unveils ZED SDK 5 with TERRA AI, revolutionizing vision-based sensing

Today, we unveil the ZED SDK 5.0, powered by TERRA AI, our most advanced vision model yet. This release redefines AI perception with 5× faster sensing performance, up to 3× lower compute load on Jetson, and unmatched depth quality, even in challenging conditions.

TERRA AI is the most accurate, fastest, and most lightweight vision-based sensing AI available today. It powers robotic applications such as AMRs, delivery robots, robotic lawn mowers, robotic arms, and agricultural vehicles, as well as industrial applications like warehouse and factory automation and stationary digital twin deployments.

Powered by TERRA AI: Superhuman Vision Foundation Model

At Stereolabs, we have believed in the power of human vision since day one. We pioneered 3D computer vision 10 years ago, and we are doing it again today. Over the past five years, we have been developing a new, efficient architecture for AI vision perception, codenamed “TERRA”, and today, we are proud to unveil ZED SDK 5.0, powered by TERRA AI—a unified foundation model that represents a breakthrough in spatial perception.

TERRA AI is a multi-task vision model that estimates depth, semantics, objects, and occupancy in real time on embedded GPUs and NPUs. TERRA AI operates at high resolution, producing 2-megapixel depth maps in 30 milliseconds on an NVIDIA Jetson Orin Nano 8GB.

TERRA AI dramatically outperforms all prior work in depth estimation performance and robustness, delivering up to 5× lower latency and higher accuracy than state-of-the-art networks, especially in challenging industrial environments such as low-light, reflective warehouses and factories, or fog and rain in outdoor farms.
This next-gen perception model further solidifies Stereolabs' position as a leader in vision AI solutions for off-road and industrial robotics, enabling surround perception for any application through a broad portfolio of cameras, ECUs, and software.

ZED RGB Image

SDK 5.0, powered by TERRA AI, depth map

ZED SDK 5.0: A Leap in Vision AI Perception

ZED SDK 5.0 brings significant enhancements to depth perception using TERRA, while introducing a powerful new option for even faster performance. This release also introduces Magellan™, our third-generation vision-based localization technology, enabling precise positioning both indoors and outdoors with centimeter-level accuracy.

Up to 5× Faster Depth Estimation

Next-gen AI sensing delivers significantly faster depth computation while cutting Jetson compute load by up to 3×, enabling real-time 360° multi-camera operation on constrained platforms like the Orin Nano and making vision-based automation more accessible and efficient.

Performance comparison of depth processing on an NVIDIA Jetson Orin Nano, showcasing up to 5× speed improvement in ZED SDK 5.0.

GPU load comparison on an NVIDIA Jetson AGX, highlighting up to 3× lower computational load for improved efficiency in ZED SDK 5.0.

Up to 2MP High-Definition Depth Maps

TERRA AI generates high-resolution 2MP depth maps with unmatched sharpness and fine detail, achieving a level of scene understanding and spatial awareness well beyond what competing sensors or LiDAR can achieve.

Now, users can choose from three modes, each tailored to different application needs (see the configuration sketch after the list):

  • Neural: An ideal balance between speed and precision, well suited to most general-purpose applications.
  • Neural Light: A new mode focused on performance, delivering up to 5× the performance of the modes in previous releases, perfect for applications where fast depth processing is critical.
  • Neural Plus: A mode designed for applications requiring greater depth accuracy, delivering higher precision and finer detail.
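
For developers, switching between these modes is a one-line configuration change. The sketch below uses the ZED Python API (pyzed); the NEURAL_LIGHT and NEURAL_PLUS enum names are assumptions based on the SDK's existing DEPTH_MODE convention.

```python
# Minimal sketch: selecting a depth mode with the ZED Python API (pyzed).
# The NEURAL_LIGHT / NEURAL_PLUS enum names are assumptions based on the
# SDK's existing DEPTH_MODE convention.
import pyzed.sl as sl

init_params = sl.InitParameters()
init_params.depth_mode = sl.DEPTH_MODE.NEURAL_LIGHT  # or NEURAL, NEURAL_PLUS
init_params.camera_resolution = sl.RESOLUTION.HD1080

zed = sl.Camera()
if zed.open(init_params) != sl.ERROR_CODE.SUCCESS:
    raise RuntimeError("Failed to open the ZED camera")

depth = sl.Mat()
runtime = sl.RuntimeParameters()
if zed.grab(runtime) == sl.ERROR_CODE.SUCCESS:
    # Retrieve the depth map produced by the selected neural mode
    zed.retrieve_measure(depth, sl.MEASURE.DEPTH)

zed.close()
```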

RGB images

ZED Depth Maps

Conventional Sensor Depth Map

Perceive Objects at Both Far and Close Range

With ZED SDK 5.0, depth sensing capabilities have been significantly enhanced to detect objects from as close as 0.1 meters to as far as 40 meters. This expanded range eliminates blind spots in close-range perception, ensuring safe and reliable operation in a variety of autonomous applications. Far-range perception has also been improved, with sharper edges and more accurate point clouds, enabling ZED cameras to provide unparalleled depth perception across all operating conditions.
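
As a rough sketch, the operating range is typically set at camera initialization; the example below assumes the SDK's existing minimum and maximum depth distance parameters carry over to SDK 5.0.

```python
# Sketch: configuring the extended 0.1 m to 40 m sensing range at init time.
# Assumes SDK 5.0 keeps the existing InitParameters depth range fields.
import pyzed.sl as sl

init_params = sl.InitParameters()
init_params.depth_mode = sl.DEPTH_MODE.NEURAL
init_params.coordinate_units = sl.UNIT.METER
init_params.depth_minimum_distance = 0.1   # close-range perception, in meters
init_params.depth_maximum_distance = 40.0  # far-range perception, in meters

zed = sl.Camera()
if zed.open(init_params) != sl.ERROR_CODE.SUCCESS:
    raise RuntimeError("Failed to open the ZED camera")
```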

Close-range scene

Far-range scene

Reliable Performance in Low Light and Challenging Environments

TERRA AI delivers high-quality perception even in low-light or challenging conditions with texture-less surfaces, repetitive patterns and high-exposure scenes. These improvements set a new benchmark in passive vision-based perception, unlocking the general use of cameras in any environment.

Low-light scene

ZED Depth Map

Intelligent Weather Perception

The new ZED SDK 5.0 introduces intelligent perception capabilities, enabling robots to assess real-time depth confidence and adapt their operation accordingly. TERRA technology has been trained to see through rain and fog, allowing robots to detect and recognize objects in adverse weather conditions. This market-leading 3D camera technology significantly expands robot usability across diverse weather scenarios.
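
As an illustration of how an application might consume depth confidence, the sketch below reads the SDK's per-pixel confidence measure; assuming this existing output remains the mechanism in SDK 5.0, a robot could throttle its speed when scene-level confidence drops.

```python
# Sketch: reading the per-pixel depth confidence map so a robot can adapt its
# behavior (e.g., slow down) when confidence drops in rain or fog.
# Assumes the existing MEASURE.CONFIDENCE output remains available in SDK 5.0.
import numpy as np
import pyzed.sl as sl

zed = sl.Camera()
init_params = sl.InitParameters()
init_params.depth_mode = sl.DEPTH_MODE.NEURAL
if zed.open(init_params) != sl.ERROR_CODE.SUCCESS:
    raise RuntimeError("Failed to open the ZED camera")

confidence = sl.Mat()
runtime = sl.RuntimeParameters()
if zed.grab(runtime) == sl.ERROR_CODE.SUCCESS:
    zed.retrieve_measure(confidence, sl.MEASURE.CONFIDENCE)
    # In the ZED convention, lower values generally indicate more reliable depth;
    # average them here as a crude scene-level reliability score (illustrative only).
    score = float(np.nanmean(confidence.get_data()))
    print(f"Mean depth confidence value: {score:.1f}")

zed.close()
```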

Rainy environment

ZED Depth Map

Introducing Magellan™: Advanced Vision-Based Localization Technology

Magellan™ is Stereolabs' third-generation localization system, engineered to deliver high-precision and reliable positioning for autonomous platforms in industrial environments. By fusing stereo vision, IMU, and optional GNSS data, Magellan™ achieves centimeter-level accuracy across both structured and unstructured environments. This multi-sensor fusion approach ensures robust performance in GPS-denied areas, enabling autonomous systems to navigate with confidence.

Magellan™ is designed for developers and engineers building autonomous solutions in off-road environments, agriculture, logistics, and industrial automation, providing accurate, high-frequency positioning even in challenging environments:

  • Real-time positioning with centimeter-level accuracy: Magellan™ leverages multi-sensor fusion and visual odometry, continuously tracking key visual landmarks to estimate motion and relocalize. This allows for accurate positioning in any environment, reducing drift and improving tracking robustness.
  • Robust performance in GNSS-degraded environments: By combining data from cameras and IMUs, Magellan™ ensures precise localization in complex environments, even in challenging conditions where GNSS signals are weak or unavailable (e.g., urban canyons, orchards and vineyards, warehouses and factories).
  • Reduced total cost of ownership: Magellan™ can be used alongside ZED depth sensing capabilities, enabling both perception and localization with a single system, minimizing system complexity and significantly reducing the total cost of ownership of full automation.

The new SDK enables precise 3D SLAM mapping indoors and outdoors, accurately reconstructing large-scale environments such as roads, farms, warehouses, and ports with georeferenced coordinates.
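
The announcement does not detail Magellan™'s API surface; as an illustrative sketch only, the example below assumes it is exposed through the SDK's existing positional tracking interface and omits the optional GNSS fusion step.

```python
# Illustrative sketch only: enabling vision-based localization and reading the
# camera pose. Assumes Magellan is exposed through the SDK's existing
# positional tracking API; the optional GNSS fusion step is omitted.
import pyzed.sl as sl

zed = sl.Camera()
init_params = sl.InitParameters()
init_params.depth_mode = sl.DEPTH_MODE.NEURAL
init_params.coordinate_units = sl.UNIT.METER
if zed.open(init_params) != sl.ERROR_CODE.SUCCESS:
    raise RuntimeError("Failed to open the ZED camera")

zed.enable_positional_tracking(sl.PositionalTrackingParameters())

pose = sl.Pose()
runtime = sl.RuntimeParameters()
for _ in range(100):
    if zed.grab(runtime) == sl.ERROR_CODE.SUCCESS:
        state = zed.get_position(pose, sl.REFERENCE_FRAME.WORLD)
        if state == sl.POSITIONAL_TRACKING_STATE.OK:
            # Translation of the camera in the world frame, in meters
            t = pose.get_translation(sl.Translation()).get()
            print(f"x={t[0]:.3f}  y={t[1]:.3f}  z={t[2]:.3f}")

zed.disable_positional_tracking()
zed.close()
```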

Ultra-low Latency and Optimized Capture Pipeline

With ZED SDK 5.0, the capture pipeline architecture has been reworked, improving stability, reducing dropped frames, and lowering video latency below 60 ms (just 3 frames), among the lowest latencies in the industry. Beyond depth perception, ZED SDK 5.0 introduces a range of enhancements designed to streamline data handling, improve diagnostics, and boost overall 3D vision performance. Key new functionalities include:

  • Enhanced camera health monitoring: Real-time feedback on critical camera metrics is now available, including scene illumination, depth reliability, and image quality, allowing for proactive performance monitoring.
  • Optimized point cloud retrieval: New dynamic point cloud size adjustment and retrieval in C++ and Python reduces latency and compute load without compromising accuracy (see the sketch after this list).
  • Optimized image capture: New dedicated functions separate image acquisition from data processing, enabling parallel computing for AI-driven tasks such as object detection and scene analysis.
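
The sketch below shows the idea of requesting a point cloud at a reduced resolution; the retrieval call and Resolution type exist in the current Python API, though treating this as the exact dynamic-sizing path in SDK 5.0 is an assumption.

```python
# Sketch: retrieving a downsized point cloud to cut latency and bandwidth.
# retrieve_measure() with an explicit sl.Resolution exists in the current API;
# whether this is the exact mechanism behind SDK 5.0's dynamic sizing is an assumption.
import pyzed.sl as sl

zed = sl.Camera()
init_params = sl.InitParameters()
init_params.depth_mode = sl.DEPTH_MODE.NEURAL
if zed.open(init_params) != sl.ERROR_CODE.SUCCESS:
    raise RuntimeError("Failed to open the ZED camera")

# Request the point cloud at half resolution to lower transfer and processing cost
res = sl.Resolution(960, 540)
point_cloud = sl.Mat()
runtime = sl.RuntimeParameters()
if zed.grab(runtime) == sl.ERROR_CODE.SUCCESS:
    zed.retrieve_measure(point_cloud, sl.MEASURE.XYZRGBA, sl.MEM.CPU, res)
    print("Point cloud size:", point_cloud.get_width(), "x", point_cloud.get_height())

zed.close()
```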

For a detailed breakdown of all new features and improvements, check out the ZED SDK 5.0 Release Notes.

Physics-based Simulation with Isaac Sim

Thanks to the ZED extension in Isaac Sim, users can seamlessly transition from virtual simulation to real-world deployment, accelerating the development and testing of AI vision applications.

Isaac Sim simulator

ZED Depth Viewer

Unlocking Scalable AI for Robotics and Automation

The new ZED SDK 5.0 is designed to transform robotics and machine automation. With tens of thousands of industrial businesses actively integrating ZED cameras and the ZED SDK worldwide, we are set to democratize camera-based vision perception even further. Paired with our new ZED Box Mini and Autonomy Kits, OEMs and developers can now build the next generation of intelligent machines, leveraging cameras as a high-performance, cost-effective, and scalable alternative to LiDAR and traditional active sensors.

“TERRA AI provides a quantum leap in vision sensing performance for roboticists and developers, paving the way for automation applications that were previously impossible. With up to 5× the performance of the previous generation, TERRA AI and ZED SDK 5 are setting a new standard for the industry,” said Cecile Schmollgruber, Stereolabs’ CEO and founder.

The future of AI vision starts here

ZED SDK 5.0 is more than just an upgrade—it’s a game-changing advancement in AI-powered perception. By combining breakthrough efficiency, enhanced AI depth sensing, and seamless multi-camera support, Stereolabs is empowering developers and enterprises to build the next generation of intelligent machines—at a fraction of the cost of LiDAR or other sensors.

ZED SDK 5.0 with TERRA AI is now available for download. Visit stereolabs.com/developers to learn more and start building the future of AI vision.

To learn more about Stereolabs and its vision-based perception technology, visit stereolabs.com.