DDS Middleware and Network Tuning

⚠️ Warning ⚠️

If you don’t apply these settings, ROS 2 nodes will fail to receive and send large data like point clouds or images published by the ZED ROS 2 nodes.

📌 Note: The configuration described in this documentation must be applied to all the machines involved in the ROS 2 infrastructure that need to send or receive ZED ROS 2 messages.

Change DDS Middleware #

Cyclone DDS is the recommended and most extensively tested DDS implementation for the ZED ROS 2 Wrapper. It also ensures reliable communication with the Nav2 framework, supporting autonomous navigation tasks.

Install Cyclone DDS #

The default DDS middleware included with ROS 2 Humble Hawksbill is eProsima’s Fast DDS. To use Cyclone DDS, you must install and configure it separately.

Open a terminal console (Ctrl + Alt + t) and enter the following command to install the required packages:

sudo apt install ros-$ROS_DISTRO-rmw-cyclonedds-cpp

To ensure ROS 2 nodes use Cyclone DDS, you need to specify the default middleware by setting the RMW_IMPLEMENTATION environment variable in each terminal where ROS nodes are launched:

export RMW_IMPLEMENTATION=rmw_cyclonedds_cpp

You can set the RMW_IMPLEMENTATION environment variable automatically by adding the above command to the file ~/.bashrc:

echo "export RMW_IMPLEMENTATION=rmw_cyclonedds_cpp" >> ~/.bashrc
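To confirm the setting took effect, you can print the variable back in a new shell; a minimal check:

```shell
# Reload the shell configuration and verify the selected middleware
source ~/.bashrc
printenv RMW_IMPLEMENTATION   # should print rmw_cyclonedds_cpp if the export was added
```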

Tuning for large messages #

DDS implementations are not designed to handle large messages (such as images or point clouds) out of the box. It is therefore necessary to tune both the middleware and the network parameters to prevent data loss and system overload.

Reduce fragment timeout time #

If any part of a UDP packet’s IP fragment is missing, the remaining received fragments will take up space in the kernel buffer. If the connection is unreliable (such as WiFi), this can potentially fill up the kernel buffer on the receiving end.

  • Default value: 30 seconds
  • New value: 3 seconds
sudo sysctl -w net.ipv4.ipfrag_time=3

Reducing this parameter’s value also reduces the window of time where no fragments are received. The parameter is global for all incoming fragments, so the feasibility of reducing its value needs to be considered for every environment.
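A quick way to sanity-check the change is to read the value with sysctl before and after applying it; a minimal sketch:

```shell
# Read the current reassembly timeout (no root required for reads)
sysctl net.ipv4.ipfrag_time

# Revert to the kernel default if needed
sudo sysctl -w net.ipv4.ipfrag_time=30
```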

Increase the maximum memory used to reassemble IP fragments #

Significantly increasing this parameter helps prevent the reassembly buffer from filling up completely. However, the value would likely need to be very high to accommodate all the data received during the ipfrag_time window, assuming that every UDP packet is missing one fragment.

  • Default value: 4194304 B (4 MiB)
  • New value: 134217728 B (128 MiB)
sudo sysctl -w net.ipv4.ipfrag_high_thresh=134217728

Increase the maximum Linux kernel receive buffer size #

Cyclone DDS may fail to deliver large messages reliably, even with reliable QoS settings over a wired network. Increasing the maximum Linux kernel receive buffer size mitigates the problem.

  • Default value: 212992 B (208 KiB)
  • New value: 2147483647 B (2 GiB)
sudo sysctl -w net.core.rmem_max=2147483647

Increase the minimum socket receive buffer and maximum size of messages for Cyclone DDS #

To set the minimum socket receive buffer size that Cyclone DDS requests, write a configuration file that Cyclone DDS loads at startup and specify a value for the SocketReceiveBufferSize parameter.

The same configuration file is also used to adjust the maximum message size allowed by Cyclone DDS, by configuring the MaxMessageSize parameter.

An example of Cyclone DDS configuration file:

<?xml version="1.0" encoding="UTF-8" ?>
<CycloneDDS xmlns="https://cdds.io/config" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="https://cdds.io/config https://raw.githubusercontent.com/eclipse-cyclonedds/cyclonedds/master/etc/cyclonedds.xsd">
  <Domain Id="any">
    <General>
      <Interfaces>
        <NetworkInterface autodetermine="true" priority="default" multicast="default" />
      </Interfaces>
      <AllowMulticast>default</AllowMulticast>
      <MaxMessageSize>65500B</MaxMessageSize>
    </General>
    <Internal>
      <SocketReceiveBufferSize min="10MB"/>
      <Watermarks>
        <WhcHigh>500kB</WhcHigh>
      </Watermarks>
    </Internal>
  </Domain>
</CycloneDDS>

You can refer to the Eclipse Cyclone DDS run-time configuration documentation for more details concerning the available parameters.

To force a ROS 2 node to use the tuned Cyclone DDS parameters, set the CYCLONEDDS_URI environment variable:

export CYCLONEDDS_URI=file:///absolute/path/to/the/configuration/file
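For example, if the file was saved as ~/cyclonedds.xml (a hypothetical location), the two variables can be set together before launching a node:

```shell
# Hypothetical path: adjust to where you saved the configuration file
export RMW_IMPLEMENTATION=rmw_cyclonedds_cpp
export CYCLONEDDS_URI=file://$HOME/cyclonedds.xml

# Any ROS 2 node started from this shell now uses the tuned Cyclone DDS settings,
# e.g. the ZED wrapper launch file (camera model is an example):
# ros2 launch zed_wrapper zed_camera.launch.py camera_model:=zed2i
```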

Make the tuning permanent #

Network settings #

Open a command line console (Ctrl + Alt + t) and enter the following command to create a configuration file:

sudo nano /etc/sysctl.d/10-cyclone-max.conf

Paste the following into the file:

# IP fragmentation settings
net.ipv4.ipfrag_time=3  # in seconds, default is 30 s
net.ipv4.ipfrag_high_thresh=134217728  # 128 MiB, default is 4 MiB

# Increase the maximum receive buffer size for network packets
net.core.rmem_max=2147483647  # 2 GiB, default is 208 KiB

Save the file and reboot.

Validate the sysctl settings, after a reboot:

$ sysctl net.core.rmem_max net.ipv4.ipfrag_time net.ipv4.ipfrag_high_thresh
net.core.rmem_max = 2147483647
net.ipv4.ipfrag_time = 3
net.ipv4.ipfrag_high_thresh = 134217728

Cyclone DDS settings #

Save the following file as ~/cyclonedds.xml.

<?xml version="1.0" encoding="UTF-8" ?>
<CycloneDDS xmlns="https://cdds.io/config" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="https://cdds.io/config https://raw.githubusercontent.com/eclipse-cyclonedds/cyclonedds/master/etc/cyclonedds.xsd">
  <Domain Id="any">
    <General>
      <Interfaces>
        <NetworkInterface autodetermine="true" priority="default" multicast="default" />
      </Interfaces>
      <AllowMulticast>default</AllowMulticast>
      <MaxMessageSize>65500B</MaxMessageSize>
    </General>
    <Internal>
      <SocketReceiveBufferSize min="10MB"/>
      <Watermarks>
        <WhcHigh>500kB</WhcHigh>
      </Watermarks>
    </Internal>
  </Domain>
</CycloneDDS>

Then add the following lines to your ~/.bashrc file.

export RMW_IMPLEMENTATION=rmw_cyclonedds_cpp

export CYCLONEDDS_URI=file:///absolute/path/to/cyclonedds.xml

# Replace `/absolute/path/to/cyclonedds.xml` with the actual path to the file.
# Example: export CYCLONEDDS_URI=file:///home/user/cyclonedds.xml

ROS Domain #

In DDS, the Domain ID serves as the key mechanism that allows different logical networks to share the same physical network.

ROS 2 nodes within the same domain can automatically discover and communicate with each other, while nodes on different domains remain isolated.

By default, all ROS 2 nodes use Domain ID 0.

To prevent interference between multiple groups of computers running ROS 2 on the same network, each group should be assigned a unique Domain ID.

The official ROS 2 documentation explains the derivation of the range of domain IDs that should be used in ROS 2. To skip that background and just choose a safe number, simply choose a domain ID between 0 and 101, inclusive.

Set the ROS Domain #

Open a terminal console and enter the following command:

export ROS_DOMAIN_ID=<DOMAIN_ID>

Replace <DOMAIN_ID> with a number between 0 and 101.

To permanently set the same ROS Domain for all the nodes you want to launch, add the following line to your ~/.bashrc file.

export ROS_DOMAIN_ID=<DOMAIN_ID>

# Example: export ROS_DOMAIN_ID=5

📌 Note: all the nodes that must communicate in the same ROS 2 infrastructure, nodes running in Docker images included, must use the same ROS Domain setting.
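Domain isolation can be verified with the standard ROS 2 demo nodes; a sketch, assuming the demo_nodes_cpp package is installed:

```shell
# Terminal 1: talker on domain 5
ROS_DOMAIN_ID=5 ros2 run demo_nodes_cpp talker

# Terminal 2: a listener on the same domain receives the messages
ROS_DOMAIN_ID=5 ros2 run demo_nodes_cpp listener

# Terminal 3: a listener on a different domain discovers nothing and stays silent
ROS_DOMAIN_ID=6 ros2 run demo_nodes_cpp listener
```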

Change MTU size #

The optimal MTU (Maximum Transmission Unit) value for ROS 2 largely depends on your network environment and the size of the messages being transmitted.

Common MTU Values for ROS 2:

  • Default (1500 bytes): This is a safe choice if you’re working in a mixed network or with devices that may not support jumbo frames. Suitable for smaller messages, typical sensor data, and standard LAN setups.
  • 9000 bytes (Jumbo Frames): Recommended if you’re transmitting large data, such as high-resolution images, point clouds, or video streams. Your entire network (switches, routers, and network cards) must support jumbo frames for it to work properly. Ideal for low-latency, high-throughput networks often used in robotics labs or environments with dedicated infrastructure.
  • Intermediate MTU (4000-6000 bytes): In some cases, you might not want to use the full 9000 bytes but still want larger frames for more efficient communication. This can be a good middle-ground option if your network supports it.

To handle the large messages published by the ZED ROS 2 nodes we recommend using the maximum size possible.

📌 Note: Not every network controller (NIC) model supports Jumbo Frames (9000 B). In this case, we recommend testing your configuration with temporary commands to use the largest MTU available.

Temporary setup #

This method changes the MTU for the current session only; it reverts to the default after a reboot. This is useful to test a configuration before making it permanent.

Open a terminal and use the following command to set the MTU for your network interface:

sudo ip link set dev INTERFACE_NAME mtu 9000

Replace INTERFACE_NAME with your actual network interface name, such as eth0 or wlan0.

For example, to set the MTU to 9000 on eth0, you would run:

sudo ip link set dev eth0 mtu 9000

Verify the change with:

ip link show INTERFACE_NAME
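You can also verify that jumbo frames actually traverse the network with a non-fragmenting ping; a sketch, where REMOTE_HOST stands for another machine on the same LAN:

```shell
# Payload = MTU (9000) - IPv4 header (20) - ICMP header (8) = 8972 bytes
# -M do forbids fragmentation, so the ping fails if any hop does not support jumbo frames
ping -c 3 -M do -s 8972 REMOTE_HOST
```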

Permanent setup #

Open the Netplan configuration file for your network interface. The file is usually located in /etc/netplan/ and will have a .yaml extension (e.g., 01-netcfg.yaml or 50-cloud-init.yaml).

sudo nano /etc/netplan/<config_file_name>

Look for the section related to your network interface and add or modify the mtu value. It should look like this:

network:
  version: 2
  ethernets:
    INTERFACE_NAME:
      dhcp4: yes
      mtu: 9000

Replace INTERFACE_NAME with your actual interface name and set the MTU value you need.

Save the file and apply the configuration:

sudo netplan apply

Verify that the MTU has been set:

ip link show <INTERFACE_NAME>

📌 Note: if the netplan command is missing (e.g. on NVIDIA Jetson), you can install it using the command sudo apt install netplan.io.

Use compressed topics #

To reduce the bandwidth required to transmit image and point cloud messages, we recommend subscribing to the available compressed topics.

Image topics #

The ZED ROS 2 Wrapper node uses the image_transport package to send all the image topics. This package provides image data compression that allows you to reduce the bandwidth required for data transmission.

  • compressed (e.g. /zed/zed_node/left/image_rect_color/compressed): performs JPEG compression of color images. Recommended if hardware compression is not available.
  • compressedDepth (e.g. /zed/zed_node/depth/depth_registered/compressedDepth): performs floating point compression of depth maps using PNG compression.
  • ffmpeg (e.g. /zed/zed_node/left_gray/image_rect_gray/ffmpeg): performs frame compression using the FFMPEG library if available. We recommend using this compressed topic if hardware compression is available. Read more here. You can find FFMPEG tuning parameters in the file config/ffmpeg.yaml.
  • theora (e.g. /zed/zed_node/left_gray/image_rect_gray/theora): performs frame compression using the Theora codec if available.
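On a receiving machine, a compressed stream can be converted back to a raw image topic with the image_transport republish node; a sketch, assuming the default ZED topic names (the output topic name is arbitrary):

```shell
# Subscribe to the JPEG-compressed topic and republish it as a raw image
ros2 run image_transport republish compressed raw \
  --ros-args \
  --remap in/compressed:=/zed/zed_node/left/image_rect_color/compressed \
  --remap out:=/zed/zed_node/left/image_rect_color_raw
```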

Point cloud topics #

The ZED ROS 2 Wrapper node uses the point_cloud_transport package to send all the point cloud topics. This package provides point cloud compression that allows you to reduce the bandwidth required for data transmission.

  • draco (e.g. /zed/zed_node/point_cloud/cloud_registered/draco): performs point cloud compression using Google Draco Lossy compression.
  • zlib (e.g. /zed/zed_node/point_cloud/cloud_registered/zlib): performs point cloud compression using zlib Lossless compression.
  • zstd (e.g. /zed/zed_node/point_cloud/cloud_registered/zstd): performs point cloud compression using Facebook Zstandard Lossless compression.
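To quantify the savings, you can compare the bandwidth of the raw and compressed topics with ros2 topic bw; a sketch, assuming the default ZED topic names:

```shell
# Bandwidth of the uncompressed point cloud...
ros2 topic bw /zed/zed_node/point_cloud/cloud_registered
# ...versus the zstd-compressed variant
ros2 topic bw /zed/zed_node/point_cloud/cloud_registered/zstd
```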

Use smaller and less frequent information for data preview #

When subscribing to image and point cloud data solely for preview purposes, it is not necessary to use the maximum available resolution and frame rate.

Reduce data size #

The ZED ROS 2 Wrapper offers a way to reduce the size of the published image and depth messages while maintaining the grab resolution for internal ZED SDK processing. This allows efficient previewing without compromising performance and saturating the network bandwidth.

Data size publishing is controlled by two parameters in config/common.yaml:

  • general.pub_resolution: use 'NATIVE' to publish data at the “grab” resolution. Use 'CUSTOM' to retrieve and publish resized image and depth data.
  • general.pub_downscale_factor: the rescale factor applied to image and depth data when general.pub_resolution is 'CUSTOM'.

For instance, if you set general.grab_resolution to HD720, the ZED SDK will internally process images and depth data at a resolution of 1280x720. However, by setting general.pub_resolution to 'CUSTOM' and general.pub_downscale_factor to 2.0, you can publish image and depth messages at a reduced resolution of 640x360, optimizing resource usage for preview purposes without affecting the internal processing quality.
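The scenario above maps to the following fragment of config/common.yaml (simplified sketch: the actual file may nest these keys under a ros__parameters section, and the values are examples):

```yaml
general:
  grab_resolution: 'HD720'      # internal SDK processing at 1280x720
  pub_resolution: 'CUSTOM'      # publish resized image/depth data
  pub_downscale_factor: 2.0     # published resolution becomes 640x360
```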

Reduce data publishing rate #

The ZED ROS 2 Wrapper provides a method to lower the frequency of the published image and depth messages while maintaining the internal ZED SDK processing grab rate. This allows for efficient previewing without compromising performance or overloading the network bandwidth.

The publishing frequency of color images and depth maps is controlled by the parameter general.pub_frame_rate. The publishing frequency of the point cloud is controlled by the parameter depth.point_cloud_freq.