Running ROS 2 With Docker on Raspberry Pi - My Guide to Optimized Performance

by nilutpolkashyap in Circuits > Raspberry Pi



Now that I have my Raspberry Pi 4 set up with Docker and VS Code remote development, it's time for the exciting part - running ROS 2 in containers! In this article, I'll share how I build and optimize ROS 2 Docker containers specifically for the Pi's ARM64 architecture and resource constraints.

Why I Love Using Docker for ROS 2 on Pi

Before diving into the technical details, let me explain why this approach has become my go-to method:

  1. Consistent environments - My containers work the same way across different Pi setups
  2. Easy deployment - I can build once and deploy anywhere
  3. Resource isolation - Each ROS node runs in its own container with controlled resources
  4. Version management - I can run multiple ROS 2 versions side-by-side
  5. Clean development - No more dependency conflicts or messy installations

Supplies

What you'll need

Before we start, here's what I used for this setup:

  1. Raspberry Pi 4
  2. Power supply for the Pi 4
  3. Stable internet connection
  4. Host PC with Linux/Windows/Mac

Understanding ARM64 Architecture Considerations


The first thing I learned when working with Docker on Pi is that not all Docker images work out of the box. The Raspberry Pi 4 uses an ARM64 architecture, so I need ARM64-compatible images.

Checking my Pi's architecture

I always verify my Pi's architecture first:

uname -m

This should return aarch64, confirming I'm running 64-bit ARM.
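If you script your setup, a small guard like this (my own addition, not part of any official tooling) catches a 32-bit OS early, since ARM64 images won't run there:

```shell
# Fail fast when the OS isn't 64-bit ARM - ARM64 Docker images need aarch64
check_arch() {
    # Returns success only for a 64-bit ARM machine string
    [ "$1" = "aarch64" ]
}

if check_arch "$(uname -m)"; then
    echo "OK: 64-bit ARM - ARM64 images will work"
else
    echo "WARNING: not aarch64 - install a 64-bit OS image first" >&2
fi
```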

Finding ARM64 ROS 2 Images

I use the official ROS Docker images that support ARM64:

docker pull ros:humble-ros-base

Docker first checks whether the image already exists locally on the Pi. If not, it downloads it, automatically selecting the ARM64 variant to match the Pi's architecture.

You'll see output like:

humble-ros-base: Pulling from library/ros
fdf67ba0bcdc: Already exists
b0a77e697580: Already exists
22f546c8afef: Already exists
...
Status: Downloaded newer image for ros:humble-ros-base

I can verify the image is downloaded correctly with:

docker images
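To confirm Docker really pulled the ARM64 variant, and not just that some image exists, I also check the architecture recorded in the image metadata. This is an extra check of mine using Docker's built-in format templates:

```shell
# Print the CPU architecture the pulled image was built for
docker image inspect ros:humble-ros-base --format '{{.Architecture}}'
# On a Pi 4 this should print: arm64
```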

Creating My Base ROS 2 Container

Here's how I create my first ROS 2 container optimized for the Pi:

My Basic ROS 2 Dockerfile

I create a file called dockerfile.ros2-pi:

# Using the official ROS 2 Humble base image for ARM64
FROM ros:humble-ros-base

# Set environment variables for Pi optimization
ENV ROS_DOMAIN_ID=42
ENV RMW_IMPLEMENTATION=rmw_cyclonedds_cpp
ENV PYTHONUNBUFFERED=1

# Install additional packages I commonly need
RUN apt-get update && apt-get install -y \
    python3-pip \
    python3-colcon-common-extensions \
    python3-rosdep \
    ros-humble-rmw-cyclonedds-cpp \
    && rm -rf /var/lib/apt/lists/* \
    && apt-get clean

# Set up rosdep
RUN rosdep init || true
RUN rosdep update

# Create a workspace
WORKDIR /ros2_ws
RUN mkdir -p src

# Source ROS 2 in bashrc
RUN echo "source /opt/ros/humble/setup.bash" >> ~/.bashrc

# Set the default command
CMD ["bash"]

Building my Container

First, I build the container:

docker build -f dockerfile.ros2-pi -t ros2-pi:humble .

I usually grab a coffee during this build - it takes 10-15 minutes on the Pi.

Running the Container Interactively

I like to start with an interactive container to test things out:

# Run interactively with a terminal
docker run -it --rm --name my-ros2-container ros2-pi:humble

This gives me a bash prompt inside the container where I can run ROS 2 commands directly.
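Once inside, I run a quick smoke test to confirm the environment is ready (the official ros images source ROS 2 through their entrypoint, so these should work immediately):

```shell
# Inside the container: confirm ROS 2 is set up
printenv ROS_DISTRO   # should print: humble
ros2 topic list       # a fresh system typically lists /parameter_events and /rosout
```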

Adding volume mounts for development:

For actual development work, I usually want to share my code between the Pi and the container:

# Run with a workspace directory mounted from the Pi
docker run -it --rm --name my-ros2-container \
    -v /home/pi/my_ros2_workspace:/ros2_ws \
    ros2-pi:humble

What this does:

  1. -v /home/pi/my_ros2_workspace:/ros2_ws - Mounts my Pi's workspace folder into the container
  2. Any changes I make in VS Code (connected to the Pi) appear instantly in the container
  3. Built packages persist even if I delete the container

Connecting from a second terminal

If my container is already running, I can connect to it from another terminal window:

# Connecting to an already running container
docker exec -it my-ros2-container bash

This is incredibly useful when I want to:

  1. Run multiple ROS 2 nodes in the same container
  2. Monitor logs while running commands
  3. Debug issues while keeping the main process running

Running in Background Mode

For production, I run containers in the background:

# Run in background (detached mode)
docker run -d --name my-ros2-container ros2-pi:humble tail -f /dev/null

Then I can still connect to the container anytime with the docker exec command above.

Resource Optimization Strategies

The Pi has limited resources compared to a desktop computer, so I've implemented several strategies to make my containers run efficiently. Here's what I've learned works best:

Memory Optimization

The Pi 4 has either 4GB or 8GB of RAM, which needs to be shared between the OS and all running containers.

I always set memory limits for my containers to prevent one container from using all available RAM:

# Limit container to 1GB RAM with 2GB total, including swap
docker run --memory=1g --memory-swap=2g ros2-pi:humble

What this does:

  1. --memory=1g: Limits RAM usage to 1GB
  2. --memory-swap=2g: Allows up to 1GB additional swap space
  3. Prevents the container from crashing the Pi by using all the memory
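To confirm the limit actually took effect, I read it back from inside the container. The path below assumes cgroup v2, which current Raspberry Pi OS releases use:

```shell
# Start a limited container and print its effective memory cap in bytes
docker run --rm --memory=1g --memory-swap=2g ros2-pi:humble \
    cat /sys/fs/cgroup/memory.max
# Should print 1073741824 (1 GiB)
```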

CPU Optimization

The Pi 4 has a quad-core CPU, but ROS 2 nodes can be CPU-intensive.

For CPU-intensive nodes, I limit CPU usage:

# Limit to 2 CPU cores maximum
docker run --cpus=2 ros2-pi:humble

I can also lower a container's scheduling priority relative to other containers:

# Half the default CPU weight (--cpu-shares defaults to 1024)
docker run --cpu-shares=512 ros2-pi:humble

What this does:

  1. Prevents one container from monopolizing all CPU cores
  2. Ensures the Pi remains responsive for other tasks
  3. Helps with thermal management (less heat generation)
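As with memory, the effective CPU quota can be read back from the container's cgroup (again assuming cgroup v2):

```shell
# Print the effective CPU quota as "<quota> <period>" in microseconds
docker run --rm --cpus=2 ros2-pi:humble cat /sys/fs/cgroup/cpu.max
# --cpus=2 should show: 200000 100000 (two full cores per 100 ms period)
```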

Storage Optimization

SD cards have limited space and slower I/O compared to SSDs.

I use .dockerignore to keep build contexts small.

# .dockerignore file
*.log
*.tmp
.git/
__pycache__/
*.pyc
node_modules/

And I clean up after package installations:

# dockerfile
RUN apt-get update && apt-get install -y \
    package1 \
    package2 \
    && rm -rf /var/lib/apt/lists/* \
    && apt-get clean

My Multi-Stage Docker Build Approach

Why I use this: It dramatically reduces the final image size by excluding build tools and temporary files.

To keep container sizes small, I use multi-stage builds:

# Build stage - includes all build tools
FROM ros:humble-ros-base AS builder

WORKDIR /ros2_ws

# Copy source code if src directory exists
COPY src/ src/

# Install build dependencies (these won't be in final image)
RUN apt-get update && apt-get install -y \
    python3-colcon-common-extensions \
    build-essential \
    cmake \
    && rm -rf /var/lib/apt/lists/*

# Build the workspace
RUN . /opt/ros/humble/setup.sh && colcon build --cmake-args -DCMAKE_BUILD_TYPE=Release

# Runtime stage - much smaller, only includes what's needed to run
FROM ros:humble-ros-base

# Copy only the built artifacts (not the source or build tools)
COPY --from=builder /ros2_ws/install /ros2_ws/install

# Install only runtime dependencies
RUN apt-get update && apt-get install -y \
    python3-pip \
    && rm -rf /var/lib/apt/lists/*

# Set up environment
RUN echo "source /opt/ros/humble/setup.bash" >> ~/.bashrc
RUN echo "source /ros2_ws/install/setup.bash" >> ~/.bashrc

WORKDIR /ros2_ws
CMD ["bash"]

Before building, create the directory:

# Create an empty src directory for testing
mkdir -p src
# Build the image
docker build -f dockerfile.multi-stage -t ros2-pi:multi-stage .

Running with your workspace mounted:

For development work, I mount my workspace directory:

# Run with your workspace mounted from the Pi
docker run -it --rm --name my-ros2-container \
    -v /home/pi/my_ros2_workspace:/ros2_ws \
    ros2-pi:multi-stage

Why use a multi-stage build approach?

  1. Final image is 50-70% smaller
  2. Faster deployment and updates
  3. Less storage usage on the Pi
  4. Clean separation of build and runtime environments
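I like to verify the savings by listing both tags side by side. The exact sizes depend on what you install, so treat any numbers you see as illustrative:

```shell
# Compare the single-stage and multi-stage image sizes
docker images --format 'table {{.Repository}}:{{.Tag}}\t{{.Size}}' | grep ros2-pi
```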

Docker Compose for Multi-Node Systems


What is Docker Compose and Why Do I Need It?

Think of Docker Compose as a way to manage multiple containers as one application. Instead of running separate docker run commands for each ROS 2 node (which gets messy fast), I write one configuration file that describes all my containers and how they work together.

Why I love Docker Compose for ROS 2:

  1. One command starts everything: docker compose up starts my entire robot system
  2. Automatic networking: All containers can talk to each other automatically
  3. Dependency management: Containers start in the right order
  4. Easy scaling: I can run multiple copies of the same node
  5. Simplified development: Changes to one container don't affect others

For complex robotics projects, I use Docker Compose to manage multiple ROS 2 nodes:

My ROS 2 Docker Compose Setup

Now, let's create a practical example using the official ROS 2 talker and listener nodes from the "Writing a simple publisher and subscriber (Python)" tutorial. I'll set up Docker Compose to run both nodes in separate containers.

First, create a ROS 2 package named py_pubsub inside /home/pi/my_ros2_workspace/src by following that tutorial.

I create a docker-compose.yml file:

version: '3.8'

services:
  talker:
    build:
      context: .
      dockerfile: dockerfile.ros2-pi
    container_name: ros2-talker
    network_mode: host
    environment:
      - ROS_DOMAIN_ID=42
      - RMW_IMPLEMENTATION=rmw_cyclonedds_cpp
    volumes:
      - /home/pi/my_ros2_workspace:/ros2_ws
    command: >
      bash -c "source /opt/ros/humble/setup.bash &&
               cd /ros2_ws &&
               colcon build --packages-select py_pubsub &&
               source install/setup.bash &&
               ros2 run py_pubsub talker"
    restart: unless-stopped

  listener:
    build:
      context: .
      dockerfile: dockerfile.ros2-pi
    container_name: ros2-listener
    network_mode: host
    environment:
      - ROS_DOMAIN_ID=42
      - RMW_IMPLEMENTATION=rmw_cyclonedds_cpp
    volumes:
      - /home/pi/my_ros2_workspace:/ros2_ws
    command: >
      bash -c "source /opt/ros/humble/setup.bash &&
               cd /ros2_ws &&
               colcon build --packages-select py_pubsub &&
               source install/setup.bash &&
               ros2 run py_pubsub listener"
    restart: unless-stopped


Starting my Multi-Node System

From the directory where the docker-compose.yml was created, run:

docker compose up -d

What this setup demonstrates:

  1. Talker node: Publishes "Hello World" messages every 0.5 seconds to the 'topic' topic
  2. Listener node: Subscribes to the 'topic' topic and prints received messages
  3. Automatic building: Each container builds the package before running
  4. Volume mounting: Source code is shared between the host and containers
  5. Network communication: Both containers use host networking for ROS 2 discovery

I can monitor all my nodes with:

docker compose logs -f

You will see output like:

ros2-talker | [INFO] [1758575795.439667580] [minimal_publisher]: Publishing: "Hello World: 0"
ros2-listener | [INFO] [1758575795.440115780] [minimal_subscriber]: I heard: "Hello World: 0"
ros2-talker | [INFO] [1758575795.939564973] [minimal_publisher]: Publishing: "Hello World: 1"
ros2-listener | [INFO] [1758575795.942144191] [minimal_subscriber]: I heard: "Hello World: 1"

Stopping Multi-Node systems

To stop and clean up all containers:

docker compose down

Other useful Docker Compose commands:

# Just stop containers (don't remove them)
docker compose stop

# Start stopped containers again
docker compose start

# View status of all services
docker compose ps


Pi-Specific Optimizations I Always Use

DDS Configuration for Pi

What is this, and where do I create it?

DDS (Data Distribution Service) is how ROS 2 nodes communicate with each other. The default settings are designed for powerful computers, but the Pi needs more conservative settings to avoid overwhelming its network and memory.

I create a custom DDS configuration file called cyclonedds.xml in my project directory (the same folder as my dockerfile):

<?xml version="1.0" encoding="UTF-8" ?>
<CycloneDDS xmlns="https://cyclonedds.org/schema/dds/1.0">
  <Discovery>
    <ParticipantIndex>auto</ParticipantIndex>
    <Peers>
      <Peer Address="localhost"/>
    </Peers>
  </Discovery>
  <Internal>
    <Watermarks>
      <WhcHigh>1MB</WhcHigh>
      <WhcLow>512KB</WhcLow>
    </Watermarks>
  </Internal>
</CycloneDDS>

What this does:

  1. WhcHigh/WhcLow: Limits memory used for message queues (default can be 100MB+)
  2. Peers: Tells DDS to only look for other nodes on the same Pi
  3. ParticipantIndex: Lets DDS automatically assign participant IDs

How to use it in my containers:

In my Docker Compose file:

services:
  my-ros-node:
    # ... other config
    volumes:
      - ./cyclonedds.xml:/config/cyclonedds.xml  # Mount the config file
    environment:
      - CYCLONEDDS_URI=file:///config/cyclonedds.xml  # Tell ROS 2 to use it

Why this helps:

  1. Reduces memory usage by 80-90%
  2. Faster node startup times
  3. More reliable communication on Pi's limited network
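Outside Compose, the same configuration works with a plain docker run, mounting the file and pointing CycloneDDS at it (this assumes the cyclonedds.xml from above sits in the current directory):

```shell
# Mount the DDS config read-only and tell CycloneDDS to load it
docker run -it --rm --network host \
    -v "$(pwd)/cyclonedds.xml:/config/cyclonedds.xml:ro" \
    -e CYCLONEDDS_URI=file:///config/cyclonedds.xml \
    -e RMW_IMPLEMENTATION=rmw_cyclonedds_cpp \
    ros2-pi:humble
```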

GPU Access for Computer Vision

When I need GPU acceleration for computer vision tasks:

services:
  vision-node:
    # ... other config
    devices:
      - /dev/dri:/dev/dri  # GPU access
    environment:
      - LD_LIBRARY_PATH=/usr/lib/aarch64-linux-gnu/tegra:/usr/lib/aarch64-linux-gnu/tegra-egl:${LD_LIBRARY_PATH}

I2C and GPIO Access

For hardware interfacing:

services:
  hardware-node:
    # ... other config
    devices:
      - /dev/i2c-1:/dev/i2c-1
      - /dev/gpiomem:/dev/gpiomem
    privileged: true

Monitoring and Debugging


Checking Container Performance

I regularly monitor my container's resource usage:

docker stats

Debugging Container Issues

For troubleshooting, I exec into running containers:

docker exec -it ros2-talker bash

Then I can check ROS 2 nodes:

ros2 node list
ros2 topic list
ros2 topic echo /topic

My Log Management Strategy

I configure log rotation to prevent storage issues:

services:
  my-service:
    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: "3"
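The equivalent for a standalone container uses the matching --log-driver and --log-opt flags:

```shell
# Cap logs at three 10 MB files for a one-off container
docker run -d --name my-ros2-container \
    --log-driver json-file \
    --log-opt max-size=10m \
    --log-opt max-file=3 \
    ros2-pi:humble tail -f /dev/null
```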

Performance Tuning Tips I've Learned

Network Performance

Default Docker networking adds overhead that the Pi can't handle well.

I use host networking for ROS 2 containers:

services:
  my-ros-node:
    network_mode: host  # Uses Pi's network directly

Trade-offs:

  1. Pro: 20-30% better network performance
  2. Pro: Simpler ROS 2 discovery (no port mapping needed)
  3. Con: Less container isolation
  4. Con: Potential port conflicts

When I use each:

  1. Host networking: For ROS 2 communication (always)
  2. Bridge networking: For web services, databases (when isolation matters)

CPU Thermal Management

The Pi throttles the CPU when it gets too hot, causing the containers to run slowly.

I monitor this with:

# Check current temperature
vcgencmd measure_temp

# Check if throttling occurred
vcgencmd get_throttled
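The hex value is a bit field. This small helper is my own decoding of the commonly documented bits (worth cross-checking against the Raspberry Pi firmware documentation):

```shell
# Decode the bit field from `vcgencmd get_throttled`, e.g. "throttled=0x50000"
decode_throttled() {
    local hex="${1#throttled=}"   # strip the "throttled=" prefix
    local val=$((hex))
    [ $((val & 0x1)) -ne 0 ]     && echo "under-voltage now"
    [ $((val & 0x2)) -ne 0 ]     && echo "ARM frequency capped now"
    [ $((val & 0x4)) -ne 0 ]     && echo "throttled now"
    [ $((val & 0x10000)) -ne 0 ] && echo "under-voltage has occurred"
    [ $((val & 0x20000)) -ne 0 ] && echo "ARM frequency capping has occurred"
    [ $((val & 0x40000)) -ne 0 ] && echo "throttling has occurred"
    return 0
}

decode_throttled "throttled=0x50000"
# prints:
# under-voltage has occurred
# throttling has occurred
```

On the Pi itself you would feed it the live value: decode_throttled "$(vcgencmd get_throttled)".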

My Docker prevention strategy:

services:
  cpu-intensive-node:
    deploy:
      resources:
        limits:
          cpus: '2.0'  # Don't use all 4 cores
    environment:
      - OMP_NUM_THREADS=2  # Limit OpenMP threads

Troubleshooting Common Issues

Container won't start

  1. Check the Pi's available memory with free -h
  2. Verify the image architecture matches ARM64
  3. Look at container logs with docker logs container_name

ROS 2 Nodes can't communicate

  1. Ensure all containers use the same ROS_DOMAIN_ID
  2. Verify network_mode is set to host
  3. Check firewall settings with sudo ufw status
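When nodes stay silent, I first rule out a discovery problem with ROS 2's built-in multicast test, running each half in a different container (both need host networking for the result to be meaningful):

```shell
# Container A: wait for a multicast packet
ros2 multicast receive

# Container B: send one test packet
ros2 multicast send
# If multicast discovery can work, container A should report the received packet
```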

Poor Performance

  1. Monitor CPU usage with htop
  2. Check if containers are swapping with docker stats
  3. Verify adequate cooling (Pi can throttle when hot)

Final Thoughts

Running ROS 2 in Docker containers on the Raspberry Pi has transformed the way I develop robotics projects. The combination of containerization and proper resource optimization gives me:

  1. Consistent, reproducible deployments
  2. Better resource management
  3. Easier debugging and monitoring
  4. Scalable multi-node architectures

The key is to understand the Pi's limitations and optimize accordingly. With these techniques, I can run surprisingly complex ROS 2 systems on a single Pi 4.

Github Repository

All the files, Dockerfiles, and configurations mentioned in this article are available in my GitHub repository: https://github.com/nilutpolkashyap/ros2-docker-arm64