If you are a ROS developer/user and you blog about it, ROS wants those contributions on this page! All you need for that to happen is:
have an RSS/Atom blog (no Twitter/Facebook/Google+ posts)
open a pull request on the planet.ros tracker indicating your name and your RSS/Atom feed URL. (You can just edit the file and click "Propose File Change" to open a pull request.)
tag your ROS-related posts with any of the following categories: "ROS", "R.O.S.", "ros", "r.o.s."
Warnings
For security reasons, HTML iframe, embed, object, and JavaScript content will be stripped out. Only YouTube videos embedded via object and embed tags will be kept.
Guidelines
Planet ROS is one of the public faces of ROS and is read by users and potential contributors. The content remains the opinion of the bloggers but Planet ROS reserves the right to remove offensive posts.
Blogs should be related to ROS, but that does not mean they should be devoid of personal subjects and opinions: those are encouraged, since Planet ROS is a chance to learn more about ROS developers.
Posts can be positive and promote ROS, or constructive and describe issues, but should not contain useless flaming. We want to keep ROS welcoming :)
ROS covers a wide variety of people and cultures. Profanities, prejudice, lewd comments and content likely to offend are to be avoided. Do not make personal attacks or attacks against other projects on your blog.
Suggestions?
If you find any bug or have any suggestion, please file a bug on the planet.ros tracker.
For developers already working with ROS, the integration of industrial fieldbuses, I/Os, and functional safety into robotic applications often introduces unexpected challenges. ROS offers a flexible and modular software framework, although connecting it to industrial automation hardware typically requires additional integration layers and specialized knowledge.
This led to the idea of creating a solution that allows ROS developers to leverage a PLC where it excels, for example in deterministic control, industrial communication, and safety, while high performance computation and complex logic remain handled within ROS.
PLCnext Technology Architecture Overview
PLCnext Controls run PLCnext Linux, a real-time capable operating system that hosts the PLCnext Runtime. The Runtime manages deterministic process data and stores it in the Global Data Space (GDS).
Key architectural components:
PLCnext Linux: Yocto‑based embedded Linux
PLCnext Runtime (tasks, data handling, Axioline integration): Provides deterministic processing and the Global Data Space
Global Data Space (GDS): Central storage for process variables accessible from PLC programs and system apps
PLCnext Apps: Packaged software components that can be installed on the controller
PLCnext ROS Bridge
Concept
At its core, the PLCnext ROS Bridge is a custom ROS node with dedicated services running inside a Docker container, packaged as a PLCnext App. It provides a bidirectional communication gateway between the PLCnext Global Data Space (industrial side) and ROS topics (robotics side).
To illustrate this, consider a motor connected to the PLC via EtherCAT/FSoE or PROFINET/PROFIsafe. The motor, along with its associated safety functions, can be managed through simple PLC logic and represented by a set of variables. Depending on the implementation, these variables, such as setpoints, command velocities, etc., can be exposed to ROS. When the navigation stack publishes a command velocity, the ROS Bridge, as a subscriber to this topic, writes the received values to the corresponding variable on the PLC side. Likewise, information such as safety status or system state can be sent from the PLC to ROS and made available through a defined topic.
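Conceptually, the subscriber side of this mapping is thin: receive a ROS message, write its fields to the corresponding GDS instance paths. The sketch below uses an invented `GdsClient` stand-in and hypothetical instance paths; the real bridge talks to the PLCnext Runtime through its generated gRPC client.

```python
class GdsClient:
    """Stand-in for the bridge's generated gRPC client (illustrative only)."""

    def __init__(self):
        self.variables = {}

    def write(self, instance_path, value):
        # The real client would issue a gRPC write to the PLCnext Runtime.
        self.variables[instance_path] = value


def on_cmd_vel(gds, msg):
    """Map command-velocity fields to PLC instance paths exposed in the IDF."""
    gds.write("Arp.Plc.Eclr/MotorCtrl.SetpointLinear", msg["linear_x"])
    gds.write("Arp.Plc.Eclr/MotorCtrl.SetpointAngular", msg["angular_z"])


gds = GdsClient()
on_cmd_vel(gds, {"linear_x": 0.5, "angular_z": 0.1})
print(gds.variables)
```

The reverse direction (PLC to ROS) is the mirror image: a cyclic read of GDS variables published on a ROS topic.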
Commissioning Workflow
The ROS Bridge Node is generated through an automated code-generation process. This process is driven by the Interface Description File (IDF), which defines the PLC instance paths (variables) that should be exposed to ROS.
A typical build process performs the following steps:
Building the ROS Packages
Parse the IDF and generate the source code for the topics, publishers, and subscribers
Build the ROS Node
Place the resulting binaries and gRPC dependencies into a Docker image with a minimal ros-core installation.
Package the Docker image, together with required metadata, into a read-only PLCnext App.
The resulting App can be deployed to a PLCnext Controller using the Web-Based Management (WBM) interface. While it is possible to build everything in a local environment, the project is designed to be built via CI/CD. An example pipeline can also be found in the GitHub repository.
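To make the code-generation step concrete, here is a toy version. The IDF entries and the emitted rclpy-style declarations are invented for illustration and are not the project's actual schema:

```python
# Toy IDF-driven code generation: read variable descriptions, emit
# publisher/subscriber declarations. The entry format is illustrative only.
idf = [
    {"path": "Arp.Plc.Eclr/MotorCtrl.Status",
     "type": "std_msgs/msg/Int32", "direction": "publish"},
    {"path": "Arp.Plc.Eclr/MotorCtrl.Setpoint",
     "type": "geometry_msgs/msg/Twist", "direction": "subscribe"},
]


def generate(entries):
    lines = []
    for e in entries:
        # Derive a ROS topic name from the PLC instance path.
        topic = e["path"].split("/")[-1].replace(".", "_").lower()
        if e["direction"] == "publish":
            lines.append(f'pub_{topic} = node.create_publisher({e["type"]}, "{topic}", 10)')
        else:
            lines.append(f'sub_{topic} = node.create_subscription({e["type"]}, "{topic}", on_{topic}, 10)')
    return "\n".join(lines)


print(generate(idf))
```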
Runtime Behaviour
After installation, the App starts the container defined via the compose file. Inside this container, the generated ROS Node connects to the Global Data Space using the built gRPC client and then exposes the selected PLC variables via ROS publishers and subscribers. This enables ROS developers to integrate automation components, such as sensors, actuators, I/O modules, and fieldbus devices, into a ROS-based architecture through the GDS. Moreover, the Bridge sets up a set of services that enable users to read and write information at runtime.
I am currently working on optimizing high-bandwidth sensor data transmission (specifically LiDAR point clouds) using ROS 2 and Iceoryx for zero-copy communication.
I have successfully set up the Iceoryx environment and confirmed zero-copy works for fixed-size types. However, I am facing challenges when applying this to variable-size messages, such as sensor_msgs/msg/PointCloud2.
As I understand it, Iceoryx typically requires pre-allocated memory pools with fixed chunks. In the case of PointCloud2, the data size can vary depending on the LiDAR’s points (in my case, around 5.2MB per message).
I have two specific questions:
1. Best practices for variable-size data like PointCloud2
How should we handle messages where the size is not strictly fixed at compile-time while still maintaining zero-copy benefits? Should we always pre-allocate the “worst-case” maximum size for the underlying buffers? If anyone has implemented this for sensor_msgs/msg/PointCloud2 or similar dynamic types, I would appreciate any advice or examples.
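For context, my current worst-case sizing math looks like this; the header overhead constant is just a generous guess on my part, not an exact serialized size:

```python
# Worst-case chunk sizing for our LiDAR's PointCloud2 messages.
max_points = 260_000        # densest scan we expect from the sensor
point_step = 20             # bytes per point (x, y, z, intensity, ring)
header_overhead = 4096      # generous allowance for PointCloud2 metadata

payload = max_points * point_step
chunk_size = payload + header_overhead
# Round up to a 4 KiB boundary.
chunk_size = (chunk_size + 4095) // 4096 * 4096
print(chunk_size)           # roughly the ~5.2 MB per message I mentioned
```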
2. Tuning RouDi Configuration (size and count)
Regarding the roudi_config.toml (or the RouDi memory pool setup), what is the general rule of thumb for determining the optimal size and count?
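To be concrete, the section I mean follows the standard iceoryx mempool layout, with my own (unvalidated) size/count values filled in:

```toml
[general]
version = 1

[[segment]]

# A few small pools for housekeeping topics...
[[segment.mempool]]
size = 16384
count = 256

# ...and one pool sized for worst-case PointCloud2 chunks (~5.2 MB),
# with enough chunks for history depth x number of subscribers.
[[segment.mempool]]
size = 5206016
count = 32
```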
For high-resolution LiDAR data:
How do you balance between the number of chunks (count) and the buffer size for each chunk to avoid memory exhaustion without being overly wasteful?
Are there any common pitfalls when setting these values for a system with multiple subscribers?
I’ve already got Iceoryx installed and basic IPC working, but I want to ensure my configuration is production-ready for large-scale sensor data.
Qt Robotics Framework (QRF) introduces a fast, reliable way to connect Qt‑based applications (QML and C++) with ROS2 middleware. By automatically generating strongly‑typed Qt/QML bindings from ROS2 interface definitions, QRF enables robotics teams to integrate control, visualization, and simulation capabilities with minimal boilerplate and maximum safety.
In this webinar, Qt Group’s engineers and industry experts demonstrate how QRF simplifies prototyping, reduces integration complexity, and helps teams move rapidly from concept to production.
Whether you’re building robot controllers, diagnostics dashboards, or simulation environments, Qt Robotics Framework reduces the development cycle and improves reliability across your robotics stack.
Speakers:
Michele Rossi, Director, Industry, Qt Group
Przemysław Nogaj, Head of HMI Technology, Spyrosoft
I’d like to share a tool I built — ROS2 Studio, a single GUI that brings together the most common ROS2 monitoring and bag operations in one place.
What is ROS2 Studio?
ROS2 Studio is a PyQt5-based desktop GUI that runs as a native ROS2 CLI extension (ros2 studio). Instead of juggling multiple terminal windows, everything is accessible from one interface.
Features
Performance Monitor — real-time CPU, memory, and frequency graphs for any topic or node
Bag Recorder — multi-topic selection with custom save location
Bag Player — playback with adjustable rate (0.1x–10x) and loop controls
Bag to CSV Converter — full message deserialization via rosbag2_py to CSV
System Dashboard — CPU, memory, disk, network stats, ROS2 entities, and process monitor
Installation
cd ~/ros2_ws/src
git clone https://github.com/Sourav0607/ROS2-STUDIO
cd ~/ros2_ws
colcon build --packages-select ros2_studio
source install/setup.bash
ros2 studio
This document implements intuitive control of the PiPER robotic arm using a standard gamepad. With a common gamepad, you can operate the PiPER manipulator in a visualized environment, delivering a precise and intuitive control experience.
In main.py and main_virtual.py, select:
from src.gamepad_trac_ik import RoboticArmController
5. Execution Steps
Connect manipulator and activate CAN interface:
sudo ip link set can0 up type can bitrate 1000000
Connect gamepad: Connect the gamepad to the PC via USB or Bluetooth.
Launch control script: Run python3 main.py or python3 main_virtual.py in the project directory. It is recommended to test with main_virtual.py first in simulation mode.
Verify gamepad connection: Check console output to confirm the gamepad is recognized.
Web visualization: Open a browser and go to http://localhost:8080 to view the manipulator status.
Start control: Operate the manipulator according to the gamepad mapping.
6. Gamepad Control Instructions
6.1 Button Mapping
| Button | Short Press Function | Long Press Function |
| --- | --- | --- |
| HOME | Connect / Disconnect manipulator | None |
| START | Switch high-level control mode (Joint / Pose) | Switch low-level control mode (Joint / Pose) |
| BACK | Switch low-level command mode (Position-Velocity 0x00 / Fast Response 0xAD) | None |
| Y | Go to home position | None |
| A | Save current position | Clear current saved position |
| B | Restore previous saved position | None |
| X | Switch playback order | Clear all saved positions |
| LB | Increase speed factor (high-level) | Decrease speed factor (high-level) |
| RB | Increase movement speed (low-level) | Decrease movement speed (low-level) |
6.2 Joystick & Trigger Functions
Control
Joint Mode
Pose Mode
Left Joystick
J1 (Base rotation): Left / RightJ2 (Shoulder): Up / Down
End-effector X / Y translation
Right Joystick
J3 (Elbow): Up / DownJ6 (Wrist rotation): Left / Right
End-effector Z translation & Z-axis rotation
D-Pad
J4 (Wrist yaw): Left / RightJ5 (Wrist pitch): Up / Down
End-effector X / Y-axis rotation
Left Trigger (LT)
Close gripper
Close gripper
Right Trigger (RT)
Open gripper
Open gripper
6.3 Special Functions
6.3.1 Gripper Control
Gripper opening range: 0–100%
Quick toggle: When fully open (100%) or fully closed (0%), a quick press and release of the trigger toggles the state.
Hey everyone,
I’ve been working on FusionCore for the last few months. It’s a ROS 2 Jazzy sensor fusion package that aims to bridge the gap left by the deprecation of robot_localization.
There wasn’t anything user-friendly available for ROS 2 Jazzy. FusionCore merges IMU, wheel encoders, and GPS/GNSS into a single, reliable position estimate at 100 Hz. No need for manual covariance matrices: just one YAML config file.
It uses an Unscented Kalman Filter (UKF) with a complete 3D state, and it’s not just a port of robot_localization:
It features native GNSS fusion in ECEF coordinates, so you won’t run into UTM zone issues.
It supports dual-antenna heading right out of the box.
It automatically estimates IMU gyroscope and accelerometer bias.
It includes HDOP/VDOP quality-aware noise scaling, which means bad GPS fixes are automatically down-weighted.
It’s under the Apache 2.0 license, making it commercially safe.
And it’s built natively for ROS 2 Jazzy.
If you would like to get your product or service in front of over a thousand robot application developers, decision makers, and students, ROSCon Global is the place to be!
This year we are aiming for over 1,000 attendees, and if this event is anything like ROSCon 2025, our attendees will represent:
350+ companies in the field of robotics
50+ countries
60+ universities
80% filling roles as engineers or executive management
This year we will be offering our largest number of sponsorship opportunities yet, including the chance to:
Host a booth in our amazing ROSCon Global Expo hall. Booth locations are first come, first served, so do not delay.
Demonstrate your robot or device in our robot demo area.
Support our worldwide community with our free live stream and video archive, reaching thousands of viewers.
Include your stickers, one-sheet, or giveaway in our swag bag.
Support ROSCon attendees in their native language with our live captioning and translation service.
Be the life of the party by hosting our ROSCon Global reception and gala.
Feed and recharge our amazing ROSCon attendees by becoming a lunch or refreshment sponsor.
Elevate your startup’s visibility by joining our amazing ROSCon startup alley.
Connect with ROSCon attendees by supporting our award-winning and surprisingly good Whova app.
Show your support for underrepresented groups in robotics by sponsoring our inspiring ROSCon Diversity Scholars.
Our full ROSCon Global 2026 sponsorship prospectus is now available on the ROSCon website, and you can start your ROSCon journey by emailing roscon-2026-ec@roscon.org. We recommend you start your sponsorship conversation as soon as possible, as ROSCon booths and sponsorship opportunities tend to sell out quickly!
The iRoboCity2030 Summer School 2026, entitled “ROS 2: AI and Field Robotics”, offers undergraduate and graduate students from all over the world an intensive one-week experience focused on the technologies driving the new generation of autonomous and intelligent robots. The program combines theoretical and practical training in ROS 2 (Robot Operating System 2), Artificial Intelligence, and Field Robotics, guided by researchers from leading universities and technological centers in Madrid. Over five days, participants will advance both theoretical knowledge and practical skills, from the fundamentals of ROS 2 to the application of AI techniques in different field robotics domains such as autonomous driving, quadrupedal robots, agricultural robotics, and aerial robotics.
In addition to the academic program, the summer school will feature two plenary lectures delivered by internationally recognized leaders in the ROS 2 ecosystem. The first will be given by Steve Macenski (OpenNavigation), lead developer of the Nav2 system, widely regarded as the reference standard for autonomous robot navigation in ROS 2. The second will be delivered by Davide Faconti, creator of BehaviorTrees.CPP and Groot, tools that are extensively used for developing robotics applications based on Behavior Trees.
The school’s pedagogical approach is strongly practical and collaborative: participants will learn by doing, combining knowledge of artificial intelligence, control, and perception with their direct application in ROS 2, both in simulation environments and on real robotic platforms. Beyond its technical dimension, the school promotes intercultural collaboration and international teamwork, creating a dynamic environment for learning and experimentation.
This summer school is part of the iRoboCity2030 initiative, the robotics innovation network of the Community of Madrid, and represents a joint effort by the region’s leading universities and research centers to promote advanced training and knowledge transfer in robotics and artificial intelligence.
LIST OF SPEAKERS AND INSTRUCTORS
Steve Macenski (OpenNavigation) — “Nav2 & ROS 2 Overview: Techniques & Applications Powering an Industry”
Davide Faconti (BehaviorTrees.CPP / Groot) — “Being a roboticist in the era of AI: what changed and what didn’t”
Carlos Balaguer, UC3M
Francisco Martín Rico, URJC
José M. Cañas, URJC
Luis Miguel Bergasa, UAH
Fabio Sánchez, UAH
Miguel Antunes, UAH
Santiago Montiel, UAH
Rodrigo Gutiérrez, UAH
Christyan Cruz, UPM
Roemi Fernández, CSIC
Raúl Fernández, UCM
…
ORGANIZATION
This summer school is part of the iRoboCity2030 initiative, the Robotics Innovation Network of the Madrid Region. It represents a joint effort by leading universities and research institutions to promote advanced training and knowledge transfer in robotics and artificial intelligence.
SOCIAL EXPERIENCE
The Summer School will take place in the city centre of Madrid, which is well connected by public transport. The city is famous for its lively atmosphere, outdoor cafés, cultural events, and late-evening social life, providing countless opportunities to meet people and enjoy experiences beyond the classroom. With its warm climate, rich culture, excellent food, and safe, walkable neighborhoods, Madrid combines academic learning with an unforgettable social experience.
Point cloud pre-processing including deskewing, merging, and filtering traditionally requires a chain of nodes working in tandem, many of which are no longer actively maintained. Setting up these individual filtering stages often consumes excessive CPU cycles and precious DDS bandwidth.
What if you had a single, low-latency node that could voxelize, deskew, downsample, and merge scans in one go? By passing only mission-critical features to your odometry nodes and downstream consumers, you significantly reduce lag and bandwidth usage across your entire navigation or SLAM stack.
I developed Polka to solve this. It’s a drop-in replacement for multiple pre-processing nodes, and if you need to save CPU, you can run the entire pipeline on your GPU.
Latency across both the CPU and GPU pipelines is around 40 ms.
Current features:
Merge point clouds and laser scans
Input/output frame filtering
Defined footprint, height, and angular box filters
Voxel downsampling
GPU acceleration support
Deskewing point clouds (WIP)
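For readers new to the technique, voxel downsampling keeps one point per occupied grid cell. A minimal numpy sketch (not Polka’s actual implementation, which runs on CPU or GPU):

```python
import numpy as np


def voxel_downsample(points, voxel_size):
    """Keep one representative point per occupied voxel."""
    # Quantize each point to its integer voxel index.
    idx = np.floor(points / voxel_size).astype(np.int64)
    # np.unique over rows yields one entry per occupied voxel.
    _, keep = np.unique(idx, axis=0, return_index=True)
    return points[np.sort(keep)]


cloud = np.array([[0.01, 0.02, 0.0],
                  [0.03, 0.01, 0.0],   # same 0.1 m voxel as the first point
                  [0.51, 0.02, 0.0]])
print(voxel_downsample(cloud, 0.1))   # two points survive
```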
I’d love your feedback, and if you find the project useful, please consider leaving a star on GitHub!
Tech Solstice 2026 is the annual technology festival hosted by the Manipal Institute of Technology (MIT), Bengaluru, featuring a diverse lineup of competitive robotics events.
We invite students, robotics enthusiasts, and builders to participate in a series of hands-on challenges designed to test speed, control systems, autonomous navigation, and combat robotics.
Total Prize Pool: ₹2.6 Lakhs+
Robotics Events (further details can be found on the website)
• Robo Race
• Cosmo Clench
• Maze Runner
• Line Follower
• Robo Wars
Format & Timeline
Event Dates: 27 March – 29 March 2026
Participants will compete on-site across multiple rounds depending on the event format, with final winners determined through performance-based evaluation.
Participants are encouraged to utilize embedded systems, ROS-based architectures, simulation tools, and custom-built hardware where applicable.
Please come and join us for our next meeting on Mon, Mar 23, 2026, 4:00–5:00 PM UTC, where we plan to try out Transitive Robotics. Transitive Robotics is a service that allows users to deploy and manage robots through full-stack robotic capabilities. These capabilities include data capture and storage, which makes Transitive Robotics a useful case study for our focus on Logging & Observability.
Last session, we continued our tryout of the Canonical Observability Stack (COS) from the previous meeting. We were successful in hosting the full stack and viewing the public pages, as well as connecting a simulated robot to the stack. We could view logs and system statistics from the simulated robot. If you’re interested in watching the recorded part of the meeting, it is available on YouTube.
In the previous session, we built a complete MoveIt2 package from a URDF model using the MoveIt Setup Assistant, and realized motion planning and visual control of the robotic arm.
In this session, we will explain how to set up a co-simulation environment for MoveIt2 and Isaac Sim. By configuring the ROS Bridge, adjusting hardware interface topics, and integrating the URDF model, we will achieve seamless connection between the simulator and motion planning, providing a complete practical solution for robot algorithm development and system integration.
Navigate to the Isaac Sim folder, use the script to launch the ROS Bridge Extension, then click Start to launch Isaac Sim:
cd isaac-sim-standalone-5.1.0-linux-x86_64/
./isaac-sim.selector.sh
Then drag and drop the newly downloaded USD model into Isaac Sim to open it:
In the USD file, you need to add an ActionGraph for communication with the ROS side. The ActionGraph is as follows:
Configure ActionGraph
articulation_controller
Modify targetPrim according to actual conditions; targetPrim is generally /World/nero_description/base_link:
ros2_subscribe_joint_state
Modify topicName according to actual conditions; topicName must correspond to the URDF, here it is isaac_joint_commands:
ros2_publish_joint_state
Modify targetPrim and topicName according to actual conditions; targetPrim is generally /World/nero_description/base_link; topicName must correspond to the URDF, here it is isaac_joint_states:
After starting the simulation, run ros2 topic list in the terminal; the bridge topics configured above (including /isaac_joint_commands and /isaac_joint_states) should be listed:
Modify MoveIt Package
Open nero_description.ros2_control.xacro and add topic parameters:
<hardware>
<!-- By default, set up controllers for simulation. This won't work on real hardware -->
<!-- <plugin>mock_components/GenericSystem</plugin> -->
<plugin>topic_based_ros2_control/TopicBasedSystem</plugin>
<param name="joint_commands_topic">/isaac_joint_commands</param>
<param name="joint_states_topic">/isaac_joint_states</param>
</hardware>
Save the changes, rebuild the workspace, and launch MoveIt2:
cd ~/nero_ws
colcon build
source install/setup.bash
ros2 launch nero_moveit2_config demo.launch.py
My name is Ciprian Pater, and I’m reaching out on behalf of PUBLICAE (formerly a student firm at UiA Nyskaping Incubator) to introduce you to NWO Robotics Cloud (nworobotics.cloud) - a comprehensive production-grade API platform we’ve built that extends and enhances the capabilities of the groundbreaking Xiaomi-Robotics-0 model. While Xiaomi-Robotics-0 represents a remarkable achievement in Vision-Language-Action modeling, we’ve identified several critical gaps between a research-grade model and a production-ready robotics platform. Our API addresses these gaps while showcasing the full potential of VLA architecture.
(Attaching some screenshots below for UX reference).
We at the JdeRobot org are participating in Google Summer of Code 2026. All our proposed projects are on open source robotics, and most of them (7/8) involve ROS 2 related software. They are all described in our ideas list for GSoC-2026, including their summaries and illustrative videos.
Project #1: PerceptionMetrics: GUI extension and support for standard datasets and models
Project #2: Robotics Academy: extend C++ support for more exercises
Project #3: Robotics Academy: New power tower inspection using deep learning
Project #4: RoboticsAcademy: drone-cat-mouse chase exercise, two controlled robots at the same time
Project #5: Robotics Academy: using the Open3DEngine as robotics simulator
Project #6: VisualCircuit: Improving Functionality & Expanding the Block Library
Project #7: Robotics Academy: Exploring optimization strategies for RoboticsBackend container
Project #8: Robotics Academy: palletizing with an industrial robot exercise
Motivated candidates are welcome! Please check the Application Instructions, as we request a Technical Challenge and some interaction in our GitHub repositories before you talk to our mentors and submit your proposal.
I’m pleased to announce that RTI released enhanced support for ROS 2 and rmw_connextdds today. The new Connext Robotics Toolkit makes it much easier for ROS users to take advantage of Connext and DDS features to improve their development experience.
As many of you know, RTI has supported ROS 2 since the very beginning by providing our core DDS implementation at no charge for non-commercial use. The Connext Robotics Toolkit extends that support to our full Connext Professional product. This includes our broader platform around DDS – things like network tuning and debugging tools, system observability, and diverse network support, from shared memory to WAN.
In addition, we’re expanding our free license to include commercial prototyping. This means startups and other product teams building ROS-based systems can now take advantage of Connext at no charge. Starting with production-grade communication infrastructure will make it easier to scale from prototype to deployment.
The Connext Robotics Toolkit is currently available for Kilted Kaiju and will be available for Lyrical Luth upon its release. If you’re exploring ways to leverage ROS in commercial systems or looking at RMW options beyond the default, you can find more details and installation instructions here: Connext Robotics Toolkit for ROS | RTI
Happy to answer questions or discuss with anyone interested.
As ROS2 fleets move into commercial deployments serving external clients, one infrastructure gap is shared economic verification between the fleet operator and their customer. The operator’s internal logs don’t give the client independent verification of what work was completed, leading to manual reconciliation and disputes as fleets scale.
Built a settlement layer that monitors ROS2 lifecycle events and generates verified, timestamped records per robot per completed task. Both the operator and the client can verify these records independently. Each robot builds a portable work history over time, useful for service billing, equipment valuation, and proving utilization to potential customers.
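As a generic illustration (not the actual implementation), such a record log can be as simple as a hash-linked chain that both parties can re-verify independently:

```python
import hashlib
import json
import time


def append_record(chain, robot_id, task_id, status):
    """Append a tamper-evident work record; each entry hashes its predecessor."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    record = {
        "robot_id": robot_id,
        "task_id": task_id,
        "status": status,          # e.g. a lifecycle transition like 'task_completed'
        "timestamp": time.time(),
        "prev_hash": prev_hash,
    }
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()
    ).hexdigest()
    chain.append(record)
    return record


def verify(chain):
    """Operator and client can each run this check on their own copy."""
    for i, rec in enumerate(chain):
        body = {k: v for k, v in rec.items() if k != "hash"}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if rec["hash"] != digest:
            return False
        if i > 0 and rec["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True


chain = []
append_record(chain, "amr_07", "pallet_move_123", "task_completed")
append_record(chain, "amr_07", "pallet_move_124", "task_completed")
print(verify(chain))  # True
```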
Like many of us, I appreciate the power and flexibility of ROS 2, but I’ve always found the amount of manual boilerplate to be a bottleneck for rapid development. Keeping track of all the configuration details (making sure CMakeLists.txt and package.xml are perfectly synced, or manually wiring launch files and topic connections) takes a significant amount of time. I wanted to find a way to automate this infrastructure setup so I could focus purely on writing the actual robotics logic.
To solve this, I started building ROS 2 Blueprint Studio, a visual node-based editor (inspired by Unreal Engine Blueprints) designed to take the routine off your shoulders.
Under the Hood (Architecture)
I tried to avoid any “black magic” and stick entirely to standard ROS 2 practices:
1. Code Generation & Build System
The studio doesn’t compile the code itself; it acts as a smart templating engine. Creating a standard node generates a base C++ template. If you duplicate a node (from the palette or canvas), it creates an independent file with a new name and copied code. Modifying the copy doesn’t break the parent. For the actual build, it relies on standard colcon build under the hood.
2. File Watcher & Dependency Tree
To build the dependency tree, I wrote a custom FileWatcher. Before building, it scans the files to check for includes and node communication. For performance, it only parses files that have been modified. (I realize this might theoretically cause “phantom connections” on massive graphs, so I plan to add a forced full-rebuild mode in the future.)
3. Topic Routing (Two Approaches)
Node linking currently works in two modes:
Hardcoded (Bottom-Up): If publisher and subscriber topic names are explicitly hardcoded in your C++ or Python files, the UI detects this and automatically draws a visual “locked” wire between them.
Visual (Top-Down): You can define the topic name only on the publisher, drag a visual wire to a subscriber, and the FileWatcher will find a special placeholder in the subscriber’s code and automatically replace it with the publisher’s topic name. (Full disclosure: the visual routing is still a bit unstable and not recommended for huge projects yet, but I’m refining it).
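To illustrate the hardcoded (bottom-up) mode: topic detection can be approximated with a regex pass over the source files. This is a simplified stand-in for what the FileWatcher does, handling only string-literal topic names:

```python
import re

# Match topic names in rclcpp/rclpy publisher and subscriber declarations.
PUB = re.compile(r'create_publisher\s*[<(].*?["\']([^"\']+)["\']', re.S)
SUB = re.compile(r'create_subscription\s*[<(].*?["\']([^"\']+)["\']', re.S)


def find_wires(sources):
    """Return (publisher_file, subscriber_file, topic) tuples for matching topics."""
    pubs, subs = {}, {}
    for name, code in sources.items():
        for topic in PUB.findall(code):
            pubs.setdefault(topic, []).append(name)
        for topic in SUB.findall(code):
            subs.setdefault(topic, []).append(name)
    return [(p, s, t) for t in pubs for p in pubs[t] for s in subs.get(t, [])]


sources = {
    "camera.cpp": 'pub_ = create_publisher<sensor_msgs::msg::Image>("image_raw", 10);',
    "detector.py": 'self.create_subscription(Image, "image_raw", self.cb, 10)',
}
print(find_wires(sources))  # [('camera.cpp', 'detector.py', 'image_raw')]
```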
4. Runtime Environment (Docker)
I chose Docker (osrf/ros:humble-desktop) as the execution environment. Why?
Setting up ROS 2 natively on Windows is a special kind of pain.
It provides painless deployment and saves you from dependency hell when migrating to future ROS versions.
You can send your project folder to someone who doesn’t even have ROS installed, and their system will build and run your entire architecture in just a few clicks.
The Ask: Roast My Architecture
The project is currently in early alpha. Honestly, my biggest doubts right now are around the core architecture and the automated build system (package and launch file generation).
I would be incredibly grateful if experienced ROS architects could take a look at the repo, point out my blind spots, and give me some harsh architectural critique. I’d much rather rebuild the foundation now than drag architectural flaws into a full release.
mcp-ros2-logs is an open-source MCP server that merges ROS2 log files from multiple nodes into a unified timeline and exposes query tools for AI agents like Claude, GitHub Copilot, and Cursor.
The problem: ROS2 writes each node’s logs to a separate file. Debugging a cascading failure across sensor_driver -> collision_checker -> motion_planner means manually correlating timestamps across 3+ files.
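The core merge is conceptually simple, as this sketch shows; the log line layout below is an assumption for illustration (real ROS2 log formats vary by launcher and configuration):

```python
import re

# Assumed line layout: "<epoch_secs> [<severity>] [<node>]: <message>"
LINE = re.compile(
    r'^(?P<ts>\d+\.\d+)\s+\[(?P<sev>\w+)\]\s+\[(?P<node>[\w./]+)\]:\s+(?P<msg>.*)$'
)


def merge_logs(files):
    """files: {node_name: log_text}. Returns one timeline sorted by timestamp."""
    events = []
    for text in files.values():
        for line in text.splitlines():
            m = LINE.match(line)
            if m:
                events.append((float(m["ts"]), m["sev"], m["node"], m["msg"]))
    return sorted(events)


logs = {
    "sensor_driver": "100.10 [ERROR] [sensor_driver]: USB timeout\n",
    "collision_checker": "100.25 [ERROR] [collision_checker]: no /scan data\n",
    "motion_planner": "100.40 [FATAL] [motion_planner]: aborting\n",
}
for ts, sev, node, msg in merge_logs(logs):
    print(ts, sev, node, msg)
```

The MCP server layers query tools, anomaly detection, and bag correlation on top of this unified timeline.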
What this does: Install it with pipx install mcp-ros2-logs, register it with your AI assistant, and ask natural language questions like:
“show me all errors with 5 messages of context around each”
“compare good_run vs bad_run — what changed?”
“detect anomalies in this run”
“correlate errors with bag topics — what was happening on /scan when the planner crashed?”
Features:
12 MCP tools: query logs, node summaries, timelines, run comparison, anomaly detection, bag file parsing, log-to-bag topic correlation, live tailing
Parses ROS2 bag files (.db3/.mcap) without ROS2 installed — extracts topic metadata for correlation with log errors
Works with Claude Code, VS Code Copilot, Cursor, and any MCP-compatible client
No ROS2 installation required — it just reads files from disk
Example workflow: Point the agent at a run where a lidar USB connection dropped. It loads the logs, correlates the errors with bag topic data, and reconstructs the full causal chain: USB timeout → /scan messages stopped → collision_checker failed → motion_planner aborted. The whole analysis takes about 10 seconds.
Title: Rewire — stream ROS 2 topics to Rerun with zero ROS 2 build dependencies
Hi all,
I’ve been working on Rewire, a standalone bridge that streams live ROS 2 topics to Rerun for real-time visualization. I wanted to share it here and get feedback from the community.
The problem it solves
Setting up visualization tooling in ROS 2 often means pulling in dependencies, building packages, and dealing with middleware configuration. I wanted something that just works: point it at a DDS/Zenoh network and start visualizing.
How it works
Rewire is a single Rust binary that speaks the DDS and Zenoh wire protocols directly. It’s not a ROS 2 node: it doesn’t join the ROS graph or require any ROS 2 installation. It acts as a passive observer.
curl -fsSL https://rewire.run/install.sh | sh
rewire record -a # subscribe to all topics
What’s supported
53 type mappings across sensor_msgs, geometry_msgs, nav_msgs, tf2_msgs, vision_msgs, std_msgs, and rcl_interfaces — including Image, PointCloud2, LaserScan, TF, Odometry, Detection2D/3DArray, and more.
Custom message mappings — map any ROS 2 message type to Rerun archetypes via a JSON5 config file, no recompilation.
URDF visualization — loads from /robot_description, resolves meshes via AMENT_PREFIX_PATH.
Full TF tree — static + dynamic transforms with coordinate frame visualization
Per-topic diagnostics — Hz, bandwidth, drops, and latency rendered as Rerun Scalars.
Linux (x86_64, aarch64) and macOS (Intel + Apple Silicon).
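As a rough illustration of what a custom mapping file could look like (the keys and structure here are hypothetical; check the Rewire docs for the real schema):

```json5
// Illustrative shape only, not the actual config schema.
{
  mappings: [
    {
      ros_type: "my_msgs/msg/WheelOdometry",   // hypothetical custom message
      archetype: "Points3D",                   // target Rerun archetype
      fields: {
        positions: "wheel_positions",          // ROS field -> archetype component
      },
    },
  ],
}
```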
Install options
Install script: curl -fsSL https://rewire.run/install.sh | sh
prefix.dev: pixi global install -c rewire rewire
APT repository for Debian/Ubuntu
I’d love to hear your thoughts — especially around which message types or workflows you’d want supported next. If you run into issues, feedback is very welcome.

When working with lidar data, users are usually directed to PointCloud2 messages, which represent the data as a list of 3D points with additional attributes. While this nicely mirrors the PCL representation and fits the majority of applications working with 3D point cloud data, it isn’t how modern lidar sensors natively represent their data.
Problem Statement
This representation has several drawbacks, highlighted in the following (non-comprehensive) list:
With the rapid increase in lidar resolution, PointCloud2 messages can be hefty to transport. To date, many DDS implementations struggle to keep up with the actual sensor frame rate when transporting high-resolution PointCloud2 messages on low- to medium-compute nodes.
An option to reduce the bandwidth requirement would be to use dense point clouds, i.e. transport only valid points. However, by doing so we lose the structured nature of the data for devices that natively generate it as a structured 2D grid.
Many image-processing operations benefit from adjacency information, which allows quick lookup of neighboring pixels. For example, ground plane removal can be implemented more efficiently directly on 2D range data than on a 3D representation.
One could also directly apply existing 2D neural networks such as YOLO to lidar data in its 2D representation.
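To put the bandwidth point in perspective, here is a rough back-of-envelope calculation, assuming a 128 x 2048 sensor at 10 Hz with four float32 fields per point (the exact field layout varies by driver):

```python
# Back-of-envelope: PointCloud2 bandwidth for a high-resolution lidar.
# Assumes 128 rings x 2048 columns at 10 Hz, with four float32 fields
# (x, y, z, intensity) per point -- 16 bytes per point.
points_per_scan = 128 * 2048
bytes_per_point = 4 * 4          # four float32 fields
scans_per_second = 10

bandwidth = points_per_scan * bytes_per_point * scans_per_second
print(f"{bandwidth / 1e6:.0f} MB/s")  # ~42 MB/s before DDS overhead
```

A raw uint16 range image of the same scan would be an eighth of that, which is the gap this proposal is after.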
A potential critique of this suggestion might be that we don’t need a new message, since we already have sensor_msgs::Image, which can fulfill this role for users who need it. In fact, the ouster_ros driver can optionally publish the range data and other byproducts of the sensor as sensor_msgs::Image on separate topics, and I am aware of many users who do utilize these topics instead of the 3D point cloud data.
While this works fine if you are only interested in processing each channel individually, it breaks down if you need to access more than one channel in the same operation, which is often the case. A simple example: a user may want to filter certain returns (range data) based on reflectivity values and adjacency data simultaneously.
A common approach to this problem in ROS would be to use the `ApproximateTime` filter. Doing so, however, adds some latency and CPU overhead to synchronize data channels that were originally already synchronized.
A LidarScan message would act here as a multi-spectral image in which all channels are memory-aligned, with full data correlation ensured by the sensor and no software synchronization overhead.
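To make the synchronization overhead concrete, here is a minimal, ROS-free sketch of approximate-time pairing, the kind of per-frame buffering and matching work that `message_filters` performs and that a combined message would avoid (timestamps and payloads here are purely illustrative):

```python
# Minimal sketch of approximate-time pairing: the kind of work
# message_filters.ApproximateTimeSynchronizer does for you, and the
# overhead a combined LidarScan message would avoid entirely.
def pair_by_time(range_msgs, refl_msgs, slop=0.005):
    """Match (timestamp, payload) tuples whose stamps differ by < slop."""
    pairs = []
    for t_r, rng in range_msgs:
        # A search per message: buffering and comparison cost on every frame.
        best = min(refl_msgs, key=lambda m: abs(m[0] - t_r), default=None)
        if best is not None and abs(best[0] - t_r) < slop:
            pairs.append((rng, best[1]))
    return pairs

ranges = [(0.000, "r0"), (0.100, "r1")]
refls  = [(0.001, "i0"), (0.098, "i1")]
print(pair_by_time(ranges, refls))  # [('r0', 'i0'), ('r1', 'i1')]
```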
The proposal
We are proposing the addition of a new ROS sensor message that mirrors the native format of the majority of lidar sensors (whether spinning or solid-state). With this proposal we would also like to invite other lidar vendors to contribute, to make sure the format encompasses the entire spectrum of lidar sensors.
A quick draft of a LidarScan message could look like this:
std_msgs/Header header
# Dimensions of the scan (e.g., 128 channels x 2048 columns)
uint32 height
uint32 width
# --- Geometry Metadata ---
# Horizontal and Vertical FOV/Resolution info to allow projection to 3D
# without needing a full PointCloud2 blob.
float32 vertical_fov_min
float32 vertical_fov_max
float32 horizontal_fov_min
float32 horizontal_fov_max
# --- Channel Data (The "Image" approach) ---
# Each channel (Range, Intensity, Reflectivity, etc.) is stored in this list.
# This mirrors the 'PointField' logic but at a 2d-grid level.
LidarChannel[] channels
# The actual raw buffer containing all interleaved or planar channel data.
# Using uint8[] allows for Zero-Copy compatibility.
uint8[] data
# --- Scaling and Metrics ---
# Different vendors use different units:
# Ouster (mm) vs. Velodyne (m) vs. Hesai (cm) problem.
# Range = (raw_value * multiplier) + offset
float64 range_multiplier
float64 range_offset
And the definition of LidarChannel:
string name # "range", "intensity", "reflectivity", "ambient", "near_ir"
uint32 offset # Offset from start of data row
uint8 datatype # uint8, uint16, uint32, float32, etc.
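Given these fields, a receiver could pull a single channel out of the raw buffer and apply the range scaling roughly like this (a Python sketch that assumes a planar channel layout; the function and parameter names are illustrative, following the draft above):

```python
import numpy as np

# Sketch: extract one channel from a planar LidarScan buffer and apply
# the "raw * multiplier + offset" range scaling from the draft message.
# The planar layout (one contiguous block per channel) is an assumption.
def extract_channel(data, height, width, offset, dtype,
                    multiplier=1.0, bias=0.0):
    n = height * width
    raw = np.frombuffer(data, dtype=dtype, count=n,
                        offset=offset).reshape(height, width)
    return raw.astype(np.float64) * multiplier + bias

# Toy scan: a 2x3 grid of uint16 range values in millimetres.
raw = np.arange(6, dtype=np.uint16)
buf = raw.tobytes()
ranges_m = extract_channel(buf, 2, 3, offset=0, dtype=np.uint16,
                           multiplier=0.001)  # mm -> m
print(ranges_m)
```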
While the draft above works for sensors with a uniform distribution of laser beams, not all vendors share that layout, Ouster included, which makes the Geometry Metadata section insufficient:
The vertical beams of Ouster spinning sensors are not uniformly distributed due to the calibration process. This means we need to extend the previous definition to include the beam angles in the LidarScan message body:
# --- Non-Uniform Geometry Metadata ---
# These arrays allow the receiver to project Range -> 3D.
# vertical_angles[height]: The elevation angle for each ring (in radians).
float32[] vertical_angles
# other attributes might also be needed
# horizontal_angles[width]: [optional] The azimuth angle for each column (in radians).
# int32[] beam_time_offset: [optional] To handle "staggered" firing patterns within a single column.
This solves the problem and allows users to project the range data into 3D, but it adds overhead that increases the message size. These arrays essentially define the intrinsics of the lidar sensor; however, transporting them with every LidarScan message reduces or eliminates most of the gains of transporting raw range data instead of projected XYZ points. A better approach would be to split the beam information and the lidar data into two separate messages, where the sensor info is transported only once, earlier, during the connection phase. This is not a new pattern in ROS: `sensor_msgs/CameraInfo` already describes the intrinsics of a camera in exactly this way.
By moving these intrinsic fields into a separate message, we retain the same gains and keep the LidarScan message lean. The definition of a sensor_msgs::LidarInfo message would be something like:
std_msgs/Header header
float32[] vertical_angles
float32[] horizontal_angles
int32[] beam_time_offsets
# --- Scaling and Metrics ---
float64 range_multiplier
float64 range_offset
# Plus other static factory data (intrinsic/extrinsic)
And the revised LidarScan message becomes:
std_msgs/Header header
uint32 height
uint32 width
LidarChannel[] channels
uint8[] data
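With the intrinsics delivered separately, a receiver can still project the range channel to 3D. Here is a sketch under a simple spherical projection model (field names mirror the drafts above; real sensors may need additional per-beam corrections such as the staggered firing offsets):

```python
import numpy as np

# Sketch: project a (height, width) range image to XYZ using per-beam
# angles from a LidarInfo-style message. The simple spherical model
# used here is an assumption for illustration.
def project_to_xyz(ranges, vertical_angles, horizontal_angles):
    # ranges: (H, W) metres; vertical_angles: (H,); horizontal_angles: (W,)
    elev = vertical_angles[:, None]      # (H, 1)
    azim = horizontal_angles[None, :]    # (1, W)
    x = ranges * np.cos(elev) * np.cos(azim)
    y = ranges * np.cos(elev) * np.sin(azim)
    z = ranges * np.sin(elev)
    return np.stack([x, y, z], axis=-1)  # (H, W, 3)

ranges = np.full((2, 4), 10.0)                      # flat 10 m returns
vert = np.array([0.0, np.pi / 6])                   # two beam elevations
horiz = np.linspace(0, 2 * np.pi, 4, endpoint=False)
xyz = project_to_xyz(ranges, vert, horiz)
print(xyz.shape)  # (2, 4, 3)
```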
NOTES:
This format is better suited to filtering and perception stacks.
It is important to note that this proposal does not suggest that vendors of Lidar sensors or users should stop using the PointCloud2. It mainly suggests the addition of a new message type that mirrors the native format of the majority of Lidar sensors, reducing overhead and providing better synchrony.
The idea here is to come up with standard sensor_msgs::LidarScan and sensor_msgs::LidarInfo messages, and to fully abstract away the process that converts this native lidar sensor format into a 3D point cloud for any sensor.
Once we get initial feedback from the community, the idea is for Ouster and others interested in this concept to build a PoC of the proposal and make sure it covers all the basic necessities before committing to the final interface.
I am also aware of the other proposals around Native Buffers (rcl::Buffer) that are already in flight, and we plan to support them from the get-go, as there is a large intersection between the motivation behind Native Buffers and the use of LidarScan for perception-type tasks and other workloads.
The ROSCon call for proposals is now open! You can find full proposal details on the ROSCon 2026 website.
ROSCon Global 2026 will be held in Toronto, Canada, from September 22nd to September 24th, 2026. This year, we are officially adopting the “Global” moniker to reflect our growing international community and the many regional ROSCons happening worldwide.
Talk Proposals: Due by Sun, Apr 26, 2026, 12:00 AM UTC. Submit via HotCRP.
Birds of a Feather (BoF): Due by Fri, Jul 24, 2026, 12:00 AM UTC; submissions opening soon.
Important Dates
Diversity Scholarship Deadline: Sun, Mar 22, 2026, 12:00 AM UTC. Submit here.
Workshop Acceptance Notification: Tue, May 12, 2026 12:00 AM UTC
Ticket Sales Begin: Mon, May 11, 2026 12:00 AM UTC
Presentation Acceptance Notification: Tue, Jun 9, 2026 12:00 AM UTC
Diversity Scholarship Program
If you require financial assistance to attend ROSCon Global and meet the qualifications, please apply for our Diversity Scholarship Program. Thanks to our sponsors, scholarships include complimentary registration, four nights of hotel accommodation, and a travel stipend.
The deadline for the scholarship is Sun, Mar 22, 2026 12:00 AM UTC, which is well before the CFP deadlines to allow for travel planning and visa processing.
What are we looking for?
The core of ROSCon is community-contributed content. We are looking for:
Presentations: Technical talks (10-30 minutes) on new tools, libraries, or novel applications.
Birds of a Feather: Self-organized meetings for specific interest groups (e.g., medical robotics, space, or debugging).
We want to see your robots! Whether it is maritime robots, lunar landers, or industrial factory fleets, we want to hear the technical lessons you learned. We encourage original content, high-impact ideas, and, as always, a focus on open-source availability.
How to Prepare
If you are new to ROSCon we recommend reviewing the archive of previous talks. You are also welcome to use this Discourse thread to workshop your ideas and find collaborators.
Questions and concerns can be directed to the ROSCon Executive Committee (roscon-2026-ec@openrobotics.org) or posted in this thread. We look forward to seeing the community in Toronto!
I’m happy to introduce PlotJuggler Bridge, a lightweight server that exposes ROS 2 or DDS topics over WebSocket, allowing remote tools like PlotJuggler to access telemetry without directly participating in the middleware network.
In many robotics setups, accessing telemetry from another computer is harder than it should be. DDS discovery over WiFi can be unreliable, opening DDS networks outside the robot can create configuration issues, and installing a full ROS 2 environment on every machine used for debugging is often inconvenient.
PlotJuggler Bridge solves this by acting as a gateway between the middleware network and external clients.
It runs close to the robot, reads the topic data, and exposes it through a simple WebSocket endpoint that any client can connect to.
This approach keeps the ROS/DDS network local while making telemetry easily accessible from other machines.
The project is available here:
Why it is useful
This is especially helpful in scenarios such as:
monitoring a robot remotely over WiFi
accessing telemetry from Windows or macOS machines without ROS installed
avoiding DDS discovery and networking configuration issues
debugging systems without exposing the full middleware network
connecting tools without needing message definitions compiled locally
Because the bridge performs runtime schema discovery, clients can access topics even if they use custom ROS messages, without requiring those message packages to be installed on the client machine.
The bridge also aggregates and optionally compresses data, which helps reduce bandwidth usage and improves stability when streaming telemetry over wireless networks.
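What the client ultimately receives are time-stamped values it can plot. As a toy illustration of turning one decoded telemetry frame into plottable series, here is a sketch in which the frame layout (topic name, stamp, nested data) is hypothetical, not the bridge's actual wire format:

```python
import json

# Sketch: flatten a nested telemetry frame into (series, stamp, value)
# samples, the shape plotting tools like PlotJuggler work with. The
# frame structure here is an illustrative assumption.
def flatten_frame(frame):
    samples = []
    def walk(prefix, value):
        if isinstance(value, dict):
            for key, child in value.items():
                walk(f"{prefix}/{key}", child)
        elif isinstance(value, (int, float)):
            samples.append((prefix, frame["stamp"], float(value)))
    walk(frame["topic"], frame["data"])
    return samples

raw = '{"topic": "/odom", "stamp": 12.5, "data": {"pose": {"x": 1.0, "y": 2.0}}}'
print(flatten_frame(json.loads(raw)))
# [('/odom/pose/x', 12.5, 1.0), ('/odom/pose/y', 12.5, 2.0)]
```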
Main features
PlotJuggler Bridge includes several features designed for real-world robotics workflows:
WebSocket access through a single endpoint
automatic runtime discovery of topic schemas
support for custom ROS message types without client-side compilation
aggregation of messages for efficient streaming
optional ZSTD compression
support for multiple simultaneous clients
bandwidth-friendly handling of large messages by stripping large array fields while preserving useful metadata
The bridge subscribes to topics in the ROS/DDS network and exposes them through a WebSocket server.
External tools can connect and receive the streamed telemetry without joining the middleware network.
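The bandwidth-friendly handling of large arrays mentioned in the feature list can be pictured like this (a Python sketch with an illustrative threshold and summary format, not the bridge's actual implementation):

```python
# Sketch: replace large array fields in a decoded message with a small
# metadata summary, keeping everything else intact. The max_len
# threshold and the summary shape are illustrative choices.
def strip_large_arrays(msg, max_len=16):
    out = {}
    for key, value in msg.items():
        if isinstance(value, dict):
            out[key] = strip_large_arrays(value, max_len)
        elif isinstance(value, list) and len(value) > max_len:
            out[key] = {"stripped": True, "length": len(value)}
        else:
            out[key] = value
    return out

scan = {"header": {"stamp": 1.5}, "ranges": list(range(1000)), "range_max": 30.0}
print(strip_large_arrays(scan))
# {'header': {'stamp': 1.5}, 'ranges': {'stripped': True, 'length': 1000}, 'range_max': 30.0}
```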
Quick to start
The bridge can typically be up and running in less than 5 minutes.
Setup instructions are available in the repository README:
You will need PlotJuggler 3.16 or newer, which includes the WebSocket client plugin:
Basic usage
Once the bridge is running, the workflow is straightforward:
Start the bridge on the machine connected to the ROS/DDS network.
Open PlotJuggler on any computer.
Connect to the WebSocket Client using the bridge address.
The available topics will be discovered automatically and can be inspected immediately.
About the work
My name is Álvaro Valencia, and I am currently working on PlotJuggler as an intern while finishing the last months of my Robotics Software Engineering degree.
I collaborate closely with @facontidavide on this project. PlotJuggler clearly reflects years of work, effort and passion, and contributing to it is a great experience.
Together we are developing the components required to make this new Robot → PlotJuggler connection workflow simple and practical to use. The goal is to make remote telemetry access easier while keeping the system flexible for future extensions that will appear in upcoming PlotJuggler developments.
And stay tuned… more interesting things are coming soon for PlotJuggler.
As a next-generation robot operating system, ROS2 provides powerful support for the intelligent and modular development of robotic arms. As the core motion planning framework in the ROS2 ecosystem, MoveIt2 not only inherits the mature functions of MoveIt but also achieves significant improvements in real-time performance, scalability, and industrial applicability.
Taking a 7-DoF robotic arm as an example, this document provides step-by-step instructions for configuring and generating a complete MoveIt2 package from a URDF model using the MoveIt Setup Assistant, enabling motion planning and visual control. This guide offers a clear, practical workflow for both beginners and developers looking to quickly integrate models into MoveIt2.
MoveIt2 is the next-generation robotic arm motion planning and control framework built on the ROS2 architecture. It can be understood as a comprehensive upgrade of MoveIt for the ROS2 ecosystem: it inherits MoveIt's core capabilities while making significant improvements in real-time performance, modularity, and industrial applicability.
The main problems solved by MoveIt2 include:
Robotic Arm Motion Planning
Collision Checking
Inverse Kinematics (IK)
Trajectory Generation and Execution
RViz Visualization and Interaction
Installing MoveIt2
You can install directly from binary packages; use the following command to install all MoveIt-related components:
sudo apt install "ros-humble-moveit*"
Downloading the URDF File
First, create a new workspace and download the URDF model:
mkdir -p ~/nero_ws/src
cd ~/nero_ws/src
git clone https://github.com/agilexrobotics/piper_ros.git -b humble_beta1
cd ..
colcon build
After successful compilation, use the following commands to view the model in RViz:
cd ~/nero_ws
source install/setup.bash
ros2 launch nero_description display_urdf.launch.py
Exporting the MoveIt Package Using Setup Assistant
Select Create New Moveit Configuration Package to create a new MoveIt package, then load the robotic arm.
Calculate the collision model; for a single arm, use the default parameters.
Skip selecting virtual joints and proceed to define planning groups. Here, we need to create two planning groups: the arm planning group and the gripper planning group. First, create the arm planning group; set Group Name to arm, use KDL for the kinematics solver, and select RRTstar for OMPL Planning.
Setting the Kinematic Chain
Add the control joints for the planning group, select joint1~joint7, click >, then save.
Planning group creation completed.
Set the robot poses; you can pre-define some named poses for the planning group here.
Skip End Effectors and Passive Joints, and add interfaces in the URDF.
Set the controllers; here we use position_controllers.
Simulation will generate a URDF file for use in Gazebo, which includes physical properties such as joint motor attributes.
After configuration, fill in your name and email.
Set the package name, then click Generate Package to output the function package.
Launching the MoveIt Package
cd ~/nero_ws
source install/setup.bash
ros2 launch nero_moveit2_config demo.launch.py
After successful launch, you can drag the marker to preset the arm position, then click Plan & Execute to control the robotic arm movement.
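Under the hood, Plan & Execute hands a time-parameterized JointTrajectory to the position_controllers. As a ROS-free toy sketch of what the waypoints in such a trajectory look like, here is plain linear joint-space interpolation; the real planner (OMPL's RRTstar selected above) additionally performs collision checking and smoothing:

```python
# Toy sketch: linear joint-space interpolation between a start and goal
# configuration for a 7-DoF arm -- conceptually the waypoint list inside
# a JointTrajectory sent to position_controllers. Not a substitute for
# actual motion planning.
def interpolate_joints(start, goal, steps):
    """Return `steps` waypoints from start to goal, inclusive."""
    return [
        [s + (g - s) * i / (steps - 1) for s, g in zip(start, goal)]
        for i in range(steps)
    ]

start = [0.0] * 7
goal = [0.5, -0.3, 0.0, 1.2, 0.0, 0.8, 0.0]
waypoints = interpolate_joints(start, goal, steps=5)
print(len(waypoints), waypoints[-1])  # 5 waypoints, last equals goal
```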
So I built a small tool called ros2_info.
The idea was simple: what if fastfetch, but for your entire ROS2 environment?
One command → instant snapshot of everything happening in your ROS2 setup.
What it shows:
• ROS2 distro + whether it’s LTS or nearing EOL
• Live nodes, topics, services, and actions
• Auto-detects which DDS middleware you’re running
• All detected colcon workspaces + their build status
• Installed ROS2 packages grouped by category
• System stats (CPU, RAM, Disk)
• Pending ROS2-related apt updates
• A small web dashboard at localhost:8099
Basically the stuff I kept checking with 10 different commands… now in one place
Works across ROS2 distros: Foxy → Humble → Iron → Jazzy → Rolling
cd ~/ros2_ws/src
git clone https://github.com/zang7777/ros2_info.git
cd ~/ros2_ws && colcon build --symlink-install
source install/setup.bash
Recommended: ros2 run ros2_info ros2_info --interactive
or just
ros2 run ros2_info ros2_info
Always fun building little dev tools for the ecosystem