
alltheseas/telemoq


telemoq – Priority-aware transport for robot teleoperation

Problem – In today's teleop stacks (WebRTC, DDS) all streams share a single ordered channel. When a video packet is lost, the whole link freezes (head-of-line blocking) and the robot can become uncontrollable.

Solution – telemoq builds on the IETF standards-track MoQ (Media over QUIC) protocol. Each logical track (heartbeat, control, joint state, video, point cloud…) lives on its own QUIC stream with an explicit priority. Under bandwidth congestion the scheduler delivers high-priority control data first, while lower-priority video gracefully degrades.

Impact – On a realistic 300 kbps congested link the control loop stays steady (≈ 1 ms latency, 0% packet loss), whereas WebRTC drops to ≈ 0.2 msgs/s for the same control stream. Video simply starves – the robot never loses control.

"Control data must never freeze." – IHMC DRC Lesson: 9,600 bps is enough for control; everything else is expendable.

▶️ Live demo – MoQ vs WebRTC vs DDS | 📹 Walk-through video (2 min) | Benchmark Results


Built on moq-dev/moq. MoQ is an IETF standards-track protocol over QUIC (RFC 9000).

The Demo

Under congestion (bandwidth-limited, not artificial packet loss), the subscriber output tells the story:

  telemoq | robot-1 | relay: https://localhost:4443 | uptime: 12s
  ╔══════════════════════╤═════╤═════════╤══════════╤════════════╤═════════╤═══════╤═══════════════════════╗
  ║ Track                │ Pri │ Recv/s  │ Expected │   Status   │ Latency │  Gap  │ Throughput            ║
  ╟──────────────────────┼─────┼─────────┼──────────┼────────────┼─────────┼───────┼───────────────────────╢
  ║ safety/heartbeat     │ P0  │    10/s │     10/s │     OK     │    1ms  │   8ms │ ████████████████████    170 B/s ║
  ║   └─ seq: 120 estop: ok                                                                              ║
  ║ control/streaming    │ P1  │   165/s │    167/s │     OK     │    1ms  │   6ms │ ████████████████████ 26.4 KB/s ║
  ║ control/task_ack     │ P1  │    10/s │     10/s │     OK     │    1ms  │   9ms │ ████████████████████    140 B/s ║
  ║   └─ task #1: executing (45%)                                                                        ║
  ║ sensors/joints       │ P2  │    98/s │    100/s │     OK     │    2ms  │  10ms │ ████████████████████ 11.8 KB/s ║
  ║   └─ joints: [-0.12, 0.45, -0.89, 1.23, -0.56, 0.78, 0.01] rad                                     ║
  ║ sensors/force_torque │ P3  │    96/s │    100/s │     OK     │    2ms  │  10ms │ ████████████████████  5.4 KB/s ║
  ║   └─ F=[10.2, 2.0, -5.3]N  weld contact: YES                                                        ║
  ║ sensors/imu          │ P5  │   160/s │    200/s │   DEGRAD   │    5ms  │  15ms │ ████████████████░░░░ 14.1 KB/s ║
  ║ perception/ptcloud   │ P10 │     4/s │      5/s │     OK     │   12ms  │  45ms │ ████████████████████  0.2 MB/s ║
  ║ video/camera0        │ P20 │     5/s │     30/s │   DEGRAD   │  340ms  │ 200ms │ ███░░░░░░░░░░░░░░░░░  0.3 MB/s ║
  ╟──────────────────────┴─────┴─────────┴──────────┴────────────┴─────────┴───────┴───────────────────────╢
  ║  Latency = wall-clock delta (pub→relay→sub).  Gap = max inter-arrival time (12ms = IHMC KST window). ║
  ║                                                                                                       ║
  ║  IHMC DRC lesson: 9,600 bps was enough for control. Video is expendable.                              ║
  ║  TCP/WebRTC: One lost video packet  →  ALL streams freeze (HOL blocking)                              ║
  ║  MoQ/QUIC:   Video degrades         →  Robot stays under control (priority streams)                   ║
  ╚═══════════════════════════════════════════════════════════════════════════════════════════════════════╝

High-priority control: rock steady. Low-priority video: starved first. The robot never loses control.

How It Works

Each track maps to a separate QUIC stream with a different priority. Under congestion, quinn's priority scheduler sends high-priority streams first, starving lower-priority ones. Track hierarchy follows IHMC's KST architecture -- the teleop link carries SE3 task goals, not raw joint commands.
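One wrinkle worth noting: the P-levels in this document treat P0 as most urgent, while quinn's `SendStream::set_priority` treats *larger* values as more urgent, so the sign must be flipped somewhere. A minimal sketch of that mapping (the `Track` struct and `quinn_priority` helper are illustrative, not telemoq's actual API):

```rust
/// A logical track with a P-level as used in the table below.
/// P0 is most urgent; larger P-numbers are less urgent.
#[derive(Debug, Clone, Copy)]
struct Track {
    name: &'static str,
    p_level: i32,
}

/// quinn transmits buffered data from streams with *higher* priority
/// values first, so the P-level is negated before handing it to quinn.
fn quinn_priority(p_level: i32) -> i32 {
    -p_level
}

fn main() {
    let tracks = [
        Track { name: "safety/heartbeat", p_level: 0 },
        Track { name: "control/streaming", p_level: 1 },
        Track { name: "video/camera0", p_level: 20 },
    ];
    for t in tracks {
        // In real code: open one uni stream per track, then call
        // `stream.set_priority(quinn_priority(t.p_level))`.
        println!("{} -> quinn priority {}", t.name, quinn_priority(t.p_level));
    }
}
```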

Track                  Priority  Rate    Payload     Description
safety/heartbeat       P0        10 Hz   17 bytes    Watchdog liveness. Absence = safe stop.
control/streaming      P1        167 Hz  ~160 bytes  SE3 pose targets (IHMC KST pattern)
control/task_status    P1        10 Hz   14 bytes    Task acknowledgment (robot → operator)
sensors/joints         P2        100 Hz  ~120 bytes  7-DOF joint state (Franka Panda)
sensors/force_torque   P3        100 Hz  ~56 bytes   Contact detection for weld monitoring
sensors/imu            P5        200 Hz  ~88 bytes   Orientation/acceleration
perception/pointcloud  P10       5 Hz    ~50 KB      Downsampled 3D point cloud
video/camera0          P20       30 fps  ~50 KB      Camera feed (lowest priority)
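The heartbeat row encodes a watchdog contract: it is the *absence* of messages, not any payload value, that triggers the safe stop. A minimal receive-side sketch (the 300 ms threshold — three missed beats at 10 Hz — and all names are illustrative assumptions):

```rust
use std::time::{Duration, Instant};

/// Watchdog for the safety/heartbeat track. If no beat arrives within
/// the timeout, the robot should execute a safe stop.
struct Watchdog {
    last_beat: Instant,
    timeout: Duration,
}

impl Watchdog {
    fn new(timeout: Duration) -> Self {
        Self { last_beat: Instant::now(), timeout }
    }

    /// Call on every received heartbeat message.
    fn feed(&mut self) {
        self.last_beat = Instant::now();
    }

    /// Poll from the control loop; `true` means trigger a safe stop.
    fn expired(&self) -> bool {
        self.last_beat.elapsed() > self.timeout
    }
}

fn main() {
    // Three missed 10 Hz beats = 300 ms of silence (assumed threshold).
    let mut wd = Watchdog::new(Duration::from_millis(300));
    wd.feed();
    assert!(!wd.expired()); // fresh beat: link considered live
}
```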

No video encoding -- frames are 50 KB payloads at 30 fps, to isolate transport behavior. Real H.264 via lumina-video adds ~20 ms encode latency.

Latency Measurement

Publisher embeds SystemTime::now() timestamps; subscriber computes recv_time - publish_time per track. Also tracks max inter-arrival gap vs. IHMC's 12ms KST extrapolation window.
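Because publisher and subscriber run on the same host in this demo, a wall-clock delta is meaningful without clock synchronization. The per-track bookkeeping described above might look like this sketch (struct and field names are assumptions, not telemoq's actual types):

```rust
use std::time::{SystemTime, UNIX_EPOCH};

/// Receive-side stats for one track: latency = recv_time - publish_time,
/// plus the maximum inter-arrival gap (compared against IHMC's 12 ms
/// KST extrapolation window).
#[derive(Default)]
struct TrackStats {
    last_recv_us: Option<u64>,
    max_gap_us: u64,
    last_latency_us: u64,
}

fn now_us() -> u64 {
    SystemTime::now().duration_since(UNIX_EPOCH).unwrap().as_micros() as u64
}

impl TrackStats {
    /// `publish_us` is the publisher's SystemTime::now() timestamp
    /// embedded in the payload.
    fn on_message(&mut self, publish_us: u64) {
        let recv_us = now_us();
        self.last_latency_us = recv_us.saturating_sub(publish_us);
        if let Some(prev) = self.last_recv_us {
            self.max_gap_us = self.max_gap_us.max(recv_us - prev);
        }
        self.last_recv_us = Some(recv_us);
    }
}

fn main() {
    let mut stats = TrackStats::default();
    // Simulate a message published 1 ms before receipt.
    stats.on_message(now_us().saturating_sub(1_000));
    println!("latency = {} us, max gap = {} us", stats.last_latency_us, stats.max_gap_us);
}
```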

Setup

Prerequisites

Clone moq-dev/moq alongside this repo:

develop/
├── lumina-video/moq/    # moq-dev/moq repo
└── telemoq/             # this repo

Rust 1.85+ required (edition 2021).

Build

cargo build

Run

Terminal 1: Start the relay

cd ../lumina-video/moq && cargo run --bin moq-relay -- --listen '[::]:4443' --tls-generate localhost --auth-public ''

Terminal 2: Publisher

cargo run -- --url https://localhost:4443 --tls-disable-verify --broadcast robot-1 publish

Terminal 3: Subscriber

cargo run -- --url https://localhost:4443 --tls-disable-verify --broadcast robot-1 subscribe

CSV Logging

cargo run -- --url https://localhost:4443 --tls-disable-verify --broadcast robot-1 --csv subscribe > telemoq.csv

CSV columns: elapsed_s,track,priority,recv_per_s,expected_per_s,pct,bytes_per_s,avg_latency_ms,max_gap_ms,mode
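A row in that layout could be produced with a formatter like the following sketch (function and parameter names are illustrative, not telemoq's code):

```rust
/// Format one stats row matching the CSV columns documented above:
/// elapsed_s,track,priority,recv_per_s,expected_per_s,pct,bytes_per_s,
/// avg_latency_ms,max_gap_ms,mode
fn csv_row(
    elapsed_s: f64, track: &str, priority: i32,
    recv_per_s: f64, expected_per_s: f64, pct: f64,
    bytes_per_s: f64, avg_latency_ms: f64, max_gap_ms: f64, mode: &str,
) -> String {
    format!("{elapsed_s:.1},{track},{priority},{recv_per_s:.1},{expected_per_s:.1},{pct:.0},{bytes_per_s:.0},{avg_latency_ms:.1},{max_gap_ms:.1},{mode}")
}

fn main() {
    println!(
        "{}",
        csv_row(12.0, "safety/heartbeat", 0, 10.0, 10.0, 100.0, 170.0, 1.0, 8.0, "priority")
    );
}
```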

Benchmark: Three Modes, One Binary

Three benchmark modes isolate the effect of priority scheduling and stream multiplexing:

Flag             Simulates  Behavior
(default)        MoQ        Per-stream priority scheduling
--no-priority    DDS        All tracks equal priority 0, no starvation
--single-stream  WebRTC     All tracks multiplexed into one QUIC stream (HOL blocking)

Same binary, same relay, same payloads. Only the transport behavior changes. Add --no-priority or --single-stream flags to publisher and subscriber.
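The three modes differ only in how tracks are assigned to streams and priorities; a sketch of that dispatch (enum and function names are assumptions, not telemoq's actual code):

```rust
/// The three benchmark modes from the table above.
#[derive(Debug, Clone, Copy, PartialEq)]
enum Mode {
    Priority,     // default: MoQ behavior
    NoPriority,   // --no-priority: DDS-like
    SingleStream, // --single-stream: WebRTC-like
}

/// For a track with a given P-level, return (stream index, priority).
/// Per-track streams each get their own index; single-stream mode
/// shares index 0, which reintroduces head-of-line blocking.
fn placement(mode: Mode, track_idx: usize, p_level: i32) -> (usize, i32) {
    match mode {
        Mode::Priority => (track_idx, p_level),
        Mode::NoPriority => (track_idx, 0), // separate streams, equal priority
        Mode::SingleStream => (0, 0),       // everything multiplexed together
    }
}

fn main() {
    // e.g. sensors/imu (track 5, P5) under each mode:
    for mode in [Mode::Priority, Mode::NoPriority, Mode::SingleStream] {
        println!("{:?} -> {:?}", mode, placement(mode, 5, 5));
    }
}
```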

Measured results under congestion (~63s runs):

  • Priority (MoQ): P0-P5 control: 22.2 msgs/s. Video/pointcloud starved to zero.
  • No-priority (DDS-like): Control: 5.3 msgs/s (76% loss). Degradation unpredictable.
  • Single-stream (WebRTC-like): Control: 0.2 msgs/s (99% loss). Near-total blackout.

See benchmark results for visual comparison.

Architecture

  • Single binary with publish / subscribe subcommands
  • Publisher creates one TrackProducer per stream with a different priority
  • Under congestion, quinn's BBR + priority scheduler deliver high-priority tracks first

Caveats

  • BBR vs. random loss: BBR handles bandwidth congestion well (what this demo tests) but can collapse under >5% random packet loss. Production in high-loss environments needs FEC or loss-tolerant CC. See BBR FAQ.
  • Connection migration: QUIC supports session migration across network changes (WiFi → cellular) via connection IDs. Not yet implemented in moq-relay.
  • Single publisher: The demo uses one publisher for both directions. Production would use two publishers, one per direction, through the relay.

License

MIT OR Apache-2.0
