Rerun

Software Development

Stockholm, Sweden 19,569 followers

Rerun is building the data stack for Physical AI.

About us

Rerun is building the data stack for Physical AI. Open-source logging and visualization of multimodal data. Managed infrastructure to ingest, store, analyze, and stream data at scale, with built-in visual debugging. Fast, flexible, and easy to use.

Website
http://www.rerun.io
Industry
Software Development
Company size
11-50 employees
Headquarters
Stockholm, Sweden
Type
Privately Held
Founded
2022
Specialties
computer vision, tooling, open source, deep learning, AI, MLops, multimodal, visualization, and robotics

Updates

  • Rerun reposted this

    Built a complete LeRobot teleoperation and dataset-collection pipeline from scratch using NVIDIA Jetson Thor. Key highlights:
    • Designed and configured the teleoperation setup
    • Integrated camera streams and robotic arm control
    • Recorded and visualized robot observations/actions using Rerun
    • Collected custom manipulation datasets for imitation learning
    • Successfully performed pick-and-place teleoperation tasks
    • Worked with a ROS/Linux-based robotics workflow and AI data pipelines
    Tech stack: LeRobot | NVIDIA Jetson Thor | Python | OpenCV | Robotics | AI | Teleoperation | Dataset Collection | Rerun.io | Machine Learning
    This project gave me hands-on experience in:
    - Real-world robot control
    - Data collection for embodied AI
    - Human-in-the-loop robotics
    - Edge AI deployment
    - Robot learning workflows
    Excited to explore more in Physical AI, Autonomous Robotics, and Embodied Intelligence. #LeRobot #Robotics #PhysicalAI #EmbodiedAI #NVIDIA #Jetson #JetsonThor #AI #MachineLearning #Teleoperation #RobotLearning #ImitationLearning #OpenCV #ROS #EdgeAI #Automation #DatasetCollection #DeepLearning #Python #Engineering #Innovation #HumanoidRobot #AIEngineer #Mechatronics #Linux
    Mentions: NVIDIA, Hugging Face, Lerobot.ch, OpenCV, SOLOCLIC, Rerun.io
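The heart of a collection pipeline like this is pairing each camera observation with the commanded action at the same timestep. A minimal sketch of that episode buffer, for illustration only (this is not the actual LeRobot dataset format, and the shapes are assumptions):

```python
import numpy as np

def record_step(episode, observation, action):
    """Append one teleop step (camera frame + commanded joint targets) to an episode buffer."""
    episode["observations"].append(np.asarray(observation))
    episode["actions"].append(np.asarray(action))

episode = {"observations": [], "actions": []}
for step in range(3):
    frame = np.zeros((240, 320, 3), dtype=np.uint8)  # placeholder camera image
    joints = np.full(6, 0.1 * step)                  # placeholder 6-joint arm command
    record_step(episode, frame, joints)

obs = np.stack(episode["observations"])   # (T, H, W, 3) image stack
acts = np.stack(episode["actions"])       # (T, 6) action stack
```

Keeping observations and actions index-aligned like this is what lets a tool such as Rerun scrub both on one timeline.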

  • Rerun reposted this

    🧗 A parkour wall climb. One moving camera.
    Last week, I came across a clip from @shaneparkour — an impressive wall climb filmed casually on a moving phone. Since I work on #markerless #MoCap, I got curious: could we actually digitize this? 🤔
    This is one of the hardest scenarios for #motioncapture:
    → 📱 Single moving camera
    → ⚡ Fast motion + phone-level FPS = intense motion blur
    Have a look at the raw result below. 👇
    The natural question is whether this kind of output saves time for #animators compared to keyframing from scratch, or whether the cleanup costs more than starting fresh. 🎬
    I wrote up the full breakdown with an interactive Rerun viewer that lets you orbit the 3D scene alongside the reference video. Link in the comments 🔗

  • Rerun reposted this

    I've been digging into the VGGT reconstruction model. While the paper's results are impressive, I wanted to see how the model actually behaves during the reconstruction process, specifically regarding pose drift and confidence. I built a pipeline that pipes VGGT's output into Rerun for real-time visualization, with a Gradio UI to make the setup more interactive for image uploads and parameter tuning. The goal wasn't just to visualize the point cloud, but to monitor how the model handles geometric constraints in 3D space and, more importantly, where it fails. By visualizing the inference live, it's much easier to see the effects of uncertainty-based filtering on pose drift in real time. Repo: https://lnkd.in/dQDe4kDS
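The uncertainty-based filtering described above boils down to masking reconstructed points by their predicted confidence before they reach the visualizer. A minimal sketch (the threshold and data are made up for illustration; this is not the author's pipeline code):

```python
import numpy as np

def filter_by_confidence(points, confidence, threshold=0.5):
    """Drop 3D points whose predicted confidence falls below the threshold,
    so low-certainty geometry never reaches the viewer."""
    mask = confidence >= threshold
    return points[mask], confidence[mask]

points = np.array([[0.0, 0.0, 1.0], [1.0, 0.0, 2.0], [0.0, 1.0, 3.0]])
confidence = np.array([0.9, 0.2, 0.7])
kept, kept_conf = filter_by_confidence(points, confidence)  # keeps points 0 and 2
```

Exposing `threshold` as a Gradio slider is what makes the effect of the filter visible live.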

  • Rerun reposted this

    I've been on a SLAM/SfM kick. It's one of the more underexplored areas of human teleop and data collection, so I've brought Deep Patch Visual Odometry/SLAM over to Rerun and Gradio. With this example, we now have:
    1. pycuvslam
    2. pycolmap/glomap
    3. mast3r-slam
    4. dpvo/slam
    all integrated into Rerun.
    The question becomes: which method should be used in which situations? They all make different trade-offs, with different camera requirements and throughput/accuracy. And what about when a new method comes out?
    Now that I have several different methods, I plan to use VSLAM-LAB for evaluation. It uses prefix.dev to isolate the dependencies of each method and easily compare them against each other. In particular, I'll be converting the data preprocessing, algorithm outputs, and evaluation into Rerun recordings (.rrd files). This allows programmatic querying of anything stored in the files (which method had the highest ATE-to-FPS ratio? which dataset/sequence caused the most difficulty?), all with easy visual inspection using the Rerun server to link them together.
    Another really important side effect is how this impacts agents. As Karpathy said:
    ```
    LLMs are exceptionally good at looping until they meet specific goals, and this is where most of the "feel the AGI" magic is to be found. Don't tell it what to do, give it success criteria, and watch it go.
    ```
    By having accuracy and throughput metrics deeply tied to human-inspectable artifacts, you can really accelerate agentic development with an actual understanding of how the method and data perform. I think this is another killer use case that I'll be leaning into, to make ingestion of new datasets and methods trivial with an agent.
    I'm making it my mission for folks to understand that Rerun as a visualization tool only scratches the surface of its true benefit: deep integration between data and visuals, with powerful query capabilities.
    I'll be focusing on the SLAM use case first and then bringing this into the full egocentric/exocentric data-collection domain!
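Once metrics live in queryable recordings, a question like "which method has the best error-per-throughput trade-off" reduces to a query plus a sort. A toy sketch of that comparison; all numbers below are invented for illustration, not real benchmark results:

```python
# Hypothetical per-method metrics, as they might be queried out of .rrd files.
results = {
    "pycuvslam":   {"ate_m": 0.021, "fps": 60.0},
    "pycolmap":    {"ate_m": 0.008, "fps": 2.5},
    "mast3r-slam": {"ate_m": 0.015, "fps": 12.0},
    "dpvo":        {"ate_m": 0.018, "fps": 45.0},
}

def ate_per_fps(metrics):
    """Lower is better: trajectory error paid per unit of throughput."""
    return metrics["ate_m"] / metrics["fps"]

# Rank methods by the trade-off, best first.
ranked = sorted(results, key=lambda name: ate_per_fps(results[name]))
```

The point of the recordings is that a query like this and the visual inspection of the underlying runs come from the same artifact.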

  • Rerun reposted this

    Just released px4-rerun v0.1.0. If you work with PX4 flight logs, this gives you a one-line install for visualizing any .ulg in the Rerun viewer directly (see first screenshot). You get the vehicle in 3D, trajectory, USGS terrain underlay, and full log messages — all synced on a scrubbable timeline. Many more visualizations coming soon. Or embed the C++ library in your own tools. The second image shows the library visualizing live data from a running PX4 SITL instance. MIT licensed. Repo: https://lnkd.in/dkv-NmwR
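Syncing a 3D vehicle view to a scrubbable timeline essentially means resampling logged states at whatever time the scrubber sits on. A hedged sketch of that resampling step (illustrative only, not the px4-rerun implementation; timestamps and positions are made up):

```python
import numpy as np

def sample_trajectory(t_query, t_log, positions):
    """Linearly interpolate a logged 3D trajectory at an arbitrary scrub time,
    so every view can be driven from one timeline position."""
    return np.stack([np.interp(t_query, t_log, positions[:, i]) for i in range(3)], axis=-1)

t_log = np.array([0.0, 1.0, 2.0])          # log timestamps (s)
positions = np.array([[ 0.0,  0.0, 0.0],
                      [10.0,  0.0, 5.0],
                      [20.0, 10.0, 5.0]])  # local-frame positions (m)
p = sample_trajectory(0.5, t_log, positions)
```

The same idea extends to attitude and every other logged message stream: one query time, many interpolated states.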

  • Rerun reposted this

    Built a 3D video + pose detection pipeline. It takes a regular video and:
    • Converts every frame into a 3D Gaussian Splat point cloud (Apple ML-SHARP)
    • Detects multiple people simultaneously and lifts their poses into 3D
    • Shows three live views: the 3D scene, an isolated 3D skeleton per person, and the original video with pose overlay
    The tricky part was placing the 2D-detected skeletons accurately inside the 3D scene, solved by sampling the Gaussian depth field at each person's torso region and using MediaPipe world landmarks (metric, body-centred) as the skeleton shape. Stack: Apple ML-SHARP, MediaPipe, OpenCV, Rerun, all running locally on Apple Silicon.
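The depth-sampling trick described above can be sketched with a pinhole camera model: take a robust depth estimate around the 2D torso keypoint, then back-project through assumed intrinsics. Everything here (intrinsics, window size, the flat depth map) is an illustrative assumption, not the author's code:

```python
import numpy as np

def lift_keypoint(u, v, depth_map, fx, fy, cx, cy, window=5):
    """Back-project a 2D keypoint to 3D using the median depth in a small
    window around it (robust against holes in a splat-derived depth map)."""
    h, w = depth_map.shape
    patch = depth_map[max(0, v - window):min(h, v + window + 1),
                      max(0, u - window):min(w, u + window + 1)]
    z = float(np.median(patch))
    # standard pinhole back-projection: pixel -> camera-frame metres
    return np.array([(u - cx) * z / fx, (v - cy) * z / fy, z])

depth = np.full((480, 640), 2.0)  # synthetic flat depth map, 2 m everywhere
xyz = lift_keypoint(320, 240, depth, fx=500.0, fy=500.0, cx=320.0, cy=240.0)
```

With the torso anchored this way, the metric MediaPipe world landmarks can be attached around it as the skeleton shape.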

  • Rerun

    Sneak peek: 2D grid map support is coming to Rerun! We'll offer a new archetype GridMap in the Rerun SDK, automatic loading of ROS 2 occupancy grids from MCAPs, and the colormap options that RViz users are familiar with. The archetype has a regular ImageBuffer component, so you can also send color images (e.g. to do custom color-mapping in your code). For layering of multiple maps, you can set draw order and opacity when logging, or separately in the viewer/blueprint.
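Since the archetype carries a regular image buffer, custom color-mapping can happen client-side before logging. A sketch of an RViz-style map colormap over ROS occupancy values; the exact mapping below is my own choice for illustration, not Rerun's built-in one:

```python
import numpy as np

def occupancy_to_rgb(grid):
    """Color-map ROS-style occupancy values (-1 = unknown, 0..100 = occupancy)
    to an RGB image: white = free, black = occupied, gray = unknown."""
    rgb = np.empty(grid.shape + (3,), dtype=np.uint8)
    unknown = grid < 0
    # free (0) -> 255 (white), fully occupied (100) -> 0 (black)
    shade = (255 - grid.clip(0, 100) * 255 // 100).astype(np.uint8)
    rgb[~unknown] = shade[~unknown][:, None]
    rgb[unknown] = 128  # mid-gray for unknown cells
    return rgb

grid = np.array([[-1, 0], [100, 50]])  # tiny example occupancy grid
img = occupancy_to_rgb(grid)
```

The resulting RGB image could then be logged as a color image, with draw order and opacity handling the layering of multiple maps.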

  • Rerun reposted this

    🤖 LeKiwi Mobile Robot -- Autonomous Exploration, Mapping & Navigation Demo from PathOn Robotics!
    From weeks to hours. That's how fast you can get a mobile robot autonomously navigating with our platform. Fully integrated with RViz2, Rerun, and Foxglove for visualization.
    LeKiwi features a 3-wheel omnidirectional kiwi drive with:
    - 🗺️ Autonomous exploration & mapping
    - 🧭 ROS 2 Nav2 for autonomous navigation
    - 🎮 Holonomic velocity control (vx, vy, vθ)
    - 🏗️ MuJoCo simulation + real robot support
    - 🦾 Arm + gripper for manipulation tasks
    Built on an affordable, open-source, 3D-printable robot -- our platform gets you from zero to autonomous navigation in hours, not weeks.
    🔬 Vision-language based mobile manipulation with our SO-101 6-DoF + symmetric gripper arm is under active development -- stay tuned!
    https://lnkd.in/gcDe9Yba
    📦 LeKiwi hardware is open-source and 3D-printable:
    https://lnkd.in/ez6B9C-Z
    #Robotics #OpenSource #ROS2 #Navigation #MobileRobot #3DPrinting #AI #AutonomousNavigation #SLAM #Mapping
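Holonomic velocity control on a kiwi drive means mapping a body-frame twist (vx, vy, vθ) onto three omni wheels mounted around the base. A sketch of the standard inverse kinematics; the mounting angles and radii are assumptions for illustration, not LeKiwi's actual geometry:

```python
import numpy as np

def kiwi_wheel_speeds(vx, vy, vtheta, wheel_radius=0.05, base_radius=0.10):
    """Inverse kinematics for a 3-wheel kiwi drive: map a body-frame twist
    (vx, vy, vtheta) to the angular velocity of each omni wheel."""
    angles = np.radians([90.0, 210.0, 330.0])  # assumed wheel mounting angles
    # tangential rim speed of each wheel, then convert to wheel angular velocity
    rim = -np.sin(angles) * vx + np.cos(angles) * vy + base_radius * vtheta
    return rim / wheel_radius

spin = kiwi_wheel_speeds(0.0, 0.0, 1.0)  # pure rotation: all wheels spin equally
```

Because the three wheel directions span the plane plus rotation, any (vx, vy, vθ) is reachable, which is what makes the base holonomic.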

  • Rerun reposted this

    I've migrated the old Mast3r-SLAM example I made last year to the latest version of Rerun and made a bunch of improvements! I wanted to spend some time with agents to modernize it. Here's an example of me walking around with my iPhone and getting a dense reconstruction at about 10 FPS on a 5090. These are the improvements I made.
    Brought it into the monorepo with proper packaging:
    • Used prefix.dev pixi-build to get rid of all the mast3r/asmk/lietorch vendored code with just a few small patches. This let me remove some 60k lines of code from the repo!
    • No longer have to build the lietorch code on my machine, which was taking ~10 minutes to compile (this also made it work on Blackwell when it previously did not)
    Rebuilt the Gradio interface:
    • Fixed incremental updates, .MOV uploads, and stop behavior
    • Made the CLI and Gradio interface share the same entry point so updates automatically propagate
    Upgraded the @rerundotio integration:
    • Switched to a multiprocessing async logging strategy
    • Added video/pointmap/confidence logging
    • Improved the blueprint layout and hid noisy entities from the 3D view
    • The biggest perf win was the async background logger: I documented about a ~2.5x speedup from decoupling logging from tracking
    The newest and most interesting part was my attempt to replace the CUDA kernels for Gauss-Newton ray matching with a Modular Mojo backend. As a Python dev, every time I look at CUDA code I basically shy away, as it's pretty difficult for me to understand. Mojo let me rewrite the matching logic in a syntax I'm more comfortable with while still getting near-CUDA performance. Mojo is now the default matching backend with a CUDA fallback. One major piece that's still missing is the custom PyTorch op path, but I'll eventually do that as well. I heavily leaned on Claude Code to do the CUDA → Mojo migration, and I have no doubt it's not the cleanest or most idiomatic, BUT it's way more readable for me and helps me better understand the underlying algorithm.
    This was a ton of work, and a large part of why I'm doing it is how the monorepo compounds. It becomes an artifact I can point the next Claude-built example at, which makes it even faster to implement. The compounding nature of this is really interesting, and part of why I'm spending so much time making things nice and readable.
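The multiprocessing async logging strategy is a producer/consumer hand-off across a process boundary: the tracking loop enqueues records and moves on, while a background process drains the queue. A stdlib-only sketch of the pattern; counting items stands in for the actual Rerun SDK calls, and the structure is an assumption, not the example's real code:

```python
import multiprocessing as mp

def logger_worker(queue, done):
    """Background process: drain queued records so the tracking loop never
    blocks on logging I/O. (Real Rerun logging calls would go here.)"""
    count = 0
    while True:
        item = queue.get()
        if item is None:  # sentinel: shut down cleanly
            break
        count += 1        # stand-in for serializing/logging `item`
    done.put(count)

def run_episode(n_frames=10):
    queue, done = mp.Queue(maxsize=256), mp.Queue()
    worker = mp.Process(target=logger_worker, args=(queue, done))
    worker.start()
    for frame_id in range(n_frames):
        queue.put({"frame": frame_id})  # hand off and keep tracking
    queue.put(None)                     # signal end of episode
    worker.join()
    return done.get()
```

The bounded queue gives backpressure: if logging falls too far behind, the producer blocks briefly instead of growing memory without limit, which is the usual trade-off behind speedups like the ~2.5x reported above.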

Funding

Rerun: 2 total rounds

Last Round

Seed

US$ 17.0M

See more info on Crunchbase