Inspiration

Robotics development is slow because testing on physical hardware is expensive, fragile, and time-consuming. We wanted a way to experiment with robotic behaviors instantly, using natural language, without a physical robot connected. Our goal was a system where a human simply says what they want a robot to do, and the system translates that intent into a structured plan inside a digital twin environment.

What it does

TwinPilot converts plain-English commands into executable robotic actions inside a simulated environment. Instead of programming movement step-by-step, users describe a task (for example: “move the black bottle to the right zone”), and the system generates a structured action plan, validates it, and executes it inside a simulation scene.
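As a rough illustration, the structured plan for that example command might look like the sketch below. The schema, field names, and IDs are hypothetical, not TwinPilot's actual format:

```python
# Hypothetical structured action plan for the command
# "move the black bottle to the right zone".
plan = {
    "command": "move the black bottle to the right zone",
    "steps": [
        {"action": "locate", "object_id": "bottle_black_01"},
        {"action": "pick", "object_id": "bottle_black_01"},
        {"action": "move_to", "zone_id": "zone_right"},
        {"action": "place", "object_id": "bottle_black_01", "zone_id": "zone_right"},
    ],
}

# The execution engine can then walk the steps in order.
for step in plan["steps"]:
    print(step["action"])
```

Because every step is typed and references explicit IDs, the plan can be validated before anything moves.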

The system tracks objects, zones, and robot state, allowing it to reason about tasks and update the digital environment in real time. This lets teams prototype robotic workflows quickly without physical hardware.

How we built it

TwinPilot is built as a modular robotics agent system:

  • Natural language planning layer using the OpenAI API to convert instructions into structured action plans.
  • Simulation environment representing objects, zones, and robot state.
  • Execution engine that validates plans and runs them step-by-step.
  • CyberWave integration layer designed to connect the planner to digital twin environments.

Each scene object has a canonical ID, allowing the planner to reason about objects deterministically instead of guessing names. This prevents execution errors and allows robust plan validation.
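A minimal sketch of that validation idea, assuming a hypothetical helper and scene registry (names are illustrative, not TwinPilot's API):

```python
# Canonical IDs of objects and zones currently active in the scene
# (illustrative registry; a real scene would populate these dynamically).
ACTIVE_OBJECTS = {"bottle_black_01", "cup_red_02"}
ACTIVE_ZONES = {"zone_left", "zone_right"}

def validate_plan(steps):
    """Reject any step referencing an ID not present in the scene."""
    for step in steps:
        obj = step.get("object_id")
        zone = step.get("zone_id")
        if obj is not None and obj not in ACTIVE_OBJECTS:
            raise ValueError(f"unknown object ID: {obj}")
        if zone is not None and zone not in ACTIVE_ZONES:
            raise ValueError(f"unknown zone ID: {zone}")
    return True

# A hallucinated ID fails fast at validation time instead of
# surfacing later as an execution error:
validate_plan([{"action": "pick", "object_id": "bottle_black_01"}])
try:
    validate_plan([{"action": "pick", "object_id": "bottle_blk"}])
except ValueError as err:
    print(err)
```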

Challenges we ran into

One of the biggest challenges was ensuring the AI planner referenced the correct objects in the environment. Early versions produced invented object identifiers, which caused execution failures.

We solved this by introducing canonical object IDs and strict validation of planner output. The planner is now constrained to choose only from the objects active in the scene, which makes execution reliable.

Another challenge was designing a system that works both with simulated environments and real hardware without changing the planning logic.
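One way to keep the planning logic backend-agnostic is to put a common interface between the execution engine and whatever runs the actions. The sketch below assumes hypothetical class and method names; it shows the design idea, not TwinPilot's actual code:

```python
from abc import ABC, abstractmethod

class RobotBackend(ABC):
    """Common interface for simulated and physical execution."""

    @abstractmethod
    def pick(self, object_id: str) -> None: ...

    @abstractmethod
    def place(self, object_id: str, zone_id: str) -> None: ...

class SimulationBackend(RobotBackend):
    """Executes actions against the simulated scene."""

    def pick(self, object_id):
        print(f"[sim] picking {object_id}")

    def place(self, object_id, zone_id):
        print(f"[sim] placing {object_id} in {zone_id}")

def execute(plan_steps, backend: RobotBackend):
    """Run validated plan steps against any backend."""
    for step in plan_steps:
        if step["action"] == "pick":
            backend.pick(step["object_id"])
        elif step["action"] == "place":
            backend.place(step["object_id"], step["zone_id"])

execute(
    [{"action": "pick", "object_id": "bottle_black_01"},
     {"action": "place", "object_id": "bottle_black_01", "zone_id": "zone_right"}],
    SimulationBackend(),
)
```

A future hardware backend would implement the same two methods and plug into `execute` unchanged, leaving the planner and validator untouched.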

Accomplishments that we're proud of

We successfully built a system where natural language instructions are converted into structured robotic actions and executed in a simulated environment. The planner can reason about object positions, zones, and tasks while keeping execution deterministic and safe.

Most importantly, the architecture supports both simulation and real robotics hardware in the future.

What we learned

We learned how important structured planning and validation are when integrating AI with robotics systems. Natural language is powerful, but without strict constraints it can produce unpredictable outputs.

Designing a system that combines AI planning with deterministic execution requires carefully balancing flexibility and control.

What's next for TwinPilot

The next step is connecting TwinPilot to real robotic hardware and live digital twin environments. Our architecture already separates planning, validation, and execution layers, which makes it possible to integrate physical robots without changing how tasks are described.

Long term, TwinPilot could become a universal control layer for robotics systems where humans interact with robots using natural language instead of low-level programming.

Built With

  • codex