What is Kineworks?
Kineworks is an AI-powered CAD and robotics simulation workspace. You describe a mechanical component in plain English — sports car frame, a windmill fan, a wheel — and Kineworks generates a dimensionally accurate, 3D-printable STL file in seconds. You can then animate your assembly, simulate kinematic motion between states, and iterate entirely through conversation.
Think of it as Cursor, but for building robots.
Inspiration
I believe that by the mid-2030s we'll see the rise of a new kind of engineer — someone who can leverage AI-powered physics engines to ship fully manufactured products end-to-end, without deep specialization in CAD, finite element analysis, or geometric tolerancing. The same way GitHub Copilot made individual developers dramatically more productive, AI will compress the feedback loop between "I want to build this robot" and "the parts are ready to print."
Today, that loop is brutal. A hobbyist or early-stage hardware startup has to learn SolidWorks or Fusion 360, manually model every component, set up simulation environments, and iterate for days — before a single part ever gets printed. Kineworks bets that this entire workflow can be described in natural language and executed by a multi-step AI agent backed by a real engineering-grade CAD kernel.
This isn't just a productivity tool. It's a glimpse at what engineering infrastructure looks like in the AI-native era.
How I Built It
Kineworks is built on a multi-step Gemini 3.1 Pro pipeline that separates intent, constraint analysis, JSON generation, and geometry rendering into distinct passes — each one making the next more precise.
The CAD Pipeline
When you ask Kineworks to "create a servo horn with 25mm arm length", here's what happens:
- Intent Detection (Gemini 3.1 Pro + function calling) → identifies this as a createComponent call
- Constraint Analysis (Gemini 3.1 Pro) → resolves exact dimensions, tolerances, mounting holes, and material assumptions from the user's description + chat history
- CAD JSON Generation (Gemini 3.1 Pro) → outputs a parametric shape tree: { "shape": "cylinder", "radius": 3, "height": 8, "operations": [...] }
- CadQuery Engine (Python + OpenCASCADE) → executes the JSON through cadGenerator.py and renders a real .stl mesh using engineering-grade geometry
- Three.js viewer → streams the STL into the browser in real time
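The shape-tree execution step can be sketched as a small dispatcher. This is a minimal illustration only: the real cadGenerator.py calls CadQuery/OpenCASCADE, which is stubbed out here, and the primitive table and function names are hypothetical.

```python
# Hypothetical sketch: compile the parametric shape-tree JSON into an ordered
# list of kernel operations before any geometry is built. The real pipeline
# would hand these to CadQuery; here the "kernel" stays abstract so the
# validation/dispatch logic stands alone.
import json

# Required parameters per primitive (illustrative subset)
PRIMITIVES = {"cylinder": ("radius", "height"), "box": ("length", "width", "height")}

def compile_shape_tree(tree: dict) -> list:
    shape = tree.get("shape")
    if shape not in PRIMITIVES:
        raise ValueError(f"unknown primitive: {shape!r}")
    missing = [p for p in PRIMITIVES[shape] if p not in tree]
    if missing:
        raise ValueError(f"{shape} missing parameters: {missing}")
    # Base primitive first, then any boolean/feature operations in order
    ops = [("make", shape, {p: tree[p] for p in PRIMITIVES[shape]})]
    for op in tree.get("operations", []):
        ops.append(("apply", op["type"], op.get("params", {})))
    return ops

plan = compile_shape_tree(json.loads(
    '{"shape": "cylinder", "radius": 3, "height": 8, "operations": []}'
))
```

Validating the JSON before touching the kernel means malformed model output fails fast with a message that can be fed back to the model.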
If CadQuery rejects the geometry (invalid profile, non-manifold surface, etc.), Gemini is called again with the error message to correct the JSON — up to 3 retries — before surfacing a failure to the user.
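The correct-on-error loop described above might look like this in outline. The function names are placeholders for the real Gemini and CadQuery calls, not the project's actual API.

```python
# Hedged sketch of the retry loop: regenerate the CAD JSON with the kernel's
# error message in context, up to max_retries attempts.
def generate_with_retries(request, generate_cad_json, run_cadquery, max_retries=3):
    error = None
    for _attempt in range(max_retries):
        # On retries, the previous kernel error is fed back to the model
        cad_json = generate_cad_json(request, previous_error=error)
        try:
            return run_cadquery(cad_json)   # returns an STL mesh on success
        except Exception as exc:            # invalid profile, non-manifold, ...
            error = str(exc)
    raise RuntimeError(f"geometry failed after {max_retries} attempts: {error}")
```

Treating the kernel's exceptions as structured feedback is what makes the loop converge instead of retrying blindly.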
The Simulation Engine
On the frontend, a requestAnimationFrame loop interpolates component transforms between two saved scene states. Position and scale use linear interpolation; rotations use Euler lerp for standard components and quaternion slerp for components with custom rotation axes — keeping angular motion physically correct and free of gimbal lock.
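For reference, here is a standalone sketch of quaternion slerp, the interpolation used for custom-axis rotations. The project itself would likely use Three.js's built-in Quaternion.slerp; this pure-Python version just shows the underlying math.

```python
import math

def slerp(q0, q1, t):
    """Spherical linear interpolation between unit quaternions (w, x, y, z)."""
    dot = sum(a * b for a, b in zip(q0, q1))
    if dot < 0.0:                       # flip one quaternion to take the short arc
        q1 = tuple(-c for c in q1)
        dot = -dot
    if dot > 0.9995:                    # nearly parallel: lerp + renormalize
        out = tuple(a + t * (b - a) for a, b in zip(q0, q1))
        n = math.sqrt(sum(c * c for c in out))
        return tuple(c / n for c in out)
    theta = math.acos(dot)              # angle between the two rotations
    s0 = math.sin((1 - t) * theta) / math.sin(theta)
    s1 = math.sin(t * theta) / math.sin(theta)
    return tuple(s0 * a + s1 * b for a, b in zip(q0, q1))
```

Unlike Euler lerp, slerp moves at constant angular velocity along the shortest great-circle arc, which is what makes the interpolated motion look physically plausible.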
The Stack
- Google Gemini 3.1 Pro — all AI inference (function calling, multimodal, chat)
- CadQuery 2.7 / OpenCASCADE — parametric CAD kernel
- Node.js + Express — streaming NDJSON API server
- React + Three.js — 3D viewer and UI
- MongoDB Atlas — persistent projects, chats, states, simulations
- Nginx + PM2 + Let's Encrypt — production on a Hetzner VPS
What I Learned
The biggest technical lesson was learning how to design large, reliable multi-step workflows for AI agents.
Early in the project, I tried to get Gemini to do everything in one prompt — understand the request, figure out constraints, and output the CAD JSON simultaneously. The results were inconsistent: sometimes brilliant, sometimes geometrically nonsensical.
The breakthrough was decomposition. By splitting the pipeline into distinct, purpose-built calls — each with its own system prompt, context window, and output format — I could dramatically improve reliability at each stage. The constraint analysis pass forces Gemini to think before it codes, much like asking a human engineer to spec out a part before modeling it. The geometry correction loop gives the AI a chance to learn from CadQuery's compiler-like error messages.
This taught me that the right mental model for AI agents isn't "one smart prompt" — it's "a well-designed assembly line where each worker has a clear, narrow job."
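The assembly-line idea can be sketched as a chain of narrowly scoped calls. This is illustrative only: run_pipeline and call_model are hypothetical stand-ins for the actual Gemini invocations, and the system prompts are invented.

```python
# Each pass gets its own system prompt and sees only the context it needs,
# so failures localize to one stage instead of one giant prompt.
def run_pipeline(user_request, call_model):
    intent = call_model(
        system="Classify the CAD intent of the request.",
        user=user_request,
    )
    constraints = call_model(
        system="Resolve exact dimensions, tolerances, and materials.",
        user=f"{user_request}\nIntent: {intent}",
    )
    cad_json = call_model(
        system="Emit only the parametric shape-tree JSON.",
        user=f"Constraints: {constraints}",
    )
    return cad_json
```

Because call_model is injected, each stage can be tested (or swapped for a cheaper model) independently.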
Challenges
1. Making AI output manufacturable geometry
Generative image-based 3D models (NeRF, diffusion) produce visually plausible but geometrically invalid meshes — holes, non-manifold surfaces, self-intersections. No 3D printer can handle them. The solution was to never generate geometry directly. Instead, Gemini generates instructions for a deterministic CAD kernel. The AI's job is to produce a valid parameterization; CadQuery's job is to enforce geometric integrity.
2. Streaming through a reverse proxy
Nginx buffers proxy responses by default, which meant all the step labels (Analyzing... Generating... Retrying...) would appear simultaneously at the end instead of streaming live. The fix was proxy_buffering off + X-Accel-Buffering: no in the Nginx location block — a non-obvious configuration that took real debugging to find.
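A location block along these lines captures the fix (upstream port and path are illustrative, not the project's actual config):

```nginx
location /api/ {
    proxy_pass http://127.0.0.1:3000;   # Express upstream (port illustrative)
    proxy_http_version 1.1;
    proxy_buffering off;                # let NDJSON chunks stream through
    proxy_cache off;
}
```

Note that X-Accel-Buffering: no is conventionally a response header the upstream app emits (e.g. res.setHeader('X-Accel-Buffering', 'no') in Express), which tells Nginx to skip buffering for that response even when proxy_buffering stays on globally.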
3. Quaternion interpolation with custom pivot axes
Simulating rotation around a non-origin axis (like a wheel spinning around its hub) required correctly composing translate → rotate → un-translate transforms using quaternion slerp. Getting the direction, angle range, and frame of reference right — especially across state changes — took significant iteration.
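The translate → rotate → un-translate composition can be shown in isolation. This is a pure-Python sketch with hypothetical helper names; the real engine does the equivalent with Three.js quaternions.

```python
import math

def quat_from_axis_angle(axis, angle):
    """Unit quaternion (w, x, y, z) for a rotation of `angle` about `axis`."""
    n = math.sqrt(sum(c * c for c in axis))
    s = math.sin(angle / 2)
    return (math.cos(angle / 2), axis[0]/n*s, axis[1]/n*s, axis[2]/n*s)

def quat_mul(a, b):
    """Hamilton product of two quaternions."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def rotate_about_pivot(point, pivot, axis, angle):
    """Rotate `point` about an axis passing through `pivot`."""
    q = quat_from_axis_angle(axis, angle)
    q_inv = (q[0], -q[1], -q[2], -q[3])              # inverse of a unit quaternion
    v = tuple(p - c for p, c in zip(point, pivot))   # translate pivot to origin
    rotated = quat_mul(quat_mul(q, (0.0, *v)), q_inv)[1:]  # q * (0, v) * q^-1
    return tuple(r + c for r, c in zip(rotated, pivot))    # translate back
```

The same three-step composition applies whether the rotation amount comes from slerp between two states or directly from an angle.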
4. Mongoose schema coercion silently dropping data
When saving component transforms (including custom rotation axes) to MongoDB, Mongoose's schema coercion was silently resetting nested pivot.direction values back to defaults during document.save(). The fix was switching to findOneAndUpdate with $set and runValidators: false — bypassing schema coercion entirely for raw geometry data.
What's Next
Kineworks is a foundation. The next layers:
- Physics simulation — add mass, inertia, friction, and collision to the kinematic engine
- Electrical integration — route wiring, select motor drivers, estimate power budgets
- BOM generation — automatically produce a bill of materials with supplier links
- Multi-agent collaboration — multiple engineers working on the same assembly in real time
The long-term vision: a platform where a team of three people can design, simulate, validate, and order parts for a fully functional robot — without anyone opening SolidWorks or KiCad.