Conduit
Unifying wastewater treatment operations through real-time intelligence, AI copilot assistance, and immersive 3D visualization.
Inspiration
Wastewater treatment is one of the most critical, and most overlooked, pieces of public infrastructure. Every day, billions of gallons of water pass through treatment facilities that protect public health and the environment. Yet the operators running these plants are often stuck juggling disconnected legacy systems, paper logbooks, and manual compliance workflows.
We kept coming back to one question: what if plant operators had the same caliber of tooling that software engineers take for granted? Real-time dashboards, intelligent search across historical data, AI-powered copilots, and mobile-first field tools, all unified under a single platform.
The gap between how modern software teams work and how critical infrastructure teams work inspired us to build Conduit, a platform that brings real-time monitoring, semantic knowledge retrieval, and AI assistance to the people keeping our water safe.
What It Does
Conduit is a full-stack operations platform for wastewater treatment plants. It provides:
Real-Time Monitoring Dashboard — Live KPI cards tracking flow rates (MGD), compliance percentages, active alerts, and open work orders. Animated sensor gauges for pH, dissolved oxygen, TSS, turbidity, chlorine, UV intensity, temperature, ammonia, phosphorus, and ORP give operators an at-a-glance view of plant health.
AI Copilot — A conversational interface that lets operators query plant status, retrieve alerts by severity, pull sensor readings, acknowledge incidents, create e-log entries, and run semantic search across all historical logs, alerts, and work orders in natural language. Ask "what happened the last time turbidity spiked in clarifier 2?" and get contextually relevant results ranked by vector similarity.
3D Waste Visualization — An immersive React Three Fiber scene with procedurally generated textures (FBM noise, Sobel normal maps), animated water surfaces, and YOLO-detected debris classification overlaid in real time. Waste objects are classified as plastic, organic, metal, or unknown, and each class is rendered with a distinct physically-based material.
Edge Video Stream Analysis — YOLOv8n-seg running on Vultr edge GPUs processes 1080p video streams in real time, detecting and classifying waste with confidence scores, bounding boxes, and tracking IDs — all rendered in a HUD overlay.
Mobile Field App — A full Expo React Native app with tabs for Dashboard, E-Log, LiDAR Scanner, Alerts, and Work Orders. Built with offline-first architecture and smart host detection so it works reliably in the field.
Compliance & Lab Tracking — Discharge Monitoring Report generation, lab result approval workflows, and automated compliance rate calculations.
How We Built It
Architecture
```
conduit/
├── apps/
│   ├── web/              # Next.js dashboard + API
│   ├── mobile/           # Expo 54 + React Native app
│   └── vector-service/   # FastAPI semantic search engine
├── packages/
└── docker-compose.yml    # Actian Cortex vector DB + vector service — bundle and build it yourself!
```
Web Dashboard
Built with Next.js and React 19, styled with Tailwind CSS 4. The 3D visualization layer uses React Three Fiber with custom shaders, procedural texture generation (FBM noise with 5 octaves, Sobel-filter normal maps, computed roughness/AO maps), and post-processing effects (bloom, vignette). State management via Zustand, data fetching via TanStack React Query.
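The FBM noise behind those procedural textures can be sketched as follows. This is a minimal illustration of the technique, not our exact shader or canvas code; `hash2` is a stand-in integer hash for the gradient source.

```typescript
// Hash an integer lattice point (plus a seed) to a pseudo-random value in [0, 1).
function hash2(x: number, y: number, seed: number): number {
  let h = (x * 374761393 + y * 668265263 + seed * 362437) | 0;
  h = (h ^ (h >> 13)) * 1274126177;
  h = h ^ (h >> 16);
  return (h >>> 0) / 4294967296;
}

// Bilinearly interpolated value noise with a smoothstep fade.
function valueNoise(x: number, y: number, seed: number): number {
  const xi = Math.floor(x), yi = Math.floor(y);
  const xf = x - xi, yf = y - yi;
  const u = xf * xf * (3 - 2 * xf);
  const v = yf * yf * (3 - 2 * yf);
  const lerp = (a: number, b: number, t: number) => a + (b - a) * t;
  const top = lerp(hash2(xi, yi, seed), hash2(xi + 1, yi, seed), u);
  const bot = lerp(hash2(xi, yi + 1, seed), hash2(xi + 1, yi + 1, seed), u);
  return lerp(top, bot, v);
}

// Fractional Brownian motion: sum 5 octaves, halving amplitude and
// doubling frequency each octave, then normalize back into [0, 1).
export function fbm(x: number, y: number, seed: number, octaves = 5): number {
  let sum = 0, amp = 0.5, freq = 1, norm = 0;
  for (let o = 0; o < octaves; o++) {
    sum += amp * valueNoise(x * freq, y * freq, seed + o);
    norm += amp;
    amp *= 0.5;
    freq *= 2;
  }
  return sum / norm;
}
```

Because everything is derived from a seed, the same waste object always gets the same texture, which is what makes per-object caching practical.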
AI Copilot & Semantic Search
The copilot is backed by Google Generative AI for conversational intelligence and a custom vector search pipeline for knowledge retrieval.
Documents (logs, alerts, work orders) are embedded into 1536-dimensional vectors using OpenAI text-embedding-3-small, stored in Actian Cortex (a gRPC-based vector database), and queried with cosine similarity. The vector service, built in FastAPI, handles batch upserts (chunked at 100) and fire-and-forget indexing so log creation is never blocked.
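The retrieval math and the chunked-upsert pattern look roughly like this. The real service talks to Actian Cortex over gRPC, so the in-memory ranking here is only an illustration of the scoring, and the names are hypothetical.

```typescript
interface Doc { id: string; text: string; embedding: number[] }

// Cosine similarity between two embedding vectors.
function cosine(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Rank documents against a query embedding, highest similarity first.
export function rank(query: number[], docs: Doc[], topK = 10): Doc[] {
  return [...docs]
    .sort((x, y) => cosine(query, y.embedding) - cosine(query, x.embedding))
    .slice(0, topK);
}

// Split a large upsert into batches of 100, mirroring the chunk size
// the FastAPI service uses when writing to the vector DB.
export function chunk<T>(items: T[], size = 100): T[][] {
  const out: T[][] = [];
  for (let i = 0; i < items.length; i += size) out.push(items.slice(i, i + size));
  return out;
}
```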
Edge Vision Pipeline
Video streams are processed by YOLOv8n-seg deployed on Vultr edge GPUs, providing real-time waste classification with segmentation masks, confidence scores, and persistent tracking IDs — streamed back to the dashboard with a tactical HUD overlay.
Mobile App
Built with Expo 54 and Expo Router v6. Features smart API resolution (LAN detection, Expo tunnel fallback, Android emulator special-casing via 10.0.2.2), graceful offline demo mode with built-in seed data, and a custom floating tab bar. The LiDAR scanner uses camera frames as seeds for deterministic point cloud generation via a Linear Congruential Generator (LCG) seeded with DJB2 string hashes.
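The deterministic point-cloud trick can be sketched like this: a DJB2 hash of a frame identifier seeds an LCG, so the same frame always yields the same "scan". The LCG constants are the classic Numerical Recipes parameters, used here for illustration rather than taken from our app.

```typescript
// DJB2-style string hash, kept in uint32.
function djb2(s: string): number {
  let h = 5381;
  for (let i = 0; i < s.length; i++) {
    h = ((h * 33) ^ s.charCodeAt(i)) >>> 0;
  }
  return h;
}

// Linear Congruential Generator: x_{n+1} = (a*x_n + c) mod 2^32,
// returned scaled to [0, 1).
function makeLcg(seed: number): () => number {
  let state = seed >>> 0;
  return () => {
    state = (Math.imul(1664525, state) + 1013904223) >>> 0;
    return state / 4294967296;
  };
}

// Same frameKey in, same point cloud out.
export function pointCloud(frameKey: string, n: number): [number, number, number][] {
  const rand = makeLcg(djb2(frameKey));
  const pts: [number, number, number][] = [];
  for (let i = 0; i < n; i++) {
    pts.push([rand(), rand(), rand()]); // unit-cube points; the app shapes these further
  }
  return pts;
}
```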
Voice Interface
Hold-to-talk voice control powered by ElevenLabs text-to-speech, with a real-time audio visualizer using Web Audio API frequency data.
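The visualizer wiring, in simplified form: an `AnalyserNode` exposes frequency-domain bytes each animation frame, and a pure function maps them to normalized bar heights. The bucket-averaging and bar count here are illustrative choices, not our exact component.

```typescript
// Average the FFT bins into `bars` buckets, normalized to [0, 1].
export function toBarHeights(freqData: Uint8Array, bars: number): number[] {
  const bucket = Math.floor(freqData.length / bars) || 1;
  const out: number[] = [];
  for (let b = 0; b < bars; b++) {
    let sum = 0;
    for (let i = 0; i < bucket; i++) sum += freqData[b * bucket + i] ?? 0;
    out.push(sum / bucket / 255);
  }
  return out;
}

// Browser-side wiring (an AudioContext must be started from a user gesture,
// which is why hold-to-talk is a natural trigger).
export function attachVisualizer(stream: MediaStream, draw: (bars: number[]) => void) {
  const ctx = new AudioContext();
  const analyser = ctx.createAnalyser();
  analyser.fftSize = 256; // yields 128 frequency bins
  ctx.createMediaStreamSource(stream).connect(analyser);
  const data = new Uint8Array(analyser.frequencyBinCount);
  const tick = () => {
    analyser.getByteFrequencyData(data); // fills `data` with current spectrum
    draw(toBarHeights(data, 32));
    requestAnimationFrame(tick);
  };
  tick();
}
```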
Data Layer
SQLite stores the structured data (plants, sensors, readings, alerts, work orders, lab results, compliance reports, users), and RESTful API routes under /api/plants/[plantId]/* handle all CRUD operations and streaming.
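One handler in that route family looks roughly like this. It's a hedged sketch: the real handlers query SQLite, so `listAlerts` is a hypothetical stand-in for the data-access layer.

```typescript
type Alert = { id: string; severity: "low" | "high"; message: string };

// Hypothetical data access; the real app reads from SQLite.
async function listAlerts(plantId: string): Promise<Alert[]> {
  return [{ id: `${plantId}-a1`, severity: "high", message: "DO below setpoint" }];
}

// app/api/plants/[plantId]/alerts/route.ts — Next.js App Router handler,
// with the dynamic segment arriving via `params`.
export async function GET(
  _req: Request,
  { params }: { params: { plantId: string } },
) {
  const alerts = await listAlerts(params.plantId);
  return Response.json({ plantId: params.plantId, alerts });
}
```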
Challenges We Ran Into
Vector DB orchestration — Getting Actian Cortex (gRPC on port 50051) to play nicely with our FastAPI vector service inside Docker Compose required careful health-check configuration and graceful degradation when the DB wasn't ready. We implemented heartbeat checks and fallback empty results to keep the app functional even when the vector pipeline was booting.
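The graceful-degradation pattern can be sketched as below. Endpoint, port, and field names are assumptions for illustration: probe the service with a short timeout and fall back to an empty result set so the UI keeps working while the DB boots.

```typescript
export async function searchWithFallback(
  query: string,
  baseUrl = "http://localhost:8000", // assumed vector-service address
  timeoutMs = 1500,
): Promise<{ results: unknown[]; degraded: boolean }> {
  const ctrl = new AbortController();
  const timer = setTimeout(() => ctrl.abort(), timeoutMs);
  try {
    const res = await fetch(`${baseUrl}/search?q=${encodeURIComponent(query)}`, {
      signal: ctrl.signal,
    });
    if (!res.ok) throw new Error(`vector service returned ${res.status}`);
    return { results: await res.json(), degraded: false };
  } catch {
    // Service down, slow, or still booting: degrade instead of failing the page.
    return { results: [], degraded: true };
  } finally {
    clearTimeout(timer);
  }
}
```

The `degraded` flag lets the dashboard show a "semantic search warming up" state rather than an error.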
Procedural 3D textures at interactive framerates — Generating FBM noise textures, Sobel normal maps, and roughness maps procedurally for each waste object was expensive. We solved this with in-memory caching, deterministic seeding (so textures only generate once per unique object), and efficient canvas-based computation.
Keeping the copilot grounded — Making the AI copilot useful without hallucinating required careful context assembly. We retrieve the top-10 semantically relevant documents, format them with full metadata, and inject them into the model's context window — keeping responses grounded in actual plant data.
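The context-assembly step can be sketched as follows; the prompt wording and field names are illustrative, not our production prompt. Take the top-10 retrieved documents, format each with its metadata, and prepend the block to the user's question so the model answers from plant data.

```typescript
interface Retrieved {
  id: string;
  kind: "log" | "alert" | "work_order";
  timestamp: string;
  score: number; // vector similarity from retrieval
  text: string;
}

export function buildPrompt(question: string, docs: Retrieved[]): string {
  const context = docs
    .slice(0, 10) // top-10 most relevant documents
    .map(
      (d, i) =>
        `[${i + 1}] (${d.kind}, ${d.timestamp}, relevance ${d.score.toFixed(2)})\n${d.text}`,
    )
    .join("\n\n");
  return [
    "Answer using ONLY the plant records below. If the records do not",
    "contain the answer, say so instead of guessing.",
    "",
    "=== PLANT RECORDS ===",
    context,
    "=== QUESTION ===",
    question,
  ].join("\n");
}
```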
Accomplishments That We're Proud Of
End-to-end semantic search over heterogeneous operational data — a plant operator can type a natural language question and get relevant results across logs, alerts, and work orders from months ago, ranked by vector similarity.
Production-grade 3D visualization with procedurally generated PBR materials, animated water physics, and real-time YOLO detection overlays — all running smoothly in the browser.
Sub-second edge inference pipeline using YOLOv8 on Vultr edge GPUs for real-time waste classification in video streams.
A unified platform that actually ties together monitoring, compliance, maintenance, AI assistance, and field operations — the kind of integrated experience that critical infrastructure deserves.
What We Learned
Vector databases change how you think about search. Moving from keyword-based filtering to semantic similarity opens up entirely new interaction patterns. Operators don't need to remember exact terminology; they just describe what they're looking for and the system understands.
Procedural generation is an underrated superpower. By generating textures mathematically instead of loading assets, we achieved rich visuals with zero asset-pipeline overhead, and deterministic seeding means consistent renders across sessions.
The gap between "demo" and "usable" is mostly in the details. Smart host detection, proper timeout handling, shift-aware log categorization, severity-based alert grouping: these small touches are what make a platform feel real.
What's Next for Conduit
Predictive maintenance models — Using historical sensor time-series data to predict equipment failures before they happen, automatically generating preventive work orders.
Multi-plant operations — Scaling from single-plant monitoring to a unified view across an entire utility's portfolio, with cross-plant benchmarking and anomaly comparison.
Regulatory auto-filing — Automatically generating and pre-filling DMR submissions from lab results and sensor data, reducing compliance burden from hours to minutes.
On-device ML for mobile — Running lightweight anomaly detection models directly on field devices for instant alerts during inspections, even without connectivity.
Built With
- elevenlabs
- three.js
- actian
- figma
- love
- next
- typescript
