Inspiration
Most people who need large-scale compute—portfolio analysis, anomaly detection, route optimization, climate risk—don’t know Slurm, partitions, or job specs. They just know: “I need this analyzed.” We wanted to close that gap. What if you could describe your problem in plain English, drop in a file, and have an AI figure out the rest? No cluster expertise, no scripting. Just describe what you need—we run it.
What it does
NAI turns natural language and file uploads into simulated HPC jobs and insights.
You type things like “Analyze my 50k stock portfolio,” “Scan this CSV for anomalies,” or “Monte Carlo simulation, 50k iterations.” Upload a CSV (or other data) when relevant. Our agent analyzes the problem, chooses strategies, estimates cost and memory, writes Python, and generates a Slurm-style job spec—all behind the scenes. You get results via memory graphs, generated code, job specs, and protein-structure–style visualizations in floating windows, plus plain-language explanations in the chat. No HPC knowledge required.
How we built it
User Interface: Sketched layouts and frames by hand, translated them into Figma wireframes, then used Figma Make to turn the wireframes into prototypes.
Frontend: Used Figma MCP integration with Google Antigravity to translate prototypes into components. Next.js app with a landing page, chat UI, file upload, and floating visualization windows (memory graphs, code, job specs, protein structure). Styled with Tailwind; Auth0 for auth.
Agent backend: Google ADK–style agent framework powered by Nemotron as the LLM. A tool pipeline drives the workflow:
analyzeProblem → readUploadedFile → generateRoadmap → planToolCalls → costAnalyzer → chooseStrategies → writeCode → writeJobSpec → predictMemoryUsage → saveArtifactsToDb → uploadJob, with generateReasoning between steps so the user sees why decisions are made.
Orchestration: A job orchestrator runs the pipeline step by step, updates status, and emits tool-call events. A simulated Slurm scheduler and processor execute jobs so we can demo the full flow without a real cluster.
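The orchestration described above can be sketched as an ordered queue of tools that each read and extend a shared context, with reasoning events emitted between steps. Only the tool step names come from NAI; the ToolContext shape, the stub handlers, and runPipeline are hypothetical illustrations.

```typescript
// Sketch of the step-by-step orchestrator. Each tool takes the shared
// ToolContext and returns an extended one; events fire around each step
// (the generateReasoning hook). Handler bodies are stubs.
type ToolContext = { input: string; artifacts: Record<string, unknown> };
type Tool = { name: string; run: (ctx: ToolContext) => Promise<ToolContext> };

const stub = (name: string): Tool => ({
  name,
  run: async (ctx) => ({
    ...ctx,
    artifacts: { ...ctx.artifacts, [name]: "done" },
  }),
});

const pipeline: Tool[] = [
  "analyzeProblem", "readUploadedFile", "generateRoadmap", "planToolCalls",
  "costAnalyzer", "chooseStrategies", "writeCode", "writeJobSpec",
  "predictMemoryUsage", "saveArtifactsToDb", "uploadJob",
].map(stub);

async function runPipeline(
  input: string,
  onEvent: (msg: string) => void,
): Promise<ToolContext> {
  let ctx: ToolContext = { input, artifacts: {} };
  for (const tool of pipeline) {
    onEvent(`reasoning: starting ${tool.name}`); // shown to the user
    ctx = await tool.run(ctx);
    onEvent(`status: ${tool.name} complete`);
  }
  return ctx;
}
```

Keeping each step as a named tool (rather than one giant prompt) is what makes the status updates and per-step reasoning messages possible.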
Stack: TypeScript, Next.js, Tailwind, Nemotron API, React for visualizations. Monorepo with Turborepo.
Challenges we ran into
LLM output truncation: Long code or JSON sometimes got cut off. We added robust JSON extraction (find the first {, then the last }), fallbacks for partial parses, and explicit handling when finishReason === 'length'.
Pipeline design: With many tools and shared context, ordering and error handling mattered a lot. We iterated on the tool queue, tool-specific handlers (e.g. saveArtifactsToDb, generateReasoning), and how we pass ToolContext between steps.
Non-expert UX: Showing code, job specs, and memory curves without overwhelming users. We used floating, dismissible windows and reasoning messages so each artifact stays optional but discoverable.
Simulating HPC believably: Making the Slurm simulator feel realistic (partitions, nodes, status updates) while keeping the abstraction clean so we could swap in a real cluster later.
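The defensive JSON extraction mentioned above can be sketched as a small helper: slice from the first { to the last }, attempt a parse, and on failure make a best-effort repair of truncated output. The function name and repair heuristic are illustrative, not NAI's exact implementation.

```typescript
// Best-effort JSON extraction from LLM output, which may be wrapped in
// markdown fences or truncated mid-object (finishReason === 'length').
function extractJson(raw: string): unknown | null {
  const start = raw.indexOf("{");
  const end = raw.lastIndexOf("}");
  if (start === -1) return null;
  // Slice first "{" .. last "}"; if no "}" survived truncation, take the rest.
  const candidate = end > start ? raw.slice(start, end + 1) : raw.slice(start);
  try {
    return JSON.parse(candidate);
  } catch {
    // Fallback for truncated output: drop a trailing comma and append the
    // closing braces that were cut off, then try once more.
    let repaired = candidate.replace(/,\s*$/, "");
    const opens = [...repaired].filter((c) => c === "{").length;
    const closes = [...repaired].filter((c) => c === "}").length;
    repaired += "}".repeat(Math.max(0, opens - closes));
    try {
      return JSON.parse(repaired);
    } catch {
      return null; // give up rather than return garbage
    }
  }
}
```

This handles markdown-wrapped and brace-truncated replies; output cut off mid-string still returns null, which is where a retry with a shorter prompt comes in.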
Accomplishments that we're proud of
End-to-end flow: Natural language + file → analyzed problem → roadmap → cost plan → generated code → job spec → simulated run → visualizations and chat. All without the user touching Slurm or writing job scripts.
Agentic AI doing real workflow design: The model doesn’t just answer questions—it analyzes the task, picks strategies, estimates resources, writes execution code, and explains its reasoning. That’s the “creative use of generative/agentic AI” we wanted to showcase.
Rich, contextual UI: Memory graphs, code windows, job-spec previews, and protein-structure–style viz in floating windows, tied to specific tool steps. Progress and reasoning visible as the job runs.
Production-style structure: Type-safe TypeScript, tool registry, clear separation between orchestrator, tools, and Slurm simulation. Built to extend to a real cluster.
What we learned
Discrete tools beat one giant prompt. Breaking the HPC workflow into explicit tools (analyze → plan → cost → code → spec → run) made the agent reliable and debuggable.
JSON from LLMs is brittle. Truncation, stray markdown, and minor syntax errors are common. Defensive parsing and “best effort” extraction were essential.
Abstraction is UX. Making HPC accessible meant thinking like a non-expert at every layer: inputs (language + files), process (hidden complexity), and outputs (graphs + explanations, not raw logs).
Simulation accelerated iteration. Simulating Slurm let us tune the pipeline and UI without cluster access, and keeps the path to real HPC clear.
What's next for NAI
- Real cluster integration: Point the job submission at an actual Slurm (or similar) cluster so users run real HPC jobs.
- More domains: Templates and tooling for finance, logistics, climate, and other “analyze at scale” use cases.
- Richer outputs: More chart types, exportable reports, and clearer “so what?” summaries for decision-making.
- Persistence and history: Saved jobs, result history, and optional sharing/collaboration.
- Broader inputs: Support for more file types, bigger uploads, and optional links to cloud storage.
Built With
- alphafold
- auth0
- google-adk
- nemotron
- oauth
- react
- typescript
