Inspiration

Brain tumor diagnosis depends on MRI analysis, which is a slow, manual process requiring specialist radiologists. We wanted to build a tool that could instantly segment and visualize tumor regions from raw MRI scans, using powerful diagnostic AI.

What it does

NeuroSight takes a raw brain MRI scan (.nii.gz) and automatically identifies three tumor sub-regions: the Necrotic Core, Edema, and Enhancing Tumor. It renders results as an interactive mesh with clipping plane controls, a synchronized 2D slice viewer across all three anatomical axes, per-region volume statistics, and AI-generated clinical explanations for each tumor region.
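The per-region volume statistics boil down to counting labelled voxels and scaling by the voxel size. A minimal sketch, assuming the BraTS label convention (1 = necrotic core, 2 = edema, 4 = enhancing tumor); the function name and spacing default are illustrative, not NeuroSight's actual code:

```python
import numpy as np

# Assumed BraTS-style label convention:
# 1 = necrotic core, 2 = peritumoral edema, 4 = enhancing tumor.
REGIONS = {"Necrotic Core": 1, "Edema": 2, "Enhancing Tumor": 4}

def region_volumes_ml(seg: np.ndarray, voxel_spacing_mm=(1.0, 1.0, 1.0)) -> dict:
    """Per-region volume in millilitres from a labelled segmentation volume."""
    voxel_ml = float(np.prod(voxel_spacing_mm)) / 1000.0  # mm^3 -> mL
    return {name: int((seg == label).sum()) * voxel_ml
            for name, label in REGIONS.items()}
```

With 1 mm isotropic voxels, each labelled voxel contributes 0.001 mL to its region's total.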

How we built it

We started with a 2D U-Net trained on BraTS 2021 MRI data, feeding it the four imaging modalities (T1, T1ce, T2, FLAIR) per slice to detect the different tumor regions. The backend runs on Flask and handles everything from processing the raw scan files to generating the 3D meshes using marching cubes. On the frontend, we used Three.js for the 3D viewer and built the 2D slice navigator from scratch with the Canvas API. Tumor region explanations are pulled from the Claude API.
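The mesh-generation step can be sketched as follows, assuming scikit-image's `marching_cubes` (the library choice is our assumption; the source only names the algorithm):

```python
import numpy as np
from skimage import measure  # assumed implementation of marching cubes

def label_to_mesh(seg: np.ndarray, label: int, spacing=(1.0, 1.0, 1.0)):
    """Extract a triangle mesh for one tumor sub-region.

    Binarizes the segmentation for the given label, then runs marching
    cubes at the 0.5 iso-level so the surface sits on the voxel boundary.
    `spacing` scales vertices into millimetres for correct proportions.
    """
    mask = (seg == label).astype(np.uint8)
    verts, faces, normals, _ = measure.marching_cubes(
        mask, level=0.5, spacing=spacing
    )
    return verts, faces, normals
```

The vertices and face indices returned here are what the Flask backend would hand to the Three.js viewer.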

Challenges we ran into

The 3D clipping plane gave us the most trouble: when you slice through a mesh in Three.js, you get a hollow cutout, which looks wrong for medical visualization. Getting solid filled caps required a multi-pass stencil-buffer approach that took a lot of trial and error to get right. Keeping the 2D and 3D viewers in sync was also trickier than expected, since they operate in completely different coordinate systems, so we ended up building a fraction-based sync layer between them. And marching cubes on large tumor volumes would sometimes spit out meshes with hundreds of thousands of vertices that tanked browser performance, so we had to write adaptive decimation logic on the fly.
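The fraction-based sync idea is simple: each viewer converts its native position into a normalized [0, 1] fraction along the anatomical axis, and the other viewer converts that fraction back into its own coordinates. A minimal sketch (function names and ranges are hypothetical):

```python
def slice_index_to_fraction(idx: int, n_slices: int) -> float:
    """2D viewer side: map a slice index to a [0, 1] position on the axis."""
    return idx / max(n_slices - 1, 1)

def fraction_to_world(frac: float, axis_min_mm: float, axis_max_mm: float) -> float:
    """3D viewer side: map the shared fraction into world coordinates,
    e.g. to position the clipping plane along that axis."""
    return axis_min_mm + frac * (axis_max_mm - axis_min_mm)
```

Because the fraction is unitless, neither viewer needs to know the other's resolution or extent.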

Accomplishments that we're proud of

  • A complete end-to-end pipeline from raw MRI upload to interactive 3D visualization in seconds
  • Professional stencil-buffered clipping with solid interior caps
  • Synchronized 2D/3D navigation that accurately reflects the same anatomical position across viewers
  • Federated learning infrastructure for privacy-preserving training across institutions

What we learned

Processing MRI volumes slice-by-slice is way more viable than we initially thought, as long as you do proper post-processing to clean up the 3D result. We also went deep into WebGL internals we'd never thought about before. Stencil buffers aren't something most web developers ever need to think about, but they're incredibly powerful once you understand the render ordering. Most importantly, we realized how much of medical imaging comes down to data representation: getting the coordinate spaces, voxel spacing, and axis orientations right is half the battle.

What's next for NeuroSight

  • Upgrading to a 3D U-Net to capture inter-slice spatial context
  • Adding DICOM support for hospital-standard imaging formats
  • Deploying the federated learning server so multiple institutions can collaboratively train without sharing patient data
  • Integrating longitudinal scan comparison to track tumor progression over time

Built With

Flask · Three.js · Canvas API · Claude API · U-Net · marching cubes