Inspiration
Although modern medicine and imaging technologies are powerful, innovation in medical environments is slow to translate into practice because of strict patient-safety and privacy requirements. Technologies that significantly improve the healthcare process without risking harm to patients are difficult to create, but NeuronB aims to do just that.
What it does
NeuronB takes two MRI scans with different markers (FLAIR and T1Gd), runs them through a deep learning model to segment the tumor, and returns a 3D visualization — all within seconds. The result is an easy-to-understand, 3-dimensional model of the tumor in the brain space, giving a clearer idea of where the tumor exists that might be difficult to see through a series of 2D MRI slices.
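The per-slice inference flow described above can be sketched roughly as follows. This is an illustrative reconstruction, not NeuronB's actual code: `segment_volume`, the tensor shapes, and the stand-in model are all assumptions made for the example.

```python
import torch

# Hypothetical sketch of the inference step: the two MRI modalities
# (FLAIR and T1Gd) are stacked as input channels, each 2D slice is
# segmented by the model, and the predicted masks are restacked into a
# 3D volume for visualization.
def segment_volume(model, flair, t1gd, threshold=0.5):
    """flair, t1gd: float tensors of shape (D, H, W)."""
    volume = torch.stack([flair, t1gd], dim=1)           # (D, 2, H, W)
    model.eval()
    with torch.no_grad():
        logits = model(volume)                           # (D, 1, H, W)
        masks = (torch.sigmoid(logits) > threshold).squeeze(1)
    return masks                                         # (D, H, W), boolean

# Tiny stand-in "model" so the sketch runs end to end; the real system
# would use the trained 2D UNet here.
dummy = torch.nn.Conv2d(2, 1, kernel_size=3, padding=1)
flair = torch.rand(4, 64, 64)
t1gd = torch.rand(4, 64, 64)
mask = segment_volume(dummy, flair, t1gd)
print(mask.shape)  # torch.Size([4, 64, 64])
```

The resulting boolean volume is what a tool such as vedo can render as a 3D mesh of the tumor.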
How we built it
We trained a 2D UNet model on the NIH's BraTS-TCGA-LGG dataset, slicing each 3D MRI into 2D tensors across the two tumor markers (FLAIR and T1Gd) and feeding them into a training cycle over 15 epochs.
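The slicing step can be illustrated with a minimal sketch, assuming axial slicing and illustrative shapes; `make_slices` and the shapes are hypothetical, not the project's actual preprocessing code.

```python
import torch

# Hedged sketch of the preprocessing step: each 3D scan pair is split
# along the first (axial) axis into 2-channel 2D inputs (FLAIR + T1Gd),
# each paired with the matching segmentation slice.
def make_slices(flair_3d, t1gd_3d, seg_3d):
    """All inputs: tensors of shape (D, H, W). Returns a list of
    (input, target) pairs with input (2, H, W) and target (1, H, W)."""
    pairs = []
    for i in range(flair_3d.shape[0]):
        x = torch.stack([flair_3d[i], t1gd_3d[i]])   # (2, H, W)
        y = seg_3d[i].unsqueeze(0)                   # (1, H, W)
        pairs.append((x, y))
    return pairs

flair = torch.rand(8, 128, 128)
t1gd = torch.rand(8, 128, 128)
seg = (torch.rand(8, 128, 128) > 0.9).float()
pairs = make_slices(flair, t1gd, seg)
print(len(pairs), pairs[0][0].shape)  # 8 torch.Size([2, 128, 128])
```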
Important statistics (see images attached):
- Final validation loss = 0.0098 (lower is better)
- Final Dice score = 0.9421 (1 is perfect)
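For context on the Dice score reported above: it measures overlap between the predicted and ground-truth masks as 2|P ∩ G| / (|P| + |G|), with 1.0 a perfect match. The sketch below shows the standard formulation, not the project's exact evaluation code.

```python
import torch

# Soft Dice coefficient: 2 * |pred ∩ target| / (|pred| + |target|).
# The epsilon guards against division by zero on empty masks.
def dice_score(pred, target, eps=1e-8):
    pred = pred.float().flatten()
    target = target.float().flatten()
    intersection = (pred * target).sum()
    return (2 * intersection + eps) / (pred.sum() + target.sum() + eps)

perfect = torch.ones(4, 4)
print(float(dice_score(perfect, perfect)))  # 1.0

half = torch.zeros(4, 4)
half[:2] = 1.0                              # predicts half the mask
print(float(dice_score(half, perfect)))     # 2*8 / (8+16) ≈ 0.667
```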
Accomplishments that we're proud of
Our final model was surprisingly accurate, with validation losses and Dice scores of publishable quality (that is, setting aside scientific significance, they could have appeared in a paper).
What's next for NeuronB: Medical Imaging Made Simple
Future steps would likely involve splitting the model into further parts, handling FLAIR and T1Gd separately, as well as adding other segmentation classes for a more thorough representation of the scan. Additionally, upgrading the model to a 3D UNet could prove beneficial, since the 2D counterpart was chosen for performance reasons.
Built With
- dain
- flask
- python
- pytorch
- typescript
- unet
- vedo
- vercel