Inspiration
In microgravity, fluid shifts toward the head. Intracranial pressure rises. The optic nerve swells. Vision distorts and, in some cases, never fully recovers. This is SANS (Spaceflight-Associated Neuro-Ocular Syndrome), one of the most serious unresolved health risks standing between humanity and Mars. Space travel is becoming more commercial, but a private ISS trip still runs $35,000 to $50,000 per night. For the rare few who make it up there, every safety tool has to be lightweight, fast, and accessible. Current SANS screening is reactive, expensive, and built for a world where only government astronauts go to space. That world is ending. SANSight cuts some of that cost by replacing dedicated screening hardware and specialist oversight with a simple biometric check-in anyone can do. We asked: what if any astronaut could get a personalized SANS risk score from a quick biometric check-in? That question became SANSight.
What it does
SANSight takes in an astronaut's biometric data and outputs a personalized SANS risk percentage — an early warning score, not a diagnosis. Inputs:
- Retinal scan image (uploaded by the user)
- Sodium / salt intake levels
- Vitamin D, Calcium, and Magnesium levels
Outputs:
- A SANS risk score (0–100%)
- Top contributing risk factors
- Trend tracking over time
The system is designed to be lightweight, interpretable, and actionable — giving astronauts and flight surgeons a continuous signal to act on before symptoms appear.
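The pipeline above (image model plus biometric model in, single risk percentage out) can be sketched as a simple late-fusion step. This is an illustrative sketch only: the weights, factor names, and two-factor cutoff are assumptions, not the tuned values from our models.

```python
# Illustrative late fusion: blend the retinal-image model's probability with
# the tabular model's probability, then surface the top contributing factors.
# The 0.6 image weight and the factor names are placeholder assumptions.
def sans_risk(image_prob: float, tabular_prob: float,
              factor_contributions: dict[str, float],
              image_weight: float = 0.6) -> dict:
    """Return a 0-100% risk score plus the top contributing factors."""
    score = 100 * (image_weight * image_prob + (1 - image_weight) * tabular_prob)
    top = sorted(factor_contributions, key=factor_contributions.get, reverse=True)[:2]
    return {"risk_pct": round(score, 1), "top_factors": top}

result = sans_risk(0.7, 0.4, {"sodium": 0.5, "vitamin_d": 0.3,
                              "calcium": 0.1, "magnesium": 0.1})
# risk_pct: 58.0, top_factors: ["sodium", "vitamin_d"]
```

Keeping the fusion this transparent is what makes the "top contributing risk factors" output cheap to compute and easy to explain.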
How we built it
Machine learning
- Scaffolded two models on Friday: an EfficientNet-B0 CNN for retinal image analysis and a Random Forest tabular model via scikit-learn for biometric data
- Sourced and cleaned five datasets across Kaggle and NASA LSDA, aligning astronaut nutrition variables (Vitamin D, Calcium, Magnesium, Sodium) and retinal scans into a unified feature schema
- Trained and tuned both models on Saturday, iterating on hyperparameters until predictions were meaningful
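The tabular side of the stack is straightforward scikit-learn. A minimal sketch of that Random Forest pipeline, using synthetic stand-in data (the real training used the aligned LSDA-derived features, and the toy label rule here is invented purely for illustration):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the four aligned nutrition features
# (sodium, vitamin D, calcium, magnesium).
rng = np.random.default_rng(42)
X = rng.normal(size=(500, 4))
# Toy label rule (assumption for the sketch): high feature 0 with low
# feature 1 marks the positive class.
y = ((X[:, 0] - X[:, 1]) > 0.5).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
risk_prob = clf.predict_proba(X_te)[:, 1]  # per-sample probability of class 1
```

`predict_proba` is what feeds the percentage score, and `clf.feature_importances_` gives the per-factor contributions for the dashboard.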
Frontend
- Built the React / Next.js frontend in parallel with the ML work on Friday
- Developed the risk dashboard with score breakdown, contributing factors, and trend tracking, which came together in the final hours
- Added space-themed UI elements including planets and rockets to match the mission aesthetic
Backend
- Stood up the FastAPI backend on Saturday and wired up endpoints for retinal upload, biometric input, and risk aggregation
- Connected the frontend to live API responses once both ends were stable
Authentication
- Integrated World ID on Friday alongside frontend scaffolding for privacy-preserving identity verification
AI assistant
- Added a Llama 3-powered LLM into the risk dashboard so users can ask questions about their results and better understand their risk score
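Grounding the assistant in the user's actual result comes down to prompt construction. A hedged sketch of that step (the wording and function are illustrative; the actual Llama 3 call via Groq is omitted):

```python
def build_assistant_prompt(risk_pct: float, top_factors: list[str]) -> str:
    """Compose the context sent to the LLM so it explains this specific
    result. The exact wording here is illustrative, not our production prompt."""
    factors = ", ".join(top_factors)
    return (
        "You are a health assistant explaining a SANS *risk indicator*, "
        "not a diagnosis.\n"
        f"The user's current risk score is {risk_pct:.0f}% and the top "
        f"contributing factors are: {factors}.\n"
        "Answer questions about these results in plain language."
    )

prompt = build_assistant_prompt(58.0, ["sodium", "vitamin_d"])
```

Baking the "indicator, not diagnosis" framing into the system context keeps the assistant's answers consistent with the dashboard's positioning.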
Challenges we ran into
- Scarce SANS-specific data: very little labeled SANS data exists. We augmented our datasets extensively (flipping, rotation, brightness shifts, and synthetic oversampling) to build a training set large enough to train our models meaningfully.
- Dataset size: our retinal image datasets were far too large for standard Git, so we set up Git LFS (Large File Storage) to version the full dataset across the team without hitting repository size limits.
- Integrating World ID: wiring privacy-preserving authentication into our stack meant navigating the World ID SDK carefully, handling verification flows, and ensuring user identity persisted correctly across sessions without storing sensitive data.
- Merging frontend and backend: connecting the Next.js frontend to the FastAPI backend mid-hackathon introduced the usual cross-origin headaches, schema mismatches, and async timing issues that come with two people building both ends simultaneously under time pressure.
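The augmentation step above can be sketched with plain NumPy. The specific probabilities and the ±0.1 brightness range are illustrative assumptions, not our tuned pipeline:

```python
import numpy as np

def augment(image: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Random horizontal flip, 90-degree rotation, and brightness shift,
    for images with pixel values in [0, 1]. Parameters are illustrative."""
    out = image.copy()
    if rng.random() < 0.5:
        out = np.flip(out, axis=1)             # horizontal flip
    out = np.rot90(out, k=rng.integers(0, 4))  # random 90° rotation
    shift = rng.uniform(-0.1, 0.1)             # brightness shift
    return np.clip(out + shift, 0.0, 1.0)

rng = np.random.default_rng(0)
img = rng.random((64, 64, 3))
aug = augment(img, rng)
```

Synthetic oversampling of the minority class (e.g. SMOTE-style interpolation) was applied on top of these geometric and photometric transforms.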
Accomplishments that we're proud of
- Built a working end-to-end multimodal ML system combining computer vision and tabular learning in a single hackathon weekend
- Integrated World ID for privacy-preserving identity verification, critical for sensitive health data
- Created a smart exercise re-log feature that saves past workouts for frictionless repeat entries
- Grounded every model feature in real NASA datasets (LSDA nutrition and exercise data)
What we learned
- Transfer learning on domain-adjacent datasets (general retinal disease → SANS proxy) can produce surprisingly useful signal even without ground-truth SANS labels
- Multimodal fusion is hard to get right — the weighting strategy matters enormously for user-facing interpretability
- Health-adjacent AI products need unusually careful UX framing: the difference between "risk indicator" and "diagnosis" has to be obvious at every touchpoint
What's next for SANSight
- Validated SANS dataset: requesting labeled astronaut retinal data directly from NASA to retrain our models with real ground truth
- Wearable integration: pulling heart rate variability, sleep quality, and activity data automatically from smartwatches
- Longitudinal modeling: moving from snapshot risk scores to time-series predictions that catch trajectory changes early
- Flight surgeon dashboard: a secondary view aggregating crew-level risk for mission medical officers
- Extension to other extreme environments: deep-sea divers and high-altitude pilots face similar intracranial pressure dynamics
Built With
- cnn
- groq
- next.js
- python
- randomforest
- react
- typescript