Every year, thousands of homebuyers close on properties without understanding the full risk picture — hidden title defects, wildfire exposure, flood zones, and earthquake proximity.
Title insurance companies spend millions manually underwriting these risks.
We set out to build a tool that makes professional-grade property risk analysis accessible to any buyer or agent in seconds.
What it does
PROP.INTEL generates a comprehensive risk report for any U.S. property address in under 10 seconds.
It produces two scored risk dimensions:
Title Risk
Flags foreclosures, liens, rapid flips, and ownership chain anomalies using real county records.
Hazard Risk
Uses a machine learning model trained on FEMA National Risk Index data across 3,000+ U.S. counties.
- Ridge regression learns per-peril hazard weights from county-level Expected Annual Loss (EAL) figures, with min-max scalers fit on the same data
- Produces a 0–100 composite score across 8 perils:
- Earthquake
- Wildfire
- Flood
- Hurricane
- Tornado
- Hail
- Strong Wind
- Coastal Flood
The model then estimates annual dollar losses for the specific property by scaling county EAL against the property's assessed value, generating per-peril figures such as:
- ~$2.3K/year wildfire exposure
- ~$890/year flood loss
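The scaling step can be sketched as below; the function name, the building-stock denominator, and the sample figures are illustrative assumptions, not the production model:

```typescript
// Hypothetical sketch: scale a county-wide Expected Annual Loss down to
// one property by its share of the county's building stock value.
// All names and figures here are assumptions for illustration.
function perilAnnualLoss(
  countyEal: number,           // county-wide EAL for one peril, in dollars
  countyBuildingValue: number, // total county building stock value, in dollars
  propertyValue: number        // this property's AVM / assessed value
): number {
  if (countyBuildingValue <= 0) return 0;
  return countyEal * (propertyValue / countyBuildingValue);
}

// e.g. a $46M county wildfire EAL, a $10B building stock, and a $500K home
// work out to roughly $2,300/year of wildfire exposure.
```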
AI Synthesis Layer
An AI summary (GPT-4o) synthesizes both dimensions into a plain-English underwriting recommendation, explaining why each hazard score is high or low based on real local geography:
Proceed · Caution · High-Risk · Avoid
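The four-tier call could be expressed as a simple threshold map over the two scores; the cutoffs below are illustrative assumptions, since in PROP.INTEL the recommendation comes from the GPT-4o synthesis rather than a fixed table:

```typescript
// Illustrative threshold mapping (cutoffs are assumptions, not the
// production logic, which is driven by the GPT-4o synthesis layer).
type Recommendation = "Proceed" | "Caution" | "High-Risk" | "Avoid";

function recommend(titleRisk: number, hazardRisk: number): Recommendation {
  const worst = Math.max(titleRisk, hazardRisk); // both on 0-100 scales
  if (worst < 25) return "Proceed";
  if (worst < 50) return "Caution";
  if (worst < 75) return "High-Risk";
  return "Avoid";
}
```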
How we built it
Frontend
- Next.js 14
- TypeScript
- Tailwind CSS
- Recharts (interactive hazard breakdown visualization)
Backend
- Next.js API routes
- Server-side rendered property pages calling internal API endpoints
Property Data
- ATTOM Data API (ownership history, sales, AVM estimates, property details)
Hazard Model
- FEMA National Risk Index (2023) dataset loaded at build time
- Ridge regression across 3,000+ U.S. counties
- Min-max scaling to normalize Expected Annual Loss into a 0–100 composite score
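The normalization step might look like the sketch below, assuming per-peril min-max scaling and a weighted average for the composite; the weights stand in for the trained ridge coefficients:

```typescript
// Sketch of min-max normalization into a 0-100 score, plus a weighted
// composite across perils. Weights are placeholders for the ridge fit.
function minMaxScale(value: number, min: number, max: number): number {
  if (max === min) return 0;
  // Clamp so out-of-range counties don't escape the 0-100 band.
  const scaled = ((value - min) / (max - min)) * 100;
  return Math.min(100, Math.max(0, scaled));
}

function compositeScore(perilScores: number[], weights: number[]): number {
  const totalWeight = weights.reduce((a, b) => a + b, 0);
  if (totalWeight === 0) return 0;
  const weighted = perilScores.reduce((sum, s, i) => sum + s * weights[i], 0);
  return weighted / totalWeight;
}
```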
AI Layer
- GPT-4o-mini → location-specific hazard explanations grounded in FEMA EAL values
- GPT-4o → final structured risk narrative
County Resolution
- FCC Census Block API (primary)
- GPT fallback for coordinate → county FIPS mapping
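The layered resolution might be sketched like this; the FCC Area API response shape shown is our assumption and should be verified against the live endpoint:

```typescript
// Sketch of coordinate → county FIPS via the FCC Area API with a
// pluggable fallback. The JSON shape is an assumption.
type FccAreaResponse = { results?: Array<{ county_fips?: string }> };

function extractCountyFips(data: FccAreaResponse): string | null {
  return data.results?.[0]?.county_fips ?? null;
}

async function resolveCountyFips(
  lat: number,
  lon: number,
  fallback: (lat: number, lon: number) => Promise<string>
): Promise<string> {
  try {
    const res = await fetch(
      `https://geo.fcc.gov/api/census/area?lat=${lat}&lon=${lon}&format=json`
    );
    const fips = extractCountyFips(await res.json());
    if (fips) return fips;
  } catch {
    // fall through to the GPT-based fallback
  }
  return fallback(lat, lon);
}
```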
Challenges we ran into
- ATTOM free tier capped at ~100 API calls/day
- FEMA NRI is county-level; parcel-level approximation required building stock ratio adjustments
- Passing live FEMA EAL data through four component layers to power dynamic Recharts tooltips
- Inconsistent geocoding coverage required layered FCC + GPT fallback for reliable county resolution
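A quota that small can be stretched with server-side caching keyed by address; a minimal in-memory sketch (the TTL and names are illustrative assumptions):

```typescript
// Minimal in-memory cache sketch to conserve ATTOM's ~100 calls/day.
// TTL and structure are illustrative assumptions.
const cache = new Map<string, { data: unknown; expires: number }>();
const TTL_MS = 24 * 60 * 60 * 1000; // parcels change slowly; cache for a day

async function cachedFetch<T>(
  key: string,
  fetcher: () => Promise<T>
): Promise<T> {
  const hit = cache.get(key);
  if (hit && hit.expires > Date.now()) return hit.data as T;
  const data = await fetcher();
  cache.set(key, { data, expires: Date.now() + TTL_MS });
  return data;
}
```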
Accomplishments we’re proud of
- A real actuarial model backed by FEMA data — not synthetic scoring
- Per-property annual loss estimates (e.g., ~$2.3K/year wildfire exposure) derived from county EAL scaled by AVM
- AI explanations referencing real local geography
- Full portfolio tracking for side-by-side property risk comparison
What we learned
- Public datasets like FEMA NRI and FCC Census APIs become powerful when paired with ML normalization
- LLMs work best as a synthesis layer on top of structured data — GPT explains the numbers, it does not invent them
- Next.js API routes allow secure server-side key management while maintaining fast SSR performance
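The server-side key pattern looks roughly like this, shown as a Web-standard route handler (Next.js App Router style); the route shape, env var name, and ATTOM endpoint path are assumptions for illustration:

```typescript
// Sketch of a server-side route handler. The ATTOM endpoint, route
// shape, and env var name are assumptions; the point is that the key
// is read from the server environment and never ships to the client.
export async function GET(request: Request): Promise<Response> {
  const { searchParams } = new URL(request.url);
  const address = searchParams.get("address");
  if (!address) {
    return Response.json({ error: "address required" }, { status: 400 });
  }
  const upstream = await fetch(
    "https://api.gateway.attomdata.com/propertyapi/v1.0.0/property/detail" +
      `?address1=${encodeURIComponent(address)}`,
    { headers: { apikey: process.env.ATTOM_API_KEY ?? "" } }
  );
  return Response.json(await upstream.json());
}
```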
What’s next for PROP.INTEL
- Parcel-level hazard data (FEMA flood polygons, CAL FIRE FHSZ boundaries)
- Historical climate trend overlays (10-year wildfire & flood shifts)
- Lender and title agent dashboard for portfolio underwriting
- Real-time lien and foreclosure monitoring via county recorder APIs
Built With
- attom
- css
- fcc
- fema
- gpt-4o
- ml
- next.js
- node.js
- openai
- react
- regression
- tailwind
- typescript