Inspiration
South Carolina is losing farmland fast: 281,000 acres were converted between 2001 and 2016, and billions of dollars in data center projects are accelerating that trend. These projects are often approved quietly, with little transparency about their water or electricity impact. A $2.4B data center was approved in Marion County during a January 2026 winter storm; most residents didn't know it was on the agenda.
The data to fight this exists. It's scattered across USGS water databases, USDA crop surveys, 46 county zoning boards, and state legislative filings. Energy Almanac connects them.
What it does
Energy Almanac is a dashboard for SC agriculture planners with three core features:
- Threat Map: An interactive map overlaying data center locations, USGS groundwater wells, USDA farmland data, and watershed boundaries. Click a data center to see its specs and which farms share its aquifer.
- Impact Analysis: For any proposed data center, model projected groundwater risk, electricity rate increases, and the number of farms within a 5/10/25-mile radius.
- Actions: Concrete ways for farmers to hold large companies accountable for South Carolina energy costs.
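The farms-within-radius piece of the Impact Analysis can be sketched with a plain haversine distance check. This is a minimal illustration, not the project's actual implementation; the coordinates and farm list below are hypothetical.

```python
import math

def haversine_miles(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in miles."""
    r = 3958.8  # mean Earth radius in miles
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlam / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def farms_within_radii(center, farms, radii=(5, 10, 25)):
    """Count farms within each radius (miles) of a proposed data center site."""
    counts = {radius: 0 for radius in radii}
    for farm in farms:
        d = haversine_miles(center[0], center[1], farm[0], farm[1])
        for radius in radii:
            if d <= radius:
                counts[radius] += 1
    return counts

# Hypothetical site near Marion, SC, and two nearby farm centroids
center = (34.178, -79.400)
farms = [(34.20, -79.42), (34.50, -79.90)]
print(farms_within_radii(center, farms))  # farm 1 is ~2 mi away, farm 2 ~36 mi
```

In production this check would run against USDA farmland boundary polygons rather than point centroids, but the radius logic is the same.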
How we built it
We built a Flask-based backend to expose a RESTful API that serves geospatial and modeling data to the frontend.
The API handles:
- Data center metadata (location, capacity, energy usage)
- USGS well and aquifer data
- USDA farmland boundaries
- Watershed overlays
- Impact modeling outputs
For the hackathon prototype, structured datasets were stored as JSON files, allowing us to rapidly iterate without provisioning a full relational or geospatial database.
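A minimal sketch of what one such Flask endpoint might look like under this JSON-file approach. The `data/` directory, file names, and record fields are assumptions for illustration, not the project's actual layout.

```python
import json
from pathlib import Path

from flask import Flask, abort, jsonify

app = Flask(__name__)
DATA_DIR = Path("data")  # hypothetical directory of structured JSON files

def load_dataset(name):
    """Load a structured JSON dataset straight from disk (no database)."""
    path = DATA_DIR / f"{name}.json"
    if not path.exists():
        abort(404)
    return json.loads(path.read_text())

@app.route("/api/datacenters")
def list_datacenters():
    """Return all data center records (location, capacity, energy usage)."""
    return jsonify(load_dataset("datacenters"))

@app.route("/api/datacenters/<dc_id>")
def get_datacenter(dc_id):
    """Return a single data center record by id."""
    records = {rec["id"]: rec for rec in load_dataset("datacenters")}
    if dc_id not in records:
        abort(404)
    return jsonify(records[dc_id])
```

The same pattern extends to the well, farmland, and watershed datasets; swapping the file loader for a geospatial database later would not change the route contracts.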
Challenges we ran into
The biggest challenge we faced was sourcing reliable open-source data for our AWS S3 buckets and for building the electricity cost structural pressure multiplier. Publicly available datasets on data center energy use, water consumption, and grid impact were extremely limited or locked behind paywalls.
We addressed this by aggregating data from multiple open-source APIs, cross-referencing public regulatory filings and agency datasets, and manually compiling verified real-world figures into structured JSON files for use in our prototype. This hybrid approach allowed us to maintain data integrity while working within public data constraints.
Accomplishments that we're proud of
Our two biggest accomplishments were building the structural pressure multiplier and successfully integrating the full AWS stack with our frontend dashboard.
We designed and implemented a modeling function that translates data center load into projected electricity rate pressure. Rather than presenting raw megawatt demand, the multiplier normalizes infrastructure strain into a structured, comparable metric that planners can interpret in economic terms.
By deploying this logic via AWS Lambda, we ensured the computation was scalable and separated from the Flask application layer. This made the model modular and production-ready while keeping the core API lightweight.
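To make the idea concrete, here is a hedged sketch of how such a Lambda handler might be shaped. The formula, the 0.5 weight, and the event field names are illustrative assumptions, not the actual model.

```python
def structural_pressure_multiplier(load_mw, grid_capacity_mw):
    """Normalize a facility's electric load into a rate-pressure multiplier.

    Illustrative model only: pressure grows linearly with the share of
    regional grid capacity the facility consumes. 1.0 means no pressure.
    """
    share = load_mw / grid_capacity_mw
    return round(1.0 + 0.5 * share, 4)

def lambda_handler(event, context):
    """AWS Lambda entry point: project rate pressure for one facility."""
    load_mw = float(event["load_mw"])
    grid_capacity_mw = float(event["grid_capacity_mw"])
    baseline_rate = float(event.get("baseline_rate_cents_kwh", 13.0))
    multiplier = structural_pressure_multiplier(load_mw, grid_capacity_mw)
    return {
        "multiplier": multiplier,
        "projected_rate_cents_kwh": round(baseline_rate * multiplier, 2),
    }
```

Keeping the model in its own handler like this is what lets the Flask API stay lightweight: the dashboard invokes the function and renders the returned projection.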
What we learned
One of our most important takeaways was how complex and constrained real-world data collection can be.
We learned that infrastructure data is often fragmented across agencies or not accessible to the public. Even when datasets are technically public, they frequently require significant cleaning and cross-referencing before they can be used in a modeling context.
What's next for Energy Almanac
Our next phase focuses on three priorities: improving the structural pressure multiplier, expanding our data pipeline, and increasing application functionality. The current multiplier provides a normalized estimate of electricity rate pressure driven by large-load facilities. Moving forward, we plan to incorporate more granular utility rate case data. We also want to expand data acquisition by partnering with state agencies or utilities for higher-resolution data.