About PredictPal
Inspiration
Forecasting tools are often powerful but inaccessible. Many people can collect data but struggle to convert it into confident decisions, or to explain those decisions clearly to others. We wanted to build something that makes forecasting feel guided, understandable, and shareable.
What We Built
PredictPal is a 5-step forecasting workflow that takes a user from raw data to a published, notebook-style story:
- Get Started: upload a target dataset and optional driver datasets.
- Process Data: select columns, frequency, and preprocessing options.
- Train & Forecast: configure baseline vs multivariate models and train.
- Analysis & Results: review evaluation metrics and interactive forecast/driver charts.
- Publish Story: turn results into a readable post with text + chart blocks, then publish to Explore.
The goal was not just model output, but communication quality.
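As an illustration of the Process Data step described above, here is a minimal pandas sketch of selecting a target plus driver columns and resampling to a chosen frequency. The `preprocess` function and its parameters are hypothetical, not PredictPal's actual API:

```python
import pandas as pd

def preprocess(df: pd.DataFrame, date_col: str, target_col: str,
               driver_cols: list[str], freq: str = "MS") -> pd.DataFrame:
    """Illustrative Step 2: select columns, set frequency, fill small gaps."""
    df = df[[date_col, target_col, *driver_cols]].copy()
    df[date_col] = pd.to_datetime(df[date_col])
    df = df.set_index(date_col).sort_index()
    # Resample to the chosen frequency (month-start here) and
    # interpolate gaps of up to two periods.
    return df.resample(freq).mean().interpolate(limit=2)

raw = pd.DataFrame({
    "date": ["2024-01-05", "2024-02-10", "2024-03-07"],
    "sales": [100.0, 120.0, 130.0],
    "ad_spend": [10.0, 12.0, 11.0],
})
clean = preprocess(raw, "date", "sales", ["ad_spend"])
```

The cleaned frame then flows into Train & Forecast with one row per period and one column per series.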
The process is guided by a context-aware LLM chatbot, "PredictPal," that can see your decisions and selections. It walks you through each step, explains the available options, and can auto-select suggested features for a given dataset.
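A "context-aware" assistant like this boils down to injecting the user's current session state into the prompt. The sketch below shows one way to do that; `build_prompt` and the `session` fields are illustrative assumptions, not PredictPal's actual implementation:

```python
# Assemble an LLM prompt that includes the user's current workflow context,
# so the assistant's answers reflect what the user has already selected.
def build_prompt(session: dict, user_message: str) -> str:
    context_lines = [
        f"Current step: {session.get('step', 'Get Started')}",
        f"Selected target: {session.get('target', 'none')}",
        f"Selected drivers: {', '.join(session.get('drivers', [])) or 'none'}",
    ]
    return (
        "You are PredictPal, a friendly forecasting guide.\n"
        + "\n".join(context_lines)
        + f"\nUser: {user_message}"
    )

prompt = build_prompt(
    {"step": "Process Data", "target": "sales", "drivers": ["ad_spend"]},
    "Which frequency should I pick?",
)
```

The same session dict can drive the auto-select feature: the model is asked to return column names, which the UI then applies as selections.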
How We Built It
- Frontend: Next.js + React + Zustand + Recharts + Tailwind.
- Backend: FastAPI + pandas + scikit-learn + skforecast.
- Flow: artifact-based training outputs (analysis_result.json, CSV artifacts) feed directly into the analysis and publishing steps.
- Publishing: we support both backend-published stories and local persistence for reliable demos (including an anonymous publishing fallback).
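The artifact handoff can be sketched as a small loader with a merge-safe fallback: downstream steps read analysis_result.json if training has produced it, and otherwise receive a well-formed empty result instead of crashing. The helper and the fallback shape are assumptions for illustration:

```python
import json
from pathlib import Path

# Illustrative loader for the training -> analysis/publishing handoff.
def load_analysis(artifact_dir: str) -> dict:
    path = Path(artifact_dir) / "analysis_result.json"
    try:
        return json.loads(path.read_text())
    except (FileNotFoundError, json.JSONDecodeError):
        # Fallback: a well-formed empty result keeps Steps 4/5 usable
        # even when training artifacts are missing or corrupted.
        return {"metrics": {}, "forecast": [], "drivers": []}
```

Because every step reads from artifacts rather than in-memory state, Steps 4 and 5 can be revisited without rerunning training.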
Core Forecasting Idea
We compare a baseline model and a multivariate model on a holdout window using metrics like RMSE:
$$ \mathrm{RMSE} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}(y_i - \hat{y}_i)^2} $$
and report improvement as:
$$ \mathrm{Improvement\%} = \frac{\mathrm{RMSE}_{baseline} - \mathrm{RMSE}_{multi}}{\mathrm{RMSE}_{baseline}} \times 100 $$
This gives users a simple, interpretable way to judge whether extra drivers actually help.
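The two formulas above translate directly into code. This is a minimal NumPy sketch (the example values are made up for illustration):

```python
import numpy as np

def rmse(y_true, y_pred) -> float:
    """Root mean squared error over a holdout window."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def improvement_pct(rmse_baseline: float, rmse_multi: float) -> float:
    """Relative RMSE reduction of the multivariate model vs the baseline."""
    return (rmse_baseline - rmse_multi) / rmse_baseline * 100.0

y_holdout = [100.0, 110.0, 120.0]
baseline_pred = [105.0, 105.0, 105.0]  # e.g. a flat/naive baseline
multi_pred = [101.0, 109.0, 121.0]     # multivariate model with drivers

gain = improvement_pct(rmse(y_holdout, baseline_pred),
                       rmse(y_holdout, multi_pred))
```

A positive percentage means the extra drivers reduced holdout error; a negative one means they hurt.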
Challenges We Faced
- Artifact consistency across steps: ensuring Step 4/5 could load outputs robustly without rerunning training.
- Driver generalization: moving from hardcoded driver charts to dynamic charts based on selected driver columns.
- State persistence: handling anonymous publishing and demo resilience when backend in-memory state resets.
- UI clarity: making technical outputs understandable for non-experts while keeping details for advanced users.
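The driver-generalization challenge amounts to building one chart payload per *selected* driver column instead of a fixed set. A backend-side sketch, with a hypothetical payload shape:

```python
import pandas as pd

# Build one chart payload per selected driver column, whatever the user
# chose in Step 2 -- no driver names are hardcoded.
def driver_chart_payloads(df: pd.DataFrame,
                          driver_cols: list[str]) -> list[dict]:
    return [
        {
            "title": col,
            "points": [
                {"x": str(idx), "y": float(val)}
                for idx, val in df[col].dropna().items()
            ],
        }
        for col in driver_cols
    ]

df = pd.DataFrame(
    {"ad_spend": [10.0, 12.0], "temperature": [20.0, 22.0]},
    index=["2024-01", "2024-02"],
)
charts = driver_chart_payloads(df, ["ad_spend", "temperature"])
```

The frontend can then map each payload to a chart component without knowing driver names ahead of time.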
What We Learned
- “Explainability by design” (clear labels, guided phases, chart context) dramatically improves usability.
- Building resilient fallbacks (local persistence, robust parsing, merge-safe flows) makes demo systems much more reliable.
- Collaboration speed improves when each step produces stable artifacts and clear contracts for downstream steps.
What We’re Proud Of
PredictPal doesn’t stop at predictions; it helps users understand, communicate, and share forecasting outcomes with confidence.
Built With
- axios
- fastapi
- lucide
- next.js
- node.js
- pandas
- python
- react
- react-dropzone
- recharts
- scikit-learn
- skforecast
- supabase
- tailwind-css
- typescript
- uvicorn
- zustand