Inspiration
We were inspired by Charleston’s parks, specifically their deep-rooted beauty and the community’s dedication. We recognized a persistent gap: a tool to seamlessly connect local businesses and skilled volunteers with meaningful conservation projects. This ignited the vision for VolunteerFinder.
What it does
VolunteerFinder allows administrators from the Charleston Parks Conservancy to easily identify and contact local organizations that can provide resources in the form of project volunteers. It automates data collection (free text → keywords → Google Places scrape → website identification → website scraping → volunteer fitness assessment → contact extraction → custom CRM population) and provides an intuitive user interface to execute searches, track previous searches, and organize/track the organizations identified.
How we built it
- Define project requirements
  - Met with Darlene to define needs and desired outputs:
    - Front-end app
    - Identifying potential businesses
    - Automatically scraping business websites
    - Assessing business website data to decide on volunteer fitness/likelihood
    - Extracting contact info from scraped data
    - A "CRM" to easily view all businesses scraped, plus a way to view results from each search individually
- Architect app design
  - Worked with Claude Opus to architect and define the backend AI logic and front-end UI design
- Vibe code
  - Developed a Replit prompt based on the architecture design to build the base of the application in Replit
  - Followed up with iterative corrective prompts until the POC was functional and had all required components
- Deploy
  - Deployed the POC with Replit's 1-click deployment
  - Later discovered significant deployment bugs in the Replit app and switched to a Railway deployment
- Stakeholder feedback
  - Demoed the tool to Darlene, Colin, and Caroline (and let them experiment on their own with the deployed POC)
  - Got feedback on the good and bad parts of the app, along with requests for a few additional features
  - Further experimented with the deployed instance ourselves to identify any critical bugs
- Finalize
  - Incorporated stakeholder feedback by adding the requested new features
  - Addressed and corrected critical deployment bugs via Claude Code
  - Finalized the MVP app
How it works
The best way to understand the app logic is to look at the flow diagram (best viewed in draw.io).
You can also look at the code defined here. Be sure to look at the `railway` branch.
The fully functioning MVP app can be found here. Feel free to play around with it!
Reach out to Jack Teitel - jack@title-ai.com, 803-829-0458 - if there are any issues or questions.
Defining A Search
The app starts with creating a "search profile". This is where the user defines all the inputs to the system.
Then the search profile runs. Note that all search profiles will run in the background so that you can continue to use the app while they run. Leaving the app webpage will not cancel any active or queued searches.
The system will only run one search at a time to ensure we do not surpass the deployment resource allocation. However, you can queue up multiple searches: if a search is already running, simply click "run" on another search and it will be added to the queue. When the active search finishes, the next search in the queue starts automatically.
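The single-active-search queue described above can be sketched roughly like this (a minimal illustration; `SearchRunner` and its method names are hypothetical, not the app's actual code):

```python
import queue
import threading

class SearchRunner:
    """Runs at most one search at a time; extra runs wait in a FIFO queue."""

    def __init__(self, execute_search):
        self._execute = execute_search          # callable that performs one search
        self._queue = queue.Queue()             # queued search-profile IDs
        self._worker = threading.Thread(target=self._loop, daemon=True)
        self._worker.start()

    def submit(self, profile_id):
        """Queue a search; returns immediately so the UI stays responsive."""
        self._queue.put(profile_id)

    def _loop(self):
        while True:
            profile_id = self._queue.get()      # blocks until a search is queued
            try:
                self._execute(profile_id)       # only one search runs at a time
            finally:
                self._queue.task_done()
```

Because the worker thread is a daemon, queued searches keep running in the background while the user continues using the app.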
Note that when you create a search profile, a search will automatically run. You may also edit the profile at any point in the future; editing a search profile does not automatically trigger a run.
You may rerun a search profile at any point. If it has been edited since the last run, it will run with the current (edited) field values. When a search profile is rerun, any newly generated data will overwrite the old data. Rerunning also offers several options, listed below, which may be selected in any combination:
- You can reuse the business list. In this case, the search will not pull businesses from Google Places; it will simply use whatever businesses were found in the last successful run of this search profile. Alternatively, you can opt to run this step fresh and download a whole new list of businesses from Google Places.
- You can reuse the relevancy scoring. If set, the run will reuse the relevancy scores from the last successful run of this search profile. Note that if new businesses have been added since the last successful run, fresh scores will still be calculated for those businesses. If not selected, relevancy scores are recalculated for all businesses.
- You can reuse the web-scraped data. If set, business websites will not be re-scraped, but all fitness logic will still be recalculated and contacts re-extracted. If not selected, website data will be scraped fresh for all applicable businesses. Note that if a business had its website scraped in a previous run but is not included in the current run, that website data will not be erased - we only erase data when we have something new to replace it with.
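A hedged sketch of how these three reuse flags might gate the pipeline stages (the `RerunOptions` name and the stage labels are illustrative assumptions, not the app's actual identifiers):

```python
from dataclasses import dataclass

@dataclass
class RerunOptions:
    """Flags controlling which stages a search-profile rerun may skip."""
    reuse_business_list: bool = False      # skip the Google Places pull
    reuse_relevancy_scores: bool = False   # only score newly added businesses
    reuse_scraped_data: bool = False       # skip scraping; fitness is always redone

def stages_to_run(opts: RerunOptions) -> list[str]:
    """Return the pipeline stages a rerun will actually execute."""
    stages = []
    if not opts.reuse_business_list:
        stages.append("places_pull")
    # Businesses without an existing score are always scored fresh.
    stages.append("relevancy_new_only" if opts.reuse_relevancy_scores else "relevancy_all")
    if not opts.reuse_scraped_data:
        stages.append("scrape")
    # Fitness scoring and contact extraction always rerun.
    stages += ["fitness", "contacts"]
    return stages
```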
Phase 1 - Identifying Businesses
It first leverages the HQ location plus a search radius to create a grid search pattern that determines where to look for businesses (the Places API returns a max of 60 results per search). We then use AI to extrapolate searchable keywords from the free-text "business type" description provided by the user, and search every keyword in every grid cell to compute our final list of businesses.
We then leverage AI again to compare the business information from Places against the "business type" free-text description and compute a relevancy score, so we know how relevant each business is to the target business type.
This business list, along with the corresponding relevancy scores and all info extracted from the Places API (such as business name, address, website, and phone), gets stored in an SQL database. Note that all businesses will have a name and address, but not all will have a website and/or phone number.
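The grid-then-dedupe idea can be sketched as below. This is a simplified flat-earth approximation for illustration only; the function names and the `place_id`-based dedupe are assumptions, not the app's actual code:

```python
import math

EARTH_RADIUS_M = 6_371_000

def grid_centers(hq_lat, hq_lng, radius_m, cell_radius_m):
    """Tile the search radius with smaller cells so each Places query
    stays under the ~60-result cap; returns (lat, lng) cell centers."""
    step = cell_radius_m * math.sqrt(2)         # spacing so cells overlap slightly
    n = math.ceil(radius_m / step)
    centers = []
    for i in range(-n, n + 1):
        for j in range(-n, n + 1):
            dy, dx = i * step, j * step
            if math.hypot(dx, dy) > radius_m:
                continue                        # cell lies outside the overall radius
            lat = hq_lat + math.degrees(dy / EARTH_RADIUS_M)
            lng = hq_lng + math.degrees(dx / (EARTH_RADIUS_M * math.cos(math.radians(hq_lat))))
            centers.append((lat, lng))
    return centers

def search_all(keywords, centers, places_search):
    """Run every keyword in every grid cell; dedupe results by place_id."""
    seen = {}
    for kw in keywords:
        for lat, lng in centers:
            for place in places_search(kw, lat, lng):
                seen.setdefault(place["place_id"], place)
    return list(seen.values())
```

Deduplicating on the place ID matters because adjacent cells overlap and the same business shows up under multiple keywords.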
Phase 2 - Volunteer Fitness Scoring + Contact Extraction
In the search profile, the user will have defined a max number of businesses to run through this phase. The default (field left empty) is to run all businesses through this phase, although that may take some time. If a max is set, businesses above the relevancy threshold (set in the profile) are selected in order of relevancy score, from highest to lowest.
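That selection rule is small enough to sketch directly (function name and dict keys are illustrative assumptions):

```python
def select_for_phase2(businesses, relevancy_threshold, max_count=None):
    """No cap (None) -> run every business through phase 2. With a cap,
    take businesses at or above the profile's relevancy threshold,
    highest relevancy first, up to max_count."""
    if max_count is None:
        return list(businesses)
    eligible = [b for b in businesses if b["relevancy"] >= relevancy_threshold]
    eligible.sort(key=lambda b: b["relevancy"], reverse=True)
    return eligible[:max_count]
```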
For each business, the system:
- Pulls the website home page from the Google Places API results
  - If no website was recorded in Google Places, performs an AI-powered web search via GoDaddy to identify the business website homepage
- Scrapes the website
  - Leverages Selenium + Trafilatura to ensure JavaScript-friendly scraping of only relevant text
  - Extracts all links found on each webpage
  - Sorts identified links into one of two queues:
    - Priority queue (processed first): links likely relevant to volunteer criteria
    - Regular queue: all other links
  - Proceeds through the queues until the max number of pages is hit (defined in the profile) or no more links are found
  - Records webpage data in the SQL database, one row per page
  - Intelligently chunks each webpage and encodes those chunks into ChromaDB for use in the later RAG system
- Uses AI to break down the "volunteer fitness" free-text criteria (specified in the profile) into a series of specific and distinct criterion questions, and assigns each criterion a weight
- Leverages agentic RAG to assign a score (0-100) to each criterion, along with a short summary of the reasoning behind the score
  - Stores scores and reasoning summaries in the SQL database
- Computes the weighted average of the criterion scores = the volunteer fitness score, plus an overall summary based on the combination of all criterion reasoning summaries
  - Stores overall fitness score + overall summary in the SQL database
- Uses AI to examine each webpage and extract contact information; each contact contains at least one of [name, title, email, phone number]
- Uses AI to assign each contact a relevancy score for how likely they are to be a decision maker for volunteering efforts
  - Stores contact list + scores in the SQL database
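The fitness score is a weight-normalized average of the per-criterion scores. A minimal sketch (the function name is illustrative):

```python
def fitness_score(criterion_results):
    """Weighted average of per-criterion scores (each 0-100).

    criterion_results: list of (score, weight) pairs, one per criterion
    question, where weights were assigned during criteria breakdown.
    """
    total_weight = sum(w for _, w in criterion_results)
    if total_weight == 0:
        return 0.0                      # no criteria -> no signal
    return sum(s * w for s, w in criterion_results) / total_weight
```

For example, criteria scored (80, weight 2) and (50, weight 1) yield (80·2 + 50·1) / 3 = 70.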
Results
Under each search profile there is a results page. Only businesses that pass the profile-specified relevancy and fitness thresholds are displayed (although all are stored in the database). Businesses are initially presented in order of fitness score, highest to lowest.
You can click on the "i" icon to view detailed information about a business, including fitness score criterion summaries and contact info.
There is also a text field where the user can type notes about the organization. These notes save immediately and persist across sessions. Notes are linked to the business, not the search profile results, meaning that if a single business appears in multiple search profiles the notes content will be the same, and changes in one will affect the others. This feature was explicitly requested by the user after the initial demo.
Results can also be exported as a CSV file to be used outside the app.
Organizations Page
You can also view the results from all searches under the organizations page. You can think of this as a custom CRM. It includes various filters as well as sorting options. The primary key here is business + search profile.
One special feature on the organizations page is the ability to edit an organization's website. If for whatever reason the wrong website is listed for an org, you can correct it here; the next time the business is refreshed, it will use the newly specified website.
On this page you also have the option to rerun single businesses. Each business will run with the clearly labelled search profile in that row. This is particularly useful in combination with the previously described feature (individual business website editing): if you edit a single business's website, you don't need to rerun the full search profile to see the new results.
You can also export results from this page as a CSV. Exports will mimic the current settings for filters and sorting.
Help Page
This is a quick guide to the app and the different functionalities of its pages. It does not go into detail on how the backend works; rather, it is a guide to introduce lay/novice users to how to interact with the app.
Deployment Information
Railway
The app is deployed on Railway, currently in the US East Metal region (metal = Railway-owned infra and a big price discount). This comes with a max of 8GB RAM and 8 vCPUs. In tests, usage never exceeded 2GB RAM and 2 vCPUs.
Railway is directly linked to GitHub, and any update to the GitHub `railway` branch will trigger a redeploy. You can also deploy manually in the Railway app (not the dashboard app, but the corresponding Railway deployment configuration page).
The app is built on a serverless architecture, meaning it will "sleep" after 10 minutes of inactivity (both user activity and search profile runs count as activity). When awake, the app functions as normal and is charged as normal; when asleep, it is not charged. On first use while the app is sleeping, it may take 5-10 seconds for the app to spin up. This initial load time is well worth the cost savings of sleeping when not in use (note that the spin-up delay only applies to the very first page access in a user session).
Costs for railway are calculated as a combination of:
- vCPU usage
- RAM usage
- internet egress
- volume storage
There is a minimum monthly spend of $5 on railway. Based on some rough calculations I do not expect monthly spend to exceed that $5 minimum. If it does it will be by a small margin.
OpenAI
All the AI in this app is powered by the OpenAI API. We currently use GPT-4.1-mini for the business relevancy scoring as that task is particularly simple. All other AI tasks leverage GPT-4.1.
Costs for these models, as of 06/27/2025, are as follows. Note that costs may change over time (if anything, they are likely to go down). Also note that these models may be deprecated at some point in the future (not likely for at least a few years). You can find further pricing information here.
| Model | per 1M input tokens | per 1M cached input tokens | per 1M output tokens |
|---|---|---|---|
| gpt-4.1 | $2.00 | $0.50 | $8.00 |
| gpt-4.1-mini | $0.40 | $0.10 | $1.60 |
Based on initial rough cost estimates, the app will cost about $0.02 in OpenAI fees per business analyzed (including all analysis steps).
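The per-call cost follows directly from the table above. A small helper for the arithmetic (the function name is illustrative; the prices are the table's values as of 06/27/2025 and may change):

```python
# Prices per 1M tokens, taken from the table above (as of 06/27/2025).
PRICES = {
    "gpt-4.1":      {"input": 2.00, "cached": 0.50, "output": 8.00},
    "gpt-4.1-mini": {"input": 0.40, "cached": 0.10, "output": 1.60},
}

def call_cost(model, input_tokens, output_tokens, cached_tokens=0):
    """Dollar cost of one API call at the listed per-1M-token rates."""
    p = PRICES[model]
    return (input_tokens * p["input"]
            + cached_tokens * p["cached"]
            + output_tokens * p["output"]) / 1_000_000
```

For example, a GPT-4.1 call with 500k input tokens and 100k output tokens costs (0.5 × $2.00) + (0.1 × $8.00) = $1.80.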
Challenges we ran into
- Challenge 1: Lost one of our teammates early on, leaving us with only one teammate with coding experience
- Challenge 2: Architecting software and deployment with extremely limited software engineering experience
- Challenge 3: Deploying the app and correcting memory leaks + other fundamental bugs
Accomplishments we’re proud of
- Accomplishment 1: Successfully built a no-code CRM pipeline that captures and logs volunteer data.
- Accomplishment 2: Successful incorporation of AI (via the OpenAI API) in multiple steps (identifying businesses, RAG for processing web data, AI + RAG powered contact extraction), allowing for efficient, cheap, scalable, and effective intelligence
- Accomplishment 3: First deployment was problematic so it was switched to a new service, making the new app both faster and cheaper.
- Accomplishment 4: Stakeholders say the tool is extremely useful and will save them several hours per week, if not per day, and they are already asking to expand the tool beyond its initial scope.
What we learned
- Key lesson 1: No-code platforms like Replit are powerful for rapid prototype-to-production.
- Key lesson 2: Deployment can be very complex
- Key lesson 3: Software engineering by vibe coding alone can be tricky and bug fixing can be extremely difficult
- Key lesson 4: Vibe coding alone can produce a viable small-scale MVP
What’s next for VolunteerFinder
- Next step 1: Help the Conservancy create the necessary API keys and set up Railway infrastructure under their own deployment; onboard them on how to use Railway
- Next step 2: News releases to bring technological credibility to the Conservancy
- Next step 3: Scoping out a paid phase 2 through Jack's AI consulting company to expand tool to assist with finding donor organizations
Built With
- chromadb
- claude
- css
- html
- javascript
- openai
- python
- railway
- replit
- sqlite
