Inspiration
Imagine a world where finding the right doctor is nearly impossible and extremely time-consuming - especially when your health depends on quick, reliable care. For many, that is the reality. We wanted to build a platform where patients can find doctors who specialize in their specific needs, while doctors can find the patients they can most directly help.
What it does
MediMatch matches you with doctors who specialize in your healthcare needs. On the platform, potential doctors appear one at a time for the user to accept or decline with a swipe or on-screen buttons. Swiping right indicates that the patient would like to work with that doctor on treating their condition, while swiping left passes on that doctor. Patients can also filter their feed by gender, specialty, and rating.
After matching with a doctor, the user can chat with them to discuss treatment details and other medical logistics. As an alternative to swiping, a convenient explore page lists all available doctors at once.
How we built it
We built MediMatch using a full-stack architecture that combines a React/Next.js frontend with a FastAPI backend powered by an AI-driven retrieval-augmented generation (RAG) system. The backend integrates Google’s Gemini API with a Hugging Face embedding model and ChromaDB to analyze user symptom inputs—both text and images—and return structured insights into possible causes.
Doctor and patient data are served through RESTful FastAPI routes, while the chat and swipe interfaces are implemented with responsive components in React. The RAG pipeline references a curated Diseases_Symptoms.csv dataset for factual grounding, allowing users to receive contextually relevant medical matches. The entire system is containerized and connected via standard HTTP endpoints with CORS-enabled communication between the frontend and backend.
Challenges we ran into
Several team members had little prior experience with parts of our stack, so learning the APIs and frontend development concepts involved was a real learning curve.
Accomplishments that we're proud of
We successfully brainstormed an idea and delegated the necessary tasks among team members. Whenever someone needed assistance, the others stepped in with guidance. In the end, we had a working product to show the judges.
What we learned
We learned how to integrate a multimodal AI RAG pipeline that combines language and image understanding within a real-world healthcare platform. Our team gained hands-on experience connecting a Next.js frontend with a FastAPI backend, enabling seamless interaction between users and AI-powered medical insights. We also learned how to use ChromaDB for efficient vector search, Hugging Face embeddings for symptom retrieval, and Google’s Gemini API for generating factual, structured responses. Beyond the technical side, we deepened our understanding of designing for accessibility, trust, and ethical use of AI in healthcare.
What's next for MediMatch
Next, we plan to expand MediMatch into a more comprehensive digital health assistant. This includes integrating real-time video consultations, secure authentication and data storage, and a recommendation engine that matches patients to verified doctors based on treatment history. We also aim to add EHR (Electronic Health Record) compatibility and deploy the AI pipeline to a scalable cloud environment for faster inference and reliability. In the long term, we envision MediMatch as a secure, AI-enhanced bridge between patients and healthcare professionals worldwide.
Built With
- chromadb
- gemini
- html
- javascript
- langchain
- next.js
- pandas
- python
- react
- typescript

