Trip Buddy: The Frictionless Group Travel Platform

What Is the Actual Project About?

Trip Buddy is designed to eliminate the logistical friction points that ruin group travel memories. Every group trip inevitably devolves into arguments over coordinating itineraries, splitting costs, and making sure everyone knows where to be next. Our mission was to create a reliable, real-time platform where all the annoying parts of travel, from dynamic itinerary adjustments to real-time peer-to-peer location tracking, are handled seamlessly within the app. Our primary goal is to shift the focus back to spending time with friends and enjoying the adventure, not managing spreadsheets and arguing over directions.
How Did We Implement It? (System Design and Creative Solutions)
Our application was engineered for reliability and scalability from the ground up, avoiding the typical shortcuts found in hackathon projects. The system design is highly structured, integrating advanced communication and data patterns:
Real-Time Peer-to-Peer Communication (WebSockets)

This was one of the most complex, yet cleanest, implementations. We used WebSockets via Socket.IO to handle real-time GPS location tracking for all group members. Distinct namespaces and rooms keep this approach highly scalable, letting us maintain a persistent, low-latency connection for location updates. Crucially, it kept per-message overhead low and let us perform valuable calculations on the server, such as the real-time distance between all users.
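The writeup doesn't include the distance code itself, but the kind of server-side calculation described, pairwise distance between every group member's latest GPS fix, can be sketched as below. The function and dictionary names are illustrative, not taken from the project:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in kilometres between two GPS fixes."""
    r = 6371.0  # mean Earth radius in km
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

def pairwise_distances(positions):
    """All pairwise distances for a room's members: {(a, b): km}."""
    names = sorted(positions)
    return {
        (a, b): haversine_km(*positions[a], *positions[b])
        for i, a in enumerate(names)
        for b in names[i + 1:]
    }
```

In a Socket.IO handler, each incoming location event would update `positions` for that room and the recomputed distances would be emitted back to the room's members.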
Dynamic Itinerary Generation (NLP and Linked Lists)
The core of Trip Buddy is the dynamic itinerary, structured as a Doubly Linked List with nested linked lists representing sub-events within each main event.
• Itinerary Refinement via Semantic Analysis: To make the itinerary truly adaptive, we implemented a Relational Keyword Matching system leveraging Semantic Embeddings. When users provide feedback on an event, our system analyzes the semantics (e.g., positive keywords increase the time allocated; negative keywords decrease it). This allows the itinerary to fluidly adjust based on the group's real-time experience.
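A minimal sketch of the structure and the feedback rule described above. The class names are illustrative, and plain keyword sets stand in for the project's semantic-embedding step, which would require an embedding model:

```python
class EventNode:
    """One itinerary stop; `sub_events` heads a nested linked list of its parts."""
    def __init__(self, name, minutes):
        self.name = name
        self.minutes = minutes  # current time allocation
        self.prev = None
        self.next = None
        self.sub_events = None  # head of a nested EventNode list

class Itinerary:
    """Doubly linked list of EventNodes."""
    def __init__(self):
        self.head = None
        self.tail = None

    def append(self, node):
        if self.tail is None:
            self.head = self.tail = node
        else:
            node.prev = self.tail
            self.tail.next = node
            self.tail = node
        return node

# Stand-in for semantic analysis: hand-picked keyword sets.
POSITIVE = {"amazing", "fun", "love", "great"}
NEGATIVE = {"boring", "tired", "crowded", "skip"}

def apply_feedback(node, feedback, step=15):
    """Grow or shrink an event's time allocation based on group feedback."""
    words = set(feedback.lower().split())
    node.minutes += step * len(words & POSITIVE)
    node.minutes -= step * len(words & NEGATIVE)
    node.minutes = max(node.minutes, 0)
```

Because each node carries `prev`/`next` pointers, reordering or resizing one event never requires rebuilding the whole itinerary.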
Data Flow and Access (Pub/Sub)
We leveraged a Publish/Subscribe (Pub/Sub) pattern for handling specific events like image retrieval (e.g., pulling images from a shared Google Drive). This pattern is essential for scalable, decoupled communication when accessing third-party cloud storage and ensures that components are only notified when relevant data is available.
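The Pub/Sub pattern itself can be shown with a minimal in-process broker; the topic name and payload here are illustrative, not the project's actual event names:

```python
from collections import defaultdict

class PubSub:
    """Minimal in-process publish/subscribe broker."""
    def __init__(self):
        self._subs = defaultdict(list)

    def subscribe(self, topic, callback):
        """Register a callback to be notified on a topic."""
        self._subs[topic].append(callback)

    def publish(self, topic, payload):
        """Notify every subscriber of `topic`, and only them."""
        for callback in self._subs[topic]:
            callback(payload)
```

The decoupling is the point: the component fetching images from cloud storage publishes when results arrive, and UI components subscribe without knowing anything about the storage backend.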
Navigation Logic
Our navigation feature, while simple in execution, is powered by the core data structure: we traverse the itinerary's linked list to the n+1 node, i.e., the next event, and pass its location to the front end. This traversal covers roughly 90% of the functional logic needed for seamless turn-by-turn navigation; the remaining 10% is the final Maps Kit API call made on the front end.
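The traversal step reduces to a one-line pointer hop. A self-contained sketch, with hypothetical names and coordinates:

```python
class Stop:
    """A single itinerary stop in a singly linked traversal order."""
    def __init__(self, name, coords):
        self.name = name
        self.coords = coords  # (lat, lon) handed to the maps API on the front end
        self.next = None

def link(stops):
    """Chain a list of stops together and return the head."""
    for a, b in zip(stops, stops[1:]):
        a.next = b
    return stops[0]

def next_destination(current):
    """Return the n+1 stop's coordinates, or None at the itinerary's end."""
    return current.next.coords if current.next else None
```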
What Were the Challenges We Faced?
NoSQL Database Integration and Schema Design
The single largest recurring challenge was integrating our Django backend with the Firebase NoSQL database.
• Configuration Failures: We were unable to successfully configure the Firebase Admin SDK and environment variables within the Django application. Even after attempting virtual environments, extensive debugging, and consulting multiple external resources, the connection and installation issues for the admin tools remained unresolved.
• NoSQL Mindset: None of the team members had prior experience with NoSQL. The initial task of visualizing and designing a robust schema—especially for the complex Linked List itinerary structure—was a significant hurdle.
Itinerary Data Structure Complexity
We initially over-simplified the required schema for the itinerary. When it came time to implement the Doubly Linked List with nested linked lists to represent events and sub-events, the structural complexity of managing and iterating over this data in the database proved much higher than anticipated.
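One common way to flatten a linked list into a document store, not necessarily the schema the team settled on, is to give each event its own document and replace the in-memory pointers with document ids. A hypothetical sketch using plain dictionaries in place of Firestore documents:

```python
# Hypothetical document layout: each event keyed by id, with prev/next ids
# standing in for pointers; sub_events lists the ids of the nested list.
events = {
    "ev1":  {"name": "Museum",    "prev": None,  "next": "ev2", "sub_events": ["ev1a"]},
    "ev1a": {"name": "Gift shop", "prev": None,  "next": None,  "sub_events": []},
    "ev2":  {"name": "Lunch",     "prev": "ev1", "next": None,  "sub_events": []},
}

def walk(events, head_id):
    """Yield event names in itinerary order by following `next` ids."""
    node_id = head_id
    while node_id is not None:
        yield events[node_id]["name"]
        node_id = events[node_id]["next"]
```

The cost that makes this harder than it looks is exactly what we hit: every reorder or insert touches multiple documents (the moved node plus both neighbours), and iterating requires one read per hop.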
How Did We Solve the Challenges?
Despite the persistent Firebase configuration issues, we adopted two key pivots to ensure functionality and prove the concept:
AWS S3 Bucket Pivot for Data Access

To demonstrate the crucial Pub/Sub functionality for image retrieval, we temporarily pivoted our testing environment to an AWS S3 bucket. This allowed us to successfully implement and test the logic for pulling data from cloud storage, isolating the issue to the Firebase SDK configuration itself.
Front-End Data Handling for Proof of Concept (POC)
Due to the persistent difficulty in writing the complex Linked List iterator state into the problematic Firebase backend, we chose a high-speed Proof of Concept (POC) approach for the dynamic itinerary updates. We passed the newly updated Linked List iterator directly to the front end for display. We recognize this approach is not scalable for a production environment, but it successfully demonstrated the core functionality and logic of the NLP-driven itinerary updates.
Major Accomplishments We're Proud Of
Our proudest accomplishment is the robust and scalable system design we created in a limited timeframe. Specifically:
• Pioneering Peer-to-Peer Communication: Successfully implementing our first-ever peer-to-peer communication system using WebSockets and Socket.IO in a highly scalable manner. This complex system is the foundation for our real-time location features.
• The Cleanest Implementation: The P2P/WebSockets feature was the most complex part of our system design, yet its implementation was the cleanest and most effective solution in the entire project.
• Dynamic Intelligence: Integrating NLP with Semantic Embeddings to dynamically adjust the trip itinerary based on group sentiment, moving beyond static, pre-planned templates.
What Did We Learn?
This project provided invaluable real-world experience in designing scalable applications:
• NoSQL Schema Design: We gained hands-on experience in the complexities of translating relational data concepts into an effective NoSQL schema, realizing the critical importance of data visualization before implementation.
• Cross-Service Configuration: We learned about the deep challenges of configuring external service SDKs (like Firebase Admin) within a backend framework (Django) and the need for meticulous environment management.
• Advanced Communication Patterns: We successfully learned, configured, and implemented advanced communication patterns like WebSockets (P2P) and the Pub/Sub model, which are essential for building modern, decoupled, and real-time systems.
