- Natural language chat for individual case details
- Home Page displaying case cards and short descriptions
- Constructed timeline based on uploaded context data
- Case discovery page with currently uploaded materials
- Live listen tab with a short transcript and relevant data pulled up on the right
- Case Dashboard showing current agentic tasks and general workflow, as well as research agent status
- Current agent tasks and reasoning agent hypotheses/suggested human actions
Inspiration
Paralegals play a critical role in the legal system, but much of their work involves repetitive, time-intensive tasks like organizing documents, maintaining timelines, and conducting research. We were inspired by the idea that a large portion of this work is structured and pattern-driven, making it a strong candidate for automation. At the same time, legal work requires precision and accountability, so we wanted to build something that assists rather than replaces: a system that augments legal workflows without sacrificing control or transparency.
What it does
ParaDocs is a locally hosted desktop application with a web-based UI that acts as an intelligent paralegal assistant. It allows users to create and manage cases, upload and organize documents, and interact with their case data through a structured interface.
The system builds and maintains case-specific context, including summaries, timelines, and stored documents. Users can also interact with a Chat with AI feature powered by Retrieval-Augmented Generation (RAG), which retrieves relevant information from case materials and generates grounded responses. This ensures that answers are tied directly to the available evidence rather than relying on general knowledge.
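The grounded-answer flow described above can be sketched in a few lines. This is a minimal illustration, not ParaDocs' actual pipeline: `embed` is a toy bag-of-letters stand-in for a real embedding model, and the function names are hypothetical.

```python
import math

def embed(text):
    # Hypothetical stand-in for a real embedding model: a tiny
    # bag-of-letters vector, just to make the sketch runnable.
    vec = [0.0] * 26
    for ch in text.lower():
        if ch.isalpha():
            vec[ord(ch) - ord("a")] += 1.0
    return vec

def cosine(a, b):
    # Cosine similarity between two vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(question, documents, k=2):
    # Rank stored case snippets by similarity to the question and
    # keep the top-k as grounding context for the LLM prompt.
    q = embed(question)
    ranked = sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(question, context_snippets):
    # Grounded prompt: the model is instructed to answer only from
    # the retrieved case material, not from general knowledge.
    context = "\n".join(f"- {s}" for s in context_snippets)
    return (
        "Answer using only the case context below.\n"
        f"Context:\n{context}\n"
        f"Question: {question}\n"
    )

docs = [
    "Deposition of witness taken on March 3.",
    "Contract signed between parties on Jan 12.",
    "Invoice for services dated Feb 20.",
]
top = retrieve("When was the contract signed?", docs)
print(build_prompt("When was the contract signed?", top))
```

In a real deployment the retrieval step would query a vector database and the prompt would be sent to the LLM, but the shape of the flow is the same: retrieve, then generate from what was retrieved.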
Overall, ParaDocs helps reduce manual overhead by organizing information, surfacing insights, and providing a consistent way to interact with complex legal data.
How we built it
We built ParaDocs as a desktop-first AI system:
- Frontend: React (Vite) for a fast, modern UI
- Desktop Shell: Electron to provide filesystem access and run everything locally
- Backend: FastAPI for handling case logic, APIs, and AI orchestration
- Storage: Local filesystem-based case directories (/cases/{case_id})
- RAG Pipeline: Vector database for embeddings + retrieval of relevant context
- LLM Integration: Groq API for fast, low-latency response generation
Electron launches the FastAPI backend as a local service, and the React frontend communicates with it via HTTP. The backend handles document ingestion, context retrieval, and prompt construction for the LLM, enabling a seamless AI-assisted workflow.
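The filesystem-backed storage layer can be sketched as a few small helpers. This is an illustrative sketch, not ParaDocs' actual code: the function names and the exact files inside each case directory are assumptions, but the layout follows the /cases/{case_id} convention described above.

```python
import json
import tempfile
from pathlib import Path

def create_case(root: Path, case_id: str, title: str) -> Path:
    # Each case gets its own directory with a documents folder and
    # a small metadata file (filenames here are illustrative).
    case_dir = root / "cases" / case_id
    (case_dir / "documents").mkdir(parents=True, exist_ok=True)
    (case_dir / "case.json").write_text(
        json.dumps({"id": case_id, "title": title})
    )
    return case_dir

def add_document(case_dir: Path, name: str, text: str) -> Path:
    # Ingested documents are plain files under the case directory,
    # so all case data stays on the user's machine.
    doc = case_dir / "documents" / name
    doc.write_text(text)
    return doc

def list_documents(case_dir: Path) -> list[str]:
    return sorted(p.name for p in (case_dir / "documents").iterdir())

# Demo against a temporary directory standing in for the app's data root.
root = Path(tempfile.mkdtemp())
case = create_case(root, "case-001", "Smith v. Jones")
add_document(case, "contract.txt", "Signed Jan 12.")
print(list_documents(case))  # ['contract.txt']
```

Because the store is just directories and files, the backend can rebuild summaries or re-index embeddings from it at any time, and the user can inspect or back up a case with ordinary tools.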
Challenges we ran into
One of the biggest challenges was designing a system that evolves dynamically with new information. Legal cases are not static, and new evidence can change timelines or invalidate assumptions, so we had to think carefully about how to structure data and workflows in a way that supports updates without breaking consistency.
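One way to support updates without breaking consistency is to track which artifacts derive from which sources, and mark everything downstream stale when a source changes. The sketch below is a hypothetical illustration of that idea, not ParaDocs' implementation; the class and artifact names are invented.

```python
from collections import defaultdict

class CaseGraph:
    """Toy dependency graph between case artifacts (hypothetical sketch)."""

    def __init__(self):
        # source artifact -> artifacts derived from it
        self.downstream = defaultdict(set)
        self.stale = set()

    def depends(self, derived, source):
        self.downstream[source].add(derived)

    def invalidate(self, changed):
        # Walk the dependency edges and mark every transitive
        # dependent stale, so only affected artifacts are rebuilt.
        frontier = [changed]
        while frontier:
            node = frontier.pop()
            for dep in self.downstream[node]:
                if dep not in self.stale:
                    self.stale.add(dep)
                    frontier.append(dep)

g = CaseGraph()
g.depends("timeline", "deposition.pdf")   # timeline is built from the deposition
g.depends("summary", "timeline")          # summary is built from the timeline
g.invalidate("deposition.pdf")            # new version of the deposition arrives
print(sorted(g.stale))  # ['summary', 'timeline']
```

The payoff is incremental rebuilding: when new evidence arrives, only the artifacts that actually depend on it are regenerated, rather than reprocessing the whole case.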
Another challenge was integrating multiple systems cleanly:
- Connecting Electron, React, and FastAPI without introducing unnecessary complexity
- Ensuring reliable communication between the frontend and a locally spawned backend
- Bridging RAG retrieval with LLM generation in a way that minimizes hallucination
We also had to balance speed against architecture: building something functional within a short timeframe while still laying the groundwork for a scalable system.
Accomplishments that we're proud of
- Designing and implementing a local-first AI architecture that keeps all case data on the user’s machine
- Successfully integrating RAG + LLM (Groq) into a conversational chat interface
- Creating a persistent case management system backed by the filesystem
- Building a clean, extensible UI that mirrors real-world legal workflows
- Structuring the system in a way that can evolve into a much more advanced “case intelligence” platform
What we learned
We gained experience with:
- Local-first application design and its tradeoffs
- Coordinating multi-language stacks (JavaScript + Python)
- Designing RAG systems that balance relevance and performance
- Thinking in terms of incremental updates and dependency propagation, rather than static pipelines
What's next for ParaDocs
We see ParaDocs evolving into a full case intelligence platform. Next steps include:
- Smarter timeline generation with automatic conflict detection
- Deeper legal research integration (statutes, case law, precedent matching)
- More advanced RAG capabilities with better context ranking and summarization
- Real-time note-taking and transcription during meetings
- Multi-case reasoning and cross-case insights
- Improved UI/UX for navigating complex case data
- Optional cloud sync for collaboration while maintaining privacy controls