Inspiration

Most meeting-minutes tools are bulky, cloud-dependent, and built for enterprise use. They tend to store irrelevant data and pile on unnecessary automation. We wanted the complete opposite: a tool that runs fully locally, fits naturally into a terminal workflow, and produces clean, structured notes through simple interactions. It should retain the authenticity of a spontaneous meeting while capturing the core ideas.

What it does

miniMinutes captures live audio, transcribes it in real time, and classifies each line as a question, action item, assignment, or general discussion. It produces an instant, structured summary and lets users search their meeting semantically, all without leaving the command line.
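As a rough illustration of the semantic-search flow, here is a toy Python sketch that ranks transcript lines against a query. The bag-of-words cosine similarity is only a stand-in for the real embedding model miniMinutes uses, and the sample lines are invented for the example.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy bag-of-words 'embedding' (stand-in for a real sentence encoder)."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def search(query: str, lines: list[str], top_k: int = 2) -> list[str]:
    """Return the transcript lines most similar to the query."""
    q = embed(query)
    ranked = sorted(lines, key=lambda line: cosine(q, embed(line)), reverse=True)
    return ranked[:top_k]

minutes = [
    "Alice will send the budget report by Friday",
    "Should we move the launch to Q3?",
    "General discussion about office snacks",
]
print(search("who owns the budget report", minutes, top_k=1))
# -> ['Alice will send the budget report by Friday']
```

Swapping `embed` for a real sentence encoder gives the same interface with far better recall on paraphrased queries.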

How we built it

We combined a lightweight Whisper-based transcription engine with a custom sentence-classification model trained on meeting-style data. A Python-based CLI orchestrates audio capture, real-time processing, semantic search, and local storage. Everything runs fully offline, with no external API calls.
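To show the shape of the classification step without the trained model, here is a heuristic Python stand-in. The label set matches miniMinutes, but the rules, verb list, and example lines are invented for illustration; the real system uses a learned classifier.

```python
import re

# Hypothetical action-verb list; the real model learns these cues from data.
ACTION_VERBS = {"send", "fix", "write", "schedule", "review", "update"}

def classify(line: str) -> str:
    """Heuristic stand-in for the trained sentence classifier."""
    text = line.strip().lower()
    # Questions: trailing '?' or an interrogative opener.
    if text.endswith("?") or re.match(r"^(who|what|when|where|why|how|should|can|could)\b", text):
        return "question"
    # Assignments: '<person> will/should/needs to ...'
    if re.match(r"^\w+ (will|should|needs to)\b", text):
        return "assignment"
    # Action items: an imperative action verb, or 'to <verb>' mid-sentence.
    if any(text.startswith(v) or f" to {v} " in f" {text} " for v in ACTION_VERBS):
        return "action_item"
    return "discussion"

for line in ["Who is taking notes?",
             "Bob will update the deployment script",
             "Send the slides to the client",
             "We talked about the new logo"]:
    print(f"{classify(line):12} <- {line}")
```

Even this crude version shows why short spoken lines are hard: a single dropped word ("Bob update the script") breaks the assignment pattern, which is exactly where a trained model earns its keep.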

Challenges we ran into

Building a classifier that works well on short, spoken, unstructured lines was difficult, especially with a limited custom dataset. Real-time audio ingestion required careful threading and buffering. Keeping the entire stack lightweight while still providing useful intelligence was also a core constraint.
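The threading-and-buffering pattern for real-time ingestion can be sketched as a bounded producer-consumer queue. This is a minimal, hypothetical illustration: dummy byte chunks stand in for microphone buffers, and a string formatter stands in for Whisper.

```python
import queue
import threading

# Bounded queue gives backpressure: capture blocks if transcription falls behind.
audio_q: "queue.Queue[bytes | None]" = queue.Queue(maxsize=8)
results: list[str] = []

def capture(n_chunks: int, chunk_size: int = 4) -> None:
    """Producer: push fixed-size 'audio' chunks, then a sentinel."""
    for i in range(n_chunks):
        audio_q.put(bytes([i]) * chunk_size)  # stand-in for a mic buffer
    audio_q.put(None)  # sentinel: end of stream

def transcribe_worker() -> None:
    """Consumer: drain chunks until the sentinel arrives."""
    while (chunk := audio_q.get()) is not None:
        results.append(f"chunk of {len(chunk)} bytes")  # stand-in for Whisper

t1 = threading.Thread(target=capture, args=(10,))
t2 = threading.Thread(target=transcribe_worker)
t1.start(); t2.start()
t1.join(); t2.join()
print(len(results))  # -> 10
```

The bounded `maxsize` is the key buffering decision: it caps memory use and naturally throttles capture when the transcriber is the bottleneck.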

Accomplishments that we're proud of

We shipped a fully functional local meeting assistant that produces structured minutes in real time. The classification model performs well despite minimal data, and the CLI feels fast, simple, and natural to use. The result is a surprisingly capable tool built in a short timeframe, and something we would use daily. We also pride ourselves on the seamless user experience: assets are cleanly designed, every action is one interaction away, and the color theme fits the retro vibe perfectly.

What we learned

We learned how to build efficient NLP pipelines designed for speed and local inference rather than heavy cloud models. We also gained experience designing custom datasets for classification, optimizing real-time audio capture, and working within tight hackathon time constraints. The CLI aspect was completely new to us and required heavy research and experimentation to get right.

What's next for miniMinutes

We plan to refine the classifier with more diverse spoken-language data, add richer semantic search, and support multi-speaker tagging. Additional features such as calendar integration, task management, export formats, and animations will follow, as long as they fit the core philosophy: simple, fast, no-nonsense.
