Inspiration

  • We spend too much time context-switching to craft thoughtful replies.
  • Existing “smart replies” feel generic and miss context from the full conversation.
  • We wanted an overlay that sits on top of Messages, understands the thread, and drafts responses you actually send.

What it does

  • Reads your local iMessage database (on-device) to display conversations in a fast, keyboard-first overlay.
  • Generates 1–3 context-aware draft replies per incoming message using an agentic workflow.
  • Renders attachments, scrolls smoothly, shows unread counts, and lets you accept, edit, and send with minimal friction.
  • Privacy-first: all conversation data stays on your Mac. The model call uses your own API key.
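Reading chat.db means dealing with Apple's timestamp format: on modern macOS, `message.date` is stored as nanoseconds since the Core Data epoch (2001-01-01 UTC) rather than the Unix epoch. A small conversion sketch (helper names are illustrative, not from the actual codebase):

```typescript
// chat.db stores message.date as nanoseconds since 2001-01-01 00:00:00 UTC
// (the Core Data epoch) on recent macOS versions.
const APPLE_EPOCH_MS = Date.UTC(2001, 0, 1); // 978307200000

// Convert an Apple-epoch nanosecond timestamp to a JS Date.
function appleNsToDate(ns: number): Date {
  return new Date(APPLE_EPOCH_MS + ns / 1e6);
}

// Convert a JS Date back, e.g. to build a WHERE clause for recent messages.
function dateToAppleNs(d: Date): number {
  return (d.getTime() - APPLE_EPOCH_MS) * 1e6;
}
```

With these helpers, a query for "messages since yesterday" can compare `message.date` against `dateToAppleNs(yesterday)` directly in SQL.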

How we built it

  • Desktop app: Electron + Vite + React + TypeScript + Tailwind + shadcn/ui.
  • iMessage access: local SQLite chat.db on macOS, with a Rust helper (Cargo) for fast queries and a Swift helper to export Contacts.
  • Agentic generation: Vercel AI SDK (ai) with Zod-validated structured outputs; OpenAI-compatible provider via @ai-sdk/openai.
  • Shared packages: monorepo with packages/schema (types) and packages/client.
  • ML stack (optional): a pipeline for fine-tuning OpenAI OSS 20B on your own conversations (Modal + Axolotl), tracked with Weights & Biases.
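The "Zod-validated structured outputs" step boils down to rejecting any model candidate that doesn't match the expected reply shape. A dependency-free sketch of that validation (field names and the `DraftReply` shape are illustrative; the real code uses a Zod schema with the AI SDK):

```typescript
// Each draft reply the agent emits should carry text, a confidence
// score, and a short rationale.
interface DraftReply {
  text: string;
  confidence: number; // expected in [0, 1]
  rationale: string;
}

// Type guard mirroring what a Zod schema would enforce.
function isDraftReply(v: unknown): v is DraftReply {
  if (typeof v !== "object" || v === null) return false;
  const o = v as Record<string, unknown>;
  return (
    typeof o.text === "string" && o.text.length > 0 &&
    typeof o.confidence === "number" && o.confidence >= 0 && o.confidence <= 1 &&
    typeof o.rationale === "string"
  );
}

// Keep only well-formed candidates from a raw model payload.
function parseCandidates(raw: unknown): DraftReply[] {
  return Array.isArray(raw) ? raw.filter(isDraftReply) : [];
}
```

Validating before rendering means a malformed or truncated model response degrades to "fewer suggestions" instead of a broken UI.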

Challenges we ran into

  • Navigating macOS privacy (TCC) permissions and Full Disk Access to read the Messages DB, plus Apple’s evolving iMessage schema and attachment edge cases.
  • Keeping the overlay snappy with very long threads (virtualization, batching).
  • Getting structured outputs that are helpful, safe, and concise; this took repeated prompt and guardrail iteration.
  • Building the data-preparation and training pipelines for fine-tuning.
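The virtualization fix for long threads comes down to one calculation: which row indices intersect the viewport, padded by a few overscan rows so fast scrolling doesn't flash blanks. A minimal sketch assuming fixed-height rows (function and parameter names are illustrative):

```typescript
// Compute the [start, end) row window to render for a virtualized list.
function visibleRange(
  scrollTop: number,      // current scroll offset in px
  viewportHeight: number, // visible height in px
  rowHeight: number,      // fixed height of each row in px
  rowCount: number,       // total rows in the thread
  overscan = 5,           // extra rows above/below to avoid blank flashes
): { start: number; end: number } {
  const first = Math.floor(scrollTop / rowHeight);
  const last = Math.ceil((scrollTop + viewportHeight) / rowHeight);
  return {
    start: Math.max(0, first - overscan),
    end: Math.min(rowCount, last + overscan),
  };
}
```

Rendering only `end - start` rows instead of the whole thread keeps React's work bounded regardless of history size; variable-height rows need a measured offset table but the same windowing idea.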

Accomplishments that we're proud of

  • Successfully fine-tuning open-source models on personal conversation data.
  • A smooth, keyboard-native overlay that feels instant on large histories.
  • Attachment rendering and improved word breaking for mixed-content threads.
  • Agent mode that streams structured candidates with confidence + rationale.
  • Clean separation between UI, Rust data access, and model logic.

What we learned

  • Deep quirks of the iMessage SQLite schema and attachment linking.
  • Practical patterns for streaming, validating, and ranking LLM suggestions.
  • Real-world performance tuning in Electron/Vite with large lists.
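One such ranking pattern: sort validated candidates by model-reported confidence, drop near-duplicates, and cap the list at the 1–3 drafts the overlay shows. A sketch under those assumptions (names illustrative):

```typescript
interface Suggestion {
  text: string;
  confidence: number;
}

// Rank suggestions by confidence, dedupe case-insensitively, keep top N.
function rankSuggestions(cands: Suggestion[], limit = 3): Suggestion[] {
  const seen = new Set<string>();
  return [...cands]
    .sort((a, b) => b.confidence - a.confidence)
    .filter((c) => {
      const key = c.text.trim().toLowerCase();
      if (seen.has(key)) return false;
      seen.add(key);
      return true;
    })
    .slice(0, limit);
}
```

Because ranking is a pure function over already-validated candidates, it can run on every streamed partial result without re-querying the model.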

What's next for Textreme

  • More personalization