# FastAPI Backend for AI-Powered Document Intelligence

FastAPI application with Supabase, LangChain, and AI integrations for the MemexLLM AI research assistant.

Frontend Repo • Features • Tech Stack • Quick Start • Deployment
The MemexLLM backend provides a powerful API for document processing, AI-powered chat, and content generation. It leverages modern AI frameworks to transform documents into interactive knowledge bases.
Built for production with async processing, streaming responses, and enterprise-grade security.
The backend follows a service-oriented architecture with a focus on asynchronous processing and reliable document ingestion.
## Features

- Document Processing: PDF, TXT, DOCX, audio, YouTube, and web content extraction
- AI Chat: Contextual Q&A with source citations and streaming responses
- Content Generation: Create podcasts, quizzes, flashcards, mindmaps, and more
- Vector Search: Semantic search using Pinecone embeddings
- Authentication: Supabase Auth with JWT tokens
- Real-time: Server-Sent Events (SSE) for streaming responses
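Streaming responses reach the client as Server-Sent Events. The sketch below shows the SSE wire framing with stdlib Python only; in the actual app the generator would be wrapped in FastAPI's `StreamingResponse` with `media_type="text/event-stream"`, and the function names and payload shape here are illustrative, not the real API.

```python
import json

def sse_frame(data: dict, event: str = "message") -> str:
    """Serialize a payload as a single Server-Sent Events frame.

    SSE frames are newline-delimited "field: value" lines terminated by a
    blank line; the browser's EventSource API parses them natively.
    """
    return f"event: {event}\ndata: {json.dumps(data)}\n\n"

def stream_answer(tokens):
    """Yield one SSE frame per generated token, then a terminal event.

    `tokens` stands in for an LLM token stream; a real endpoint would pass
    this generator to fastapi.responses.StreamingResponse.
    """
    for tok in tokens:
        yield sse_frame({"token": tok})
    yield sse_frame({}, event="done")
```

Framing each token as its own event is what lets the frontend render the answer incrementally instead of waiting for the full completion.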
## Tech Stack

| Category | Technology |
|---|---|
| Framework | FastAPI |
| Language | Python 3.11+ |
| Database | PostgreSQL + Supabase |
| ORM | SQLAlchemy + Alembic |
| AI | LangChain, OpenAI, Anthropic, Google Gemini |
| Embeddings | OpenAI, Google |
| Vector DB | Pinecone |
| TTS | Google Cloud TTS, ElevenLabs |
| YouTube | yt-dlp |
| Deployment | Docker, Coolify |
## Quick Start

### Prerequisites

- Python 3.11+
- PostgreSQL database (local or Supabase)
- API keys for AI providers
### Installation

```bash
cd backend
uv sync
```

### Configuration

Copy `.env.example` to `.env` and configure:
```bash
# Supabase
SUPABASE_URL=https://your-project.supabase.co
SUPABASE_KEY=your-service-role-key

# Database
DATABASE_URL=postgresql://user:pass@localhost:5432/memexllm

# AI Providers
OPENAI_API_KEY=sk-...
ANTHROPIC_API_KEY=sk-ant-...
GOOGLE_API_KEY=...

# Pinecone
PINECONE_API_KEY=...
PINECONE_INDEX=memexllm
```

### Run

```bash
uv run uvicorn src.main:app --reload
```

API docs are available at http://localhost:8000/docs.
## Project Structure

```
backend/
├── src/
│   ├── api/            # API routes
│   │   └── v1/
│   │       ├── notebooks.py
│   │       ├── chat.py
│   │       ├── sources.py
│   │       └── ...
│   ├── core/           # Core config
│   │   ├── config.py
│   │   ├── security.py
│   │   └── supabase.py
│   ├── models/         # SQLAlchemy models
│   ├── services/       # Business logic
│   │   ├── ai/
│   │   ├── chat/
│   │   ├── documents/
│   │   └── storage/
│   └── main.py         # App entry point
├── migrations/         # Alembic migrations
├── tests/              # Test files
└── alembic.ini         # Alembic config
```
## Deployment

```bash
docker-compose up -d
```

See `docs/COOLIFY_DEPLOYMENT.md` for detailed deployment instructions.

For production:

```bash
docker-compose -f docker-compose.prod.yml up -d
```

## API Endpoints

| Endpoint | Description |
|---|---|
| `POST /api/v1/notebooks` | Create notebook |
| `GET /api/v1/notebooks/{id}` | Get notebook |
| `POST /api/v1/sources/upload` | Upload document |
| `POST /api/v1/chat/message` | Send chat message |
| `GET /api/v1/studio/{type}` | Generate content |
See http://localhost:8000/docs for full API documentation.
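Since authentication uses Supabase JWTs, each request must carry the token as a bearer credential. A stdlib-only sketch of building an authenticated call to the chat endpoint follows; the request body fields are illustrative guesses, and the authoritative schema is in the OpenAPI docs at `/docs`.

```python
import json
import urllib.request

API_BASE = "http://localhost:8000/api/v1"  # local dev server

def build_chat_request(notebook_id: str, message: str, jwt: str) -> urllib.request.Request:
    """Build an authenticated POST to /chat/message.

    `notebook_id` and `message` as body fields are assumptions; consult
    the generated OpenAPI docs for the real request schema.
    """
    body = json.dumps({"notebook_id": notebook_id, "message": message}).encode()
    return urllib.request.Request(
        f"{API_BASE}/chat/message",
        data=body,
        method="POST",
        headers={
            "Authorization": f"Bearer {jwt}",  # Supabase-issued JWT
            "Content-Type": "application/json",
        },
    )

# Sending it (requires a running server):
#   urllib.request.urlopen(build_chat_request("nb-1", "Hello", token))
```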
## Contributing

- Follow the existing code style (ruff)
- Write tests for new features
- Update documentation
## License

MIT License
