An AI-powered forensic case reconstruction system that transforms scattered police reports and witness testimonies into structured, chronological timelines with visual hypotheses, a detective-style case board, and a conversational AI agent.
- Evidence Ingestion: Upload and process police reports, witness testimonies, and incident logs
- Scene Reconstruction: AI-powered scene completion with confidence scoring and assumption tracking
- Visual Hypotheses: Text-to-image generation for plausible scene visualizations (CCTV, wide-angle, FPV variants)
- Interactive Timeline: Chronological event player with export capabilities
- Case Board: Interactive entity relationship graph showing connections between people, objects, places, and events
- AI Chat Agent: LangChain-powered assistant with forensic investigation tools
- Audit Trail: Complete JSON logging for transparency and accountability
- PII Protection: Automated scrubbing of sensitive information
- LLM: OpenAI GPT-4o via OpenRouter for reasoning and scene completion
- Embeddings: Gemini embedding-001 (Google AI API)
- Text-to-Image: Gemini via OpenRouter for visual hypothesis generation
- Agent: LangChain agent with specialized forensic tools
- Storage: In-memory storage (easily replaceable with PostgreSQL/Redis)
- Framework: Next.js 14 with App Router
- UI: Tailwind CSS + shadcn/ui components
- Visualization: React Flow for case graphs, custom timeline player
- State: SWR for data fetching and caching
- Python 3.11+
- Node.js 18+
- OpenRouter API key (for chat models)
- Google AI API key (for embeddings)
### Backend Setup

```bash
cd apps/api

# Create virtual environment
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate

# Install dependencies
pip install -r requirements.txt

# Create .env file
cp .env.example .env
# Edit .env with your API keys
```

### Frontend Setup

```bash
cd apps/web

# Install dependencies
npm install

# Create .env.local file
cp .env.local.example .env.local
# Edit .env.local with your API base URL
```

### Run the Backend

```bash
cd apps/api
source venv/bin/activate
python -m uvicorn main:app --reload --port 8000
```

The API will be available at http://localhost:8000.

- API docs: http://localhost:8000/docs
- Health check: http://localhost:8000/health

### Run the Frontend

```bash
cd apps/web
npm run dev
```

The web app will be available at http://localhost:3000.
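Once the backend is running, a quick scripted smoke test can confirm it is reachable. This is a minimal sketch using only the standard library; it assumes `/health` simply answers HTTP 200 when the service is up.

```python
import urllib.request

def check_health(base_url: str = "http://localhost:8000") -> bool:
    """Return True when the backend /health endpoint answers with HTTP 200."""
    try:
        with urllib.request.urlopen(f"{base_url}/health", timeout=5) as resp:
            return resp.status == 200
    except OSError:
        # Connection refused, DNS failure, or timeout all mean "not healthy".
        return False

if __name__ == "__main__":
    print("backend healthy:", check_health())
```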
Backend `apps/api/.env`:

```env
# OpenRouter API (for OpenAI chat models)
OPENROUTER_API_KEY=your_openrouter_key_here
OPENROUTER_API_BASE=https://openrouter.ai/api/v1
OPENAI_CHAT_MODEL=openai/gpt-4o

# Google AI API (for embeddings)
GOOGLE_API_KEY=your_google_ai_key_here
GEMINI_EMBEDDING_MODEL=gemini-embedding-001

# Text-to-Image (Gemini via OpenRouter)
TTI_PROVIDER=openrouter
OPENROUTER_IMAGE_MODEL=google/gemini-2.5-flash-image-preview

# App Settings
BACKEND_PORT=8000
FRONTEND_PORT=3000
CORS_ORIGINS=http://localhost:3000
```

Frontend `apps/web/.env.local`:

```env
NEXT_PUBLIC_API_BASE_URL=http://localhost:8000
NEXT_PUBLIC_APP_NAME="Narrative Recovery Agent"
```

To add evidence:

- Select evidence type (Report, Witness, Log)
- Paste or type evidence text
- Click "Add Evidence"
- System will automatically reconstruct scenes and generate visual hypotheses
- Review reasoning panel showing caption, assumptions, and entities
- View generated images with different variants
- Navigate through events chronologically
- Play/pause automatic progression
- Scrub through timeline with slider
- Export as MP4 video (stretch feature)
- Interactive graph showing entity relationships
- Search and filter by entity name
- Click nodes to explore connections
- Zoom, pan, and navigate the graph
- Ask questions about the case
- Get summaries and gap analysis
- Search for entities and events
- Receive investigation suggestions
- `POST /ingest`: Ingest evidence items
- `POST /reconstruct`: Reconstruct scenes from evidence
- `GET /timeline`: Get the chronological timeline
- `GET /graph`: Get the case entity graph
- `POST /chat`: Chat with the AI agent
- `POST /regen-image`: Regenerate an image with a variant
- `POST /export`: Export the timeline as video
- `GET /logs`: Get audit logs
- EvidenceForm: Evidence input with type selection
- ReasoningPanel: Scene reconstruction details with tabs
- HypothesisGallery: Image viewer with variant controls
- TimelinePlayer: Interactive timeline with playback
- CaseGraph: React Flow entity relationship visualization
- ChatDock: Conversational AI interface
- Button, Card, Badge, Input, Textarea
- Tabs, Skeleton, Dialog, Tooltip
- And more...
- Automatic regex-based scrubbing of emails, phone numbers, SSNs, etc.
- Redaction before LLM processing
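The scrubbing step can be illustrated with a few regular expressions. This is a simplified sketch; the actual patterns and redaction tokens used in `apps/api` may differ.

```python
import re

# Simplified patterns; real-world scrubbing needs broader coverage.
PII_PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),
    (re.compile(r"(?:\+?1[-. ]?)?\(?\d{3}\)?[-. ]?\d{3}[-. ]?\d{4}\b"), "[PHONE]"),
]

def scrub_pii(text: str) -> str:
    """Replace common PII patterns with redaction tokens before LLM calls."""
    for pattern, token in PII_PATTERNS:
        text = pattern.sub(token, text)
    return text
```

Applying the SSN pattern before the phone pattern keeps the two formats from colliding, since both are built from digit groups.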
- All images labeled as "AI-generated hypotheses"
- Confidence scores displayed
- Assumptions explicitly listed
- Watermarked outputs
- No identifiable faces generated
- Documentary/forensic style enforced
- JSON logs for every action
- Request IDs for tracing
- Input/output capture
- Latency tracking
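An audit entry of the shape implied above could be emitted as one JSON object per action. A minimal sketch; field names such as `request_id` and `latency_ms` are assumptions about the log shape, not the project's actual schema.

```python
import json
import time
import uuid

def audit_record(action: str, request_input: dict, request_output: dict, started: float) -> str:
    """Serialize one audit-log line: request ID, captured I/O, and latency."""
    return json.dumps({
        "request_id": str(uuid.uuid4()),  # unique ID for tracing
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "action": action,
        "input": request_input,           # input capture
        "output": request_output,         # output capture
        "latency_ms": round((time.monotonic() - started) * 1000, 1),
    })

# Example:
#   started = time.monotonic()
#   ... handle the request ...
#   print(audit_record("ingest", {"text": "..."}, {"ok": True}, started))
```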
Scene event:

```typescript
{
  event_id: string
  timestamp?: string
  title?: string
  caption: string
  assumptions: string[]
  confidence: number  // 0-1
  image_urls: string[]
  source_item_ids: string[]
  key_entities?: {
    people?: string[]
    objects?: string[]
    places?: string[]
  }
}
```

Case graph:

```typescript
{
  nodes: [{
    id: string
    type: "person" | "object" | "place" | "event"
    label: string
  }]
  edges: [{
    source: string
    target: string
    relation: string
  }]
}
```

To run the tests:

```bash
# Backend
cd apps/api
pytest

# Frontend
cd apps/web
npm test
```

Planned enhancements:

- Video export with FFmpeg/Remotion
- Advanced graph interactions (subgraph highlighting, filtering)
- Audio TTS for timeline narration
- RAG over prior cases for pattern detection
- Role-based access control
- PDF report export
- Multi-case management
- Real-time collaboration
- Mobile responsive optimizations
MIT License - See LICENSE file for details
Contributions welcome! Please read CONTRIBUTING.md for guidelines.
- OpenAI for GPT-4o (via OpenRouter)
- Google for Gemini embeddings and image generation
- Vercel for Next.js framework
- shadcn for UI components
- LangChain for agent orchestration
For questions or support, open an issue or contact the maintainers.