Sakshamyadav19/ForSight.AI

Narrative Recovery Agent 🕵️‍♀️

An AI-powered forensic case reconstruction system that transforms scattered police reports and witness testimonies into structured, time-ordered timelines with visual hypotheses, a detective-style case board, and a conversational investigation agent.

🎯 Features

  • Evidence Ingestion: Upload and process police reports, witness testimonies, and incident logs
  • Scene Reconstruction: AI-powered scene completion with confidence scoring and assumption tracking
  • Visual Hypotheses: Text-to-image generation for plausible scene visualizations (CCTV, wide-angle, FPV variants)
  • Interactive Timeline: Chronological event player with export capabilities
  • Case Board: Interactive entity relationship graph showing connections between people, objects, places, and events
  • AI Chat Agent: LangChain-powered assistant with forensic investigation tools
  • Audit Trail: Complete JSON logging for transparency and accountability
  • PII Protection: Automated scrubbing of sensitive information

πŸ—οΈ Architecture

Backend (FastAPI + Python)

  • LLM: OpenAI GPT-4o via OpenRouter for reasoning and scene completion
  • Embeddings: Gemini embedding-001 (Google AI API)
  • Text-to-Image: Gemini via OpenRouter for visual hypothesis generation
  • Agent: LangChain agent with specialized forensic tools
  • Storage: In-memory storage (easily replaceable with PostgreSQL/Redis)

Frontend (Next.js 14 + React)

  • Framework: Next.js 14 with App Router
  • UI: Tailwind CSS + shadcn/ui components
  • Visualization: React Flow for case graphs, custom timeline player
  • State: SWR for data fetching and caching

📦 Installation

Prerequisites

  • Python 3.11+
  • Node.js 18+
  • OpenRouter API key (for chat models)
  • Google AI API key (for embeddings)

Backend Setup

```bash
cd apps/api

# Create virtual environment
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate

# Install dependencies
pip install -r requirements.txt

# Create .env file
cp .env.example .env
# Edit .env with your API keys
```

Frontend Setup

```bash
cd apps/web

# Install dependencies
npm install

# Create .env.local file
cp .env.local.example .env.local
# Edit .env.local with your API base URL
```

🚀 Running the Application

Start Backend

```bash
cd apps/api
source venv/bin/activate
python -m uvicorn main:app --reload --port 8000
```

The API will be available at http://localhost:8000

  • API docs: http://localhost:8000/docs
  • Health check: http://localhost:8000/health

Start Frontend

```bash
cd apps/web
npm run dev
```

The web app will be available at http://localhost:3000

🔑 Environment Variables

Backend (.env)

```bash
# OpenRouter API (for OpenAI chat models)
OPENROUTER_API_KEY=your_openrouter_key_here
OPENROUTER_API_BASE=https://openrouter.ai/api/v1
OPENAI_CHAT_MODEL=openai/gpt-4o

# Google AI API (for embeddings)
GOOGLE_API_KEY=your_google_ai_key_here
GEMINI_EMBEDDING_MODEL=gemini-embedding-001

# Text-to-Image (Gemini via OpenRouter)
TTI_PROVIDER=openrouter
OPENROUTER_IMAGE_MODEL=google/gemini-2.5-flash-image-preview

# App Settings
BACKEND_PORT=8000
FRONTEND_PORT=3000
CORS_ORIGINS=http://localhost:3000
```
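For illustration, the backend variables above could be read with a small stdlib-only helper. The project may well use a library such as pydantic-settings instead; this sketch only mirrors the key names and defaults shown in the .env block:

```python
import os


def load_settings(env: dict = os.environ) -> dict:
    """Read backend settings from the environment, falling back to the documented defaults."""
    return {
        "openrouter_api_key": env.get("OPENROUTER_API_KEY", ""),
        "openrouter_api_base": env.get("OPENROUTER_API_BASE", "https://openrouter.ai/api/v1"),
        "chat_model": env.get("OPENAI_CHAT_MODEL", "openai/gpt-4o"),
        "google_api_key": env.get("GOOGLE_API_KEY", ""),
        "embedding_model": env.get("GEMINI_EMBEDDING_MODEL", "gemini-embedding-001"),
        # CORS_ORIGINS may hold several comma-separated origins
        "cors_origins": env.get("CORS_ORIGINS", "http://localhost:3000").split(","),
    }
```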

Frontend (.env.local)

```bash
NEXT_PUBLIC_API_BASE_URL=http://localhost:8000
NEXT_PUBLIC_APP_NAME="Narrative Recovery Agent"
```

📖 Usage Guide

1. Workspace (Evidence Ingestion)

  1. Select evidence type (Report, Witness, Log)
  2. Paste or type evidence text
  3. Click "Add Evidence"
  4. System will automatically reconstruct scenes and generate visual hypotheses
  5. Review reasoning panel showing caption, assumptions, and entities
  6. View generated images with different variants

2. Timeline View

  • Navigate through events chronologically
  • Play/pause automatic progression
  • Scrub through timeline with slider
  • Export as MP4 video (stretch feature)

3. Case Board

  • Interactive graph showing entity relationships
  • Search and filter by entity name
  • Click nodes to explore connections
  • Zoom, pan, and navigate the graph

4. Chat Assistant

  • Ask questions about the case
  • Get summaries and gap analysis
  • Search for entities and events
  • Receive investigation suggestions

πŸ› οΈ API Endpoints

Core Endpoints

  • POST /ingest - Ingest evidence items
  • POST /reconstruct - Reconstruct scenes from evidence
  • GET /timeline - Get chronological timeline
  • GET /graph - Get case entity graph
  • POST /chat - Chat with AI agent
  • POST /regen-image - Regenerate image with variant
  • POST /export - Export timeline as video
  • GET /logs - Get audit logs
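As a hedged illustration of calling these endpoints, the snippet below builds a POST /ingest request with only the standard library. The payload fields (`type`, `text`) are assumptions based on the Workspace usage guide, not a documented schema:

```python
import json
import urllib.request

API_BASE = "http://localhost:8000"


def build_ingest_request(kind: str, text: str) -> urllib.request.Request:
    """Build (but do not send) a POST /ingest request; payload fields are assumed."""
    body = json.dumps({"type": kind, "text": text}).encode()
    return urllib.request.Request(
        f"{API_BASE}/ingest",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )


# Sending requires the backend to be running:
# with urllib.request.urlopen(build_ingest_request("witness", "Saw a red sedan at 21:40.")) as resp:
#     print(json.load(resp))
```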

🎨 UI Components

Custom Components

  • EvidenceForm: Evidence input with type selection
  • ReasoningPanel: Scene reconstruction details with tabs
  • HypothesisGallery: Image viewer with variant controls
  • TimelinePlayer: Interactive timeline with playback
  • CaseGraph: React Flow entity relationship visualization
  • ChatDock: Conversational AI interface

shadcn/ui Components

  • Button, Card, Badge, Input, Textarea
  • Tabs, Skeleton, Dialog, Tooltip
  • And more...

🔒 Security & Ethics

PII Protection

  • Automatic regex-based scrubbing of emails, phone numbers, SSNs, etc.
  • Redaction before LLM processing
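A minimal sketch of what regex-based scrubbing before LLM processing can look like; the patterns and `[EMAIL]`-style placeholders are our own illustrative choices, not the project's actual rules:

```python
import re

# Ordered (pattern, replacement) pairs; extend with addresses, DOBs, etc. as needed.
PII_PATTERNS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),
    (re.compile(r"\b\d{3}-\d{2}-\d{4}\b"), "[SSN]"),
    (re.compile(r"\b(?:\+?1[-. ]?)?\(?\d{3}\)?[-. ]?\d{3}[-. ]?\d{4}\b"), "[PHONE]"),
]


def scrub_pii(text: str) -> str:
    """Redact common PII patterns before the text is sent to the LLM."""
    for pattern, replacement in PII_PATTERNS:
        text = pattern.sub(replacement, text)
    return text
```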

Visual Hypothesis Guidelines

  • All images labeled as "AI-generated hypotheses"
  • Confidence scores displayed
  • Assumptions explicitly listed
  • Watermarked outputs
  • No identifiable faces generated
  • Documentary/forensic style enforced

Audit Trail

  • JSON logs for every action
  • Request IDs for tracing
  • Input/output capture
  • Latency tracking
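One way a JSON audit line combining these bullets could be assembled; the field names are assumptions inferred from the list above, not the project's actual log schema:

```python
import json
import time
import uuid


def audit_entry(action: str, inputs: dict, outputs: dict, started_at: float) -> str:
    """Build one JSON audit line with a request ID for tracing and latency in ms."""
    return json.dumps({
        "request_id": str(uuid.uuid4()),
        "action": action,
        "inputs": inputs,
        "outputs": outputs,
        "latency_ms": round((time.monotonic() - started_at) * 1000, 2),
    })
```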

📊 Data Models

Event

```typescript
{
  event_id: string
  timestamp?: string
  title?: string
  caption: string
  assumptions: string[]
  confidence: number  // 0-1
  image_urls: string[]
  source_item_ids: string[]
  key_entities?: {
    people?: string[]
    objects?: string[]
    places?: string[]
  }
}
```

Entity Graph

```typescript
{
  nodes: [{
    id: string
    type: "person" | "object" | "place" | "event"
    label: string
  }]
  edges: [{
    source: string
    target: string
    relation: string
  }]
}
```
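To illustrate how the Case Board's "click nodes to explore connections" behavior can traverse this structure, here is a minimal sketch; the `neighbors` helper is an assumption for illustration, not code from the repository:

```python
def neighbors(graph: dict, node_id: str) -> list:
    """Return (other_node_id, relation) pairs for edges touching node_id, in either direction."""
    out = []
    for edge in graph["edges"]:
        if edge["source"] == node_id:
            out.append((edge["target"], edge["relation"]))
        elif edge["target"] == node_id:
            out.append((edge["source"], edge["relation"]))
    return out
```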

🧪 Testing

```bash
# Backend
cd apps/api
pytest

# Frontend
cd apps/web
npm test
```

🚧 Stretch Goals / Roadmap

  • Video export with FFmpeg/Remotion
  • Advanced graph interactions (subgraph highlighting, filtering)
  • Audio TTS for timeline narration
  • RAG over prior cases for pattern detection
  • Role-based access control
  • PDF report export
  • Multi-case management
  • Real-time collaboration
  • Mobile responsive optimizations

πŸ“ License

MIT License - See LICENSE file for details

🤝 Contributing

Contributions welcome! Please read CONTRIBUTING.md for guidelines.

πŸ™ Acknowledgments

  • OpenAI for GPT-4o (via OpenRouter) and Google for Gemini embeddings
  • Vercel for Next.js framework
  • shadcn for UI components
  • LangChain for agent orchestration

📧 Contact

For questions or support, open an issue or contact the maintainers.


⚠️ Disclaimer: This system generates AI-powered visual hypotheses for investigation purposes only. All outputs are plausible scenarios, not verified facts. Always corroborate with physical evidence and witness testimony.
