Inspiration

I wanted to build an AI that doesn’t just retrieve information, but actually understands causality. Every day, people ask "Why did this happen?", whether it’s about the climate, the economy, or world events. Yet most AI systems only summarize or predict, without linking facts together. I set out to design an explanatory AI that connects the dots: a Common Sense Engine.

What it does

Common Sense Engine takes a natural-language "Why..." question and:

  1) Retrieves the most relevant facts from Elasticsearch (lexical + vector search).
  2) Maps causal links between events using heuristic reasoning and implicit inference.
  3) Summarizes the findings with Gemini (Vertex AI), producing a clear summary, a causal graph (rendered with PyVis), and clickable sources for full transparency.

It’s like watching an AI draw a map of why something happened.
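
To make that map concrete, here is a minimal sketch of how a set of causal edges can be rendered as an interactive PyVis graph. The example events, confidence values, and output file name are illustrative assumptions, not the project's actual data model.

```python
# Minimal sketch: turn causal edges into an interactive PyVis graph.
# The events and confidence values below are made-up examples.
from pyvis.network import Network

edges = [
    ("Drought in 2022", "Poor wheat harvest", 0.8),
    ("Poor wheat harvest", "Higher bread prices", 0.7),
]

net = Network(height="600px", width="100%", directed=True)

for cause, effect, confidence in edges:
    net.add_node(cause, label=cause)
    net.add_node(effect, label=effect)
    # Edge width and hover title encode how confident the causal link is.
    net.add_edge(cause, effect, value=confidence, title=f"confidence: {confidence}")

net.save_graph("causal_graph.html")  # open the file in a browser for the interactive view
```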

How I built it

I built a multi-agent architecture with:

  • a FastAPI backend coordinating all reasoning steps
  • Elasticsearch for hybrid retrieval (BM25 + embeddings); a query sketch follows this list
  • Gemini (Google Generative AI) for summaries and node labels
  • Streamlit for the interactive UI
  • PyVis for graph visualization
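
As a rough illustration of the hybrid retrieval step, the query below combines BM25 full-text matching with kNN vector search in a single Elasticsearch 8.x request. The index name, field names, and the way the question embedding is produced are assumptions for this sketch, not the project's actual schema.

```python
# Sketch of a hybrid (BM25 + kNN) Elasticsearch query.
# The index "facts" and the fields "text"/"embedding" are assumed names.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

def hybrid_search(question: str, query_vector: list[float], k: int = 10):
    # Elasticsearch scores the lexical and vector legs together in one request.
    return es.search(
        index="facts",
        query={"match": {"text": question}},   # lexical (BM25) leg
        knn={
            "field": "embedding",              # dense-vector field
            "query_vector": query_vector,      # embedding of the question
            "k": k,
            "num_candidates": 50,
        },
        size=k,
    )
```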

Each agent has a specific role: FactFinder, CausalityMapper, NodeSummarizer, ImplicitInferencer, TemporalOrganizer, and Synthesizer.
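
As a rough sketch of how the backend could chain these agents (the method names and the data passed between steps are assumptions, not the actual interfaces):

```python
# Hypothetical coordinator showing the order in which the agents run.
# The agent names come from the project; their interfaces here are invented.
class Coordinator:
    def __init__(self, fact_finder, causality_mapper, node_summarizer,
                 implicit_inferencer, temporal_organizer, synthesizer):
        self.fact_finder = fact_finder
        self.causality_mapper = causality_mapper
        self.node_summarizer = node_summarizer
        self.implicit_inferencer = implicit_inferencer
        self.temporal_organizer = temporal_organizer
        self.synthesizer = synthesizer

    def answer(self, question: str) -> dict:
        facts = self.fact_finder.retrieve(question)             # hybrid search hits
        edges = self.causality_mapper.link(facts)               # explicit causal links
        edges += self.implicit_inferencer.infer(facts, edges)   # implied links
        edges = self.temporal_organizer.order(edges)            # respect event ordering
        nodes = self.node_summarizer.label(facts)               # Gemini-generated node labels
        summary = self.synthesizer.summarize(question, nodes, edges)
        return {"summary": summary, "nodes": nodes, "edges": edges}
```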

Challenges I ran into

  • Designing deterministic heuristics for causality detection (a sketch of the idea follows this list)
  • Keeping the pipeline fast and explainable while using large models
  • Managing the balance between precision and recall in evidence retrieval
  • Integrating live Google Search ingestion while isolating results per query
  • Making the UI intuitive enough to explain complex reasoning
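
To give a sense of what "deterministic heuristics" means here, the snippet below matches causal cue phrases in a sentence and assigns a fixed confidence per cue. The cue list and scores are illustrative assumptions rather than the engine's actual rules.

```python
import re

# Illustrative cue phrases and confidence scores; the real heuristics differ.
CAUSAL_CUES = {
    r"\bbecause of\b": 0.9,
    r"\bas a result of\b": 0.85,
    r"\bled to\b": 0.8,
    r"\bcontributed to\b": 0.6,
}

def detect_causal_cue(sentence: str) -> float | None:
    """Return a confidence score if the sentence contains a causal cue phrase."""
    for pattern, confidence in CAUSAL_CUES.items():
        if re.search(pattern, sentence, flags=re.IGNORECASE):
            return confidence
    return None
```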

Accomplishments that I'm proud of

  • Built a working causal reasoning demo in just a few weeks
  • Implemented hybrid retrieval (text + vector) and multi-agent orchestration
  • Generated human-like summaries with source traceability
  • Created an interactive causal graph that makes reasoning transparent

What I learned

  • How to combine retrieval, reasoning, and generation in one pipeline
  • That "why" questions push AI beyond language into commonsense reasoning
  • How small design choices (like confidence scoring and transitivity) impact interpretability
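
To illustrate that last point, one simple way to apply transitivity is to chain edges A -> B and B -> C into a derived edge A -> C while multiplying their confidences, so inferred links always score lower than direct evidence. The product rule and the 0.5 cutoff below are assumptions for this sketch, not the engine's actual settings.

```python
# Sketch: derive one-step transitive causal edges with multiplied confidences.
def transitive_edges(edges: dict[tuple[str, str], float],
                     threshold: float = 0.5) -> dict[tuple[str, str], float]:
    derived = dict(edges)
    for (a, b), conf_ab in edges.items():
        for (b2, c), conf_bc in edges.items():
            if b == b2 and (a, c) not in derived:
                confidence = conf_ab * conf_bc   # inferred links are weaker than direct ones
                if confidence >= threshold:
                    derived[(a, c)] = confidence
    return derived
```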

What's next for Common Sense Engine

  • Anchor correlation/contradiction edges to concrete nodes
  • Add factual verification and "what-if" scenario simulation
  • Deploy a public API for research and journalism use cases

Built With

Python, FastAPI, Elasticsearch, Gemini (Vertex AI), Streamlit, PyVis