This project is an AI-powered fact-checking assistant that verifies the truthfulness of online posts and articles using the Perplexity API Platform as its reasoning and retrieval engine. Its purpose is to help users quickly identify credible information, understand evidence, and build trust in digital content.

Built with Flask, the system accepts either text or a URL, extracts readable content with BeautifulSoup and platform-specific scrapers, and then runs a multi-stage Perplexity pipeline. Each stage makes a distinct Perplexity API call for a specialized reasoning task:

  • Claim extraction – Perplexity identifies verifiable factual statements within the text.
  • Domain ranking – It ranks authoritative domains for the topic to guide reliable retrieval.
  • Constrained search – It performs multiple context-aware searches across those domains to surface relevant evidence.
  • Source assessment – It evaluates each source’s reliability and stance and extracts supporting direct quotes.

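The ingestion and staging described above can be sketched as follows. This is a minimal, illustrative sketch, not the project's actual code: the stdlib `HTMLParser` stands in for BeautifulSoup so the snippet is dependency-free, and `ask(role, prompt)` is a hypothetical wrapper standing in for one Perplexity API call per stage.

```python
from html.parser import HTMLParser


class TextExtractor(HTMLParser):
    """Collect readable text, skipping script/style blocks
    (stand-in for the BeautifulSoup extraction step)."""
    SKIP = {"script", "style", "noscript"}

    def __init__(self):
        super().__init__()
        self.parts, self._skip_depth = [], 0

    def handle_starttag(self, tag, attrs):
        if tag in self.SKIP:
            self._skip_depth += 1

    def handle_endtag(self, tag):
        if tag in self.SKIP and self._skip_depth:
            self._skip_depth -= 1

    def handle_data(self, data):
        if not self._skip_depth:
            self.parts.append(data)


def extract_text(html: str) -> str:
    parser = TextExtractor()
    parser.feed(html)
    # Normalize whitespace into a single readable string.
    return " ".join(" ".join(parser.parts).split())


def run_pipeline(text: str, ask) -> dict:
    """Run the four role-specific stages in order.

    `ask(role, prompt)` is a hypothetical interface: in the real app each
    call would be a distinct Perplexity API request with a role-specific
    system prompt. Prompts here are illustrative only.
    """
    claims = ask("claim_extraction", f"List verifiable factual claims in: {text}")
    domains = ask("domain_ranking", f"Rank authoritative domains for these claims: {claims}")
    evidence = ask("constrained_search", f"Search {domains} for evidence on: {claims}")
    assessments = ask("source_assessment", f"Assess reliability, stance, and quotes in: {evidence}")
    return {"claims": claims, "domains": domains,
            "evidence": evidence, "assessments": assessments}
```

A stub `ask` makes the flow easy to exercise offline, e.g. `run_pipeline(extract_text(html), ask)` with a mocked responder, before wiring in real API calls.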
The app then aggregates all findings deterministically to produce a final verdict (True, False, Partially True, or Unverified) with confidence scores and cited sources.
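A deterministic aggregation of this kind might look like the sketch below. The field names, thresholds, and confidence formula are illustrative assumptions, not the project's actual schema: each assessment is taken to carry a stance and a reliability weight, and the verdict follows from the reliability-weighted balance of support versus refutation.

```python
def aggregate(assessments: list[dict]) -> dict:
    """Deterministically combine per-source assessments into a verdict.

    Assumed (hypothetical) schema per assessment:
        {"stance": "supports" | "refutes" | "neutral", "reliability": float in [0, 1]}
    Thresholds (0.75 / 0.25) are illustrative choices.
    """
    support = sum(a["reliability"] for a in assessments if a["stance"] == "supports")
    refute = sum(a["reliability"] for a in assessments if a["stance"] == "refutes")
    total = support + refute

    # No weighted evidence either way: the claim cannot be verified.
    if total == 0:
        return {"verdict": "Unverified", "confidence": 0.0}

    ratio = support / total
    if ratio >= 0.75:
        verdict = "True"
    elif ratio <= 0.25:
        verdict = "False"
    else:
        verdict = "Partially True"

    # Confidence reflects how one-sided the weighted evidence is.
    return {"verdict": verdict, "confidence": round(max(ratio, 1 - ratio), 2)}
```

Because the step is pure arithmetic over the stage outputs, the same evidence always yields the same verdict, which keeps the final judgment reproducible and auditable.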

This repeated, role-specific use of Perplexity’s reasoning and retrieval capabilities transforms raw content into structured, explainable judgments. By showing users how each verdict is built from transparent evidence, the system deepens understanding, promotes media literacy, and demonstrates how AI-powered reasoning can make complex truth evaluation both rigorous and accessible.
