Inspiration

AI-generated images and sensational headlines are everywhere. It’s becoming harder to tell what’s real, especially when content is designed to trigger strong emotional reactions.

Social media rewards shock value, fear, drama, and virality. I wanted to build something that introduces friction before someone believes or shares misleading content.

DeepShield was created to analyze visual and textual manipulation patterns and give users measurable authenticity risk signals rather than asking for blind trust.


What it does

DeepShield is a media authenticity assistant with two core features:

1. Image Authenticity Scanner

The image scanner evaluates visual characteristics commonly found in synthetic or heavily edited content.

It analyzes:

  • Edge density (poster-style compositing)
  • Color saturation (oversaturated AI thumbnails)
  • Contrast intensity
  • Bright text block ratio
  • Sharpness and blur consistency

These features are normalized and combined into a synthetic probability score between 0 and 1.

Instead of claiming forensic deepfake detection, DeepShield flags exaggerated visual manipulation patterns. Stylized thumbnails and dramatic AI-generated visuals score higher than normal photographs.

The output includes:

  • Synthetic probability
  • Confidence level (Low / Medium / High)
  • A short explanation of why the image was flagged

2. Headline Manipulation Checker

The headline analyzer evaluates emotional manipulation risk in text.

It checks for:

  • Sensational trigger words (BREAKING, SHOCKING, EXPOSED)
  • Fear-based language
  • Conspiracy phrases
  • Excessive capitalization
  • Overuse of punctuation
  • Emotional exaggeration patterns

Each trigger increases a manipulation score. The result is categorized as Low, Medium, or High risk.
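The scoring logic described above can be sketched in plain Python. The trigger lists, point values, and Low/Medium/High thresholds here are illustrative assumptions, not DeepShield's exact rules:

```python
import re

# Illustrative trigger lists; a real deployment would use larger ones.
TRIGGER_WORDS = {"breaking", "shocking", "exposed", "destroyed", "banned"}
FEAR_WORDS = {"dangerous", "deadly", "terrifying", "collapse"}

def headline_risk(headline: str) -> tuple[int, str]:
    """Score a headline and bucket it into Low / Medium / High risk."""
    score = 0
    words = re.findall(r"[A-Za-z']+", headline)

    # Sensational triggers weigh more than general fear-based language.
    score += 2 * sum(1 for w in words if w.lower() in TRIGGER_WORDS)
    score += 1 * sum(1 for w in words if w.lower() in FEAR_WORDS)

    # Excessive capitalization: many fully upper-case words.
    caps = sum(1 for w in words if len(w) > 2 and w.isupper())
    if words and caps / len(words) > 0.3:
        score += 2

    # Overuse of punctuation, capped so it can't dominate the score.
    score += min(headline.count("!") + headline.count("?"), 3)

    level = "Low" if score <= 2 else "Medium" if score <= 5 else "High"
    return score, level
```

For example, a neutral headline like "City council approves new park budget" accumulates no points, while an all-caps headline stacked with trigger words and exclamation marks lands in the High bucket.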

This makes it clear when a headline is engineered for outrage rather than information.


How I built it

Frontend:

  • Responsive UI
  • Drag-and-drop image upload
  • Real-time confidence indicators

Backend:

  • FastAPI
  • OpenCV for image analysis
  • NumPy for statistical feature extraction
  • Rule-based NLP scoring for headline analysis

The final implementation avoids heavy deep learning models in favor of fast, deterministic analysis, which keeps response times low and outputs stable.


Challenges I ran into

The original implementation used deep learning models for deepfake detection. This caused:

  • Long load times
  • Inconsistent predictions
  • Unstable outputs

Without properly trained model weights and optimized datasets, results were unreliable.

I rebuilt the system using deterministic visual feature analysis. This improved speed, consistency, and stability.


Accomplishments that I’m proud of

  • Built a fully functional authenticity risk scanner
  • Achieved sub-second response time
  • Calibrated the system to flag highly stylized AI thumbnails
  • Designed a clean and intuitive user interface
  • Created a structured, explainable scoring system

What I learned

  • A stable, simple system beats an overcomplicated one built on heavy models
  • Proper calibration is critical for believable scoring
  • Clear framing is important when dealing with probabilistic systems
  • Performance optimization can drastically improve user trust

What’s next for DeepShield

Future improvements would include:

  • Integration of pretrained deepfake classifiers
  • OCR extraction from image thumbnails
  • Transformer-based contextual misinformation detection
  • Browser extension deployment

DeepShield currently provides authenticity risk signals. The next step would be expanding it into a scalable misinformation defense tool.

Built With

  • Python
  • FastAPI
  • OpenCV
  • NumPy
