GuardHer AI is a web-based platform that uses artificial intelligence to detect and prevent gender-based online harassment in real time. The system analyzes text, images, and voice input to identify abusive language, harmful visuals, sexism, bullying, and threats, helping create safer and more respectful digital spaces.
- Text Analysis – Detects harmful language, sexism, bullying, and threats
- Image Analysis – Identifies abusive visuals, memes, and screenshots
- Voice Analysis – Converts voice input to text and analyzes it for harassment
- Severity Levels & Explanations – Explains why content is harmful and how severe it is
- Anonymous Community Reporting – Users can safely report harmful content
- Analytics Dashboard – Tracks harassment patterns and trends over time
- PDF Report Export – Generates downloadable reports
- Email Reporting – Sends analysis reports directly via email
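The severity-levels feature above can be illustrated with a minimal, hypothetical scorer. GuardHer AI's actual detection uses ML models that are not shown here; the keyword table, severity scale, and function below are invented purely for illustration:

```python
# Hypothetical keyword-to-severity table standing in for the real ML models.
# Scale assumed here: 0 = safe, 1 = mild, 3 = severe.
SEVERITY_TERMS = {
    "threat": 3,   # explicit threats -> high severity
    "stupid": 1,   # insults -> low severity
}

def score_text(text: str) -> tuple[int, list[str]]:
    """Return (severity 0-3, matched terms) for a piece of text.

    The matched terms double as the "explanation" the platform
    surfaces for transparency.
    """
    words = text.lower().split()
    matched = [w for w in words if w in SEVERITY_TERMS]
    severity = max((SEVERITY_TERMS[w] for w in matched), default=0)
    return severity, matched
```

For example, `score_text("that is a threat")` returns `(3, ["threat"])`, pairing a severity level with the terms that triggered it.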
GuardHer AI aims to prevent online gender-based violence, support victims, assist moderators, and foster respectful online communication through AI-powered moderation.
- Users submit text, image, or voice content
- AI analyzes the content in real time
- The system detects abusive behavior and assigns a severity level
- Clear explanations are provided for transparency
- Reports can be exported as PDF or sent via email
- Insights are displayed on an analytics dashboard
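As a sketch of the email-reporting step in the flow above, the following assembles a report message with Python's standard `email` library. The recipient, subject format, and body layout are assumptions, and actual delivery (e.g. via `smtplib`) is omitted:

```python
from email.message import EmailMessage

def build_report_email(recipient: str, severity: int, explanation: str) -> EmailMessage:
    """Assemble an analysis-report email ready to hand to an SMTP client."""
    msg = EmailMessage()
    msg["To"] = recipient
    msg["Subject"] = f"GuardHer AI analysis report (severity {severity})"
    # Body mirrors the severity level and transparency explanation
    # produced by the analysis step.
    msg.set_content(f"Severity: {severity}\nWhy flagged: {explanation}")
    return msg
```

Keeping message assembly separate from delivery makes the report easy to reuse for the PDF-export path as well.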
- Frontend: React.js
- AI & NLP: Machine Learning Models
- Image Analysis: Computer Vision
- Voice Processing: Speech-to-Text
- Deployment: Vercel
🔗 https://guardherai.vercel.app/
- /src – Application source code
- /public – Static assets
- README.md – Project documentation
This project is the original work of the team and was developed for the Cyber Shield Hackathon.
✨ GuardHer AI – Protecting voices. Preventing abuse. Promoting respect.