(Gallery: Canva pitch slides 1–7; Figma mockups of the main page, signup, login, report a deepfake, gather evidence, country guides for the US, India, China, Singapore, and Korea, mode select, AI chat assistant, AI voice assistant, join community, and about us pages)
Inspiration
Deepfake abuse is a growing problem, yet victims struggle to find support and take action. When the younger sister on our team became a victim herself, we realized how difficult it was to navigate the legal process and access mental health support. The lack of clear resources and emotional guidance for victims motivated us to create DeepShield: a platform that empowers victims with the tools, resources, and community they need to fight back against deepfake crimes.
What it does
DeepShield provides comprehensive support systems for deepfake victims through three core features.
Report a Deepfake: Victims can learn how to report and remove harmful deepfake content with
- country-specific legal resources
- step-by-step guides
- official reporting channels for navigating takedown procedures (one way to organize these per-country resources is sketched below).
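To illustrate how country-specific guidance like this can be structured, here is a minimal sketch; the field names and entries are our illustrative assumptions, not DeepShield's actual data or a complete legal guide:

```javascript
// Hypothetical shape for per-country reporting resources; the entries
// below are illustrative examples, not authoritative legal advice.
const reportingGuides = {
  US: {
    channels: ["FBI IC3 (ic3.gov)", "Platform takedown/report forms"],
    steps: [
      "Preserve evidence (screenshots, URLs, timestamps)",
      "File a report through an official channel",
      "Request takedown from the hosting platform",
    ],
  },
  KR: {
    channels: ["Korea Communications Standards Commission"],
    steps: ["Preserve evidence", "File a report", "Request takedown"],
  },
};

// Look up a guide by country code, falling back to general advice.
function getGuide(countryCode) {
  return (
    reportingGuides[countryCode] ?? {
      channels: [],
      steps: ["Preserve evidence", "Contact local authorities"],
    }
  );
}
```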
Get Mental Support: Deepfake crimes can have severe psychological impacts, yet victims often feel isolated with little emotional support. Our AI-powered chatbot and voice assistant provide victims with real-time emotional support, coping strategies, and connections to mental health resources, ensuring they receive the guidance they need.
Join the Community: Many victims feel alone in their struggles. DeepShield offers a safe, anonymous community forum where users can share their experiences, seek advice, and connect with others facing similar situations. This fosters solidarity and empowerment, giving victims a space to heal and support one another.
How we built it
We developed DeepShield as a full-stack web application, ensuring usability, security, and accessibility for victims.
- Frontend: React.js (for a dynamic and responsive UI)
- Backend: Node.js with Express (handling authentication, reports, and messaging)
- Database: MongoDB (storing user data, reports, and forum discussions)
- AI integration: Groq Cloud API (chat assistant for mental health support)
- AI integration: OpenAI API and Google Cloud API (voice assistant for mental health support)
We kept the frontend as close as possible to our original Figma design, successfully realizing our vision for the platform.
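To give a concrete picture of how the chat assistant piece fits together, here is a minimal sketch of an Express endpoint calling the Groq Cloud API through the groq-sdk npm package; the route path, model name, and system prompt are our illustrative assumptions, not DeepShield's exact code:

```javascript
const express = require("express");
const Groq = require("groq-sdk");

const app = express();
app.use(express.json());

const groq = new Groq({ apiKey: process.env.GROQ_API_KEY });

// The system prompt scopes the assistant to empathetic mental health
// support and keeps it from answering unrelated questions.
const SYSTEM_PROMPT =
  "You are a supportive assistant for victims of deepfake abuse. " +
  "Offer empathetic, non-triggering emotional support and coping strategies, " +
  "and politely decline requests outside mental health support.";

app.post("/api/chat", async (req, res) => {
  try {
    const completion = await groq.chat.completions.create({
      model: "llama3-8b-8192", // illustrative model choice
      messages: [
        { role: "system", content: SYSTEM_PROMPT },
        { role: "user", content: req.body.message },
      ],
    });
    res.json({ reply: completion.choices[0].message.content });
  } catch (err) {
    res.status(500).json({ error: "Chat service unavailable" });
  }
});

app.listen(process.env.PORT || 3000);
```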
Challenges we ran into
- Ensuring Anonymity & Security: Since victims need a safe space, we had to implement anonymous login options and secure messaging without exposing user identities.
- Legal Research: Providing accurate reporting guidance required extensive research into regional deepfake laws and takedown procedures.
- AI Prompting & Integration: We needed to ensure that our chat and voice assistants provided empathetic, informative, and non-triggering responses, while limiting their scope to mental health support.
- Time Constraints: Developing a fully functional platform in less than a week was a major challenge, requiring rapid prototyping and efficient teamwork.
- Unexpected Login Issues: Our login page currently fails on other devices due to issues such as CORS policy restrictions and missing domain configuration, which break authentication requests from the deployed frontend (a sketch of the kind of fix involved follows this list).
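For reference, here is a minimal sketch of the kind of Express CORS configuration that typically resolves such cross-origin login failures, assuming the cors middleware; the listed origins are placeholders, not our actual deployed URLs:

```javascript
const express = require("express");
const cors = require("cors");

const app = express();

// Explicitly allow the frontend origins and credentialed requests (cookies,
// Authorization headers); omitting either is a common cause of logins that
// work locally but fail from other devices.
app.use(
  cors({
    origin: ["http://localhost:3000", "https://deepshield.up.railway.app"], // placeholders
    credentials: true,
  })
);

app.listen(process.env.PORT || 3000); // Railway injects PORT at deploy time
```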
Accomplishments that we're proud of
Building DeepShield in less than a week was an achievement in itself, but what we are most proud of is the impact and functionality of the platform.
- Creating a fully functional deepfake victim support platform: Despite limited time, we successfully developed a multi-feature platform that allows victims to report deepfakes, receive mental health support, and join a safe community.
- Implementing a working AI chatbot for real-time emotional support: Integrating Groq Cloud AI allowed us to provide victims with immediate emotional guidance, an essential feature that sets DeepShield apart.
- Developing a country-specific deepfake reporting system: Many victims are unaware of how to legally report deepfakes in their own country. By compiling legal resources and step-by-step reporting guides, we make this process easier and more accessible.
- Building a secure and inclusive community space: Creating an anonymous forum for victims to share their stories was a critical milestone, ensuring that victims have a support network without fear of judgment or exposure (one possible anonymity scheme is sketched after this list).
- Successfully hosting and deploying the platform: Despite challenges, we managed to deploy DeepShield on Railway, making it accessible to users beyond just the hackathon environment.
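To illustrate one way such anonymity can work (a sketch under our own assumptions, not necessarily DeepShield's actual scheme), forum posts can carry a random pseudonym instead of any stable account identifier:

```javascript
const crypto = require("crypto");

// Generate a random pseudonym so forum documents store no stable link
// back to the posting account.
function anonymousHandle() {
  return "user-" + crypto.randomBytes(4).toString("hex"); // e.g. "user-9f2c1ab3"
}

// Example of the document shape that could be stored in MongoDB:
// only the pseudonym and the post content are persisted.
const post = {
  handle: anonymousHandle(),
  body: "Sharing my experience and what helped me...",
  createdAt: new Date(),
};
```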
What we learned
This project taught us valuable technical and real-world lessons that will shape our future development skills and understanding of deepfake-related issues.
- The importance of cybersecurity in victim support platforms: Handling sensitive user data required us to prioritize encryption, authentication, and anonymous data storage to ensure user safety.
- How to integrate AI for mental health guidance: We learned how to fine-tune AI responses to be empathetic and helpful, making sure that chatbots and voice assistants remain supportive rather than harmful.
- Legal complexities surrounding deepfake crimes: Researching various countries' laws and takedown procedures gave us a deeper understanding of how legal frameworks handle deepfake abuse.
- The power of community-driven support: Seeing victims connect and support one another reinforced the importance of creating a safe and understanding community for those affected.
- How to rapidly develop and deploy a full-stack project under extreme time constraints: Working efficiently, dividing tasks, and making real-time adjustments were essential in bringing DeepShield to life within the hackathon timeframe.
What's next for DeepShield
DeepShield was built in 4-5 days, but our vision extends far beyond the hackathon. Moving forward, we plan to expand, enhance, and improve the platform with several key developments:
- Multi-language support: Deepfake abuse is a global issue, so we aim to translate DeepShield into multiple languages to support victims worldwide.
- Enhanced AI chatbot responses: We plan to refine AI-generated responses, integrating more in-depth mental health resources and expert-reviewed guidance.
- Stronger security and anonymity features: Implementing end-to-end encryption, anonymous user verification, and stricter data privacy measures will make DeepShield even safer for victims.
- Expanding legal guides to more countries: Currently, we cover a limited number of countries. Our goal is to compile and validate legal takedown procedures for a broader range of regions.
- Developing a mobile app for better accessibility: Many victims need immediate support but may not have easy access to a desktop site. A mobile-friendly version or dedicated app will make DeepShield more accessible.
Our ultimate goal is to turn DeepShield into a real-world solution that empowers victims, helping them reclaim their digital identity, access support, and take meaningful action against deepfake abuse.
Built With
- api
- css
- figma
- google-cloud
- groq
- javascript
- mongodb
- openai
- react