🌟 Inspiration
It all started when I found myself idle, thinking deeply about how I could create something meaningful — a technology that would not just exist, but impact people’s lives in a real way. That’s how I stumbled into the world of deepfakes.
The more I researched, the more I realized the dangers: misinformation, identity theft, reputational damage. And yet, most tools only detect whether something is fake — very few can tell you who made it, with what, and how. That gap inspired me to build ReMorph: not just a deepfake detector, but a forensic tool to trace deepfakes back to their origin.
🛠️ How We Built It
My original dream was ambitious — full traceability of deepfakes down to their source. But reality hit quickly: my laptop had no GPU, limited processing power, and building such a system demanded heavy compute.
Instead of giving up, I scaled smartly:
- Chose lightweight frameworks and models that could run on CPU.
- Focused on building a minimum viable version that works, while leaving room for future robustness.
- Iterated constantly, learning and adapting at each step.
My teammate Essie and I coded in Python, experimented with datasets, and leaned on research papers, online communities, and even LLMs whenever we hit roadblocks. It was less a straight line than a loop: build → fail → learn → improve → repeat.
🔑 What ReMorph Can Do (Current MVP)
Despite the limitations, we’re proud that ReMorph already delivers meaningful capabilities:
🔥 Identify which AI model generated an image
🔥 Detect hidden artifacts left behind by synthetic generation
🔥 Estimate possible training datasets or generation settings
🔥 Continuously improve using curated, consented user submissions
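To illustrate the artifact-detection idea, here is a minimal sketch of one common technique from the deepfake-forensics literature: frequency-domain analysis. Generative upsampling often leaves periodic high-frequency traces that show up as excess energy in an image's radially averaged power spectrum. This is a generic, CPU-friendly illustration of that idea, not ReMorph's actual pipeline; all function names and the toy images are invented for the example.

```python
import numpy as np

def spectral_fingerprint(image: np.ndarray, n_bins: int = 64) -> np.ndarray:
    """Radially averaged power spectrum of a grayscale image.

    Periodic generation artifacts tend to appear as energy bumps
    in the high-frequency (outer) bins of this 1-D profile.
    """
    # Shift the 2-D FFT so low frequencies sit at the center
    power = np.abs(np.fft.fftshift(np.fft.fft2(image))) ** 2

    h, w = image.shape
    y, x = np.indices((h, w))
    r = np.hypot(y - h // 2, x - w // 2)  # radial frequency per pixel

    # Average power within concentric rings
    bins = np.clip((r / r.max() * n_bins).astype(int), 0, n_bins - 1)
    totals = np.bincount(bins.ravel(), weights=power.ravel(), minlength=n_bins)
    counts = np.bincount(bins.ravel(), minlength=n_bins)
    return totals / np.maximum(counts, 1)

def high_freq_ratio(image: np.ndarray, cutoff: float = 0.75) -> float:
    """Fraction of spectral energy above `cutoff` of the radial range."""
    spec = spectral_fingerprint(image)
    k = int(len(spec) * cutoff)
    return float(spec[k:].sum() / spec.sum())

# Toy demo: a smooth gradient vs. the same image with a pixel-level
# checkerboard pattern standing in for an upsampling artifact
smooth = np.outer(np.linspace(0, 1, 128), np.linspace(0, 1, 128))
checker = smooth + 0.05 * (np.indices((128, 128)).sum(axis=0) % 2)
print(high_freq_ratio(smooth), high_freq_ratio(checker))
```

In a real forensic setting, the 1-D spectrum itself (not just a single ratio) would be fed to a classifier trained per generator family, since different models leave differently shaped spectral fingerprints.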
This is lightweight for now, but it lays the foundation for scalable, robust deepfake forensics in the future.
📚 What We Learned
This project has been an incredible crash course:
- How deepfakes are created and where their weaknesses lie
- Techniques in deepfake detection and digital forensics
- Working with datasets and training ML models under strict hardware constraints
- Most importantly: that persistence matters more than perfection.
⚔️ Challenges We Faced
- Hardware limitations: No GPU, frequent system crashes, and CPU overload.
- Data hurdles: Finding quality datasets that weren’t noisy or incomplete.
- Debugging chaos: Endless runtime errors, dependency conflicts, and hangs.
- Time pressure: Balancing learning, building, and re-building in a week-long sprint.
Every obstacle slowed us down, but it also forced us to become more resourceful, creative, and resilient.
🚀 What’s Next
ReMorph is just the first step toward true deepfake traceability. Our long-term vision is to empower:
- Investigators — to attribute fake content to specific generative models.
- Journalists — to uncover truth in an age of misinformation.
- Platforms & policymakers — to establish accountability for synthetic media.
Check out our GitHub repo for more insight into the problem our project solves, its key features, and how it works.
We dream of scaling ReMorph with more robust datasets, GPU-powered training, and a user-friendly platform that makes deepfake forensics accessible to all.