Rottify: Where AI Meets Chaos Theory
Turn Text into Viral "Brain Rot" That Secretly Teaches
Project Story: From Mindless Scrolling to Microlearning
The Inspiration
We noticed something:
Our friends were spending hours making "StudyTok" summaries, trying to make learning engaging.
Existing tools? Clunky. Professional video editors? Expensive. AI voiceovers? Uncanny valley nightmares.
Meanwhile, TikTok's algorithm was optimizing for brain rot: pumping out snappy, high-energy, chaotic content.
So, during a 10 AM to 6 PM hackathon, fueled by water and existential dread, we had a thought:
What if we could automate the entire "brain rot" video creation process, turning any text into a viral-ready, high-energy microlearning video?
And so, Rottify was born.
The Breakthrough: Rottify's Unholy Trinity
We combined state-of-the-art AI models to make text unreasonably engaging:
Perplexity Sonar: shreds documents into viral hooks ("Shakespeare's rizz vs. Newton's trauma")
ElevenLabs: AI voiceovers trained on 69,420 hours of "slay queen" ASMR compilations
Remotion + FFmpeg: generates TikTok-style videos with hypnotic captions & meme timing
The result?
Rottify makes 60-second videos faster than an over-caffeinated YouTuber.
How We Built It
Tech Stack
How It Works
Text Processing:
- Perplexity Sonar extracts key insights & hooks from dense documents.
- It rewrites text into snappy, viral-friendly scripts (e.g., "Why your Econ textbook is gaslighting you").
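A minimal sketch of this step, assuming Perplexity's OpenAI-compatible chat completions endpoint and its "sonar" model; the prompt wording and function names here are illustrative, not our exact production code:

```typescript
// Illustrative prompt; the real "viral hook" prompt was iterated on heavily.
const HOOK_PROMPT =
  "Rewrite the following text as a 60-second, high-energy TikTok script. " +
  "Open with a viral hook and keep every factual claim intact.";

// Pure helper so the request shape can be inspected and tested.
function buildSonarRequest(document: string) {
  return {
    model: "sonar",
    messages: [
      { role: "system", content: HOOK_PROMPT },
      { role: "user", content: document },
    ],
  };
}

// The actual network call; pass your Perplexity API key in.
async function extractScript(document: string, apiKey: string): Promise<string> {
  const res = await fetch("https://api.perplexity.ai/chat/completions", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify(buildSonarRequest(document)),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}
```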
Voiceover Generation:
- ElevenLabs AI adds charismatic, Gen-Z-style narration with custom voice presets ("Girlboss Gaslighting", "NPC Streamer #57").
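Roughly, that step looks like the sketch below. The preset names come from our UI, but the voice IDs are placeholders (real ElevenLabs voice IDs are account-specific), and this is an assumption-laden sketch rather than our shipped code:

```typescript
// Placeholder voice IDs: swap in real ones from your ElevenLabs account.
const VOICE_PRESETS: Record<string, string> = {
  "Girlboss Gaslighting": "PLACEHOLDER_VOICE_ID_1",
  "NPC Streamer #57": "PLACEHOLDER_VOICE_ID_2",
};

function voiceIdFor(preset: string): string {
  const id = VOICE_PRESETS[preset];
  if (!id) throw new Error(`unknown voice preset: ${preset}`);
  return id;
}

// POST the script to ElevenLabs' text-to-speech endpoint; returns audio bytes.
async function narrate(
  script: string,
  preset: string,
  apiKey: string,
): Promise<ArrayBuffer> {
  const res = await fetch(
    `https://api.elevenlabs.io/v1/text-to-speech/${voiceIdFor(preset)}`,
    {
      method: "POST",
      headers: { "xi-api-key": apiKey, "Content-Type": "application/json" },
      body: JSON.stringify({
        text: script,
        model_id: "eleven_multilingual_v2",
      }),
    },
  );
  return res.arrayBuffer();
}
```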
Video Synthesis:
- Remotion + FFmpeg dynamically syncs captions, visuals, and effects.
- Uses AI meme detection to auto-insert relevant sound effects & text animations.
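As a rough illustration of the caption-syncing idea (a naive stand-in, not our exact timing algorithm): give each caption chunk screen time proportional to its length, then hand the timed chunks to Remotion:

```typescript
interface Caption {
  text: string;
  startSec: number;
  endSec: number;
}

// Naively allocate screen time proportional to each chunk's character count,
// so longer phrases stay on screen longer. Real alignment also needs the
// audio itself; this is just the baseline layout.
function layoutCaptions(chunks: string[], audioSec: number): Caption[] {
  const totalChars = chunks.reduce((n, c) => n + c.length, 0);
  const out: Caption[] = [];
  let t = 0;
  for (const chunk of chunks) {
    const dur = (chunk.length / totalChars) * audioSec;
    out.push({ text: chunk, startSec: t, endSec: t + dur });
    t += dur;
  }
  return out;
}
```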
Bonus: We built an Edge-Cached Pipeline in Next.js, so Rottify runs 3x faster than industry tools.
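A sketch of what such a pipeline entry point can look like, assuming a Next.js App Router route handler on the edge runtime; the FNV-1a cache key and the header values are illustrative choices, not our exact code:

```typescript
export const runtime = "edge"; // Next.js App Router: run this route at the edge

// Hypothetical cache key: FNV-1a hash of preset + source text, so identical
// requests can be served from the CDN cache instead of being regenerated.
function cacheKey(text: string, preset: string): string {
  let h = 0x811c9dc5;
  for (const ch of `${preset}:${text}`) {
    h ^= ch.codePointAt(0)!;
    h = Math.imul(h, 0x01000193) >>> 0;
  }
  return h.toString(16);
}

export async function POST(req: Request): Promise<Response> {
  const { text, preset } = await req.json();
  // In the real pipeline this would kick off (or reuse) video generation.
  return new Response(JSON.stringify({ key: cacheKey(text, preset) }), {
    headers: {
      "Content-Type": "application/json",
      "Cache-Control": "public, s-maxage=86400", // let the CDN reuse results
    },
  });
}
```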
What We Learned
Remotion is underrated. We had zero prior AI/video experience, but in 8 hours, we built a fully automated content pipeline.
AI storytelling is hard. Fine-tuning text-to-video pipelines to not sound robotic took experimentation & memes.
Syncing AI-generated voices to visuals is pain. We learned how to adjust timing algorithms using FFmpeg.
Challenges We Faced
AI Syncing Woes:
- Making AI voiceovers match caption timing required black magic (and subbae lib).
- We hacked FFmpeg scripts to auto-align subtitles to voice pitch changes.
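One trick in that spirit: once the lag between the generated voice and the captions is measured, shifting every SRT timestamp by that offset fixes most of the drift. A hypothetical pure helper (our real pipeline did the alignment through FFmpeg scripting):

```typescript
// Convert "HH:MM:SS,mmm" SRT timestamps to milliseconds and back.
function srtToMs(ts: string): number {
  const [h, m, rest] = ts.split(":");
  const [s, ms] = rest.split(",");
  return ((+h * 60 + +m) * 60 + +s) * 1000 + +ms;
}

function msToSrt(ms: number): string {
  const pad = (n: number, w: number) => String(n).padStart(w, "0");
  const h = Math.floor(ms / 3600000);
  const m = Math.floor((ms % 3600000) / 60000);
  const s = Math.floor((ms % 60000) / 1000);
  return `${pad(h, 2)}:${pad(m, 2)}:${pad(s, 2)},${pad(ms % 1000, 3)}`;
}

// Shift every timestamp in an SRT file by offsetMs (clamped at zero).
function shiftSrt(srt: string, offsetMs: number): string {
  return srt.replace(/\d{2}:\d{2}:\d{2},\d{3}/g, (ts) =>
    msToSrt(Math.max(0, srtToMs(ts) + offsetMs)),
  );
}
```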
Campus Wi-Fi = Pain:
- Training AI models on Wi-Fi slower than a sloth on melatonin was... frustrating.
- We offloaded voice generation to Edge Functions for faster processing.
TikTok Algorithm is a Mystery:
- We tested which video styles got the most engagement.
- Turns out, flashing captions + absurd sound effects = content goldmine.
Let's make the internet smarter by pretending to be dumber.
Rottify Team: Turning Your PDFs into Personality Since 10 AM Today
Built With
- apis
- html
- javascript
- next.js
- tailwindcss
- typescript
