Inspiration
The idea for CompliGen was born from a frustrating reality: Creativity is fast, but Compliance is slow.
We realized that while Generative AI (like Midjourney or DALL-E) has democratized high-fidelity design, it has completely ignored the boring, rigid rules of retail. We saw brands generating stunning images that were immediately rejected by Amazon or TikTok because a platform UI button covered the logo, or because text landed in a "danger zone."
We learned that 65% of retail media assets are rejected on the first submission, costing millions in wasted ad spend and designer hours. We wanted to build a bridge between the wild creativity of AI and the strict reality of retail guidelines. We didn't just want to make pretty pictures; we wanted to make assets that actually ship.
What it does
CompliGen is an end-to-end creative suite that "bakes" compliance into the design process.
Context-Aware Generation: You enter a product (e.g., "Gold Watch"), and CompliGen automatically detects the category (Luxury/Jewelry) to apply the correct visual tone.
The Compliance Layer: It overlays platform-specific "Safe Zones" (for TikTok, Amazon, Instagram) directly onto the canvas, ensuring no critical elements are placed where UI buttons or notches would cover them.
Instant Adaptation: It takes a single master creative and intelligently resizes it for different formats (9:16 Story, 4:5 Feed, 16:9 Banner) without losing the focal point.
Nano Banana Mode: A unique "simulation engine" we built specifically for this hackathon. It allows users to experience the full app workflow instantly without hitting paid APIs or waiting for GPU cold starts.
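The Compliance Layer's "Safe Zone" check boils down to simple geometry. Here is a minimal sketch of how platform-specific zones could be modeled, assuming fractional coordinates; the type names, zone values, and platform list are illustrative, not CompliGen's actual data:

```typescript
// Illustrative inset values (fractions of canvas size) per platform;
// real platform specs differ and change over time.
type Insets = { top: number; bottom: number; left: number; right: number };
type Box = { x: number; y: number; w: number; h: number }; // fractions of canvas

const SAFE_ZONES: Record<string, Insets> = {
  tiktok:    { top: 0.12, bottom: 0.20, left: 0.05, right: 0.18 }, // UI buttons hug the right edge
  instagram: { top: 0.14, bottom: 0.14, left: 0.05, right: 0.05 },
  amazon:    { top: 0.05, bottom: 0.05, left: 0.05, right: 0.05 },
};

// A critical element (logo, text) is compliant if its box lies entirely
// inside the inner rectangle left over after subtracting the insets.
function isCompliant(platform: string, box: Box): boolean {
  const z = SAFE_ZONES[platform];
  return (
    box.x >= z.left &&
    box.y >= z.top &&
    box.x + box.w <= 1 - z.right &&
    box.y + box.h <= 1 - z.bottom
  );
}
```

A box hugging the right edge of a TikTok canvas would fail this check (the follow/like/share buttons live there), which is exactly the kind of rejection the overlay makes visible before submission.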
How we built it
We built CompliGen on a modern, serverless stack designed for speed and interactivity:
Frontend: We used Next.js 14 (App Router) for the framework, leveraging Server Actions for seamless data mutation. The UI is crafted with React and Tailwind CSS, using Framer Motion for the high-end, "magical" transitions.
Visual Engine: To make the experience immersive, we wrote custom WebGL shaders (the "Vortex" background) that react to user input.
AI Orchestration: We built a "Router" architecture. For live demos, it connects to OpenAI's DALL-E 3 for generation. For the "Nano Banana" simulation, it routes to a deterministic engine that maps keywords to pre-rendered, high-fidelity assets to ensure the demo never fails.
Compliance Logic: We hard-coded the UI overlays for major platforms (TikTok, Amazon) using absolute positioning and z-index layers to mimic the actual mobile viewports of these apps.
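The "Router" idea from the AI Orchestration layer can be sketched in a few lines. Everything here is an assumption about shape, not CompliGen's actual code: the function name, the preset map, and the asset paths are hypothetical, and the live DALL-E 3 branch is omitted:

```typescript
// Hypothetical router: choose a backend per request. In simulation mode,
// keywords map deterministically to pre-rendered assets so demos never fail.
type GenResult = { url: string; source: "dalle" | "simulation" };

const PRESETS: Record<string, string> = {
  watch: "/assets/gold-watch-hero.png",
  sneaker: "/assets/sneaker-hero.png",
};

async function generateImage(prompt: string, simulate: boolean): Promise<GenResult> {
  if (simulate) {
    const key = Object.keys(PRESETS).find((k) => prompt.toLowerCase().includes(k));
    return { url: PRESETS[key ?? "watch"], source: "simulation" };
  }
  // Live path would call DALL-E 3 here (omitted in this sketch).
  throw new Error("live generation not shown in this sketch");
}
```

The key design choice is that the simulation branch is a pure function of the prompt: same input, same output, zero network calls.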
Challenges we ran into
The "Safe Zone" Paradox: We initially tried to get the AI to draw around the safe zones, but DALL-E is non-deterministic and often ignored our spatial instructions. We solved this by separating the concerns: The AI handles the art, and our deterministic Compliance Layer handles the rules via CSS overlays and smart cropping.
Performance vs. Visuals: Running heavy WebGL shaders (the background vortex) alongside high-res image manipulation caused frame drops on lower-end devices. We had to optimize the shader complexity and use Next.js image optimization to keep the app buttery smooth.
The Demo Curse: We were worried about API rate limits or latency during the live judging. This forced us to invent the "Nano Banana" mode—a fail-safe simulation layer that ensures the app works perfectly even if the internet is spotty or API keys expire.
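The "smart cropping" half of that separation of concerns is pure math: given a focal point and a target aspect ratio, find the largest crop window that keeps the focal point as centered as possible. A minimal sketch, with hypothetical names and no claim to matching CompliGen's exact implementation:

```typescript
// Compute the largest crop of (srcW x srcH) with the given aspect ratio
// (width / height, e.g. 9/16), centered on the focal point and clamped
// to the image bounds.
type Crop = { x: number; y: number; w: number; h: number };

function smartCrop(
  srcW: number, srcH: number,
  focalX: number, focalY: number, // focal point in pixels
  targetRatio: number,
): Crop {
  let w = srcW, h = srcH;
  if (srcW / srcH > targetRatio) {
    w = Math.round(srcH * targetRatio); // source too wide: trim width
  } else {
    h = Math.round(srcW / targetRatio); // source too tall: trim height
  }
  const x = Math.min(Math.max(focalX - w / 2, 0), srcW - w);
  const y = Math.min(Math.max(focalY - h / 2, 0), srcH - h);
  return { x: Math.round(x), y: Math.round(y), w, h };
}
```

Because this is deterministic, the same master creative always yields the same 9:16, 4:5, and 16:9 crops, which is what lets the rule-checking layer stay out of the non-deterministic AI's way.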
Accomplishments that we're proud of
The "Nano Banana" Mode: It sounds funny, but building a robust simulation mode that mimics the AI's behavior was a huge technical win. It guarantees zero downtime during judging.
Interactive UX: We're really proud of the frontend polish. The way the interface transitions from the "Pain Simulator" (showing rejection red zones) to the "Hero Studio" feels like a professional SaaS product, not just a hackathon prototype.
Solving a Real Problem: This isn't just a wrapper around ChatGPT. It addresses a specific, high-value B2B pain point that affects every major brand advertising on social media today.
What we learned
Compliance is a UI/UX Challenge: We learned that "compliance" isn't just about legal text; it's about spatial awareness. Showing a user why their ad would fail (visually) is infinitely more powerful than telling them with text.
Next.js Server Actions: We deepened our understanding of Next.js 14's Server Actions, using them to handle the logic between our simulation engine and the real AI APIs without exposing keys to the client.
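The key-hiding pattern described above can be sketched as a Server Action. This is an illustrative shape only: the action name and return payload are assumptions, though the OpenAI endpoint and request body follow the documented Images API:

```typescript
// app/actions.ts (hypothetical) — the "use server" directive means this
// function only ever executes on the server, so the API key in
// process.env never reaches the client bundle.
"use server";

export async function generateCreative(prompt: string, simulate: boolean) {
  if (simulate) {
    // Nano Banana path: no network, no key needed.
    return { url: "/assets/demo.png" };
  }
  const res = await fetch("https://api.openai.com/v1/images/generations", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ model: "dall-e-3", prompt, n: 1, size: "1024x1024" }),
  });
  const data = await res.json();
  return { url: data.data[0].url as string };
}
```

The client component just calls `generateCreative(...)` like a local async function; Next.js handles the RPC boundary, so no `/api` route or exposed key is needed.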
What's next for CompliGen
Direct API Integration: We plan to integrate directly with the Amazon Ads API and TikTok Marketing API so users can push their approved creatives directly to their ad campaigns from the dashboard.
Video Compliance: Expanding the engine to analyze video frames for "flashy content" warnings and safe zone violations in motion.
Enterprise SSO: Adding team collaboration features so legal teams can pre-approve "Safe Zone" templates for their designers.
Built With
- nextjs
- openai
- tailwind
- typescript