Inspiration

Online shopping can be frustrating. You see outfits styled perfectly on models who don’t look like you, in lighting that feels unrealistic, and in proportions that don’t match your body. It becomes hard to imagine how something will actually look on you, which often leads to hesitation, wasted money, and frequent returns.

We wanted to close the gap between inspiration and reality. Instead of guessing how something might look, we thought: what if you could instantly see it on yourself? That idea became Fitted. If you see it on them, you should be able to see it on you.

What it does

Fitted is a Chrome extension that lets users virtually try on outfits directly from Pinterest. After uploading a full-body reference photo once, users can browse Pinterest normally. When they find an outfit they like, they simply click “Try On,” and within seconds, Fitted generates a photorealistic image of them wearing that outfit.

Users can also modify the outfit using natural language. For example, they can say “make it blue,” “add a jacket,” or “swap the pants,” and the system regenerates an updated version. Each change builds on the previous result, creating an experience that feels like a digital dress-up game powered by AI. Instead of just browsing, users actively experiment with style before committing to a purchase.
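One way to make each change build on the previous result is to keep the running edit history as session state and fold it into the next generation prompt. The sketch below is a minimal stdlib-only illustration of that idea — the class and prompt format are hypothetical, not our actual implementation:

```python
from dataclasses import dataclass, field

@dataclass
class TryOnSession:
    """Tracks one user's try-on conversation so each edit builds on the last."""
    base_description: str                     # outfit description from the analysis step
    edits: list[str] = field(default_factory=list)

    def apply_edit(self, instruction: str) -> str:
        """Record an edit like 'make it blue' and return the prompt to regenerate with."""
        self.edits.append(instruction)
        # Fold the full edit history into one prompt so the image model sees
        # the cumulative outfit state, not just the latest change.
        return self.base_description + "; then " + "; then ".join(self.edits)

session = TryOnSession("navy blazer with white trousers")
session.apply_edit("make it blue")
prompt = session.apply_edit("add a jacket")
# prompt: "navy blazer with white trousers; then make it blue; then add a jacket"
```

Keeping edits as an ordered list (rather than overwriting the description) preserves the order-dependent, dress-up-game feel described above.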

How we built it

We built Fitted as a full-stack AI-powered system. The frontend is a Chrome Extension (Manifest V3) with a side panel interface designed in Figma and implemented using vanilla JavaScript. The extension injects a content script into Pinterest to detect outfit images and trigger the try-on workflow.

Our backend is built with FastAPI and runs with uvicorn. We use async HTTP requests with httpx, Pillow for image processing, and rembg (u2net) for background removal.
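An async backend like this can overlap independent I/O work, such as downloading the Pinterest image while loading the user's stored reference photo. The sketch below uses stdlib `asyncio` only, with stub coroutines (and a made-up pin URL) standing in for the real httpx download and Pillow preprocessing:

```python
import asyncio

async def fetch_outfit_image(url: str) -> bytes:
    # Stand-in for the httpx download of the Pinterest image.
    await asyncio.sleep(0.01)
    return b"outfit-bytes"

async def load_reference_photo(user_id: str) -> bytes:
    # Stand-in for loading and resizing the stored reference photo with Pillow.
    await asyncio.sleep(0.01)
    return b"reference-bytes"

async def prepare_inputs(url: str, user_id: str) -> list[bytes]:
    # Run the two independent I/O steps concurrently instead of sequentially.
    return await asyncio.gather(fetch_outfit_image(url), load_reference_photo(user_id))

outfit, reference = asyncio.run(prepare_inputs("https://pinterest.com/pin/123", "user-1"))
```

In a FastAPI route, the same `await asyncio.gather(...)` pattern keeps the event loop free for other requests while both fetches are in flight.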

The AI pipeline has four main steps. First, Gemini 2.5 Flash analyzes the Pinterest image and generates a structured description of the outfit, including color, fit, and style details. Then, FLUX.2 Pro (via Replicate) takes the user’s reference photo and the generated description to produce a photorealistic image of the user wearing the outfit. After generation, we remove the background to produce a clean result. Finally, we support conversational updates so users can iteratively modify their outfit while maintaining session context.
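A structured description like the one produced in the first step can be held in a small schema before it reaches the image model. The fields and prompt format below are a hypothetical sketch of what such a structure might look like, not the exact schema we use:

```python
from dataclasses import dataclass

@dataclass
class OutfitDescription:
    """Hypothetical schema for the analysis step's output (color, fit, style)."""
    garments: list[str]   # e.g. ["denim jacket", "midi dress"]
    colors: list[str]     # dominant colors in the outfit
    fit: str              # e.g. "oversized", "tailored"
    style: str            # e.g. "streetwear", "minimalist"

    def to_prompt(self) -> str:
        """Flatten the structured fields into a generation prompt for the image model."""
        return (f"{self.fit} {self.style} outfit: "
                + ", ".join(self.garments)
                + " in " + "/".join(self.colors))

desc = OutfitDescription(["denim jacket", "midi dress"], ["blue", "white"],
                         "oversized", "casual")
```

Validating against a schema like this makes the handoff between the analysis model and the generation model explicit, so either side can be swapped out independently.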

We structured everything as a classify → generate → post-process → chat pipeline to keep the system modular and scalable.
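The staged design can be sketched as a chain of small functions, each stage a swappable unit. The bodies below are stubs standing in for the real Gemini, Replicate, and rembg calls — only the shape of the pipeline is the point:

```python
def classify(pin_image: bytes) -> str:
    # Stand-in for the Gemini step that describes the outfit.
    return "description"

def generate(reference: bytes, description: str) -> bytes:
    # Stand-in for the FLUX generation via Replicate.
    return b"raw:" + description.encode()

def post_process(image: bytes) -> bytes:
    # Stand-in for rembg background removal.
    return image.replace(b"raw:", b"clean:")

def try_on(pin_image: bytes, reference: bytes) -> bytes:
    # classify -> generate -> post-process; the chat step re-enters at generate
    # with an updated description while session context is preserved.
    return post_process(generate(reference, classify(pin_image)))

result = try_on(b"pin", b"ref")
```

Because each stage only depends on the previous stage's output, any one of them can be replaced (a different classifier, a different image model) without touching the rest.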

Challenges we ran into

One major challenge was maintaining facial consistency across multiple generations. Each time we regenerated an image with modifications, there was slight degradation in facial features due to model constraints.

We also had to manage latency, since each try-on takes around 12 seconds to complete. Balancing realism, cost, and speed was an ongoing trade-off, especially since each generation costs approximately $0.08.

On the frontend side, managing state across the Chrome extension side panel and ensuring smooth communication with the FastAPI backend required careful debugging. Prompt engineering was another challenge, as small wording changes significantly impacted output quality.

Accomplishments that we're proud of

We are proud that we built a complete end-to-end AI pipeline within a hackathon timeframe. Integrating multiple external AI services into one cohesive workflow was a big milestone for us.

We’re also proud that Fitted feels like a real product. The side panel experience is clean and intuitive, and the conversational outfit modification makes the experience interactive rather than static.

Most importantly, we designed Fitted with accessibility in mind. Many people face barriers when using traditional fitting rooms, whether due to mobility limitations, sensory sensitivities, social anxiety, or geographic access. Fitted brings the fitting room to the user.

What we learned

We learned how to architect a modular AI pipeline that integrates multiple model providers. We gained experience building and debugging a Chrome extension that communicates with a backend API.

We also learned how sensitive generative models are to prompt structure, and how iterative regeneration affects image consistency. Managing cost-performance trade-offs in real time was another valuable lesson.

Overall, we learned how to turn an ambitious AI concept into a working product within a limited timeframe.

What's next for Fitted

Next, we want to add user accounts and persistent try-on history so users can save and revisit past outfits. We also plan to deploy the backend to the cloud to improve scalability and reduce local setup friction.

In the future, we want to fine-tune models to better represent diverse body types and support seated poses, prosthetics, and other underrepresented forms in fashion technology. We also aim to expand beyond Pinterest to other retail platforms like ASOS, Zara, and H&M.

Long term, we see Fitted evolving into a full digital styling studio where inspiration, experimentation, and purchasing all happen in one seamless experience.
