Inspiration
We've all been there: you see a beautiful sunset or a perfect moment, you pull out your phone, and the photo looks... meh. Professional photography is hard—it requires understanding lighting, composition, and complex settings. We wanted to bridge the gap between "point-and-shoot" and professional DSLR photography. We asked ourselves: What if your camera wasn't just a tool, but a teacher? What if it could see what you see and tell you exactly how to make it better before you take the shot?
What it does
PictureThis is an intelligent camera assistant that turns every user into a better photographer.
- Real-Time AI Coaching: Analyzes your viewfinder continuously and gives specific, actionable feedback (e.g., "Move 2 steps left for better symmetry," "Too dark, try using flash").
- Smart Auto-Pilot: The AI can take control of the camera, automatically adjusting zoom, exposure, and flash based on the scene context (Portrait, Landscape, Food, etc.).
- Reference Mode: Upload a photo you admire, and the app guides you to recreate its specific style, lighting, and composition.
- Pro Editing Suite: A custom presets engine built with Skia, allowing real-time, high-performance adjustments (brightness, contrast, saturation) that are applied directly to the final image.
- Location Scout: Integrates with Google Places to discover and navigate to popular, photogenic locations nearby.
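To give a flavor of how coaching feedback and auto-pilot commands could flow from the model into the UI, here is a minimal sketch of a command schema and a defensive parser. The field names (`action`, `value`, `message`) and the shape of the array are illustrative assumptions, not PictureThis's actual schema:

```typescript
// Hypothetical shape of the structured camera commands -- the real
// app's schema may differ; this is only an illustration.
type CameraCommand =
  | { action: "zoom"; value: number }          // e.g. normalized 0..1 zoom
  | { action: "flash"; value: "on" | "off" }
  | { action: "coach"; message: string };      // feedback shown to the user

// Defensively extract JSON from the model's reply: multimodal models
// sometimes wrap their JSON in prose or markdown fences.
function parseCommands(raw: string): CameraCommand[] {
  const match = raw.match(/\[[\s\S]*\]/); // grab the outermost JSON array
  if (!match) return [];
  try {
    const parsed = JSON.parse(match[0]);
    return Array.isArray(parsed)
      ? parsed.filter((c) => typeof c?.action === "string")
      : [];
  } catch {
    return []; // malformed JSON -> no-op rather than crash the camera
  }
}
```

Treating malformed output as "no commands" keeps a flaky model response from ever breaking the live viewfinder.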
How we built it
We built PictureThis using React Native and Expo for a seamless cross-platform experience.
- The Brain: AWS Bedrock (Nova Lite) powers the multimodal AI analysis. We engineered complex prompts that turn the model into a "Photography Instructor" which outputs structured JSON commands to control the camera.
- The Eyes: expo-camera drives the viewfinder, with a custom continuous analysis loop that samples frames and sends them to the cloud without freezing the UI.
- The Engine: For image processing, we moved beyond basic overlays and implemented React Native Skia. This let us build a powerful image processor that applies color matrices for brightness, contrast, and exposure adjustments in real time.
- The Map: We integrated expo-location and the Google Places API to fetch and filter nearby points of interest specifically for photography potential.
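A continuous analysis loop like the one described above can be kept from blocking the UI by never stacking requests: skip a tick while an analysis is still in flight. This is a minimal sketch, assuming `captureFrame` and `analyzeFrame` are stand-ins for the expo-camera capture and Bedrock call (both hypothetical names):

```typescript
// Non-blocking analysis loop sketch. captureFrame/analyzeFrame are
// placeholders for the camera snapshot and the cloud AI request.
type Analyzer = (frame: string) => Promise<string>;

function startAnalysisLoop(
  captureFrame: () => string,
  analyzeFrame: Analyzer,
  onFeedback: (feedback: string) => void,
  intervalMs = 2000,
): () => void {
  let inFlight = false; // guard: skip ticks while a request is pending
  const timer = setInterval(async () => {
    if (inFlight) return;
    inFlight = true;
    try {
      onFeedback(await analyzeFrame(captureFrame()));
    } catch {
      // swallow transient network errors; the next tick retries
    } finally {
      inFlight = false;
    }
  }, intervalMs);
  return () => clearInterval(timer); // call the returned function to stop
}
```

Because each tick bails out if a request is pending, a slow round trip to the model lengthens the effective sampling interval instead of queueing work and freezing the viewfinder.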
Challenges we ran into
- Real-Time Latency: Getting the AI to analyze an image and return feedback quickly enough to be useful was tough. We had to optimize image compression and prompt tokens to reduce lag.
- Native Module Hell: We faced significant hurdles with native dependencies, specifically getting expo-location and Skia to play nicely together in the iOS build. We had to debug complex Xcode build errors, provisioning profile conflicts, and stale cache issues.
- Image Processing Limitations: We initially tried expo-image-manipulator, but realized it couldn't handle basic adjustments like brightness or contrast. We had to pivot and write a custom Skia-based image processor from scratch to get professional-grade editing.
Accomplishments that we're proud of
- The "Magic" Factor: Seeing the AI automatically zoom in and turn on the torch when it detects a dark, distant subject feels like magic.
- Reference Mode: We're particularly proud of the reference photo feature: it's a unique tool that actually helps users learn style, not just mechanics.
- TestFlight Deployment: Despite the build errors, we successfully navigated the Apple Developer ecosystem, fixed our signing issues, and got a stable build deployed to TestFlight.
- Custom UI: We built a beautiful, dark-mode interface with smooth animations (using react-native-reanimated) that feels like a premium native app.
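The color-matrix approach behind a custom Skia image processor can be sketched in a few lines. Skia color filters take a 4x5 row-major matrix (20 numbers); the builder below composes contrast, brightness, and saturation into one matrix using the common Rec. 709 luma weights. The exact presets and conventions in the app may differ; this is an illustration of the technique, not the production code:

```typescript
// Build a 4x5 (20-element, row-major) color matrix combining
// brightness (additive offset), contrast (scale around mid-gray),
// and saturation (blend toward Rec. 709 luma). Illustrative only.
function adjustmentMatrix(
  brightness: number, // 0 = unchanged
  contrast: number,   // 1 = unchanged
  saturation: number, // 1 = unchanged, 0 = grayscale
): number[] {
  const t = 0.5 * (1 - contrast) + brightness; // offset term
  const [lr, lg, lb] = [0.2126, 0.7152, 0.0722]; // luma weights
  const s = saturation;
  const sr = (1 - s) * lr, sg = (1 - s) * lg, sb = (1 - s) * lb;
  const c = contrast;
  return [
    c * (sr + s), c * sg,       c * sb,       0, t,
    c * sr,       c * (sg + s), c * sb,       0, t,
    c * sr,       c * sg,       c * (sb + s), 0, t,
    0,            0,            0,            1, 0,
  ];
}

// Apply the matrix to one normalized RGBA pixel (handy for testing;
// in the app, Skia applies the matrix on the GPU instead).
function applyMatrix(m: number[], [r, g, b, a]: number[]): number[] {
  const row = (i: number) =>
    m[i] * r + m[i + 1] * g + m[i + 2] * b + m[i + 3] * a + m[i + 4];
  return [row(0), row(5), row(10), row(15)];
}
```

With defaults (`brightness = 0`, `contrast = 1`, `saturation = 1`) the matrix reduces to the identity, which makes the builder easy to sanity-check.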
What we learned
- AI is a UI Component: We learned that AI shouldn't just be a chatbot; it can be an invisible engine that drives UI elements (like focus indicators and zoom sliders).
- Native Builds are Critical: You can't rely on Expo Go forever. Moving to development builds early gave us access to powerful native libraries like Skia and location services.
- Prompt Engineering is Coding: Tweaking the system prompt to stop the AI from being "chatty" and instead act like a technical camera controller was a development process in itself.
What's next for PictureThis
- AR Composition Guides: Overlay lines (Rule of Thirds, Golden Ratio) directly on the screen based on where the AI thinks the subject should be.
- Gamification: Daily photo challenges ("Capture a red object," "Find a shadow pattern") to encourage users to practice.
- Community Feed: A space to share "Before & After" shots to show off how the AI helped improve the final image.
- Offline Mode: A smaller, on-device model for basic composition feedback when internet isn't available.
Built With
- amazon-web-services
- cocoapods
- eas
- nova-lite
- npm
- react-native
- testflight
- typescript