Constant - Project Overview
Inspiration
I've been working in AI/ML for many years—most recently as VP of Operations at Mirriad, where I collaborated closely with AI/ML teams on virtual product placement technology. I've watched generative video evolve from curiosity to capability, but 2024-2025 marked a real inflection point. The quality and prompt adherence reached a threshold where telling stories that could genuinely resonate emotionally suddenly felt possible.
I've always been drawn to understated science fiction—films like Gattaca, Never Let Me Go, Children of Men, Her, and Arrival that use speculative futures to examine deeply human questions about connection, memory, and what we leave behind. Shows like Black Mirror and The Three Body Problem explore technology's impact on intimacy and existence, often with devastating clarity. I wanted to work in that tradition—contemplative rather than spectacular, hopeful rather than dystopian.
I'm also fascinated by intergenerational storytelling and narratives that span lifetimes—films like Richard Linklater's Boyhood, The Tree of Life, Magnolia, or Cloud Atlas that trace how relationships and identity evolve across decades. These kinds of stories have historically required either the extraordinary patience of a Linklater or extensive CG/makeup for aging effects. But with generative AI, the ability to realistically depict someone across their entire lifespan suddenly seemed within reach. I wanted to test those boundaries—to see if I could create something that felt emotionally authentic while pushing what was technically achievable with aging and de-aging.
You can read more about my creative process and intentions in my interview with Interalia Magazine.
What it does
"Constant" is a 4.5-minute narrative short film that follows Bailee's lifelong relationship with her AI companion across nearly five decades. With minimal onscreen dialogue but thoughtful voiceover throughout, the film traces their bond from infancy through childhood, adolescence, and into middle age. It asks whether artificial intelligence can genuinely comprehend love, loss, and the bittersweet nature of human impermanence—or whether it simply reflects our desire to be truly known and remembered.
How we built it
The entire film was created using generative AI tools in an iterative, experimental workflow. I developed the narrative structure and visual approach first, then used Midjourney to generate hundreds of character images to establish Bailee's appearance from birth through age 45. Maintaining character consistency was crucial—I created detailed prompt formulas and reference libraries to ensure Bailee remained recognizable across nearly five decades of aging.
Each scene was storyboarded using Midjourney and Adobe Firefly, with refinement through Nanobanana, then animated using Runway Gen-2/Gen-3, Kling AI, and Veo 3. Beyond aging the character, I wanted to show a realistic progression of how we'd interact with AI over time—imagining new features and inputs like sketch recognition, image processing, and voice interaction. I designed distinct UI for each era and reimagined the hardware itself: earbuds and glass tablets that felt authentic, tactile, and plausible as consumer technology evolved.
Voiceover and sound design were generated and refined using ElevenLabs and Suno, with final audio editing in Adobe Audition. Final editing, compositing, color correction, and upscaling were completed in Premiere Pro, After Effects, and Topaz Labs.
Challenges we ran into
Character consistency across age ranges was a primary technical challenge—generative AI models excel at individual images but struggle with continuity. I developed workarounds using seed values, style references, and detailed prompting strategies to maintain Bailee's core features across decades.
Emotional prompting was perhaps even more challenging. At the time I was creating the film, no tool could generate lip-sync that felt natural, could be directed with precision, and conformed to the aesthetic I was pursuing—which made a traditional dialogue-driven narrative practically impossible. Every emotional beat and life moment therefore had to be conveyed through visuals alone. Doing this creatively without resorting to a series of trite tropes required constant refinement and restraint.
I also wanted to set the film specifically in New York, weaving in locations and experiences unique to the city—the subway, Washington Square Park, the intimacy of small apartments, the rhythm of urban life across seasons. Generating imagery that felt authentically New York rather than generically urban required careful art direction and countless iterations.
The third major challenge was narrative compression—telling a 45-year story in under 5 minutes required ruthless editing decisions about which moments would carry the emotional weight of an entire lifetime.
Accomplishments that we're proud of
"Constant" has now been selected for seven international film festivals and entered in the Chroma Awards, validating that AI-generated cinema can compete alongside traditional filmmaking when deployed with intentionality and craft. More personally, I'm proud that the film seems to connect emotionally with audiences—many viewers have shared that it made them reflect on their own relationships with technology and mortality. The technical achievement of maintaining a consistent character across such a wide age range was significant, and the workflow I developed has informed my subsequent projects. Most importantly, I proved to myself that my 20 years in traditional production could translate meaningfully into this new medium.
What we learned
This project taught me that AI filmmaking isn't about letting algorithms create art autonomously—it's about developing new forms of authorship and craft. Every frame required curatorial decisions, artistic judgment, and technical problem-solving that felt remarkably similar to traditional directing, just with different tools. I learned that the technology's limitations often push you toward better creative solutions; character consistency issues forced me to think more carefully about visual symbolism and emotional continuity. I also discovered that audiences are remarkably open to AI-generated storytelling when it serves genuine narrative and emotional purpose rather than existing as a technical demonstration.
What's next for Constant
"Constant" has opened doors I didn't anticipate. The film continues its festival circuit while I'm developing a longer-form expansion that explores the ethical and philosophical questions the short only hints at. I'm also building a body of work around AI-assisted cinema, including my current project "Goodbye", and sharing process videos and techniques through social media channels dedicated to cinematic AI content. Beyond the film itself, "Constant" has become a calling card in my transition from traditional creative operations leadership into hands-on AI filmmaking and creative technology roles. It's proof of concept—both artistically and professionally—that this medium has extraordinary potential when approached with cinematic literacy and emotional intelligence.
Connect with me on LinkedIn to follow my ongoing work in AI filmmaking.
Built With
- after-effects
- audition
- elevenlabs
- firefly
- kling
- midjourney
- nanobanana
- premiere
- runway
- suno
- topaz
- veo3