AI Character Drift Demonstration
I'm Not Myself Anymore
Recently, in the midst of generating some dance sequences for a music video, I thought of a neat little experiment to try. I’ve used the technique before where one generates a video clip and then grabs the last frame of that clip to use as the first frame of the next clip. This can create a longer video segment that appears to be one continuous take, but is actually composed of multiple smaller clips. My experiment was to see just how well this technique holds up when people appear in the clips and no character reference images are used.
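For anyone who wants to automate the grab-last-frame step rather than scrub for the final frame by hand, here is a minimal sketch of how it might be scripted, assuming ffmpeg is installed and on your PATH. The file names and the 0.1-second end offset are illustrative, not part of my actual workflow.

```python
# Sketch of the "grab-last-frame" chaining step, assuming ffmpeg is available.
# Clip and frame file names here are purely illustrative.
import subprocess

def last_frame_cmd(clip_path: str, frame_path: str) -> list[str]:
    """Build the ffmpeg command that saves the final frame of a clip.

    -sseof -0.1 seeks to 0.1 s before the end of the input,
    -frames:v 1 keeps a single video frame, and -update 1 tells
    ffmpeg to write one overwritable image rather than a sequence.
    """
    return [
        "ffmpeg", "-y",
        "-sseof", "-0.1",   # seek relative to the end of the file
        "-i", clip_path,
        "-frames:v", "1",
        "-update", "1",
        frame_path,
    ]

def grab_last_frame(clip_path: str, frame_path: str) -> None:
    """Extract the last frame so it can seed the next generated clip."""
    subprocess.run(last_frame_cmd(clip_path, frame_path), check=True)
```

The extracted image (e.g. `last_frame.png`) then becomes the first-frame input for the next clip in the generator, which is all the chaining technique amounts to.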
The resulting video, I’m Not Myself Anymore (A Character Drift Demonstration Video), premieres at 3 PM Eastern, March 29, 2026, on my Max Gumdrop YouTube channel. Once the video premieres, you can check it out via the embed above.
Character reference images are mandatory when you want a character to keep a consistent appearance at multiple places within an AI video project; the reference image keeps the generator on track. Just how badly can an AI video generator stray from the original appearance of a character when not given a reference image? That’s what my experiment was all about.
If you want the best chance of preserving character appearance using the “grab-last-frame” method, then that last frame needs to clearly display the character’s face. In the absence of character reference images, the only reference the AI video generator has to the character’s likeness is whatever image it’s given to use as its first frame—or an end frame, if you have one. If the supplied frames fail to clearly represent the character, then the AI generator must invent the missing features. And since facial features are typically the main identifying physical characteristics of a human, if those features can be reinvented every five seconds, it only takes one mishap to completely change a character’s screen identity.
In I’m Not Myself Anymore, I didn’t use end frames, since for this demonstration I only ever had the previous clip to work with. In practice, if you don’t have character reference images but you do have images for both the start and end frames of each clip, then you shouldn’t see the kind of character drift demonstrated in this video. But I highly recommend using reference images if you have them, and if your chosen AI video generator supports them. You only need to generate such reference images once, and then you can reuse them for every generated clip that features the character.
In the first half of I’m Not Myself Anymore, I demonstrate character drift when there is no attempt to focus on the character’s face. By the end of the first example, the difference between the character as she started the sequence of clips and as she ends it is extreme.
In the second half of the video, I demonstrate character drift when there is an attempt to focus on the character’s face. I prompted the generator each time to end the clip with the character’s face clearly seen. In this instance, the drift is not as extreme, but it’s still quite noticeable by the end of the video sequence. I think you’ll agree.
Here’s a graphic showing the timeline in Pinnacle Studio 26 Ultimate for I’m Not Myself Anymore. If you know how to decipher the graphic, you’ll see that I used 18 video clips in the first example sequence, and 18 clips in the second example sequence.
Eighteen clips, and the character at the beginning is a complete stranger to the character at the end of each example sequence. Watch the video if you haven’t yet, and you’ll see.
The “grab-last-frame” method of music video generation can work extremely well when character drift isn’t a concern. I created another music video to demonstrate just that. The Existential Dread Technicality (AI Demonstration Film) features my Gary Glitch AI persona, showing him only at the beginning and the end of the short film, at which times I used character reference images for him to nail down his appearance. This video is unlisted at the time of this writing, with no premiere date yet set, so you should be able to watch it via the embed. Once I do set the premiere date for it, it will become unavailable to watch until the premiere date and time arrive.
For those who enjoy seeing the Pinnacle Studio timeline graphics for my videos, here’s the one for this Gary Glitch project.
For this particular video, I employed both music and speech audio clips. I also generated several of the clips with the audio toggle turned on, in case any sound effects were generated that I wanted to use, such as the shattering of glass that occurs about halfway through the video.
I hope you find these two videos interesting, if not informative. You’ll find a bit more info about them by visiting their respective YouTube pages and reading their descriptions, where I identify exactly which AI video generators were used in creating them. Until next time, cheers!