Ghosties Never Die is taking time to produce, as most good things do. I’ve been working extensively on planning, sketching, Blender-building, and updating the wiki, sorting out how this is all going to fit together. It’s satisfying creative work, and I can’t wait to get the “vertical slice” playable demo out into the wild. I think I have something special here to offer, and I’d love to get people playing it.

For now, though, I need to pivot. I’ve done the prep work, I’ve sorted the systems, I’ve nailed down the cast, I’ve built some of the mall in Blender, I’ve figured out a lot of what I want to have done and what I need to do to get there. It’s time to critically look at what AI can do for me.
Short answer: not much. Sure, sure, I’ll use Claude for code, since I’m still not really a programmer, but that’s invisible to the player. Probably. I don’t know, maybe someone somewhere will parse how the ghosts move, have a Eureka moment, and realize that I had AI work on the code. That person will have earned their applause. I simply can’t do the coding myself, so I’m stuck with this, or with hiring a guy. I know a guy. I know lots of guys, actually. I can’t pay them. I’d love to; some of these guys are great to work with. I can’t.
So, what of art? What have I used AI for so far, and what next?
First, most of the images on the wiki were made via generative AI. Specifically, I used Grok to produce my character portraits (Ghosties and the ghosts), as well as some other concept art.

Grok is very stubborn about following directions, blaming the DALL-E engine for some limitations, so I had to shift over to ChatGPT for some larger-scale concept art, like the concept art of the mall itself. For some reason, Grok refused to make the mall look like it was built at the 1200-foot diameter I insist on. You can see this in these two concept pieces, generated from almost exactly the same prompt in two different generators, both using a Blender model I made specifically to force them closer to what I want (though I’ll wind up using that model for the game, so it’s not a loss).


Notice how the sense of scale is significantly different, even if it’s more or less getting the same rough idea? The East Ring Mall is a big place, but Grok simply refused to cooperate.
Perchance.org made some interesting alternatives, but it’s even worse at taking directions than Grok, despite its overall prettiness. And, well, I need these systems to take directions.

AI systems, even the very impressive Meshy.ai, just don’t take directions well. Oh, sure, sometimes they get you 80% of the way there in 5 seconds, which is close to miraculous and fantastic for concept exploration (and for middle management to prompt up some ideas which they can’t normally put into artist language)… but if you want those extra 20 percentage points in the right direction, well, you’re going to have trouble. The time savings you get from the spitball phase running lightning fast are lost (and then some) in trying to get the systems to do precisely what you want them to.
More to the point, though, there’s a huge consideration that has nothing to do with the technical feasibility of the tools. If players were OK with that 80%, with using an AI-generated asset in the final game, I’d be set. We’d be on the Fasttrack Express, running on AI steam, blasting down the rails. And, well, customers simply aren’t. Some call it a witch hunt, some call it just deserts, but whatever the rationale, AI assets in final production releases, even if they look good, trigger what seems to be an autonomic response, a sort of “activist ick” that rapidly metastasizes into shrill denouncements, boycotts, and award clawbacks. If I’m actually going to make money at this, and I kind of need to, I can’t risk that response. I could probably sneak in the “pretty good” assets without almost anyone being the wiser, but if just one player with a bone to pick gets picky, well, the response is a disproportionate downside.
For some sense of what I’ve been able to do with said AI tools, though, let’s take a look at one of the key Ghostie characters. Meshy took Grok character designs I prompted up (including Grok’s “A-pose” for completeness) and made a 500,000 polygon character that’s a great approximation of what I want. It used the source image to apply textures to the mesh, again pretty well.


Meshy then reduced that to 30K and 10K, and was then able to wire in a bipedal skeleton rig. It then applied animations to that rig, allowing me to select from what appeared to be several hundred animations.

I deliberately scoped this project to use only humanoid rigs so I could use such an animation library and transfer those animations between characters. It’s sort of a 3D version of what Fell Seal or Final Fantasy Tactics did, where artists worked with a simple set of baseline character designs and animations, then did variations on a theme to make a vast library of options. It’s a game development shortcut that works well with automated systems. And yet…
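As a concrete sketch of why that shared-skeleton constraint pays off: when every character uses the same bone names, a clip’s rotations transfer directly, and only root motion needs rescaling for a differently proportioned body. The data shapes and the `retarget_clip` helper below are hypothetical, purely an illustration of the idea, not how Meshy or Unreal actually store animation:

```python
# Hypothetical sketch: reusing one animation library across humanoid rigs
# that share bone names. Clips are modeled as {bone: [(time, value), ...]}.

def retarget_clip(clip, source_hip_height, target_hip_height):
    """Reuse a clip on a humanoid rig with different proportions.

    Rotations copy over unchanged because the skeletons share bone names;
    hip translations are scaled by the ratio of hip heights so a short
    Ghostie and a tall ghost cover ground at proportionally correct speeds.
    """
    scale = target_hip_height / source_hip_height
    retargeted = {}
    for bone, keys in clip.items():
        if bone == "hips":
            # Translation keys (time, (x, y, z)): rescale the movement.
            retargeted[bone] = [
                (t, tuple(c * scale for c in pos)) for t, pos in keys
            ]
        else:
            # Rotation keys transfer as-is on a shared skeleton.
            retargeted[bone] = list(keys)
    return retargeted

# A two-key walk cycle on a 1.0 m hip, retargeted onto a 0.5 m hip.
walk = {
    "hips": [(0.0, (0.0, 1.0, 0.0)), (0.5, (0.0, 1.0, 0.6))],
    "spine": [(0.0, (0, 0, 0, 1))],
}
print(retarget_clip(walk, source_hip_height=1.0, target_hip_height=0.5))
```

The real engine-side version of this is what a retargeting system does under the hood; the point is just that a single shared skeleton turns hundreds of stock animations into assets every character can use.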
Well, even the 500,000 polygon model has weird glitches. The 30K and 10K models accentuate those glitches. The UV maps are an absolute mess. I know, I know, players don’t care about UVs, but if I have to go in and fix anything, that layout will be another time sink liability.

The meshes aren’t optimized so much as… bludgeoned.

The 30K and 10K models suddenly stopped inheriting the decent texturing from the high-res model. Granted, Meshy is still a tool in development, so I’ll cut them some slack on that last one, but the bottom line is that the results were only ever “pretty good”, not “actually good in an optimized, useful way”.

So, even if the audience would accept AI assets (and at this point, that seems increasingly unlikely, unless they don’t know they’re AI, but that’s dishonest and incurs greater wrath when the secret gets out, false positives be hanged), I’d still have assets that aren’t optimized and that require tweaking to perfect. I know, I know, optimization in the days of Nanite and oodles of RAM is sort of out of vogue. And yet, there’s the ironically AI-fueled RAMpocalypse undercutting that trend. Some gamers really do care about getting 120 FPS out of their machines. Shipping to the Switch or to tablets means optimization is still actually kind of important, and if you want to reach phones, it’s essential. I’m not sure my game translates all that well to phones, but a decently sized 1080p tablet or a Switch 2 could be a great fit. I really do need optimized assets.
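For a sense of why those targets matter, the frame-budget arithmetic is unforgiving: a 120 FPS target leaves about 8.3 ms to simulate and render each frame, a quarter of what a 30 FPS target allows. A quick sketch of the math:

```python
# Frame-budget arithmetic behind the optimization worry: the time
# available per frame shrinks fast as the FPS target climbs.

def frame_budget_ms(fps):
    """Milliseconds available to simulate and render one frame."""
    return 1000.0 / fps

for target in (30, 60, 120):
    print(f"{target:>3} FPS -> {frame_budget_ms(target):.2f} ms per frame")
```

Every unoptimized mesh or messy UV layout eats into that budget on weaker hardware, which is why a 500K-polygon character that was merely “bludgeoned” down to 30K still isn’t good enough.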
That means I’m going to have to do my own art. I’m going to need to make my own portraits, character models, rigs, animations, environments, particle effects, textures, UI and a bunch of interesting other original, boutique hand-crafted pieces of art. The good news is, I can do that. I actually love doing that. I even like wrangling UVs, making rigs, painting weights, animating and all of those little tasks that nobody actually ever sees and even many fellow artists hate. The bad news is, it takes time.
I don’t actually mind this in principle, since I do want precise control over the creative process here. I knew, going in, that this was likely to be what would need to happen. I am neither surprised nor chagrined. It’s just interesting to me to see what the market is doing, and it makes me ask:
If AI is supposed to be an accelerant, what good does it do if it’s accelerating us face first into a wall?
I don’t need AI to spitball art ideas for me; I can do that in my head. That’s a toy for executives who don’t know how to communicate with artists. It’s a genuinely useful bridge for that purpose, but I don’t need it. I’m running the show here; I don’t have that layer of abstraction and inefficiency. I don’t need AI to make assets I can’t use, even if the technical wrinkles do eventually get ironed out. I don’t need AI to rewrite my script; it always sounds worse when it tries. I don’t need AI to analyze my designs and then hallucinate things that aren’t there. I don’t need the headache that comes with anti-AI activists.
Google’s Gemini is the most hallucination-prone, and it’s really weird to see what it does sometimes. It’s the student who reads the Cliff’s Notes, then makes up weird stuff to fill in the gaps he doesn’t know. DeepSeek is supposedly decent, but I’m not sure I want to lean on China and trust them with my data. (Yeah, yeah, Sam Altman, Zuckerberg, and Elon Musk aren’t paragons of virtue, but China? It’s the student who steals your class notes, passes them off as his own, then screws up the assignment anyway.) ChatGPT is almost as bad, but it tends to understand a little more. It’s the guy who would rather be somewhere else, but he’ll give the task a shot anyway, coasting on mild competency without really trying. Grok is the overachieving engineer, mostly avoiding hallucinations, but fond of restating the text and calling it a summary, then breaking it down mechanically, then eagerly trying to suggest something to change. It’s the valedictorian, itching to fix everything, even if it doesn’t need fixing. Claude is so far the head of the class, even though it’s stuck in brownnose mode and “it’s not this, it’s that” analysis. It’s the cheerleader who actually does its homework and doesn’t hallucinate much, but still doesn’t quite understand the assignment.
I wind up running things like my Combat page through Grok and Claude, asking them to look for problems with engine implementation. That’s why the Combat page is so lengthy: it’s stuffed with corner cases and “prompt-speak” for Claude Code to mull over at some point. The player won’t see, or need to see, probably 90% of that page, but I have to keep the AI coder within bounds and on track. This is actually a useful application of the tech in my project. These things are fast, mostly efficient analytical tools, if you can keep them from hallucinating and take everything they say with a grain of salt.
Elder David A. Bednar has a fantastic talk titled “Things as They Really Are 2.0” that references some of this. His key suggestion is to use these AI beasties as a master uses tools, not to let them actually make decisions or drive the creative process. I do find them useful in that regard: helping me sort out technical glitches, helping me find places where I made typos or left orphan ideas on the wiki, or even chasing down design flaws.
AI tech does have uses. It’s not all downside, and I’ll keep using Claude to sort out my Unreal 5.7 blueprints and Claude Code to do some of the heavy lifting of C++ that I just can’t do. Of course, I have to trust that it’s not as idiotic as its other modules can be sometimes, since I don’t have the requisite expertise to troubleshoot it. Maybe what this actually means is that I also Learn To Code. I don’t even mind that all that much in principle. I love logic, I just hate chasing down semicolons and apostrophes.
Again, though, if I can’t use AI output anyway, since the market will excoriate me for it, well, I’m just going to have to roll up my sleeves and do it myself. I’m OK with that. I just don’t know how to pay the bills in the meantime. I can only hope that once I get this game polished up and ready to play, my hard work will be appreciated, enjoyed, and profitable. I am still dedicated to getting this done in my spare time. I believe that it’s worth doing. Here’s hoping you all like it when it’s in your hands.