Running Sevantia, our AI chat product, I watched the same 2,800-token system prompt get re-billed on every single user message. Multiply that by thousands of conversations a day and you're paying Anthropic to re-read the same instructions into Claude's context over and over. Prompt caching fixes this. It took me longer than it should have to get the mental model right, and the docs skip over the parts that trip up most people.
This is the guide I wish I'd had when I first enabled it.
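To make the core idea concrete before diving in: caching works by marking the stable prefix of your request (here, the big system prompt) with a cache_control block, so Anthropic stores it after the first call and reuses it on later ones. This is a minimal sketch of the request shape using raw curl; the model name, prompt variables, and env var name are placeholders, not the exact setup from Sevantia.

```php
<?php
// Sketch: Anthropic Messages API call with prompt caching enabled.
// The large, stable system prompt is marked with cache_control so it is
// written to the cache on the first request and read from it afterwards.

$longSystemPrompt = '...your 2,800-token instruction block...'; // placeholder
$userMessage      = 'Hello';                                    // placeholder

$payload = [
    'model'      => 'claude-sonnet-4-5', // placeholder model name
    'max_tokens' => 1024,
    'system'     => [
        [
            'type' => 'text',
            'text' => $longSystemPrompt,
            // Everything up to and including this block becomes the cached prefix.
            'cache_control' => ['type' => 'ephemeral'],
        ],
    ],
    'messages' => [
        ['role' => 'user', 'content' => $userMessage],
    ],
];

$ch = curl_init('https://api.anthropic.com/v1/messages');
curl_setopt_array($ch, [
    CURLOPT_RETURNTRANSFER => true,
    CURLOPT_POST           => true,
    CURLOPT_HTTPHEADER     => [
        'x-api-key: ' . getenv('ANTHROPIC_API_KEY'),
        'anthropic-version: 2023-06-01',
        'content-type: application/json',
    ],
    CURLOPT_POSTFIELDS => json_encode($payload),
]);
$response = json_decode(curl_exec($ch), true);
```

The usage block in the response reports cache creation and cache read token counts separately from regular input tokens, which is how you verify the cache is actually being hit.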
"But it worked locally."

The Laravel scheduler is the duct tape of every production app I've worked on. It triggers the reports, cleans up expired data, pings the health checks, rotates the tokens, sends the digests. It works quietly for weeks, then one morning the team realises nothing has run for six days and nobody got an alert.
If you've already read my queue worker version of this list, the scheduler has its own set of silent failure modes. Every one below cost me or a client real money or real trust. None of them show up in php artisan schedule:list. All of them are avoidable once you know where to look.
Examples target Laravel 11 and later, where scheduled tasks are defined in routes/console.php and application wiring lives in bootstrap/app.php. On Laravel 10 and earlier, the same calls live in the schedule() method of App\Console\Kernel.
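For orientation, this is what those definitions look like in routes/console.php on Laravel 11. The command names are illustrative, not from a real app; the onFailure hook is included because silent failure is exactly the theme here.

```php
<?php
// routes/console.php (Laravel 11+). On Laravel 10 and earlier, the same
// definitions go inside App\Console\Kernel::schedule() via the $schedule object.

use Illuminate\Support\Facades\Log;
use Illuminate\Support\Facades\Schedule;

// Illustrative command names; swap in your own artisan commands.
Schedule::command('reports:generate')->dailyAt('06:00');

Schedule::command('tokens:rotate')
    ->weekly()
    // Without a failure hook, a crashing command fails silently.
    ->onFailure(fn () => Log::error('Scheduled tokens:rotate failed'));
```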
In a previous post, I listed "your job runs before the data exists" as one of five queue failures that only show up in production. I showed the afterCommit() fix and moved on.
But that post didn't answer the question that kept nagging me: why do your tests always pass when this bug is sitting right in your code? And where else in your app is this same timing problem hiding, beyond direct dispatch() calls?
This post digs into both.
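For readers who missed the earlier post, this is the fix in question, sketched with placeholder model and job names: afterCommit() holds the job until the surrounding database transaction commits, so a fast worker can't grab it before the row it depends on exists.

```php
<?php
// Sketch of the afterCommit() fix. Order and ProcessOrder are placeholders.

use App\Jobs\ProcessOrder;
use App\Models\Order;
use Illuminate\Support\Facades\DB;

DB::transaction(function () {
    $order = Order::create([/* ... */]);

    // Without afterCommit(), a worker can pick up this job before the
    // transaction commits and find no Order row in the database.
    ProcessOrder::dispatch($order)->afterCommit();
});
```

The alternative is setting public $afterCommit = true on the job class, or flipping after_commit to true in your queue connection config so every dispatch behaves this way by default.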
When a client asks me to look at their Laravel application, I don't start by reading code. I run a specific sequence of checks that tells me more in 30 minutes than reading source files for a full day would.
This isn't a full audit. It's triage. After working with Laravel for over a decade, I've found that a handful of structural checks reveal 80% of the problems. The patterns are surprisingly consistent.
Here's the exact process I follow.
Your queries work fine in development. Fifty rows, instant responses, no complaints. Then you deploy to production, real data piles up, and a page that loaded in 200ms takes four seconds.
The problem usually isn't your Eloquent code. It's the indexes you forgot to add.
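The fix usually lands in a migration. This is a generic sketch; the table, columns, and query it serves are illustrative, and the real rule is to index what your WHERE and ORDER BY clauses actually use.

```php
<?php
// Sketch of a migration adding a missing composite index.
// Table and column names are illustrative.

use Illuminate\Database\Migrations\Migration;
use Illuminate\Database\Schema\Blueprint;
use Illuminate\Support\Facades\Schema;

return new class extends Migration {
    public function up(): void
    {
        Schema::table('orders', function (Blueprint $table) {
            // Matches a common query shape:
            // WHERE user_id = ? AND status = ? ORDER BY created_at DESC
            $table->index(['user_id', 'status', 'created_at']);
        });
    }

    public function down(): void
    {
        Schema::table('orders', function (Blueprint $table) {
            $table->dropIndex(['user_id', 'status', 'created_at']);
        });
    }
};
```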
Every Laravel developer has seen the MCP hype by now. Laravel's tagline changed to "The clean stack for Artisans and agents." The laravel/mcp package landed, tutorials appeared overnight, and suddenly everyone's building weather tools.
But those tutorials stop right where real questions start. How do you authenticate AI clients hitting your server? What happens when an agent sends garbage to a tool that writes to your database? Should you even use MCP, or is your REST API already enough?
I've been digging into this for a project, and these are the decisions and patterns I wish someone had documented before I started.
Your queue works perfectly locally. Every job dispatches, processes, and completes without a hitch. Then you deploy to production with real traffic, real concurrency, and real third-party APIs, and things start breaking in ways your test suite never predicted.
I've been running Laravel queues in production for years across multiple applications. Every failure on this list caught me off guard at least once. Not because the documentation doesn't cover them, but because you don't think about them until they bite you at 2 AM.
I had a Laravel project designed to be white-labeled. Swap an env file, change the domain, and the whole application launches as a completely different product.
The problem was the Markdown files.
If you've used Laravel Scout on a project with more than a couple of searchable models, you know the drill. You change something in your toSearchableArray(), and now you need to flush and reimport every model's index. Two commands per model. Manually. One at a time.
On a project with ten searchable models, that's twenty commands to rebuild your search indexes.
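Spelled out, the manual drill looks like this; the model names are illustrative.

```shell
# Two commands per searchable model, run by hand, one at a time:
php artisan scout:flush "App\Models\Post"
php artisan scout:import "App\Models\Post"

php artisan scout:flush "App\Models\Comment"
php artisan scout:import "App\Models\Comment"
# ...repeated for every remaining searchable model.
```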
In May 2024, I published the first version of anthropic-php, a PHP client for Anthropic's Claude API. Nearly two years and 370,000+ downloads later, it's the most installed dedicated Anthropic SDK in the PHP ecosystem. This is the story of why I built it, what it can do, and why I think packages born from real necessity tend to be the good ones.
I've been deploying Laravel applications for over a decade. Started with custom bash scripts. Moved to Laravel Envoy. Then Laravel Envoyer, which worked great for a long time. Eventually moved away from it to Ploi.io so I could manage deployments and servers in one place.
Each tool taught me something the previous one didn't. And the biggest lesson wasn't about tools at all. It was about the small practices around deployment that separate "mostly works" from "actually works every time."
Zero-downtime deployment is widely available now. Most hosting panels and deployment services support it out of the box. But to get the most out of it, there are practices you need to get right. That's what this post covers.
If you've run a WooCommerce store long enough, you've felt the slowdown. Pages take longer to load. The admin panel becomes sluggish. Checkout starts timing out during traffic spikes. You throw caching at it, upgrade your hosting, install optimization plugins. It helps for a while. Then it doesn't.
I've been building with PHP and Laravel for over ten years, and I've migrated production WooCommerce stores to custom Laravel applications. Not as a theoretical exercise, but because the stores hit a wall that no amount of WordPress optimization could fix.
Here's what that process actually looks like, including the parts the generic migration guides conveniently skip.
Eighteen months ago, using Claude or GPT from a Laravel app meant writing raw HTTP calls or using whatever half-maintained wrapper you could find on Packagist. Today there are four solid options, all actively maintained, all with real download numbers behind them.
I maintain one of them. anthropic-php and its Laravel companion anthropic-laravel have a combined 597,000 installs on Packagist. I've been building and shipping AI features in PHP since before most of these packages existed. So when people ask me "which one should I use?", I have opinions. They might surprise you.
Here's the honest breakdown.
In late 2023, I was working on a large Laravel project and hit a point where the app/ directory had become a mess. Models, controllers, services, policies, listeners, jobs, and notifications were all thrown into flat directories with no logical grouping. Everything related to billing lived next to everything related to user management. Finding anything required memorizing file names instead of following a structure.
I knew the answer was modular architecture. Break the application into self-contained modules where each module owns its models, controllers, migrations, routes, views, and everything else related to that feature. I'd seen this pattern work in other frameworks and it made perfect sense for Laravel.
So I looked at what was available. I tried several packages. None of them clicked.
I started building my own solution inside that project. As it grew and I wanted the same structure in my other projects, I extracted it into an open-source package: mozex/laravel-modules.
When Pest v4 dropped, one feature got my attention right away: test sharding. My CI pipelines were painfully slow, and splitting tests across parallel GitHub Actions runners seemed like the obvious fix.
I upgraded, configured the shards, pushed to CI, and waited.
Every shard ran every test. Shard 1 of 4? Full suite. Shard 3 of 4? Full suite. My "optimization" made things four times worse.