Every Laravel developer has seen the MCP hype by now. Laravel's tagline changed to "The clean stack for Artisans and agents." The laravel/mcp package landed, tutorials appeared overnight, and suddenly everyone's building weather tools.
But those tutorials stop right where real questions start. How do you authenticate AI clients hitting your server? What happens when an agent sends garbage to a tool that writes to your database? Should you even use MCP, or is your REST API already enough?
I've been digging into this for a project, and these are the decisions and patterns I wish someone had documented before I started.
Your queue works perfectly in local development. Every job dispatches, processes, and completes without a hitch. Then you deploy to production with real traffic, real concurrency, and real third-party APIs, and things start breaking in ways your test suite never predicted.
I've been running Laravel queues in production for years across multiple applications. Every failure on this list caught me off guard at least once. Not because the documentation doesn't cover them, but because you don't think about them until they bite you at 2 AM.
I had a Laravel project designed to be white-labeled. Swap an env file, change the domain, and the whole application launches as a completely different product.
The problem was the Markdown files.
If you've used Laravel Scout on a project with more than a couple of searchable models, you know the drill. You change something in your toSearchableArray(), and now you need to flush and reimport every model's index. Two commands per model. Manually. One at a time.
On a project with ten searchable models, that's twenty commands to rebuild your search indexes.
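For anyone who hasn't run the cycle, it looks something like this (the model names here are illustrative, not from any particular project):

```shell
# Rebuilding one model's index takes two Artisan commands:
php artisan scout:flush "App\Models\Post"
php artisan scout:import "App\Models\Post"

# Then the same pair again for the next searchable model...
php artisan scout:flush "App\Models\User"
php artisan scout:import "App\Models\User"
# ...and so on, for every remaining model.
```

Ten models, two commands each: twenty invocations every time toSearchableArray() changes.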
In May 2024, I published the first version of anthropic-php, a PHP client for Anthropic's Claude API. Nearly two years and 370,000+ downloads later, it's the most installed dedicated Anthropic SDK in the PHP ecosystem. This is the story of why I built it, what it can do, and why I think packages born from real necessity tend to be the good ones.
I've been deploying Laravel applications for over a decade. Started with custom bash scripts. Moved to Laravel Envoy. Then Laravel Envoyer, which worked great for a long time. Eventually moved away from it to Ploi.io so I could manage deployments and servers in one place.
Each tool taught me something the previous one didn't. And the biggest lesson wasn't about tools at all. It was about the small practices around deployment that separate "mostly works" from "actually works every time."
Zero-downtime deployment is widely available now. Most hosting panels and deployment services support it out of the box. But to get the most out of it, there are practices you need to get right. That's what this post covers.
If you've run a WooCommerce store long enough, you've felt the slowdown. Pages take longer to load. The admin panel becomes sluggish. Checkout starts timing out during traffic spikes. You throw caching at it, upgrade your hosting, install optimization plugins. It helps for a while. Then it doesn't.
I've been building with PHP and Laravel for over ten years, and I've migrated production WooCommerce stores to custom Laravel applications. Not as a theoretical exercise, but because the stores hit a wall that no amount of WordPress optimization could fix.
Here's what that process actually looks like, including the parts the generic migration guides conveniently skip.
Eighteen months ago, using Claude or GPT from a Laravel app meant writing raw HTTP calls or using whatever half-maintained wrapper you could find on Packagist. Today there are four solid options, all actively maintained, all with real download numbers behind them.
I maintain one of them. anthropic-php and its Laravel companion anthropic-laravel have a combined 597,000 installs on Packagist. I've been building and shipping AI features in PHP since before most of these packages existed. So when people ask me "which one should I use?", I have opinions. They might surprise you.
Here's the honest breakdown.
In late 2023, I was working on a large Laravel project and hit a point where the app/ directory had become a mess. Models, controllers, services, policies, listeners, jobs, and notifications were all thrown into flat directories with no logical grouping. Everything related to billing lived next to everything related to user management. Finding anything required memorizing file names instead of following a structure.
I knew the answer was modular architecture. Break the application into self-contained modules where each module owns its models, controllers, migrations, routes, views, and everything else related to that feature. I'd seen this pattern work in other frameworks and it made perfect sense for Laravel.
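To make that concrete, a module-per-feature layout might look roughly like this (directory names are illustrative, not any package's actual convention):

```
app/Modules/
├── Billing/
│   ├── Models/
│   ├── Http/Controllers/
│   ├── Database/Migrations/
│   ├── resources/views/
│   └── routes.php
└── Users/
    ├── Models/
    ├── Http/Controllers/
    └── routes.php
```

Everything billing-related lives under Billing/, so the feature boundary is visible in the filesystem itself.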
So I looked at what was available. I tried several packages. None of them clicked.
I started building my own solution inside that project. As it grew and I wanted the same structure in my other projects, I extracted it into an open-source package: mozex/laravel-modules.
When Pest v4 dropped, one feature got my attention right away: test sharding. My CI pipelines were painfully slow, and splitting tests across parallel GitHub Actions runners seemed like the obvious fix.
I upgraded, configured the shards, pushed to CI, and waited.
Every shard ran every test. Shard 1 of 4? Full suite. Shard 3 of 4? Full suite. My "optimization" made things four times worse.