AI can generate a thousand articles a minute. But it can't do your thinking for you. Hashnode is a community of builders, engineers, and tech leaders who blog to sharpen their ideas, share what they've learned, and grow alongside people who care about the craft.
Your blog is your reputation — start building it.
13h ago · 9 min read · By: Haryn.us · Prologue: The Silent Timeout · 11:47 PM. Somewhere in the digital ether. The cursor blinked. Once. Twice. A metronome counting down patience. On the screen, a single line of PowerShell awa…
2h ago · 34 min read · TLDR: A pretrained LLM is a generalist. Fine-tuning makes it a specialist. Supervised Fine-Tuning (SFT) teaches it your domain's language through labeled examples. LoRA does the same with 99% fewer tr…
5h ago · 2 min read · If you've ever dealt with secrets in Kubernetes, you know it can be a bit of a headache. Keeping sensitive data like API keys, database credentials, or configuration bits secure and up-to-date across your Google Kubernetes Engine (GKE) clusters has t...
14h ago · 5 min read · This week has been somewhat intense and a lot of fun - both can be true! A teammate and I have been working on installing Panda CSS in our repo and upgrading our Design System. It's at times interesti…
CEO @ United Codes
1 post this month · Obsessed with crafting software.
9 posts this month · #cpp #design-patterns #rust
1 post this month · Building backend systems. Occasionally understanding why they work.
1 post this month · Security Researcher | Red Team
Completely agree: most failures I’ve seen come from poor context management and unclear data flow, not the model itself. State handling also becomes a major issue when workflows scale, especially with multiple tools and agents interacting. In my experience, debugging improves a lot once you treat it as a system design problem rather than just an AI model issue.
Hmm, I think AI tools are actually pretty helpful, but you still have to double-check everything — they’re not perfect 🙂
Most companies haven't answered a basic question yet: who is accountable when an AI agent takes an action? Until that's resolved, they'll keep defaulting to safe, surface-level AI features instead of truly rethinking workflows. The bottleneck isn't the technology; it's the accountability layer nobody wants to own.
API docs get attention. The frontend/API contract usually doesn't. TypeScript helps, but types lie without runtime validation. The API returns an unexpected null, a renamed field, an edge case you never tested, and your types have no idea. Zod fixes this: parse at the boundary. If the API changes shape, you catch it at the schema, not in a Sentry alert a week later. We do this with Next.js Server Actions too; the server/client boundary is the natural place to validate. Keep the schema next to the call. The documentation problem and the type-safety problem are usually the same problem.
You’re definitely not alone; that “Step 5 bottleneck” is where most AI-assisted teams hit reality. Right now, most teams aren’t fully automating reviews yet. The common pattern I’m seeing is a hybrid approach, not purely human or purely automated. What others are doing: AI generates code → automated checks (linting, tests, security, architecture rules) → targeted human review (not full manual review). 👉 The key shift: humans review intent + architecture, not every line.
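The hybrid gate described above can be sketched as a small routing function: cheap automated checks run first, and humans are pulled in only when a change touches architecture-level concerns. The types, the `hotSpots` rule, and the directory names are illustrative assumptions, not a real tool:

```typescript
// One automated check result (lint, tests, security scan, arch rules).
type Check = { name: string; passed: boolean };

interface ChangeSet {
  files: string[];  // paths touched by the AI-generated change
  checks: Check[];  // results of the automated pipeline
}

type Verdict = "auto-approve" | "human-review" | "reject";

function reviewGate(change: ChangeSet): Verdict {
  // Any failed automated check rejects outright: no human time spent.
  if (change.checks.some((c) => !c.passed)) return "reject";

  // Humans review intent + architecture, not every line: escalate only
  // when the change touches architectural hot spots (hypothetical rule).
  const hotSpots = ["schema/", "auth/", "infra/"];
  const touchesArchitecture = change.files.some((f) =>
    hotSpots.some((dir) => f.startsWith(dir))
  );
  return touchesArchitecture ? "human-review" : "auto-approve";
}
```

The point of the sketch is the ordering: automation is the filter, and human attention is reserved for the changes where intent actually matters.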
Nice first deployment walkthrough! One thing worth adding to this stack: set up an OAI (Origin Access Identity) or the newer OAC (Origin Access Control) so your S3 bucket stays fully private and only CloudFront can read from it. Without that, the bucket is publicly accessible even though CloudFront is in front. Also, consider adding a Cache-Control header strategy — setting immutable assets to max-age=31536000 with content hashing in filenames, and your index.html to no-cache so CloudFront always checks for the latest version. WAF is a solid move this early — most people skip it until they get hit with bot traffic.
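The Cache-Control strategy from that comment can be sketched as a pure function you'd run at upload time. The hashed-filename regex and the five-minute fallback are conventions assumed here, not requirements of S3 or CloudFront:

```typescript
// Content-hashed assets look like "app.3f9a2c1b.js": the hash in the
// name changes whenever the content does. (Assumed naming convention.)
const HASHED_ASSET = /\.[0-9a-f]{8,}\.(js|css|woff2|png|svg)$/;

function cacheControlFor(path: string): string {
  // index.html must always be revalidated so CloudFront serves the
  // latest deployment's entry point.
  if (path.endsWith("index.html")) return "no-cache";

  // Hashed assets never change at the same URL, so cache for a year
  // and mark immutable to skip revalidation entirely.
  if (HASHED_ASSET.test(path)) return "public, max-age=31536000, immutable";

  // Everything else gets a short, safe default (assumed here).
  return "public, max-age=300";
}
```

At deploy time you'd pass the returned string as the object's Cache-Control metadata when uploading to S3, so CloudFront and browsers both honor it.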
I keep seeing people blame the model when something breaks. In most cases, that’s not where the problem is. From what I’ve seen, things usually fail somewhere else: agents pulling in too much or wron…
Agree. This is very close to what I’ve seen while building Origin. Once you connect AI to tools, files, and workspace state, it becomes much...
100% agree — this matches what I see building automation systems for clients daily. The model is usually the most reliable part of the stack...