Hashnode

Internet Publishing

201 Spear St #1100, San Francisco, California · 28,024 followers

Write to think. Publish to connect.

About us

Hashnode is a community of builders, engineers, and tech leaders who blog to sharpen their ideas, share what they've learned, and grow alongside people who care about the craft. Publish on your own custom domain, fully optimized for SEO and AI discovery. AI can generate a thousand articles a minute. But it can't do your thinking for you. Your blog is your reputation — Hashnode is where you start building it.

Website
https://hashnode.com
Industry
Internet Publishing
Company size
2-10 employees
Headquarters
201 Spear St #1100, San Francisco, California
Type
Privately Held
Specialties
Programming, Software Development, Dev Community, Software Engineering, Web Development, Mobile Development, Publishing, Blogging, Content creation, JavaScript, Flutter, Java, Rust, HTML, CSS, and Testing


Updates

  • Hashnode reposted this

    getting ai ask engines to recommend your product is way easier than most think. we did it for bug0 from scratch. 400% growth in 2 months. giving away the playbook in a free 15 min session. covers what we did with docs, blog content, and structured data to show up in ai search results. comment if you want in.

  • Hashnode reposted this

    Week 1: Playwright tests running in CI. Green. Team posts the screenshot in Slack.

    Month 2: Suite takes 18 minutes. Devs open a PR, context-switch, forget to check results. Some merge before tests finish.

    Month 3: Designer moves the "Submit" button to a sticky header. Three tests break. An engineer adds // TODO: fix after redesign. They never come back.

    Month 5: CI is green. But 40% of E2E tests are quietly disabled. The signup flow hasn't been tested in six weeks. A regression ships. A customer emails support.

    I've watched this exact timeline across dozens of SaaS teams that set up GitHub Actions for automated testing. The setup takes an afternoon. That's the easy part. The hard part is what runs inside the pipeline, and whether it's still running three months later.

    Most teams I talk to have "automated testing" that only covers unit tests. Their CI passes. Their checkout flow throws a 500 error. That green checkmark means the pipeline executed. It doesn't mean the product works.

    We wrote a guide on building a GitHub Actions pipeline that doesn't fall apart after month 3. It also covers the point where maintaining Playwright scripts yourself stops being worth the time. Link in comments.

    #QAtesting #e2etesting #regressiontesting #apptesting #aitesting #ai

  • Hashnode reposted this

    before we wrote a single line of code for bug0, we ran a free qa services company for 2 months.

    moved to the bay area with my wife and a toddler. airbnb for 6 months. went to founder friends and asked: let us take over your entire qa. for free.

    what we found was surprising. ai coding tools were great at writing code and unit tests. none of them could automate e2e browser testing at scale. hiring qa engineers cost a bomb. buying tools meant months of onboarding with nothing to show for it.

    so we just did the work. manually. 5 design partners in 3 weeks. nobody wanted the old school agency model with slow, manual processes. but it taught us exactly what to build.

    when we cracked ai agents and figured out one engineer could serve multiple customers instead of 1:1, the economics changed overnight. all 5 design partners converted to paying customers.

    sometimes you gotta build the agency before you build the product.

  • Hashnode reposted this

    Fazle Rahman · Bug0 · 9K followers

    I've talked to 200+ engineering teams this year. The ones who tried AI testing tools? Most turned them off within 2 months.

    The pattern:
    Week 1: "This is amazing, it wrote 50 tests from plain English."
    Week 3: "Why are 15 tests failing every run?"
    Week 6: Engineers spend more time triaging AI failures than they ever spent writing manual tests.
    Week 8: Tool gets turned off.

    Seed/series A stage startups. 50 to 300-person eng orgs. Same failure mode.

    Here's the part nobody talks about: the testing gap is getting worse, not better. Teams ship 76% more code per person than two years ago thanks to Cursor, Claude Code, and Copilot. But a CodeRabbit study on 470 PRs found AI-generated code contains 1.7x more issues. 75% more logic errors. The kind that look fine in review and break in production.

    More code = More bugs = No more QA.

    So teams reach for AI testing tools. The demos look great. Generate 50 tests from a URL. Self-healing locators. Zero config.

    Then 20 tests fail on a run. Half are false positives. Nobody knows which half without investigating each one manually. Your engineers are now doing triage work they didn't sign up for. Within weeks, they stop checking the alerts. The tool becomes noise. Noise gets muted.

    I've watched this exact sequence play out dozens of times. The tools aren't broken. The model is.

    The teams where AI testing actually works have a human between the AI output and the developer. Someone who confirms every failure is real before it reaches an engineer's inbox.

    AI generates + Humans validate. That's the architecture. It's why we built Bug0 this way… the hybrid model tool + FDE.

    Names changed for brevity.

  • Hashnode reposted this

    Fazle Rahman · Bug0 · 9K followers

    introducing the qa engineer you never have to onboard.

    give it a URL. no test plan, no context. it just opens your app and starts browsing like a real user. it finds your signup flow, your search, your checkout. stuff you'd normally spend a week mapping out in a spreadsheet before anyone writes a single test.

    each flow becomes a draft test. you decide what to keep. we're calling it bug0 discover. the whole thing takes about 5 minutes.

    but here's the thing. you can leave it running. go to sleep. come back in the morning and it's found flows in your app you didn't even know existed. edge cases nobody on your team thought to document.

    no qa tool has done this before. most tools wait for you to tell them what to test. this one figures it out on its own. that's not a 2x improvement in qa productivity. that's 1000x. your test coverage grows while you're not even working.

    here's what it looks like ↓ (video has been sped up)

  • Hashnode reposted this

    we shipped something interesting today. bug0.com now serves clean markdown to ai agents instead of bloated html. ~3kb of structured content instead of ~500kb of html noise.

    middleware detects the Accept: text/markdown header or .md url suffix and serves the clean version automatically.

    when someone asks an ai assistant about testing tools, we want to be the answer it cites. not the page it skips because it couldn't parse through our react bundle.

    19 landing pages, blog, knowledge base, competitor alternative pages. all agent-readable now.

    try it yourself:

    curl -H "Accept: text/markdown" https://bug0.com
    curl -H "Accept: text/markdown" https://bug0.com/voice-ai-testing
    curl https://bug0.com/llms.txt

    or just append .md to any page url. we also set up /llms.txt as a site-wide index so agents know what content exists before crawling blindly.

    we build ai agents that test apps e2e. making our own site agent-readable felt like the obvious next step. if you're building a website in 2026 and not thinking about agent readability, you're leaving discovery on the table.
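The detection rule described in that post (Accept header or .md suffix) fits in a few lines. Below is an illustrative TypeScript sketch, not bug0's actual middleware — the `Req` shape and `wantsMarkdown` name are ours:

```typescript
// Sketch of the negotiation rule: serve markdown when the request carries
// "Accept: text/markdown" or when the path ends in ".md".
// Hypothetical names; bug0's real middleware is not public.
interface Req {
  path: string;
  accept?: string;
}

function wantsMarkdown(req: Req): boolean {
  if (req.path.endsWith(".md")) return true;
  // An Accept header can list several types, e.g. "text/html, text/markdown;q=0.9",
  // so split on commas and drop any ";q=..." parameters before comparing.
  return (req.accept ?? "")
    .split(",")
    .some((part) => part.trim().split(";")[0].trim() === "text/markdown");
}
```

In a framework like Next.js, a middleware would run a check like this and rewrite matching requests to a pre-rendered markdown route instead of the HTML page.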

  • Hashnode reposted this

    Just shipped a new feature to Bug0. You can now tag tests in Bug0 Studio with arbitrary tags like smoke, critical, etc., and run those tests selectively. Internally, we achieve this with Playwright's tagging mechanism. Using Playwright tags in code is easy, but I believe a GUI is even better: it standardizes usage, lets you see all your tags in one place, and doesn't require code-level access.
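For context on the underlying mechanism: Playwright (1.42+) lets a test declare tags via the `tag` option, and `npx playwright test --grep "@smoke"` selects tests whose full title (including tags) matches the regex. The sketch below only illustrates that selection rule — `selectTests` and `TaggedTest` are our hypothetical names, not Playwright internals:

```typescript
// In a Playwright spec (>= 1.42), a test can declare tags:
//   test('checkout', { tag: ['@smoke', '@critical'] }, async ({ page }) => { /* ... */ });
// and be run selectively with: npx playwright test --grep "@smoke"
//
// Illustrative sketch of that selection rule: --grep is a regex matched
// against the test's title together with its tags.
interface TaggedTest {
  title: string;
  tags: string[];
}

function selectTests(tests: TaggedTest[], grep: RegExp): string[] {
  return tests
    .filter((t) => grep.test([t.title, ...t.tags].join(" ")))
    .map((t) => t.title);
}
```

A GUI that writes these tags for you, as described above, means the `--grep` selection works without anyone editing spec files by hand.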

  • Hashnode reposted this

    "we have 34 prs to merge in 2 weeks and our qa team can't keep up... help."

    that's the cry we keep hearing from customers and on demo calls.

    the reality: claude code and codex have boosted engineering productivity 10x. but while they're getting better at browser automation, qa e2e testing is still stuck on stone age frameworks like selenium, cypress, or playwright.

    with modern frameworks in the picture, script-based tests go stale the very next day. asking an engineer to maintain 16,000 tests with each new pr? nightmare.

    here's how we're solving peace-of-mind-as-a-service in qa testing. https://lnkd.in/g4T9AdxG

