From End-to-End to Scalable: An Engineering Technical Test Maturity Model for the Real World

I’ve lost count of how many times I’ve sat in meetings and heard something like:

“We need more test coverage. Let’s add another 500 end-to-end tests.”

It sounds logical, right? More tests = more coverage = more confidence. But anyone who’s worked in software engineering long enough knows the reality: every new end-to-end test is a bit like buying another luxury car for a daily commute. It looks impressive in the parking lot, but the costs pile up, maintenance becomes a nightmare, and suddenly you’re stuck in traffic anyway.

The truth is, not all tests are equal. And if we want to scale — really scale — we need to get smarter, not just bigger.


The Model: Engineering Technical Test Maturity vs. Scalability

Oh no, not another engineering or QA model. But wait, hear me out 🙂 it might be the model that helps put some of the others in context. For background, this also ties in to the Test Pyramid Revisited post I wrote back in 2018.

Here’s the mental model I use with teams: picture a two-axis graph.

  • X-axis: Engineering Technical Test Maturity → How well the engineering team designs and distributes tests across levels of the system (unit, service/component, integration, end-to-end).
  • Y-axis: Scalability → How easily your test suite can grow without spiraling into cost and chaos.

Where you sit on this graph says everything about how sustainable your testing approach is.

The Four Quadrants

  1. Bottom-Left (Low Maturity, Low Scalability):
    Everything is tested through the UI. Tests are slow, flaky, and expensive. The team spends more time fixing broken test scripts than actually improving quality. Releases drag.
  2. Bottom-Right (Higher Maturity, Still Limited Scalability):
    The team has invested in some unit and integration tests, but the balance is still skewed. Scalability is limited because there’s still too much weight on bulky end-to-end flows.
  3. Top-Left (Low Maturity, Attempted Scaling):
    These are the danger zones. Teams try to “scale” by multiplying end-to-end tests — buying more cloud platform licenses, running suites in parallel — but it’s duct tape. The costs grow linearly, and every release cycle feels heavier. Is it sustainable though?
  4. Top-Right (High Maturity, High Scalability):
    The sweet spot. Tests are distributed smartly across the pyramid. Most are small, fast, and cheap; a handful of end-to-end flows validate the user journey. Costs stay stable even as coverage grows. Releases are fast, feedback loops are tight.

So where do you fit in this model? Challenge yourselves and realistically try to plot your company against this model in terms of Maturity vs Scalability.


Why End-to-End Feels So Safe (But Isn’t)

End-to-end tests are seductive. They simulate the user journey, they’re easy to explain to stakeholders, and they give the illusion of “covering everything.” But here’s what happens when you lean too heavily on them:

  • Every test is potentially slow. One flow = minutes. Multiply by hundreds = hours.
  • Infrastructure explodes. More VMs, more licenses, more environments = $$$.
  • Flakiness rises. The more complex the flow, the more likely it fails for reasons unrelated to defects.
  • Maintenance burns out teams. A single UI tweak or API response change can break dozens of tests.

It’s coverage on paper, not in practice.


The 80/20 Rule: Why Balance Wins

The pragmatic target is 80/20:

  • 80% → Unit, service, component, and integration tests.
  • 20% → End-to-end tests validating full workflows.

This ratio balances confidence with scalability. You still see the full user journey, but you’re not drowning in it.


Real-World Case Study

Let’s compare two hypothetical teams:

Team A (E2E Heavy):

  • 500 end-to-end tests.
  • Average runtime: 2 minutes/test → roughly 17 hours of serial runtime.
  • Infrastructure cost: $5,000/month.
  • Regression cycle: 2–3 days.
  • Developer morale: low (lots of flaky failures).

Team B (Balanced):

  • 400 unit/integration tests (<5s each).
  • 80 component tests (30s each).
  • 20 end-to-end tests (2m each).
  • Total runtime: <45 minutes with modest parallelization.
  • Infrastructure cost: $1,200/month.
  • Regression cycle: <1 hour.
  • Developer morale: high (fast feedback, less noise).

Both claim “coverage,” but only one is scalable.
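To make the comparison concrete, here is a tiny runtime model using the numbers above. It's a sketch: the "Team B under 45 minutes" figure assumes a few parallel CI workers, which is the usual setup but not stated explicitly in the numbers.

```python
# Toy runtime model for the two hypothetical teams, using the numbers above.
def suite_minutes(layers):
    """Total sequential runtime in minutes for (test_count, seconds_per_test) pairs."""
    return sum(count * seconds for count, seconds in layers) / 60

team_a = suite_minutes([(500, 120)])                      # 500 E2E tests, ~2 min each
team_b = suite_minutes([(400, 5), (80, 30), (20, 120)])   # balanced pyramid

print(f"Team A: {team_a / 60:.1f} hours sequential")      # ~16.7 hours
print(f"Team B: {team_b:.0f} minutes sequential")         # ~113 minutes
# Even modest parallelism (e.g. 3 CI workers) brings Team B under 45 minutes:
print(f"Team B, 3 workers: {team_b / 3:.0f} minutes")     # ~38 minutes
```

The asymmetry is the point: Team A can only buy its way down with more infrastructure, while Team B's suite is cheap enough that parallelism is optional, not load-bearing.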


The Hidden Economics of Testing

This is where leaders often get surprised. Optimizing your test mix doesn’t just improve quality and scalability — it saves real money.

  • Scenario: A team reduces their end-to-end suite from 70% of coverage down to 20%.
  • Result: Infra spend drops 50–70%. Test runtime shrinks from hours to under an hour.
  • Knock-on Effect: Faster builds → faster releases → faster time-to-market.

When you frame it this way, engineering test maturity isn’t just a technical decision — it’s a business strategy.


Steps to Move Up the Curve

Think of this as your roadmap out of E2E quicksand:

  1. Audit your suite. How many tests are end-to-end vs. unit/integration? Be brutally honest.
  2. Break down big flows. Start with one journey (e.g., “user login + transfer”) and split it into smaller pieces.
  3. Upskill your team. If your testers are mostly non-technical, invest in training. Pair them with devs, run coding workshops, start small with service tests.
  4. Invest in tooling. Use frameworks and technologies that make lower-level automation fast and maintainable.
  5. Track ROI. Show leadership the time and cost saved after shifting coverage. It’s your best argument for continued investment. Snapshot what your testing and costs look like today, then show how that picture gradually shifts into green as you advance in maturity.
  6. Introduce AI & Predictive Testing. Modern tools can help select the right subset of tests to run, cutting even further into cycle time while maintaining confidence. AI can also help you understand your code repos and technical architecture, and write lower-level tests faster (speeding up adoption at that level).

The Future of Scalability: AI and Predictive Testing

We’re entering a new era, but instead of charging in headstrong and blind, take a balanced approach. Rather than brute-forcing more tests, you could practically approach it as follows:

  • AI-based test generation. Tools that create unit and service tests automatically from code changes.
  • Predictive test selection. Running only the tests most likely to fail, based on history.
  • Intelligent monitoring. Linking production telemetry back to test prioritization.
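As an illustration of the predictive-selection idea, here is a deliberately tiny model. The data shapes are hypothetical; real tools mine CI history and coverage maps. The principle: run tests that touch a changed file or that have been failing recently.

```python
# Minimal history-based test selection (illustrative data shapes, not a real tool).
def select_tests(history, changed_files, fail_rate_threshold=0.1):
    """history maps test name -> {"files": set of covered files, "fail_rate": float}."""
    changed = set(changed_files)
    return sorted(
        name for name, info in history.items()
        if info["files"] & changed or info["fail_rate"] >= fail_rate_threshold
    )

history = {
    "test_login":    {"files": {"auth.py"},     "fail_rate": 0.01},
    "test_transfer": {"files": {"payments.py"}, "fail_rate": 0.02},
    "test_flaky_ui": {"files": {"ui.py"},       "fail_rate": 0.30},  # historically unstable
}
print(select_tests(history, changed_files=["auth.py"]))
# -> ['test_flaky_ui', 'test_login']
```

Note the second clause: keeping historically unstable tests in every run is what preserves confidence while the subset shrinks.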

The engineering technical test maturity vs. scalability model still applies, but the journey to the top-right corner now has accelerators; we are better equipped to get there fast than we ever were in the past 🙂


Wrapping It Up

Scaling testing isn’t about adding more tests. It’s about adding smarter tests.

  • If you’re stuck in the bottom-left, ask yourself: what’s one flow we can break down today?
  • If you’re duct-taping your way in the top-left, ask: how long until this collapses under cost and flakiness?
  • If you’re climbing toward the top-right — congrats. You’re not just improving your engineering test maturity, you’re future-proofing delivery.

The next time someone says “we need more coverage, why did our testing not find that production issue”, pull up this model and ask: “Do you want more tests, or do you want more scalability?”

See also: https://toyerm.wordpress.com/2018/10/16/lower-level-automation-and-testing-be-more-precise-the-automation-triangle-revisited-again/

Blog Series: QA Reimagined – Navigating Agile, AI and Automation-Part 5

Part 5: AI in QA – Beyond the Buzzwords, Into the Real Work

Previously in this series:

In Part 4, we explored how practices like feature flags and trunk-based development shift the way we think about testing.

The message was clear: QA isn’t a checkpoint anymore — it’s a constant, contextual presence.

Now for the topic we have all been waiting for: AI, agents, and LLMs. This post picks up from the past few posts intentionally — but adds a new dimension:

How can AI actually support QA today — in ways that are helpful, not hype? The point is that AI in QA does not come out of the blue; it arrives as an evolution of what we already know, cultivated and built as the craft and area we love so much: QA engineering.

AI in Testing: We’ve Moved Past the “Wow” Phase

Let’s be honest: the past year has been filled with “AI + testing” announcements — auto-generating test cases, writing scripts from prompts, autonomous testing agents, etc.

And while it’s exciting, I’ve found most teams are still asking:

“But how do we use this in practice — with our codebase, our tests, and our people?”

In this post, I’ll focus on the real use cases I’ve seen working. These aren’t sci-fi. They’re practical, targeted, and most importantly — they fit into how teams already work.

Where AI Actually Helps (Today: 2025 😅)

1. Test Case Generation from Requirements

With LLMs, we can now generate:

-Gherkin-style scenarios from user stories

-Acceptance test outlines from Jira tickets

-Manual test case drafts from Confluence pages

This helps teams get unblocked faster, especially when QA joins mid-sprint or when specs are vague.

Tip: Always validate — but don’t start from scratch.

2. Root Cause Analysis (RCA) Acceleration

To accelerate RCA, LLMs can:

  • Read failed test logs
  • Cross-reference recent code changes
  • Suggest likely failure areas

I’ve seen this help triage complex integration failures within minutes — saving hours of Teams/Slack messages and blame-chasing.

Think of this as a co-pilot for defect analysis, not a replacement for engineering intuition.
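Mechanically, the co-pilot framing can be as simple as assembling context for an LLM. The function and field names below are hypothetical, and the actual model call is left to whatever client you use:

```python
# Assemble an RCA prompt from a failed test's log tail and recent commits.
# (Illustrative only: no real LLM client is called here.)
def build_rca_prompt(test_name, log_tail, recent_commits):
    commits = "\n".join(f"- {c}" for c in recent_commits)
    return (
        f"Test `{test_name}` failed. Last log lines:\n{log_tail}\n\n"
        f"Recent changes:\n{commits}\n\n"
        "Suggest the most likely failure area and a first triage step."
    )

prompt = build_rca_prompt(
    "test_transfer_flow",
    "AssertionError: expected status 200, got 503",
    ["a1b2c3 payments: retry on timeout", "d4e5f6 auth: rotate token cache"],
)
```

The value is in wiring your CI artifacts (logs, diffs, ownership data) into the prompt automatically, so triage starts with context instead of a blank Slack thread.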

3. Smart Test Selection and Prioritization

Tools or internal LLM-based RAG setups help answer:

Which tests should run based on this commit? Can we skip tests unrelated to the change?

This matters more than ever in monorepos, microservice-heavy environments, or cross-platform suites.

The goal isn’t fewer tests — it’s smarter pipelines that give faster feedback.

4. Accessibility and UI Heuristics

LLMs and agents can now:

  • Scan UI designs or web apps for accessibility violations
  • Flag common anti-patterns (e.g., missing labels, poor contrast)
  • Offer contextual suggestions inline

Early stages, yes — but a great tool in exploratory testing, especially for WCAG compliance.

5. Exploratory Testing Assistance

During manual testing sessions, AI copilots can:

  • Record user actions and annotate test steps
  • Suggest variations or edge cases to try next
  • Capture screenshots + logs automatically

I’ve tested this with mixed results — but it’s promising for teams trying to elevate the quality of exploratory work.

The value isn’t just in AI automation — it’s in lowering friction.

What AI Still Can’t Do (Well), Based on My Viewpoint

Let’s be clear — this isn’t magic. AI in QA still struggles with:

-Domain-specific logic (unless trained with your own data)

-Flaky test identification (unless signal-to-noise ratios are high)

-Deep exploratory intuition (that human curiosity edge we all have)

-Systems that require understanding non-textual complexity (multi-modal flows, hardware interactions)

Use AI to assist your judgment — not to replace it.

My Guidance for QA Leaders and Teams: Very Important Golden Rules

If you’re trying to introduce AI into your QA practice:

a) Start with pain points. Where are you losing time — writing boilerplate, triaging defects, checking logs?

b) Use your existing data. Connect LLMs to your Jira, test management tools (Xray, TestRail, etc.), test logs, or commit metadata. Don’t expect good answers from general prompts.

c) Focus on augmentation. AI is your accelerator, not your decision-maker.

d) Build trust slowly. Validate its outputs. Refine the prompt. Re-evaluate what success looks like.

e) Train the team. Not just on tools — but on what to expect, how to review AI output, and when to trust vs override.

Closing Thoughts: Practical, Not Just Promising

In this post I purposefully kept things at a more strategic level. There is a lot more under the surface, from different types of LLMs to MCPs and advanced agents, but that’s for a whole new blog series… watch this space 😅

We’re at a moment where AI can actually support real QA work. But it requires alignment with reality: your context, your tests, your pain points, your people.

The good news? The tools are ready.

The challenge? The thinking has to catch up.

The best AI solution in testing is the one that fits your team’s needs, not someone else’s roadmap.

Up Next:

I conclude the blog series with some reflections, predictions, and forecasts that may prove the differentiator in what the future brings.

Blog Series: QA Reimagined – Navigating Agile, AI and Automation-Part 4

Part 4: Testing in Motion – Feature Flags, Trunk-Based Dev, and the QA Mindset Shift

Previously in this series:

In Part 3, I focused on how automation needs to evolve to support modern Agile teams — not just in coverage or tooling, but in how we think about feedback, risk, and delivery flow.

This time, we look at two game-changers in software delivery: feature flags and trunk-based development (TBD). Together, they’re reshaping not just how we release — but how, when, and what we test.

And QA needs to adapt.

The Challenge of Continuous Change

As teams move toward faster release cycles — sometimes daily or even hourly — the QA landscape shifts beneath our feet. We no longer have the luxury of waiting for “final” builds or stable UIs.

Instead, we now test in motion:

Features under development are already merged. Releases are decoupled from deployments. Flags are used to control exposure, experimentation, and rollback.

This is exciting — but also messy.

Without the right mindset and structure, testing can become disconnected from real risk and real users.

Feature Flags: Power and Responsibility

Feature flags (also called toggles) give teams control over when and how features are released. With a flip of a switch, we can:

– Test features in production without exposing them to users

– Deploy incomplete features safely

– Roll out gradually (e.g., 10% of users)

– Run A/B experiments

But with great flexibility comes a new QA burden.

QA Considerations with Feature Flags:

  • Are you testing both the on and off states of each flag?
  • Are your tests flag-aware — or are they failing when a flag is inactive?
  • How are flag configurations handled across test environments?
  • Do you have a strategy to clean up stale flags?

Feature flags give agility — but without QA awareness, they also add complexity and blind spots.

Trunk-Based Development (TBD): Fast, Frequent, and Fragile (if you’re not ready)

TBD encourages small, frequent changes merged into a shared main branch. It reduces long-lived branches and the painful merges that come with them.

Combined with CI/CD, it allows near-continuous delivery.

But for QA, it creates three key shifts:

1. Testing Must Be Contextual and Incremental

You don’t test a “release candidate” anymore — you test the latest commit. This means automation needs to be intelligent and fast (see Part 3). Manual testing shifts toward exploration, not catch-all regression.

2. Quality Signals Must Be Real-Time

Build failures, test alerts, and flag mismatches need to surface quickly. Slack notifications, dashboards, and trend tracking help teams respond in hours — not sprints.

3. Everyone Must Own Risk

With TBD, the feedback loop is tight. There’s no time for QA to “catch it later.” Developers, QA, and product must collectively assess change impact.

In trunk-based workflows, QA can’t be reactive. It must be embedded.

Patterns I’ve Seen Work Well with TBD and Feature Flags

Over the last few years, here’s what I’ve seen help teams succeed when combining flags + TBD:

✅ Flag-Aware Automation

Automated tests are written with both flag states in mind. Feature flag states are injected or mocked during test runs. CI pipelines run a matrix of tests with flags toggled.

✅ Exploratory Testing in Context

Exploratory sessions are run while features are toggled on — not just post-release. Teams test both behavior and rollback impact.

✅ Change Impact Awareness

Teams use tools or metadata to track which services or flags a code change touches. This helps QA prioritize test coverage and exploratory focus.

✅ Documentation in Motion

Feature flag status and test coverage are visible and traceable (via Confluence, Jira, etc.). Flags are part of the Definition of Done — including clean-up plans.

But What Happens Without QA Adaptation?

In teams that don’t align QA to these practices, I often see:

-False positives from automation when flags are off

-Missed test coverage because toggled-off features go untested

-Painful rollback bugs when off-state behavior isn’t validated

-Confusion over what’s tested vs. what’s “released”

The result? Features are in production but invisible to QA — and that’s a dangerous place to be.

How to Approach This as a QA Leader

When coaching teams or leading internal quality transformation, here are the principles I use:

1. Design your tests to account for configuration, not just functionality

2. Define test strategies for both feature flag states — and clean them up

3. Integrate flag visibility into your test reporting and dashboards

4. Use toggle-aware scenarios when possible (“Given flag X is on…”)

5. Involve QA in release toggling and flag planning — not just test execution

And most importantly:

Have a shared understanding of what “tested” and “done” actually mean in this new world.

Closing Thoughts: QA’s Role is Shifting — Again

As release and deployment decouple, and change becomes constant, QA is no longer a final check. It’s a continuous lens.

In a world of feature flags and trunk-based commits, testing is never done — it’s always in context.

Our job isn’t to resist that — it’s to bring clarity to it.

Quality in motion needs visibility, adaptability, and a QA mindset that’s not afraid to evolve.

⏭️ Up Next:

In the next post, I’ll dive into how AI is actually being used to support modern QA — beyond the hype.

Blog Series: QA Reimagined – Navigating Agile, AI and Automation-Part 3

Part 3: Test Automation in Agile – Designed for Speed, Built for Confidence

If you missed the last post:

In Part 2, I talked about the foundational QA principles that still matter — things like layered test strategies, the role of risk, and how quality must be a team mindset. That post grounded us.

This one builds forward.

We All Have Tests — But Do We Have a Test Strategy?

Let’s face it: almost every modern Agile team has some form of test automation in place. Whether it’s unit tests run through CI, API checks on a staging environment, or a bunch of end-to-end tests trying to simulate real-world scenarios — we’ve all got something running somewhere.

But the real question I keep asking is this:

Is our test automation actually serving the pace and priorities of our team — or are we just dragging legacy habits into faster cycles?

Over the years, I’ve seen this pattern repeat across teams of all sizes:

Test suites grow, but feedback slows. CI pipelines exist, but no one fully trusts them. Automation exists… but it doesn’t accelerate delivery — it delays it.

It doesn’t have to be that way.

Agile Changed Everything — Automation Has to Catch Up

In Agile, the way we build software has changed dramatically:

Stories are smaller and continuously evolving. Releases can happen daily — or multiple times a day. Feedback loops are expected to be instant.

So automation has to shift from being a standalone initiative to becoming a core enabler of flow.

This means:

Writing tests in the sprint, not after. Testing small, frequent changes — not large, delayed bundles. Designing automation with feedback and trust in mind — not just pass/fail results.

In Agile, test automation isn’t a phase. It’s an embedded feedback system.

Automation That Supports the Flow of Agile Work

Here’s what’s working well in teams I’ve supported or worked with:

1. Automation aligned to story delivery

Test cases are created with the story. That might mean:

Developers and QA pairing on both unit and functional tests

Feature-flag-aware tests for toggled or incomplete features

2. Pipelines that prioritize speed and confidence

A typical setup might look like:

Fast unit tests on every commit

Selected integration/API tests based on change impact

Targeted E2E tests on merge or before deployment

Full regression in nightly runs, not every PR

We stop expecting every test to run everywhere — and start thinking about purpose and signal-to-noise ratio.

3. Failures lead to learning, not just red pipelines

Flaky tests are tracked and reviewed regularly. Test ownership is clear. When a test fails, we don’t ignore it — we ask, “Is this still valid? What is it telling us?”

Automation Is More Than Writing Scripts

One mindset shift I always encourage is this:

Automation is not just about writing Selenium, Cypress, or Playwright scripts. It’s about designing systems for feedback, trust, and maintainability.

Ask yourself:

Are tests reliable enough to gate a release?

Can you pinpoint which change broke what — or does it require detective work?

Do devs want to run the tests before merging — or do they avoid them?

If your automation adds friction instead of flow, it’s time to rethink its design — not just its tool.

Common Pitfalls I Still See:

Let’s be honest — these still happen too often:

-End-to-End Overload: Too many UI tests trying to cover business logic.

-Unowned Test Suites: No one tracks failures, test coverage, or test debt.

-“Automate Everything” Pressure: No prioritization, leading to bloat and slow pipelines.

-Test-After Stories: Dev work is marked “done,” and test work begins from scratch.

Agile isn’t just about speed. It’s about delivering value faster. Automation has to support that mission.

When working with teams, here are the core principles I lean on and truly advocate for. To be fair, in my experience those teams didn’t always achieve all of this, but I believe in the North Star and vision, and so I geared the teams to start thinking in this direction. Some moved faster, others slower (depending on the situation in each org/team). Nonetheless, I encourage everyone to start thinking along these lines:

-Automate by intent, not inertia. Every test should have a reason to exist.

-Time-box test coverage per sprint. If it doesn’t make the sprint, it needs to be tracked and prioritized.

-Use tags or annotations. Not all tests need to run on every pipeline. Design for filtering.

-Maintain a quality dashboard. Track flakiness, failure reasons, execution time, and trust levels.

-Review tests in retros. What passed? What failed? Why? Are we testing too much or not enough?
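The tagging idea can be expressed without any particular framework (in pytest you would use markers and `-m` selection; this plain-Python sketch, with illustrative tag names, just shows the filtering principle):

```python
# Tag tests with intent so pipelines can run a subset (tag names are illustrative).
def tag(*labels):
    def wrap(fn):
        fn.tags = set(labels)
        return fn
    return wrap

@tag("smoke", "pr")
def test_login():
    pass

@tag("nightly", "regression")
def test_full_checkout_regression():
    pass

def select(tests, wanted):
    """Return the tests carrying the wanted tag, e.g. 'smoke' on every PR."""
    return [t for t in tests if wanted in t.tags]

suite = [test_login, test_full_checkout_regression]
print([t.__name__ for t in select(suite, "smoke")])    # -> ['test_login']
print([t.__name__ for t in select(suite, "nightly")])  # -> ['test_full_checkout_regression']
```

Designing for filtering from day one is what later lets you run "smoke on every PR, full regression nightly" without restructuring the suite.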

Closing Thoughts: Automation That Serves, Not Slows

Automation is powerful — but only if it serves the team.

If your tests aren’t giving your team confidence, clarity, and speed, then they’re just noise. Agile requires us to move fast, yes — but not blindly.

The best automation strategy is one that your team trusts and uses — every day.

Stay tuned for the next blog post; it gets a little more technical from this point on…

Blog Series: QA Reimagined: Navigating Agile, AI, and Automation Part 2

Foundations First: What Still Matters in Modern QA

In my previous post, I shared how the world around us has shifted — rapidly and profoundly. As QA professionals, we’re not just dealing with more tools and buzzwords; we’re navigating an entirely new way of thinking, building, and collaborating.

But amid all the noise — AI, LLMs, automation agents, DevOps, and daily deploys — I keep coming back to a core belief: no tool or technique will make a lasting impact unless it’s built on the right foundations.

So before we dive deeper into what’s new, we need to acknowledge what’s still true.

This post is about that. It’s about the timeless principles that still shape great QA work — the ones I’ve seen hold up whether I’m working with a lean product team, scaling across squads, or coaching in complex regulated environments.

These are the fundamentals that allow us to evolve without losing our edge.

1. Testing Isn’t a Phase — It’s a Thought Process

Modern QA starts long before the first test case. It begins when:

  • A requirement sparks a “what if?” scenario
  • An edge case pops into your head during backlog grooming
  • Someone wonders, “Will this fail silently in production?”

Testing is thinking — not typing.

In the strongest teams, quality shows up in how stories are written, how code is reviewed, and how assumptions are challenged. The earlier that QA mindset enters the conversation, the more resilient the outcome.

Testing is not a gate. It’s how we build things that last.

2. The Testing Pyramid Still Holds Up

Yes, it’s been drawn a thousand times. But it still helps guide the right strategy.

I shared this post quite a few years ago, most of which is still relevant today:

A balanced pyramid is more than just a theory — it’s the difference between:

  • A 10-minute pipeline vs. a 90-minute bottleneck
  • Test failures that isolate root cause vs. long triage sessions
  • Agile delivery vs. sprint rollovers

Anti-Patterns I Still See:

  • UI-Heavy Suites that break on every small DOM shift
  • Neglected Unit Tests, relying solely on manual regression
  • Orphaned API Tests with no clear ownership

The shape of your pyramid tells the story of your test strategy maturity.


3. Shift-Left Is More Than Early Testing

Too often, I hear “shift-left” used as a buzzword. But it’s not about just writing tests earlier — it’s about thinking about quality earlier.

What real shift-left looks like:

  • QA helping define acceptance criteria before a story starts
  • Developers asking, “What’s the edge case here?” as they code
  • Testability being discussed in design reviews — not after launch

I have spoken about this topic before and you can reference here for more context on this aspect:

It’s collaboration, not redistribution.

Shift-left doesn’t mean everyone writes tests. It means everyone thinks in terms of quality.

4. Quality Is a Team Mindset

In high-performing teams, I’ve noticed something simple but powerful:

Everyone owns quality.

Not just QA. Not just automation engineers. Everyone.

  • Devs review test coverage with care.
  • Product managers flag risky areas for extra validation.
  • Designers loop the team into UX and accessibility checks.
  • QA coaches — not commands.

And when bugs happen (because they always do), the response isn’t finger-pointing. It’s:

“What did we miss — and how do we prevent it next time?”

That’s what a real quality culture looks like.

5. Risk-Based Testing: The Underrated Superpower

You can’t test everything — and you shouldn’t try to.

The teams that scale testing sustainably know this:

  • They identify business-critical flows and tag them as such
  • They design test plans that evolve with risk — not with gut feel
  • They know when not to automate

What this looks like practically:

  • Selectively running critical tests on every commit, full suite nightly
  • Layering exploratory testing for features with ambiguous UX

Risk isn’t a blocker — it’s your compass.

Closing Thoughts: Tools Change, Mindset Endures

Before we get into the tactical parts of automation, AI-inspired QA, and smarter pipelines, it’s worth asking:
Is our QA practice designed for speed — or built for confidence?

Foundational thinking — like collaboration, thoughtful layering, and risk-based decisions — helps you adapt to any tech shift.

Without it, even the best tools will give you the wrong kind of confidence.


Coming Next:

Test Automation in Agile: Building for Speed, Feedback, and Maintainability
Where we go deeper into how QA automation evolves in modern pipelines — and how to avoid turning it into just another fragile layer.

New blog series: QA Reimagined: Navigating Agile, AI, and Automation Part 1

QA Is Everyone’s Business — And It’s Changing Fast

It’s been a while since I posted on here, but during this time I’ve gained hands-on experience across multiple industries, geographies, and team scales — from legacy-heavy enterprises to fast-moving Agile squads.

And like many of you, I’ve found myself facing a new reality in tech — one buzzing with AI, LLMs, Agents, and rapid automation. But I don’t see this as just a wave of buzzwords. I believe the real opportunity lies in overlaying this new world with the practical, hard-earned QA expertise that brought us here.

The changing world of tech and engineering

It’s not about adopting trends for the sake of it — it’s about evolving meaningfully. Grounded knowledge is the foundation for lasting progress.

I will be writing a series of blog posts over the next few weeks, in a series I call: “QA Reimagined: Navigating Agile, AI, and Automation”

Why This Blog Series, and Why Now?

In this new series, I’ll be sharing my personal lens on how quality engineering is changing — and how we, as QA professionals, team leads, and builders, can evolve with it.

I’ll be unpacking:

-The intersection of traditional QA best practices with AI-based augmentation

-What it actually means to shift left, right, and intelligently

-How quality becomes everyone’s responsibility — not just a QA engineer’s

-Why test automation isn’t enough without strategy, context, and cross-functional collaboration

This is not a theoretical exploration. It’s practical, grounded in what I have seen being implemented, debugged, and transformed — in startups, regulated sectors, and everything in between.

QA Today: More Than Testing

The days of treating QA as a late-stage safety net are over.

Quality now lives in: design reviews, story grooming, PR validations, release toggles, production observability.

In modern pipelines, QA isn’t just about if something works — it’s about why, for whom, under what conditions, and at what risk.

And when things break — as they always do — it’s QA who helps the team learn, not just fix.

What to Expect

In this blog series, I’ll reflect on how modern QA practices are adapting to today’s challenges and opportunities. I’ll share perspectives on:

-How our role as QA professionals is evolving in collaborative, fast-paced teams

-Where automation adds value — and where it still needs human intuition

-How to think critically about integrating new tools like AI without losing strategic focus

-What it takes to build a culture of quality that scales across teams, tools, and time zones

Expect practical insights, real examples, and a focus on relevance over hype.

If you’ve felt the growing pains of modern QA — the endless test maintenance, the pressure to automate everything, the unclear ownership of bugs — stay tuned.

Subscribe to the blog. Drop a comment. Or tag a teammate who still thinks “QA is just testing.”

Because in this world, QA is everyone’s business.

Let’s Build the Future — Grounded

If you’re leading QA efforts, managing cross-functional teams, or just trying to keep up with this new frontier — you’re not alone. I hope this series gives you both clarity and confidence as we adapt together.

Progress is good. But grounded progress is great.

Mind mapping for Software Engineering management and beyond


In a previous post a few years ago, MIND-MAPPING IN SOFTWARE TESTING: INCREASE THAT TEST COVERAGE, ITS ABOUT TIME!, I delved into the power of mind mapping as a transformative tool in software testing. Since then, my career journey has taken a remarkable turn from the technical trenches to the strategic pathways of management. As I navigated this evolution, so too has my use of mind mapping evolved, morphing from a mere testing tactic into a comprehensive instrument that intertwines quality and software engineering management. Today, I’m excited to share how this versatile tool has adapted and thrived in a broader context. Join me in my latest webinar, once again with the BiggerPlate team, where I explore the refined art of mind mapping through the lens of an evolved career in software and quality management.



Test Automation Uncovered: podcast feature for the Xray App podcast series

I was fortunate enough to be a guest on the QA Therapy series sponsored by Xray. Sergio Freire and Cristiano Cunha, Solution Architects and Testing Advocates at Xray, chatted with me extensively about test automation. The podcast is sponsored by Xray, a native quality management app for Jira. Listen to the full podcast here: https://hubs.li/Q01sq5x-0

QA Management – Tips for leading Global teams (my post featured on the LambdaTest blog platform)

The events over the past few years have allowed the world to break the barriers of traditional ways of working. This has led to the emergence of a huge adoption of remote working and companies diversifying their workforce to a global reach.
Leading teams in an ‘in-person’ setting can have its challenges but these challenges and complexities can be multiplied once you work with, lead and manage global or remote teams. 🌍
Read more of the thoughts I shared on the LambdaTest platform about QA management and tips for leading global teams here: https://www.lambdatest.com/blog/qa-management-tips-for-leading-global-teams/