Your Software Requirements Are Worthless

Every day, software teams burn millions of pounds building the wrong thing because they mistake fuzzy feelings and opinioneering for engineering specifications.

Software teams continue writing requirements like ‘user-friendly’, ‘scalable’, and ‘high-performance’ as if these phrases mean anything concrete.

They don’t.

What they represent is ignorance of quantification, disguised as intellectual laziness, itself disguised as collaboration. When a product manager says an interface should be ‘intuitive’ and a developer nods in agreement, no communication has actually occurred. Both parties have simply agreed to postpone the hard work of thinking and talking until later—usually until users complain or products break.

The solution isn’t better communication workshops or more stakeholder alignment meetings. It’s operational definitions—the rigorous practice of quantifying every requirement so precisely that a computer could verify compliance.

What Are Operational Definitions?

An operational definition specifies exactly how to measure, observe, or identify something in terms that are meaningful to the Folks That Matter™. Instead of abstract concepts or assumptions, operational definitions state the precise criteria, procedures, or observable behaviours that determine whether something meets a standard—and why that standard creates value for those Folks That Matter™.

The term originates from scientific research, where researchers must ensure their experiments are replicable. Instead of saying a drug ‘improves patient outcomes’, researchers operationally define improvement as ‘a 15% reduction in Hamilton Depression Rating Scale scores measured by trained clinicians using the 17-item version at 6-week intervals, compared to baseline scores taken within 72 hours of treatment initiation, with measurements conducted between 9-11 AM in controlled clinical environments at 21°C ±2°C, amongst patients aged 18-65 with major depressive disorder diagnosed per DSM-5 criteria, excluding those with concurrent substance abuse or psychotic features’.

This example only scratches the surface—a complete operational definition would specify dozens more variables including exact clinician training protocols, inter-rater reliability requirements, patient positioning, statistical procedures, and missing data handling. This precision is what makes scientific breakthroughs reproducible and medical treatments safe.

The Software Development Challenge

Software teams constantly wrestle with ambiguous terms that everyone assumes they understand:

  • ‘This feature should be fast’
  • ‘The user interface needs to be intuitive’
  • ‘We need better code quality’
  • ‘This bug is critical’

These statements appear clear in conversation, but they’re loaded with subjective interpretations. What’s ‘fast’ to a backend engineer may be unacceptably slow to a mobile developer. ‘Intuitive’ means different things to designers, product managers, and end users.

Worse: these fuzzy requirements hide the real question—what specifically do the Folks That Matter™ actually need?

How Operational Definitions Transform Software Teams

1. Connect Features to the Needs of the Folks That Matter™

Consider replacing ‘the API should be fast’ with an operational definition: ‘API responses return within 200ms for 95% of requests under normal load conditions, as measured by our monitoring system, enabling customer support agents to resolve inquiries 40% faster and increasing customer satisfaction scores by 15 points as measured on <date>.’

This eliminates guesswork, creates shared understanding across disciplines, and directly links technical decisions to the needs of the Folks That Matter™.
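
The measurement half of that definition is mechanical enough to automate. A minimal sketch in Python (the function name and sample figures are invented for illustration; real latencies would come from the monitoring system the definition names):

```python
def meets_latency_target(latencies_ms, threshold_ms=200, percentile=95):
    """True when at least `percentile`% of sampled requests complete
    within `threshold_ms` — the verifiable core of the requirement."""
    if not latencies_ms:
        return False  # no data: the target cannot be claimed as met
    within = sum(1 for t in latencies_ms if t <= threshold_ms)
    return within / len(latencies_ms) * 100 >= percentile

# 96% of these samples are at or under 200 ms, so the target is met
samples = [120] * 96 + [450] * 4
print(meets_latency_target(samples))  # True
```

A check like this can run as a CI gate or an alerting rule, which is what makes the requirement verifiable rather than aspirational.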

2. Turn Subjective Debates Into Objective Decisions

Operational definitions end pointless arguments about code quality. Stop debating whether code is ‘maintainable’. Define maintainability operationally:

  • Code coverage above 80% to reduce debugging time by 50%
  • Cyclomatic complexity below 10 per function to enable new team members to contribute within 2 weeks
  • No functions exceeding 50 lines to support 90% of feature requests being completed within a single sprint
  • All public APIs documented with examples to achieve zero external developer support tickets for basic integration

Each criterion ties directly to measurable benefits for the Folks That Matter™.
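
Criteria like these lend themselves to automated gatekeeping. A sketch, assuming the metric values have already been collected by the usual tooling (a coverage reporter, a complexity analyser, and so on); the threshold table mirrors the bullets above:

```python
# Each entry: metric name -> (direction, limit). The limits come from the
# operational definition above; the measured values would come from real
# analysis tools.
THRESHOLDS = {
    "coverage_pct": ("min", 80),        # code coverage above 80%
    "max_complexity": ("max", 10),      # cyclomatic complexity below 10
    "max_function_lines": ("max", 50),  # no functions exceeding 50 lines
}

def check_maintainability(metrics):
    """Return (metric, value, limit) tuples for every violated threshold."""
    violations = []
    for name, (direction, limit) in THRESHOLDS.items():
        value = metrics[name]
        ok = value >= limit if direction == "min" else value <= limit
        if not ok:
            violations.append((name, value, limit))
    return violations

report = {"coverage_pct": 72, "max_complexity": 9, "max_function_lines": 64}
for name, value, limit in check_maintainability(report):
    print(f"FAIL {name}: {value} (limit {limit})")
```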

3. Accelerate Decision Making

With operationally defined acceptance criteria, teams spend less time in meetings clarifying requirements and more time attending to folks’ needs. Developers know exactly what ‘done’ looks like, and the Folks That Matter™ verify completion through measurable outcomes.

4. Bridge Cross-Functional Disciplines

Different roles think in different terms. Operational definitions create a common vocabulary focused on the needs of the Folks That Matter™:

  • Product: Transform ‘User-friendly’ into ‘Users complete the checkout flow within 3 steps, with less than 2% abandonment at each step, increasing conversion rates by 12% and generating £2M additional annual revenue’
  • Design: Transform ‘Accessible’ into ‘Meets WCAG 2.1 AA standards as verified by automated testing and manual review, enabling compliance with federal accessibility requirements and expanding addressable market by 15%’
  • Engineering: Transform ‘Scalable’ into ‘Handles 10x current load with response times under 500ms, supporting planned user growth without additional infrastructure investment for 18 months’

5. Evolutionary Improvement

Operational definitions evolve as the needs of the Folks That Matter™ become clearer. Start with basic measurements, then refine scales of measure as you learn what truly drives value. A ‘fast’ system might initially mean ‘under 1 second response time’ but evolve into sophisticated performance profiles that optimise for different user contexts and business scenarios.

Real-World Implementation: Javelin’s QQO Framework

Some teams have already embraced this precision. Falling Blossoms’ Javelin process demonstrates operational definitions in practice through Quantified Quality Objectives (QQOs)—a systematic approach to transforming vague non-functional requirements into quasi or actual operational definitions.

Instead of accepting requirements like ‘the system should be reliable’ or ‘performance must be acceptable’, Javelin teams create detailed QQO matrices where every quality attribute gets operationally defined with:

  • Metric: Exact measurement method and scale
  • Current: Baseline performance (if known)
  • Best: Ideal target level
  • Worst: Minimum acceptable threshold
  • Planned: Realistic target for this release
  • Actual: Measured results for actively monitored QQOs
  • Milestone sequence: Numeric targets at specific dates/times throughout development

A Javelin team might operationally define ‘reliable’ as: ‘System availability measured monthly via automated uptime monitoring: 99.5% by March 1st (MVP launch), 99.7% by June 1st (full feature release), 99.9% by December 1st (enterprise rollout), with worst acceptable level never below 99.0% during any measurement period.’
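
One way to picture a QQO row is as a small record carrying the levels listed above plus an assessment rule. This is an illustrative sketch, not Javelin’s actual notation; the class and method names are invented:

```python
from dataclasses import dataclass, field

@dataclass
class QQO:
    name: str
    metric: str          # exact measurement method and scale
    worst: float         # minimum acceptable threshold
    planned: float       # realistic target for this release
    best: float          # ideal target level
    milestones: dict = field(default_factory=dict)  # milestone -> target

    def assess(self, actual: float) -> str:
        """Place a measured value against the defined levels."""
        if actual < self.worst:
            return "unacceptable"
        if actual >= self.planned:
            return "on target"
        return "acceptable"

reliability = QQO(
    name="reliable",
    metric="system availability %, measured monthly via uptime monitoring",
    worst=99.0, planned=99.5, best=99.9,
    milestones={"March 1st": 99.5, "June 1st": 99.7, "December 1st": 99.9},
)
print(reliability.assess(99.6))  # prints "on target"
```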

This transforms the entire conversation. Instead of debating what ‘reliable enough’ means, teams focus on achievable targets, measurement infrastructure, and clear success criteria. QQO matrices grow organically as development progresses, following just-in-time elaboration of folks’ needs. Teams don’t over-specify requirements months in advance; they operationally define quality attributes exactly as needed for immediately upcoming development cycles.

This just-in-time approach prevents requirements from going stale whilst maintaining precision where it matters. A team might start with less than a dozen operationally defined QQOs for an MVP, then expand to hundreds as they approach production deployment and beyond—each new QQO addressing specific quality concerns as they become relevant to actual development work.

Toyota’s Product Development System (TPDS) demonstrates similar precision in manufacturing contexts through Set Based Concurrent Engineering (SBCE). Rather than committing to single design solutions early, Toyota teams define operational criteria for acceptable solutions—precise constraints for cost, performance, manufacturability, and quality. They then systematically eliminate design alternatives, at scheduled decision points, that fail to meet these quantified thresholds, converging on optimal solutions through measured criteria rather than subjective judgement.

Both Javelin’s QQOs and Toyota’s SBCE prove that operational definitions work at scale across industries—turning fuzzy requirements into systematic, measurable decision-making frameworks that deliver value to the Folks That Matter™.

Practical Examples in Software Development

User Story Acceptance Criteria

Before: ‘As a user, I want the search to be fast so I can find results quickly.’

After: ‘As a user, when I enter a search query, I should see results within 1 second for 95% of searches, with a loading indicator appearing within 100ms of pressing enter.’

Bug Priority Classification

Before: ‘This is a critical bug.’

After: ‘Priority 1 (Critical): Bug prevents core user workflow completion OR affects >50% of active users OR causes data loss OR creates security vulnerability.’
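
A definition in that form is directly executable. A sketch (field names such as `blocks_core_workflow` are invented; a real tracker would map its own fields onto the rule):

```python
def is_priority_1(bug: dict) -> bool:
    """Apply the Priority 1 rule above: core workflow blocked OR >50% of
    active users affected OR data loss OR security vulnerability."""
    return bool(
        bug.get("blocks_core_workflow")
        or bug.get("affected_users_pct", 0) > 50
        or bug.get("data_loss")
        or bug.get("security_vulnerability")
    )

print(is_priority_1({"affected_users_pct": 62}))  # True
print(is_priority_1({"affected_users_pct": 12}))  # False
```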

Code Review Standards

Before: ‘Code should be clean and well-documented.’

After: Operationally defined code quality standards with measurable criteria:

Documentation Requirements:

  • 100% of public APIs include docstrings with purpose, parameters, return values, exceptions, and working usage examples
  • Complex business logic (cyclomatic complexity >5) requires inline comments explaining the ‘why’, not the ‘what’
  • All configuration parameters documented with valid ranges, default values, and business impact of changes
  • Value to the Folks That Matter™: Reduces onboarding time for new developers from 4 weeks to 1.5 weeks, cuts external API integration support tickets by 80%

Code Structure Metrics:

  • Functions limited to 25 lines maximum (excluding docstrings and whitespace)
  • Cyclomatic complexity below 8 per function as measured by static analysis tools
  • Maximum nesting depth of 3 levels in any code block
  • No duplicate code blocks exceeding 6 lines (DRY principle enforced via automated detection)
  • Value to the Folks That Matter™: Reduces bug fix time by 60%, enables 95% of feature requests to be completed within a single sprint
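
The structural limits are checkable from source alone. A rough sketch using Python’s `ast` module for the function-length rule (the docstring/whitespace exclusion is omitted for brevity, and cyclomatic complexity would need a dedicated tool such as radon):

```python
import ast

MAX_FUNCTION_LINES = 25  # threshold from the bullet above

def overlong_functions(source: str):
    """Return (name, line_count) for each function whose source span
    exceeds the limit."""
    offenders = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            span = node.end_lineno - node.lineno + 1
            if span > MAX_FUNCTION_LINES:
                offenders.append((node.name, span))
    return offenders

print(overlong_functions("def tiny():\n    return 1\n"))  # []
```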

Naming and Clarity:

  • Variable names must be pronounceable and searchable (no abbreviations except industry-standard: id, url, http)
  • Boolean variables/functions use positive phrasing (isValid not isNotInvalid)
  • Class/function names describe behaviour, not implementation (PaymentProcessor not StripeHandler)
  • Value to the Folks That Matter™: Reduces code review time by 40%, decreases bug report resolution from 3 days to 8 hours average

Security and Reliability:

  • Zero hardcoded secrets, credentials, or environment-specific values in source code
  • All user inputs validated with explicit type checking and range validation
  • Error handling covers all failure modes with logging at appropriate levels
  • All database queries use parameterised statements (zero string concatenation)
  • Value to the Folks That Matter™: Eliminates 90% of security vulnerabilities, reduces production incidents by 75%

Testing Integration:

  • Every new function includes unit tests with >90% branch coverage
  • Integration points include contract tests verifying interface expectations
  • Performance-critical paths include benchmark tests with acceptable thresholds defined
  • Value to the Folks That Matter™: Reduces regression bugs by 85%, enables confident daily deployments

Review Process Metrics:

  • Code reviews completed within 4 business hours of submission
  • Maximum 2 review cycles before merge (initial review + addressing feedback)
  • Review comments focus on maintainability, security, and business logic—not style preferences
  • Value to the Folks That Matter™: Maintains development velocity whilst ensuring quality, reduces feature delivery time by 25%

Performance Requirements

Before: ‘The dashboard should load quickly.’

After: ‘Dashboard displays initial data within 2 seconds on 3G connection, with progressive loading of additional widgets completing within 5 seconds total.’

The Competitive Advantage

Teams that master operational definitions gain significant competitive advantages:

  • Faster delivery cycles from reduced requirement clarification—deploy features 30-50% faster than competitors
  • Higher quality output through measurable standards—reduce post-release defects by 60-80%
  • Improved confidence from the Folks That Matter™ from predictable, verifiable results—increase project approval rates and budget allocations
  • Reduced technical debt through well-defined standards—cut maintenance costs whilst enabling rapid feature development
  • Better team morale from decreased frustration and conflict—retain top talent and attract better candidates

Most importantly: organisations that operationally define their quality criteria can systematically out-deliver competitors who rely on subjective judgement.

Start Today

Choose one ambiguous term your team uses frequently and spend 30 minutes defining it operationally. Ask yourselves:

  1. What value does this QQO deliver to the Folks That Matter™?
  2. What specific, observable criteria determine if this value is achieved?
  3. What scale of measure will we use—percentage, time, count, ratio?
  4. How will we measure this, and how often?
  5. What does ‘good enough’ look like vs. ‘exceptional’ for the Folks That Matter™?

Aim for precision that drives satisfaction of folks’ needs, not perfection. Even rough operational definitions linked to the needs of the Folks That Matter™ provide more clarity than polished ambiguity.

Implementation Strategy

Start Small and Build Consensus

Begin by operationally defining one or two concepts that cause the most confusion in your team. Start with:

  • Definition of ‘done’ for user stories linked to specific value for the Folks That Matter™
  • Bug severity levels tied to business impact measures
  • Performance benchmarks connected to user experience goals
  • Code standards that enable measurable delivery improvements

Define Scales of Measure

Write operational definitions that specify not just the criteria, but the scale of measure—the unit and method of measurement. Include:

  • Measurement method: How you will measure (automated monitoring, user testing, code analysis)
  • Scale definition: Units of measure (response time in milliseconds, satisfaction score 1-10, defect rate per thousand lines)
  • Measurement infrastructure: Tools, systems, and processes needed
  • Frequency: How often measurements occur and when they’re reviewed
  • Connection to the Folks That Matter™: What business need each measurement serves
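
Captured as structured data, one such definition might look like this (every field value below is an invented example, included only to show the shape):

```python
# One scale-of-measure record, mirroring the five bullets above.
checkout_speed = {
    "name": "checkout responsiveness",
    "method": "automated monitoring of synthetic checkout transactions",
    "scale": "milliseconds, 95th percentile over a rolling 7-day window",
    "infrastructure": "APM agent, dashboard, and alerting rule",
    "frequency": "measured continuously, reviewed weekly",
    "serves": "reduces checkout abandonment for the Folks That Matter™",
}

for field, value in checkout_speed.items():
    print(f"{field}: {value}")
```

Keeping the record next to the code means the definition is versioned, reviewable, and harder to quietly drop.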

Evolve Based on Learning

Operational definitions evolve as you learn what truly drives meeting the needs of the Folks That Matter™. Start with basic measurements, then refine scales as you discover which metrics actually predict success. Regular retrospectives can examine not just whether definitions were met, but whether they satisfied the intended needs of the Folks That Matter™.

Document and Automate

Store operational definitions in accessible locations—team wikis, README files, or project documentation. Automate verification through CI/CD pipelines, monitoring dashboards, and testing frameworks wherever possible. The goal is measurement infrastructure that runs automatically and surfaces insights relevant to the needs of the Folks That Matter™.

Conclusion

Operational definitions represent a paradigm shift from ‘we all know what we mean’ to ‘we are crystal clear about what value we’re delivering to the Folks That Matter™’. In software development, where precision enables competitive advantage and the satisfaction of the needs of the Folks That Matter™ determines success, this shift separates organisations that struggle with scope creep and miscommunication from those that systematically out-deliver their competition.

Creating operational definitions pays dividends in reduced rework, faster delivery, happier teams, and measurable value for the Folks That Matter™. Most importantly, it transforms software development from a guessing game into a needs-meeting discipline—exactly what markets demand as digital transformation accelerates and user expectations rise.

Operational definitions aren’t just about better requirements. They’re about systematic competitive advantage through measurable satisfaction of the needs of the Folks That Matter™.

Take action: Pick one fuzzy requirement from your current sprint. Define it operationally in terms of specific needs of the Folks That Matter™. Watch how this precision changes every conversation your team has about priorities, trade-offs, and success.

Further Reading

American Psychiatric Association. (2013). Diagnostic and statistical manual of mental disorders (5th ed.). American Psychiatric Publishing.

Beck, K. (2000). Extreme programming explained: Embrace change. Addison-Wesley.

Cockburn, A. (2004). Crystal clear: A human-powered methodology for small teams. Addison-Wesley.

DeMarco, T. (1982). Controlling software projects: Management, measurement, and estimation. Yourdon Press.

DeMarco, T., & Lister, T. (2013). Peopleware: Productive projects and teams (3rd ed.). Addison-Wesley.

Falling Blossoms. (2006). Our Javelin™ process (Version 2.0a). Falling Blossoms.

Gilb, T. (1988). Principles of software engineering management. Addison-Wesley.

Gilb, T. (2005). Competitive engineering: A handbook for systems engineering management using Planguage. Butterworth-Heinemann.

Gilb, T., & Graham, D. (1993). Software inspection. Addison-Wesley.

Hamilton, M. (1960). A rating scale for depression. Journal of Neurology, Neurosurgery, and Psychiatry, 23(1), 56-62.

Kennedy, M. N., & Harmon, K. (2008). Ready, set, dominate: Implement Toyota’s set-based learning for developing products and nobody can catch you. Oaklea Press.

Morgan, J. M., & Liker, J. K. (2006). The Toyota product development system: Integrating people, process, and technology. Productivity Press.

Sobel, A. E., & Clarkson, M. R. (2002). Formal methods application: An empirical tale of software system development. IEEE Transactions on Software Engineering, 28(3), 308-320.

W3C Web Accessibility Initiative. (2018). Web content accessibility guidelines (WCAG) 2.1. World Wide Web Consortium.

Ward, A. C. (2007). Lean product and process development. Lean Enterprise Institute.

Weinberg, G. M. (1985). The secrets of consulting: A guide to giving and getting advice successfully. Dorset House.

Yourdon, E. (1997). Death march: The complete software developer’s guide to surviving ‘mission impossible’ projects. Prentice Hall.

The OKR Racket

How Consultants Monetise Management Cowardice

Why the latest framework fad is perfect for people who profit from your incompetence

Here we go again. Management has found another silver bullet, another framework that will finally, finally solve all their organisational problems. Objectives and Key Results (OKRs) are just the latest in an endless parade of management fads that promise transformation whilst delivering mostly PowerPoint presentations and wasted time.

Let’s be brutally honest: OKRs are this decade’s equivalent of Six Sigma, which was the previous decade’s equivalent of Total Quality Management, which was the 90s’ equivalent of Business Process Reengineering. Same song, different acronym. Management consultants get rich, middle managers get busy, and actual productive work gets buried under layers of administrative theatre.

Admiral Grace Hopper, one of the wisest people in computing history, said it perfectly:

‘You don’t manage people; you manage things. You lead people.’

John Gall, who understood systems better than anyone, warned us decades ago:

‘That the system is the solution becomes the problem.’

OKRs are a perfect example—a system designed to solve alignment problems that becomes the alignment problem.

And Tom Gilb, who spent his career figuring out what actually works in complex organisations, taught us that ‘you can’t control what you can’t measure’—but he also warned that measuring the wrong things is worse than measuring nothing at all.

Read that again. You don’t manage people. You guide them. The system becomes the problem. And measuring the wrong things makes everything worse.

But most managers don’t want to guide because that requires courage, judgement, and personal accountability. It’s easier to ‘manage’ people through systems, frameworks, and processes because that lets you avoid the hard work of actually guiding human beings towards results.

Here’s what separates effective people in charge from incompetent ones: effective people solve real problems and take responsibility for results. Incompetent people collect frameworks and fads and make excuses.

And right now, incompetent managers everywhere are absolutely orgasmic over their latest excuse-making tool: Objectives and Key Results. OKRs are cocaine for people who’d rather manage spreadsheets than guide people.

What Are OKRs? Management Theatre for People Who Won’t Guide

OKRs break down into two parts, neither of which requires actual guidance skills:

Objectives are fluffy, feel-good statements that sound important in PowerPoint. ‘Improve customer satisfaction.’ ‘Become market leaders.’ ‘Drive innovation.’ Vague enough that they can never really fail, specific enough that fake bosses can pretend they’re providing direction instead of just avoiding the hard work of real guidance.

Key Results are where people who won’t guide get their measurement rocks off. ‘Increase NPS from 7 to 9.’ ‘Capture 25% market share.’ ‘Reduce churn by 15%.’ Numbers make people who refuse to guide feel scientific.

But here’s where Tom Gilb’s wisdom becomes crucial: these people are measuring the wrong things. Tom suggests that measurement is essential—’you can’t control what you can’t measure’—but he also warned that measuring activities instead of outcomes, measuring what’s easy instead of what matters, and measuring for the sake of the system instead of for the sake of the Folks That Matter™ is worse than not measuring at all.

OKRs almost always measure the wrong things. They measure what can be easily quantified in quarterly cycles rather than what actually meets the needs of the Folks That Matter™. They measure team activities rather than outcomes. They measure adherence to the process rather than progress towards meaningful goals.

The whole system runs on quarterly cycles because most people in positions of authority have the attention span and strategic thinking ability of caffeinated squirrels. They’d rather shuffle metrics than do the hard work of actually guiding people through complex challenges.

Remember what Grace Hopper taught us: you manage things, you guide people. OKRs are a system for managing people like they’re inventory. That’s not guidance—that’s cowardice.

You’re Not a Manager—You’re Supposed to Guide People

Let’s get something straight right now: the word ‘manager’ has rotted your brain. It’s made you think your job is to control people through systems instead of enabling them to meet folks’ needs.

Guiding people means having the courage to make difficult decisions. It means taking responsibility when things go wrong. It means supporting people to do their best work, not controlling them through elaborate measurement systems.

But guiding people is scary because it’s personal. When your guidance fails, there’s no framework to blame. There’s no system to point to. There’s just you, and your failure to guide effectively.

So instead, you hide behind ‘management’. You create OKR systems that let you pretend you’re guiding when you’re really just measuring. You build elaborate frameworks that give you the illusion of control without requiring any actual people skills.

Here’s the uncomfortable truth: people don’t need to be managed. They need to be supported. And if you can’t tell the difference, you have no business being in charge of anyone.

Stop Making Excuses—You’re The Problem

Before we go further, let me introduce you to a concept from Ray Immelman’s brilliant work: there are two types of people in positions of authority. Great Bosses and what he calls ‘Dead Bosses.’

Great Bosses understand the difference between managing and guiding. They support people and manage systems. They take responsibility for results, focus on the Folks That Matter™, hire good people, make tough decisions, and remove obstacles. When something goes wrong, they look in the mirror first.

Dead Bosses try to manage people like they’re inventory. They collect fads and make excuses. They think the right framework will solve their people problems. They’re scared to make real decisions, so they hide behind processes. When something goes wrong, they blame the framework, the market, or their people—anyone but themselves.

Dead Bosses are framework junkies because frameworks give them something to hide behind. ‘It’s not my fault the numbers are bad—people aren’t following the OKR process correctly!’

Bullshit. You’re supposed to guide people towards success. The results are your responsibility. Stop looking for systems to manage people and start supporting them towards better outcomes. Ask them what they need.

The Framework Addiction Cycle (And Why You Keep Falling for It)

John Gall understood something that most people in charge refuse to accept: ‘A complex system that works is invariably found to have evolved from a simple system that worked. A complex system designed from scratch never works and cannot be patched up to make it work.’

But that doesn’t stop incompetent people from trying. I’ve watched the same people cycle through framework after framework for decades:

  • 1990s: Total Quality Management
  • 2000s: Six Sigma
  • 2010s: Agile Everything
  • 2020s: OKRs

Each time, these people convince themselves they’ve found the holy grail. Each time, they design elaborate systems from scratch. Each time, exactly as Gall predicted, the systems don’t work and can’t be patched up to work.

But here’s where Gall’s deeper insight becomes terrifying: these systems develop their own agenda. The OKR system stops being about results and becomes about feeding the OKR system. People spend more time updating OKR dashboards than talking to customers. Teams optimise for OKR scores rather than customer value. The quarterly review process becomes more important than quarterly results.

As Gall warned: ‘The system always kicks back.’ OKRs, designed to create alignment, create misalignment. Designed to improve focus, they create distraction. Designed to drive results, they drive process compliance.

Here’s the pattern every single time:

  • Month 1: Person in charge reads about Framework X, gets excited about having a ‘solution’
  • Month 2: Expensive consultants arrive promising transformation, everyone drinks the Kool-Aid, consultant bank accounts get fatter
  • Months 3-6: Implementation hell, productivity crashes, good employees start job hunting, consultants bill for ‘change management support’
  • Months 7-12: Framework quietly dies whilst the person in charge discovers Framework Y, consultants pivot to selling the next shiny system
  • Repeat forever: Company culture becomes a graveyard of half-dead initiatives, consultants get rich, actual problems remain unsolved

Notice the pattern? Consultants are the drug dealers of the framework addiction cycle. They’re not selling solutions—they’re selling dependency. OKRs are perfect for this business model because they’re complex enough to require ‘expert’ help but vague enough that failure can always be blamed on implementation rather than the fundamental idiocy of the approach.

The best consultant gig in the world is one where you never have to show that your client’s business actually improved. OKRs deliver exactly that—months of billable workshops, coaching sessions, and ‘alignment facilitation’ with built-in excuses when nothing gets better.

You know why this keeps happening? Because it’s easier to implement a framework than to admit you don’t know how to guide people. It’s more comfortable to blame the system than to take responsibility for your results. And it’s easier to pay consultants to make you feel busy than to do the hard work of actually improving your business.

The system becomes the solution. The solution becomes the problem. And you become the person who can’t see that you’re the real problem.

Stop making excuses. Stop looking for silver bullets. Stop enriching consultants who profit from your incompetence. Your problems aren’t systematic—they’re personal. You’re just bad at supporting people, and no framework is going to fix that.

Why Bad Bosses Love OKRs (Hint: It’s Not About Results)

OKRs are perfect for incompetent people in charge because they solve all the wrong problems—and consultants absolutely love this dynamic:

They create the illusion of strategy: Instead of actually figuring out what the Folks That Matter™ need, bad bosses can spend months cascading objectives and aligning key results. It feels strategic without requiring any actual strategic thinking or personal accountability. Consultants love this because they can bill for endless ‘strategic alignment workshops’ without ever having to show that the Folks That Matter™ are happier or business results improved.

They delegate responsibility: Why make hard decisions when you can just set objectives and let the framework sort it out? Bad bosses love systems that make guiding people seem automatic because they’re terrified of being held accountable for actual decisions. Consultants love this because they can sell ‘OKR coaching’ and ‘implementation support’ without taking any responsibility for whether things actually get better.

They generate endless meetings: OKR planning sessions, alignment workshops, quarterly reviews, cascade meetings. Bad bosses mistake activity for results and confuse being busy with being effective. John Gall called this perfectly: ‘The system tends to oppose its own proper function.’ The OKR meetings become more important than the work the meetings are supposed to coordinate. Consultants absolutely love this because every meeting is billable hours, every workshop is revenue, and none of it requires them to actually improve business outcomes.

They measure everything except what matters: Tom Gilb spent decades helping organisations measure effectively. His core insight: measure what the Folks That Matter™ need, not what’s convenient for your process. But OKRs typically measure internal metrics (‘reduce deployment time by 20%’) instead of outcomes (‘increase customer satisfaction with product reliability’). They measure team activities instead of business results. They measure adherence to quarterly cycles instead of progress towards meaningful goals. Consultants love this because measuring the wrong things means they never have to prove their consulting actually works—the client has to blame poor ‘OKR adoption’ instead of poor consulting.

They produce pretty reports: Nothing makes an incompetent person in charge feel more important than a well-formatted OKR dashboard. All those numbers! All that alignment! It must be working! But as Gilb warned, measuring the wrong things systematically is worse than not measuring at all—because it gives you false confidence whilst you optimise for irrelevance. Consultants love dashboards because they look impressive and keep clients paying for ‘refinements’ to the system.

They provide built-in excuses: ‘We missed our targets because people didn’t embrace the OKR mindset.’ Translation: ‘It’s not my fault—it’s the system’s fault, or the people’s fault, or anyone’s fault but mine.’ The system designed to create accountability becomes the excuse for avoiding accountability. Consultants love this most of all because when OKRs inevitably fail to improve results, they can blame ‘change management’ or ‘cultural resistance’ rather than admit they sold a turkey.

They create dependency: Here’s the dirty secret consultants won’t tell you—OKRs are designed to be complex enough that you need ongoing ‘expert’ help to implement them correctly. The quarterly cycles create perpetual opportunities for ‘optimisation’ and ‘coaching’. The cascade complexity requires facilitation. The scoring methodology needs calibration. It’s the perfect consultant product: high complexity, low accountability, recurring revenue.

What OKRs Actually Do to Your Company (Whilst You’re Making Excuses)

Whilst bad bosses are masturbating over their OKR spreadsheets, here’s what’s happening to their companies:

The system takes over: John Gall observed that ‘systems tend to grow and encroach’. What starts as a simple quarterly goal-setting exercise metastasises into cascading alignment sessions, mid-quarter check-ins, OKR coaching, dashboard maintenance, scoring calibration meetings, and retrospective workshops. People spend more time feeding the OKR system than doing the work the system was supposed to organise.

Innovation dies: Nothing kills creativity faster than making everything measurable within 90 days. But here’s the thing—you don’t care about innovation. You care about covering your arse with metrics that make you look busy. Tom Gilb understood this: when you measure short-term activities instead of long-term value creation, you systematically destroy your ability to build anything meaningful.

Good people quit: High performers don’t need frameworks to stay focused. They need clear priorities, adequate resources, and people in charge who take responsibility instead of creating administrative bollocks. When you bury these people under OKR theatre, they leave for companies with competent guidance. As Gall predicted: ‘The system tends to oppose its own proper function’—OKRs, designed to retain talent, drive talent away.

Gaming becomes the job: Make the numbers the target, and people will hit the numbers by any means necessary. Teams manipulate metrics, focus on vanity projects, and optimise for looking good instead of being good. But hey, at least your OKR dashboard looks pretty. This is exactly what Gilb warned against: when you measure the wrong things, you get more of the wrong things.

Real problems hide: When everyone’s focused on hitting their OKR targets, the actual business problems—customer complaints, product failures, competitive threats—get ignored. The framework becomes a distraction from reality, which is exactly what bad bosses want. The system designed to surface problems becomes the problem that needs surfacing.

How Great Bosses Actually Work (No Frameworks Required)

Great Bosses don’t need OKRs because they understand what Grace Hopper taught us: they support people and manage systems, not the other way around. They also understand John Gall’s wisdom: simple systems that work are better than complex systems that don’t. And they apply Tom Gilb’s measurement principles: measure what the Folks That Matter™ value, not what’s convenient for your process.

Great Bosses hire adults: They find people who are better than them at specific jobs, then guide those people towards shared goals. They don’t need cascading objectives because they hire people who already understand what needs to be done and inspire them to do their best work.

Great Bosses communicate reality: Instead of setting arbitrary targets, they explain the business situation honestly—what’s working, what’s not, what needs to change. Then they guide competent people towards solutions instead of trying to manage them through metrics.

Great Bosses measure what matters: Following Gilb’s principles, they measure customer outcomes, not internal activities. They measure long-term value creation, not quarterly process compliance. They measure what their stakeholders—customers, employees, shareholders—actually care about, not what’s easy to put in a dashboard. When they measure ‘customer satisfaction’, they mean actual customer feedback, not proxy metrics like ‘response time to support tickets’.

Great Bosses evolve gradually: Instead of implementing complex systems from scratch (which Gall pointed out never work), they make small improvements to things that already work. They don’t redesign their entire goal-setting process every year—they incrementally improve their communication, their decision-making, and their obstacle removal.

Great Bosses remove obstacles: Whilst fake bosses are creating new processes to manage people, Great Bosses are eliminating the bureaucratic bollocks that prevents good work from happening. They understand that their job is to make the system serve the people, not the other way around.

Great Bosses make decisions: When there’s ambiguity or conflict, Great Bosses actually decide things instead of hoping a framework will decide for them. They take responsibility for those decisions and guide their people through the consequences.

Great Bosses stay consistent: They don’t chase new frameworks every year, because they’ve figured out how to support people effectively and they stick with it. Their teams aren’t exhausted by constant change because effective support provides stability and direction. If something works, don’t fix it.

The Real Cost of Your Framework Addiction (And Why You Need to Stop)

Every framework-addicted person in charge thinks their process obsession is harmless. ‘We’re just trying to improve!’ they say. But this has real consequences—and a whole consulting industry getting rich off your incompetence:

Your best people leave: Nobody wants to work for someone who’s more interested in process optimisation than human support. High performers go where they can do meaningful work without administrative theatre.

Your consultant bills skyrocket: OKRs are a consultant’s dream—complex enough to require ‘expert’ facilitation, ongoing enough to generate recurring revenue, and vague enough that failure can always be blamed on ‘poor implementation’ rather than a fundamentally stupid system. You’ll pay for initial training, quarterly workshops, mid-cycle coaching, dashboard setup, scoring calibration, change management support, and ‘OKR maturity assessments’. The consultants get rich whilst your real business problems remain exactly the same.

Your culture becomes cynical: After watching people in charge chase fad after fad, employees stop believing anything will actually change. They develop learned helplessness and stop trying to improve anything. They’ve seen the consultant parade before—expensive suits promising transformation, delivering PowerPoints, and disappearing as the results fail to materialise.

Your competitive advantage erodes: Whilst you’re in OKR planning sessions paying consultants to facilitate alignment workshops, your competitors are shipping products, talking to the Folks That Matter™, and solving real problems.

Your results get worse: All this process creates layers of bureaucracy that slow decision-making and kill initiative. But you’ll just blame the implementation instead of admitting the whole thing was stupid to begin with. And your consultants will happily sell you more ‘refinements’ to fix the problems their advice created.

The Bottom Line: Take Responsibility or Get Out of Authority

Here’s what every framework-addicted person in charge might choose to understand: your employees don’t need another system. Your customers don’t care about your OKR scores. Your business doesn’t need more measurement—it needs better guidance.

That means YOU need to get better. Not your process. Not your system. YOU.

The best people in charge I know are boring as hell. They hire good people, communicate clearly, make decisions quickly, remove obstacles, and take responsibility for results. They don’t have fancy frameworks because they don’t need them. They have something better: competence.

So here’s my challenge to every excuse-making, framework-addicted person in authority reading this: go one year without implementing a single new system. Instead:

  • Take responsibility for your current results instead of blaming external factors
  • Talk to your customers every week until you understand their problems better than they do
  • Have honest conversations with your team about what’s working and what isn’t—and actually listen
  • Make the hard decisions you’ve been avoiding whilst you were playing with spreadsheets
  • Remove stupid policies that prevent good work from happening
  • Measure customer satisfaction and business results—the stuff that actually matters to success

But I know most of you won’t do this. It’s too hard. It requires actual people skills instead of process management. It means being responsible for outcomes instead of hiding behind frameworks. It means admitting that you’re the problem, not the system.

So go ahead, implement your OKRs. Join the long line of incompetent people in charge who think the right system will fix their broken guidance abilities. Just don’t be surprised when your best people quit, your results get worse, and your competitors eat your lunch whilst you’re updating your quarterly scorecards.

Here’s the truth nobody wants to tell you: you don’t have a framework problem. You have a people problem. And that problem is you.

Grace Hopper understood the fundamental distinction: you manage things, you guide people. John Gall warned us that complex systems designed from scratch never work and always develop their own agenda. Tom Gilb taught us that measuring the wrong things systematically is worse than not measuring at all.

OKRs violate all three principles. They try to manage people like things. They’re complex systems designed from scratch that inevitably take over the organisation they were meant to serve. And they measure internal activities and process compliance instead of the needs of the Folks That Matter™.

Great Bosses build great companies. Bad bosses build great spreadsheets.

Stop making excuses. Start taking responsibility. Or get out of positions of authority and let someone competent do the job.

The choice is yours.


Further Reading

Gall, J. (2002). The systems bible: The beginner’s guide to systems large and small (3rd ed.). General Systemantics Press.

Gilb, T. (1988). Principles of software engineering management. Addison-Wesley.

Gilb, T. (2005). Competitive engineering: A handbook for systems engineering, requirements engineering, and software engineering using Planguage. Butterworth-Heinemann.

Immelman, R. (2003). Great boss dead boss. Stewart Philip International.

Winget, L. (2004). Shut up, stop whining, and get a life: A kick-butt approach to a better life. Wiley.

Winget, L. (2007). It’s called work for a reason! Your success is your own damn fault. Gotham Books.

Beyond Giving Voice to Values: Why Listening to Folks’ Actual Needs Matters More

The concept of “giving voice to values” has become a cornerstone of how organisations approach ethics and workplace culture. The notion is compelling: encourage people to speak up when they witness behaviour that conflicts with their moral principles, create safe spaces for ethical concerns, and build cultures where doing the right thing isn’t just tolerated but invited and celebrated.

But there’s a fundamental problem with this approach—it assumes that articulating values is the same as addressing real human needs. In practice, focusing primarily on values can create a kind of ethical theatre where the performance of moral clarity takes precedence over solving actual problems people face.

The Values Trap and the Performance of Virtue

When organisations emphasise giving voice to values, they often end up with beautifully crafted mission statements, inspiring town halls, and employees who can eloquently describe what the company stands for. Yet the same workplaces may struggle with basic issues: people working unsustainable hours, feeling disconnected from meaningful work, or lacking the resources to do their jobs effectively.

This disconnect reveals a deeper issue: much of what passes for values-driven culture is actually virtue signalling—the conspicuous expression of moral positions designed to demonstrate good character rather than create meaningful change. Managers hold forums about “creating safe environments” whilst maintaining practices that punish honest feedback. Organisations trumpet their commitment to “work-life balance” whilst expecting immediate responses to emails sent at midnight.

When people talk about values in the workplace, these conversations tend to operate at a high level of abstraction, making them perfect vehicles for this kind of performative morality. Someone might speak up about “integrity” or “respect”, but these concepts can mean vastly different things to different people. More importantly, they don’t necessarily point towards concrete solutions—which is often exactly the point.

As software engineering pioneer Tom Gilb has long argued, anything that cannot be quantified cannot be properly managed or improved. Values discussions typically resist quantification entirely, making them immune to both measurement and meaningful progress. The result is wishy-washy rhetoric that sounds inspiring but changes nothing.

The Authority Problem

There’s an even more troubling dynamic at play: the tendency of people in positions of authority to dictate what values others should hold and how they should express them. This top-down approach to moral discourse creates several problems.

First, it assumes that those in power are uniquely qualified to determine what constitutes ethical behaviour for everyone else. A CEO who’s never worked a frontline job may have strong opinions about “customer service excellence”, but little understanding of what it actually takes to maintain that standard under real-world pressures.

Second, when authority figures prescribe values, they often reflect the perspectives and priorities of those already in power. The values that get emphasised tend to be ones that preserve existing hierarchies and ways of operating, rather than challenging systems that might benefit those at the top at the expense of everyone else.

Third, dictated values create compliance rather than genuine commitment. When people are told what to care about rather than being asked what they need, the result is often superficial adherence to stated principles whilst underlying problems persist or worsen.

The Antimatter Principle: A Different Approach

This problem isn’t new, and some thinkers have proposed radically different approaches. I have what I call the Antimatter Principle. The principle is superbly simple: “Attend to folks’ needs.”

The Antimatter Principle cuts through the abstraction and performance of values-based approaches by focusing directly on what people actually need to thrive. Rather than debating what “respect” means, it asks: what specific things do people need to feel respected? Rather than proclaiming commitment to “work-life balance,” it investigates what concrete changes would help people manage their work and personal lives more effectively.

Discovering needs requires productive, skilled dialogue between everyone involved—not the superficial conversations that typically pass for workplace communication, but deep, empathetic listening that gets to the heart of what people actually need. The focus shifts from abstract principles to concrete human experiences that can be understood, quantified, and addressed.

The Power of Needs-Based Dialogue and Quantification

Consider the difference between these two statements:

“I value work-life balance” versus “I need predictable schedules so I can pick up my children from school by 3:30 PM at least four days per week.”

The first is a value statement—noble, but vague, and easily co-opted by those who want to appear enlightened without changing anything substantial. The second identifies a specific need that can be both quantified and addressed through concrete actions: adjusted meeting times, flexible scheduling policies, or better project planning.

Following Gilb’s emphasis on quantified requirements in software engineering, we can see how the measurable version transforms the conversation entirely. Instead of debating abstract concepts, we can track progress: How many people currently achieve their desired pickup times? What percentage of meetings currently end before 3 PM? How might we restructure workflows to increase these metrics?
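The shift from value statement to trackable metric is small enough to sketch in code. The meeting names, times, and the 3 PM cutoff below are entirely made-up illustrations of the idea, not data from any real organisation:

```python
from datetime import time

# Hypothetical meeting records: (name, end time). In practice these would
# come from a calendar export; every value here is illustrative.
meetings = [
    ("Sprint review",  time(14, 30)),
    ("Design sync",    time(15, 45)),
    ("1:1",            time(14, 0)),
    ("Quarterly plan", time(16, 30)),
    ("Stand-up",       time(9, 15)),
]

CUTOFF = time(15, 0)  # the quantified need: meetings end before 3 PM


def pct_meetings_before(records, cutoff):
    """Percentage of meetings ending strictly before the cutoff time."""
    if not records:
        return 0.0
    early = sum(1 for _, end in records if end < cutoff)
    return 100.0 * early / len(records)


score = pct_meetings_before(meetings, CUTOFF)
print(f"{score:.0f}% of meetings end before {CUTOFF:%H:%M}")
```

Once the need is expressed this way, "are we improving?" becomes a weekly number rather than a debate.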

When we shift from giving voice to values to giving voice to quantified needs, several things happen:

Power dynamics become more transparent. It’s easier to dismiss someone’s “values” as misguided than to ignore their concrete, measurable needs. When an employee says they need response times to critical emails reduced from the current average of 3.2 days to under 24 hours, it’s harder for a manager to respond with platitudes about “taking ownership.”

Specificity replaces abstraction. Instead of debating what “fairness” means in the abstract, people can discuss specific situations where current processes create inequitable outcomes—and quantify those outcomes. How long do different types of requests take to process? What percentage of people feel their contributions are recognised? These questions have answers.

Solutions become clearer and measurable. It’s hard to operationalise “respect”, but it’s straightforward to address someone’s need for clearer communication about project expectations—and to measure whether communication has actually improved by tracking metrics like the percentage of projects with clearly defined success criteria or the frequency of status updates.

Progress becomes visible. Abstract values discussions can continue indefinitely without any indication of whether things are getting better or worse. Quantified needs create benchmarks that make improvement—or the lack thereof—immediately apparent.

Empathy increases through shared understanding. Abstract values can feel preachy or judgemental, especially when they’re handed down from above. Specific, quantified needs—like wanting recognition for contributions (perhaps measured by the frequency of public acknowledgement) or needing quiet space to concentrate (perhaps measured by decibel levels or interruption frequency)—are relatable human experiences that transcend hierarchical boundaries.

Making the Shift Through Quantification

Organisations that want to move beyond values rhetoric towards meaningful change can start by reframing their conversations around quantifiable outcomes:

Instead of asking “What are our values?” ask “What do people need to do their best work, and how will we know when they’re getting it?”

Rather than creating spaces to voice ethical concerns, create mechanisms for people to articulate practical needs with measurable success criteria—and more importantly, to track whether those needs are being met over time.

Replace abstract discussions about culture with concrete conversations about working conditions, resource allocation, and structural barriers that prevent people from thriving—all of which can be measured and monitored.

Most importantly, resist the urge to have these conversations flow primarily from the top down. The people best positioned to identify what’s needed are often those furthest from positions of formal authority—the ones actually doing the work, serving the customers, and experiencing the day-to-day reality of organisational life. They’re also often best placed to suggest meaningful metrics.

This doesn’t mean values are irrelevant. Underlying principles still matter. But those principles should emerge from and serve the goal of meeting measurable human needs, not function as moral decorations designed to make those in charge feel enlightened.

The Ripple Effect of Quantified Progress

When organisations prioritise understanding and addressing people’s actual, measurable needs, something interesting happens. The values they claim to hold—things like respect, integrity, and care—start manifesting naturally in how work gets done and how people treat each other.

An employee who has their quantified need for professional development met (perhaps measured by training hours, skill assessments, or career progression rates) is more likely to extend similar support to colleagues. A team that gets the resources they need to succeed (measured by project completion rates, quality metrics, or stress indicators) is more likely to approach challenges with integrity rather than cutting corners. A workplace that addresses people’s need for open communication (perhaps measured by speaking-up frequency, error reporting rates, or anonymous feedback scores) will see more honest communication and ethical behaviour.

This organic development of ethical culture is far more robust than the brittle veneer created by top-down values initiatives. It’s also much harder to fake, which makes it a more reliable indicator of actual organisational health. Numbers, as Gilb would emphasise, don’t lie—or at least they lie less convincingly than inspiring speeches about company values.

Moving Forward with Quantifiable Impact

The path from values to needs isn’t about abandoning moral principles. It’s about recognising that those principles only have meaning when they translate into concrete, quantifiable actions that improve people’s actual experiences. It’s also about acknowledging that the people best positioned to identify what’s needed may not be the ones currently holding microphones at company all-hands meetings.

The next time you’re in a meeting where someone talks about “giving voice to values”, try asking a different question: “What do you need right now that would help you do your best work, and how would we measure whether you’re getting it?” You might be surprised by how much more productive—and ultimately more ethical—the conversation becomes.

After all, the most profound values are often expressed not through eloquent statements about what we believe, but through consistent, measurable actions that demonstrate we care enough to listen to what people actually need and track whether we’re delivering it. The most authentic change comes not from proclaiming virtue from positions of authority, but from creating conditions where everyone can articulate their quantified needs and see measurable progress towards meeting them.

As Tom Gilb has consistently demonstrated throughout his work on software engineering management and evolutionary project development, the power of quantification lies not just in measurement for its own sake, but in its ability to make the invisible visible, the vague specific, and the impossible achievable. When we apply this same rigour to human needs in the workplace, we transform values from performance art into measurable progress towards better working lives for everyone.

Further Reading

Gentile, M. C. (2010). Giving voice to values: How to speak your mind when you know what’s right. Yale University Press.

Gilb, T. (1988). Principles of software engineering management. Addison-Wesley.

Gilb, T. (2005). Competitive engineering: A handbook for systems engineering, requirements engineering, and software engineering using Planguage. Butterworth-Heinemann.

Gilb, T., & Graham, D. (1993). Software inspection. Addison-Wesley.

Kaptein, M. (2019). The moral entrepreneur: A new component of ethical leadership. Journal of Business Ethics, 156(4), 1135-1150.

Marshall, B. (2013, October 12). The antimatter principle. FlowChainSensei. https://flowchainsensei.wordpress.com/2013/10/12/the-antimatter-principle/

Rosenberg, M. B. (2003). Nonviolent communication: A language of life (2nd ed.). PuddleDancer Press.

Schwartz, S. H. (2012). An overview of the Schwartz theory of basic values. Online Readings in Psychology and Culture, 2(1), 1-20.

Treviño, L. K., & Brown, M. E. (2004). Managing to be ethical: Debunking five business ethics myths. Academy of Management Executive, 18(2), 69-81.

Weaver, G. R., Treviño, L. K., & Cochran, P. L. (1999). Integrated and decoupled corporate social performance: Management commitments, external pressures, and corporate ethics practices. Academy of Management Journal, 42(5), 539-552.

What Makes a Great User Story?

A great user story accurately pinpoints what people truly need from your product and translates those needs into guidance that development teams can easily understand and act upon. It’s worth noting that “user story” is actually a misnomer – these might better be called “Folks That Matter™ stories” since they centre on real people with real needs, not just abstract “users” of a system.

Core Components

While there are many formats for writing these stories, the essential components remain consistent: identifying the Folks That Matter™, their needs, and the benefits they’ll receive. The story should clearly communicate who needs the feature, what they need, and most importantly, why they need it.

The Living Nature of Stories

Folks That Matter™ stories aren’t static artefacts – they evolve, morph, and grow across numerous iterations. Like elements in a Needsscape (the visualisation of all the folks that matter and their changing needs), stories adapt as we gain deeper understanding of people’s requirements. What begins as a simple narrative might develop into a complex web of interconnected needs as teams learn more through development cycles, feedback loops and product deployments.

Essential Qualities

Great Folks That Matter™ stories share several important characteristics:

  • They can be developed independently from other stories
  • Their details remain open to discussion and refinement
  • They deliver clear value to the Folks That Matter™
  • Teams can reasonably estimate the effort required
  • They’re focused enough to complete in a single iteration
  • They include clear criteria for testing and validation

Focus on Needs

The most effective Folks That Matter™ stories focus on identifying and attending to needs rather than implementing specific solutions. They describe outcomes and the results folks gain, not the technical implementation. This gives development teams space to find the best technical approaches.

Clear Acceptance Criteria

Each Folks That Matter™ story includes explicit acceptance criteria that define when the story is complete and needs have been met. Such criteria should be testable, quantified (cf. Gilb), and agreed upon by all the Folks That Matter™.
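A quantified acceptance criterion can be executable, which is the point of making it testable. The sketch below assumes a hypothetical criterion ("95% of page loads complete within 800 ms") and simulated measurements; the nearest-rank percentile is one common definition, not a prescription:

```python
import math

def percentile(samples, pct):
    """Nearest-rank percentile: the value at rank ceil(pct/100 * n)."""
    ordered = sorted(samples)
    rank = max(1, math.ceil(pct / 100 * len(ordered)))
    return ordered[rank - 1]

def meets_criterion(load_times_ms, target_ms=800, pct=95):
    """True when the pct-th percentile load time is within the target."""
    return percentile(load_times_ms, pct) <= target_ms

# Simulated measurements (ms) from a hypothetical test run.
samples = [320, 410, 380, 760, 510, 905, 450, 600, 470, 530]
print(meets_criterion(samples))  # False: the 905 ms outlier breaches the target
```

Note how the criterion leaves the implementation entirely open: any design that brings the 95th percentile under target passes.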

Summary

Effective Folks That Matter™ stories serve as bridges between human needs and technical solutions. They identify the Folks That Matter™, articulate their genuine needs, and provide development teams with clear guidance – while leaving room for creativity in implementation. Rather than static requirements documents, they function as living artefacts that evolve through conversation, iteration, and feedback. By focusing on outcomes rather than specifications, and by including clear, quantified acceptance criteria, these stories help teams build products that truly meet people’s needs—the essence of successful product development and the cornerstone of navigating the broader Needsscape of any organisation.

The Theatre of Optics

The Illusion of Action: Perception vs. Genuine Problem-Solving

Humans have developed a remarkable skill that transcends mere performance—the art of appearing to address problems whilst artfully avoiding their effective resolution. This phenomenon is not merely a quirk of our genes but a deeply entrenched mechanism that permeates political institutions, corporate environments, and bureaucratic structures with remarkable consistency.

Imagine a world where the energy invested in crafting the perception of action were redirected towards actual problem-solving. Where press conferences, policy statements, and corporate initiatives were judged not by their blustering rhetoric, but by their measurable, tangible impact. Such a vision seems almost revolutionary in its simplicity, yet remains frustratingly distant from our current reality.

The disconnect between appearance and substance has become so normalised that we frequently fail to distinguish between meaningful action and mere elaborate theatre. Politicians deliver impassioned speeches that sound transformative yet change nothing. Business leaders launch initiatives wrapped up in impressive language, creating the illusion of progress whilst maintaining precisely the status quo.

Problem Management as Performance

In both governmental and business spheres, we see sophisticated dances of seeming proactivity. Leaders, politicians and executives have evolved an intricate set of strategies designed to create the perception of action without any real action. This involves overblown press releases, meticulously staged press conferences, and grandiose statements that sound remarkably impressive yet deliver minimal concrete outcomes.

Why Perception Trumps Substance

The Political Machine

Political systems and businesses are particularly adept at this performative approach. Politicians frequently invest substantially more energy in crafting narratives about potential solutions than in implementing genuine, transformative changes. It’s almost as if they have no clue about even the basics of how to effect real change. The electoral cycle rewards those who can convincingly blather about progress rather than those who quietly and effectively resolve systemic challenges.

Business’s Cosmetic Approach

Corporate environments mirror their political cousins with remarkable fidelity. Companies frequently launch elaborate initiatives that look impressive in annual reports, investor presentations and the media, but create zero real-world impact (aside from spending even more money on bullshit rhetoric and actions). These initiatives aim to demonstrate corporate responsibility and commitment to change without fundamentally altering existing power structures or addressing root problems.

The Psychology of Perceived Action

Why We Fall for the Illusion

Humans are remarkably susceptible to the appearance of action. Our cognitive biases prefer a compelling narrative of potential change over the often mundane, incremental work of just getting on with fixing things. This psychological vulnerability allows leaders in both politics and business to consistently profess change rather than actually deliver it.

Breaking the Cycle

Demanding Genuine Accountability

To move beyond this performative approach, society might choose to recognise its vulnerability to bullshit, and develop effective ways of both seeing it and rejecting it. This requires:

  • Cultivating a culture that values quantifiable outcomes over hand-wavy flourishes
  • Developing robust, independent approaches to assessment
  • Choosing metrics that reflect the genuine needs of the people affected, from their point of view
  • Encouraging transparency and genuine quantified reporting
  • Supporting teams and organisations that demonstrate authentic problem-solving approaches

Conclusion

The chasm between being seen to address a problem and actually resolving it represents one of the most significant challenges in contemporary organisational and political life. Until we collectively demand and reward genuine, substantive action, we will continue to be governed by the theatre of perceived progress – the Theatre of Optics.

Enhancing Software Development Outcomes

A Cornucopia of Techniques

In the realm of software development, teams have at their disposal a rich array of techniques designed to raise productivity and outcomes. These techniques, evolved over decades, and championed by thought leaders in their respective fields, offer unique approaches to common challenges. Let’s explore some of the most notable ones:

Gilb’s Evolutionary Project Management (Evo)

Tom Gilb’s Evo technique emphasises incremental delivery and the use of quantification, focusing on delivering measurable value to the Folks That Matter™ early and often throughout the development lifecycle.

Goldratt’s Theory of Constraints (TOC)

Eliyahu Goldratt’s TOC encourages teams to identify and manage the primary bottlenecks in their processes, thereby improving overall system performance.

Ackoff and Systems Thinking

Russell Ackoff’s techniques promote viewing problems holistically, considering the interconnections between various parts of a system rather than addressing issues in isolation.

Seddon’s Vanguard Method

John Seddon’s Vanguard method advocates for understanding work as a system, focusing on customer demand and designing the organisation to meet that demand effectively.

Rother’s Toyota Kata

Mike Rother’s Toyota Kata is a practice routine that helps teams develop scientific thinking skills, fostering a culture of continuous improvement and adaptation.

Deming’s System of Profound Knowledge

W. Edwards Deming’s System of Profound Knowledge is a management philosophy that emphasises systems thinking, understanding variation, and the importance of intrinsic motivation in the workplace. SoPK consists of four main themes:

  1. Appreciation for a System
    • Understanding how different parts of an organisation interact and work together
    • Recognising that optimising individual components doesn’t necessarily optimise the whole system
  2. Knowledge about Variation
    • Understanding the difference between common cause and special cause variation
    • Recognising when to take action on a process and when to leave it alone
  3. Theory of Knowledge
    • Emphasising the importance of prediction in management
    • Understanding that all management is prediction and that learning comes from comparing predictions with outcomes
  4. Psychology
    • Understanding human behaviour and motivation
    • Recognising the importance of intrinsic motivation over extrinsic rewards and punishments
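To make the second theme concrete, here’s a minimal sketch (in Python, with invented defect counts) of how an XmR-style control chart separates common-cause noise from special-cause signals. The function name and data are illustrative; the 1.128 divisor is the standard bias-correction constant for estimating sigma from the average moving range:

```python
# XmR (individuals) chart sketch: estimate sigma from the average moving
# range (MR-bar / 1.128), then flag points beyond the 3-sigma limits.
from statistics import mean

def special_causes(samples):
    """Return indices of points outside the natural process limits."""
    centre = mean(samples)
    moving_ranges = [abs(b - a) for a, b in zip(samples, samples[1:])]
    sigma_hat = mean(moving_ranges) / 1.128  # standard XmR conversion
    upper = centre + 3 * sigma_hat
    lower = centre - 3 * sigma_hat
    return [i for i, x in enumerate(samples) if x > upper or x < lower]

# Weekly defect counts: routine (common-cause) noise plus one genuine signal.
counts = [12, 14, 11, 13, 12, 15, 13, 12, 14, 40]
print(special_causes(counts))  # -> [9]: only the spike warrants intervention
```

Points inside the limits are common-cause variation – acting on them individually is tampering; only points outside the limits call for investigation.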

Marshall’s Organisational Psychotherapy

My own field of Organisational Psychotherapy focuses on techniques for addressing the collective assumptions and beliefs of an organisation, aiming to improve outcomes and overall effectiveness through overhauling these shared assumptions.

The Adoption Quandary

Whilst these various techniques offer glittering avenues for improvement, many development teams find themselves at a crossroads. The crux of the matter lies in two key questions:

  1. Will the effort invested in mastering one or more of these techniques yield a worthwhile return?
  2. More fundamentally though, can we muster the motivation to make the necessary effort?

The Crux: Self-Motivation

The second question is the more critical of the two. It’s not merely about the potential payoff; it’s about the willingness to embark on the journey of learning and mastery in the first place. Crucially, this motivation must emanate from within the team itself, rather than relying on external factors.

Surmounting Inertia

Change is inherently challenging, and the comfort of familiar practices can be a powerful deterrent to adopting new techniques. Teams rarely find the inner drive to overcome this inertia and push themselves towards new horizons.

Nurturing a Desire for Self-Betterment

Fostering a culture that values learning and self-betterment is paramount. When team members view challenges as opportunities for growth rather than insurmountable obstacles, they’re more likely to embrace new techniques. This mindset shift must be initiated and nurtured by the team itself.

Peer-Driven Inspiration

In the all-too-common absence of top-down motivation, teams can look to each other for inspiration and encouragement. By sharing successes, discussing challenges, and collaboratively exploring new techniques, team members can create a supportive environment that fuels self-betterment.

Individual Responsibility

Each team member bears the responsibility for their own personal and professional development. By setting personal goals for improvement and actively seeking out opportunities to learn and apply new techniques, individuals can drive the team’s overall progress.

Conclusion

Whilst the techniques available to improve development team outcomes are legion, the true challenge lies not in their complexity or the time required to master them. Rather, it’s in cultivating the self-motivation to pursue excellence and adopt such techniques.

As we ponder the question, “Can we be bothered to make the effort to improve ourselves, our capabilities and our outcomes?”, we must remember that the most successful teams are those who answer with a resounding “Yes” – not because they’re compelled to, but because they genuinely desire to excel. It is this intrinsic commitment to growth and improvement that ultimately distinguishes high-performing teams from the rest. And if the outcomes are simply making the rich (management, shareholders) richer, then none of this is likely to happen.

The journey of improvement commences with a single step, taken not because someone else pushed us, but because we ourselves choose to move forward. In the end, the power to transform our outcomes lies within our own hands. The techniques are there, waiting to be explored and mastered. The question remains: are we ready to take steps towards a better future for ourselves, our teams and our lives? Do we need it?

Postscript

By the bye, this subject was the topic of my keynote at Agile Spain on 2 December 2016, in Vitoria-Gasteiz.

The “Good Enough” Sweet Spot

[Tl;Dr: “Good enough” means optimising for best meeting all the needs of the Folks That Matter™]

The Perils of Over-Engineering

In our quest for excellence, it’s tempting to over-engineer solutions, pouring needless resources into perfecting every tiny detail. However, this pursuit of flawlessness often comes at a steep price. Over-engineering can lead to diminishing returns, where the marginal benefits of additional effort become negligible. It can also result in unnecessary complexity, making systems harder to maintain and adapt.

The Pitfalls of Under-Engineering

On the flip side, under-engineering can be equally detrimental. Cutting corners or settling for subpar solutions may seem like a shortcut to efficiency, but it often leads to technical debt, compromised quality, and long-term sustainability issues. Under-engineered products or processes are more prone to failure, necessitating costly reworks or replacements down the line.

Striking the “Good Enough” Balance

Between these two extremes lies the “good enough” sweet spot – a delicate balance that maximises value while minimising waste. Embracing the “good enough” mindset means understanding when to invest resources and when to call it a day. It’s about recognising that perfection is an asymptote that can never be reached, and that diminishing returns inevitably set in.

The “Good Enough” Approach

Adopting a “good enough” approach involves setting realistic goals and prioritising the most critical aspects of a project or product. It means focusing on core functionality and user needs, rather than getting bogged down in superfluous features or tiny optimisations. By identifying the minimum viable product (MVP) and iterating from there, teams can meet folks’ needs faster and adapt more readily to changing requirements.

Quantifying the “Good Enough” Threshold

Of course, to deliver just what’s good enough, we have to know what’s good enough. Choosing to quantify the qualitative aspects of deliverables can help (Cf. Gilb).

Quantifying the Qualitative

Defining “good enough” can be challenging, especially when dealing with qualitative aspects such as user experience, design aesthetics, or customer satisfaction. However, by quantifying these qualitative elements, teams can establish more objective criteria and benchmarks for what constitutes “good enough.”

Leveraging Data and Metrics

One approach is to leverage data and metrics to measure and track qualitative aspects. For example, user testing and feedback can provide numerical scores for usability, intuitiveness, and overall satisfaction. Analytics data can reveal user behaviour patterns, highlighting areas of friction or success. Even design aesthetics can be quantified through techniques like preference testing or eye-tracking studies. (See also: Gilb: Competitive Engineering).

Defining Acceptance Criteria

Another powerful tool is setting clear acceptance criteria upfront. By collaborating with stakeholders and subject matter experts, teams can define specific, measurable criteria that must be met for a deliverable to be considered “good enough.” These criteria can encompass functional requirements, performance benchmarks, accessibility standards, and qualitative thresholds based on user feedback or industry best practices.
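By way of illustration, acceptance criteria of this kind can be expressed directly as executable checks. A minimal Python sketch – the criterion names and thresholds here are made up, not prescriptive:

```python
# Acceptance criteria as explicit, measurable thresholds, rather than
# appeals to 'goodness'. Names and thresholds are illustrative only.
CRITERIA = {
    "p95_response_ms": lambda v: v <= 800,     # performance benchmark
    "task_success_rate": lambda v: v >= 0.90,  # usability threshold
    "accessibility_score": lambda v: v >= 95,  # e.g. automated audit score
}

def good_enough(measurements):
    """Pass only if every criterion has a measurement meeting its threshold."""
    failures = [name for name, meets in CRITERIA.items()
                if name not in measurements or not meets(measurements[name])]
    return (not failures, failures)

release = {"p95_response_ms": 640, "task_success_rate": 0.93,
           "accessibility_score": 97}
print(good_enough(release))  # -> (True, [])
```

Note that a missing measurement counts as a failure: if we haven’t measured it, we can’t claim it’s good enough.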

Prioritising and Iterating

Once acceptance criteria are established, teams can prioritise the most critical aspects and focus their efforts on meeting those thresholds. By adopting an iterative approach, they can continuously refine and enhance the deliverables, incorporating feedback and adapting to evolving needs while maintaining a “good enough” baseline.

Embracing a Quantification-Driven Approach

Quantifying qualitative aspects requires a data-driven mindset within the organisation. Teams must be equipped with the tools, skills, and processes to collect, analyse, and act upon relevant data. Additionally, fostering a culture of continuous learning and experimentation can help, allowing for ongoing refinement and optimisation based on empirical evidence.

By quantifying qualitative aspects and establishing objective criteria, teams can more effectively arrive at the “good enough” sweet spot. This approach ensures that resources are allocated judiciously, core needs are met, and a solid foundation is established for ongoing iteration and improvement.

Embracing Iteration and Continuous Improvement

The beauty of the “good enough” philosophy is that it doesn’t preclude ongoing improvement. In fact, it embraces iteration and continuous refinement. By shipping a “good enough” initial version and gathering real-world feedback, teams can identify areas for enhancement and prioritise future efforts accordingly. This approach allows for more efficient resource allocation and greater responsiveness to the evolving needs of all the Folks That Matter™.

Fostering a “Good Enough” Culture

Cultivating a “good enough” culture requires a shift in mindset – one that values pragmatism, efficiency, and attending to folks’ needs over perfection. It means fostering an environment where team members feel empowered to make trade-offs and prioritise based on business impact. Teams play a crucial role in setting the tone, celebrating progress, and encouraging a bias towards action over analysis paralysis. “Good enough” applies not only to the product(s) but also to the way the work of producing and supporting them is done.

In essence, the “good enough” sweet spot is about striking the right balance – investing enough effort to deliver quality solutions that meet core needs, while avoiding the pitfalls of over- or under-engineering. By embracing this mindset, teams can optimise their resources, better address folks’ needs (but no better than good enough!) and foster a culture of (good enough) continuous improvement and adaptation.

Note to self: Mention the Kano Model, the Taguchi Loss function, and e.g. muri, mura and muda.

The Misunderstood World of Quality Assurance

What is Quality Assurance?

Quality Assurance (QA) is a term that gets tossed around quite frequently in the business world, particularly in the realms of product development and software development. However, despite its widespread usage, QA remains one of the most misunderstood and misused terms out there. Many conflate it with quality control, when in reality, QA is a separate and far more comprehensive approach that we might choose to see permeate every aspect of a business’s operations.

Separating QA from Quality Control

A fundamental misconception is viewing QA and quality control as one and the same. This could not be further from the truth. Quality control refers to the specific processes and techniques used to identify defects or non-conformances in products or services. It is a reactive measure, taken after a product or service has been created.

Quality Assurance, on the other hand, is a proactive and all-encompassing mindset, focused on implementing principles, processes, and activities designed to achieve the goal of “ZeeDee” – Zero Defects. When effective QA practices are in place, the need for extensive quality control measures – a.k.a. inspections, testing – becomes largely unnecessary.

The Holistic QA Approach

In the context of product development, we might choose to see QA integrated into every phase, from conceptualisation to final delivery and beyond. This involves establishing clear quality objectives, defining measurable criteria, implementing robust preventive measures, and continuously monitoring and improving based on feedback and data.

Similarly, in software development, we may choose to regard QA as crucial throughout the entire lifecycle, ensuring functionality, reliability, and an optimal user experience – not through testing, but through activities like risk management, all geared towards the Zero Defects goal.

Prevention over Correction

The true power of Quality Assurance lies in its ability to prevent issues before they arise, rather than correcting them after the fact. By implementing comprehensive QA strategies with e.g. ZeeDee as the guiding star, organisations can significantly reduce or eliminate the need for resource-intensive quality control processes (inspections and the like), resulting in increased efficiency, cost savings, and a superior end product or service.

An Organisational Culture

Ultimately, Quality Assurance is not merely a set of tools and techniques; it is a mindset and a culture that must be embraced by every member of an organisation. From top management to front-line employees, everyone must understand the importance of quality and take ownership of their role in ensuring that products and services consistently meet the needs of all the Folks That Matter™, with Zero Defects as the guiding principle.

Conclusion

In a world where businesses strive for excellence and customer satisfaction is paramount, Quality Assurance as defined here is not a luxury; it is a necessity. By recognising the true scope and significance of QA, its distinction from quality control, and its pursuit of ZeeDee (Zero Defects), organisations can unlock the full potential of their products and services, foster a culture of quality, and ultimately, achieve sustainable success in an increasingly competitive marketplace.

Quantification enhances clarity of communication. Increased clarity of communication enhances interpersonal relationships. Are enhanced interpersonal relationships something you need?

Quintessence Worth £Billions

Let’s do a little back-of-a-fag-packet maths re: Quintessence.

There’s somewhere around 26 million software developers worldwide.

A typical software developer, including on-costs, runs out at about £30,000 per annum (UK more like £90K, BRIC countries maybe £10k).

So that’s a world-wide spend of some (26m * 30k) = £780 billion (thousand million), per annum.

Given an uplift in productivity of 5-8 times for Quintessential development approaches, that’s an annual, recurring cost reduction (saving) of £624 billion to £682.5 billion.

You may find claimed productivity increases of this magnitude (5-8 times) somewhat unbelievable (despite the evidence). So let’s be conservative and propose a modest doubling of productivity. That would mean an annual, recurring cost reduction (saving) of £390 billion. Still not to be sniffed at.

For The Individual Organisation

Let’s consider a single UK-based organisation with 100 developers. Present costs (for the developers alone) will be around £90k * 100 = £9 million annually (more or less, depending on a number of factors). Again, assuming a modest doubling of productivity*, a quintessential approach would garner an annual, recurring cost reduction (saving) of £4.5 million for this example organisation.

What do these figures tell us? That neither the world at large nor individual organisations are at all interested in reducing software development costs (or increasing software development productivity). Or maybe they just don’t believe it’s possible to be any more productive than they already are (it is possible to be much more productive, see e.g. Rightshifting).

*Or getting twice as much done in a given time, for the same spend. Or halving the time it takes to get something done, for the same spend.
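For the doubters, here’s the above arithmetic as a few lines of Python (same figures as in the text):

```python
# Back-of-a-fag-packet Quintessence arithmetic, made explicit.
developers = 26_000_000          # worldwide developer headcount (approx.)
cost_each = 30_000               # blended annual cost per developer, GBP
spend = developers * cost_each   # worldwide annual spend

def saving(annual_spend, uplift):
    """Recurring saving if the same output costs 1/uplift of current spend."""
    return annual_spend * (1 - 1 / uplift)

print(f"£{spend / 1e9:.0f}bn annual spend")        # £780bn
print(f"£{saving(spend, 5) / 1e9:.1f}bn at 5x")    # £624.0bn
print(f"£{saving(spend, 8) / 1e9:.1f}bn at 8x")    # £682.5bn
print(f"£{saving(spend, 2) / 1e9:.0f}bn at 2x")    # £390bn

# A single UK organisation with 100 developers at £90k each:
org_spend = 100 * 90_000
print(f"£{saving(org_spend, 2) / 1e6:.1f}m at 2x")  # £4.5m
```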

– Bob

Further Reading

Marshall, R.W. (2021). Quintessence: An Acme for Software Development Organisations. Falling Blossoms (LeanPub).

Managers Are PONC

Number 4 in Phil Crosby’s Four Absolutes of Quality is “The measurement of quality is the price of nonconformance (PONC), NOT indices.”

By which he meant that the price of non-conformance to requirements tells us how often, and to what degree, our organisation achieves quality.

Let’s unwrap that a bit further.

In Absolute number 1, he says “The definition of quality is conformance to requirements, NOT ‘goodness’”.

So when we’re delivering stuff to our immediate customers that fails to conform to their requirements (that is, not contributing to their success), we are failing to provide quality goods or services.

We can measure this failure in terms of the price of non-conformance. That is, the costs involved in reworking, retesting, correcting, substituting, fixing-up or otherwise remediating the things we deliver so they do conform to our customers’ requirements.

In short, any cost that would not have been expended if quality had been perfect contributes to the price of non-conformance.
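A toy illustration in Python – the cost categories and figures are entirely invented – of how PONC falls out of a delivery’s cost breakdown:

```python
# Crosby's measure sketched: PONC is the sum of every cost that perfect
# quality would have made unnecessary. Categories/figures are illustrative.
delivery_costs = {
    "building the feature": 40_000,        # spent regardless of quality
    "rework after rejection": 12_000,      # PONC
    "retesting the fix": 3_000,            # PONC
    "fire-fighting in production": 9_000,  # PONC
}
NONCONFORMANCE = {"rework after rejection", "retesting the fix",
                  "fire-fighting in production"}

ponc = sum(cost for item, cost in delivery_costs.items()
           if item in NONCONFORMANCE)
print(f"PONC: £{ponc:,} of £{sum(delivery_costs.values()):,} total spend")
# -> PONC: £24,000 of £64,000 total spend
```

Had quality been perfect, the last three lines of the cost breakdown would simply not exist.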

Managers Are PONC

If we look at the typical work of managers, almost all of what they do on a daily basis is the fire-fighting remediation mentioned above. Indeed, in most organisations, this is the raison d’être of the manager’s role (other, minor reasons include the prevention of problems, and the growth of the organisation and its revenues and profits).

Therefore it’s but a small jump to see that managers are one of the major contributors to their organisation’s price of non-conformance. In other words, their costs (salaries, etc.) are almost entirely consequent on their fire-fighting role. If fire-fighting were unnecessary, so too would be the managers, and their costs.

– Bob

Further Reading

Unknown (2013). Manager Thought: Price of Non Conformance (PONC). [online] Manager Thought. Available at: https://rkc-mgr.blogspot.com/2013/07/price-of-non-conformance-ponc.html [Accessed 4 Mar. 2022].

Merbler, K. (2021) The Entrepreneur Who Created A Business Camelot: Philip B. Crosby. Dominionhouse Publishing & Design, LLC. Kindle Edition.

The idea of quantifying things (e.g. topics of disagreement) for clarity can seem like a major waste of time until we realise the amount of time we’re wasting through lack of clarity.


It’s amazing how a few hours of discussion can save a few minutes of quantification.