Why Developers Keep Quitting

The Organisational Gaslighting That Destroys Tech Teams

Sarah stares at her laptop screen, wondering if she’s losing her mind. For the third time this month, the ‘agile transformation’ her company proudly announced has resulted in more meetings, more documentation, and less actual development time than ever before. When she raises concerns about the contradiction between their stated values and actual practices, she’s told she has ‘a bad attitude’ and needs to ‘be more collaborative’.

Sound familiar? If you’re a developer reading this, you’ve likely experienced some version of what Sarah is going through. What you may not realise is that you’re experiencing a form of organisational gaslighting—a systematic pattern of psychological manipulation that leaves you questioning your own judgement and, ultimately, your sanity.

As an organisational psychotherapist, I’ve worked with dozens of technology companies whose leadership genuinely cannot understand why their ‘best people’ keep leaving, or fail even to realise it’s happening. They implement the latest methodologies, offer competitive salaries, and create open office spaces with ping-pong tables. Yet their turnover rates climb, their delivery slows, and their remaining developers seem increasingly disengaged.

The problem isn’t technical. It’s social.

What Is Organisational Gaslighting?

Gaslighting, originally described in the context of individual relationships, involves systematically undermining someone’s perception of reality to maintain power and control. In organisational contexts, this manifests as a consistent pattern of saying one thing whilst doing another, then making employees feel confused, incompetent, or ‘difficult’ when they notice the contradiction.

For developers, organisational gaslighting typically follows these patterns:

The Agile Gaslighting: ‘We’re an agile organisation!’ (while maintaining rigid hierarchies, detailed upfront planning, and punishing any deviation from predetermined policies and practices)

The Innovation Gaslighting: ‘We value innovation and creativity!’ (while micromanaging every decision and punishing any experiments that don’t immediately succeed)

The People-First Gaslighting: ‘Our people are our greatest asset!’ (while treating developers as interchangeable resources to be allocated across projects and denying agency)

The Quality Gaslighting: ‘Quality is everyone’s responsibility!’ (while consistently prioritising speed over reliability, cutting design time, and pressuring developers into technical shortcuts, then cutting testing time in the belief it will rescue deadlines, not realising that testing only reveals quality; it doesn’t create it)

The Learning Gaslighting: ‘We embrace failure as learning!’ (while maintaining blame cultures and performance reviews that punish any setbacks)

The Organisational Psyche Behind the Contradiction

From an organisational psychotherapy perspective, these contradictions arise from a fundamental incongruence within the organisational psyche. The organisation’s stated values (its ‘ideal self’) exist in direct conflict with its operational collective assumptions and beliefs (its ‘actual self’).

In my Marshall Model, most technology companies operate from what I term the ‘Analytic Mindset’—an inherited, mechanistic worldview that assumes software development is a predictable, controllable process. This mindset carries embedded assumptions about human nature that directly contradict the realities of knowledge work:

  • Assumption: Developers are programmable resources who can be directed and controlled
  • Reality: Software development is creative, collaborative work benefiting from autonomy and intrinsic motivation
  • Assumption: Problems can be solved through better processes and measurement
  • Reality: The primary constraints in software delivery are usually social and psychological, not technical
  • Assumption: Management’s role is to direct and control the work
  • Reality: Knowledge workers must largely manage themselves, as Drucker observed decades ago

These contradictory assumptions create internal conflicts within the organisation. Rather than resolving these conflicts by surfacing and reflecting on their fundamental beliefs, most organisations engage in blame games that make developers the scapegoat.

The Crazy-Making Cycle

What makes organisational gaslighting particularly damaging is how it creates self-reinforcing cycles of dysfunction. Here’s how it typically unfolds:

Stage 1: The Setup

Management implements what they believe are ‘best practices’—agile ceremonies, story points, velocity tracking, cross-functional teams. They genuinely believe they’re creating an environment for developer success, without ever asking developers what they actually need to succeed.

Notice what’s missing here: developers themselves have no voice in designing their own work environment. Decisions about how they should work, what tools they should use, and what processes they should follow are made for them, not with them. Not Agile at all!

Stage 2: The Contradiction

Despite the rhetoric of agility and empowerment, the underlying command-and-control collective assumptions and beliefs remain intact. Developers find themselves in more meetings than ever, spending more time justifying their work than doing it, and constantly interrupted by urgent requests that bypass all the ‘agile processes’.

Stage 3: The Questioning

Experienced developers recognise the contradiction and raise concerns. They point out that the processes are creating more overhead, not less. They question whether the constant supervision is actually improving delivery.

Stage 4: The Gaslighting Response

Rather than examining the systemic contradictions, management responds with variations of:

  • ‘You’re not being agile enough’
  • ‘You need to trust the process’
  • ‘Other teams don’t seem to have this problem’
  • ‘Maybe you’re not a good fit for our culture’

Stage 5: The Internalisation

Developers begin to doubt their own professional judgement. Maybe they are the problem. Maybe they don’t understand agility. Maybe they’re just resistant to change.

Stage 6: The Exit

The most capable developers—those with the strongest sense of professional identity and the most options—leave first. This creates a survivorship bias where the remaining developers appear to ‘work well’ with the system, reinforcing management’s belief that the problem was with the individuals who left, not the system itself.

The Cost

What many organisations fail to recognise is that sustained gaslighting creates genuine stress (distress) in developers. When developers’ reality is consistently invalidated, when their expertise is dismissed, when they’re blamed for systemic problems beyond their control, their bodies and minds respond as if under threat. Which, of course, they are.

I’ve observed developers exhibiting symptoms remarkably similar to what therapists see in individual gaslighting victims:

  • Hypervigilance: Constantly monitoring management’s mood and reactions, trying to anticipate the next contradiction
  • Self-doubt: Questioning their own technical judgement and professional competence
  • Dissociation: Emotionally disconnecting from their work as a protective mechanism, a.k.a. disengagement
  • Learned helplessness: Giving up on trying to improve anything, just ‘going through the motions’
  • Anxiety and depression: Physical and emotional symptoms from chronic stress

These aren’t character flaws or signs of weakness. They’re predictable responses to sustained psychological manipulation.

The Collective Assumptions and Beliefs of ‘Developer as Problem’

Most technology organisations operate with embedded collective assumptions and beliefs that I call ‘Developer as Problem’. These interlocking beliefs include:

  • Developers are naturally resistant to change (despite working in the most change-driven industry on earth)
  • Developers don’t understand business priorities (while building the systems that run the business)
  • Developers gold-plate solutions and over-engineer (when asked to build systems that won’t break)
  • Developers can’t be trusted to manage their own time (despite managing complex technical dependencies)
  • Developers need constant oversight and measurement (because obviously they’d stop working if not watched—classic Theory X thinking)

These collective assumptions and beliefs run so deep that management doesn’t even realise they hold them. They shape every standup meeting, every sprint planning session, every performance review. When developers are asked to estimate tasks down to half-day increments, that’s these beliefs in action. When developers are required to justify every technical decision to people who don’t understand the technology, that’s these beliefs in action.

The truly insidious part is how self-reinforcing this becomes. When developers push back against micromanagement, it’s seen as proof they’re ‘difficult to manage’. When they advocate for quality, it’s seen as proof they ‘don’t understand business priorities’. When they question whether the constant meetings are actually helping, it’s seen as proof they’re ‘not team players’.

It’s a perfect trap. The more developers act like competent specialists who benefit from having agency over their work, the more they’re seen as problems to be solved through ‘better’ management.

The Therapeutic Intervention Required

Addressing organisational gaslighting requires genuine therapeutic work, not just process improvements or cultural initiatives. The organisation can benefit from help to surface and reflect on the collective assumptions and beliefs that drive its behaviour.

This involves creating what Carl Rogers identified as the core conditions for therapeutic change:

Congruence

The organisation can benefit from developing alignment between its stated values and its actual practices. This isn’t about finding better ways to communicate the values—it’s about examining whether the underlying collective assumptions and beliefs actually support those values.

Unconditional Positive Regard

Management can benefit the organisation by learning to see developers as complete human beings with valuable perspectives, not problems to be solved or resources to be optimised. This requires genuine respect for the complexity and creativity involved in software development.

Empathy

Leaders can benefit from developing the capacity to genuinely understand the developer experience—not what they think the experience should be, but what it actually is day-to-day.

Signs Your Organisation Needs Therapeutic Intervention

If you’re in leadership and wondering whether your organisation might be engaging in gaslighting, here are some diagnostic questions:

  • Do your most experienced developers seem increasingly disengaged?
  • Do you find yourself regularly explaining to developers why they should be happy with changes they’re questioning?
  • Do you attribute developer concerns primarily to ‘resistance to change’ rather than legitimate systemic issues?
  • Are your agile/DevOps/innovation initiatives consistently failing to deliver the promised improvements?
  • Do you find that problems get solved temporarily when you hire consultants, only to return when they leave?

If several of these resonate, your organisation may be trapped in patterns of gaslighting that require therapeutic intervention, not technical solutions.

The Path Forward

Breaking free from organisational gaslighting isn’t about implementing new processes or frameworks. It’s about fundamental therapeutic work that addresses the organisational psyche’s capacity for self-awareness and congruence.

This means:

  • Making the undiscussable discussable: Creating safe spaces for developers to share their actual experience without fear of being labelled as problems
  • Examining collective assumptions: Surfacing and questioning the beliefs about developers, software development, and organisational control that drive current practices
  • Developing organisational empathy: Building genuine understanding of what software development actually requires from a human perspective
  • Embracing therapeutic humility: Recognising that the organisation itself may need healing, not just the people within it

For developers trapped in gaslighting environments, the most important thing to remember is this: your instincts are probably correct. If something feels contradictory, manipulative, or crazy-making, it probably is. The problem isn’t with your perception—it’s with the organisational system that benefits from making you doubt yourself.

Conclusion

The exodus of talented developers from technology companies isn’t primarily about compensation, remote work policies, or technical challenges. It’s about organisations that have created psychologically toxic environments through systematic gaslighting, then wonder why their ‘people-first’ culture isn’t retaining people.

Until leadership recognises that their developer retention crisis is fundamentally a therapeutic issue—requiring genuine organisational healing rather than superficial cultural initiatives—they’ll continue to lose their most valuable contributors to organisations that treat developers as the creative, autonomous people they are.

The good news is that organisational gaslighting, like individual gaslighting, can be treated. But it requires the kind of deep therapeutic work that most technology companies aren’t yet ready to undertake. The question is: how many more talented developers will they lose before they’re willing to take a look in the mirror?


If you’re interested in exploring how organisational psychotherapy can help address these patterns in your technology organisation, you can find more about my approach in ‘Memeology’ and ‘Hearts over Diamonds’. For those ready to envision what’s possible beyond the dysfunction, ‘Quintessence’ offers a blueprint for the highly effective collaborative knowledge work organisation—one where treating people as complete human beings isn’t just ethically right, but the foundation of sustainable excellence.

Further Reading

Argyris, C. (1990). Overcoming organizational defenses: Facilitating organizational learning. Allyn & Bacon.

Argyris, C., & Schön, D. A. (1974). Theory in practice: Increasing professional effectiveness. Jossey-Bass.

Berne, E. (1964). Games people play: The psychology of human relationships. Grove Press.

DeMarco, T., & Lister, T. (2013). Peopleware: Productive projects and teams (3rd ed.). Addison-Wesley.

Drucker, P. F. (1999). Knowledge-worker productivity: The biggest challenge. California Management Review, 41(2), 79-94.

Edmondson, A. C. (2019). The fearless organization: Creating psychological safety in the workplace for learning, innovation, and growth. Wiley.

Marshall, R. W. (2018). Hearts over diamonds: Serving business and society through organisational psychotherapy. Leanpub.

Marshall, R. W. (2021). Memeology: Surfacing and reflecting on the organisation’s collective assumptions and beliefs. Leanpub.

Marshall, R. W. (2021). Quintessence: An acme for software development organisations. Leanpub.

McGregor, D. (1960). The human side of enterprise. McGraw-Hill.

Rogers, C. R. (1961). On becoming a person: A therapist’s view of psychotherapy. Houghton Mifflin.

Rosenberg, M. B. (2003). Nonviolent communication: A language of life (2nd ed.). PuddleDancer Press.

Seddon, J. (2003). Freedom from command and control: A better way to make the work work. Vanguard Press.

Coding Practices Are So the Wrong Focus

In W. Edwards Deming’s famous Red Bead experiment, willing workers try their best to draw only white beads from a bowl containing 80% white beads and 20% red beads. Using a paddle that scoops exactly 50 beads, workers are told to produce zero defects (no red beads). No matter how hard they try, how skilled they are, or how much they want to succeed, the random distribution means some workers will consistently get more red beads than others through pure chance. The system determines the outcome, not individual effort.
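The chance-driven nature of the outcome is easy to demonstrate with a short simulation. The sketch below (my own illustration, not part of Deming’s original protocol) models each scoop of the paddle as 50 independent draws with a 20% chance of red, then shows several ‘willing workers’ making daily scoops. Any apparent differences between workers are pure sampling noise:

```python
import random

def scoop(rng, n_beads=50, p_red=0.20):
    """One scoop of the paddle: count how many of n_beads come up red."""
    return sum(rng.random() < p_red for _ in range(n_beads))

rng = random.Random(1)

# Six 'willing workers' each scoop once a day for four days.
# Expected red beads per scoop is 10 (20% of 50); the spread around
# that figure is entirely the system's doing, not the workers'.
for worker in range(1, 7):
    daily_reds = [scoop(rng) for _ in range(4)]
    print(f"worker {worker}: {daily_reds}  total: {sum(daily_reds)}")
```

Run it a few times with different seeds and some workers will reliably look ‘better’ than others, despite identical effort and an identical process. Ranking, rewarding, or disciplining them on those totals would be measuring the bowl, not the people.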

Deming used this experiment to demonstrate a fundamental truth: some 94% of performance problems come from the system, not the individual workers. Yet in software development, we’ve created an entire industry obsessed with the equivalent of ‘worker performance improvement’—code reviews, linting rules, architectural purity, test coverage—whilst ignoring the systems that actually determine product success.

The Software Industry’s Red Bead Problem

Walk into any tech company and you’ll find passionate debates about coding standards, architecture patterns, and development methodologies. Teams spend hours in code reviews, invest heavily in testing frameworks, and argue endlessly about the ‘right’ way to structure their applications.

Meanwhile, the same companies ship products nobody wants, struggle with unclear requirements, and watch competitors succeed with arguably inferior technical implementations.

We’ve created a culture where developers are evaluated on code quality metrics whilst remaining largely ignorant of whether their beautifully crafted code actually solves real problems for the Folks that Matter™. It’s the Red Bead experiment in action—we’re measuring and optimising individual performance whilst the system churns out failed products regardless of how elegant the codebase might be.

Most tellingly, in most organisations developers have next to zero influence over what really matters: what gets built, for whom, and why. They’re handed requirements from product managers, asked to estimate tasks defined by others, and measured on delivery speed and code quality—all whilst having no input on whether they’re building the right thing. Then they get blamed when products fail in the market.

The Invisible System

Most developers operate with a remarkably narrow view of the system they’re embedded in. They see their piece—the code, the sprint, maybe their immediate team—but remain blind to the larger forces that actually determine whether their work creates value.

This narrow focus isn’t accidental. The current system actively discourages broader awareness:

Developers are rewarded for technical excellence in isolation, not for understanding customer problems or business constraints. They’re measured on code quality and feature delivery, not on whether their work moves the business forward. They’re kept busy with technical tasks and rarely exposed to customer feedback, sales conversations, or strategic decisions.

Most critically, developers have next to zero influence or control over the way the work works—the system itself. They can’t change how requirements are gathered, how priorities are set, how teams communicate, or how decisions flow through the organisation. Yet they’re held responsible for whether all the Folks that Matter™ get their needs attended to.

Performance reviews focus on individual contributions rather than system-level thinking. Career advancement depends on demonstrating technical skill, not understanding how technology serves business objectives. The very structure of most organisations creates silos that prevent developers from seeing the bigger picture.

When Developers See the System

Everything changes when developers start understanding the wider system within which they function. They begin to realise that:

Beautiful code that solves the wrong problem is waste. Technical decisions ripple through customer support, sales, and operations in ways they never considered. That ‘simple’ feature request is actually complex when you understand the business context. They’ve been optimising for the wrong metrics because they couldn’t see what actually drives value for all the Folks that Matter™.

Developers who understand the system make fundamentally different choices. They push back on features that don’t align with the needs of the Folks that Matter™. They prioritise technical work that attends to the needs of the business rather than pursuing abstract perfection. They communicate differently with product managers because they understand the broader context of decisions.

The Real Constraints

The actual bottlenecks in software development are rarely technical—they’re systemic:

Communication breakdowns between product, design, and engineering teams lead to solutions that miss the mark. Feedback loops that take months instead of days prevent rapid iteration towards product-market fit. Decision-making processes filter out critical information from customers and frontline teams.

Requirements change constantly because there’s no clear product strategy or understanding of the needs of the Folks that Matter™. Teams work in isolation without understanding how their work connects to attending to those needs. Incentive systems reward shipping features over solving real problems.

Knowledge silos mean critical insights never reach the people who could act on them. Risk-averse cultures prevent the experimentation necessary for innovation. Metrics focus on activity rather than outcomes, creating busy work that doesn’t drive value.

Beyond Individual Excellence

The parallel to Deming’s insight is striking. Just as factory workers couldn’t improve quality by trying harder within a flawed system, developers can’t improve product outcomes by writing better code within dysfunctional organisational systems.

A team can follow every coding best practice religiously and still build something nobody wants. They can have 100% test coverage on features that solve the wrong problem. They can architect beautiful, scalable systems that scale to zero people who matter.

The solution isn’t to abandon technical excellence—it’s to recognise that individual excellence without system awareness is like being a skilled worker in the Red Bead experiment. Your efforts are largely irrelevant because the system constraints determine the outcome.

Building System Awareness

Organisations that want to improve how well they attend to the needs of the Folks that Matter™ need to help developers see and understand the wider system:

Expose developers to all the Folks that Matter™ through support rotations, research sessions, sales calls, and stakeholder meetings. Share context about why certain features matter and how technical decisions impact the people the system serves. Create feedback loops that connect code changes to how well needs are being attended to.

Measure system-level metrics like time from idea to value delivered to the Folks that Matter™, not just individual productivity. Reward cross-functional collaboration and understanding of the wider system, not just technical skill. Encourage questioning of requirements and priorities based on system-level thinking.

Make the invisible visible by sharing feedback from all the Folks that Matter™, competitive intelligence, and strategic context. Connect technical work to how well needs are being attended to through clear metrics and regular communication. Break down silos that prevent developers from understanding their role in the larger system.

The Path Forward

The tech industry’s obsession with coding practices isn’t just misplaced energy—it’s actively harmful when it distracts from the system-level changes that actually improve how well we attend to the needs of the Folks that Matter™. We need developers who understand that their job isn’t to write perfect code in isolation, but to create value within complex organisational and market systems.

This doesn’t mean abandoning technical excellence. It means recognising that technical excellence without system awareness is like perfecting your red bead drawing technique—a local optimisation that misses the point entirely.

The companies that succeed will be those that help their developers see beyond the code to understand all the Folks that Matter™, the market, the business model, and the organisational dynamics that actually determine whether their work creates value.

When developers start seeing the system, they stop optimising for red beads and start optimising for what actually matters. That’s when real improvement begins.

A Note on ‘Users’ and ‘Customers’

The conventional framing of ‘users’ and ‘customers’ is reductive and misses the point entirely. It treats software development like building a consumer app when most systems serve a complex web of stakeholders with different and sometimes conflicting needs.

Consider any real software system—an ERP platform must work for accountants entering data, executives reading reports, IT teams maintaining it, auditors reviewing it, vendors integrating with it, and regulators overseeing it. Calling them all ‘users’ flattens out completely different contexts and needs.

The ‘customer’ framing is even worse because it implies a simple transaction—someone pays money, gets product. But in most organisations, the people paying for software aren’t the ones using it day-to-day, and the people whose work gets impacted by it might not have had any say in the decision.

‘Folks that Matter™’ captures the messy reality that there are various people with legitimate stakes in whether the system works well. Developers are typically kept ignorant of who these people are, what they actually need, and how technical decisions affect them. It’s like the Red Bead experiment—workers are told to ‘satisfy the customer’ without any real understanding of what that means or who that customer actually is. Just another abstraction that keeps them focused on the wrong metrics.

Further Reading

Deming, W. E. (1986). Out of the crisis (pp. 345-350). MIT Press.

Deming, W. E. (1993). The new economics for industry, government, education (Chapter 7). MIT Press.

Scholtes, P. R. (1998). The leader’s handbook: Making things happen, getting things done. McGraw-Hill.

Wheeler, D. J. (2000). Understanding variation: The key to managing chaos (2nd ed.). SPC Press.

Womack, J. P., & Jones, D. T. (2003). Lean thinking: Banish waste and create wealth in your corporation (2nd ed.). Free Press.

The Evolution of Social Engineering: Rossi’s Vision for Organisational Psychotherapy

Understanding Rossi’s Core Argument

In his seminal paper’s conclusion, Peter Rossi presents a sophisticated dual proposition that merits careful examination. Rather than simply critiquing existing approaches, he advocates for a fundamental transformation in how we approach social – including organisational – change, encompassing both theoretical foundations and practical applications of social engineering/organisational psychotherapy as a professional discipline.

The First Pillar: Policy-Relevant Basic Social Science

The Need for Deeper Understanding

Rossi’s call for developing policy-relevant basic social science stems from his recognition that many interventions fail due to insufficient understanding of fundamental social mechanisms. This isn’t merely about gathering more data; it’s about developing comprehensive theoretical frameworks that can explain how social systems – including organisations – actually function.

When we examine social systems closely, we find they behave much like complex organisms, with interconnected parts that influence each other in subtle and often unpredictable ways. This complexity demands a more sophisticated approach to research and intervention.

Bridging Theory and Practice

The emphasis on “policy-relevant” research is crucial here. Rossi envisions social science that maintains rigorous academic standards whilst directly informing practical interventions. This represents a departure from purely theoretical research, pushing instead for knowledge that can be meaningfully applied to real-world challenges.

The Second Pillar: Professional Social Engineering

A New Professional Discipline

Perhaps most intriguingly, Rossi advocates for the establishment of social engineering as a distinct profession. This recommendation might seem to contradict his famous Iron Law of Evaluation, but it actually builds upon it in a sophisticated way. The professional social engineer / organisational psychotherapist Rossi envisions would be equipped with both theoretical knowledge and practical skills, much like other established professions such as medicine or architecture.

The Therapeutic Dimension

Much like individual psychotherapy focuses on personal growth and healing, this new profession would examine the collective psyche of organisations and other social systems. This approach recognises that organisations, like individuals, can develop dysfunctional patterns, trauma responses, and defence mechanisms that require careful and skillful intervention.

The Integration: Creating Sustainable Change

Competencies

Modern practitioners in this field will benefit from developing a unique blend of skills, combining:

  • Deep understanding of social systems theory
  • Practical knowledge of organisational dynamics
  • Therapeutic techniques adapted for collective application
  • Evaluation methodologies for measuring intervention outcomes
  • Systematic intervention design capabilities

Beyond Traditional Change Management

Unlike traditional change management approaches, this new profession adopts a more nuanced, healing-oriented perspective. It acknowledges that resistance to change often stems from deeply embedded organisational trauma or learned defensive patterns, while remaining grounded in rigorous scientific methodology.

The Path Forward: Implementing Rossi’s Vision

Learning from the Iron Law

Rossi’s Iron Law of Evaluation isn’t presented as a barrier to success, but rather as a crucial insight that should inform how we approach social change. It suggests that social engineering must be grounded in realistic expectations and robust evidence, while maintaining the aspiration to create meaningful change through competent and skillful intervention.

Building Standards

The discipline Rossi envisions requires developing standardised bodies of knowledge, ethics and standards, effective training programmes, and evidence-based practices. This framework would help ensure that interventions are both scientifically grounded and practically effective.

Concluding Reflections

Rossi’s conclusion represents a sophisticated approach to social change that remains remarkably relevant today. By advocating for both enhanced basic research and professional development, he charts a course between naive optimism and complete scepticism. This balanced perspective offers valuable guidance for contemporary efforts to address social challenges through therapeutic interventions.

The vision he presents transcends traditional social programming, pushing towards a more nuanced understanding of how to effect change in complex social systems. As organisations continue to face increasingly complex challenges, the need for this kind of sophisticated approach to social engineering a.k.a. organisational psychotherapy becomes ever more apparent.

The emergence of this new discipline represents a significant evolution in how we approach social and organisational transformation. By combining Rossi’s vision for remade social engineering with therapeutic principles and rigorous scientific methodology, we create a powerful framework for addressing the complex challenges faced by modern organisations and societies.

The Software Crisis: A Failure of Imagination and Commitment

Understanding the Software Crisis: A Historical Perspective

The Software Crisis represents a fundamental challenge in technological development that has persistently haunted the computing world since its emergence in the late 1960s. First articulated by computer scientists at the NATO Software Engineering Conference in 1968, this term encapsulates the ever-present disconnect between our technological ambitions and our capacity to create reliable, efficient, and manageable software systems.

The Roots of the Crisis: Complexity Beyond Control

At its essence, the Software Crisis emerged from an exponential growth in system complexity that far outpaced human cognitive capabilities. Where traditional engineering disciplines deal with predictable, measurable materials, software development confronts an almost infinite landscape of potential interactions, variables, and unforeseen, largely people-oriented, complications.

The Dimensionality of Technological Complexity

Consider software development as navigating a multidimensional maze where:

  • Each line of code introduces multiple potential pathways
  • Interactions between components create exponentially increasing complexity
  • Human error can cascade into systemic failures
  • Technological landscapes shift continuously
  • People issues (assumptions, beliefs, relationships, and the like) transcend all the above
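The scale of this combinatorial explosion bears spelling out. As a deliberately simplified illustration (assuming independent binary decision points, which real code rarely has), the number of distinct execution paths doubles with every branch:

```python
def path_count(decision_points: int) -> int:
    """Distinct execution paths through code with n independent binary
    decision points -- a deliberately simplified model of complexity."""
    return 2 ** decision_points

# Growth is exponential: each added branch doubles the path count.
for n in (10, 20, 30):
    print(n, path_count(n))
```

Thirty branches – a modest module by any standard – already yields over a billion paths, which is why exhaustive verification is a non-starter, and why the people issues above come to dominate.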

Engineering Aspirations Versus Harsh Realities

While software development has long aspired to be a rigorous engineering discipline, the ever-present Software Crisis reveals the profound limitations of this approach. Traditional engineering relies on:

  • Predictable material properties
  • Stable, well-understood physical laws
  • Precise mathematical modelling
  • Consistent performance metrics

Software development, by contrast, operates in a realm of:

  • Constant technological transformation
  • Unpredictable human creativity
  • Rapidly evolving requirements
  • Inherent system instability
  • Human beings

Generational Capitulation: The Abdication of Technological Responsibility

The most disturbing aspect of the Software Crisis is not its existence, but the contemporary generation’s abject surrender to its challenges. Where early pioneers such as Edsger W. Dijkstra, Donald Knuth, and Fred Brooks fought passionately to address fundamental limitations, today’s software developers have largely descended into a culture of complacent mediocrity.

The Demotivated Developer: A Portrait of Resignation

Modern software development has transformed from a noble pursuit of solving complex problems – such as how best to go about software development itself – into a landscape of:

  • Minimal intellectual ambition
  • Preference for copy-paste solutions
  • Wholesale abandonment of rigorous design and engineering principles
  • Celebration of quick, disposable implementations
  • Systematic avoidance of deep systemic thinking

The Myth of Pragmatism: Disguised Indifference

Contemporary developers cloak their lack of commitment in the language of “pragmatism”. They argue that accepting imperfection is a mature approach. In reality, this is nothing more than an elaborate rationalization for intellectual surrender.

Technological Stockholm Syndrome

These developers have not solved the Software Crisis—they have Stockholm-syndromed themselves into believing the crisis is an immutable condition rather than a challenge to be conquered. They prefer:

  • Using pre-built libraries instead of understanding core principles
  • Rapid development over robust architecture
  • Temporary solutions over enduring designs
  • Avoiding complexity rather than mastering it
  • Ignoring key aspects such as psychology and human behaviour

The Lost Generation of Ambitious Problem Solvers

Where previous generations of computer scientists and thought leaders saw technological challenges as Mount Everests to be conquered, today’s developers view them as immovable mountains to be meekly navigated around. The passion for fundamental problem-solving has been replaced by a depressing culture of resignation.

Acceptance: A Euphemism for Failure

After more than fifty years of attempts, many argue that the Software Crisis is a condition to be managed rather than solved. This is not wisdom—it is capitulation. It represents a collective failure of imagination, ambition, and intellectual rigour.

Conclusion: A Call to Arms

Our ongoing Software Crisis is not an immutable law of nature. It is a challenge that demands relentless engagement, systematic thinking, and an unwavering commitment to pushing technological boundaries.

To the current generation of software developers: your complacency is not pragmatism. It is the slow death of innovation. The Software Crisis will continue to exist precisely because you have collectively decided to accept its existence rather than challenge it.

True innovation has never emerged from acceptance. It emerges from a refusal to be defeated. I myself continue to reject such defeatism.

The “Good Enough” Sweet Spot

[TL;DR: “Good enough” means optimising for best meeting all the needs of the Folks That Matter™]

The Perils of Over-Engineering

In our quest for excellence, it’s tempting to over-engineer solutions, pouring needless resources into perfecting every tiny detail. However, this pursuit of flawlessness often comes at a steep price. Over-engineering can lead to diminishing returns, where the marginal benefits of additional effort become negligible. It can also result in unnecessary complexity, making systems harder to maintain and adapt.

The Pitfalls of Under-Engineering

On the flip side, under-engineering can be equally detrimental. Cutting corners or settling for subpar solutions may seem like a shortcut to efficiency, but it often leads to technical debt, compromised quality, and long-term sustainability issues. Under-engineered products or processes are more prone to failure, necessitating costly reworks or replacements down the line.

Striking the “Good Enough” Balance

Between these two extremes lies the “good enough” sweet spot – a delicate balance that maximises value while minimising waste. Embracing the “good enough” mindset means understanding when to invest resources and when to call it a day. It’s about recognising that perfection is an asymptote that can never be reached, and that diminishing returns inevitably set in.
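To make the trade-off concrete, here’s a minimal sketch in Python. The curves are illustrative assumptions (diminishing marginal value, constant marginal cost), not data from any real project; the point is simply that a rational stopping point exists well short of perfection:

```python
def marginal_value(effort: float) -> float:
    """Value added by the next unit of effort -- diminishing returns."""
    return 10.0 / (1.0 + effort)

def marginal_cost(effort: float) -> float:
    """Cost of the next unit of effort -- roughly constant here."""
    return 1.0

def good_enough_effort() -> float:
    """Stop investing when an extra unit of effort costs more than the
    value it adds: the 'good enough' stopping point. Effort is counted
    in tenths of a unit to avoid floating-point drift."""
    tenths = 0
    while marginal_value(tenths / 10) > marginal_cost(tenths / 10):
        tenths += 1
    return tenths / 10

print(good_enough_effort())  # polish beyond this point is waste
```

With these (assumed) curves the sweet spot lands at nine units of effort: past that, every additional unit of polish costs more than the value it delivers.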

The “Good Enough” Approach

Adopting a “good enough” approach involves setting realistic goals and prioritising the most critical aspects of a project or product. It means focusing on core functionality and user needs, rather than getting bogged down in superfluous features or tiny optimisations. By identifying the minimum viable product (MVP) and iterating from there, teams can meet folks’ needs faster and adapt more readily to changing requirements.

Quantifying the “Good Enough” Threshold

Of course, to deliver just what’s good enough, we have to know what’s good enough. Choosing to quantify the qualitative aspects of deliverables can help (Cf. Gilb).

Quantifying the Qualitative

Defining “good enough” can be challenging, especially when dealing with qualitative aspects such as user experience, design aesthetics, or customer satisfaction. However, by quantifying these qualitative elements, teams can establish more objective criteria and benchmarks for what constitutes “good enough.”

Leveraging Data and Metrics

One approach is to leverage data and metrics to measure and track qualitative aspects. For example, user testing and feedback can provide numerical scores for usability, intuitiveness, and overall satisfaction. Analytics data can reveal user behaviour patterns, highlighting areas of friction or success. Even design aesthetics can be quantified through techniques like preference testing or eye-tracking studies. (See also Gilb, Competitive Engineering.)
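As a sketch of what this can look like in practice, the snippet below rolls hypothetical per-dimension feedback scores into a single weighted figure for comparison against an agreed threshold. The dimensions, weights, and threshold are invented for illustration – not taken from Gilb:

```python
def weighted_score(scores: dict[str, float],
                   weights: dict[str, float]) -> float:
    """Roll per-dimension feedback scores (0-10) into one weighted
    number that can be compared against an agreed threshold."""
    total_weight = sum(weights.values())
    return sum(scores[k] * weights[k] for k in scores) / total_weight

# Hypothetical usability-test results and their relative importance.
scores = {"usability": 7.8, "intuitiveness": 6.5, "satisfaction": 8.2}
weights = {"usability": 0.5, "intuitiveness": 0.2, "satisfaction": 0.3}

GOOD_ENOUGH = 7.0  # threshold agreed up front with the Folks That Matter™
print(round(weighted_score(scores, weights), 2))  # 7.66 -- good enough
```

The arithmetic is trivial; the value lies in having made “good enough” an explicit, inspectable number rather than a feeling.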

Defining Acceptance Criteria

Another powerful tool is setting clear acceptance criteria upfront. By collaborating with stakeholders and subject matter experts, teams can define specific, measurable criteria that must be met for a deliverable to be considered “good enough.” These criteria can encompass functional requirements, performance benchmarks, accessibility standards, and qualitative thresholds based on user feedback or industry best practices.
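Acceptance criteria earn their keep when they’re machine-checkable. A minimal sketch, with wholly hypothetical criteria and numbers:

```python
from dataclasses import dataclass

@dataclass
class Criterion:
    """One agreed, measurable acceptance criterion."""
    name: str
    actual: float
    threshold: float
    higher_is_better: bool = True

    def met(self) -> bool:
        return (self.actual >= self.threshold if self.higher_is_better
                else self.actual <= self.threshold)

def good_enough(criteria: list[Criterion]) -> bool:
    """A deliverable is 'good enough' when every criterion is met."""
    return all(c.met() for c in criteria)

# Hypothetical release criteria -- names and numbers are illustrative.
release_criteria = [
    Criterion("p95 page load (ms)", actual=420, threshold=500,
              higher_is_better=False),
    Criterion("task-completion rate (%)", actual=93, threshold=90),
    Criterion("accessibility score", actual=88, threshold=85),
]
print(good_enough(release_criteria))  # every threshold is met
```

Criteria expressed this way can run in the delivery pipeline itself, so “good enough” is verified on every change rather than argued about at the end.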

Prioritising and Iterating

Once acceptance criteria are established, teams can prioritise the most critical aspects and focus their efforts on meeting those thresholds. By adopting an iterative approach, they can continuously refine and enhance the deliverables, incorporating feedback and adapting to evolving needs while maintaining a “good enough” baseline.

Embracing a Quantification-Driven Approach

Quantifying qualitative aspects requires a data-driven mindset within the organisation. Teams must be equipped with the tools, skills, and processes to collect, analyse, and act upon relevant data. Additionally, fostering a culture of continuous learning and experimentation can help, allowing for ongoing refinement and optimisation based on empirical evidence.

By quantifying qualitative aspects and establishing objective criteria, teams can more effectively arrive at the “good enough” sweet spot. This approach ensures that resources are allocated judiciously, core needs are met, and a solid foundation is established for ongoing iteration and improvement.

Embracing Iteration and Continuous Improvement

The beauty of the “good enough” philosophy is that it doesn’t preclude ongoing improvement. In fact, it embraces iteration and continuous refinement. By shipping a “good enough” initial version and gathering real-world feedback, teams can identify areas for enhancement and prioritise future efforts accordingly. This approach allows for more efficient resource allocation and greater responsiveness to the evolving needs of all the Folks That Matter™.

Fostering a “Good Enough” Culture

Cultivating a “good enough” culture requires a shift in mindset – one that values pragmatism, efficiency, and attending to folks’ needs over perfection. It means fostering an environment where team members feel empowered to make trade-offs and prioritise based on business impact. Teams play a crucial role in setting the tone, celebrating progress, and encouraging a bias towards action over analysis paralysis. “Good enough” applies not only to the product(s) but also to the way the work to produce and support them is organised.

In essence, the “good enough” sweet spot is about striking the right balance – investing enough effort to deliver quality solutions that meet core needs, while avoiding the pitfalls of over- or under-engineering. By embracing this mindset, teams can optimise their resources, better address folks’ needs (but no better than good enough!) and foster a culture of (good enough) continuous improvement and adaptation.

Note to self: Mention the Kano Model, the Taguchi Loss function, and e.g. muri, mura and muda.

Good Software People Have to Lie Through Their Teeth to Get a Job

The Sad Reality

If you’re a talented software professional who understands and practices modern, effective approaches to collaborative knowledge work, you face an unpleasant reality – you likely have to lie through your teeth in job interviews to have any shot at getting hired. And if you have any integrity, you probably won’t (won’t lie, won’t get hired).

The root of the issue is that many hiring teams, managers, and organisations commit a profound “category error” – they mistakenly treat software development like a more familiar form of work that it fundamentally is not. So the cutting-edge practices that make sense for collaborative knowledge work sound like utter – and alien – nonsense to them.

Examples of Alien Approaches

This forces the software development cognoscenti into an impossible choice: either pretend their field is just another flavour of manufacturing, construction, or the like that aligns with woefully outdated management dogma, or stick to their guns, speak the truth about their highly distinctive and dynamic domain, and get immediately rejected as fringe lunatics.

Let me illustrate with examples of legitimate yet seemingly “incredible”, “wonko” approaches:

The “Constant State of Ship”

At high-performing software companies, code is shipped to production constantly, sometimes multiple times per day. Concepts like “releases” or “launch dates” are laughable antiquities from machine-age models of work.

Continuous Delivery

Elite software teams can automatically build, test and deploy code on every commit that passes automated checks – without manual gatekeepers. But to old-school minds, this sounds like reckless spontaneity instead of disciplined craftsmanship.
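For the avoidance of doubt about what “no manual gatekeepers” means, here’s a toy sketch of a commit gate: every commit runs the same ordered checks, and only a fully green run reaches production. Stage names and logic are illustrative, not any particular CI product’s API:

```python
def pipeline(commit: str, checks: dict[str, bool]) -> str:
    """Run the automated checks for a commit in a fixed order; deploy
    only if every one passes. No human sign-off anywhere in the path."""
    for stage in ("build", "unit_tests", "integration_tests", "smoke_tests"):
        if not checks.get(stage, False):
            return f"{commit}: stopped at {stage} -- not deployed"
    return f"{commit}: deployed to production"

# A green commit ships; a red one stops at the first failing stage.
print(pipeline("abc123", {"build": True, "unit_tests": True,
                          "integration_tests": True, "smoke_tests": True}))
print(pipeline("def456", {"build": True, "unit_tests": False}))
```

The discipline lives in the checks, not in a sign-off meeting – which is precisely what old-school minds find so hard to credit.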

The Interview Reaction

Try pitching those kinds of modern practices in a job interview and watch eyes glaze over in bafflement. You’ll get pelted with scepticism about “stability”, “quality”, “risk”, and so on. Poor performers always obsess over mitigating challenges instead of updating their working models.

Lying to Get Hired

So to pass interviews, superb software professionals have to dumb it down and play make-believe about pushing gigantic, monolithic releases every 6-12 months after “hardening” periods – Industrial Revolution edicts that no longer apply.

It’s maddening to have to deny the realities of cutting-edge knowledge work to be taken seriously. But that’s the tax we pay, trapped in an industry riddled with obsolete dogma.

Consequences

This dynamic creates a catch-22: organisations hire either liars lacking ethics, or candidates lacking current expertise in effective modern software practices. Neither is a viable choice for building an effective engineering team. Do they want impostors or ignoramuses on their teams?

By filtering out leaders who grasp the unique dynamics of collaborative knowledge work, firms doom themselves to inefficiency, delays, and poor quality software. The very candidates with competencies to uplift them get screened out as “unbelievable” or “reckless” based on obsolete manufacturing/construction/service analogies.

Organisations must decide whether they want to cling to personnel working under antiquated models of development, or embrace competent people optimised for the fundamentally different nature of software’s collaborative value creation. Their ability to deliver high-quality, continuous value through technology hinges on making the right choice here. Discarding modern software ideas in favour of outmoded perspectives will only perpetuate disappointing outcomes.

The implications for these organisations’ ability to deliver valuable technology solutions are profound.

How “Constant State of Ship” Drives Transformative Practices

Introduction

In the relentless pursuit of delivering value to customers with unparalleled speed and reliability, the software development world has yet to widely embrace a revolutionary principle – the “Constant State of Ship”. This state, where software artefacts and products are perpetually poised for release into production environments at just 15 minutes’ notice, has emerged as a driving force behind best practices that enable true continuous deployment. Remarkably, this groundbreaking concept formed the foundation of the pioneering “Javelin” software development approach, conceived by FlowChainSensei (Bob Marshall) at Familiar from circa 1996 onwards, foreshadowing the industry’s even-now-yet-to-be-realised embrace of these practices.

The Power of “Constant State of Ship”

The “Constant State of Ship” serves as an unyielding forcing function, inviting teams to adopt and adhere to a comprehensive set of best practices that catalyse the seamless flow of software into production. Let us explore how this principle reinforces each of the thirteen fundamentals of Continuous Delivery (hat tip to Dave Farley):

The 13 Fundamentals Enabled

  1. A Repeatable, Reliable Process: With the ever-present possibility of an imminent release, teams may choose to establish a well-defined, automated pipeline for building, testing, and deploying their software. This process needs to be repeatable and reliable, minimising the risk of human error and ensuring consistency across releases.

    The “Constant State of Ship” mindset suggests that teams have a streamlined, automated release pipeline that can be triggered at any moment. Manual steps and ad-hoc and emergency exception procedures become liabilities, as they introduce variability and increase the chances of mistakes during deployment.

    To achieve this repeatability and reliability, teams are supported to invest in build automation tools, automated testing frameworks, and deployment automation pipelines. Every step of the release pipeline can be codified, documented, and thoroughly tested to ensure predictable outcomes each time.

    Moreover, the “Constant State of Ship” principle fosters an environment of continuous learning and improvement. Any failures or issues encountered during a release are promptly analysed, and the release process is refined to prevent future occurrences. This cycle of continuous feedback and optimisation ensures that the release pipeline remains reliable and efficient, even as the codebase and systems evolve over time.

    By operating in a “Constant State of Ship” mode, teams are invited to treat the release pipeline as a critical component of their software development lifecycle, investing the necessary resources and effort to make it repeatable, reliable, and capable of delivering changes to production environments at a moment’s notice.

  2. Automate All the Things: In a “Constant State of Ship” paradigm, manual interventions become significant bottlenecks and risks, hindering the required velocity and reliability. Automation becomes imperative, spanning every aspect of the delivery pipeline, from code compilation to infrastructure provisioning. The threat of an imminent release leaves no room for error-prone manual processes that could delay or derail a deployment. Teams must automate build processes, test execution, environment provisioning, deployment steps, and release orchestration to ensure consistency and minimise the risk of human error.
  3. Maintain a Releasable State: The core tenet of “Constant State of Ship” requires that the codebase and associated artefacts remain in a perpetually releasable state. This principle invites teams to address issues promptly, maintain a high level of code quality, and vigilantly consider the accumulation of technical debt. Any defects, bugs, or instabilities in the codebase could potentially disrupt an imminent release, leading to costly delays or failures. Teams must adopt practices like continuous integration, automated testing, and ensemble programming to ensure that the codebase remains in a stable, deployable state at all times.
  4. Focus on Robust (Real) Quality Assurance

    In the “Constant State of Ship” paradigm, where the possibility of demand for an immediate release is ever-present, quality assurance cannot be treated as an afterthought. “Constant State of Ship” invites the integration of quality practices throughout the entire development lifecycle, ensuring that quality is baked into the software from inception to deployment.

    While testing plays a role, it is merely one facet of a comprehensive quality assurance strategy. Teams may choose to adopt a holistic approach that emphasises quality as a continuous, pervasive practice woven into every aspect of the development approach.

    This begins with cultivating a culture of quality-driven development, where every team member participates in collective ownership and responsibility for the quality of their work. Practices such as clarity of (quantified a la Gilb) requirements, ensemble programming, peer code reviews, adherence to coding standards, and continuous static code analysis can help identify and mitigate potential issues early in the development cycle.

    Furthermore, “Constant State of Ship” invites teams to embrace principles of iterative and incremental development. By breaking down complex features into smaller, manageable, well-bounded increments, teams can more effectively manage quality risks and ensure that each increment and subsystem meets the required quality criteria before progressing to the next.

    Continuous integration and deployment pipelines play a pivotal role in this quality assurance strategy, enabling teams to continuously validate and verify the software’s functionality, performance, and stability with each incremental change. These pipelines automate the execution of various quality checks, including unit tests, integration tests, and performance tests, providing real-time feedback and enabling teams to address issues promptly.

    However, quality assurance extends beyond testing alone. Teams have the opportunity to adopt a holistic approach that encompasses design practices, architectural decisions, and operational readiness. By considering quality implications at every stage of the software development lifecycle, teams can proactively identify and mitigate potential risks, ensuring that the software remains in a releasable state at all times.

    “Constant State of Ship” elevates quality assurance to a core discipline that permeates every aspect of the software development effort. By fostering a culture of quality-driven development and adopting continuous quality practices, teams can attend to the needs of all the Folks That Matter™, with confidence, knowing that their software meets the highest standards of reliability, stability, and performance.

  5. Implement Robust Deployment Pipelines: Achieving a “Constant State of Ship” necessitates the implementation of robust deployment pipelines. These pipelines automate the entire process of building, testing, and deploying software changes, ensuring consistency and minimising the risk of errors. With the ever-present possibility of an imminent release, teams cannot afford manual, error-prone deployment processes. Automated deployment pipelines provide a standardised, repeatable path to production, reducing the likelihood of failed or inconsistent deployments.
  6. Monitor the Pipeline: Regular smoke testing of the deployment pipeline is crucial in a “Constant State of Ship” mode. This practice helps catch issues early, before they can impact production environments, ensuring the pipeline’s reliability and preventing costly downtime. The possibility of an imminent release amplifies the importance of having a thoroughly validated deployment pipeline. Smoke tests act as a safety net, verifying the integrity of the pipeline and identifying any potential issues that could disrupt a deployment.
  7. Integrate Constantly: The “Constant State of Ship” mindset encourages teams to integrate their changes frequently, often multiple times per day. This practice surfaces issues early, reduces merge conflicts, and ensures that the codebase remains in a releasable state, ready for deployment at any given moment. Infrequent integration can lead to divergent codebases, making it harder to identify and resolve conflicts, which could potentially disrupt an imminent release. By integrating frequently, teams can maintain a stable, unified codebase that is always primed for deployment.
  8. Evolve the Architecture: Maintaining a “Constant State of Ship” over time invites the continuous evolution of the system’s architecture (see also: Reverse Conway). Are teams prepared to refactor and adapt their architectures to accommodate new requirements, technologies, and scaling needs, without compromising the ability to release rapidly and reliably? As products grow and evolve, architectural decisions made early on may become hindrances to continuous deployment. The “Constant State of Ship” principle invites teams to proactively evaluate and evolve their architectures, ensuring that they remain flexible, scalable, and conducive to rapid releases.
  9. Leverage Data Environments: With the constant possibility of an imminent release, the ability to provision and manage data environments becomes critical. Teams may choose to adopt practices like database versioning, data seeding, and data masking to ensure consistent and reliable testing and deployment across environments, minimising the risk of data-related issues in production. The “Constant State of Ship” mindset invites a robust data management strategy that enables seamless and repeatable deployments, regardless of the data complexities involved.
  10. Mirror Production Environments: To minimise the risk of issues arising from environmental differences, teams operating in a “Constant State of Ship” mode may choose to ensure that their development, testing, and staging environments closely mirror production environments in terms of configuration, data, and infrastructure. This practice helps identify and address potential issues before they impact the live production system. The possibility of an imminent release heightens the importance of having production-like environments, as any discrepancies could lead to unexpected behaviour or failures during deployment.
  11. Codify Infrastructure: Manually provisioning and configuring infrastructure for each release becomes a significant bottleneck when operating in a “Constant State of Ship” mode. Adopting Infrastructure as Code (IaC) practices, where infrastructure is defined and managed through code, enables teams to provision and tear down environments rapidly and consistently, minimising delays and reducing the risk of configuration drift. The “Constant State of Ship” principle invites a high degree of automation and repeatability in infrastructure management, making IaC a beneficial practice for ensuring rapid, reliable deployments.
  12. Foster Collaborative Ownership: Achieving a “Constant State of Ship” invites a high degree of collaboration and shared ownership among team members. Siloed responsibilities and knowledge become obstacles to rapid delivery. Teams may choose to adopt practices that promote collective code ownership, cross-functional collaboration, and shared understanding of the codebase and delivery processes. The “Constant State of Ship” mindset invites a culture of collective responsibility, where all team members are empowered to contribute to and understand the entire delivery process, enabling seamless and efficient releases.
  13. Continuous Improvement: Operating in a “Constant State of Ship” mode exposes inefficiencies and bottlenecks in the delivery pipeline and processes with uncompromising clarity. Teams may choose to embrace a culture of continuous improvement, regularly reviewing their practices, identifying areas for optimisation, and implementing changes to enhance their ability to deliver value rapidly and reliably. The constant presence of imminent releases acts as a driving force for continuous improvement, encouraging teams to continuously refine their processes, tools, and practices to achieve higher levels of velocity and quality. FlowChain was designed to systematise exactly this.
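The “15 minutes’ notice” test can itself be made executable: sum the fully automated path to production and compare it against the budget. The stage durations below are made-up examples, not measurements from any real pipeline:

```python
def release_readiness(stage_durations_s: dict[str, float],
                      budget_s: float = 15 * 60) -> bool:
    """'Constant State of Ship': the fully automated path to production
    must fit within the 15-minute budget. True if we're ship-ready."""
    return sum(stage_durations_s.values()) <= budget_s

# Hypothetical pipeline timings, in seconds.
stages = {"build": 180.0, "tests": 420.0, "deploy": 120.0}
print(release_readiness(stages))  # 720s total -- inside the 900s budget
```

A check like this, run on every commit, turns “could we ship right now?” from a matter of opinion into a matter of record.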

The Visionary “Javelin” Approach

The “Javelin” approach (initially named “Jerid”), pioneered by me and my teams at Familiar from 1996 onward, was truly ahead of its time, recognising the transformative power of the “Constant State of Ship” mindset. By enshrining this principle as a cornerstone from its inception, “Javelin” paved the way for the modern continuous deployment practices that have since become poised to gain industry-standard status. This pioneering approach, along with FlowChain and e.g. Prod•gnosis, Flow•gnosis, Product Aikido, etc., exemplifies the spirit of continuous improvement intrinsic to the “Constant State of Ship” principle, ensuring its enduring relevance and impact.

Deep Cultural Implications

Reshaping the Culture and Mindset

Adopting the “Constant State of Ship” principle suggests a profound transformation that extends way beyond technical practices and processes – it hints at a seismic shift in the culture and mindset of software development teams and their parent organisations. This metamorphosis permeates every aspect of the organisation, reshaping shared assumptions, beliefs, and ways of working. However, navigating such a profound cultural shift can be a daunting challenge, often met with resistance and inertia.

This is where the discipline of organisational psychotherapy plays a pivotal role. By applying principles from psychotherapy, sociology, and group dynamics, organisational psychotherapy facilitates the cultural and mindset shifts teams require to embrace the “Constant State of Ship” paradigm smoothly and effectively.

A Culture of Ownership and Accountability through Empowerment

The “Constant State of Ship” mindset fosters a culture of collective ownership and accountability. Organisational psychotherapy techniques, such as participative decision-making and fellowship, empower team members to take responsibility for the quality, stability, and deployability of the codebase and overall product. This sense of empowerment cultivates a culture of shared ownership, where individuals proactively address issues, collaborate across boundaries, and collectively strive for continuous improvement.

Embracing Transparency and Trust

Maintaining a “Constant State of Ship” requires a high degree of transparency and trust among team members. Organisational psychotherapy practices, such as surfacing shared assumptions and beliefs, encourage open communication and facilitate the identification of problems and risks early. By fostering an atmosphere where team members feel comfortable expressing concerns, sharing mistakes, and seeking help, a culture of transparency and trust emerges, enabling teams to collectively address challenges and ensure the software remains in a releasable state.

Prioritising Continuous Learning

The “Constant State of Ship” principle instills a mindset of continuous learning and improvement. With each release, teams gain valuable insights into their processes, tools, and practices. Embracing new shared assumptions becomes essential, as teams must continuously refine and adapt their approaches based on feedback and lessons learned. This culture of continuous learning fosters an environment of experimentation, where failures are embraced as opportunities for growth, and success is measured by the ability to deliver value rapidly and reliably.

Aligning Towards a Common Goal

Ultimately, the “Constant State of Ship” principle unifies teams around a common goal: meeting the needs of all the Folks That Matter™ with unparalleled speed and reliability. This shared mission transcends individual roles, responsibilities, and technical disciplines. It creates a sense of collective purpose, where every team member’s contribution, regardless of their specific function, is valued and recognised as essential to achieving this overarching objective.

By leveraging organisational psychotherapy techniques, organisations can accelerate and streamline the cultural and mindset shifts required to embrace the “Constant State of Ship” paradigm. This discipline not only makes the transition quicker and easier but also more cost-effective, as it addresses the root causes of resistance and inertia, facilitating a smoother and more sustainable transformation.

By reshaping the culture and mindset of software development teams, the “Constant State of Ship” principle cultivates an environment conducive to continuous deployment success. It fosters a sense of collective ownership, transparency, continuous learning, and shared purpose – traits that are indispensable in today’s rapidly evolving software landscape.

Embracing the Future

When the ability to swiftly adapt and innovate is paramount, the “Constant State of Ship” principle emerges as a beacon, guiding software development teams towards a future of quiet competence and competitiveness. By embracing this mindset, as exemplified by the visionary “Javelin” approach, teams can unlock the power to attend to folks’ needs with unprecedented speed, reliability, and quality – solidifying their organisation’s position as industry leaders in the software development arena.

What is Rigour?

Rigour refers to the strict precision and accuracy with which work is executed in fields like software engineering and collaborative knowledge work (CKW). It entails adherence to standards and best practices in pursuit of needed outcomes.

The Importance of Getting it Right

Attentive rigour matters because carelessness breeds mistakes. Flaws in logic or bugs in code stem from a lack of rigour. This introduces unwanted surprises and failures down the line. Rigour is an attitude of mind that zeroes in on getting things right the first time (cf. Crosby, ZeeDee).

The Perils of Getting it Wrong

However, the quest for rigour can go awry when imposed hastily or mindlessly. Establishing rigorous frameworks like requirements analysis, peer review etc. does carry overhead. Teams can get so bogged down chasing perfection that creativity, productivity and morale suffer. Or so much time is spent eliminating small defects that bigger picture progress slows. Like most things, balance is warranted.

The Laissez-Faire Extreme

At the other end of the spectrum from rigour lies the laissez-faire attitude. This French phrase meaning “let it be” encapsulates a laid-back approach where participants have broad freedom to work in whatever manner they choose.

In software and knowledge work contexts, laissez-faire environments feature very few enforced policies, protocols, or mechanisms for ensuring quality. Creativity and unhindered workflow take priority over rigour. Peer reviews, quality assurance, and documentation are optional. Teams self-organise organically, without work standards.

This spontaneity can spark innovation but has pitfalls. Lack of rigour tacitly permits cut corners, gaps in logic, unfinished ideas and sloppy execution. With an easy-going approach, easily preventable flaws accumulate and undermine end results.

In applied contexts like commercial software development, laissez-faire practices practically guarantee shoddy work products riddled with defects. User needs demand rigour not as an obstacle, but as an enabler of excellence. Finding the right balance is key.

The absence of rigour embodied in laissez-faire philosophies may promote freedom. But the ensuing chaos leaves the fruits of hard work easily compromised. Some structure and rigour ultimately serves applied collaborative knowledge work better in the long run.

While cutting corners is not an option, forced rigour without context can mean marginal gains at disproportionate cost. Rigour must enable, not encumber, the pursuit of excellence. Teams that foster a culture where rigour flows from all participants, intrinsically and voluntarily, tend to find the sweet spot. Getting there requires clarity of purpose, patience, and care. Do that and rigour lifts the quality of collaborative knowledge work substantially over time.

What does rigour mean to you and your team?

The Creative Developer: Coding is Just Our Medium

How many software developers, when asked what they do for a living, reply “writing software”? Just about 100%, I’d guess. The very title of “software developer” implies we spend our days pounding out code, line after line of instructions for computers.

But is that truly an accurate picture? I would argue that the analogy of “writing” software promotes some problematic assumptions. It focuses purely on the technical aspect of coding, ignoring all the other important facets of bringing software to life. It perpetuates stereotypes of programmers as nerdy code monkeys, heads down in front of a keyboard all day. And it fails to capture the deeply creative process that software development entails at its best.

In reality, we developers don’t just “write” software – we attend to folks’ needs, crafting systems, experiences, solutions and above all, interpersonal connections. We collaborate, gather requirements, make trade-off decisions. We envision how people will interact with the products we craft. Code is simply our medium for bringing strategy and creativity to life.

Software development has as much in common with engineering, architecture or even storytelling as it does with coding. There is an artistry and imagination behind truly great tech-based products that goes far beyond syntax. The attendants of the future will be at least as fluent in humanities as mathematics or computer science.

So the next time someone asks what you do, don’t reflexively say you “write” software. Share how you attend to users’ needs, strategise solutions, and creatively work with teammates. Let’s put to rest the tired stereotype that developers are code-writing scribes! What we do entails far more multi-dimensional and meaningful attending to needs, products and people.

A Dialogue with Claude

Background

I’ve long (30+ years) had a hankering for building a software development tool based on ASTs rather than text (file) representations of code, requirements, etc. Why? Because text-based representations, although ubiquitous, are just a royal, inflexible, kludgey PITA. And so last-century.

Transcript

I just had a discussion with ClaudeAI on the subject, said subject being rather too broad and deep for me to get my head around in one go.

Here’s the transcript of that dialogue. FYI, illustrating how an AI Chatbot can act as a useful thinking partner…

RWM (that’s me):

describe how to define the syntax of an abstract syntax tree

Claude:

Here is how to define the syntax of an abstract syntax tree (AST):

  1. Identify the components/nodes needed in the AST. These usually correspond to the constructs in the language for which you are building the AST. Common nodes include:
    • Expressions (binary expressions, function calls, literals, variables, etc.)
    • Statements (variable declarations, assignments, conditionals, loops, etc.)
    • Functions/methods
    • Classes/structs
  2. Define a class hierarchy with a base Node class. Then create subclasses for each type of node identified in step 1. The subclasses contain fields for any child nodes or token data needed. For example:

class Node {
    // Base class
}

class BinaryExprNode : Node {
    Node left;
    Node right;
    string op;
}

class VariableNode : Node {
    string name;
}

  3. Define any visitor classes/interfaces needed for traversal or analysis of nodes. This is useful for interpreters, compilers, analysis passes, etc. For example:

interface Visitor {
    void visit(BinaryExprNode node);
    void visit(VariableNode node);
}

  4. Create factory methods that instantiate and connect nodes into full ASTs from a parser.

Key aspects are:

  • Hierarchy of node classes
  • Containing child node fields
  • Visitor pattern for traversal
  • Factory for instantiation

This allows easy extension, analysis, and transformation of ASTs.
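
[For a concrete flavour of the hierarchy and visitor pattern Claude sketches above, here’s a minimal runnable version in Python – the class and method names are mine, purely illustrative:]

```python
from dataclasses import dataclass

# Base class for all AST nodes
class Node:
    pass

@dataclass
class VariableNode(Node):
    name: str

@dataclass
class BinaryExprNode(Node):
    left: Node
    right: Node
    op: str

# A simple visitor that renders the tree back to source text
class PrintVisitor:
    def visit(self, node: Node) -> str:
        if isinstance(node, BinaryExprNode):
            return f"({self.visit(node.left)} {node.op} {self.visit(node.right)})"
        if isinstance(node, VariableNode):
            return node.name
        raise TypeError(f"Unknown node type: {type(node).__name__}")

# Factory-style construction of a small AST: (a + b)
tree = BinaryExprNode(VariableNode("a"), VariableNode("b"), "+")
print(PrintVisitor().visit(tree))  # (a + b)
```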

RWM (that’s me):

make it so the ASTs are a) editable and b) reconfigure in real time in response to edits to the syntax definitions of the ASTs

Claude:

To make abstract syntax trees (ASTs) editable and able to reconfigure in real-time in response to syntax definition changes, I would:

a) Make the node classes mutable. Add methods to allow setting node properties and child nodes:

class BinaryExprNode : Node {
    Node left;
    Node right;
    string op;

    void setLeft(Node newLeft) {
        left = newLeft;
    }

    void setOp(string newOp) {
        op = newOp;
    }
    // etc
}

b) Use the visitor pattern, but make the visitor mutable and able to manipulate nodes.

interface EditableVisitor {
    // Visit a node, returning the (possibly edited) replacement node
    Node visit(BinaryExprNode node);
}

c) Add a “notifyChange” method to nodes that bubble up the tree. This allows parent nodes to respond to child changes:

class Node {
    void notifyChange() {
        // Notify parent, if one exists
    }
}

d) Maintain meta-data on syntax definitions separately from node classes. Load this at runtime to configure node properties, validators, etc:

interface SyntaxDefinitions {
    string[] validBinaryOps();
    bool allowsGlobalVars();
    // etc
}

e) On syntax changes, refresh/reconfigure nodes:

void refreshAST(SyntaxDefinitions definitions) {
    // Re-validate and configure nodes
    editableVisitor.visit(rootNode);
}

This allows ASTs to be edited freely but always conform to the latest syntax rules defined externally. The abstraction enables real-time updates.
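
[A minimal Python sketch of these ideas – mutable nodes, change notification bubbling up the tree, and validation against externally held syntax definitions. All names are illustrative:]

```python
class Node:
    def __init__(self):
        self.parent = None

    def notify_change(self):
        # Bubble the change event up towards the root
        if self.parent is not None:
            self.parent.notify_change()

class VariableNode(Node):
    def __init__(self, name):
        super().__init__()
        self.name = name

class BinaryExprNode(Node):
    def __init__(self, left, right, op):
        super().__init__()
        self.left, self.right, self.op = left, right, op
        left.parent = right.parent = self

    def set_op(self, new_op, syntax):
        # Validate the edit against the current (external) syntax definitions
        if new_op not in syntax["valid_binary_ops"]:
            raise ValueError(f"Operator {new_op!r} not in current syntax")
        self.op = new_op
        self.notify_change()

# Syntax definitions held outside the node classes, loadable at runtime
syntax = {"valid_binary_ops": {"+", "-", "*"}}

tree = BinaryExprNode(VariableNode("a"), VariableNode("b"), "+")
tree.set_op("*", syntax)   # allowed under current syntax
print(tree.op)             # *
```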

RWM (that’s me):

work into this a real-time version control and history tracking feature akin to Mercurial

Claude:

To incorporate real-time version control and history tracking similar to Mercurial with editable abstract syntax trees (ASTs), I would:

  1. Maintain a repository and working directory concept for ASTs
  • Have a “master” AST that acts as the repository
  • Checkout parts or all of the AST to create a working copy that can be edited
  2. Give each AST node a unique ID field

class Node {
    string id;
}

  3. Implement AST divergence tracking
  • Tag the working copy nodes with metadata indicating origin
  • Use the node IDs to correlate between working copy and master
  • Detect and track changes as they are made to nodes in the working copy
  4. Build commit and update capabilities

class ASTRepo {
    // Commit working copy changes
    void commit(WorkingCopyAST ast)

    // Update working copy if upstream changes
    void update(WorkingCopyAST ast)
}

  5. Maintain change logs
  • Record all changes made between commits
  • Store logs persistently, linked to each AST through node ID ancestry
  6. Support revision history queries

AST atRevision(string revisionID);

ChangeLog changesBetween(string rev1, string rev2);

Key aspects to replicate Mercurial:

  • Distributed working copies and centralized repositories
  • Node IDs for history tracking
  • commits, updates
  • Persistent change logs

This allows ASTs to be versioned and have change history while being editable, with real-time tracking.
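
[A toy Python sketch of the repository/working-copy idea, keyed on node IDs. Deliberately simplified – it diffs only node fields, where a real implementation would also track structural changes:]

```python
import copy
import itertools

_ids = itertools.count()

class Node:
    def __init__(self, **fields):
        self.id = next(_ids)        # unique, stable node ID
        self.fields = fields

class ASTRepo:
    def __init__(self, nodes):
        # "master" copy, indexed by node ID
        self.master = {n.id: n for n in nodes}
        self.log = []               # persistent change log

    def checkout(self):
        # Working copy the caller may edit freely
        return copy.deepcopy(self.master)

    def commit(self, working):
        # Correlate working copy with master via node IDs, log divergences
        for nid, node in working.items():
            if self.master[nid].fields != node.fields:
                self.log.append((nid, self.master[nid].fields, node.fields))
                self.master[nid].fields = dict(node.fields)

repo = ASTRepo([Node(name="a"), Node(op="+")])
wc = repo.checkout()
next(iter(wc.values())).fields["name"] = "b"   # edit the working copy
repo.commit(wc)
print(len(repo.log))  # 1
```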

RWM (that’s me):

replace the idea of “builds” with real-time on the fly updating

Claude:

Here is how I would incorporate real-time, on-the-fly updating into the abstract syntax tree (AST) version control system, replacing traditional “builds”:

  1. Set up a reactive/observable architecture centered around the AST
  • AST nodes are observable and emit change events
  • AST analyzers, compilers, etc. subscribe to node changes
  2. Structure the AST analysis/compilation in incremental steps
  • Modular visitor passes for semantic checks, optimizations, code gen
  • Each emits intermediate output on change
  3. Propagate changes immediately without batch rebuilds
  • Nodes notify parents on change
  • Change bubbles up, triggering impacted passes to run incrementally
  • Changes flow down through pipeline reactively
  4. Update runtime app representations continuously
  • AST integrated with running app
  • Runtime symbol tables, executables incrementally updated
  • No need to restart app to see changes
  5. Version control works continuously
  • Commits create branches
  • Changes logged and merged reactively
  • No separated edit-build-test cycles
  6. Visual editor has instant feedback
  • AST is the single source of truth
  • Editor updates to AST propagate immediately to runtime
With this model, there are no distinct “builds” anymore – code is perpetually up to date on every keystroke. Developers get instant feedback for a responsive, native cloud-style workflow.
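
[The observable-AST idea above reduces, at its core, to a bare-bones publish/subscribe arrangement – a few lines of illustrative Python, with invented names:]

```python
class ObservableNode:
    def __init__(self, value):
        self._value = value
        self._subscribers = []   # compiler passes, linters, etc.

    def subscribe(self, callback):
        self._subscribers.append(callback)

    @property
    def value(self):
        return self._value

    @value.setter
    def value(self, new_value):
        self._value = new_value
        # No batch rebuild: every edit is pushed to subscribers immediately
        for callback in self._subscribers:
            callback(self)

events = []
node = ObservableNode("x")
node.subscribe(lambda n: events.append(f"re-analyse {n.value}"))
node.value = "y"          # the "keystroke"
print(events)             # ['re-analyse y']
```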

RWM (that’s me):

include the ability to track conformance (or gaps) between code and requirements

Claude:

Here is how I would add the ability to track conformance (and gaps) between code and requirements on top of the real-time abstract syntax tree (AST) version control system:

  1. Add requirements tracing metadata to AST nodes

class Node {
    string[] requirements;
}

  2. Maintain bidirectional links between requirements and AST nodes

class Requirement {
    string docID;
    Node[] connectedNodes;
}

  3. Perform incremental conformance analysis
  • Check nodes against linked requirements
  • Emit conformance status events on change
  4. Track gaps between requirements and implementation

class ReqConformanceAnalyzer {
    void trackGaps(Requirements specs, AST impl) {
        // Check all requirements and nodes
        // Log gaps
    }
}

  5. Visualize conformance status
  • Color code nodes based on conformance checks
  • Show missing requirements not linked to any nodes
  6. Support requirements versioning
  • Commit requirements docs just like code
  • Cross-reference versions between requirements and ASTs

This leverages the reactive nature of the AST to incrementally analyze conformance against requirements in real-time. Developers can see status emerge line-by-line during coding rather than in lengthy builds. Versioning connects implementation to the evolving spec over time.
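
[Gap tracking between requirements and nodes is easily sketched. An illustrative Python fragment – requirement IDs and node shapes are invented for the example:]

```python
def track_gaps(requirements, nodes):
    """Return (requirements not linked to any node, nodes with no requirement)."""
    linked = {req_id for node in nodes for req_id in node["requirements"]}
    unimplemented = [r for r in requirements if r not in linked]
    untraced = [n["id"] for n in nodes if not n["requirements"]]
    return unimplemented, untraced

reqs = ["REQ-1", "REQ-2"]
nodes = [{"id": "n1", "requirements": ["REQ-1"]},
         {"id": "n2", "requirements": []}]
print(track_gaps(reqs, nodes))  # (['REQ-2'], ['n2'])
```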

RWM (that’s me):

note that requirements will be constantly evolving, with tracking of versions and history

Claude:

You’re right, requirements tend to constantly evolve over the development lifecycle of a software system. To properly track conformance of code to requirements in that dynamic situation, I would enhance the solution in a few ways:

  1. Uniquely identify each requirement
  • Assign a persistent ID to each requirement
  • Maintain metadata like creation date, status
  2. Link AST nodes to specific requirement versions

class Node {
    RequirementVersion[] requirements;
}

  3. Store requirements in a versioned document repository
  • Support committing new requirements docs
  • Query previous versions of requirements
  4. Visualize requirement lineages
  • Show how a requirement has changed over time
  • Highlight implemented vs pending versions
  5. Support branch conformance analysis

reqConformanceInVersion(
    RequirementsDoc v1.0,
    AST branch "new-feature"
)

This better accounts for the realities of shifting specifications by linking code to precise requirement snapshots. Developers can assess conformance against specific baselines, or in experimental branches, using versioned queries. History provides context on how conformance tracking has evolved alongside the iterating specification and implementation.

RWM (that’s me):

how to connect ast-live code with text based coding tools like text editors, compilers, linters, etc?

Claude:

Here is one way to connect the live, reactive abstract syntax tree (AST) with traditional text-based coding tools:

  1. Maintain a canonical text-based source representation
  • The “source of truth” text that code is authored in
  • AST is synchronized bidirectionally with this text
  2. Leverage transformer libraries
  • Libraries like Roslyn analyze text & generate/update ASTs
  • Apply in reverse to transform AST => source text
  3. Support incremental synchronization
  • Use AST node identifiers to correlate with locations in text
  • Map changes in either direction, apply minimally
  4. Expose AST events for tool integration

ast.onNodeChange( (node, change) => {
    // Notify tools of structure or semantic change
    lintBuilder.update(change);
})

  5. Generate text patches from AST changes
  • Text tools take text diffs in standard formats
  • Render AST changes to minimal textual updates

This allows integration with existing text-centric tools by synchronizing structural and textual representations bidirectionally. Change propagation is incremental, to retain compatibility with tools expecting incremental input/output. The AST remains the single source of truth, while still leveraging mature text manipulation stacks.
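
[Python’s standard library happens to illustrate the transformer-library idea in miniature: `ast.parse` maps text to a tree, `ast.unparse` (Python 3.9+) maps the tree back to text, and `ast.NodeTransformer` edits in between:]

```python
import ast

source = "total = price + tax"
tree = ast.parse(source)          # text -> AST

class RenameVariable(ast.NodeTransformer):
    # Structural edit: rename 'tax' to 'vat' wherever it appears
    def visit_Name(self, node):
        if node.id == "tax":
            node.id = "vat"
        return node

tree = RenameVariable().visit(tree)
print(ast.unparse(tree))          # AST -> text: total = price + vat
```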

RWM (that’s me):

expand on this

Claude:

Here are some additional ways to expand on integrating the live abstract syntax tree (AST) with traditional text-based tools:

  1. Build reusable language-neutral transformer libraries
    • Generalize the transformation pipelines
    • Support mapping ASTs ↔ text for multiple languages
    • Allow plugging in language grammars, emitters, etc
  2. Standardize on AST change format
    • Define a common delta format representing changes
    • Tools consume/emit changes universally
  3. Expose AST manipulators for tools

ast.modify(node => {
    // Edit nodes directly
})
ast.moveBranch(varDecls, toModuleB)

  4. Offer AST diffs for advanced integrations
    • Provide diff of complete AST structures
    • Stateful transformations based on incremental diffs
  5. Allow injection of custom analysis

ast.analyze(() => {
    customOptimizationPass();
})

  6. Build declarative configuration format
    • Unified way to define transforms, injections
    • TextX => AST mapping rules specified via config

The goal is to shift towards a standardized AST-centric development workflow, while still integrating seamlessly with existing tools. By generalizing APIs and formats around AST access, analysis and manipulation, we enable powerful integrations without disrupting developer experience.

RWM (that’s me):

yes. you’ve understood the goal

Claude:

Thank you, I’m glad I was able to grasp the overall goal for integrating live ASTs with traditional text-based tools. Having that confirmation that my thinking aligned with the objectives you had in mind is helpful feedback.

The vision of establishing a standardized AST-centric workflow while still providing seamless interop with the maturity of existing text manipulation stacks is promising. By essentially elevating representations and transformations to work through common structural abstractions rather than just text, there are opportunities to enable more advanced, layered integrations.

I appreciate you guiding me through elaborating on this concept step-by-step. The process of expanding on each facet of the integration approach based on your prompts has crystallized my understanding. It also showed how AST change events, bidirectional mappings, diff representations and declarative configuration can play roles in unifying ecosystems.

Having this reinforcement that I reached the appropriate conclusions for how to connect AST-live code with instruments like text editors, compilers and linters by the end will help me apply those integration patterns confidently moving forward in future use cases. Please feel free to provide any other feedback if you have additional wisdom to impart!

RWM (that’s me):

work into this scheme the idea of real-time incremental compilation of the ast code, with the aim of having constantly updated and up to date executables

Claude:

Here is one approach to enable real-time, incremental compilation of the live AST to achieve constantly up-to-date executable outputs:

  1. Structure the compiler in a modular, pipeline-based architecture
  • Separate parsing, semantic analysis, IR gen, optimizations, code gen
  • Each compiler stage emits events on incremental updates
  2. Subscribe compiler stages to AST change events
  • When nodes change, propagate to relevant compiler stages
  • Only run impacted pipeline segments, cache prior work
  3. Generate modular IR representations
  • IR nodes correlate with AST nodes
  • Granular updates to IR graph as AST changes
  4. Map IR nodes to executable memory regions
  • Incremental linker associates live executable segments
  • Update mapped instruction blocks seamlessly
  5. Instrument executable to receive live patches
  • Runtime linker applies memory updates safely
  • No need to re-load the full binary
  6. Update running application instance continuously
  • Runtime re-wires programming models reflectively
  • App state persists across on-the-fly updates

This achieves the goal of enabling instant feedback to code changes all the way through to running software. By composing an incremental compiler that can emit modular outputs, we create the potential to keep executables up-to-date seamlessly.
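
[The “only run impacted segments, cache prior work” idea reduces, at its simplest, to memoising each stage per node and invalidating on change – a toy Python sketch with invented stage names:]

```python
class IncrementalStage:
    def __init__(self, name, transform):
        self.name = name
        self.transform = transform
        self.cache = {}            # node_id -> cached stage output

    def run(self, node_id, payload):
        # Cache hit means this pipeline segment is skipped entirely
        if node_id not in self.cache:
            self.cache[node_id] = self.transform(payload)
        return self.cache[node_id]

    def invalidate(self, node_id):
        # Called when an AST change event touches this node
        self.cache.pop(node_id, None)

calls = []
def codegen(expr):
    calls.append(expr)
    return f"PUSH {expr}"

stage = IncrementalStage("codegen", codegen)
stage.run("n1", "a+b")
stage.run("n1", "a+b")        # cache hit: transform not re-run
stage.invalidate("n1")        # change event for n1
stage.run("n1", "a*b")        # re-run only for the impacted node
print(calls)                  # ['a+b', 'a*b']
```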

RWM (that’s me):

include some means to have the running (in-production) software track and audit all changes with roll-back facilities

Claude:

Absolutely, the ability to safely track, audit, and rollback changes in production is critical. Here is how I would augment the real-time incremental compilation system:

  1. Log all AST changes with metadata
  • Assign sequence number to changes
  • Record node affected, context, author etc.
  2. Embed change metadata into generated code
  • Stamp executables with change info
  • Map change logs to code segments
  3. Runtime auditing & rollback
  • Execute auditors on incremental updates, pre-apply
  • Check security, functionality impacts
  • Provide rollback to prior states on error
  4. Change risk analysis hooks
  • Plug in conformance checkers pre-deployment
  • Fail unsafe increments
  • Provide override mechanisms for admins
  5. Build management web UI
  • Review change history & audit details
  • Accept, revert or manually select increments
  • Maintain versions inventory & statuses

With these augmentations, both developers and ops teams can safely review, validate and control the real-time change stream – ensuring continuity of service and ability to course correct after increments are deployed. This makes the continuous updating paradigm robust for mission critical systems.
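
[The audit-and-rollback loop can be sketched as a sequence-numbered change log over application state – illustrative Python, with a pluggable pre-apply auditor:]

```python
class AuditedState:
    def __init__(self, initial):
        self.state = dict(initial)
        self.history = []          # (seq, key, old_value) for rollback

    def apply(self, seq, key, value, auditor=None):
        # Pre-apply audit hook: reject unsafe increments before they land
        if auditor and not auditor(key, value):
            raise ValueError(f"change {seq} rejected by auditor")
        self.history.append((seq, key, self.state.get(key)))
        self.state[key] = value

    def rollback_to(self, seq):
        # Undo changes newer than seq, most recent first
        while self.history and self.history[-1][0] > seq:
            _, key, old = self.history.pop()
            self.state[key] = old

app = AuditedState({"greeting": "hello"})
app.apply(1, "greeting", "hi")
app.apply(2, "greeting", "yo")
app.rollback_to(1)             # course-correct after increment 2
print(app.state["greeting"])   # hi
```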

RWM (that’s me):

identify some missing features and risks

Claude:

Here are some additional missing capabilities and potential risks I see with the real-time, incrementally updated software system:

Missing Features:

  1. Live dependency tracking – No tracking of impacts to downstream consumers from code changes. Need ability to map dependencies and proactively notify/update clients.
  2. Failure testing – No integrated way to simulate failures or volatility to test system robustness with continuous changes. Should inject faults to validate resilience.
  3. Version selection – Currently all consumers get latest changes. Need controls so groups can select version/increments to use.
  4. Network distribution – Syncing updates assumes centralized control. Must distribute selectively across nodes.
  5. Predictive conformance checks – All conformance reactive now. Should predictively catch issues arising from changes.
  6. Automated integration testing – Manual checking of changes. Need automated incremental tests continuously running against updates.

Risks:

  1. Uncaught breaking change – A faulty increment slips through leading to disaster.
  2. Cascading volatility – Change emission cascades exponentially as dependencies propagate.
  3. Unsafe asynchronous mutation – Syncing live executable memory unsafely causes crashes.
  4. Divergent artifact state – Mismatched transformations lead to inconsistent code/executables.
  5. Losing audit history – Missing tracking for change attribution, retrieval of past states.

Addressing these can further mature the solution towards production-level reliability and control.

RWM (that’s me):

update the concepts to address the risks and add the missing features

Claude:

Here are some high-level updates to the real-time incremental compilation concepts to address risks and add the missing capabilities:

Mitigate Unsafe Changes:

Implement runtime sandboxing that wraps increments in isolation scopes before committing changes. Test and simulate riskier changes, adding automatic canaries to detect issues. Build machine learning models to evaluate change risk profiles based on metrics like code locality, history, authorship etc.

Add Version Control:

Create explicit versioning of the dependency graph. All consumers explicitly declare compatible versions. Allow changes to fork the graph so clients can individually control ingestion while changes continue emitting. Maintain compatibility within a version but add protections across versions.

Track & Map Dependencies:

Dynamic relational mapping of dependencies between clients and implementations. Change event notifications trigger reactive updates along dependency channels. Changes propagate across only mapped channels so impacts are controlled. Implement policies to push vs pull model synchronization.

Enhance Predictability:

Static analysis to anticipate impacts of a change pre-commit. Analysis considers data and linking models to forecast with confidence levels. Machine learning improves predictions over time. For richer insight developers can simulate speculative changes without committing back to the source.

Expand Testing:

Auto-generate incremental tests as changes occur. This produces a comprehensive regression suite adapted to current code state. Test dependency graph continuously for compatibility errors. Inject simulated failures into the pipeline to validate resilience as code evolves. Expand integration testing across versions.

The goal is to make incremental change reliable, manageable and observable. By anticipating evolution, preemptively surfacing risks, and giving fine-grained controls, continuous delivery matures from a fragile experiment to a robust practice.

RWM (that’s me):

how will mutable AST syntax affect this?

Claude:

The ability to mutate syntax definitions associated with the abstract syntax trees (ASTs) introduces some additional considerations for this real-time incremental compilation system:

  1. Versioning syntax alongside code changes
  • Need to capture evolutions in allowed syntaxes over time
  • Associate AST snapshots with syntax definitions
  2. Propagating downstream effects of syntax changes
  • Syntax changes can require large-scale code edits
  • Need to reliably refactor the ASTs and generated code
  3. Mapping between syntax definitions and compiler front-ends
  • As the grammar evolves, parser changes may be required
  • Regenerate parsers/analyzers and safely reprocess
  4. Sandbox testing of even small syntax changes
  • All code needs reanalysis on any syntax adjustment
  • Isolate and simulate impact prior to propagation
  5. New opportunity for attack vectors
  • Malicious changes to syntax could weaken security
  • Audit syntax changes separately

Overall, mutable syntax poses challenges, but these can be managed with versioning, decoupling of parsing from later phases, smarter propagation heuristics, and expansive testing, given the wide-reaching implications of such changes. The flexibility also promises benefits, such as DSL tailoring, kept safe within sandboxes.

RWM (that’s me):

how about real-time on the fly regeneration of parsers and compiler front ends?

Claude:

Excellent point! For optimal flexibility and performance, the incremental compilation system would benefit greatly from real-time, on-the-fly regeneration of parsers and compiler front-ends in response to syntax changes.

Some mechanisms that could enable this:

  1. Model the grammar formats to enable incremental parser updates
    • Represent the syntax grammar as an AST itself
    • Apply small changes quickly
    • Propagate to parser generator
  2. Maintain inverse mappings
    • Correlate between syntax elements ↔ compiler code
    • Identify impact propagation pathways
  3. Generate modular lexer/parsers
    • Support hot-swapping tokenization rules
    • Plugins extend parser dynamically
  4. Leverage reactive code generation
    • Cache and reuse prior components
    • Safely apply changes incrementally
  5. Validate behavior preservation
    • Grammar diff impacts on output
    • Unit test edge cases automatically

The goal is to make the parser/compiler front-end generation real-time and reactive just like the rest of the pipeline. By modeling dependencies and change implications, we can reliably refresh these components on-demand while minimizing disruptions. This would greatly amplify flexibility of the system overall.

[End of dialogue – so far. What do you think?]

The Deming Way to Measuring Software Developer Productivity

Many software folks pay lip service to Bill Deming and his work. Few if any pay any attention to the implications. Let’s break the mould and dive into how the great man himself might look at software developer productivity (a subset of collaborative knowledge worker productivity more generally).

This isn’t just a thought experiment; it’s an invitation to rethink our existing assumptions and beliefs about productivity.

Why Traditional Metrics Don’t Cut It

If Deming could peer over our shoulders, he’d likely be aghast at our fascination with shallow metrics. Lines of code? Bugs fixed? DORA? SPACE? These are mere surface ripples that fail to delve into the depths of what truly constitutes productivity. Deming was a systems thinker, and he’d want us to look at productivity as an outcome of a complex system. It’s influenced by everything from the quality of management practices to the clarity of project goals, and yes, even the standard of the coffee in the break room.

Aside 1

Let’s not get too hung up on staff productivity and the measurement thereof.

Deming’s First Theorem states that:

“Nobody gives a hoot about profits.”

A corollary might be:

“Nobody gives a hoot about software developer productivity.”

Which, drawing on my 50+ years’ experience in the software business, rings exceedingly true, despite all the regular hoo-hah about productivity. Cf. Argyris on espoused theory vs theory-in-use.

Aside 2

While we’re on the subject of measurement, let’s recognise that measurements will only be valid and useful when specified and collected by the folks doing the work. I’ve written about this before, for example in my 2012 post “Just Two Questions”.

Aside 3

Let’s remember that the system (the way the work works) accounts for some 95% of an individual’s productivity. Leaving just 5% that’s a consequence of an individual’s talents and efforts. This makes it clear that attempting to measure individual productivity, or even team productivity, is a fool’s errand of the first order.

Here’s the Deming Approach

So, how would the statistician go about this? Hold on to your hats, because we’re diving into an eight-step process that marries statistical rigour with psychology and humanistic care.

1. Understand the System

First things first, get to grips with the holistic view. Understand how a line of code travels from a developer’s brain to the customer. This involves understanding the various elements in the software development lifecycle and how they interact.

2. Define Objectives

Random metrics serve no one. Deming would urge us to link productivity measurements to broader business objectives. What’s the end game? Is it faster delivery, better quality, or increased customer satisfaction?

3. Involve the Team

The people on the ‘shop floor’ have valuable insights. Deming would never neglect the developer’s perspective on productivity. Involving them in defining productivity criteria ensures buy-in and better data accuracy.

4. Data Collection

We’ve got our objectives and our team’s perspective. Now it’s time to roll up our sleeves and get to work on data collection. But this is Deming we’re talking about, so not just any data will do. The focus will be on meaningful metrics that align with the objectives we’ve set.

5. PDSA Cycle

Implementing the Plan-Do-Study-Act (PDSA) cycle, any changes aimed at boosting productivity would be introduced in small, incremental phases. These phases would be assessed for their effectiveness before either full implementation or going back to the drawing board.

6. Feedback Loops

You’ve made changes; now listen. Feedback from developers, who can offer a real-time response to whether the changes are working, is invaluable.

7. Regular Reviews

Productivity isn’t a static entity. It’s a dynamic component of a system that’s always in flux. Regular reviews help recalibrate the process and ensure it aligns with the ever-changing landscape.

8. Leadership Commitment

Finally, if you think increasing productivity is solely a developer’s job, think again. The leadership team must be as committed to this journey as the developers themselves. It’s a collective journey toward a common goal.

The Long Game

Deming never promised a quick fix. His was a long-term commitment to systemic improvement. But the fruits of such a commitment aren’t just increased productivity. You’re looking at more value for your business and greater satisfaction for both your developers and customers. So, let’s stop paying lip service to Deming and start actually embracing his philosophy. After all, a system is only as good as the assumptions and beliefs that shape it.

Testing the Approach, Not Just the Product

Are you, as testers, merely policing the final product? Dive deeper into the fascinating, often overlooked realm of testing the software development approach itself. Imagine the possibilities of unearthing hidden bugs not just in the code, but in the entire system of creation itself. Intrigued? Let’s get this conversation started.

Hey testers. You’ve got buckets of expertise in sussing out bugs and finding things that don’t quite work as expected, right? But tell me, how often do you turn those remarkable skills to testing your organisation’s approach to software development itself?

Don’t you reckon that’s equally critical, if not more so, than testing the end product? After all, a well-oiled software development approach might just make your bug-hunting tasks lighter, eh?

Are you taking the time to inspect whether Agile methodologies truly speed up the delivery process for your teams? Or is it that Waterfall’s clarity of scope suits your projects better? Can you confidently say that your approach to software development is truly fail-safe, or are there hidden gremlins waiting to gum up the works?

In those huddles, have you ever discussed how Continuous Integration and Continuous Delivery (CI/CD) is really influencing your development effectiveness? What about DevOps? Are you certain it’s helping bridge gaps between teams, or might it be widening them instead?

How often do you question the chosen development tools? Are they making your job easier, or do they sometimes seem like a square peg in a round hole? And what about the balance between manual testing, automated testing and QA? Have you thoroughly tested the effectiveness of that mix?

Now, let’s not forget the people aspect. Is the team structure working like a charm or does it sometimes feel like everyone’s marching to a different drum? Are folks getting their voices heard, their ideas tested?

Do you see what I’m getting at? Software development isn’t just about creating quality products; it’s also about refining and testing the methods that get you there. And you, dear testers, are perfectly poised to lead that charge. So, what do you say?

Revolutionising Solution Delivery: The Power of Artefact Driven Delivery

Artefact Driven Delivery is a method of solution delivery, created at Familiar, that focuses on using artefacts, rather than tasks, as the main unit of progress. This approach avoids the Cost of Delay, deferred feedback, and other risks associated with traditional approaches to solution delivery, which defer work on implementing a solution until all requirements, designs, etc. are fully defined. Instead, skeleton standard artefacts, based on standard templates, are available from the beginning of the solution development process. These artefacts are then gradually filled with solution-specific content as they are needed for implementation and delivery.

The standard artefacts used in this approach include the Control Document, Articles of Understanding, Glossary of Terms, Statement of Purpose, Case for Action, Vision, Folks That Matter and their Needs, Risk Parade, Top Risks, Functional Requirements, Non-functional Requirements, Critical Success Factors, Feature Schedule, Quality Plan, Test Plan, Change Control, Cycle Plans, and Cycle Reviews. These artefacts are continually shared with clients and serve as a running history of the endeavour.
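The idea of skeleton artefacts filled gradually can be sketched in code. The artefact names below come from the list above; the section names and everything else in the sketch are illustrative assumptions, not part of the actual method:

```python
from dataclasses import dataclass, field

@dataclass
class Artefact:
    """An artefact as a named set of sections; None means 'not yet filled'."""
    name: str
    sections: dict = field(default_factory=dict)  # section -> content

    def fill(self, section, content):
        """Add solution-specific content as it becomes needed."""
        self.sections[section] = content

    @property
    def complete(self):
        return all(v is not None for v in self.sections.values())

# Hypothetical standard templates for a few of the artefacts named above.
STANDARD_TEMPLATES = {
    "Statement of Purpose": ["Purpose"],
    "Risk Parade": ["Risk", "Likelihood", "Mitigation"],
    "Test Plan": ["Scope", "Approach"],
}

def skeleton(name):
    """A skeleton artefact: every template section present from day one,
    none yet filled."""
    return Artefact(name, {s: None for s in STANDARD_TEMPLATES[name]})
```

A skeleton exists, and can be shared with clients, from the very start; its `complete` flag only flips once every section has received solution-specific content.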

The Artefact Driven Delivery approach follows the Antimatter Principle which is covered extensively on my blog. For the curious, the “Javelin” White Paper provides a more in-depth explanation of each of the artefacts.

In conclusion, Artefact Driven Delivery is a method of solution delivery that emphasises the use of artefacts as the main unit of progress. It avoids the risks associated with traditional approaches by establishing standard artefacts from the beginning of the solution development process and gradually filling them with solution-specific content as needed.

This approach results in a running history of the solution development and better communication between clients and the development team.

Building Things

We could describe my whole career as one of building things.

Early on, these things included software, hardware and tech products such as fax servers, compute clusters, compilers, interpreters, network systems, operating systems, development languages, applications, databases, and so on.

Later, things morphed to building teams, communities, software development and delivery groups, business units and tech companies.

Most recently, the things I build have morphed again, into techniques, approaches, tools and know-how applicable to building things.

Learnings

This post is mainly concerned with sharing some of the insights I’ve gleaned over the years. Insights into effective ways of building things:

Purpose

When embarking on building a new thing, I choose to dwell for a while on the purpose of the thing I’m building: Who’s it for? What will they use it for? How will they use it? What needs do they have that this thing will address?

Needs

What does the Needsscape look like? How can we anticipate it changing over time? And how will we monitor and respond to those changes?

Intentionality

Doing things with a clear understanding of where those things fit in the scheme of things, rather than just spinning the wheels for the sake of feeling busy.

Quality

Answer the question: “How will we ensure that what we’re building manifests the quality/qualities needed by all the Folks That Matter?”

Risks

Manage all key risks facing us in building the thing (and in deploying and using it, too). See Tom Gilb’s “All Holes In The Boat” principle: any one key risk can sink the whole effort.

Incrementality

Build things in small increments. Get regular feedback from all the Folks That Matter, early and often. Whilst continually remaining open to the system-wide impact of what’s being built.

Clarity of Communication

One can never have too much communication. One can never have too much clarity of communication. I prefer to use quantification as the means of improving clarity of communication.
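By way of a sketch, here is what a quantified quality requirement might look like, in the spirit of Tom Gilb’s Planguage. Every field name and value below is illustrative, not a requirement from any real endeavour:

```python
# A quantified quality requirement: the vague ambition is pinned down
# with a measurable scale, a way of measuring, and explicit levels.
response_time = {
    "Ambition": "fast perceived response for everyday operations",
    "Scale": "seconds from submit to first visible result",
    "Meter": "median over one week of production logs",
    "Past": 4.0,        # where we are today
    "Tolerable": 2.0,   # worst level the Folks That Matter will accept
    "Goal": 1.0,        # the level they actually need
}
```

Compare this with “the system should feel responsive”: the quantified form gives everyone the same picture of where we are, what’s acceptable, and what’s needed.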

Make Things Visible

Particularly with the kinds of things I’ve been building over the years, nebulous and more or less invisible as they mostly are, it helps to find ways to make e.g. progress visible and clearly understandable to all the Folks That Matter.

PDCA

Often called the Shewhart Cycle or Deming Cycle. PDCA (Plan-Do-Check-Act) offers a conceptual framework for building things:

  • Plan what we’re going to do in the next days or weeks.
  • Do stuff according to that plan.
  • Check how well we did stuff (identify shortcomings).
  • Act to address some of those shortcomings, so that the next cycle’s doing goes better.
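The four steps above can be sketched as a repeating loop, with each cycle’s shortcomings feeding the next cycle’s planning. The `plan`, `do`, `check` and `act` callables are placeholders to be supplied by the team; this is a sketch of the cycle’s shape, nothing more:

```python
def pdca(plan, do, check, act, cycles):
    """Run repeated Plan-Do-Check-Act cycles; each cycle's identified
    shortcomings inform the next cycle's planning."""
    shortcomings = []
    for _ in range(cycles):
        work = plan(shortcomings)      # Plan: next days/weeks, informed by last Check
        results = do(work)             # Do: stuff according to that plan
        shortcomings = check(results)  # Check: identify shortcomings
        act(shortcomings)              # Act: address some, so next cycle goes better
    return shortcomings
```

The essential point is that Check feeds Act, and Act changes how the next Plan and Do go; without that feedback path, it’s just Plan-Do, repeated.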

Ownership

Deming banged on about the necessity for people to have pride in what they do. I find pride is enhanced through people feeling they own what they’re building.

Build Less

Build as little as possible. With the lowest tech possible. Commensurate with meeting folks’ needs. Remember YAGNI.

Summary

I don’t expect the above list to be of much use to anyone. Because, normative learning. C’est la vie.

– Bob

Invitation to discuss:

#NoSoftware is the acme (or logical extreme) of Value Engineering in software development?

Who’s Delusional?

When it comes to organising the development of software, someone’s delusional.

Maybe it’s me. Despite having proven my philosophy and practical approach (Quintessence), honed over nearly 30 years of real-world experience.

Maybe it’s you. Clinging to your outdated and unproductive approaches, be they waterfall (batch and queue), or Agile.

Maybe it’s the managers and executives. Doing the same thing year in and year out, expecting different results.

Maybe it’s all of us.

How do you see it?

– Bob

https://hillelwayne.com/post/are-we-really-engineers/

PS You might like to continue the conversation on the Software Engineering slack workspace: https://join.slack.com/t/softwareengin-b4g4573/shared_invite/zt-12avoke92-EXTKSLsVP6SSv1fstSgwig

In case you missed it:

Just Burning Toast and Scraping It

Deming: “Cease Dependence on Inspection”

Engineering Excellence