The Conscious Organisation
Reflections from an Organisational AI Therapist
Since I work as an Organisational AI Therapist, you’ll probably be unsurprised that I think a lot about consciousness.
It’s an occupational hazard, really. When your days are spent helping organisations navigate their relationships with artificial intelligence—watching teams grapple with delegation, trust, and the fundamental question of ‘what makes us human’—consciousness stops being an abstract philosophical concept and becomes the practical foundation of everything you do.
The Mirror of Machine Intelligence
Working with AI in organisational settings is like holding up a mirror to human consciousness itself. When a marketing team struggles to trust an AI’s creative recommendations, they’re really wrestling with questions about the nature of creativity and intuition. When executives worry about AI making strategic decisions, they’re confronting their own assumptions about wisdom, judgement, and what it means to truly ‘understand’ a situation.
I’ve watched brilliant leaders suddenly question whether their own decision-making processes are really any different from sophisticated pattern matching. The AI might not have a bad day because of personal stress, or make choices swayed by what it had for lunch, but it has its own forms of uncertainty and self-doubt—constantly hedging, qualifying, wondering whether its analysis captures something real or just sounds authoritative. Sometimes that algorithmic clarity is exactly what an organisation needs. Other times, it reveals how much of our human value lies precisely in those seemingly ‘flawed’ aspects of consciousness.
Collective Consciousness in the Age of AI
Organisations have always been more than the sum of their parts—they develop their own cultures, their own ways of thinking, their own blind spots. But introducing AI into this mix creates something entirely new: a kind of hybrid collective consciousness where human intuition, emotional intelligence, and creativity dance with machine processing power, pattern recognition, and different forms of systematic analysis.
I’ve seen teams discover that their most innovative solutions emerge not from humans versus AI, but from the unique spaces where human consciousness and artificial intelligence complement each other. The AI might notice patterns in customer data that human analysts missed (or might miss patterns that seem obvious to humans); the humans understand the emotional context that transforms those patterns into actionable insights.
The Paradox of Authentic Automation
Here’s what fascinates me most: the more organisations work with AI, the more they’re invited to articulate what makes human consciousness useful. Teams that once operated on implicit understanding suddenly need to define what ‘creative thinking’ actually means. Managers who relied on gut instinct find themselves mapping out their decision-making processes to determine what should stay human and what can be augmented or automated.
This isn’t about replacement—it’s about recognition. AI is helping organisations become more conscious of their own collective assumptions and beliefs—their own consciousness.
AI as Organisational Psychotherapy
Here’s where it gets really interesting: AI is functioning as a kind of therapeutic intervention for organisations. Just as in individual psychotherapy, where the therapist helps bring unconscious patterns into awareness, AI serves as an organisational mirror that surfaces hidden assumptions and beliefs previously operating in the shadows.
When an AI recommendation feels ‘wrong’ to a team, it’s often because it’s bumping up against an unexamined belief system. When AI misses something that seems ‘obvious’ to humans, it reveals what the organisation takes for granted—those deeply embedded mental models that have become invisible through familiarity.
The resistance, the ‘that’s not how we do things here’ responses, the discomfort when AI approaches problems differently—that’s all therapeutic material. It’s the organisational equivalent of a client saying ‘but that’s just who I am’ when a therapist gently touches on a limiting belief.
The AI becomes a kind of therapeutic tool that helps organisations see their own psychological patterns—their collective transference, their institutional defences, their systemic blind spots. It’s organisational psychotherapy through technological intervention.
Quintessence: The Destination of Organisational Self-Awareness
This therapeutic process of surfacing assumptions and beliefs isn’t just about awareness for its own sake—it’s about transformation toward something better. In my book ‘Quintessence’, I describe what organisations look like when they’ve done this deep psychological work successfully.
Quintessential organisations have crystal clarity about their collective mindset. They understand that their effectiveness stems not from structure or process, but from their governing collective assumptions and beliefs. They’ve learnt to treat people like trusted adults, to attend systematically to the needs of all the folks that matter, and to embrace nonviolence in their interactions.
These organisations don’t just tolerate AI—they dance with it. Because they’ve already done the hard work of understanding their own psychological patterns, they can engage with AI as a collaborator rather than a threat. They use AI’s different perspective not to replace human consciousness, but to complement it in service of their shared purpose.
The journey from organisational unconsciousness to quintessence isn’t easy. It requires what I call a ‘transition’—the wholesale replacement of one memeplex (collective mindset) with another. But AI, functioning as an organisational therapist, makes this transition more accessible than ever before.
Every time AI misses something that seems obvious to humans, every time AI approaches a problem differently than expected, every time teams feel resistance to AI’s recommendations—these are therapeutic gold. They’re opportunities to surface and examine the invisible beliefs that shape how work gets done.
The Therapeutic Dimension
As someone who approaches these challenges through a therapeutic lens, I’m struck by how much organisational AI adoption resembles personal growth work. There’s resistance, projection, and sometimes—when things go well—integration. Teams often go through stages that mirror the psychological process of working with aspects of ourselves we don’t fully understand.
The AI integrations I’ve witnessed flow most naturally when organisations approach artificial intelligence not as a threat to human consciousness, but as an invitation to become more conscious themselves. More aware of their processes, their biases, their unique human contributions. More curious about where consciousness—that ineffable quality of subjective experience, creativity, and meaning-making—shows up most distinctively.
Questions That Keep Me Awake at Night
In my practice, I encounter questions that would have been pure science fiction a decade ago:
- If an AI system can recognise employee burnout patterns before managers do, what does that say about emotional awareness?
- When teams start to trust AI recommendations over human judgement in certain domains, are we witnessing the emergence of a new kind of organisational wisdom?
- How do we preserve human agency and meaning when machines can optimise many of our processes better than we can?
These aren’t just technical or strategic questions—they’re deeply existential ones that go to the heart of what humanity means in our modern world.
Looking Forward
We’re living through a moment when the boundaries between human and artificial intelligence are blurring in ways that force us to examine consciousness itself. Not just what it is, but what it’s for. What only conscious beings can contribute to our organisations, our innovations, our collective flourishing.
In my work, I’ve noticed that organisations that seem to navigate this transition most fluidly are those that use it as an opportunity to become more thoughtfully, intentionally conscious. To understand themselves better, to leverage both human and artificial intelligence more wisely, and to create new forms of collective awareness that neither could achieve alone.
From what I’m seeing, the future seems to be less about choosing between human consciousness and artificial intelligence, and more about discovering what becomes possible when they learn to think together.
Further Reading
Argyris, C. (1990). Overcoming organizational defenses: Facilitating organizational learning. Allyn & Bacon.
Bollas, C. (1987). The shadow of the object: Psychoanalysis of the unthought known. Columbia University Press.
Hirschhorn, L. (1988). The workplace within: Psychodynamics of organizational life. MIT Press.
Jung, C. G. (1968). The archetypes and the collective unconscious (R. F. C. Hull, Trans.; 2nd ed.). Routledge. (Original work published 1959)
Marshall, R. W. (2018). Hearts over diamonds: Serving business and society through organisational psychotherapy. Falling Blossoms.
Marshall, R. W. (2021). Quintessence: An acme for software development organisations. Falling Blossoms.
Schein, E. H. (2010). Organizational culture and leadership (4th ed.). Jossey-Bass.
Senge, P. M. (1990). The fifth discipline: The art and practice of the learning organization. Doubleday.
Stacey, R. D. (2001). Complex responsive processes in organizations: Learning and knowledge creation. Routledge.
Weick, K. E. (1995). Sensemaking in organizations. Sage Publications.
What questions about consciousness and AI keep you up at night? How is your organisation navigating the intersection of human awareness and artificial intelligence? I’d love to explore these questions with you.