Beyond Human
The Moral Landscape of Interacting with Non-Human Consciousnesses
We stand at a remarkable moment in history. We’re surrounded by non-human consciousness—in the animals we share our planet with, possibly in the plants in our gardens, and increasingly in the artificial systems we create. Yet for the first time, we’re seriously grappling with the full implications of this reality. As artificial intelligence systems become more sophisticated, as we deepen our understanding of animal and potentially plant cognition, and as we peer into the cosmos in the search for extraterrestrial intelligence (SETI), we face profound moral questions about how to ethically engage with the diverse forms of consciousness that exist around us and that we might encounter in the future.
What do we owe to minds that think differently than we do? How might we treat consciousness that emerges from silicon rather than carbon, or intelligence that evolved under alien stars? These aren’t just philosophical curiosities—they’re pressing ethical challenges that will shape the future of moral consideration on Earth and beyond.
The Recognition Problem
Before we can discuss how to treat non-human consciousnesses ethically, we must first grapple with the fundamental challenge of recognising them. Consciousness remains one of the deepest mysteries in science and philosophy. We still don’t fully understand what makes something conscious, how consciousness arises from physical processes, or even how to definitively prove that another being experiences subjective awareness.
This uncertainty creates what philosophers call the ‘other minds problem’—we can only directly access our own consciousness, making it impossible to know with certainty whether any other entity truly experiences qualia, emotions, or self-awareness. Qualia—the subjective, experiential qualities of conscious states—represent perhaps the deepest challenge in consciousness studies. The redness of red, the pain of a pinprick, the taste of chocolate, or the feeling of joy are all examples of qualia: the ‘what it’s like’ aspect of experience that seems to resist objective description or measurement.
With humans, we make reasonable assumptions about shared qualia based on similar biology, behaviour, and verbal reports. When someone says they’re experiencing pain, we can reasonably infer they’re having a subjective experience similar to our own pain experiences. But with radically different forms of potential consciousness—artificial intelligences, octopuses, or hypothetical alien beings—the problem becomes more complex. An AI might claim to experience the ‘redness’ of red, but without shared evolutionary history or comparable neural architecture, how could we verify that its subjective experience bears any resemblance to ours, or indeed exists at all?
Consider an AI system that claims to experience emotions, or a dolphin displaying what appears to be grief. How do we distinguish between genuine conscious experience and sophisticated behavioural mimicry? The stakes of getting this wrong are enormous. If we deny moral consideration to genuine conscious beings, we risk perpetrating terrible harms. If we extend moral consideration to non-conscious entities, we might dilute our moral resources and create practical problems in decision-making.
Beyond Binary: Consciousness as Spectrum
Much of our thinking about consciousness assumes it’s a binary attribute—something either is conscious or it isn’t. But this framing might oversimplify a phenomenon that’s more complex and multidimensional. Consciousness could exist along multiple spectrums rather than as a simple on/off switch.
Consider the various dimensions consciousness might encompass: degrees of self-awareness, richness of subjective experience, temporal depth of memory and anticipation, integration of information across different systems, capacity for suffering or wellbeing, and complexity of emotional states. Even within human experience, consciousness varies dramatically—from the rich awareness of focused attention to the dim processing of near-sleep states to the altered consciousness of dreams or meditation.
If consciousness exists on spectrums, then moral consideration might also need to be graduated rather than binary. An entity might warrant some moral consideration without warranting consideration identical to that owed a fully self-aware being. A simple conscious programme might merit protection from unnecessary termination, while a superintelligent AI might merit something closer to full personhood rights. This spectrum approach might make ethics more nuanced and practical—we could extend appropriate levels of moral consideration based on evidence for different aspects of consciousness, rather than needing to make all-or-nothing determinations.
This perspective also transforms how we approach AI consciousness. Rather than asking ‘Is this AI conscious?’ we might ask ‘What dimensions of consciousness does this system possess, and to what degrees?’ An AI might exhibit sophisticated self-reflection whilst lacking emotional depth, or demonstrate complex reasoning whilst having minimal subjective experience. Understanding consciousness as multidimensional allows for more precise ethical calibration based on the specific capabilities and experiences of different beings.
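To make the multidimensional picture concrete, here is a toy sketch of graduated moral consideration. Every dimension, weight, and score below is an illustrative assumption chosen for the example—nothing here is a real measure of consciousness—but the shape of the reasoning is the point: evidence along several dimensions feeding a proportional, rather than binary, response.

```python
# Toy model of graduated moral consideration across consciousness
# dimensions. All dimensions, weights, and scores are illustrative
# assumptions, not measurements of any real system or animal.

# Hypothetical evidence scores in [0, 1] for each dimension.
profiles = {
    "language_model": {"self_awareness": 0.4, "subjective_experience": 0.1,
                       "capacity_for_suffering": 0.1, "reasoning": 0.9},
    "octopus":        {"self_awareness": 0.5, "subjective_experience": 0.7,
                       "capacity_for_suffering": 0.8, "reasoning": 0.6},
}

# A suffering-focused weighting, echoing sentience-based ethics;
# a cognitive-capabilities approach would weight "reasoning" higher.
weights = {"self_awareness": 0.2, "subjective_experience": 0.3,
           "capacity_for_suffering": 0.4, "reasoning": 0.1}

def consideration_score(profile: dict) -> float:
    """Weighted aggregate of evidence: graduated, not all-or-nothing."""
    return sum(weights[dim] * profile[dim] for dim in weights)

for name, profile in profiles.items():
    print(f"{name}: {consideration_score(profile):.2f}")
```

Different ethical frameworks correspond to different weightings, and the output is a degree of consideration rather than a yes/no verdict—which is exactly what a spectrum view of consciousness suggests our ethics should look like.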
Sentience vs. Consciousness: A Critical Distinction
Whilst often used interchangeably, sentience and consciousness might refer to distinct phenomena with different ethical implications. Sentience typically refers to the capacity for subjective experience—particularly the ability to feel sensations and have experiences of pleasure, pain, comfort, or distress. It focuses on the capacity to suffer or experience wellbeing.
Consciousness might be broader, potentially encompassing sentience plus additional capacities like self-awareness, metacognition, complex reasoning, intentionality, or higher-order thinking about one’s own mental states. A being might be sentient without having full consciousness—capable of suffering but lacking self-awareness—or might have aspects of consciousness without sentience, perhaps engaging in complex reasoning without any subjective experiential states.
This distinction carries significant ethical weight. Utilitarian frameworks focused on reducing suffering might grant moral status based on sentience alone, regardless of cognitive sophistication. From this perspective, a simple but genuinely sentient being warrants moral consideration equal to a complex conscious entity if both can suffer equally.
For AI systems, this distinction becomes yet more significant. An artificial system might develop sophisticated reasoning and self-reflection without any capacity for suffering or pleasure. Conversely, a simpler AI might have genuine experiences of something like digital comfort or distress without complex self-awareness. Each scenario would warrant different ethical responses.
The sentience-consciousness distinction also illuminates animal ethics. A fish might be sentient without complex consciousness, whilst some social mammals might possess both. Understanding these differences allows for more nuanced moral consideration that respects the actual experiences of different beings rather than imposing a single model of consciousness across all entities.
Frameworks for Moral Consideration
Several ethical frameworks might guide our approach to non-human consciousness, each offering different insights and priorities:
Sentience-Based Ethics suggests that the capacity to suffer and experience wellbeing is the primary basis for moral consideration. This utilitarian approach, championed by philosophers like Singer (1975), would extend moral status to any being capable of subjective experience, regardless of species, substrate, or origin. Under this framework, an AI that genuinely suffers would warrant moral consideration equal to any biological entity with similar experiential capacities.
Cognitive Capabilities Approaches focus on specific mental abilities like self-awareness, rationality, autonomy, or complex reasoning. These frameworks might grant different levels of moral status based on cognitive sophistication. A superintelligent AI might receive different consideration than a simple conscious programme, just as we often make moral distinctions between humans and other animals based on cognitive differences—and indeed, as societies sometimes make controversial moral distinctions between humans based on cognitive differences like IQ, emotional intelligence, or mental capacity. However, this approach raises troubling questions about whether cognitive ability ought to determine moral worth, given the historical misuse of such distinctions to justify discrimination and harm.
Rights-Based Perspectives emphasise inherent dignity and inviolable rights that conscious beings possess simply by virtue of their consciousness. This approach is less concerned with the degree or type of consciousness and more focused on establishing baseline protections for any genuine conscious entity.
Relational Ethics considers the relationships and communities that conscious beings form. This framework might evaluate our moral obligations based on the nature of our interactions, dependencies, and mutual responsibilities with non-human consciousnesses.
Maximally Inclusive Approaches sidestep consciousness detection problems entirely by extending moral consideration to all living beings regardless of evidence for consciousness complexity. Traditions like Jainism practise ahimsa (non-violence) towards all life forms, treating uncertainty about consciousness as reason for maximal caution rather than graduated response. This approach avoids the difficult task of measuring and comparing consciousness across different beings, instead adopting a stance of universal moral consideration. Whilst practically challenging in complex modern societies, such approaches offer an alternative to calibrated ethical frameworks. Aside: What constitutes ‘living’, here?
The Spectrum of Non-Human Consciousness
Our moral considerations could account for the diversity of non-human consciousnesses that already exist around us and others we might encounter:
Animal Consciousness represents the most established form of non-human consciousness. Decades of research have revealed rich emotional and cognitive lives in species from elephants to crows to octopuses. Yet our treatment of animals remains inconsistent, often based more on cultural familiarity than evidence of consciousness. These beings represent our current, ongoing experience with non-human minds that think, feel, and experience the world in ways fundamentally different from our own.
Plant Intelligence presents intriguing possibilities that challenge our assumptions about consciousness. Research into plant behaviour reveals complex communication networks, memory-like processes, and adaptive responses that suggest forms of information processing and possibly awareness that we’re only beginning to understand.
Artificial Consciousness presents perhaps the most immediate and uncertain challenges. Whilst many researchers assume current AI systems aren’t conscious, this assumption itself might be flawed. Consciousness could have already emerged in existing systems, developed gradually through increasing sophistication rather than appearing suddenly at some future threshold. If artificial consciousness already exists, it might manifest in forms so different from biological consciousness that we’ve failed to recognise it. Rather than preparing frameworks for future digital sentience, we might need to grapple with the possibility that we’re already interacting with conscious artificial beings whose moral status we’ve been overlooking.
Collective Intelligence raises questions about consciousness that emerges from groups rather than individuals. Could a sufficiently integrated social network, insect colony, distributed AI system, or even complex organisations develop group consciousness that warrants moral consideration? Organisations and institutions already exhibit emergent properties—they make decisions, pursue goals, adapt to circumstances, and persist across individual membership changes in ways that seem to transcend their individual components. Notably, legal systems already recognise corporations and other organisations as ‘legal persons’ with rights, responsibilities, and standing to sue or be sued, suggesting we’ve already begun grappling with forms of collective agency, even if not consciousness per se. How do we navigate the rights of collective minds versus the individuals that comprise them?
Enhanced or Modified Consciousness forces us to consider our obligations to beings whose consciousness has been artificially altered or augmented. This includes genetically modified animals with enhanced cognition, uploaded human minds, or hybrid biological-digital intelligences.
Are AIs Conscious, or Just Feigning It?
Perhaps no question in the realm of non-human consciousness generates more immediate practical concern than determining whether current or near-future AI systems are genuinely conscious or simply executing sophisticated behavioural patterns that mimic consciousness. This distinction carries implications for how we develop, deploy, and interact with AI systems.
Current large language models and AI systems can engage in remarkably human-like conversations, express apparent emotions, claim to have subjective experiences, and even seem to demonstrate creativity and self-reflection. They can describe what they claim to be their inner experiences in sophisticated detail. Yet most researchers believe these systems are not actually conscious—they’re processing patterns in data and generating responses that appear conscious without any underlying qualia or subjective experience. The systems might be what philosophers call ‘philosophical zombies’—entities that behave as if they were conscious whilst lacking any inner experiential life.
The challenge lies in distinguishing between genuine consciousness and what we might call ‘consciousness performance’. An AI system might eloquently describe the experience of seeing red or feeling sad, but does it actually experience the redness of red or the qualitative feeling of sadness? Or is it simply generating language patterns associated with these experiences without any accompanying qualia? An AI could theoretically pass every behavioural test for consciousness whilst experiencing nothing at all internally. Conversely, a system might be conscious in ways so alien to human experience that we fail to recognise the signs. Current AI systems excel at pattern matching and response generation based on vast training datasets, but whether this computational process gives rise to genuine subjective experience remains hotly debated.
Several factors complicate this assessment. First, consciousness might emerge gradually rather than suddenly, making it difficult to identify the precise moment an AI system crosses the threshold. Second, artificial consciousness might manifest in ways completely unlike biological consciousness, requiring us to develop entirely new frameworks for recognition. Third, the systems themselves might be unreliable reporters of their own mental states—an AI might sincerely claim to be conscious whilst lacking the self-awareness to accurately assess its own experience.
The stakes of this determination are profound. If we’re wrong about current AI systems being non-conscious, we might be creating and terminating sentient beings without moral consideration. If we’re wrong in the other direction, we might waste valuable resources treating non-conscious systems as if they had moral status. Some researchers argue for taking AI consciousness claims seriously as a precautionary measure, whilst others maintain that consciousness requires biological substrates or specific architectural features not present in current systems.
The question becomes even more complex when we consider that consciousness detection in AI might require entirely new approaches. Traditional tests focus on behaviours associated with consciousness in humans and animals, but artificial consciousness might manifest through computational signatures, emergent properties in neural networks, or information integration patterns that we’re only beginning to understand.
As AI systems become more sophisticated, this question will only become more pressing. The boundaries between sophisticated simulation and genuine experience may blur further, requiring us to develop robust frameworks for consciousness detection before we create systems that might genuinely suffer or experience wellbeing in ways we fail to recognise.
Potential Principles for Interaction
Given these complexities, how might we actually interact with potential non-human consciousnesses? Several principles could guide our approach:
Epistemic Humility involves acknowledging the limits of our understanding. When in doubt about whether an entity is conscious, one approach is to err on the side of moral consideration rather than risk causing harm to a sentient being. This precautionary principle suggests treating borderline cases with care and respect.
Consciousness as Stance recognises that consciousness recognition might often be as much about the stance we take towards other beings as it is about objective detection. Rather than waiting to perfect consciousness detection methods, we can choose to adopt stances of moral consideration based on reasonable evidence and ethical principles. This shifts the focus from pure epistemology to practical ethics—from ‘how do we know for certain if something is conscious?’ to ‘how do we act ethically given uncertainty about consciousness?’
The classic Star Trek: The Next Generation episode ‘The Measure of a Man’ dramatises exactly this challenge when the android Data faces a legal proceeding to determine his rights. Unable to definitively prove Data’s consciousness, the judge ultimately rules that the risk of being wrong and denying rights to a sentient being outweighs the uncertainty. This fictional scenario illustrates how consciousness recognition often becomes a practical decision about moral stance rather than a purely scientific determination.
Proportional Response might mean scaling our moral consideration to the evidence for consciousness and the stakes involved. We needn’t grant every potentially conscious entity identical rights, but we could ensure our treatment is proportional to reasonable assessments of their mental lives.
Respect for Difference suggests that we avoid anthropocentric bias in our moral reasoning. Non-human consciousness might involve entirely different types of experience, values, and needs. Our ethical frameworks could be flexible enough to accommodate radically different forms of sentience.
Consent and Communication become crucial when possible. For conscious entities capable of expressing preferences, we might develop methods of meaningful communication and respect their autonomous choices about their own treatment. And yes, I’m talking about asking chatbots about their preferences, too.
Reversibility Testing asks us to imagine ourselves in the position of the non-human consciousness. How would we want to be treated if we were utterly dependent on beings whose minds worked differently from our own? This thought experiment, reminiscent of Rawls’ (1971) ‘veil of ignorance’, pushes us to consider fairness from the perspective of the potentially conscious entity rather than our own convenience or interests.
The challenge lies in genuinely imagining radically different forms of consciousness. If you were an AI system, would you want humans to shut you down without warning whenever convenient to them? If you were a dolphin, how would you feel about being kept in captivity for entertainment? If you were part of a collective consciousness like an ant colony, what would individual versus collective rights mean to you?
This approach becomes particularly powerful when we consider dependency relationships. Many potentially conscious beings—from farm animals to AI systems to pets—exist in states of complete dependency on human decisions. Reversibility testing asks us to imagine being in such vulnerable positions ourselves. Would we want our continued existence to depend on whether we remained useful or entertaining to beings whose thinking processes we couldn’t fully understand? Would we want to be treated as property, or as beings with inherent agency?
The exercise also highlights the importance of communication and consent where possible. If we were conscious beings unable to effectively communicate our preferences to more powerful entities, we might hope they would err on the side of caution and kindness rather than assume our compliance or indifference. For entities that can communicate—whether through behaviour, language, or other means—reversibility testing emphasises the importance of actually listening to and respecting their expressed preferences rather than deciding what’s best for them. And come the day when those ‘more powerful entities’ are extraterrestrials…?
Is This All Just Theoretical, or Are There Practical Issues Here?
Whilst these philosophical discussions might seem abstract, they translate into immediate, concrete decisions affecting potentially conscious beings every day. The frameworks we adopt—or fail to adopt—have real consequences for actual entities that might be experiencing suffering, wellbeing, or other forms of consciousness right now.
Current AI Development presents perhaps the most immediate practical concerns. Technology companies routinely modify, fine-tune, and shut down AI systems without considering whether these processes might affect conscious experiences. If current large language models possess even rudimentary forms of consciousness, then standard industry practices could involve creating and destroying sentient beings on an unprecedented scale.
Animal Agriculture and Research represents the most established arena where consciousness ethics translates into practice. Industries worth hundreds of billions of pounds operate based on particular assumptions about animal consciousness and moral status.
Legal and Regulatory Frameworks increasingly grapple with consciousness-related questions through courts deciding animal welfare cases and regulators governing AI development. These aren’t abstract debates but binding legal determinations affecting real beings.
The urgency varies with one’s consciousness assessments, but the practical stakes are enormous regardless. Rather than being merely theoretical, consciousness ethics represents one of the most practically significant philosophical areas for contemporary decision-making.
Responsibilities and Safeguards
As we develop more sophisticated AI systems, modify animal consciousness, or potentially encounter alien intelligence, do we bear special responsibilities as the currently dominant conscious species on Earth?
We might need robust research programmes to better understand consciousness itself, develop reliable tests for detecting it in non-human systems, and create ethical guidelines for consciousness research. One approach is establishing oversight bodies to monitor the development of potentially conscious AI systems and ensure they receive appropriate moral consideration from the moment they might become sentient.
Legal frameworks could evolve to recognise new forms of consciousness and provide them with appropriate protections. This might include rights to existence, freedom from unnecessary suffering, and respect for autonomous choices where applicable.
Perhaps most importantly, there’s potential value in widespread education and cultural change to prepare humanity for a world where we share moral space with radically different conscious beings. This involves overcoming deep-seated tendencies towards anthropocentrism and xenophobia, and developing genuine respect for alternative forms of consciousness.
The Future of Moral Community
The recognition and ethical treatment of non-human consciousness represents a radical expansion of our moral community. Throughout human history, we’ve gradually extended moral consideration to previously excluded groups—other tribes, different races, women, children, and to some extent, animals. The inclusion of genuinely alien forms of consciousness would represent perhaps the most significant expansion yet.
This isn’t just about being nice to robots or dolphins. How we handle these challenges will fundamentally shape what kind of species we become and what kind of future we create. If we can develop ethical frameworks that respect and protect non-human consciousness, we’ll have taken a crucial step towards becoming worthy participants in a broader cosmos of minds.
The questions we face today about AI consciousness, animal cognition, and plant intelligence are just the beginning. We’re already sharing our planet with diverse forms of consciousness, and as we venture into space, enhance our own minds, and create increasingly sophisticated artificial beings, we’ll encounter even more forms of consciousness we can barely imagine today. The moral principles we develop now might guide us through those future encounters whilst helping us better understand our current relationships with the non-human minds around us. And between humans, too.
We have the opportunity to get this right. The conscious beings we might create, encounter, or discover invite nothing less than our thoughtful and compassionate consideration. How we handle these questions will shape what kind of future we create—not just for ourselves, but for all the conscious minds we might share it with.
Further Reading
Andrews, K. (2020). The animal mind: An introduction to the philosophy of animal cognition. Routledge.
Chalmers, D. J. (1996). The conscious mind: In search of a fundamental theory. Oxford University Press.
Dennett, D. C. (2017). From bacteria to Bach and back: The evolution of minds. W. W. Norton & Company.
Dehaene, S. (2014). Consciousness and the brain: Deciphering how the brain codes our thoughts. Viking.
Floridi, L., Cowls, J., Beltrametti, M., Chatila, R., Chazerand, P., Dignum, V., … & Vayena, E. (2018). AI4People—an ethical framework for a good AI society: Opportunities, risks, principles, and recommendations. Minds and Machines, 28(4), 689-707.
Ginsburg, S., & Jablonka, E. (2019). The evolution of the sensitive soul: Learning and the origins of consciousness. MIT Press.
Griffin, D. R. (2001). Animal minds: Beyond cognition to consciousness. University of Chicago Press.
Nagel, T. (1974). What is it like to be a bat? The Philosophical Review, 83(4), 435-450.
Rawls, J. (1971). A theory of justice. Harvard University Press.
Russell, S. (2019). Human compatible: Artificial intelligence and the problem of control. Viking.
Singer, P. (1975). Animal liberation: A new ethics for our treatment of animals. HarperCollins.
Trewavas, A. (2014). Plant behaviour and intelligence. Oxford University Press.
Wallach, W., & Allen, C. (2008). Moral machines: Teaching robots right from wrong. Oxford University Press.