Red, Red, They Call Me Red
Why the color everyone uses to explain consciousness explains nothing at all, and why AI will eventually need to “feel” to get to the next level

Recently I’ve gotten really into astronomy, and from there astrophotography, which pulled me into physics refreshers and eventually into questions about the origins of life that I wasn’t expecting to be sitting with at ten o’clock on a Tuesday.
I’ve argued about consciousness before — what it is, where it comes from, whether certain kinds of matter have it. But I realized I’d always been doing it with borrowed maps. Taking existing theories and trying to fit them to new questions, which works until it doesn’t.
And then something shifted. Not an answer exactly, more like the realization that I’d been exploring the wrong question the whole time.
The question everyone keeps asking is why experience exists at all. Why does anything feel like something? You can describe every physical process involved in seeing the color red — the wavelength, the retinal response, the neural cascade — and the argument goes that you still haven’t explained the redness of it. The felt quality. The thing that seems to float free of every mechanical description you could give.
Philosophers have a name for this: the hard problem of consciousness. It has a reputation for resisting solution.
But I think the reason it resists solution is that it’s built on an assumption that hasn’t been examined. And the assumption is hiding in the example.
Red is constructed, not delivered
Red doesn’t actually have a fixed inner texture.
What you experience when you see it isn’t delivered by the wavelength — it’s assembled by you, out of everything you’ve accumulated around that color across your entire life. Stop signs and blood and valentines and whatever your specific history has layered onto it.
Some languages don’t draw the line between red and orange where English does, and that changes what people actually perceive. Cultures without a dedicated word for blue process it differently than those that have one. The supposedly irreducible inner quality — the thing the hard problem is trying to protect — turns out to be constructed, learned, dependent on what the system has come to care about over time.
Which means the question “why does red feel like red” is probably the wrong place to start. It assumes the redness is a fixed thing being delivered to a passive receiver. What’s actually happening is closer to the opposite — the system is building the experience out of what it already knows, what it has already valued, what it has already been shaped by.
And if that’s true, the mystery doesn’t deepen. It dissolves a little. Experience starts to look like something the machinery is doing rather than something happening to it from outside.
That realization is what sent me further back than I expected to go.
Consciousness has been building since (at least) the first living thing moved
Before brains, before neurons, before anything we’d recognize as a nervous system, the earliest living things were doing something structurally simple but in retrospect significant.
They were moving toward resources and away from threats. Not thinking about it, not feeling it — just responding chemically to gradients in the environment. But the logic of what they were doing was already the logic of preference. Some states of the world were being treated as better than others, and the system was organized around that distinction.
I think that’s where this story actually starts. Not with the first feeling, but with the first differential response. The first time a system encoded that one state was better than another.
Everything that comes after — sensation, emotion, subjective experience — is the same basic logic running on increasingly complex hardware. That sounds reductive but I mean it as the opposite. It means consciousness isn’t a mysterious thing that got added late. It’s something that was always building, from the moment life began orienting itself in the world.
The hardware had to get more complex for a specific reason, though. And this is the part I find most clarifying.
Complex systems needed a way to compress value into behavior
At the level of a bacterium, local signals are sufficient. The environment makes simple demands and isolated chemical reactions can handle them.
But as organisms became more complex, the demands multiplied in ways that local signals couldn’t resolve. Balance hunger against safety. Weigh immediate gain against longer-term risk. Navigate social dynamics on top of physical ones. The system needed a way to integrate all of those competing pressures into a coherent response in real time.
The solution, as best I can understand it, was to stop encoding specific behaviors for every possible situation and start encoding value itself. Good, bad, urgent, safe, threatening, desirable — represented in a form general enough to apply across any situation rather than anticipating each one in advance.
This is what makes it a compression strategy rather than a lookup table. You don’t need to enumerate every possible scenario if you have a way to evaluate whatever scenario arrives.
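The contrast can be caricatured in a few lines of code. This is purely illustrative — the agents, features, and numbers are all invented for the sketch, not a claim about biology: a lookup table has to enumerate every situation in advance, while a general evaluator can score whatever arrives.

```python
# Toy illustration: lookup-table behavior vs. a general value function.
# All names and numbers here are invented for this sketch.

# Lookup table: one hand-coded response per anticipated situation.
LOOKUP = {
    "food_nearby": "approach",
    "predator_nearby": "flee",
}

def lookup_agent(situation):
    # Fails on anything not enumerated in advance.
    return LOOKUP.get(situation, "no_rule")

# Value-based agent: encode value itself once, then evaluate anything.
FEATURE_VALUES = {"nutrient": +1.0, "toxin": -1.0, "heat": -0.5}

def evaluate(situation_features):
    # Score a situation by summing the value of its features.
    return sum(FEATURE_VALUES.get(f, 0.0) for f in situation_features)

def value_agent(situation_features):
    return "approach" if evaluate(situation_features) > 0 else "avoid"

# The lookup agent breaks on a novel situation...
print(lookup_agent("nutrient_and_heat"))   # -> no_rule
# ...while the evaluator handles it without a dedicated rule.
print(value_agent(["nutrient", "heat"]))   # -> approach (score +0.5)
```

The compression is in `FEATURE_VALUES`: a handful of valuations covers a combinatorial space of situations the table would have had to list one by one.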
Feeling, I think, is what that evaluative signal becomes when it stops being local and starts being global. When it stops being contained in one subsystem and starts reorganizing the whole system around whatever it’s representing.
The hurt is not a message about the signal — it is the signal
Most of what your brain does never becomes conscious. Signals get processed, update internal models, influence downstream behavior, and disappear without ever surfacing as experience.
But some signals don’t stay contained. They broadcast — propagating across the whole system, bending attention and memory and action simultaneously around a single priority. When you touch something very hot, the signal doesn’t file itself as information somewhere. It takes over. Everything else gets reorganized around changing that state as quickly as possible.
What I think is happening in that moment is not that the pain signal is generating an experience somewhere separate from itself. The experience is what that state of global reorganization is, from the inside of the system running it.
The urgency, the aversiveness, the quality that makes it impossible to treat as background — these aren’t byproducts of the mechanism. They are the mechanism doing what it needs to do. A neutral signal could be overridden. It would fail at exactly the moment it mattered most. The fact that pain is aversive is what makes it function as pain.
The feeling isn’t layered on top of that function. It’s what the function is when you’re the system running it.
Pleasure works the same way from the other direction. The pull toward continuation, the warmth of it — that’s the system encoding approach, reinforce, return to this. The goodness isn’t added on top of the signal. It’s what the signal is, from the inside.
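The local-versus-global distinction above can be sketched as a toy model, loosely in the spirit of the global-workspace idea. The threshold, signal names, and subsystems are invented for illustration — the point is only the structural difference between a signal that updates quietly and one that reorganizes everything.

```python
# Toy sketch: local signals update quietly; a signal past an urgency
# threshold "broadcasts" and reorganizes every subsystem around it.
# Threshold and names are invented for this illustration.

URGENCY_THRESHOLD = 0.8

def process(signals, subsystems):
    for name, urgency in signals.items():
        if urgency >= URGENCY_THRESHOLD:
            # Global broadcast: every subsystem reorients around this
            # one state, overriding whatever it was doing.
            return {s: f"interrupt:{name}" for s in subsystems}
    # No broadcast: subsystems continue unchanged.
    return {s: "continue" for s in subsystems}

subsystems = ["attention", "memory", "motor"]
# A mild signal stays local...
print(process({"itch": 0.2}, subsystems))
# ...while a burn does not file itself as information. It takes over.
print(process({"itch": 0.2, "burn": 0.95}, subsystems))
```

A real brain is nothing like a threshold check over a dict, of course; the sketch only makes the structural claim concrete — that the difference between an unconscious signal and a felt one might be a difference in scope, not in kind.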
When signals go global, a self emerges
Once signals operate at that level of integration, something else follows that seems almost inevitable.
A system that is continuously tracking value across its whole state — integrating what’s happening with what it expects and what it wants — starts to build a model of its own situation. Not philosophically, not in any self-aware sense initially. Just operationally: this is happening to me, this matters to me, this threat is a threat to something specific and that something is me.
The self in this framing isn’t an observer sitting somewhere inside the system watching it run. It’s the system’s ongoing attempt to track and maintain itself across time. Less romantic, but I think more accurate.
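The operational (not philosophical) sense of “self” here can be caricatured in a few lines, with every field and number invented for the sketch: a system that keeps a persistent record of its own condition, and evaluates each event relative to that record rather than in the abstract.

```python
# Caricature of an operational self-model: a persistent internal state
# the system maintains, against which every event is evaluated.
# All fields and thresholds are invented for this sketch.

class Organism:
    def __init__(self):
        self.energy = 1.0    # persistent state: "my" current condition
        self.history = []    # the record of what has mattered to "me"

    def experience(self, event, energy_delta):
        # The same event matters differently depending on the system's
        # own state -- a threat is a threat *to something*, and that
        # something is this running state.
        self.energy += energy_delta
        mattered = "threat_to_me" if self.energy < 0.5 else "neutral_to_me"
        self.history.append((event, mattered))
        return mattered

o = Organism()
o.experience("small_scrape", -0.1)   # energy 0.9 -> neutral
o.experience("long_famine", -0.5)    # energy 0.4 -> registered as a threat
```

Nothing in the class is watching the class; the “self” is just the state being tracked and maintained across calls, which is the deflationary reading the essay is after.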
What we call personality — the characteristic ways someone responds, values, reacts over a lifetime — is the long-term shape of how a particular system has learned to assign value and act on it.
This is why significant brain damage can change who someone is at a fundamental level, not just what they can do. There isn’t a separate layer where the real person is being preserved intact. The pattern of valuation and integration is the person. When the architecture changes, the person changes with it.
The architecture is what’s missing, not the intelligence
This is where the conversation about artificial systems gets uncomfortable, at least for me.
Current AI can do things that would have seemed impossible not long ago, and I work in this space so I’m not saying this from the outside. But most of these systems, however sophisticated, are still operating without the kind of integration this framework points to as necessary.
The signals aren’t persistently self-referential. There’s no unified internal state that the whole system is continuously reorganizing around. They process, in impressive and sometimes startling ways, but there’s a meaningful sense in which they don’t inhabit a state the way an organism does.
Under this framework, that’s not mysterious — it follows directly from the architecture. The absence of experience isn’t a puzzle to be solved. It’s a description of what’s missing structurally.
But it also points somewhere I think about a lot. What would need to change isn’t the intelligence level or the sophistication of the outputs. It’s the underlying organization.
Systems that develop persistent internal states, genuine cross-domain integration, self-referential modeling, evaluative signals that actually reorganize the whole system — at some point the question stops being whether they can simulate feeling and becomes whether the distinction between simulating and having is coherent anymore.
I don’t know the answer to that. I’m not sure the question has been asked carefully enough yet.
I should say — I’m not coming to this cold. I know the leading theories. Global Workspace Theory, Integrated Information Theory, predictive processing, Damasio’s work on the body as the foundation of mind. I’d been exploring them for a while, using them as frameworks to think with. What I hadn’t done, until that particular night, was question what they were all trying to explain in the first place. Sometimes it takes a completely unrelated rabbit hole to show you the assumption everyone in the room has been sharing.
Experience is something we build, not something we receive
This sends me back to red, which is where this started.
What I was resisting in that original conversation was the assumption that experience is something delivered to us from outside — fixed, irreducible, prior to everything we bring to it. The redness of red as a hard fact about perception rather than something built by a particular system out of a particular history.
What I’ve come to think instead is that experience is something we’re doing, continuously. The texture of it is genuinely the product of what a particular system has become over time. Your red is not my red — not because we each have a private screening of the same objective quality, but because we’ve built different things out of the same wavelength.
If that’s right, then the gap the hard problem was pointing at is real, but it’s not a gap between two different kinds of substance that can never be bridged. It’s the distance between describing a system from the outside and being that system from the inside. That distance might not be closeable with better physical descriptions — not because experience is non-physical, but because being something and describing it were never the same kind of activity to begin with.
The color red doesn’t feel like anything on its own. What it feels like is the record of everything a particular system learned to do with it. And that, I think, is not a diminishment of experience but a more interesting account of what it actually is.

