Queerness

It’s LGBTQ+ Pride month! I identify as “queer,” so I thought this would be a good opportunity to write a bit about what that means to me.

In addition to queer, I also identify as a cisgender male, pansexual, and demisexual. This means I’ve always identified as male, and others always assumed as much. I’m rarely interested in sex or romance, but when I am, it’s not about gender. I love people, not parts. The full story is more complicated, but that’s a good start.

Labels like “gay”, “bi”, “pan”, “cis”, “demi”, “aro”, and “ace” are useful for quickly describing myself to others, but I prefer the term “queer” because, honestly, I don’t think any set of labels does a person justice.

Gender and sexuality are fundamentally personal things. Each individual is unique. No set of labels can capture all of who I am, and every label carries some baggage that I don’t want applied to me. Labels are useful, just so long as we remember that they’re always at least a little bit wrong. They cannot serve as a stand-in for a person.

I also love queer philosophy, and try to embrace it in all my thinking. Put simply, that means I don’t believe in categories. I don’t think they have any real existence, or essential qualities. They’re convenient fictions. Just labels we make up to point at collections of disparate things. This applies to all categories, but especially to living things, where there are exceptions to every rule, and no hard boundaries whatsoever.

The problem with categories is that we take them seriously. Once we categorize something, we think we understand it, when really we’re just projecting a stereotype. We make strong assumptions about what’s allowed in a category, and we struggle with exceptions, even common ones. As our understanding of the world changes, things often shift faster than our language can keep up with. Sometimes we don’t notice. We keep trying to sort the world into categories that make no sense, and get upset when reality doesn’t play along.

So, I don’t believe that Jews exist. I believe that Jewish people exist, and that we use the word “Jews” to refer to them. Yet, there’s no one quality that all of those people share, except that they are people (another category, subject to change). You and I may not even agree about which set of people the word “Jews” applies to, so how is it meaningful for us to talk about Jews in general?

I don’t think it’s strange to see a Black woman engineer, even though it’s rare. I wouldn’t expect her to be any less competent just because few people who share her labels hold that job. If anything, I’d assume the opposite, given that she succeeded in that role despite the weight of her labels. Ultimately, though, it’s about what she has to offer the world, which is surely both more and less than the other engineers around her. She has her own unique way of doing the work, perhaps different in exciting ways.

That’s what queer means to me. Labels can be useful, but they have no power over reality. Reality and people are so much more than words can contain. See them for what they are.

If you’d like to learn more about queerness and queer philosophy, I highly recommend Queer: A Graphic History by Meg-John Barker and Jules Scheele.

Believing is Seeing

(I took this post’s photo of a banana slug crawling through leaf litter. Its shape and color resemble some of the leaves, which makes it hard to spot if you don’t know what to look for.)

People say all sorts of things about the world, but how can you tell what’s right? If you’re not sure, you probably want to see for yourself. Those other people might be confused, mistaken, suffering from wishful thinking, or actively trying to mislead you, but you see reality for what it is. Right? At the least, you won’t have the same misperceptions as them, so another look is useful. But how much can you trust your own senses? How does perception even work, and how come we’re so often misled?

Like most people, I “just see” everything around me. Sometimes, I become aware of my perspective. I move around to get a clear view. I notice where I’m glancing, and I know I can’t see what’s behind my head. Yet, most of the time I don’t think about those things at all. The visual world just seems to surround me seamlessly, with rich, consistent detail in all directions. Objects are plain to see, trivial to discern from most angles and distances. It all seems so obvious, like a simple “window on reality,” yet nothing could be farther from the truth.

Human eyes have tunnel vision. I only see a tiny spot in clear focus at a time. My eyes constantly dart around, collecting many snapshots of the world as I move through it. My brain gets a continuous stream of these disconnected snatches of imagery that it somehow must turn into an integrated whole. It tracks my position and perspective as I move through the world, to piece the images together and infer a 3D model of my surroundings. This takes a great deal of real-time data processing, and more than a little creativity.

One thing humans don’t do is scan a scene from left-to-right, top-to-bottom, like a TV camera, capturing equally high fidelity data of a whole scene. My eyes are drawn to “interesting” features of the visual field, gathering much more detail about those, and leaving large gaps over the “boring” parts of the image where I never bothered to look closely. To get a sense of this yourself, check out this selective attention test on YouTube. It’s pretty shocking how well the brain filters relevant details from irrelevant ones, and shows you only what it thinks is useful. Of course, what’s “interesting” or “useful” is a judgment call, and I’m biased by my context, culture, and evolution. That means I’m blind to important things that I don’t expect, recognize, or know about.

Yet, I don’t notice any gaps in my perception. My brain creates the illusion of a clear and complete view of reality, using a technique called hierarchical segmentation. The image from my eyes is projected into my brain, then layer after layer of neurons interpret that image. The first layer detects patterns and discontinuities in the raw image data: edges. The second layer detects patterns in those edges: shapes. Layers above detect patterns of patterns of patterns, finding textures, objects, faces, bodies, groups, situations, and more. I don’t see pixels, colors, and shapes. I directly perceive the objects and agents in a scene, their properties, activities, and relationships. I experience that as if it were “really there,” even though it’s just a model in my mind, distantly derived from sense data.

The first pass of vision notices low-level features present in the image (edges, corners, curves), but doesn’t know what they mean. Later passes piece those features together to represent larger features (in a desk drawer, that arrangement of curves must be a fidget spinner). Most likely, the lower-level processing didn’t see all the relevant details clearly, but that’s okay. The fidget spinner neurons see enough to recognize what’s there. They tell the edge-detecting neurons what they should have seen, filling in the missing details. This is how I can clearly perceive a whole fidget spinner, even though it’s in shadow and half covered. My brain uses past knowledge of objects, where they appear, what they look like, and how they behave to imagine what was obscured.
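The layered story above can be sketched in code. This is only a toy analogy (my own illustration, not a model of real neurons and not anything from the post): a first “layer” marks sharp brightness jumps as edges, and a second “layer” looks for a pattern in those edges, here a bright bar. All names and numbers are invented.

```python
# Toy analogy for hierarchical feature detection (not a model of real neurons).
# Layer 1 finds local "edges" (brightness jumps) in a 1-D strip of pixels.
# Layer 2 looks for a pattern *in those edges*: a bright bar (up-edge, then down-edge).

def detect_edges(pixels, threshold=0.5):
    """Layer 1: mark positions where brightness jumps sharply."""
    edges = []
    for i in range(1, len(pixels)):
        diff = pixels[i] - pixels[i - 1]
        if abs(diff) >= threshold:
            edges.append((i, "up" if diff > 0 else "down"))
    return edges

def detect_bar(edges):
    """Layer 2: a 'bar' is an up-edge followed by a down-edge."""
    for (i, kind_a), (j, kind_b) in zip(edges, edges[1:]):
        if kind_a == "up" and kind_b == "down":
            return (i, j)  # the bar spans pixels i..j-1
    return None

strip = [0.0, 0.1, 0.9, 1.0, 0.9, 0.1, 0.0]  # dark, bright bar, dark
edges = detect_edges(strip)
print(edges)              # [(2, 'up'), (5, 'down')]
print(detect_bar(edges))  # (2, 5)
```

Notice that layer 2 never touches the pixels; it only sees what layer 1 reported, just as the higher levels of vision work with features rather than raw light.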

This works extremely well, and it’s necessary, since low-level sensory data is noisy and ambiguous. It often helps to have some idea what I’m looking at to make sense of what I’m seeing. Yet, sometimes my brain’s predictions are wrong. That’s not actually a fidget spinner in the drawer, it’s a pile of coins. How could I tell? Well, the fidget spinner neurons projected their predictions down, but looking a little more closely, some of those guesses were clearly wrong. There were some edges that weren’t accounted for, some angles that didn’t fit. The lower level neurons noticed the gap between expectation and reality, so they had to push back and negotiate with the higher level neurons, eventually arriving at an interpretation that was the best compromise across multiple levels of analysis.
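The back-and-forth between levels resembles what cognitive scientists call predictive processing. Here is a cartoon of that idea, with wholly invented feature counts (this is my illustration, not the post’s claim): each high-level hypothesis predicts what the low level should see, and the interpretation with the smallest prediction error wins.

```python
# Cartoon of "prediction and negotiation" between levels, not a neural model.
# Each hypothesis predicts the low-level features it expects; the hypothesis
# whose predictions best match what was actually seen is accepted.
# All feature counts below are made up for illustration.

observed = {"curves": 1, "straight_edges": 6, "circles": 8}  # low-level report

hypotheses = {
    "fidget spinner": {"curves": 9, "straight_edges": 0, "circles": 4},
    "pile of coins":  {"curves": 0, "straight_edges": 5, "circles": 9},
}

def prediction_error(predicted, observed):
    """Total mismatch between what a hypothesis expects and what was seen."""
    features = set(predicted) | set(observed)
    return sum(abs(predicted.get(f, 0) - observed.get(f, 0)) for f in features)

best = min(hypotheses, key=lambda h: prediction_error(hypotheses[h], observed))
print(best)  # pile of coins
```

The “fidget spinner” guess leaves many edges unaccounted for (error 18), while “pile of coins” fits almost everything (error 3), so the coins win the negotiation.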

What I perceive is a blending of what my senses took in and what “makes sense” for me to see based on past experience. At first glance, I only notice the most eye-catching details and my mind fills in the rest. If I take my time to really look over a scene, exploring every corner and paying attention to details, then my past experience has less influence and I perceive reality more like it truly is. I’m giving my lower-level perceptions the best chance to find evidence that I wasn’t expecting to see, which might revise my first impression. The problem is, I can’t afford to do this all the time, and often don’t think to. When should I bother to put in the extra effort? When should I distrust my own perception of reality enough to double check?

My brain automatically groups every object I see into categories, collections of objects with similar properties. Each category has a mental stereotype, an image that sort of averages all my experiences. This is how I know the “normal” shape of a fidget spinner, even though no two are the same. It’s where my mind draws from when it fills the gaps in my perception. As I gain experience, I learn more useful ways to group things into categories that better predict their similarities and differences. I build more accurate, nuanced, and fine-grained stereotypes, which makes my perceptions clearer. That said, it’s easy to hold onto bad stereotypes. They warp my perception, overwriting key details of a visual scene that might prove me wrong, rendering them literally invisible to me until someone points them out.
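This averaging idea matches the “prototype” model from cognitive science. A rough sketch with invented measurements (my illustration, not the author’s): a stereotype is the average of remembered examples, and a new object is judged by whichever prototype it sits closest to.

```python
# Sketch of a "stereotype" as a prototype: the average of past examples of a
# category. New objects are classified by the nearest prototype.
# All measurements are invented for illustration.

from statistics import mean

def prototype(examples):
    """Average each measured property across remembered examples."""
    keys = examples[0].keys()
    return {k: mean(e[k] for e in examples) for k in keys}

# Remembered fidget spinners and coins, measured in centimeters.
spinners = [{"width": 7.0, "thickness": 1.2}, {"width": 7.5, "thickness": 1.0}]
coins    = [{"width": 2.4, "thickness": 0.2}, {"width": 2.0, "thickness": 0.15}]

prototypes = {"spinner": prototype(spinners), "coin": prototype(coins)}

def classify(obj):
    """Pick the category whose prototype best matches the object."""
    def distance(p):
        return sum(abs(p[k] - obj[k]) for k in obj)
    return min(prototypes, key=lambda name: distance(prototypes[name]))

print(classify({"width": 2.3, "thickness": 0.2}))  # coin
```

The bad-stereotype failure mode is visible here too: if the remembered examples are skewed or secondhand, the prototype is skewed, and every new object gets judged against the wrong average.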

Stereotypes play a central role in perception, and all the fancy understanding, thinking, and being human that layers on top of that. Stereotypes are great tools. They’re bite-sized models of reality that let us generalize past experience and predict the future. But they aren’t real. In fact, many of my stereotypes aren’t based on my experience at all. I learned them from other people! Some may be wrong, hurtful, and dangerous, but I wouldn’t know without personal experience. So far we’ve just been talking about objects, but it gets serious when we move on to people.

I saw this when I worked at Google. Engineers were spoiled, with easy access to everything from staplers to lunch to massages. That meant lots of staff to keep the place clean, well stocked, and in good working order. These service workers—these people—were generally ignored, treated as part of the environment rather than part of the team. That’s problematic in itself, but engineers with darker skin tones also often reported being mistaken for the service staff. Despite wearing a nerdy T-shirt and an engineering badge, they got categorized as “the help” based on skin alone. They were ignored, or worse, asked to clean up spills. This was demoralizing, even though there was no ill intent. They just weren’t seen, by folks who were misled by stereotypes and didn’t even notice.

Knowing all this makes me distrust my own senses, but I think that’s a good thing. They’re mostly reliable, but they can fail in specific ways, and it’s important to remember that. It’s also useful to know when to trust my stereotypes. That mostly comes down to knowing where I have deep personal experience and have paid close attention. Where I don’t, my stereotype might be a shallow hand-me-down, even though it feels just as “real” in my mind. What about you? Have you noticed folks seeing what they want to see, or hearing what they want to hear? How does this generalize to other kinds of perceptions? How do you try to see reality for what it truly is? I’d love to hear from you in the comments.

In Every Mind, a Universe

The first life was blind and ignorant. It was little more than a self-perpetuating chemical reaction, constantly rebuilding itself and making new copies. It had no idea where it was because it had no way to perceive the world around it. Even the very notion of existing and moving within a physical space was incomprehensible. It didn’t know what it was, or even that it was, because it had no way to perceive its inner life, either. It just kept on going, making copies of itself, frequently with errors that made it worse or (occasionally) better at being alive.

Eventually, by chance, life discovered something very useful: certain molecules change shape when something happens to them. Some respond to being hit with light, others change in response to temperature, or pressure, or brushing up against another molecule with just the right shape. Life learned to read these signs, understand them as clues about the world, analyze them, make decisions, and respond. At least, metaphorically speaking. In reality, we’re still just talking about chemical reactions here. One shape change might trigger another, which might cause some new protein molecule to be synthesized, or kick off a chain reaction that leads to a cell moving, adjusting its metabolism, or whatever. The cell acts as if it appreciated the meaning of this signal, but without “thinking” except in a purely mechanical way.

This was the origin of meaning. At first, it was a very primitive thing. Life learned to discriminate between “good” and “bad.” That is, it noticed signs correlated with favorable living conditions, survival, and reproduction. Organisms that sought out more “good” signals while avoiding “bad” signals tended to live longer and produce more offspring. In this way, evolution slowly transformed random patterns of stimulus and response into instinct, innate biases baked in from birth, representing a sort of ancestral “knowledge.” Over time, life evolved more nuanced concepts like: light and dark; warm and cold; food and poison; me, us, and them. Teasing apart these subtler shades of meaning helped life develop more complex and successful strategies to survive in the world.

Every organism has this sort of evolved map of meaning (an “innate ontology”), implicit in their genes. It’s defined by their senses, physical capabilities, reflexes, and gut feelings. That means every species has a profoundly different perspective on reality. Fish, for instance, may have no conception of water because to them it’s a lifelong constant with no alternative. However, they have a very nuanced sense of the information carried in the water, which we are totally blind to. They can be very sensitive to things like pressure, temperature, chemical concentrations, currents, and even electrical fields. To a limited extent, they instinctively “know” where these signs are coming from, what they foretell, and how to react.

Talking about “ontology” as something in our genes is a little unusual. Typically that word is applied to minds, perhaps just human minds. It’s about how we perceive reality, dividing it up into objects, categories, and relationships. It’s how people fundamentally understand themselves and the world they live in, and it’s heavily influenced by culture. But philosophers like Daniel Dennett insist that the same concepts should be extended to pre-cognitive life as well. Our physical bodies lead us to perceive and think and act in human ways, laying a foundation upon which conscious learning and culture can build. In that view, our rich mental ontology is a product of evolution, constructed from lower-level, simpler, more instinctual parts that we share with many species.

Like the first living thing, each human is born into a new and unfamiliar world, forced to figure out how to survive from scratch. We do have a major leg up, though: we’re born with senses and instincts and the ability to move our bodies. Our nervous systems carry and integrate the sensations from our many cells to our brain as a coherent bundle of information. Our brains are highly structured, with all the tools we need to make sense of those signals set up and ready to go. For instance, our multi-stage vision pipeline takes in light signals from our optic nerves, then processes them to detect edges, shapes, movement, and even faces from day one. As infants, we don’t know what these things mean yet, but our bodies present the information to our minds in a convenient form and draw our attention, making them quick and easy to learn.

But our innate ontology is very vague. We are born with a sort of “knowledge” (or at least a predisposition to learn) that we are bodies that can move around in a 3D world. That world is filled with objects we can interact with. Some of these objects can move, some are useful, and some are alive. We get tired, hungry, and sick. We need to breathe, drink, and eat other living things to survive. That’s all obvious from a very young age. The rest is on us to figure out. How do we tell friend from foe? How do we find shelter? What’s good to eat around here? How do we make a living? What is the purpose of our existence? These questions are context dependent and quick to change, so life hasn’t evolved answers for us. It can’t. Instead, it gave us brains so we could find our own answers.

What makes humans truly special, though, is that we don’t build our ontologies just by trial and error. We talk about our ontologies. We point things out, name them, tell stories, give demonstrations. We learn from our parents, peers, teachers, and the media. We’re immersed in the collective ontology of our species, something all of humanity has been cultivating for over a hundred thousand years. Our minds are built to soak it all in and to very quickly adopt a picture of reality that’s much richer, more accurate, and more nuanced than what’s available to any other species. Much more than any one human could possibly figure out in their lifetime.

This way of understanding reality is powerful, but it leads to a great big illusion: we tend to see our ontology as reality itself. That’s understandable. Our ontology is our window on the world. It encompasses everything we can perceive, understand, and do. Yet, it is not a real thing. It’s an image in our minds, our bodies, and our genes. It’s informed by our genetic ancestors, our senses, and what we’ve learned from each other. But we perceive much more than our senses actually take in. Our brains are running a sort of “image enhancement” algorithm, as seen in Sci Fi classics like Blade Runner and now made manifest by deep learning software. We take in a little data, then use our knowledge and expectations to extrapolate something much bigger, fuller, and richer, making up the details that we can’t directly perceive. That is, we see what we believe. We perceive concepts, not reality as it truly is.

Of course, if our ontology is not a real thing and lives inside our minds, the consequence is that every human being must have a different ontology. They are in many ways similar, sure. We are the same species, living in the same world, with the same basic needs. We may even be from the same community, with a shared culture. Yet, we might disagree about the meaning of important concepts like “freedom,” “equity,” and “justice.” We might have very different ideas about what money is, what purpose a government serves, or how to be a good person. These are not disagreements about facts, but about the structure of reality itself—the framework we use to fit facts together into a coherent picture. These disagreements are particularly hard to reconcile, since it’s hard to even imagine what doesn’t fit inside my ontology.

That was a bit of a whirlwind tour of ontology. I went fast and skipped over plenty, so I’ll ask: what would you like me to go deeper on? Is there anything that doesn’t make sense? Anything that fascinates or excites you? Let me know in the comments. If you’d like to learn more about ontology in its many forms and how it evolved, I highly recommend From Bacteria to Bach and Back by Daniel Dennett.