Do Brains do Back Propagation?

I’ve been thinking about this article Ulrike Hahn shared with me recently (alternative source here). Apparently, I have strong opinions about why we shouldn’t say that the brain is doing something “backprop-like” when we learn!

Before we start, the key thing to know is that computation in a neural network is distributed across many nodes connected by links. To tune the behavior of the network as a whole, you need to tune each of the nodes and links, but how do you know how any one node or link contributed to the final answer? It’s complicated, and each one depends on many others. We call that “credit assignment.”

I think both brains and artificial neural networks (ANNs) need to solve the credit assignment problem. For ANNs, an algorithm called “back propagation” or just “backprop” is the industry standard solution, and it works very well. I think what brains do is different.

There are two big reasons I say this. The first issue is how work is split between nodes and links.

In ANNs, the nodes themselves are trivial, and they’re completely homogeneous across a full layer of the network, if not all the layers. Any deeper computation is about how the nodes are wired together. That is, the program is in the links (synapse weights), not the nodes.

By comparison, brain cells are both complex and diverse. We don’t know how much of the computation happens within cells vs. between them. We’re just starting to figure out what all the different kinds of cells are, but have little idea of what they’re doing. It’s clear that individual neurons do a lot, and that ensembles of cells manage each other in complex ways.

I worry that saying the brain “does backprop” implies a network of trivial nodes, in which tuning weight vectors is where all the learning happens. That’s likely wrong, and it obscures other possibilities.

Backprop is an algorithm I run to optimize an ANN. It needs a top-down view of the network topology and the weights of all synapses. It solves the credit-assignment problem in a clever way, usually based on the error relative to a known target. Then it simultaneously updates all the link weights in the network based on how the ANN responded as a whole. First you train your network, then you can use it, but not both at once.
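To make that concrete, here is a minimal sketch of backprop on a toy one-hidden-layer network (my own illustration in NumPy, not any particular library’s implementation). Notice the top-down perspective: the training loop holds every weight and the whole topology, measures error against known targets, and updates all the links at once.

```python
import numpy as np

# Toy network: 3 inputs -> 4 hidden units -> 1 output.
rng = np.random.default_rng(0)
X = rng.normal(size=(16, 3))                          # inputs
y = (X.sum(axis=1, keepdims=True) > 0).astype(float)  # known targets

W1 = rng.normal(scale=0.5, size=(3, 4))
W2 = rng.normal(scale=0.5, size=(4, 1))

losses = []
for step in range(200):
    # Forward pass through the whole network.
    h = np.tanh(X @ W1)
    out = 1 / (1 + np.exp(-(h @ W2)))
    losses.append(float(np.mean((out - y) ** 2)))

    # Credit assignment: the chain rule pushes the output error
    # backward, telling every weight how it contributed to the loss.
    d_out = 2 * (out - y) / len(y) * out * (1 - out)
    d_W2 = h.T @ d_out
    d_h = (d_out @ W2.T) * (1 - h ** 2)
    d_W1 = X.T @ d_h

    # Simultaneous, global update of all link weights.
    W2 -= 0.5 * d_W2
    W1 -= 0.5 * d_W1

# Train first, use later: the error falls over the course of training.
```

Note that the nodes here really are trivial (a tanh and a sigmoid); every bit of learned structure lives in the link weights W1 and W2.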

Rather than being tuned by some external actor, brain cells manage their own relationships with their neighbors. They grow, prune, and modulate their synapses, and they decide when and how to do that based on imperfect feedback, limited information, and evolved heuristics. Brains track and minimize errors, but the targets are internally generated. This is happening continuously, with fluid transitions between acting in the real world, imagining, thinking, and learning.
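For contrast, here is an equally toy “local” learning rule (a sketch of the general idea, not a model of any real neuron). Each synapse updates using only what is available at that synapse: its own input, the cell’s own activity, and a single broadcast reward-like signal. There is no view of the rest of the network, no chain rule, and no externally supplied target; the reward criterion below is invented purely for illustration.

```python
import numpy as np

# One cell with four input synapses, learning from local signals only.
rng = np.random.default_rng(1)
w = rng.normal(scale=0.1, size=4)
w0_start = w[0]

for _ in range(500):
    x = rng.normal(size=4)             # incoming activity
    post = np.tanh(w @ x)              # the cell's own response
    # Imperfect feedback from the environment: reward when the
    # response happens to track the one input that matters (x[0]).
    reward = 1.0 if post * x[0] > 0 else -1.0
    # Three-factor local update: presynaptic activity, postsynaptic
    # activity, and a broadcast modulator. Nothing else is visible.
    w += 0.05 * reward * x * post

# The synapse carrying the useful signal strengthens on its own.
```

The point of the sketch is the information flow, not the specific rule: each update depends only on quantities the synapse itself could plausibly “see.”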

I’d argue what the brain does is much harder, and much more interesting.

Evolution and Cognition

(This month’s post features a photo of Stentor coeruleus by Jasper Nance on Flickr. Under a microscope, this little guy looks translucent white, with green and brown bubbles trapped inside of it. It’s roughly cone shaped, rooted to some debris at its narrow base, bent at a ninety degree angle as its wide, cilia-lined mouth quests about for food)

Sometimes I like to talk about all sorts of living things as “intelligent,” even ones that are very simple and reflexive in their behaviors. That’s because I tend to think of the process of evolution itself, and everything it creates, as intelligent. In some sense, I feel like the simplest of creatures are particularly intelligent in their design, because they don’t waste any effort on “thinking” or complexity when they don’t have to. But this isn’t how most people use the word “intelligent,” which is a problem. It’s not a particularly well-defined word, either, so I think my view deserves some clarification. How are evolution and cognition related to each other? That’s something I’ve been thinking about a lot lately.

Perhaps it’s best to start with some of the simplest life on the planet: bacteria. They are basically just little robots, controlled by a simple “computer” made out of proteins and genes (here’s a great book on the subject). They certainly don’t “think” like we do, but they do perceive their environment and respond to it appropriately. They seem to have a model of their tiny selves, and their relationship with their environment, that lets them make good decisions from moment to moment, to survive in a hostile world with intermittent food, and many poisons and predators about. They aren’t particularly creative, but their evolved programming can handle a very wide range of contingencies, responding in appropriate ways to just about anything that their ancestors have encountered in the past.

Even very simple computer programs can learn. It’s just a matter of observing when two experiences tend to go together, and recording that association for later. This is all that “AI” is doing, and it can be shockingly effective. There’s no reason that a “computer” made of proteins and genes couldn’t do the same thing. But simple cells like bacteria don’t need to do this. They reproduce so quickly, they can simply evolve instead. The difference is surprisingly subtle. Bacteria use a relatively fixed “program” for themselves. Unlike a mind, it isn’t made to reorganize itself to conform with experiences in the world, it just does what it does. However, this “fixed” program is constantly being subjected to random edits (mutations), so it does change. The main difference is that genes adapt to the environment by death and survival, rather than by learning within a lifetime. Bacteria are incredibly prolific. E. coli can easily fill a single milliliter of culture with billions of cells, and a population can double in twenty minutes. As long as some of the random edits are good, and they spread quickly, bacteria can adapt in real time to their changing environment, without learning.
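To show how little machinery that kind of learning takes, here is a minimal sketch (the “experiences” are invented for illustration): record which experiences co-occur, then use those counts later to predict what usually comes along with what.

```python
from collections import Counter
from itertools import combinations

pair_counts = Counter()  # how often each pair of experiences co-occurred

def observe(experiences):
    # Record every pair of experiences that happened together.
    for pair in combinations(sorted(experiences), 2):
        pair_counts[pair] += 1

def expect(experience):
    # Which other experience has most often accompanied this one?
    related = Counter()
    for (a, b), n in pair_counts.items():
        if a == experience:
            related[b] += n
        elif b == experience:
            related[a] += n
    return related.most_common(1)[0][0] if related else None

observe({"warmth", "light"})
observe({"warmth", "light"})
observe({"warmth", "food"})
```

After those three observations, expect("warmth") answers "light", the companion seen most often. That is the whole trick: associations in, predictions out.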

What’s interesting is that more complex cells do seem to learn. Animals like you and me are built out of vast numbers of eukaryotic cells, but our ancestors were actually single-celled eukaryotes, and there are still plenty of those thriving out in the wild today. These cells are much larger and more complex than bacteria. In the past, they actually swallowed up other cells that went on living inside them. So, they’re actually like little clusters of several cells in one, working together as a team. Thanks to one of these symbiotic partners, mitochondria, eukaryotic cells have a much larger energy budget than bacteria. Their genomes also have much more complex structure and regulatory mechanisms. They can actually re-edit genetic instructions after reading them, cutting and rejoining the RNA transcript in a process called splicing. We’re still not completely sure that these single cells do what we’d call “learning,” because we only recently thought to look for that, and it’s rather difficult to prove definitively. But there’s good reason to suspect they can.

There’s a single-celled eukaryote called Stentor coeruleus that is shaped like a tiny trumpet. It’s actually considered to be huge for a single cell, at a whopping two millimeters long. That may not seem like much, but it’s 1,000 times bigger than an E. coli bacterium. This little guy is very sensitive to vibrations, which it uses to detect predators. When it feels threatened, it contracts into a tiny ball to make itself harder to catch. What’s interesting, though, is that when it feels the same vibration over and over again and doesn’t get eaten, it learns to ignore vibrations at that frequency (but not other frequencies). That learning is flexible, because if the vibrations get stronger (as if the predator were getting closer, perhaps), it can change its mind and go back to treating them as a threat. This sort of behavior is pretty simple, but I would consider it “intelligent” in a minimal sense. It has a non-trivial model of the world that it updates in real time without having to evolve to do it.
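That behavior can be captured with surprisingly simple bookkeeping. Here is a toy model of frequency-specific habituation (my own sketch, not taken from the Stentor literature): harmless vibrations raise the response threshold for their own frequency only, and a stronger stimulus restores full vigilance.

```python
thresholds = {}   # learned response threshold, per vibration frequency
BASELINE = 1.0    # an alert, never-habituated threshold

def feel_vibration(freq, intensity):
    t = thresholds.get(freq, BASELINE)
    startled = intensity >= t
    if startled and intensity > BASELINE:
        # The threat got worse: dishabituate back to full vigilance.
        thresholds[freq] = BASELINE
    else:
        # Survived unharmed: habituate a bit to this frequency only.
        thresholds[freq] = t + 0.3
    return "contract" if startled else "ignore"

responses = [feel_vibration(440, 1.0) for _ in range(3)]
other = feel_vibration(220, 1.0)     # a new frequency still startles
stronger = feel_vibration(440, 2.0)  # a stronger vibration re-alarms
```

In this sketch, repeated gentle 440 Hz vibrations are soon ignored, while a 220 Hz vibration, or a much stronger 440 Hz one, still triggers a contraction: a tiny model of the world, updated in real time.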

As organisms get larger and more complex, they also become slower. They move more slowly and they reproduce more slowly. They also tend to have larger, more complex genomes which are much harder to successfully edit with random mutations. As this happens, life needs to find ways to keep responding quickly, and one solution is to invest more in cognition. Even single-celled organisms like Stentor seem to do this. Larger organisms like plants have even more complex cognition. Plants may not seem very intelligent, but they can learn, behave strategically, and even communicate. They interpret clues over a season to guess when it’s safe to grow, they maneuver around obstacles and each other to get access to light, and they warn each other when pests show up. This isn’t thoughtful, exactly, but neither is it simply hard-wired. There’s flexibility, context sensitivity, memory. Real, non-trivial computation. More complex organisms, like animals, have even evolved their own specialized “computers” built out of cells: brains. They are much more intelligent than plants, in one sense, because their behavior is so much more flexible and dynamic. They really can think, which for most people seems to be the gold standard for intelligence.

The powerful thing about investing in cognition is that it makes evolution easier. An intelligent organism doesn’t need to evolve exactly the right behavior for a situation, it just needs to be “close enough” from birth that learning can take it the rest of the way. For instance, most animals have to learn how to walk. That means when an animal is born a little different, say with legs that are shorter than usual, it can figure out how to work with that. In fact, if having short legs turns out to be useful, they might do better than their peers, even if they have a body type never seen before in their family history. If a species has to re-learn the same behavior in every generation in order to survive, then natural selection will favor individuals that learn that skill more easily and reliably. For instance, baby deer learn to walk very quickly, seemingly because they are born with some innate clues for how to do this, and an urgent desire to do so. In this way, behaviors that were at first creative solutions to novel problems can slowly become reliable programs encoded in the genes. This sort of dynamic is sometimes called “genetic assimilation” or “the Baldwin Effect,” and it’s one of the key ways that life uses its evolved intelligence to shape the process of evolution to make it more efficient.

This is why I like to think of evolution and cognition as continuous with one another. They are different things, but they are both kinds of learning, just happening on different time scales, and there is flow between them. Flexible cognition can make evolution much easier, and evolution can generate new opportunities for cognition to work with. You might say that organisms like bacteria, which are little more than fully-automated evolved mechanisms, aren’t intelligent because their behaviors are rigid and can’t adapt without evolution. But perhaps an evolving population of bacteria is intelligent? It can be quite flexible and dynamic, adapting to dramatic changes in the environment in minutes. This is why they can be so difficult to control, and often very challenging for our immune system (which is very intelligent and flexible) to keep up with.

Organisms like plants rely a lot on evolved mechanisms, but they also have some minimal real-time learning and flexible behavior, so I would consider even a single plant to be intelligent. Animal species like insects, reptiles, and mammals shift the balance even further toward real-time cognition. In a sense, every organism is precisely as smart as it needs to be to live its lifestyle. If it can be successful and thrive without expensive, complicated, biological computation that can learn in real time, that’s a good “design.” I consider that to be intelligent, even if the organism itself doesn’t seem so intelligent. Every organism evolved from a single common ancestor, and has had just as much time to adapt. Some have become extremely good at being extremely simple, with precisely honed, fully automatic routines controlled by genetics that have stayed stable for hundreds of millions of years. Others have less refined innate behaviors, and make up for it with flexible learning and cognition. Neither option is better, and evolution explores the full range of that spectrum, finding a huge variety of strategies to survive and thrive in a complex world!

Learning to Move: Three Kinds of Learning

(This post’s image is a photo I took of my yoga gear. Specialized tools like my mat, blocks, and strap work together to make my practice possible. They extend my body, and help arrange it in the ways my mind imagines)

I was pretty sedentary as a kid, and didn’t get serious about physical fitness until I was an adult. One nice thing about that is I got to watch myself learn, knowing all I do now about the brain. By practicing yoga and working with physical therapists, I’ve learned a lot about myself, but also how mind and body work together. Mastering a new physical skill actually recruits at least three separate learning processes working together. Understanding this changed my expectations, helped me gain more control over my body, and made exercise much more enjoyable.

When I first started yoga, I was startled by how little I knew about my body. My teachers were asking me to observe and discern sensations I’d never noticed before. They asked me to get into certain poses, using certain muscle groups, and I didn’t know how! I didn’t have the names for these things. Worse, I could see what I was supposed to do, but I didn’t know how to make my body do that, or even if I could. It was frustrating.

It’s weird to think how ignorant we are of our bodies, given that we live in them our whole lives. For me, a prime example comes from physical therapy. When recovering from an injury, I relearned how to walk up stairs. I’ve climbed stairs thoughtlessly my whole life, and I never considered there were different ways to do it. But the leg is controlled by opposing muscle groups. I used to climb stairs by lifting each leg, using just the muscles on the front side. I learned to also use the muscles on the back side, to push up and straighten the leg. Either set of muscles can do the job alone. Now, I consciously try to balance the effort from both sides, but this never would have occurred to me without knowing a little anatomy.

That knowledge was game changing for me, but unfortunately knowing how the body works isn’t good enough. I can memorize anatomical diagrams, muscle names, and facts about body mechanics, but the only interface between the brain and the body is the spinal nerves. How’s the brain supposed to know which nerve impulses correspond to which movements? There’s actually a region of the brain dedicated to this problem, the cerebellum, but it’s not consciously accessible. This is why yoga instructors use cues: they teach little mental tricks for recruiting muscles, and associate them with relevant postures.

Try this. Bend your elbows ninety degrees to extend your forearms out from your body, palms up. Imagine someone’s handing you a heavy platter. You might notice the trapezius and rhomboid muscles engage between your shoulders. These muscles largely serve a supportive role. For many people, they aren’t needed much in daily life, but using them can improve posture and reduce strain on other muscles. The problem is, they’re easy to ignore and hard to describe. But I can turn them on with the cue, and then I can learn what it feels like to use those muscles. Once I can tell whether they’re working, I can often activate them at will. Or, I can just use the cue, as needed.

Of course, conscious knowledge of form and cues are just step one. Muscle control is mostly unconscious, and for good reason. Remembering all the cues, monitoring my body, and continuously correcting my posture is work. It takes my full attention, leaving no room for anything else. Luckily, that’s just a phase. With enough practice, my cerebellum learns the patterns and can take over. I can hand off that work to my unconscious motion control sub-processor, freeing my conscious mind to think about something else.

This is why physical therapy can be so effective. After an injury, some muscles and joints may not perform like they used to. Some links between mind and body might even be severed or scrambled. Recovery means learning new ways to do old activities. At first, this is a nightmare. Without the support of the cerebellum, even just walking is an intensive conscious effort. Physical therapy can be a painful, tedious, and drawn-out process, but for many patients it makes a world of difference. It teaches the cerebellum new motion programs. Potentially, walking can become fully automatic again. The conscious mind can be used to retrain the unconscious mind in profound and lasting ways.

Yet knowing how to move isn’t enough if the body can’t follow through. The hardest part about learning a new physical activity is that the body usually isn’t ready for it. When I first started yoga, my muscles were weak, rigid, and lazy. They quickly became tired and sore, which just made me want to use them less. They struggled to move my body weight, and were so tight that my range of motion was limited. Some postures were hard, uncomfortable, or impossible. I couldn’t keep up, and when I pushed myself harder, I only injured myself.

That taught me a lesson about patience and acceptance. My body wasn’t ready, so I couldn’t do those poses, but I could work towards them. I learned to listen to my muscles complain, and to distinguish between different sensations. Some indicate hard limits I should not push past, but most are just signs of stress, and those can be good. When muscles, bones, and tendons get stressed, they respond by becoming bigger, stronger, and tougher in a process called anti-fragility. The discomfort I feel is just that physical learning process in action. By embracing the discomfort, I could slowly reshape my body.

Anti-fragility doesn’t involve the brain, conscious or unconscious. It’s a kind of learning that happens in the body tissues themselves. My muscles “know” whether they are getting the job done. They can tell if they are actually contracting and relaxing when they get the signal, whether that was easy or hard, and whether they sustained any damage in the attempt. They recognize how often they are put to use, and whether they are usually exhausted or ready for action.

Generally speaking, muscles conserve energy by doing as little as possible. But when I regularly demand more of them, they adapt. They become bigger, stronger, and more responsive. They consume more energy at rest so they’re always ready for action when I need them. They become less lazy, working harder by default, which makes them stronger still. This requires more protein to build the muscles, and more calories to power them. So my metabolism adapts, too. I eat more and my body burns more calories continuously, rather than storing them as fat.

What’s so fascinating is how all three ways of learning work together. With conscious thought, I choose to change my behavior. I master new facts and cues, so I know what I’m doing at an intellectual level, and can execute new skills (poorly, at first). With practice, not only do I refine those skills, but I engage an unconscious learning process that makes them fully automatic. I can focus my mind on the task I want to accomplish, and trust my body will just perform all the complex movements I need to pull it off. My muscles may not be up to the challenge at first, but that’s fine. With willpower, I push my tissues to their limits, and they learn to do what I ask. By the principle of anti-fragility, my body automatically remodels itself, increasing strength, flexibility, or stamina precisely where they’re needed. It makes itself a better robot, one that can live the lifestyle my mind consciously chooses. These three learning processes work independently, yet together they make a dynamic human being, one that can just as well become a yogi, a warrior, a marathon runner, or a weightlifter.

Intelligence isn’t just about brains, it’s about bodies, too, and about multiple intelligent systems working together in complex ways. I hope this was a helpful example, but as always I’m looking for feedback. Is this an experience you can relate to? Have you observed these different systems within yourself? Do you think it helps to know what’s going on intellectually, or do you approach physical training in a different way? Any other thoughts or observations to share? I’d love to hear from you in the comments.

In Every Mind, a Universe

The first life was blind and ignorant. It was little more than a self-perpetuating chemical reaction, constantly rebuilding itself and making new copies. It had no idea where it was because it had no way to perceive the world around it. Even the very notion of existing and moving within a physical space was incomprehensible. It didn’t know what it was, or even that it was, because it had no way to perceive its inner life, either. It just kept on going, making copies of itself, frequently with errors that made it worse or (occasionally) better at being alive.

Eventually, by chance, life discovered something very useful: certain molecules change shape when something happens to them. Some respond to being hit with light, others change in response to temperature, or pressure, or brushing up against another molecule with just the right shape. Life learned to read these signs, understand them as clues about the world, analyze them, make decisions, and respond. At least, metaphorically speaking. In reality, we’re still just talking about chemical reactions here. One shape change might trigger another, which might cause some new protein molecule to be synthesized, or kick off a chain reaction that leads to a cell moving, adjusting its metabolism, or whatever. The cell acts as if it appreciated the meaning of this signal, but without “thinking” except in a purely mechanical way.

This was the origin of meaning. At first, it was a very primitive thing. Life learned to discriminate between “good” and “bad.” That is, it noticed signs correlated with favorable living conditions, survival, and reproduction. Organisms that sought out more “good” signals while avoiding “bad” signals tended to live longer and produce more offspring. In this way, evolution slowly transformed random patterns of stimulus and response into instinct, innate biases baked in from birth, representing a sort of ancestral “knowledge.” Over time, life evolved more nuanced concepts like: light and dark; warm and cold; food and poison; me, us, and them. Teasing apart these subtler shades of meaning helped life develop more complex and successful strategies to survive in the world.

Every organism has this sort of evolved map of meaning (an “innate ontology”), implicit in their genes. It’s defined by their senses, physical capabilities, reflexes, and gut feelings. That means every species has a profoundly different perspective on reality. Fish, for instance, may have no conception of water because to them it’s a lifelong constant with no alternative. However, they have a very nuanced sense of the information carried in the water, which we are totally blind to. They can be very sensitive to things like pressure, temperature, chemical concentrations, currents, and even electrical fields. To a limited extent, they instinctively “know” where these signs are coming from, what they foretell, and how to react.

Talking about “ontology” as something in our genes is a little unusual. Typically that word is applied to minds, perhaps just human minds. It’s about how we perceive reality, dividing it up into objects, categories, and relationships. It’s how people fundamentally understand themselves and the world they live in, and it’s heavily influenced by culture. But philosophers like Daniel Dennett insist that the same concepts should be extended to precognitive life, as well. Our physical bodies lead us to perceive and think and act in human ways, laying a foundation upon which conscious learning and culture can build. In that view, our rich mental ontology is a product of evolution, constructed from lower-level, simpler, more instinctual parts that we share with many species.

Like the first living thing, each human is born into a new and unfamiliar world, forced to figure out how to survive from scratch. We do have a major leg up, though: we’re born with senses and instincts and the ability to move our bodies. Our nervous systems carry and integrate the sensations from our many cells to our brain as a coherent bundle of information. Our brains are highly structured, with all the tools we need to make sense of those signals set up and ready to go. For instance, our multi-stage vision pipeline takes in light signals from our optic nerve, then processes them to detect edges, shape, movement, and even faces from day one. As infants, we don’t know what these things mean yet, but our bodies present the information to our minds in a convenient form and draw our attention, making them quick and easy to learn.

But our innate ontology is very vague. We are born with a sort of “knowledge” (or at least a predisposition to learn) that we are bodies that can move around in a 3D world. That world is filled with objects we can interact with. Some of these objects can move, some are useful, and some are alive. We get tired, hungry, and sick. We need to breathe, drink, and eat other living things to survive. That’s all obvious from a very young age. The rest is on us to figure out. How do we tell friend from foe? How do we find shelter? What’s good to eat around here? How do we make a living? What is the purpose of our existence? These questions are context dependent and quick to change, so life hasn’t evolved answers for us. It can’t. Instead, it gave us brains so we could find our own answers.

What makes humans truly special, though, is that we don’t build our ontologies just by trial and error. We talk about our ontologies. We point things out, name them, tell stories, give demonstrations. We learn from our parents, peers, teachers, and the media. We’re immersed in the collective ontology of our species, something all of humanity has been cultivating for over a hundred thousand years. Our minds are built to soak it all in and to very quickly adopt a picture of reality that’s much richer, more accurate, and more nuanced than what’s available to any other species. Much more than any one human could possibly figure out in their lifetime.

This way of understanding reality is powerful, but it leads to a great big illusion: we tend to see our ontology as reality itself. That’s understandable. Our ontology is our window on the world. It encompasses everything we can perceive, understand, and do. Yet, it is not a real thing. It’s an image in our minds, our bodies, and our genes. It’s informed by our genetic ancestors, our senses, and what we’ve learned from each other. But we perceive much more than our senses actually take in. Our brains are running a sort of “image enhancement” algorithm, as seen in sci-fi classics like Blade Runner and now made manifest by deep learning software. We take in a little data, then use our knowledge and expectations to extrapolate something much bigger, fuller, and richer, making up the details that we can’t directly perceive. That is, we see what we believe. We perceive concepts, not reality as it truly is.

Of course, if our ontology is not a real thing and lives inside our minds, the consequence is that every human being must have a different ontology. Our ontologies are in many ways similar, sure. We are the same species, living in the same world, with the same basic needs. We may even be from the same community, with a shared culture. Yet, we might disagree about the meaning of important concepts like “freedom,” “equity,” and “justice.” We might have very different ideas about what money is, what purpose a government serves, or how to be a good person. These are not disagreements about facts, but about the structure of reality itself—the framework we use to fit facts together into a coherent picture. These disagreements are particularly hard to reconcile, since it’s hard to even imagine what doesn’t fit inside my ontology.

That was a bit of a whirlwind tour of ontology. I went fast and skipped over plenty, so I’ll ask: what would you like me to go deeper on? Is there anything that doesn’t make sense? Anything that fascinates or excites you? Let me know in the comments. If you’d like to learn more about ontology in its many forms and how it evolved, I highly recommend From Bacteria to Bach and Back by Daniel Dennett.