Evolution and Cognition

(This month’s post features a photo of Stentor coeruleus by Jasper Nance on Flickr. Under a microscope, this little guy looks translucent white, with green and brown bubbles trapped inside of it. It’s roughly cone-shaped, rooted to some debris at its narrow base, and bent at a ninety-degree angle as its wide, cilia-lined mouth quests about for food.)

Sometimes I like to talk about all sorts of living things as “intelligent,” even ones that are very simple and reflexive in their behaviors. That’s because I tend to think of the process of evolution itself, and everything it creates, as intelligent. In some sense, I feel like the simplest of creatures are particularly intelligent in their design, because they don’t waste any effort on “thinking” or complexity when they don’t have to. But this isn’t how most people use the word “intelligent,” which is a problem. It’s not a particularly well-defined word, either, so I think my view deserves some clarification. How are evolution and cognition related to each other? That’s something I’ve been thinking about a lot lately.

Perhaps it’s best to start with some of the simplest life on the planet: bacteria. They are basically just little robots, controlled by a simple “computer” made out of proteins and genes (here’s a great book on the subject). They certainly don’t “think” like we do, but they do perceive their environment and respond to it appropriately. They seem to have a model of their tiny selves, and their relationship with their environment, that lets them make good decisions from moment to moment, to survive in a hostile world with intermittent food, and many poisons and predators about. They aren’t particularly creative, but their evolved programming can handle a very wide range of contingencies, responding in appropriate ways to just about anything that their ancestors have encountered in the past.

Even very simple computer programs can learn. It’s just a matter of observing when two experiences tend to go together, and recording that association for later. This is all that “AI” is doing, and it can be shockingly effective. There’s no reason that a “computer” made of proteins and genes couldn’t do the same thing. But simple cells like bacteria don’t need to do this. They reproduce so quickly, they can simply evolve instead. The difference is surprisingly subtle. Bacteria use a relatively fixed “program” for themselves. Unlike a mind, it isn’t made to reorganize itself to conform with experiences in the world; it just does what it does. However, this “fixed” program is constantly being subjected to random edits (mutations), so it does change. The main difference is that genes adapt to the environment through death and survival, rather than through learning within a lifetime. Bacteria are incredibly prolific. E. coli can easily fill a single square millimeter with over 10 billion cells, and a population can double in twenty minutes. As long as some of the random edits are good, and they spread quickly, bacteria can adapt in real time to their changing environment, without learning.
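
To make that “recording associations” idea concrete, here’s a minimal sketch of that kind of learner in Python. Everything in it is an invented toy, not a model of any real organism or AI system:

```python
from collections import Counter
from itertools import combinations

class Associator:
    """A toy learner that just counts which events happen together."""

    def __init__(self):
        self.pair_counts = Counter()

    def observe(self, events):
        # Record every pair of events that co-occurred in this moment.
        for pair in combinations(sorted(events), 2):
            self.pair_counts[pair] += 1

    def association(self, a, b):
        # How often have these two experiences gone together?
        return self.pair_counts[tuple(sorted((a, b)))]

learner = Associator()
learner.observe({"light", "food"})
learner.observe({"light", "food"})
learner.observe({"light"})
print(learner.association("light", "food"))  # 2: light predicts food
```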

What’s interesting is that more complex cells do seem to learn. Animals like you and me are built out of vast numbers of eukaryotic cells, but our ancestors were actually single-celled eukaryotes, and there are still plenty of those thriving out in the wild today. These cells are much larger and more complex than bacteria. In the past, they actually swallowed up other cells that went on living inside of them. So, they’re actually like little clusters of several cells in one, working together as a team. Thanks to one of these symbiotic partners, mitochondria, eukaryotic cells have a much larger energy budget than bacteria. Their genomes also have much more complex structure and regulatory mechanisms. They can actually re-write the instructions copied from their genes after reading them, using a process called splicing. We’re still not completely sure that these single cells do what we’d call “learning,” because we only recently thought to look for that, and it’s rather difficult to prove definitively. But there’s good reason to suspect they can.

There’s a single-celled eukaryote called Stentor coeruleus that is shaped like a tiny trumpet. It’s actually considered to be huge for a single cell, at a whopping two millimeters long. That may not seem like much, but it’s 1,000 times bigger than an E. coli bacterium. This little guy is very sensitive to vibrations, which it uses to detect predators. When it feels threatened, it contracts into a tiny ball to make itself harder to catch. What’s interesting, though, is that when it feels the same vibration over and over again and doesn’t get eaten, it learns to ignore vibrations at that frequency (but not other frequencies). That learning is flexible, because if the vibrations get stronger (as if the predator were getting closer, perhaps), it can change its mind and go back to treating them as a threat. This sort of behavior is pretty simple, but I would consider it “intelligent” in a minimal sense. It has a non-trivial model of the world that it updates in real time without having to evolve to do it.
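
If you like to think in code, here’s a toy model of that habituation behavior. All the details are invented (real habituation builds up gradually, over many exposures), but it captures the frequency-specific memory I’m describing:

```python
class StentorToy:
    """Toy habituation: remember the strongest harmless stimulus per frequency."""

    def __init__(self):
        self.tolerance = {}  # frequency -> strongest vibration safely ignored

    def feel(self, frequency, intensity):
        if intensity <= self.tolerance.get(frequency, 0.0):
            return "ignore"  # familiar and harmless: stay put
        # Novel or stronger than before: contract, then habituate to it.
        self.tolerance[frequency] = intensity
        return "contract"

cell = StentorToy()
print(cell.feel(10, 1.0))  # contract: a brand-new stimulus
print(cell.feel(10, 1.0))  # ignore: habituated to this frequency
print(cell.feel(20, 1.0))  # contract: a different frequency
print(cell.feel(10, 2.0))  # contract: stronger, treat as a threat again
```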

As organisms get larger and more complex, they also become slower. They move more slowly and they reproduce more slowly. They also tend to have larger, more complex genomes which are much harder to successfully edit with random mutations. As this happens, life needs to find ways to keep responding quickly, and one solution is to invest more in cognition. Even single-celled organisms like Stentor seem to do this. Larger organisms like plants have even more complex cognition. Plants may not seem very intelligent, but they can learn, behave strategically, and even communicate. They interpret clues over a season to guess when it’s safe to grow, they maneuver around obstacles and each other to get access to light, and they warn each other when pests show up. This isn’t thoughtful, exactly, but neither is it simply hard-wired. There’s flexibility, context sensitivity, memory. Real, non-trivial computation. More complex organisms, like animals, have even evolved their own specialized “computers” built out of cells: brains. They are much more intelligent than plants, in one sense, because their behavior is so much more flexible and dynamic. They really can think, which for most people seems to be the gold standard for intelligence.

The powerful thing about investing in cognition is that it makes evolution easier. An intelligent organism doesn’t need to evolve exactly the right behavior for a situation, it just needs to be “close enough” from birth that learning can take it the rest of the way. For instance, most animals have to learn how to walk. That means when an animal is born a little different, say with legs that are shorter than usual, it can figure out how to work with that. In fact, if having short legs turns out to be useful, it might do better than its peers, even with a body type never seen before in its family history. If a species has to re-learn the same behavior in every generation in order to survive, then natural selection will favor individuals that learn that skill more easily and reliably. For instance, baby deer learn to walk very quickly, seemingly because they are born with some innate clues for how to do this, and an urgent desire to do so. In this way, behaviors that were at first creative solutions to novel problems can slowly become reliable programs encoded in the genes. This sort of dynamic is sometimes called “genetic assimilation” or “the Baldwin Effect,” and it’s one of the key ways that life uses its evolved intelligence to make the process of evolution more efficient.

This is why I like to think of evolution and cognition as continuous with one another. They are different things, but they are both kinds of learning, just happening on different time scales, and there is flow between them. Flexible cognition can make evolution much easier, and evolution can generate new opportunities for cognition to work with. You might say that organisms like bacteria, which are little more than fully-automated evolved mechanisms, aren’t intelligent because their behaviors are rigid and can’t adapt without evolution. But perhaps an evolving population of bacteria is intelligent? It can be quite flexible and dynamic, adapting to dramatic changes in the environment in minutes. This is why they can be so difficult to control, and often very challenging for our immune system (which is very intelligent and flexible) to keep up with.

Organisms like plants rely a lot on evolved mechanisms, but they also have some minimal real-time learning and flexible behavior, so I would consider even a single plant to be intelligent. Animal species like insects, reptiles, and mammals shift the balance even further toward real-time cognition. In a sense, every organism is precisely as smart as it needs to be to live its lifestyle. If it can be successful and thrive without expensive, complicated, biological computation that can learn in real time, that’s a good “design.” I consider that to be intelligent, even if the organism itself doesn’t seem so intelligent. Every organism evolved from a single common ancestor, and has had just as much time to adapt. Some have become extremely good at being extremely simple, with precisely honed, fully automatic routines controlled by genetics that have stayed stable for hundreds of millions of years. Others have less refined innate behaviors, and make up for it with flexible learning and cognition. Neither option is better, and evolution explores the full range of that spectrum, finding a huge variety of strategies to survive and thrive in a complex world!

Ramps and Ratchets: A New Paper!

(The image for this post is a screenshot of the video summary of the Baym et al. MEGA-Plate experiment, available here: https://youtu.be/plVk4NVIUh8. It shows a population of bacteria gradually evolving antibiotic resistance as they migrate through a specially designed habitat. The bacteria appear white on a rectangular black background. You can see the bacteria repeatedly breaking into new territory, where they fan out, spread, and completely fill the available space.)

In a few days, I’ll be visiting Kyoto to attend the 2025 ALife conference! I couldn’t be more excited for the trip, especially since my wife gets to come along. “ALife” is short for “Artificial Life,” and it encompasses all sorts of research into “life as it might be.” Mostly, this means lifelike computer simulations that help us better understand what life actually is, but part of the fun is there’s a great deal of interpretation, creativity, and art mixed in with the science. Japan is a hub for this sort of research, so I’m sure the conference will be quite an experience! Today I’m sharing the story of my experiment, light on science jargon. But soon the full paper will be published, along with a 15-minute video summary, which I will also share here. So, if you’re curious to follow along, please choose your own adventure.

This project started with a grant. My PhD advisor, Josh Bongard, specializes in evolving the designs for robots. He got funding to explore endosymbiosis: that is, making robots that operate within other robots. I found this topic fascinating, because working and evolving together is a powerful way that organisms in nature change their fate in dramatic ways. For instance: humans depend on our gut microbes to digest the foods we eat, and we would never have been able to power our big brains and live in so many diverse habitats without that collaboration.

I drew inspiration from one of my favorite biology studies, the MEGA-Plate experiment. I highly recommend you check out this two-minute video summary, but I’ll give a quick overview here. They wanted to see how new traits evolve and spread in bacterial colonies over space and time. So, they made a big rectangular habitat for E. coli, and filled the space with bands of gradually increasing amounts of antibiotic. Initially, these bands would block the bacteria from spreading, but eventually one would evolve resistance, spread into new territory, and fill it with a new lineage of slightly antibiotic-resistant bacteria. This happened over and over again, gradually increasing the amount of antibiotic the bacteria could tolerate, until they reached the middle and could thrive in a brutally high dose of antibiotics that would have killed their ancestors instantly.

I love that experiment because it shows us something about antibiotics—use a big dose, and keep taking it, because too little is just an invitation for bacteria to evolve resistance—but it also lets us watch evolution unfold in real time, and shows a way to coax life into evolving some new ability, just by manipulating the environment. It got me thinking: maybe there’s something similar going on inside each of us? The gut is a habitat for bacteria, but it’s a hostile environment, one that’s meant to be selective, letting “good” bacteria survive while keeping “bad” ones out. Perhaps our gut evolved to shape how populations of bacteria spread and evolve there, guiding them towards more mutually beneficial lifestyles? I wondered: could I evolve an environment that promotes the evolution of a population inside? Could I reproduce the MEGA-Plate experiment, and maybe even evolve a better environment than what those guys designed by hand?

Actually bringing that idea to life required a lot of technical decisions. First, I had to decide what these “bacteria” would actually do. To simplify things, I chose not to simulate actual bacteria, but instead play a simple number guessing game called HIFF. Each “bacteria” is really just a number, 64 bits long when represented in binary. I kept a population of thousands of these numbers, and in each generation, I would change them one or two bits at a time, which would change the score they got in the HIFF game. It’s a hard game to win, since there are only two right answers out of 18,446,744,073,709,551,616 possible 64-bit numbers. Guesses that are closer to correct get higher scores, so the population will gradually evolve towards the right answers, but the path the scores take is designed to be rugged and misleading! You frequently have to get worse for a while on your way to getting better, which makes it nearly impossible for a simple Evolutionary Algorithm (EA) to solve HIFF. This seemed like a good challenge, and it was also very practical: numbers are easy for computers to work with, so I could have many thousands of them and still get results fast.
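
For the curious, here’s a minimal sketch of HIFF scoring, following the standard recursive definition from the literature; my actual implementation may differ in its details:

```python
def hiff(bits):
    """Score a bit string with HIFF (Hierarchical If-and-only-If).

    Every block of identical bits, at every level of a binary
    hierarchy, adds its own length to the score. The only two
    perfect answers are all zeros and all ones.
    """
    if len(bits) == 1:
        return 1
    half = len(bits) // 2
    bonus = len(bits) if len(set(bits)) == 1 else 0
    return bonus + hiff(bits[:half]) + hiff(bits[half:])

genome = 0x0000FFFF0000FFFF  # one "bacteria": a 64-bit number
bits = [(genome >> i) & 1 for i in range(64)]
print(hiff(bits))      # partial credit for its uniform sub-blocks
print(hiff([1] * 64))  # 448, the best possible score for 64 bits
```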

Next, I had to figure out how to put an EA into a spatial environment. Lots of people have tried that before, and in many different ways, but they almost always design the environment by hand! I couldn’t find any examples of someone evolving an environment like this before, which meant my idea was new, but also that I was on my own figuring out how to do it. I turned to a favorite tool of mine: Compositional Pattern Producing Networks, or CPPNs. They can make cool, organic-looking patterns with lines, curves, and smooth gradients, and that seemed like just what I wanted. Instead of laying down different amounts of antibiotics, I would set a minimum HIFF score needed to survive at every point on the map. Then I’d evolve those maps, keeping the ones that led the population growing inside them to reach higher HIFF scores.
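
To give a flavor of what a CPPN produces, here’s a hand-written toy in the same spirit. Real CPPNs are evolved networks with arbitrary structure; every function and constant below is invented purely for illustration:

```python
import numpy as np

def toy_cppn(x, y):
    """Compose smooth functions of (x, y) into an organic-looking pattern."""
    stripes = np.sin(3.0 * x)                                    # periodic bands
    bump = np.exp(-8.0 * ((x - 0.5) ** 2 + (y - 0.5) ** 2))      # radial blob
    ramp = np.tanh(2.0 * (x + y) - 2.0)                          # diagonal gradient
    return 1.0 / (1.0 + np.exp(-(stripes + 2.0 * bump + ramp)))  # squash to (0, 1)

# Evaluate at every point on a 64x64 map, then scale by the maximum
# HIFF score to get a minimum-score-to-survive threshold per location.
xs, ys = np.meshgrid(np.linspace(0, 1, 64), np.linspace(0, 1, 64))
threshold_map = toy_cppn(xs, ys) * 448
```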

Of course, I needed a control, too. I wanted to study how the varying difficulty levels across the environment affected evolution, so one obvious comparison was a flat environment, with no minimum score at all. I also tried a version of the MEGA-Plate experiment design, since that seemed to work so well in real life. It took a lot of tinkering to get it working, and frustratingly it only seemed to work under just the right conditions, but eventually I got results that looked like the bacteria in that video. Even more exciting, I could evolve environments to do the same thing.

The concept worked! But, how? And why did it need to be set up in such a particular way? It felt like only a partial success. What did it mean, and why should anyone care? It wasn’t clear at first, but I decided to stick with it. I read a lot of past research about the factors that might be at play here, drawing on both computer science and biology. I also spent a long time staring at the results and tweaking things to see what would happen.

The clue that really made things click for me was when I did a “hyperparameter sweep.” Originally, I found that the MEGA-Plate copycat beat out the flat environment, but only when I configured the EA juuuuust right. In other cases, the flat environment did just as well or even a bit better. Claiming “success” felt a bit like cheating. So, I decided to try all the possible settings, to see precisely where it did better. Turns out, the key factor was selection pressure. When I made it so that numbers with higher scores could reproduce faster, then evolution worked just fine in any of my environments. But when I didn’t do that, when I let any number survive as long as it was “good enough,” with no particular edge over its neighbors, then the more challenging environments paradoxically produced better HIFF scores. After a few more tests, I convinced myself: somehow the spatial structure of the environment was inducing selection pressure, without me having to explicitly program it into the algorithm.
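
In EA terms, the two regimes I compared look roughly like this (a sketch with invented names, not my actual code):

```python
import random

def reproduce_proportionate(population, scores):
    """Explicit selection pressure: higher scores mean more offspring."""
    return random.choices(population, weights=scores, k=len(population))

def reproduce_good_enough(population, scores, minimum):
    """No explicit pressure: every 'good enough' survivor breeds equally."""
    survivors = [p for p, s in zip(population, scores) if s >= minimum]
    return random.choices(survivors, k=len(population))  # assumes some survive
```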

I think the way this works is fascinating. When I first lay down the “bacteria” into my simulated MEGA-Plate environment, most of them die right away! They simply can’t survive in the toxic middle area, and so they die out and leave empty space behind. The ones lucky enough to be placed near the edges, though, they do fine. They keep evolving, gradually getting better HIFF scores. Eventually, one of them gets a high enough score that it can survive in the next space over, so it migrates there and finds itself alone in a big open space. Since it has no competition, it gets to reproduce like crazy, completely filling the new territory with its children, each one just a little bit different from itself, because of mutations. Except, the minimum score here is a bit higher than before. Any children that get a much lower score can’t survive here; only the ones that are about the same or better can. That means the environment works like a ratchet, pushing the population to ever higher scores as they gradually climb the ramp of increasing difficulty.
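
Here’s a rough sketch of that ratchet as a spatial update rule. The structure and names are illustrative only; the real simulation has many more moving parts:

```python
import random

def ratchet_step(grid, threshold_map, mutate, score):
    """One toy generation: mutate, migrate locally, survive if over the bar."""
    height, width = len(grid), len(grid[0])
    new_grid = [row[:] for row in grid]
    for y in range(height):
        for x in range(width):
            parent = grid[y][x]
            if parent is None:  # empty space
                continue
            child = mutate(parent)
            # The child lands in this cell or an adjacent one...
            dy, dx = random.choice([(0, 0), (-1, 0), (1, 0), (0, -1), (0, 1)])
            ny, nx = (y + dy) % height, (x + dx) % width
            # ...but only settles there if it clears the local minimum score.
            if score(child) >= threshold_map[ny][nx]:
                new_grid[ny][nx] = child
    return new_grid
```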

A diagram showing how fitness varies across space in this experiment. The threshold for survival is illustrated as a series of three steps. On the first step (a), there are many dots ranging from dark purple to slightly red, indicating a low-fitness population. At (b), a single circle is placed between steps, with arrows indicating migration and the production of many children. On the next step (c), we see more dots, ranging from purple to red and even orange, indicating higher fitness. But all the darker circles are crossed out, indicating that they do not survive. At (d), we see another dot moving onto the next higher step, producing children with even higher fitness (red and orange dots).
A figure illustrating the ratchet effect. (a) When the threshold for survival is low, we get lots of variation. (b) A gradual ramp up allows numbers with higher scores to migrate into empty space and make many copies. (c) Exploration resumes, but more mutations are fatal here, creating a ratchet effect where fitness can only increase. (d) The process repeats!

Looking at it another way, my algorithm starts out by broadly exploring the space of possible numbers, without concern for which ones are better or worse. But, when one does get a higher score, that one gets rewarded with access to more territory and more children. That’s like starting up a whole new evolutionary search, beginning from a better baseline. When that happens, the search goes back to exploring very broadly, just with a higher standard for survival. Any number is allowed, just so long as it isn’t worse. The spatial structure of the environment and its gradual ramp up in difficulty is what decides how strong the selection pressure is, and what manages this back-and-forth process of “exploring” the search space very broadly and “exploiting” the numbers that got higher scores. This “explore / exploit trade-off” is a big topic of interest in the field of EAs, and something we typically have to tune by hand, but I got an EA to automatically solve this problem for me!

It’s still not clear how to make this practically useful. HIFF is just a number guessing game that I used as a stand-in for some more interesting task. This kind of EA ought to be good at open-ended exploration of complex problems, which is why it does well at a deceptive problem like HIFF. But HIFF only has two best answers! This approach might be much more useful and interesting in a more open-ended problem area, where there are countless different ways to succeed. Perhaps something like a robot, or a game-playing AI, or a search for many good answers, rather than the best answer. But I won’t know until I actually try!

Overall, I’m quite happy with this experiment. I was able to reproduce results from a favorite biology paper on my computer, and show how it relates to the kinds of search and optimization problems that Computer Scientists care about. I also got to apply the CS perspective to a Biology experiment, analyzing this “range expansion” phenomenon in terms of problem solving and search processes. It’s a modest start, but this is exactly the kind of work I set out to do, and I had a lot of fun doing it. It also earned an A+ for me and my partner, Anna Rees, on our final project for Evolutionary Computation class, a trip to Japan, and even a nomination for best paper at ALife 2025! So that’s pretty sweet. I hope to do more like this in the future.

Building Bodies

(The image for this post is a human embryo after six days of development: a blastula. The structure needed to construct a body is just starting to take form. Before this point, it’s mostly just a lump of undifferentiated cells. Image credit: Jenny Nichols)

Humans live in the macroscopic world. We’re used to interacting with other people and animals that are about the same size as us. Of course, everything we see as an “individual” is actually made up of unimaginably vast numbers of cells and molecules in constant churning motion, but to us they just look like solid, physical objects. This is normal to us, but it would be a totally alien experience to our most primitive ancestors. We are descended from individual cells that lived fully autonomous lives in a microscopic world. It’s truly extraordinary and weird to think that they would band together by the trillions to form human bodies, and yet that’s what they do. How did that come to be?

Our ancient ancestors were protists, single cells with complex lives. Each one had to find food, shelter, and resources. Just as we do, they continuously decided how to live, trying to survive and thrive in a chaotic world, and to set their children up for success. Sometimes they lived in communities and ecosystems, building networks of mutually supportive relationships. They’d form vast colonies of closely related cells, generally working together, communicating, maybe even specializing and subdividing tasks. And yet, each cell was still responsible for its own well-being. There was no top-down coordination. Each individual decided for themselves what to do. Cooperation would naturally arise when it was useful, and break down again the moment incentives changed.

As we now know, cells can do extraordinary things when they work together on a shared plan. In today’s animals, each cell has an established role to play. Collectively they build complex, macroscopic bodies that observe, think, move, and reshape the world at a vastly larger scale. But try telling that to a single cell! For two billion years, they were honed by evolution to fight tenaciously for their own survival as free-living individuals, and to prioritize their well-being and their offspring. Collectively, the cells of a macroscopic individual may fare better than they would alone, but not all of them. Think of the cells that line your stomach, whose life’s purpose is to get dissolved by acid so that other cells don’t have to. How does evolution convince a cell to do that voluntarily?

More importantly, most animal cells are evolutionary dead-ends. We have specialized sex cells whose only job is to produce children. Every other cell in the body is denied that privilege. Many of those cells still divide occasionally, but all their offspring will die out when the body does. They have no way to influence the next generation of their species. This is a huge contradiction, because those cells evolved for reproduction. For billions of years, the cells who contributed the most to the next generation were favored. Perhaps the most fundamental fact about life is that it proliferates, rapidly filling up every corner of the planet. How do you reverse this core instinctual drive in every living cell?

It’s quite likely that multicellularity evolved and fell apart many times. Discovering this trick is very hard, for the reasons outlined above and more. What’s even harder, though, is holding onto this innovation once it’s found. In order to maintain an animal body, every cell must compromise its well-being for the good of the whole, for a lifetime. If any cell decided to cheat—to live as a rebel among conformists, and selfishly exploit the body environment—it would have a distinct and powerful advantage. It would outperform the others, undermine their hard work, proliferate much faster, maybe even start a whole new successful family line as single cells, feasting on the remnants of the would-be body. Mutations that broke multicellularity must have been common!

Yet, today we live in a multicellular world. How did we make that work? There’s an interesting theory to explain this, and it explains something else strange, too: every single multicellular organism has sex. Some reproduce asexually, too, but the mechanism of sexual reproduction is universal. This is not at all the case for single cells. They exchange genetic material with other cells, but they don’t depend on others to reproduce. They do that entirely on their own, using whatever genes they have at the time. So, perhaps sex and multicellularity are linked? Perhaps sex came first, and is part of what made multicellularity possible? There are a few good reasons to think this, but for me the most compelling is that sexual reproduction creates mothers, and puts them in control.

Generally speaking, the process of building an animal body is decentralized. There is a sort of top-down coordination, using patterns of hormones and bio-electricity to shape a coherent whole, but this has to be generated by the cells themselves. Every cell autonomously figures out where to be and what role to play, via coordination with its neighbors—but not at first! At the very beginning of the process, the embryo’s genes are switched off and the mother’s genes direct the first stages of growth. In an egg or a womb, the mother also has full control over the environment in which development happens, which further shapes the process even once the embryo takes over. This gives Mom the power to set things up just right such that multicellularity is the only viable outcome.

The single cells of an early embryo aren’t a body yet. They default to the independent lifestyle they had for two billion years before the advent of multicellularity. The cells have to work together to build an organism with unified awareness and agency, but that means at first there isn’t one. There’s nobody to coordinate the cells, and no “greater good” to serve. Why should they work together? That’s where the mother comes in. She takes full control and forces the cells into the right starting configuration. She shapes the growing embryo and assigns roles to each cell by tweaking their gene expression. The child cells only get to take over once the general layout of the embryo is established, and the growth process is already underway. Once an individual emerges from the collective behavior of those cells, it can carry on the rest of the work of building a body.

From that point on, there’s very little temptation to cheat. The clockwork of the body has been set into motion. Each cell’s needs are provided for, and their freedom is restrained by the bonds they’ve already formed with their neighbors. The reproductive cells have already been isolated, making it impossible for rogue cells to influence the next generation. They could still rebel and do their own thing. That’s what cancer is. But it’s much harder than playing along, and carries serious risks. The immune system actively hunts for rebellious cells, and kills them on sight! It’s also a futile exercise. A cancer may grow and thrive and proliferate for years, but this is self defeating. When the cancer cells undermine the host body, they destroy their own environment and drive themselves to extinction.

Once cells are locked into this multicellular arrangement, something interesting happens to the selective forces that shape their evolution. As always, the reproductive cells carry subtle genetic differences from their parent, tiny mutations that serve as evolutionary experiments. Changes in an egg cell’s genes affect the behavior of every cell that follows, and thus the body and behavior of the child as a whole. But a new selective filter has been established. Only variations that are beneficial to both the cells and the individual are allowed through. Mutations that damage either level of the system produce unfit individuals who often won’t even develop to maturity. This creates a pressure for cells to become more cooperative building blocks, and for bodies to become more supportive homes for cells.

The takeaway here is that each living cell is a creative, intelligent, and autonomous survival machine. That makes multicellularity a tricky balancing act that’s hard to discover, and even harder to maintain. It seems likely that sexual reproduction and motherhood were essential ingredients to make this possible. In a sense, multicellularity isn’t passed on genetically, it’s passed on physically; each multicellular organism has to assemble the next generation from individual cells before that child can carry on building itself. The cells don’t know how to do it on their own, and they don’t care to. Yet, once a mother forces them into the shape of a growing embryo, it’s in each cell’s best interest to play along. For about 1.5 billion years, evolution has worked hard to maintain this tenuous arrangement by aligning incentives between cells and bodies. Both systems coevolve to complement each other, to ensure the multicellular compromise is the wise choice going forward, for everyone involved.

This is a speculative story, but well supported by evidence. It’s largely inspired by The Evolution of Individuality by Leo Buss, a very technical book about cellular and developmental biology, which goes into vastly more depth and detail than I did here about the complex lives of single cells and the struggle to align incentives across multiple scales. In particular, that book explores the very different ways plants and fungi have overcome the challenges of building macroscopic bodies. It’s a fantastic reminder that what animals do is not “normal,” it’s just what we’re used to. We’re actually quite strange and exceptional, as life goes. Most living things are single cells, and even when it comes to bodies, our way of doing it is just one of many. It’s a brilliant and eye-opening book, but a challenging read, so I wanted to write a more accessible summary, to share these ideas more broadly.

What Intelligence is Not

(The photo from this post is of a squirrel monkey eating fruit in a tree branch. The monkey is tiny, with golden / silver fur, pale pink skin, and a dark skull cap pattern. The fruit is small and red, perhaps a date. Used without modification under the creative commons license – source)

Life has been steadily driving towards greater and greater intelligence, eventually leading to human beings, who are the very pinnacle of this trend. Our superior minds are what separate us from the animals. They empower us to make a world of human flourishing, and justify our dominion over the planet. These tropes about intelligence are so common in our culture, they almost sound self-evident. Yet, I’ll argue that they’re completely wrong. These ideas are enticing because they appeal to our pride and our sense of specialness, but this way of thinking is destroying our world. So, let’s break down these myths and talk about what intelligence is not.

One problem with this story is it presents intelligence as a linear thing. Life started out dumb, and it gradually got smarter and smarter. In a sense, this is true. More intelligent life is more complicated, so it takes longer to evolve. But life doesn’t evolve towards anything, it evolves in all directions, finding and filling every niche available. Monkeys are brilliant at navigating tree branches and spotting ripe fruit. Trees are brilliant at producing the right amount of fruit at the right moment to use local resources efficiently and maximize the spread of their seeds. Yeasts are brilliant at performing alchemy on that fruit, transmuting sugar into alcohol, which the monkeys love. These are all different kinds of intelligence, and none is “better” than the other because they’re all contextual and interdependent. Every instance of intelligence looks different, because it’s adapted to a unique lifestyle.

We live a very complicated lifestyle that depends on our big brains, so we tend to think that more intelligence is better, but that’s just not the case. Some of the simplest, dumbest organisms on Earth are also the most successful. Microbes, fungi, and plants make up something like 99.5% of Earth’s biomass, while animals (the “smart” ones) make up the rest. Being smart is metabolically expensive. Taking time to think can mean missing a moment of opportunity. Sometimes real intelligence is knowing when a mindless strategy works best. If anything, humans are a great example of how intelligence can backfire. We’ve used our intelligence to make civilization, which is amazing! But in doing so, we accidentally drove many species to extinction, exhausted resources we depend on, and destabilized the global climate. Our kind of big-brained intelligence is a high-risk, high-reward strategy.

This brings us to the idea that humans are the pinnacle of intelligence. The problem with a word like “pinnacle” is it suggests we are the ultimate form—the thing life’s been building up to, all this time. But we’re not the end of anything. We’re still evolving, and it’s unclear whether our intelligence will go up or down from here. We’re also not the only ones. There are a handful of species that have gone “all in” on the strategy of super intelligence. You know, elephants, dolphins, octopi, the usual suspects. Humans may, in fact, be the smartest of them all, but since intelligence is so contextual, it’s hard to say. Maybe dolphins are more intelligent than us, it just looks different in an ocean species with no hands?

It may seem obvious that human intelligence is something more and different from those other species. We invented the wheel, New York, wars and so on. But that really isn’t because we as individuals are so smart. This is made clear by the tragic case of “wild children,” who grow up without parents or any human community. In the few cases we’ve observed, these children were described as animalistic, violent, and cognitively impaired. They were never able to recover or integrate into human society. Our brains alone do not set us apart from animals. Our society does, and that’s a separate thing, that evolved after our big brains. We’re smarter than other animals not because of our biology, but because of the vast library of practical knowledge and resources that we share with one another.

That’s what sets us apart: other species can’t access human culture. In a sense, that’s because those species are less intelligent; to fully appreciate human society, you need language and abstract thought, which many species lack completely. Yet some species thrive in human society anyway. By being useful (like wheat), or charismatic (like dogs), or sneaky (like raccoons) other species live with us and shape our human world. That’s because nature does not set humans apart from other animals. We set ourselves apart from other life by building walls, by excluding them from our world, to the extent that we can. We decide what plants and animals are pets, food, or pests. Other species don’t need language to live in human society if we choose to accommodate them. We can coexist with nature in community, as many human societies have, and still do. Or, we can perpetuate the myth that we are special to justify excluding and exploiting nature instead.

And, ultimately, that’s the problem with this notion of intelligence: we use it to draw a line between friend and resource. If smarter is better—if our intelligence is what sets us apart from other life, and gives us the right to exploit that life however we see fit—then where do we draw the line? Should smarter people get more rights and privileges than dumber ones? Is a disabled person no better than an animal? Should we simply recycle the feeble-minded from our population? This line of thinking is revolting, and it only makes sense if you believe these myths about intelligence. Similarly, if anything less than human is just a dumb resource for us to exploit, why not pave the planet? What’s wrong with processing all of that biomass, every living thing on Earth, into fuel and plastics? I think intuitively we know why: life has a right to exist, and losing all those diverse and beautiful kinds of intelligence would be tragic.

I’m excited to live in a time when our understanding of intelligence is changing so rapidly. It’s hard to define the word, just because we have so many examples that pull in different directions, and seem to contradict one another. Intelligence is many things, and we’re still fleshing out the full picture. Yet, every day we see more clearly that our old conceptions of intelligence that put human beings on a pedestal were wrong, and, more importantly, that they are at the root of so much injustice and destruction. So, while these tropes are still everywhere around us, shape the way our world works, and may still feel intuitively true, I urge you to reject them. We must move on, and embrace a more expansive view, one that doesn’t start from the premise of who to exclude.

The Universe Evolves

(This month’s featured image is a photo of the Carina Nebula taken by NASA’s James Webb telescope. It’s a vast cloud of gas and dust, slowly condensing, with hundreds of stars visible in the background behind it. The colorized image almost looks like orange mountains with a blue mist rising from them, set on a black background with bright, six-sided starbursts.)

Normally, when we talk about evolution, we mean what life does. It’s Darwin’s magic formula. You need reproduction. You need to pass on a copy of your genes, with a little variation, so things don’t just stay the same. Natural selection will weed out the less fit individuals, so they have fewer kids. The more fit individuals become more prevalent and, over time, life as a whole evolves to be more fit. Yet, this isn’t a very satisfying story. For one thing, how did it begin? Did life just start evolving out of the blue? I think the story is more compelling if we think about evolution a bit more abstractly. In a sense, the physical universe itself evolves. It doesn’t have reproduction and inheritance, but it sure does have variation and selection, and this has caused it to change dramatically over the course of history.

For the first 370,000 years or so, all of space was filled with a boring, homogeneous cloud of energy and plasma. That universe is now extinct, and for one simple reason: it was unstable. In our universe, stability is the ultimate definition of “fitness.” What persists, exists. Patterns of matter and energy that get generated more often and stick around longer become more prevalent. Those that are rare and fragile exist only fleetingly. The plasma universe is gone because gravity causes matter to clump together. It was like a pencil, balanced on its tip. As soon as it became just a little unbalanced, it rapidly fell farther and farther away from that delicate equilibrium. Plasma condensed into molecules, gas clouds, and stars.

Of course, evolution needs variation to work. To find what’s better, you need to weed out what’s worse. For life, reproduction is the engine of variation, but that isn’t necessary if you have unimaginably vast scale. The universe started out with very little variation, but it steadily increased as matter interacted with itself. Gravity caused hydrogen molecules to group together in uneven clumps, and held them there. They sat around for millions of years, slowly growing bigger, until the force of their own weight ignited a fusion reaction. The gas clouds became stars, and in their cores new elements were born. The universe’s population gradually became more diverse.

That’s the counterintuitive thing about stability: it can generate diversity. When patterns become more numerous, and they stick around for longer, chaos starts to kick in. Every star and every planet is a little different. They have unique histories and influences and opportunities. They might be richer in this element or that one, bigger or smaller, hotter or colder, more or less affected by collisions. This diversity only compounds over time, as these objects smash together and interact in complex ways. The longer they stick around, the more they change, recombine, and become more elaborate.

So, for 13 billion years, the universe evolved. Its population became stranger and more complicated. Today we have about a hundred “naturally occurring” elements, nearly all of which didn’t exist at first, but had to evolve through multiple generations of stars fusing atoms, exploding violently, and gradually reforming. We have many kinds of stars, planets, solar systems, and galaxies that support an astonishing variety of chemical processes that have had a very, very long time to develop. They produce “primordial soups,” pocket environments full of useful molecules for life, a steady energy source, and self-perpetuating chemical reactions. We think this happened at least once to seed all life on Earth, but it may in fact be very common.

I think this story is an essential foundation for understanding evolution as life does it. Because life didn’t start this process. The universe provides energy and raw materials in vast amounts. It provides the chaos and entropy that drives seemingly random variation, and the slow, continual breaking down that causes natural selection to prefer stable, commonly made forms. The laws of physics cause the universe to evolve towards stability, diversity, and complexity, at least for a while, until it starts to wind down again and settle into entropy. Life merely constrains that process, making it more efficient and productive, for the simple reason that matter that does so becomes more prevalent.

In these primordial soups, some chemical systems evolved to enclose themselves in bubbles, protecting delicate reactions from the outside world. These self-made “individuals” evolved regular cycles of reproduction, explicitly making copies of themselves rather than waiting for the right reactants to come together again by chance. They evolved DNA to constrain these copies, and make them more precise reproductions of the original. They evolved sophisticated error checking, which made the copies more robust and reliable. But this also gave life the power to manage variation across generations, and thus shape its own evolution. Life evolved sex to further manage variation, accelerating innovation by sharing genetic recipes across lineages. Life evolved an astonishing variety of sexual and reproductive practices, allowing it to evolve in different ways, with different patterns of variation and selection, each suited to a different range of environments and lifestyles.

The physical Universe evolves—in the most primitive way imaginable, but it still produces stability and complexity in a vast number of diverse forms. It generates the seeds of life, without any guidance or direction. Life evolves differently, because it constrains this process, making it discrete, digital, and managed. This started very simply, just discovering chemical reactions that isolate and maintain themselves. But perhaps this is the origin of what we think of as “intelligence” or “agency”? Without noticing, matter became “opinionated,” preferring certain forms and acting explicitly to promote them. From there, life’s “opinions” about itself only became more demanding and elaborate.

We often present evolution as one simple story, but there are many ways to evolve. Evolution is more like a general principle than a specific algorithm. Even just life as we know it, all based on the DNA molecule, has invented an astonishing variety of different and complex ways of evolving. Bacteria, fungi, plants, and animals use DNA differently. They grow, behave, and reproduce in completely different ways. How many other ways might there be to do it? When we present evolution as a single, constant thing, we limit our imagination. Evolution evolves, and it takes as many diverse forms as it makes.

Status Update: Semester 3

I’m at an interesting moment in my studies, so I thought I’d let you know what’s going on!

Year two of my PhD program has begun. I’m about a month into my third semester, and things are going well. I’m taking two classes right now: Evolutionary Computation, and Deep Learning. Most of my Computer Science education has been about how to design algorithms and write software to solve different kinds of problems, but these classes are different. This semester, I’m learning how to get computers to discover their own algorithms, and write their own software. Honestly, the state of the art here is still quite primitive. We’ve found some very impressive techniques, but they each apply to a narrow domain, and we don’t understand them nearly as well as we’d like. Which makes them fun topics to study. 🙂

The other fun thing about this semester is that both of my classes are built around student projects. More or less, I get to pick projects that fit with my research, and the class is there to help me find the time, resources, and guidance to complete the projects successfully. I like this much better than undergraduate style courses built around assignments and exams that are very generic and may not be relevant to my work. We’ll see how things unfold, but I’m currently planning to work on two projects that I’m excited about.

For Evolutionary Computation, I’m working on an experiment about endosymbiosis. I was inspired by this classic experiment, which examined how bacteria evolve antibiotic resistance, and how genetic innovations spread through the population spatially. I’m going to try evolving a host environment that supports an inner population, a bit like how my gut supports a microbiome. The hope is that the host will be able to design a supportive environment, with different regions that cultivate “microbes” with different traits, such that it can guide and coax them into evolving more specialized forms. This is an exciting experiment for me, because I’m not sure what to expect, but I’m pretty confident that something interesting will happen.

A screenshot from the video linked above, showing strains of bacteria gradually growing into bands with increasing concentrations of antibiotic, fanning out from points where key mutations occurred.

For Deep Learning, I’m going to use computer vision techniques to detect interesting patterns in the Game of Life, since I’ve been using that as an environment for my evolution experiments. The Game of Life has very simple rules, but it evolves in complex ways. Most patterns quickly dissolve into empty space or settle into a few boring, stable forms. But rarely, you get something much more interesting. For decades, people have been exploring this space, finding interesting patterns and classifying them. You get huge complex structures that stabilize themselves, change continuously in repeating cycles, or even propel themselves and move at a steady pace. I’ll build a system that can detect and categorize these patterns, so that when my evolutionary algorithm finds them, I can reward it and ask for “more like that.”
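
For reference, the Game of Life’s rules really are simple enough to fit in a few lines. Here’s a standard NumPy/SciPy version of one time step (the pattern detector is the hard part, and isn’t shown):

```python
import numpy as np
from scipy.signal import convolve2d

KERNEL = np.array([[1, 1, 1],
                   [1, 0, 1],
                   [1, 1, 1]])

def life_step(board):
    """Advance a 2D array of 0s and 1s by one Game of Life generation."""
    neighbors = convolve2d(board, KERNEL, mode="same", boundary="wrap")
    # A live cell survives with 2 or 3 neighbors; a dead cell is born with 3.
    return ((neighbors == 3) | ((board == 1) & (neighbors == 2))).astype(np.uint8)
```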

Examples of interesting patterns in the Game of Life. The first, Eater 2, is a static shape that persists forever, but has the special property of being able to “eat” gliders that collide with it, recovering its shape afterward. The second, Monogram, is a period-four oscillator, which is small, but occurs very rarely from random conditions. The third is a middleweight spaceship, which moves forward two spaces as it repeats itself in four time steps.

This month’s essay is inspired by my Evolutionary Computation class, and the work I’ve been doing to develop the specific research questions I want to focus on for my PhD. So, check back on Wednesday to learn more about how evolution got started, and why it’s worth asking: how does evolution evolve?

GECCO Follow-Up

(I took this post’s photo at the Star Trek Original Series Set Tour in Ticonderoga, New York. It’s a view of the warp core of the USS Enterprise, which is only a few feet deep but looks much larger thanks to forced perspective. The room is filled with structures with complicated geometric shapes, technical looking panels, and dramatic lighting in red, blue, and purple.)

In my last post, I wrote about my latest research project and why I was so excited to present it at GECCO, the premier conference for evolutionary computation. I promised a follow-up, and here it is! Unfortunately, I didn’t make it to Melbourne. Instead, I had a very complicated and protracted battle with my University’s travel planning system, United Airlines, and the Australian visa office, all from the comfort of my home in Vermont. I couldn’t even participate in the event remotely, because of the time zone difference. This is all very disappointing, but I tried to make the best of it. I’ve been busy with the next iteration of this project, and enjoying a bit of “staycation” time here in New England (hence this month’s cover photo).

In any case, my paper did get published, and I’d still like to share the materials I presented virtually at the conference. It’s mostly intended for a technical audience, but I hope at least some of my readers will find it interesting. The paper is titled A Meta-Evolutionary Algorithm for Co-evolving Genotypes and Genotype / Phenotype Maps. I had to cut it down to just four pages for the official publication, since it was accepted as a poster, but the full length version is available here, and I wrote up an overview of my algorithm’s implementation for those who want to go deeper. There’s also a digital version of my poster and a short video overview of my experiment.

I continue to work on this idea, and it is starting to evolve beyond what I presented in that paper. Right now, I’m actively deconstructing and rebuilding the algorithm. CPPNs are an important and well-known part of the AI field, so I’m trying to describe precisely how my algorithm is different, and which of those differences account for the remarkable results I found. Originally I thought of this research as being about epigenetics specifically, but as I try to generalize and simplify, what I’m left with looks like straight-up endosymbiosis. I’ve been thinking of this algorithm as a metaphor for a cell and its genes / nucleus, but it could just as easily be a metaphor for an animal and its community of microbes. This is exciting, since I’d love to do more research on endosymbiosis, and I really like the idea that perhaps symbiosis is the driving force behind intelligence as we know it, fundamentally changing the dynamics of evolution.

Anyway, that’s how I see it for the moment, and where I hope my research will lead in the near future. For now, though, I’m wrapping up my summer with a few more fun outings, and preparing for the start of classes later this month. I’ll be diving deep into both evolutionary computation and deep learning, which I’m really looking forward to.

Why the Game of Life Paper?

(This month’s image is a slime mold growing on a log. It grows in a branching network of banana-yellow tendrils, some of which are engulfing plant debris they encountered. Source)

Later this month, I’ll be attending the Genetic and Evolutionary Computation Conference (GECCO) in Melbourne, Australia. I’m super excited to go, and to present my very first published academic paper as a poster. I’ll share more here when all is said and done, but unfortunately my paper isn’t really intended for a general audience, like this blog. It would probably be hard to understand for anyone outside of the fields of AI or ALife. So, for everybody else, I’d like to share what the paper means to me, and what I’m trying to say by publishing it.

My research is inspired by epigenetics, and new ways of thinking about evolution. I saw that life doesn’t just evolve by chance, it evolves to become more evolvable. It learns how to explore the range of possible forms and lifestyles more efficiently, and to nudge evolution down more fruitful paths. Life uses its intelligence to become more intelligent still. In my mind, this changes everything about evolution, and I was shocked it wasn’t more well known. Most discussions of evolution (and the programmer version: evolutionary computation, or EC) are too simple, and ignore these critical details. So, I figured I’d be the one to bring this up, and show people why it matters.

I started my first experiment before I even got to university. I was so excited by the idea, I just had to get it out of my head. I actually avoided looking for prior work, because I wanted to see how I would manifest this idea without being biased by other people’s thinking. Besides, I didn’t know of any research like mine, and I didn’t know how to find it, either. That’s why I applied to UVM. When I got here, my advisor and lab mates encouraged me to publish this project, and pointed me at the relevant literature. So I hit the books, reading all that had been done before in order to put my own work into context.

And, of course, I found I’m not the first to have this idea. There are many variations of EC inspired by biology, looking for the “secret sauce” that makes life more powerful than our computer models. In particular, how life evolves to be more evolvable is an active area of research, which has been building momentum in recent years. At first, I was disappointed. My idea was already taken! So much of what I thought made my project interesting had been tried before in some other context. But not exactly. Identifying those subtle differences has been tremendously helpful.

You see, it’s pretty well established now that “evolvability” is important. In our experiments, simulated life that’s more evolvable finds fitter solutions faster. It’s better at adapting to changing circumstances, too. It seems to be smarter and more creative. I find this exhilarating, yet these discoveries didn’t “change everything” like I had hoped. In the experiments so far, it feels like an incremental improvement. It helps, but not enough to draw much attention away from other areas of AI research, like deep learning, which is seen as much more powerful and more productive.

I think that’s because we still haven’t broken out of our old ways of thinking. Traditional EC is all about finding good solutions to a problem, but I would argue that evolution isn’t about problem solving. It’s about problem finding. Life explores the space of possible lifestyles to find and exploit opportunities. The evolution of life is a bit like a slime mold. It grows simultaneously in all directions, questing around obstacles to find resources, reinforcing the branches that get lucky, culling back the ones that don’t. It doesn’t have a top-down view of the world, but it’s still strategic and adaptive. When I look at most of the existing experiments in this space, I feel like we’re putting a slime mold into a narrow tunnel and measuring how fast it can get to the other end. We’re accidentally putting evolution in a straitjacket, and blinding ourselves to what makes it so interesting and powerful.

So, in my first experiment, I tried to show a different perspective. I made a single algorithm that can adapt itself to solve many different tasks. Normally, an EC programmer picks one task to solve, then designs an evolutionary search strategy to suit that problem: a genome language, a way of turning a genome into a solution, and ways of randomly tweaking the genome that might lead to better solutions. In my experiment, I evolved the search strategy, too. As the programmer, I designed a vast, open-ended search domain, along with many ways the algorithm could restrict that space. But I wasn’t sure which restricted sub-spaces would work best, and, unlike a traditional EC programmer, I didn’t try to guess. I just let the algorithm figure that out for itself.
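To make “evolving the search strategy” concrete, here’s a minimal sketch in Python. All the names and the placeholder fitness function are mine, invented for illustration; the real experiment is far more elaborate. The key idea is that each individual inherits not just its genes, but also its own mutation parameters, so the search strategy itself is under selection:

```python
import random

# Toy individual: a genome of real numbers, plus per-gene mutation step
# sizes. Both halves are inherited and both get mutated, so the search
# strategy evolves instead of being fixed by the programmer.
def new_individual(n=10):
    return {"genes": [random.uniform(-1, 1) for _ in range(n)],
            "steps": [0.1] * n}

def mutate(parent):
    child = {"genes": list(parent["genes"]), "steps": list(parent["steps"])}
    for i in range(len(child["genes"])):
        # Mutate the strategy first, then use it to mutate the gene.
        child["steps"][i] *= 2 ** random.gauss(0, 0.2)
        child["genes"][i] += random.gauss(0, child["steps"][i])
    return child

def fitness(ind):
    # Placeholder task: get every gene as close to zero as possible.
    return -sum(g * g for g in ind["genes"])

population = [new_individual() for _ in range(50)]
for generation in range(100):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]
    population = parents + [mutate(random.choice(parents)) for _ in range(40)]
print(max(fitness(ind) for ind in population))
```

Lineages whose step sizes happen to suit the problem out-compete the rest, so the algorithm gradually restricts its own search space without me guessing where to look.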

The way I did this is also interesting. It turns out, the algorithm I invented is strikingly similar to one that’s already popular: “compositional pattern-producing networks,” or CPPNs. Again, it was a little frustrating to be scooped, but I’m using this algorithm in a new way. Instead of evolving new “bodies” for simulated life, I’m evolving new ways of generating bodies. It’s a subtle difference, but an important one, I think. That extra level of indirection gives evolution more influence over its destiny, and the power to make more complex patterns in ways I couldn’t even anticipate. Now that I know how my idea is so similar to, yet different from, an existing algorithm, I’m teasing apart those differences to measure the impact of each one.
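Here’s a toy illustration of that indirection, again with made-up names and numbers. Rather than storing the body directly, the genome parameterizes a pattern-generating function, and the body is produced by querying that function at every coordinate (which is the core idea behind CPPNs):

```python
import math

# Direct encoding: the genome literally lists every cell of the body.
direct_genome = [1, 0, 1, 1, 0, 1, 0, 0]

# Indirect encoding: the genome describes how to *generate* a body.
indirect_genome = {"freq": 3.0, "phase": 0.5, "threshold": 0.0}

def generate_body(genome, size=8):
    # Query the pattern function at each coordinate to grow the body.
    return [1 if math.sin(genome["freq"] * x + genome["phase"])
                 > genome["threshold"] else 0
            for x in range(size)]

print(generate_body(indirect_genome))
```

A single mutation to freq reshapes the whole pattern coherently, all at once, which is very different from flipping one cell at a time.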

I’m proud of my work, and excited to talk about it with other EC enthusiasts at GECCO. On the other hand, I’m still figuring out how to do science, and there’s a lot I don’t like about my first paper. This project was mostly my way of proving to myself that this crazy idea could work. The results are intriguing, but it’s not yet a clear example of what I want to show. It’s also complicated, unusual, and hard to explain, even to other EC researchers. If I want people to get excited about this, I need to simplify, make my work more relatable, and find better ways to demonstrate and measure the novel behavior I’m talking about here. There are no “obstacle courses for slime molds” in the EC literature that I know of, so perhaps I’ll need to design some.

Hopefully, I’ll get lots of inspiration and feedback at GECCO. As I learn more about the field of EC, I’m finding more and more examples of work similar to, yet slightly different from, my own. This is great, because each of those differences is an opportunity for a new experiment, a chance to see if my perspective can shed light on something new. I’m already dreaming up all sorts of new ways to explore my ideas. And that’s more or less how I hope to spend the next several years. Maybe that’s my PhD.

In any case, I hope that explanation was interesting, and not too vague. I’ll get more specific in a few weeks, when I post a follow-up with the full GECCO paper, the poster I presented, a video summary of that poster, and links to some supplemental results and analysis. I bet I’ll have some fun things to report from my time in Melbourne, too! As always, I’d love to hear from you in the comments.

New Coding Project

I just published a new coding project! If you’ve been following along, this is the “grown up” version of a demo project I posted last year.

It’s a further exploration of what I call an “epigenetic algorithm.” It’s inspired by a simple observation: in living cells, the process of evolution is actively managed by the cell, which is itself an evolved mechanism. Using evolution to optimize evolution seems like a powerful trick, so I’m trying to reproduce it in small-scale AI experiments. I hope to make evolutionary computation more open-ended, more successful in vast search spaces, and less biased by the programmer. In this case, I’m generating cool-looking Game of Life simulations, but I hope to find many more practical applications in the future.
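If you haven’t met Conway’s Game of Life before, here’s a minimal sketch of the simulation being evolved. This is my own toy version, not the project’s actual code; the project wraps something like this in an evolutionary loop, where a genome determines the starting grid, and scoring what counts as “cool looking” is the hard part that isn’t shown here:

```python
import numpy as np

def life_step(grid):
    """One step of Conway's Game of Life on a wrap-around numpy grid."""
    # Count each cell's eight neighbors by summing shifted copies.
    neighbors = sum(np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
                    for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                    if (dy, dx) != (0, 0))
    # A cell lives next step with exactly 3 neighbors, or 2 if already alive.
    return ((neighbors == 3) | ((grid == 1) & (neighbors == 2))).astype(int)

rng = np.random.default_rng(0)
grid = rng.integers(0, 2, size=(32, 32))  # one candidate "phenotype"
for _ in range(100):
    grid = life_step(grid)
```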

I’m not sure whether I’ll publish this as a work of science. It’s complicated and weird, and there’s more work to do before it becomes a proper controlled experiment. As I learn more, I’m already thinking of other ways to explore this idea that might be more effective. So, for now, the plan is just to share the code; I may or may not revisit it later, depending on where my other leads take me. 🙂

For a technical audience, I gave a presentation that covers the motivation, results, and challenges of this project.

You can also peruse the source code on Codeberg.

Sorry this is less accessible than my usual blog posts! If you have any questions, just drop them in the comments, and I’ll happily answer them.

Learning to Steer

A Cell’s Eye View of Evolution, Part 3

(The image for this week is an illustration from Waddington’s 1957 book The Strategy of the Genes, which is often used to explain canalization. It shows a landscape with shallow, forking grooves and a ball rolling down that landscape. Although the ball’s path isn’t fully determined, the existing impressions in the landscape constrain it to one of a few likely paths)

This is part three of a three-part series. You can read it on its own, but to get the whole story, you should start from the beginning.

Darwin explained part of the great mystery of life: how complexity and intelligence can evolve from randomness. When DNA was discovered, this seemed to “seal the deal.” DNA is the molecule that describes an organism’s nature, makes traits heritable, and carries mutations that are fodder for natural selection. That seemed to explain everything, at first, but I would argue that’s just the beginning. The DNA molecule itself lies at the heart of an incredibly complex system of processes that manage its care, use, and replication. These systems are collectively studied as “epigenetics,” and science is just beginning to understand how they work and the impact they have on evolution.

It’s important to remember that DNA is an inert molecule that does nothing by itself. It needs a cell, a sort of organic micro-robot, to interpret that DNA and turn it into form and behavior. So, in a sense, every living thing is made up of two evolved programs: the DNA and the cell. Both are made of physical matter, and both are subject to mutation. Both share the same selective pressures and reproductive fate. They evolve together, but they have different purposes. For the most part, the DNA program determines the organism’s lifestyle. The cell decides how to read that program, and how to make changes to it over a lifetime and across generations.

One of the most important ways cells influence their own programming is through mutations. These happen naturally. As molecules bang against each other and get exposed to UV radiation from the sun, they sometimes spontaneously change shape. When those molecules represent critical information for a species’ survival, that could be disastrous. For this reason, life invests a ton of energy into detecting and correcting errors. But this process is never perfect, and it can’t be. If life always copied itself perfectly, there would be no variation for selection to act on, and no evolution. Not only that, getting the error rate much lower than it already is would be prohibitively expensive. So, life strikes a healthy balance, allowing just enough mutation to be useful, but not enough to be dangerous.

Interestingly, that finely tuned mutation rate is not constant or universal. Some stretches of DNA get extra error correction, always triple-checked to ensure they stay as stable as possible. Others get actively shuffled, injecting randomness into things like the immune system, creating diversity that makes the population as a whole more robust. Perhaps most remarkable is that when cells get stressed, they divert energy away from error correction and toward more urgent needs. This may just be an accidental side effect of the cell breaking down, but it might also be a survival strategy. To anthropomorphize, perhaps cells get creative when times are tough, trying out crazy ideas in the desperate hope that one might save them.
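If you like code, you can translate that idea into EC terms with a hypothetical sketch like this one, where a lineage copies itself carefully while it’s improving, and relaxes error correction when it stagnates. The specific numbers and the “count the ones” task are arbitrary illustrations, not biology:

```python
import random

GENOME_LEN = 64

def fitness(bits):
    return sum(bits)  # toy task: maximize the number of 1s

def mutate(bits, rate):
    # Flip each bit independently with probability `rate`.
    return [b ^ (random.random() < rate) for b in bits]

genome = [random.randint(0, 1) for _ in range(GENOME_LEN)]
stagnation = 0
for _ in range(2000):
    # "Stress" = no recent improvement. A stressed lineage diverts
    # energy away from error correction and mutates ten times faster.
    rate = (10.0 if stagnation > 20 else 1.0) / GENOME_LEN
    child = mutate(genome, rate)
    if fitness(child) > fitness(genome):
        genome, stagnation = child, 0
    else:
        stagnation += 1
print(fitness(genome))
```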

That said, a single DNA copying error can be devastating, so how does life cope? Remarkably, it can often just work around the problem. Living systems have a lot of redundancy, with many mutually supportive ways of doing basically the same thing. This leads to a phenomenon called “canalization.” The more critical some behavior is to life, and the longer it persists over many generations, the more redundancy builds up around it. This means that single errors may alter the behavior a little, change how it works, or make it less efficient, but probably won’t break it entirely.

When errors are too severe to recover from, an organism might just fail to thrive and die, but sometimes it actually notices the failure and decides to self-terminate. That may seem bizarre, but in multicellular organisms it makes a lot of sense. If the error is in a single cell, then removing that cell lets others take over its job. If the error would prevent the whole embryo from developing into a healthy adult, then it’s better to scrap the work in progress, recycle those materials, and start over from scratch. In other words, life has its own Quality Assurance processes, at multiple levels, which minimize investment into evolutionary dead ends.

Cells can also swap genes with each other, sharing useful recipes and trying them out in new combinations. Sex is one way to do it, aligning and remixing two complete genomes in an incredibly complex way that ensures the resulting DNA program is still valid. Simpler organisms like bacteria don’t do this; instead, they swap genes through a much more free-form process called “horizontal gene transfer.” Basically, cells sometimes leave scraps of DNA lying around, or pick up those scraps and integrate them into their own programming. This can let a new behavior (like resisting some toxin or eating some food) spread very rapidly through a bacterial colony. Either way, adopting genes that have already proven successful in another organism is a much safer and more powerful way to create useful diversity than random mutation alone.
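For flavor, here’s what horizontal gene transfer might look like in a toy bitstring population. All the names and rates are mine, chosen for illustration, and genomes are assumed to be equal-length lists:

```python
import random

def horizontal_transfer(population, pickup_rate=0.1):
    """Occasionally copy a short gene segment from one genome to another.

    Unlike sexual crossover, this is asymmetric and piecemeal: the
    recipient keeps its whole genome except for one borrowed stretch,
    and no careful alignment of two full genomes is needed.
    """
    for recipient in population:
        if random.random() < pickup_rate:
            donor = random.choice(population)  # a "scrap" left by a neighbor
            start = random.randrange(len(donor))
            length = random.randint(1, 8)
            recipient[start:start + length] = donor[start:start + length]
    return population

colony = [[random.randint(0, 1) for _ in range(32)] for _ in range(20)]
colony = horizontal_transfer(colony)
```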

It’s also worth noting that how a cell reads its DNA can change over a lifetime. Cells annotate their program with notes (i.e., methylation) that indicate which recipes to avoid or use more of, depending on context. This is how single-celled organisms adapt their behavior to a changing environment, and how cells in multicellular organisms differentiate into different kinds of tissue. Importantly, these notes are sometimes passed down across generations. For instance, an organism might survive near-starvation by tuning down its metabolism, staying smaller, slower, and using less energy. That change is heritable. The next few generations will also have a slower metabolism, and if that serves them well, it could lead to long-lasting behavioral changes that eventually get encoded into the DNA itself.
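Here’s a sketch of that idea, with invented gene names. The genome itself never changes, but an expression mask rides along with it, can change within a lifetime, and gets copied into offspring:

```python
class Organism:
    def __init__(self, genes, marks=None):
        self.genes = genes                          # the DNA "recipes"
        # One mark per gene: 1 = expressed, 0 = silenced (methylated).
        self.marks = marks if marks else [1] * len(genes)

    def phenotype(self):
        # Only unsilenced genes contribute to form and behavior.
        return [g for g, m in zip(self.genes, self.marks) if m]

    def respond_to_famine(self):
        # A within-lifetime change: silence high-metabolism genes.
        # (Which genes count as "metabolic" is a toy assumption here.)
        self.marks = [0 if g.startswith("metab") else m
                      for g, m in zip(self.genes, self.marks)]

    def offspring(self):
        # Genes AND marks are inherited, so the famine response
        # persists in the next generation with no change to the DNA.
        return Organism(list(self.genes), list(self.marks))

parent = Organism(["metab_fast", "grow_big", "repair_dna"])
parent.respond_to_famine()
child = parent.offspring()
print(child.phenotype())  # ['grow_big', 'repair_dna']
```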

So far, I’ve talked about how life modifies itself, but it also modifies the environment. Organisms build caches, nests, and tools that make life easier, and these get passed on, too, both as hand-me-downs and as lessons. Organisms form ecosystems, full of mutualistic relationships between species that benefit everyone involved. Over geological time, this has transformed our planet from a barren rock into a lush world full of possibility. Life cultivates a supportive environment for future generations, shaping their behavior and evolutionary fitness. Child care might be the most visible example: protecting each new life when it’s most fragile, then sending the new generation off in a good direction informed by the parents’ life experience.

In the basic Darwinian story, evolution is something that happens to organisms. Accidental changes occur randomly, and nature chooses which ones will persist. But as we’ve just seen, life does not leave things up to chance. Randomness plays a key role in biological evolution, but life manages that randomness carefully and uses it selectively. Life also does everything in its power to influence the next generation, in ways that are not random, but “purposeful” in a sense. A cell can’t understand why it does these things, but it does them for a reason: they worked well in the past, got selected for, and ended up in the cell’s programming.

This leads to a powerful realization: when it comes to influencing evolution, cells don’t understand what they’re doing, but many higher organisms do, at least a little. For instance, an animal can apply its full cognitive capacity, mind and all, to choosing a mate and raising its offspring. In this way, life has moved from blindly repeating what worked in the past to making evolutionarily relevant decisions intentionally, with forethought and analysis. A dog may not understand genetics or think about the future of her species, but she certainly has strong opinions about who would make a good mate, when, where, and how to raise her puppies, and which pups to give more or less attention to. She uses her senses, her instincts, and her big brain to make big decisions that shape evolution. She may not see the big picture, but she cares, and she makes informed choices nonetheless.

This helps explain the paradox of how life managed to become so incredibly smart just by randomly banging molecules together for a few billion years. Life may have started off randomly, but it quickly became more directed. Life harnessed Darwinian evolution to build a more powerful evolutionary algorithm, one that’s opinionated and shapes its own search space. At first, these evolution-shaping behaviors were simple and rigid, just tricks repeated by rote because they tended to make the next generation more successful. Then, as life became more intelligent, it started to apply that intelligence to shaping itself, creating a runaway process of recursive self-improvement.

Conclusion

The main takeaway from all this is that Darwin is the beginning of the story of evolution, not the end. Life uses all of its intelligent capacity to influence its own evolution, and this has led to a virtuous cycle: increased intelligence gave life greater influence over evolution, which it used to become more intelligent, which gave it greater influence still. For this reason, I think it’s better to say that life designed itself than to say it evolved by chance. The process of “design” here was more stochastic, collective, and unthinking than we normally associate with that word, but in the end, the result is the same.

This story is still uncertain. The science around autonomous robots, intelligent collectives, and epigenetics is relatively new, and changing all the time. Plenty of biologists push back hard against the idea of any sort of agency or direction in evolution, partly because they’ve been fighting against the theory of Intelligent Design for so long. Others believe we’re overdue for a new story about evolution, and are trying to find the right narrative and the evidence to back it up. I hope my research into evolutionary algorithms might be a useful contribution to that effort. If you’d like to dig deeper into this topic, Evolution in Four Dimensions is an excellent overview of the field of epigenetics.

What do you think? Did reading this make you think of life, cells, or evolution any differently? Any new ideas? Does anything I said sound wrong or misleading? Do you have other ways of looking at it? This post is more speculative than usual, and represents some of the ideas I hope to pursue in my PhD research, so I’m very interested in criticism and feedback. If you have any thoughts, please let me know in the comments!