Brain-Like Computing

(This post’s featured image is Slicing the Connectome by Alfred Anwander, a visualization of the deep structure of the human brain. It shows flowing bundles of thin fibers, color coded by their direction of travel and which parts of the brain they integrate. Used under the Creative Commons ShareAlike 4.0 International license)

Artificial Neural Networks (ANNs) are a revolutionary technology. They’re the foundation of Deep Learning (DL), the technique that’s brought us AI systems that understand images, text, and speech, beat us at chess, Go, and StarCraft, and even drive cars. These models are supposedly “brain-like.” We call them “neural” after all, and they can do many things people do, but how much are they like us, really? Some argue that in a few years we’ll scale this technology up and find that AI is smarter than we are! But if we take a closer look at just how “brain-like” these systems are, there’s good reason to doubt that. Is “bigger” all we need, or is a human mind something fundamentally different?

It’s worth starting with how brains and ANNs really are similar to each other. ANNs were inspired by observations of actual brains, both studying their anatomy under a microscope, and watching patterns of electrical activity in living subjects. Our understanding of how brains work is still very crude, but there are some general principles we can glean that seem very important. The brain is a vast network of small, relatively simple modules (neurons). Each one is connected to hundreds or even thousands of other neurons in a tangled web. These neurons sometimes “spike” with electrical activity, and when one does it often nudges its neighbors to spike, too. Information can flow, transform, and cascade as it propagates through the network. Often many neurons fire together, and it seems that the meaning they represent is actually held in the collective firing patterns across the whole network. That suggests what matters in terms of scale isn’t how many neurons there are, but how many different combinations of neurons, which is vastly more. Learning seems to involve the neurons “re-wiring” with each other, adding or removing connections (synapses) to other neurons, or changing their strength.

Computer scientists were inspired by this picture of the brain, seeing it as a sort of programmable circuit. Each neuron does just a small, simple operation, but by wiring them together into a large network you can make more complex functions. Importantly, such a network could represent nearly any function, depending on the particular way the neurons get wired up together. For this reason, ANNs are often known as “universal function approximators.” Another common way of understanding ANNs is through the math used to implement them. Each neuron does a simple operation: it takes in signals from all the synapses that feed into it, multiplies those signals by “weights” that determine how much this neuron listens to each of its neighbors, adds them up, then performs an “activation function” on the result. Often, that’s as simple as returning the final sum, or zero if that sum is negative. ANNs use large arrays of identical neurons, and the final output is basically just a math expression that does all those additions and multiplications. The synapse weights are variables (or “parameters”) in this expression, and “training” a neural network is simply finding the right values for all the variables so that the math expression resolves to the answer you want, rather than something else.
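
To make that concrete, here’s a minimal sketch of a single artificial neuron; the input signals, weights, and bias are made-up numbers for illustration:

```python
def neuron(inputs, weights, bias=0.0):
    """One artificial neuron: weight each incoming signal, sum, then apply ReLU."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return max(total, 0.0)  # ReLU: return the sum, or zero if it's negative

# Three incoming signals, each scaled by how much this neuron "listens" to that input.
output = neuron([1.0, 2.0, 4.0], [2.0, -1.0, 0.5])  # 2.0 - 2.0 + 2.0 = 2.0
```

An entire ANN is just millions of these stacked and chained together, which is why the whole thing collapses into one big math expression.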

But there are an infinite number of possible functions. If an ANN can represent any of them, how do you get it to be the function you want and not just some random garbage? To start with, you need to have a whole bunch of data. Either you have some task you want the neural network to do and correct answers you want it to learn (say, the name of what appears in an image, or a translation from one language to another), or you have many examples of something you want the network to generate more of (like text or images). Then, you can use an external process that evaluates the network, compares the results to the target, tweaks the connections between neurons to make the result a little better, and then does that over and over again. Researchers have found clever ways to make this training process as efficient and reliable as possible, but it can still be painfully slow and expensive, using vast amounts of data and computing power. Generally you never get a perfect result, so you carry on training until the outputs are “good enough” to satisfy the programmer. Then training stops. All the trainable parameters in the ANN are locked down. What you have left doesn’t learn at all, it’s just a math function that does… whatever it was configured to do.
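
The training loop described above can be sketched in a few lines. This toy example tunes a single weight against invented data (the hidden rule behind the “correct answers” is y = 3x); the learning rate and step count are arbitrary choices:

```python
# Toy training loop: find the weight w so that w * x matches the "correct answers."
# The data, learning rate, and step count are invented for illustration.
data = [(1.0, 3.0), (2.0, 6.0), (3.0, 9.0)]  # (input, target) pairs

w = 0.0  # start from an arbitrary parameter value
for step in range(200):
    for x, target in data:
        error = w * x - target  # evaluate the network, compare to the target
        w -= 0.05 * error * x   # tweak the parameter to make the result a little better
```

After a couple hundred passes, w settles at 3.0 and training stops: the parameter is locked down, and what remains is just the fixed function it was configured to be.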

This already suggests a massive difference with the human brain. For a human being, there is no “right answer” to most challenges we face. There may be serious consequences for getting the answer wrong, but this may not be obvious in the moment. This makes learning how to respond vastly harder, because we have to set our own goals, and intuit for ourselves if we got it right, now and across time. There’s no “external” training process to rewire the neurons, either. Each neuron manages its own connections, somehow knowing which ones to strengthen or weaken without any top-down picture of the network they are making or the problem they are trying to solve. There’s no “training phase” where the brain gets configured once and for all. No, we learn continually as we act, which means we adapt continuously, changing our understanding and our strategy in context (unlike ANNs). It’s even more complicated than that. Animals with brains seem to have an “offline” learning process, too; dreams play an important role in memory consolidation and mastering new skills. It’s not clear how that relates to other kinds of learning and it seems very different from the training process we use for ANNs.

Another major difference is how ANNs are organized. DL’s breakout success came from computer vision models inspired by the visual cortex. Brain researchers showed that animal vision depends on layers of neurons, arranged in a hierarchy. The low-level ones detect primitive patterns like edges of a particular orientation. Then higher layers use those edge detectors to find higher-level patterns, like shapes and objects. For a long time, computer vision systems worked this way, too (though, “vision transformers” are starting to change that). Much effort was put into making systems with more layers in order to learn more complex functions, which is where the “deep” in “Deep Learning” comes from. The hope was that if we made them much larger, they would perform much better, and so far that’s mostly been true.

But are deep, hierarchical ANNs really brain-like? They seem to be vastly smaller and simpler, actually. For one thing, it’s estimated that the human brain has 86 billion neurons with 150 trillion connections between them. By contrast, GPT-5 is estimated to have 2 to 5 trillion parameters, but this is likely not a fair comparison. Each living neuron is vastly more complicated than the simple “weighted sum” of a neuron in an ANN. We don’t fully understand what neurons do (not to mention all the other kinds of cells in the brain, which may also play critical roles in thinking and learning), but they are living cells that autonomously manage their activations and their relationships with other cells. Unlike ANNs, where each neuron produces a single value and all neurons activate at the same time, biological neurons communicate in complex ways across multiple channels with patterns that change over time, at multiple frequencies. That means each biological neuron may be doing much larger and more complex tasks than their artificial counterparts. But aside from being a few orders of magnitude bigger than an ANN, the brain also has a much more complex structure.

The outermost part of the human brain is called the neocortex, and it’s where most of our “higher thought” happens. It’s broken down into regions which each serve a specialized function, like vision, language, logic, or social interaction. But these regions don’t look very different from each other, anatomically speaking. We mostly discovered this structure with brain imaging, watching which regions become active while a subject sits in a scanner thinking about something. The neocortex is more or less uniform in its structure, but different parts are assigned different roles, and the deep structure of the brain connects them up in a very particular way. Human brains have a highly stereotyped layout, with roughly the same set of faculties interacting with each other in roughly the same way in every person. Except, of course, when something goes wrong. Brains are also very plastic, and can reorganize around damage, and adapt to patterns of use. For instance, people born blind often show activation in the visual cortex when they read braille. I like to think of the neocortex as a sort of general purpose “cognitive fabric.” Evolution, development, and learning subdivide it into modules, then compose those modules into a particular cognitive architecture that suits a species’ lifestyle. Brain cells are like an evolved platform for building brains and minds of all sorts.

If the “universal function approximator” metaphor applies to brains, then brains have vast numbers of them, arranged into a complex network, where information flows through paths that diverge and converge and loop back on each other in very particular ways. It would be like a whole computer, specialized for the purpose of generating the human mind, built from these programmable circuits. If so, then what makes our minds “human-like” is not the fine structure of the brain that inspired ANNs, but the coarse structure that determines the shape and flow of our thoughts. In nature, that design is evolved, but flexible, and it continuously adapts to meet demands. By comparison, a programmer working with ANNs chooses an architecture by intuition, it’s usually very simple, and then it never changes. Researchers are continually exploring different kinds of neural modules and ways of composing them together, but they’re all much simpler than actual brain networks, which really aren’t the inspiration any more. AI researchers have found ways to do useful work with ANNs, and don’t care whether they resemble brains or not.

But we shouldn’t commit to the “function approximator” metaphor too quickly. Some would argue the brain is nothing at all like a function! An ANN is a function. It is a math expression that describes how to turn inputs into outputs. When a programmer evaluates that expression, they get an answer, but otherwise the ANN does nothing. It’s just a pile of math. By contrast, the mind is always active, whether it’s observing the senses, controlling the body, thinking hard, or just daydreaming. In some sense, the brain has a narrow interface with the body and the rest of the world through the brain stem. If the brain implements a function, this could be where the inputs and outputs pass… except, it’s not clear where the mind ends and the body begins! The nervous system extends throughout the body, sensing, controlling, and interacting with the various organ systems, often without checking in with the brain at all. The human gut (and the microbes that live there) plays a role in cognition and can shape our mood and behavior. It’s probably not the only organ system that does that, either. The mind is deeply integrated with the body, driven by its needs, cravings, and instincts. The mind is also creative, generating its own inputs, setting goals, identifying problems to solve and strategies to solve them. This doesn’t seem very function-like at all!

By using the same low-level principle as the brain, ANNs let us automatically design functions to solve a wide range of difficult real-world problems. Often, we do this by observing humans solve some problem, and copying what they do. In short, we choose what human skills to automate, and we make functions that simulate those skills. The AI doesn’t “figure out” the answers; we have to already know the answers, and then bake them into the AI. Using many processor cores running in parallel, we spend the equivalent of hundreds of years of compute time to train our models to approximate what a human can do effortlessly by intuition. Once we pay that up front cost, then that knowledge is baked into the network, unchanging until we decide to rebuild it. In short, all the things that make the human mind open-ended, creative, and continuously adaptive are missing from ANNs. They don’t have the high-level architecture that makes human minds so robust, capable, and well-rounded, and we don’t know how to make anything like that. We have no idea what “thought” even is, let alone how to create it.

There’s no reason to think AI couldn’t become more brain-like and mind-like over time. There’s nothing magical about our biology. But it seems absurd to think that scaling up our current designs will get us there. Modern AIs are very good at mimicking human data. They appear smart, because they reflect and reshape the intelligence we feed into them. But they simply do not do all the extraordinary things our brains do. Most importantly, they do not continuously set their own goals, improve their design, or adapt to changing needs through evolution, development, and learning. No, every innovation in AI is an act of human cleverness imposed onto the machine. This is why I think we have a long way to go. What we’ve built is a toy compared to an actual human brain. We’ve successfully borrowed a few of nature’s tricks, but there’s vastly more there, and we hardly even know what we’re missing. We will need many more clever insights, and those do not come often or easily.

Do Brains Do Back Propagation?

I’ve been thinking about this article Ulrike Hahn shared with me recently (alternative source here). Apparently, I have strong opinions about why we shouldn’t say that the brain is doing something “backprop-like” when we learn!

Before we start, the key thing to know is that computation in a neural network is distributed across many nodes connected by links. To tune the behavior of the network as a whole, you need to tune each of the nodes and links, but how do you know how any one node or link contributed to the final answer? It’s complicated, and each one depends on many others. We call that “credit assignment.”

I think both brains and artificial neural networks (ANNs) need to solve the credit assignment problem. For ANNs, an algorithm called “back propagation” or just “backprop” is the industry standard solution, and it works very well. I think what brains do is different.

There are two big reasons I say this. The first issue is how work is split between nodes and links.

In ANNs, the nodes themselves are trivial, and they’re completely homogeneous across a full layer of the network, if not all the layers. Any deeper computation is about how the nodes are wired together. That is, the program is in the links (synapse weights), not the nodes.

By comparison, brain cells are both complex and diverse. We don’t know how much of the computation happens within cells vs. between them. We’re just starting to figure out what all the different kinds of cells are, but have little idea of what they’re doing. It’s clear that individual neurons do a lot, and that ensembles of cells manage each other in complex ways.

I worry saying the brain “does backprop” implies a network of trivial nodes, where tuning weight vectors is the place where learning happens. That’s likely wrong, and it obscures other possibilities.

Backprop is an algorithm I run to optimize an ANN. It needs a top-down view of the network topology and the weights of all synapses. It solves the credit-assignment problem in a clever way, usually based on the error compared to a known target. Then it simultaneously updates all the link weights in the network based on how the ANN responded as a whole. First you train your network, then you can use it, but not both at once.
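
As an illustration, here’s backprop done by hand on the smallest possible “deep” network, a two-neuron chain. Notice how every step requires a global view: the error is computed at the output, passed backwards link by link, and then all the weights are updated at once. The network, data, and learning rate are toy values:

```python
# Backprop by hand on a two-neuron chain: hidden = relu(w1 * x), output = w2 * hidden.
def relu(z):
    return max(z, 0.0)

w1, w2 = 0.5, 0.5     # the network's parameters
x, target = 1.0, 1.0  # one training example with a known right answer

for step in range(100):
    # Forward pass: run the network.
    h = relu(w1 * x)
    y = w2 * h
    # Backward pass: assign credit with the chain rule, starting from the output.
    d_y = y - target                         # gradient of the loss 0.5 * (y - target)**2
    d_w2 = d_y * h                           # how much w2 contributed to the error
    d_h = d_y * w2                           # error signal sent back along the link
    d_w1 = d_h * (x if w1 * x > 0 else 0.0)  # ReLU only passes gradient when active
    # The external training process updates every weight simultaneously.
    w2 -= 0.1 * d_w2
    w1 -= 0.1 * d_w1
```

The output converges to the target, but only because the algorithm could see the whole chain at once. That global, top-down bookkeeping is exactly what an individual brain cell doesn’t have.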

Rather than being tuned by some external actor, brain cells manage their own relationships with their neighbors. They grow, prune, and modulate their synapses, and they decide when and how to do that based on imperfect feedback, limited information, and evolved heuristics. Brains track and minimize errors, but the targets are internally generated. This is happening continuously, with fluid transitions between acting in the real world, imagining, thinking, and learning.

I’d argue what the brain does is much harder, and much more interesting.

Half Way?

(The image for this post is a photo I took of a scenic vista near my home in Burlington, VT. The foreground is a clifftop covered in a few inches of snow, with a fence a few feet from the edge. It overlooks bare trees, a school yard, and a white field around it. The skies are blue with fluffy white clouds and mountains in the distance.)

Recently, when people ask how my PhD is going, I say I’m about halfway through! Really, a PhD takes as long as it takes, but I’ve been at this for over two years now, and four to five years is typical. I’m almost done with classes, I’ve passed my qualifying exams, and I’m starting to sketch out my dissertation. So far, so good! Along the way, I’ve learned a lot about my area of research, academia, and myself. All this makes it seem like a good time to reflect on what I’ve done so far, and where I’ll go from here.

I think the hardest thing for me, in the past two years, was adjusting to life in academia. I worked in the tech industry for over a decade, and always felt I was “academia adjacent.” I was a technical expert with a college degree, and so was just about everyone I worked with. I didn’t do research myself, but I knew plenty of folks who did, currently or in the past. I never really considered graduate studies, but I heard a lot about it from people who had. It sounded tough, but rewarding. I wasn’t rigorous about it, but I tried to keep up with major trends and advances in computer science and AI. I was building systems that used some of that cutting edge technology, so I had to understand it, at least loosely! All in all, I felt pretty confident heading into grad school.

I don’t think anything I learned about academia was wrong, but it also didn’t really prepare me for the experience. The most painful part was discovering just how undignified and precarious the grad student life is. I was very pampered at Google, where engineers ran the show and gave themselves top status and many perks. As a grad student, I barely get paid, and my health insurance and benefits are inadequate. I don’t get most of the perks of being an employee (even though I work for the University) or of being a student (even though I take classes there). In fact, many systems and processes at UVM only take grad students into account as an afterthought, if at all, even though we are an R1 university (top tier in research!), and grad students do most of that work. Many of my peers struggle just to keep food on the table, while they do the most intellectually challenging work of their lives. Luckily my savings make life relatively comfortable, but it was a shock to switch into this lifestyle, to see how we treat the upcoming generation of scientists, and how unnecessarily difficult we make their lives. And that was before Trump started slashing funding. To be clear, UVM is all I know, but I don’t get the impression it’s a bad school, or anything. I think this is sadly typical for America.

I also had no idea what getting a PhD was really about. I knew I’d have to take classes and write a dissertation. I thought I understood the nature of science pretty well. But actually learning how to be a productive scientist has taken a lot of getting used to! The main difference from engineering is that nobody can tell you what to do. Sure, there are open problems that need to be solved, and projects that the professors here need help with. But a lot of being a good scientist is finding the particular ideas that motivate you, or the places where your particular skills will serve you best. To a large extent, you have to explore and see where you fit in. There’s also very little in the way of “best practices.” Science is, by definition, on the very edge of what we know how to do. My advisors teach me the established tools of the trade, but they would never tell me how to solve a problem, because they know we need new techniques and perspectives. More than anything, they want to coax me into finding my own way of investigating the world, rather than simply showing me how they did it. That’s much harder!

The other big challenge of academia is “the literature.” The driving force behind science for the past several decades has been “publish or perish.” Every single scientist is writing new papers all the time, and the sheer volume of text is astonishing. It’s also barely organized. The way you navigate it is just keywords, citations, and (most importantly!) word of mouth. Unfortunately, this makes finding what you need very tricky, until you become an insider. Contributing to science is also very challenging, because everything must be expressed in terms of the existing literature. This was very painful at first, since I had something to say, but I was struggling to express it. I was very ignorant of what came before, and most of what I found was so different, I couldn’t see how my idea would fit in! I’ve come to appreciate this more, though. The biggest challenge in science is communication. If I’ve got a big, complicated idea, how do I get people to understand it? How can I convince them to care? I have to start with what they know! Tedious as it is, this is the only way to narrow down my idea, state it precisely and concisely, justify it with evidence, and make it feel relevant and useful to other researchers.

This leads me to the knowledge itself. When I first set out, I wanted to write a book. I was frustrated by the common understanding of what intelligence is and how evolution works. I had another perspective that seemed better, I was baffled that nobody was talking about it, and I felt compelled to share it with the world! I still might do that, but since my first blog post three years ago I’ve read over 50 books (!!!), and that has changed my perspective. The big surprise is that people have written about these ideas. Many people, for many years! It’s just, these books aren’t very popular. Some of them are very technical or obscure. To get at the good ones, I needed to find the right keywords, references, and recommendations. Again, it’s hard to know unless you’re in the know. Frustrating. But at least I’m not crazy, and I’m not alone. In fact, these ideas have been coming up more often, with more justification, in more accessible places, all the time! Better stories of intelligence are getting out, just very slowly, as I suppose is the way in science.

Reading all those books and digging into science papers has been enlightening. It’s helped me understand the things I care about more precisely, taught me many different ways to study them and talk about them, and shown me exciting new evidence. It’s also shown me what I don’t like. I’ve actually gained a lot from reading works I hate. Often it’s because they’re talking about something I care about deeply, in ways very much like my own, but with some subtle yet all-important difference that gets under my skin. I get all worked up, and I know precisely why. There’s a specific detail I can point to that matters to me, and they got it wrong! That’s priceless. There’s no better way to find out what I need to take a stand on. All this reading has also helped me understand what aspects of my research passion have already been well covered, and which ones remain neglected. This is particularly important for directing my attention as a researcher.

I just went back to read the first blog post I wrote about what I wanted to research. It’s funny how much this has changed! It reminds me that it was being a manager at a software company—making intelligent systems that mix human and algorithmic components—that led to this perspective on what intelligence is. But the way I talked about laying out a system architecture for all intelligence on this planet seems outrageously ambitious! It’s still a lovely long-term goal, but I’ve had to narrow my focus dramatically for my PhD, and I think that’s a good thing. I should focus on what matters most, to make clear and strong claims that will have an impact. Certainly, when it comes to AI, evolution is a part of the story that is sorely neglected. It’s an area of research that’s less competitive, and one with plenty of untapped potential, I think. Evolution is also central to how I’ve come to understand people and society. Despite all our intelligence, sophistication, and technology, humans are also social apes, whose behavior is powerfully shaped by our bodies, our instincts, and the early days of our species. I find human intelligence only makes sense when I think of it as constrained by our evolutionary history from one end, and our evolved culture from the other. We live in that intersection.

I care a lot about evolution, and I have since I was very young! It’s fascinating to me, and poorly understood, especially in the general public. What we’ve learned about how life works in the past 25 years is astonishing, and I believe we need a major rewrite to the story of evolution. I’m not alone in this. There’s a growing movement to look at evolution in a new light, one where organisms play an active role in shaping their environments, each other, and their evolution. The main struggle is figuring out what the new story should be, how it differs from the old story, and what those differences mean. This is where I feel like I have an interesting role to play. The field of biology is working to flesh out this story, gradually, and with plenty of conflict. But they don’t really appreciate that giving organisms agency changes the computational properties of evolution as a process. It’s a different kind of search than we realize, more like a search for better ways of searching than a search for greater fitness, and it ought to be much more powerful because of that. Meanwhile, the field of evolutionary computation has been struggling to overcome major limitations, and discover algorithms that work more like nature does. But those researchers tend not to study biology, and have unwittingly become very stuck on the old “replicator” model of evolution (i.e., Dawkins’ Selfish Gene), which I believe is holding the field back.

This leads me to my dissertation, or how I imagine it today. I’m sure this will change plenty as I develop the ideas, run more experiments, and see what they have to show me. But I think I have a great opportunity to fill this gap between fields. I want to build a model of evolution where the evolving organisms can observe the world, have some awareness of their fitness, and use that to influence their own process of evolution. I want the “rules” of evolution to come from them, rather than designing them myself and building them into the algorithm, as computer science researchers have always done in the past. My hope is this will produce evolutionary searches that are both more open-ended and more efficient. They will be less restricted by what I think the right answer should look like, and more able to find solutions I wouldn’t think of. They’ll be able to search more strategically, and adapt the search as they evolve, balancing the trade-off between seeking new opportunities and fine tuning the solutions they’ve already found. If I can do this, then I hope AI researchers, philosophers of evolution, and biologists might better appreciate the significance of this change in perspective, and be more likely to embrace it in their own work.
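
To give a flavor of what “rules that come from the organisms” might mean, here’s a minimal sketch of one classic mechanism in this spirit: self-adaptive mutation rates, where each individual carries and inherits its own mutation rate, so the search strategy itself evolves. This is a stand-in illustration borrowed from evolution strategies, not my actual planned model, and the fitness function and parameters are placeholders:

```python
import random

# Each organism carries a genome AND its own mutation rate. The rate is inherited
# and perturbed, so the "rules" of variation evolve with the population instead of
# being fixed by the programmer. Toy fitness: maximize -x**2 (the best genome is 0).
def fitness(genome):
    return -genome ** 2

random.seed(1)  # deterministic run for illustration
population = [(random.uniform(-10, 10), 1.0) for _ in range(50)]  # (genome, rate)

for generation in range(100):
    # Selection: keep the fitter half as parents.
    population.sort(key=lambda ind: fitness(ind[0]), reverse=True)
    parents = population[: len(population) // 2]
    # Reproduction: each child inherits, and slightly perturbs, its parent's rate.
    children = []
    for genome, rate in parents:
        child_rate = max(rate * random.lognormvariate(0.0, 0.2), 1e-6)
        children.append((genome + random.gauss(0.0, child_rate), child_rate))
    population = parents + children

best_genome, best_rate = max(population, key=lambda ind: fitness(ind[0]))
```

Near the optimum, selection favors lineages whose rates have shrunk, so the population tunes its own balance between exploring and fine-tuning; the goal of my research is to push this kind of self-direction much further.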

I’m very excited to pursue this. I care a lot about the concept, I have very specific ideas of how to implement it, and enough theoretical knowledge from my studies that this result feels plausible. On the other hand, it’s also frustrating. There’s so much more I’d like to say than will fit into a PhD! What I’m planning isn’t even a complete model of how I think about single-celled organisms. This work is all inspired by proto-cells, ancestors even older than bacteria. There’s yet another layer of complexity I’d like to add, just to account for the role of DNA in this story. And, going back to my original vision, it just builds up from there with bodies and brains, ecosystems and cultures. I may never get that far. In fact, I think I may focus purely on evolution, since there’s so much to say about it. But starting small is the only option, especially since I need results to convince other people to pay for this research. Sadly, the United States has very little funding to pursue science for its own sake.

PhD programs are generally about some young student helping a seasoned researcher with their work to gain experience. It’s unusual (but not unheard of!) for a grad student to come with their own project in mind. Unfortunately, that makes funding more difficult for me. My advisors appreciate my ideas and want to support me, but they have funding for their projects, not mine. We’ve had some success finding the intersection of their goals and my own, so we can advance both with one project. But this semester I have no research funding, and am working as a teaching assistant instead. That’s okay, but it means I have a bit less time for research. I’m applying for both a fellowship and a grant from NASA. If I get those, it would give me the freedom to pursue my research full time, and apply it to the design of antennas, since that’s the opportunity that arose. I’m cautiously optimistic about this one, in part because I have support from a researcher at JPL, who wants to see this project happen! But it’s never a sure thing, especially considering the massive budget cuts NASA has suffered recently.

I’m also thinking a lot about the future beyond my PhD. Originally, I set out to pursue my research and study machine learning, with the expectation that I could always just go back to the tech industry with some new skills, if this science thing doesn’t work out. I never imagined that the tech industry would change so dramatically in just a few years! The advent of LLMs has completely changed how we think of “AI,” and not for the better. I recently read The AI Con, which is a great book explaining the dangerous and unethical way this technology has been developed and marketed. Having read it, I no longer want to call my work “AI.” That has become a marketing term more than anything else, used to sell technology that devalues and replaces human creativity, craftsmanship, and labor. It’s become the overwhelming focus of a tech industry bent on extracting value from their customers, rather than serving them or benefiting society. This is not what my research is about, and I want no part in it. Hopefully the “AI” fad will pass, but I’m concerned the tech industry and software engineering work will never be the same, and I’m wary of which corporate projects I might join.

Becoming a professor is still on the table, but I’m very concerned about that route, too. The culture of academia was broken even before I got here. “Publish or perish” is a very bad incentive for researchers, and it has been leading to a crisis in academic publishing. The industry exploits researchers, students, and reviewers; the science has become “safer” but less creative and diverse; and quantity gets prioritized over quality. The whole academic system is also badly unjust, with toxic power dynamics built-in. Grad students are overworked and underpaid, but in some ways professors have it harder. Their benefits are only marginally better, and the pace and quality of work they’re expected to churn out is often totally unrealistic! On top of this, the recent funding cuts mean that chasing grants is becoming an ever-growing part of a professor’s job, that there’s more pressure to work on “high-value” research rather than pure science, and more of the money comes from industry partners with ulterior motives.

I wouldn’t take just any academic job. It would have to be the right opportunity, at the right kind of school. Similarly, I’m no longer interested in a “standard tech job” or a role as an “AI engineer” at any of the big companies you might recognize by name. I’ve found I really love evolutionary algorithms and parallel computing. I’d like to find a job doing that, if possible, but it’s a narrow specialty. This means I’m shopping around for very specific opportunities, and luckily I am finding some. The ALife and Diverse Intelligences communities have become important to me, and they’ve given me several leads and connections. I’m certainly not the only one chasing these sorts of dreams right now, and there are folks (in industry, academia, and government) who believe this will be important for the future. So, I’m cautiously optimistic, despite all the depressing challenges these days, but still unsure where this path is leading. I intend to play things by ear, as I have from the beginning.

In any case, it seems like I’ll be at this for another year or two or three. More and more, my focus will be research and the specific ideas I came here to pursue. I’ll keep reading books and posting reviews on Goodreads. I’ll keep writing blog posts. I imagine they’ll become more focused on evolutionary search processes, but I’ll try to mix things up with more posts about nature and I’m sure I’ll have more to say on “AI” as it continues to be such a large and growing part of our lives. Hopefully I’ll have more adventures to share with you, attending conferences and academic events around the globe. And I’ll keep looking for job opportunities, figuring out what my path forward looks like. I appreciate you coming along for the journey, I hope you’ll keep coming back to see what I’ve been thinking about, and that you’ll join the conversation in the comments section.

Evolution and Cognition

(This month’s post features a photo of Stentor coeruleus by Jasper Nance on Flickr. Under a microscope, this little guy looks translucent white, with green and brown bubbles trapped inside of it. It’s roughly cone shaped, rooted to some debris at its narrow base, bent at a ninety degree angle as its wide, cilia-lined mouth quests about for food)

Sometimes I like to talk about all sorts of living things as “intelligent,” even ones that are very simple and reflexive in their behaviors. That’s because I tend to think of the process of evolution itself, and everything it creates, as intelligent. In some sense, I feel like the simplest of creatures are particularly intelligent in their design, because they don’t waste any effort on “thinking” or complexity when they don’t have to. But this isn’t how most people use the word “intelligent,” which is a problem. It’s not a particularly well defined word, either, so I think my view deserves some clarification. How are evolution and cognition related to each other? That’s something I’ve been thinking about a lot lately.

Perhaps it’s best to start with some of the simplest life on the planet: bacteria. They are basically just little robots, controlled by a simple “computer” made out of proteins and genes (here’s a great book on the subject). They certainly don’t “think” like we do, but they do perceive their environment and respond to it appropriately. They seem to have a model of their tiny selves, and their relationship with their environment, that lets them make good decisions from moment to moment, to survive in a hostile world with intermittent food, and many poisons and predators about. They aren’t particularly creative, but their evolved programming can handle a very wide range of contingencies, responding in appropriate ways to just about anything that their ancestors have encountered in the past.

Even very simple computer programs can learn. It’s just a matter of observing when two experiences tend to go together, and recording that association for later. This is all that “AI” is doing, and it can be shockingly effective. There’s no reason that a “computer” made of proteins and genes couldn’t do the same thing. But simple cells like bacteria don’t need to do this. They reproduce so quickly, they can simply evolve instead. The difference is surprisingly subtle. Bacteria use a relatively fixed “program” for themselves. Unlike a mind, it isn’t made to reorganize itself to conform with experiences in the world, it just does what it does. However, this “fixed” program is constantly being subjected to random edits (mutations), so it does change. The main difference is that genes adapt to the environment through death and survival, rather than through learning within a lifetime. Bacteria are incredibly prolific. E. coli can easily fill a single square millimeter with over 10 billion cells, and a population can double in twenty minutes. As long as some of the random edits are good, and they spread quickly, bacteria can adapt in real time to their changing environment, without learning.
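That kind of associative learning really is tiny in code. Here’s a minimal sketch (my own toy illustration, not any particular model from the literature): it just tallies how often two events occur together versus apart, and predicts one from the other.

```python
from collections import Counter

def make_associator():
    """Minimal associative learner: tally how often two events occur
    together vs. apart, then predict one from the other."""
    together, alone = Counter(), Counter()

    def observe(a_happened, b_happened):
        # record one experience: did A and B co-occur?
        if a_happened and b_happened:
            together['ab'] += 1
        elif a_happened:
            alone['a'] += 1
        elif b_happened:
            alone['b'] += 1

    def predict_b_given_a():
        # of all the times A was seen, how often did B come with it?
        times_a = together['ab'] + alone['a']
        return together['ab'] / times_a if times_a else 0.0

    return observe, predict_b_given_a
```

After seeing, say, lightning followed by thunder three times out of four, this learner expects thunder about 75% of the time. Everything fancier is elaboration on that basic loop.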

What’s interesting is that more complex cells do seem to learn. Animals like you and me are built out of vast numbers of eukaryotic cells, but our ancestors were actually single-celled eukaryotes, and there are still plenty of those thriving out in the wild today. These cells are much larger and more complex than bacteria. In the past, they swallowed up other cells that went on living inside of them. So, they’re like little clusters of several cells in one, working together as a team. Thanks to one of these symbiotic partners, mitochondria, eukaryotic cells have a much larger energy budget than bacteria. Their genomes also have much more complex structure and regulatory mechanisms. They can actually re-write the instructions in their genes after reading them, using a process called splicing. We’re still not completely sure that these single cells do what we’d call “learning,” because we only recently thought to look for that, and it’s rather difficult to prove definitively. But there’s good reason to suspect they can.

There’s a single-celled eukaryote called Stentor coeruleus that is shaped like a tiny trumpet. It’s actually considered to be huge for a single cell, at a whopping two millimeters long. That may not seem like much, but it’s 1,000 times bigger than an E. coli bacterium. This little guy is very sensitive to vibrations, which it uses to detect predators. When it feels threatened, it contracts into a tiny ball to make itself harder to catch. What’s interesting, though, is that when it feels the same vibration over and over again and doesn’t get eaten, it learns to ignore vibrations at that frequency (but not other frequencies). That learning is flexible, because if the vibrations get stronger (as if the predator were getting closer, perhaps), it can change its mind and go back to treating them as a threat. This sort of behavior is pretty simple, but I would consider it “intelligent” in a minimal sense. It has a non-trivial model of the world that it updates in real time without having to evolve to do it.

As organisms get larger and more complex, they also become slower. They move more slowly and they reproduce more slowly. They also tend to have larger, more complex genomes which are much harder to successfully edit with random mutations. As this happens, life needs to find ways to keep responding quickly, and one solution is to invest more in cognition. Even single celled organisms like Stentor seem to do this. Larger organisms like plants have even more complex cognition. Plants may not seem very intelligent, but they can learn, behave strategically, and even communicate. They interpret clues over a season to guess when it’s safe to grow, they maneuver around obstacles and each other to get access to light, and warn each other when pests show up. This isn’t thoughtful, exactly, but neither is it simply hard-wired. There’s flexibility, context sensitivity, memory. Real, non-trivial computation. More complex organisms, like animals, have even evolved their own specialized “computers” built out of cells: brains. They are much more intelligent than plants, in one sense, because their behavior is so much more flexible and dynamic. They really can think, which for most people seems to be the gold standard for intelligence.

The powerful thing about investing in cognition is that it makes evolution easier. An intelligent organism doesn’t need to evolve exactly the right behavior for a situation, it just needs to be “close enough” from birth that learning can take it the rest of the way. For instance, most animals have to learn how to walk. That means when an animal is born a little different, say with legs that are shorter than usual, it can figure out how to work with that. In fact, if having short legs turns out to be useful, they might do better than their peers, even if they have a body type never seen before in their family history. If a species has to re-learn the same behavior in every generation in order to survive, then natural selection will favor individuals that learn that skill more easily and reliably. For instance, baby deer learn to walk very quickly, seemingly because they are born with some innate clues for how to do this, and an urgent desire to do so. In this way, behaviors that were at first creative solutions to novel problems can slowly become reliable programs encoded in the genes. This sort of dynamic is sometimes called “genetic assimilation” or “the Baldwin Effect,” and it’s one of the key ways that life uses its evolved intelligence to shape the process of evolution to make it more efficient.
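There’s a classic 1987 simulation by Hinton and Nowlan that demonstrates exactly this dynamic. Here’s a rough sketch in that spirit (the parameter values and implementation details are my own simplifications): each gene is either innately right, innately wrong, or left open to lifetime learning, and selection favors individuals whose genes are “close enough” from birth that learning can finish the job.

```python
import random

def baldwin_demo(genes=12, pop=200, gens=30, trials=100, seed=3):
    """Hinton & Nowlan-style sketch of the Baldwin effect. Each gene is
    '1' (innately right), '0' (innately wrong), or '?' (settled by
    trial-and-error learning during the organism's lifetime)."""
    rng = random.Random(seed)

    def fitness(ind):
        if '0' in ind:
            return 1.0  # an innately wrong gene can't be learned around
        q = sum(1 for g in ind if g == '?')
        for t in range(trials):  # lifetime of random learning trials
            if all(rng.random() < 0.5 for _ in range(q)):
                # found the full solution; finding it sooner is better
                return 1.0 + (genes - 1) * (trials - t) / trials
        return 1.0

    popn = [[rng.choice('10??') for _ in range(genes)] for _ in range(pop)]
    zeros_start = sum(ind.count('0') for ind in popn)
    for _ in range(gens):
        fits = [fitness(ind) for ind in popn]
        total = sum(fits)

        def parent():  # fitness-proportionate (roulette) selection
            r = rng.random() * total
            for ind, f in zip(popn, fits):
                r -= f
                if r <= 0:
                    return ind
            return popn[-1]

        new_pop = []
        for _ in range(pop):
            a, b = parent(), parent()
            cut = rng.randrange(1, genes)  # one-point crossover
            new_pop.append(a[:cut] + b[cut:])
        popn = new_pop

    zeros_end = sum(ind.count('0') for ind in popn)
    return zeros_start, zeros_end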

This is why I like to think of evolution and cognition as continuous with one another. They are different things, but they are both kinds of learning, just happening on different time scales, and there is flow between them. Flexible cognition can make evolution much easier, and evolution can generate new opportunities for cognition to work with. You might say that organisms like bacteria, which are little more than fully-automated evolved mechanisms, aren’t intelligent because their behaviors are rigid and can’t adapt without evolution. But perhaps an evolving population of bacteria is intelligent? It can be quite flexible and dynamic, adapting to dramatic changes in the environment in minutes. This is why they can be so difficult to control, and often very challenging for our immune system (which is very intelligent and flexible) to keep up with.

Organisms like plants rely a lot on evolved mechanisms, but they also have some minimal real-time learning and flexible behavior, so I would consider even a single plant to be intelligent. Animal species like insects, reptiles, and mammals shift the balance even further toward real-time cognition. In a sense, every organism is precisely as smart as it needs to be to live its lifestyle. If it can be successful and thrive without expensive, complicated, biological computation that can learn in real time, that’s a good “design.” I consider that to be intelligent, even if the organism itself doesn’t seem so intelligent. Every organism evolved from a single common ancestor, and has had just as much time to adapt. Some have become extremely good at being extremely simple, with precisely honed, fully automatic routines controlled by genetics that have stayed stable for hundreds of millions of years. Others have less refined innate behaviors, and make up for it with flexible learning and cognition. Neither option is better, and evolution explores the full range of that spectrum, finding a huge variety of strategies to survive and thrive in a complex world!

Are Viruses Alive?

Are viruses alive?

It’s a paradox, but one that disappears with a small change in perspective.

The problem is that viruses seem alive. They infect us, then manipulate our bodies in elaborate (and dangerous) ways, to reproduce and spread. They evolve. They seem to be intelligent, and have goals. Yet, they are little more than packets of genetic material. They can’t do anything without hijacking the mechanisms of a living cell first.

So are viruses alive? The paradox arises from thinking of an object as either alive or not. This raises all sorts of problems, because a living thing is actually a process that continually rebuilds itself. The physical stuff comes and goes, and is not in itself alive.

The solution is to stop talking about “life” and switch to “living”. A virus is living when it is part of a living system. This isn’t just wordplay. Life is a process, and it is fundamentally collective and interconnected. We are living. Earth is living. But no single object here is.

ALife Paper and Presentation

The ALife conference has officially released all the papers published there, and videos of most of the event. I presented my paper about using environmental factors to induce selection pressure in an evolving population. This is a formal scientific paper intended for a technical audience, but as always I try to make it as easy to read as possible. I also gave a 15-minute talk (plus Q&A) summarizing this paper, which might be the easiest way to learn the details of what I did and what I found. Just sharing in case anyone’s interested in delving a little deeper.

Happy holidays, everyone!

Status Update: ALife and Japan

(This post’s image is a photo I took of the gardens at Myoshinji Temple. In the foreground on the right is a stone lantern with dark green moss clinging to it. To the left is a pond with a small island, a bamboo raft, and some lily pads, reflecting the blue sky above. In the distance, the sun colors a mossy hillside, some trees, and a small building in golden light.)

My wife and I just got back from Japan, and what an adventure it was! The conference was eye-opening for me, useful, and a lot of fun, but exploring Kyoto was where I had some of my favorite moments. In any case, I figured I’d share a few reflections from the trip.

I do love traveling the world. It’s fascinating to see different ways of life, both in terms of cultures and ecosystems. It helps to show just how much of life as we know it is sorta arbitrary, and could easily have turned out differently. It also shows the universals in life and human nature, the themes that come through again and again wherever you go. There’s so much unique beauty in every place that is never fully captured in photos and stories. You have to see it yourself. That said, busy airports and 12+ hour flights are hell on earth. The airline industry has become so unreliable lately that I feel like pain is inevitable in a trip like this. For instance, we got delayed by half a day or more on both legs of our trip. So, while I loved being there, getting there and back was awful. I think it was worth it, but the difficulty of travel is definitely a limiting factor for me.

Since we arrived late, I had to hit the conference first thing the next morning, heavily jet-lagged. It was an incredibly intense week jam packed with research talks, demos, panel conversations, and more. Unfortunately the event was plagued with technical difficulties and logistical problems, but somehow the team pulled off a very successful conference nonetheless. So, a big kudos to them! I really enjoyed many of the talks, and my presentation was a big hit. I received many good questions and compliments, and even won Best Student Paper, which was a shock! I got to see many great presentations from peers I know, total strangers, and big names in the field who I admire and respect. Also plenty I didn’t like, but I suppose that’s part of the fun.

I say that because ALife as a field is incredibly diverse. Really, it’s an umbrella for all kinds of research. The unifying theme is computational work inspired by biology, or “life as it could be” as the catch phrase goes. But, that includes people doing math to propose universal laws of biology across the universe. It includes people who claim to have spontaneously generated true life in a computer program. It includes companies selling lifelike robots, designed using methods that are not lifelike at all. It includes people making abstract models of biological systems to better understand life in the real world. And it includes artists using generative AI for collaborative live DJ performances. This is frustrating, because there are parts that I care about deeply and feel are done really well, and other parts that seem silly, or that I actively dislike and wish were done differently or not at all. I feel like the sheer diversity is much of the point, though, and I appreciate the chaotic mess of it all. Somehow, that feels appropriate for a group studying “life as it could be.”

I feel like I belong in the ALife community. I still don’t know much about it, and I can already see many problems and difficulties there. But this is a place where work like mine is the norm, fully accepted, and appreciated as worthy and interesting. This is a community that “gets it.” I’ve already found a lot of like-minded people, too. I met some lovely folks from Emily Dolson’s lab at MSU, who I might potentially collaborate with. I had some great conversations and arguments. I also started to engage with the Emerging Researchers in ALife community, and found multiple folks pursuing ideas very close to my own, many of whom are working alone with little guidance, much as I was before I took the leap to get my PhD. I’m very excited to find my peers, and feel compelled to help out and foster a community where these ideas can thrive, develop, and maybe find a larger audience.

Although I tried to drink deeply of the conference, I also made plenty of time to escape and have some moments on my own and with my wife to explore Kyoto, which is an amazing place. We had so many delicious meals. Sushi. Ramen. Udon. Tempura. Izakaya. Shabu shabu. Obanzai. Kushiyaki. The quality was always very high, and there were so many delightful local specialties that are hard to find in the USA (especially Vermont 🙄). The city itself is beautiful, easy to navigate, and clean. I think perhaps the most striking thing is how often I’d be walking down a perfectly normal looking modern street and then suddenly I’d run into a historic temple or shrine, right in the middle of everything. These could be quite lovely, and often included gardens of stunning beauty. I love how the practice of Buddhism and Shintoism in Japan is so bound up with natural beauty and quiet contemplation. I find Christian cathedrals quite beautiful and peaceful, too, but they feel so much more constructed and enclosed. They remind me of man’s “triumph” over nature, while a Zen garden makes me feel like a guest on God’s planet.

Unfortunately enjoying the natural beauty and historic sites was a little challenging, just because of all the tourists! Kyoto is a very popular destination, and the notable sites are just swarming with people for most of the day. My wife made a point of setting out very early in order to beat the crowds, and seeking out some of the less well known venues. I think that worked out very well for us. We got some private, quiet moments that were truly magical. And speaking of sites that are under-appreciated, I thought the Kyoto Museum of Crafts and Design was a real highlight of the trip! It’s very small, but they have some wonderful exhibits of traditional Japanese crafts and their modern evolution. What makes this place really special, though, is the emphasis on teaching how things are made! There are hands-on exhibits, examples of objects at various stages of their construction, and videos showing the artisans and how they work. This gave me a much deeper appreciation, both for the artworks, and for the amazing traditions that produced them.

All in all, it was an extraordinary trip. I learned a lot, had some great personal interactions, had some great experiences, and overall had a very good time. I feel lucky that I got to go, and I hope to attend many more conferences like this in the future, including venues besides ALife. Most notably, I can’t wait to go to a conference on evolutionary computation, like GECCO. But I see that I have a place in the ALife community, and I suspect it will become an important place for me going forward.

They haven’t yet published the individual papers from the conference or the presentation videos. I’ll share those when they’re ready. But the full proceedings (including my paper) are now available online, for folks who want to dig in, or are just curious to see the full diversity of what was presented.

Ramps and Ratchets: A New Paper!

(The image for this post is a screenshot of the video summary of the Baym et al. MEGA-Plate experiment, available here: https://youtu.be/plVk4NVIUh8. It shows a population of bacteria gradually evolving antibiotic resistance as they migrate through a specially designed habitat. The bacteria appear white on a rectangular black background. You can see the bacteria repeatedly breaching into new territory where they fan out, spread, and completely fill the available space.)

In a few days, I’ll be visiting Kyoto to attend the 2025 ALife conference! I couldn’t be more excited for the trip, especially since my wife gets to come along. “ALife” is short for “Artificial Life,” and it encompasses all sorts of research into “life as it could be.” Mostly, this means lifelike computer simulations that help us better understand what life actually is, but part of the fun is there’s a great deal of interpretation, creativity, and art mixed in with the science. Japan is a hub for this sort of research, so I’m sure the conference will be quite an experience! Today I’m sharing the story of my experiment, light on science jargon. But soon the full paper will be published, along with a 15-minute video summary, which I will also share here. So, if you’re curious to follow along, please choose your own adventure.

This project started with a grant. My PhD advisor, Josh Bongard, specializes in evolving the designs for robots. He got funding to explore endosymbiosis: that is, making robots that operate within other robots. I found this topic fascinating, because working and evolving together is a powerful way that organisms in nature change their fate in dramatic ways. For instance: humans depend on our gut microbes to digest the foods we eat, and we would never have been able to power our big brains and live in so many diverse habitats without that collaboration.

I drew inspiration from one of my favorite biology studies, the MEGA-Plate experiment. I highly recommend you check out this two-minute video summary, but I’ll give a quick overview here. They wanted to see how new traits evolve and spread in bacterial colonies over space and time. So, they made a big rectangular habitat for E. coli, and filled the space with bands of gradually increasing amounts of antibiotic. Initially, these bands would block the bacteria from spreading, but eventually one would evolve resistance, spread into new territory, and fill it with a new lineage of slightly antibiotic resistant bacteria. This happened over and over again, gradually increasing the amount of antibiotic the bacteria could tolerate, until they reached the middle and could thrive in a brutally high dose of antibiotics that would have killed their ancestors instantly.

I love that experiment because it shows us something about antibiotics (use a big dose, and keep taking it, because too little is just an invitation for bacteria to evolve resistance), but it also lets us watch evolution unfold in real time, and shows a way to coax life into evolving some new ability, just by manipulating the environment. It got me thinking: maybe there’s something similar going on inside each of us? The gut is a habitat for bacteria, but it’s a hostile environment, one that’s meant to be selective, letting “good” bacteria survive while keeping “bad” ones out. Perhaps our gut evolved to shape how populations of bacteria spread and evolve there, guiding them towards more mutually beneficial lifestyles? I wondered, could I evolve an environment that promotes the evolution of a population inside? Could I reproduce the MEGA-Plate experiment, and maybe even evolve a better environment than what those guys designed by hand?

Actually bringing that idea to life required a lot of technical decisions. First, I had to decide what these “bacteria” would actually do. To simplify things, I chose not to simulate actual bacteria, but instead play a simple number guessing game called HIFF. Each “bacterium” is really just a number, 64 bits long when represented in binary. I kept a population of thousands of these numbers, and in each generation, I would change them one or two bits at a time, which would change the score they got in the HIFF game. It’s a hard game to win, since there are only two right answers out of 18,446,744,073,709,551,616 possible 64-bit numbers. Guesses that are closer to correct get higher scores, so the population will gradually evolve towards the right answers, but the way the scores improve is designed to be very noisy and misleading! You frequently have to get worse for a while on your way to getting better, which makes it nearly impossible for a simple Evolutionary Algorithm (EA) to solve HIFF. This seemed like a good challenge, and it was also very practical: numbers are easy for computers to work with, so I could have many thousands of them and still get results fast.
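The post doesn’t spell out how HIFF scoring works, so here’s a sketch of the standard Hierarchical If-and-only-If function it describes: any block of bits that all agree earns points equal to its size, at every level of a binary hierarchy, which is why the only two perfect answers are all-zeros and all-ones.

```python
def hiff(bits):
    """Score a bit list under Hierarchical If-and-only-If (HIFF).
    A block is 'solved' when all its bits agree; every solved block
    contributes its size, at every level of a binary hierarchy."""
    n = len(bits)
    assert n > 0 and n & (n - 1) == 0, "length must be a power of two"
    score, size, blocks = 0, 1, list(bits)
    while blocks:
        # every solved block at the current level earns its size
        score += size * sum(1 for b in blocks if b is not None)
        # merge pairs: a parent block is solved only if both halves
        # are solved and agree (None marks an unsolved block)
        blocks = [a if a is not None and a == b else None
                  for a, b in zip(blocks[0::2], blocks[1::2])]
        size *= 2
    return score
```

For 64 bits the best possible score is 448. The deception comes from the hierarchy: a string that’s half zeros and half ones scores a very respectable 384, but reaching 448 from there means flipping 32 bits, passing through much worse scores along the way.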

Next, I had to figure out how to put an EA into a spatial environment. Lots of people have tried that before, and in many different ways, but they almost always design the environment by hand! I couldn’t find any examples of someone evolving an environment like this before, which meant my idea was new, but also that I was on my own figuring out how to do it. I turned to a favorite tool of mine: Compositional Pattern Producing Networks, or CPPNs. They can make cool, organic-looking patterns with lines, curves, and smooth gradients, and that seemed like just what I wanted. Instead of laying down different amounts of antibiotics, I would set a minimum HIFF score needed to survive at every point on the map. Then I’d evolve those maps, keeping the ones that led the population growing inside them to higher HIFF scores.
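To give a feel for what a CPPN is, here’s a hand-wired sketch (the node types and weights below are mine, purely for illustration; a real CPPN evolves both its topology and its weights): it’s essentially a small network of patterned math functions composed over spatial coordinates.

```python
import math

def threshold_map(x, y):
    """A tiny fixed CPPN: smooth, patterned functions composed over
    (x, y) coordinates. Hand-picked here just to show the style of
    pattern this kind of network produces."""
    stripes = math.sin(3.0 * x)                  # periodic node: bands
    bump = math.exp(-(2.0 * y) ** 2)             # gaussian node: symmetry
    out = math.tanh(1.2 * stripes + 0.9 * bump)  # mixing node
    return (out + 1.0) / 2.0                     # squash into [0, 1]

# Sample a grid of survival thresholds, scaled to HIFF's 0..448 range.
grid = [[round(448 * threshold_map(x / 8 - 1, y / 8 - 1))
         for x in range(17)]
        for y in range(17)]
```

Sampling the function over a grid like this produces smooth bands and blobs: an organic-looking map of survival thresholds, rather than the hand-drawn stripes of the original MEGA-Plate.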

Of course, I needed a control, too. I wanted to study how the varying difficulty levels across the environment affected evolution, so one obvious comparison was a flat environment, with no minimum score at all. I also tried a version of the MEGA-Plate experiment design, since that seemed to work so well in real life. It took a lot of tinkering to get it working, and frustratingly it only seemed to work under just the right conditions, but eventually I got results that looked like the bacteria in that video. Even more exciting, I could evolve environments to do the same thing.

The concept worked! But, how? And why did it need to be set up in such a particular way? It felt like only a partial success. What did it mean, and why should anyone care? It wasn’t clear at first, but I decided to stick with it. I read a lot of past research about the factors that might be at play here, drawing on both computer science and biology. I also spent a long time staring at the results and tweaking things to see what would happen.

The clue that really made things click for me was when I did a “hyperparameter sweep.” Originally, I found that the MEGA-Plate copycat beat out the flat environment, but only when I configured the EA juuuuust right. In other cases, the flat environment did just as well or even a bit better. Claiming “success” felt a bit like cheating. So, I decided to try all the possible settings, to see precisely where it did better. Turns out, the key factor was selection pressure. When I made it so that numbers with higher scores could reproduce faster, then evolution worked just fine in any of my environments. But when I didn’t do that, when I let any number survive as long as it was “good enough,” with no particular edge over its neighbors, then the more challenging environments paradoxically produced better HIFF scores. After a few more tests, I convinced myself: somehow the spatial structure of the environment was inducing selection pressure, without me having to explicitly program it into the algorithm.

I think the way this works is fascinating. When I first lay down the “bacteria” into my simulated MEGA-Plate environment, most of them die right away! They simply can’t survive in the toxic middle area, and so they die out and leave empty space behind. The ones lucky enough to be placed near the edges, though, they do fine. They keep evolving, gradually getting better HIFF scores. Eventually, one of them gets a high enough score that it can survive in the next space over, so it migrates there and finds itself alone in a big open space. Since it has no competition, it gets to reproduce like crazy, completely filling the new territory with its children, each one just a little bit different from itself, because of mutations. Except, the minimum score here is a bit higher than before. Any children that get a much lower score can’t survive here; only the ones that are about the same or better can. That means the environment works like a ratchet, pushing the population to ever higher scores as they gradually climb the ramp of increasing difficulty.

A diagram that shows how fitness varies across space in this experiment. The threshold for survival is illustrated as a series of three steps. On the first step (a), there are many dots ranging from dark purple to slightly red, indicating a low fitness population. At (b), a single dot is placed between steps, with arrows indicating migration and producing many children. On the next step (c) we see more dots, ranging from purple to red and even orange, indicating higher fitness. But all the darker dots are crossed out, indicating that they do not survive. At (d), we see another dot moving onto the next higher step, producing children with even higher fitness (red and orange dots).
A figure illustrating the ratchet effect. (a) When the threshold for survival is low, we get lots of variation. (b) A gradual ramp up allows numbers with higher scores to migrate into empty space and make many copies. (c) Exploration resumes, but more mutations are fatal here, creating a ratchet effect where fitness can only increase. (d) The process repeats!
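The ratchet dynamic is easy to caricature in code. This toy one-dimensional version (my own illustration, not the actual algorithm from the paper) replaces HIFF with a bare integer fitness, and shows how survival thresholds alone, with no explicit selection pressure, can push fitness up the ramp:

```python
import random

def ramp_ratchet(steps=8, step=2, cap=40, gens=500, seed=1):
    """Toy 1-D ramp of survival thresholds: cell i kills anything whose
    fitness is below i * step. Fitness is a bare integer (a stand-in
    for a real score like HIFF); mutation nudges it by -1, 0, or +1."""
    rng = random.Random(seed)
    cells = [[] for _ in range(steps)]
    cells[0] = [0]  # a lone founder at the easy end of the ramp
    for _ in range(gens):
        for i in range(steps):
            if not cells[i]:
                continue
            # breed: refill this cell with mutated copies of survivors
            brood = [rng.choice(cells[i]) + rng.choice((-1, 0, 1))
                     for _ in range(cap)]
            # the ratchet: only children meeting the local threshold live
            cells[i] = [f for f in brood if f >= i * step]
            # a fit-enough child may colonize the next, harder cell
            if i + 1 < steps and not cells[i + 1]:
                pioneers = [f for f in cells[i] if f >= (i + 1) * step]
                if pioneers:
                    cells[i + 1] = [pioneers[0]]
    # highest fitness anywhere on the ramp at the end of the run
    return max((max(c) for c in cells if c), default=0)
```

Because every survivor in cell i scores at least i * step, fitness inside a colonized cell can wander freely but never fall below the local threshold; each successful migration permanently raises that floor, which is the whole trick.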

Looking at it another way, my algorithm starts out by broadly exploring the space of possible numbers, without concern for which ones are better or worse. But, when one does get a higher score, that one gets rewarded with access to more territory and more children. That’s like starting up a whole new evolutionary search, beginning from a better baseline. When that happens, I go back to searching very broadly, just with a higher standard for survival. Any number is allowed, just so long as it isn’t worse. The spatial structure of the environment and its gradual ramp up in difficulty is what decides how strong the selection pressure is, and what manages this back-and-forth process of “exploring” the search space very broadly and “exploiting” the numbers that got higher scores. This “explore / exploit trade-off” is a big topic of interest in the field of EAs, and something we typically have to tune by hand, but I got an EA to automatically solve this problem for me!

It’s still not clear how to make this practically useful. HIFF is just a number guessing game that I used as a stand-in for some more interesting task. This kind of EA ought to be good at open-ended exploration of complex problems, which is why it does well at a deceptive problem, like HIFF. But HIFF only has two best answers! This approach might be much more useful and interesting in a more open-ended problem area, where there are countless different ways to succeed. Perhaps something like a robot, or a game-playing AI, or a search for many good answers, rather than the best answer. But I won’t know until I actually try!

Overall, I’m quite happy with this experiment. I was able to reproduce results from a favorite biology paper on my computer, and show how it relates to the kinds of search and optimization problems that Computer Scientists care about. I also got to apply the CS perspective to a Biology experiment, analyzing this “range expansion” phenomenon in terms of problem solving and search processes. It’s a modest start, but this is exactly the kind of work I set out to do, and I had a lot of fun doing it. It also earned me and my partner, Anna Rees, an A+ on our final project for our Evolutionary Computation class, a trip to Japan, and even a best paper nomination at ALife 2025! So that’s pretty sweet. I hope to do more like this in the future.

Status Update: DISI and ALife

(This post’s photo was taken by me at the castle ruins in St. Andrews, Scotland. This is the front of the castle, as seen from the inside. It’s basically just one wall, with the ones on either side collapsed. The wall itself has a main gate to the right, passages on a second story running along the whole wall with windows to the outside, and a crumbling tower to the left. The inside floor is short trimmed grass, with a walkway recently added to minimize erosion, made of small parallel slats set flush to the ground. This was shot on a beautiful sunny day, with a lovely blue sky and just a few wispy clouds.)

It’s been a while since I gave a status update. Things are going really well! I’m mostly done with the course requirements for my PhD. Just two more classes to take, and they can be whatever I want, so I might do some independent studies to blend my research with my learning. I just got back from a summer institute, which was a new and fulfilling experience for me! I had a great time, learned a lot, and met some truly incredible people. I also just found out that the paper I submitted to the 2025 Artificial Life conference was accepted! Let me share a bit more about all that…

This summer I got to attend the Diverse Intelligences Summer Institute (disi.org), in St. Andrews, Scotland. It was a three-week immersive experience, full of lectures, events, projects, and networking. They brought together researchers from around the globe, studying intelligence from different perspectives. There were folks who study bats, birds, primates, or dogs in the field or in the lab, trying to test their cognitive abilities or unravel the complexity of their communications. There were philosophers, trying to wrap their heads around AI, language, society and more. There were folks studying gut feelings, and psychedelic therapies in humans. Folks doing science communication and wearable art. Folks studying how animals evolve, how software evolves, and how to build evolving cybersecurity systems modeled on the immune system. And so much more! It was truly incredible to meet so many brilliant and interesting people, to explore all our different specialties, and to see how they blend into one another.

St. Andrews itself is a lovely venue, with beautiful beaches, medieval ruins, and one of the oldest universities in the world. It’s also home to the oldest golf course in the world, which makes it a sort of Mecca for golfers, though that didn’t mean a whole lot to me. I was there in July, when the weather was 60° F (15° C), with a lovely sea breeze, and lots of sunshine, which is pretty unusual for Scotland. The town center was cute, with plenty of pubs, shops, and restaurants, but I also got to explore nature quite a bit, walking along the coastline, hiking in the woods, and visiting the lovely botanical gardens. This was only my second visit to the UK, and the first time where I wasn’t just stuck in a big city. It was a very different biome than I’m used to, so it was fun to see the fine, dark sands, and the dramatic cliff sides and castle ruins overlooking the beach. I also got to see flowers and bees and all sorts of other plants and critters that were new to me.

As part of DISI, I did a group project with four other people. We were thinking about how animals pick up on lots of different cues in their environment to figure out what’s going on. We drew inspiration from a paper on cultural intelligence, where they talked about “calendar plants.” A good example of that might be some berry bushes that fruit at about the same time that the deer are ready to hunt in a particular locale. We were wondering, when is it better to look for a sign that’s easier to spot but indirect and unreliable (like whether the berries are ripe), rather than paying a high cost to be sure about what you really care about (like scouting for deer)? We made a mathematical model that could work for a wide variety of systems like this, solved some simple versions of that model by hand, then used an evolutionary algorithm to validate our results and scale to larger, more complex systems. This is a bit of a departure from my usual research, but I thought it was a neat idea, I really enjoyed getting to work with biologists and philosophers on a project, and I had fun coding up a simulation of evolution in an unpredictable environment. I got to use a probabilistic programming language (probmods.org), which was new to me and hopefully useful in the future!
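To make the question concrete, here’s a tiny expected-value sketch of the trade-off. This is my own illustration with made-up parameter names, not our group’s actual model: it just compares acting on a cheap but imperfect cue against paying to observe the true state directly.

```python
def compare_strategies(p_state, reliability, payoff, miss_cost, check_cost):
    """Expected payoffs for a binary 'is the opportunity there?' decision.

    p_state:     prior probability the opportunity (the deer) is present
    reliability: probability the cheap cue (ripe berries) matches the state
    payoff:      reward for acting when the opportunity is real
    miss_cost:   cost of acting when it isn't
    check_cost:  price of scouting the true state directly
    """
    # Strategy 1: pay to check, then act only when it's worthwhile.
    direct = p_state * payoff - check_cost
    # Strategy 2: act whenever the cue says to, for free.
    p_cue = p_state * reliability + (1 - p_state) * (1 - reliability)
    p_good = p_state * reliability / p_cue  # P(opportunity | cue fired)
    follow_cue = p_cue * (p_good * payoff - (1 - p_good) * miss_cost)
    return direct, follow_cue
```

With a perfectly reliable cue, the free strategy wins by exactly the checking cost; as reliability falls toward chance, paying for direct observation takes over.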

I really appreciate what the folks at DISI are doing. Fundamentally, I think intelligence is a diverse, messy, and tangled thing, so the only way to study it properly is with a mix of many different perspectives. It’s incredibly useful to see what problems other fields are investigating, and what tools they use to do that. Some ideas transfer really well between fields, and there are great opportunities for collaborations at the intersections between them. It’s fantastic to get a chance to do that sort of cross-pollination, and to meet so many folks who care about related topics and who are eager for new perspectives. I made fast friends with several researchers and science communicators who I hope to stay in touch with and maybe collaborate with in the future! I’m really grateful I got to participate, and I’d highly recommend it to anyone who feels like it might be a good fit for them.

I’m also very excited about getting a paper accepted at ALife. I’m very proud of that paper, and will be sharing a longer write-up about it here once it’s published. I’m also excited to attend the conference! Artificial Life is a field of research that studies “life as it might have been.” Basically, it’s a catch-all for a very wide variety of experiments where someone builds a model (usually a computer program) inspired by biology, just to see what will happen. The hope is that, even though life itself is incredibly complex, perhaps simple simulations that recreate just a bit of its mysterious qualities could help us understand life better. The kinds of experiments featured there range from small and simple tests of specific hypotheses about animal behavior, to vast and beautiful simulated worlds that are incredibly lifelike and nearly as mysterious as the real thing. This year’s conference is in Kyoto, Japan, which is exciting for two reasons. First, Japan is kind of the center of the ALife world. They have a lot of ALife specialists, and a lot of great work comes from there. Also: My wife Christina and I have always wanted to visit Japan, and this is a perfect opportunity to do it together!

The ALife conference will be in October, and I imagine the paper will be published around that time. Hopefully my travels will go smoothly, and after the event I can share my experience as well as my work with you here.

Building Bodies

(The image for this post is a human embryo after six days of development: a blastula. The structure needed to construct a body is just starting to take form. Before this point, it’s mostly just a lump of undifferentiated cells. Image credit: Jenny Nichols)

Humans live in the macroscopic world. We’re used to interacting with other people and animals that are about the same size as us. Of course, everything we see as an “individual” is actually made up of unimaginably vast numbers of cells and molecules in constant churning motion, but to us they just look like solid, physical objects. This is normal to us, but it would be a totally alien experience to our most primitive ancestors. We are descended from individual cells that lived fully autonomous lives in a microscopic world. It’s truly extraordinary and weird to think that they would band together by the trillions to form human bodies, and yet that’s what they do. How did that come to be?

Our ancient ancestors were protists, single cells with complex lives. Each one had to find food, shelter, and resources. Just as we do, they continuously decided how to live, trying to survive and thrive in a chaotic world, and to set their children up for success. Sometimes they lived in communities and ecosystems, building networks of mutually supportive relationships. They’d form vast colonies of closely related cells, generally working together, communicating, maybe even specializing and subdividing tasks. And yet, each cell was still responsible for its own well-being. There was no top-down coordination. Each individual decided for themselves what to do. Cooperation would naturally arise when it was useful, and break down again the moment incentives changed.

As we now know, cells can do extraordinary things when they work together on a shared plan. In today’s animals, each cell has an established role to play. Collectively they build complex, macroscopic bodies that observe, think, move, and reshape the world at a vastly larger scale. But try telling that to a single cell! For two billion years, they were honed by evolution to fight tenaciously for their own survival as free-living individuals, and to prioritize their well-being and their offspring. Collectively, the cells of a macroscopic individual may fare better than they would alone, but not all of them. Think of the cells that line your stomach, whose life’s purpose is to get dissolved by acid so that other cells don’t have to. How does evolution convince a cell to do that voluntarily?

More importantly, most animal cells are evolutionary dead-ends. We have specialized sex cells whose only job is to produce children. Every other cell in the body is denied that privilege. Many of those cells still divide occasionally, but all their offspring will die out when the body does. They have no way to influence the next generation of their species. This is a huge contradiction, because those cells evolved for reproduction. For billions of years, the cells who contributed the most to the next generation were favored. Perhaps the most fundamental fact about life is that it proliferates, rapidly filling up every corner of the planet. How do you reverse this core instinctual drive in every living cell?

It’s quite likely that multicellularity evolved and fell apart many times. Discovering this trick is very hard, for the reasons outlined above and more. What’s even harder, though, is holding onto this innovation once it’s found. In order to maintain an animal body, every cell must compromise its well-being for the good of the whole, for a lifetime. If any cell decided to cheat—to live as a rebel among conformists, and selfishly exploit the body environment—it would have a distinct and powerful advantage. It would outperform the others, undermine their hard work, proliferate much faster, maybe even start a whole new successful family line as single cells, feasting on the remnants of the would-be body. Mutations that broke multicellularity must have been common!

Yet, today we live in a multicellular world. How did we make that work? There’s an interesting theory to explain this, and it explains something else strange, too: every single multicellular organism has sex. Some reproduce asexually, too, but the mechanism of sexual reproduction is universal. This is not at all the case for single cells. They exchange genetic material with other cells, but they don’t depend on others to reproduce. They do that entirely on their own, using whatever genes they have at the time. So, perhaps sex and multicellularity are linked? Perhaps sex came first, and is part of what made multicellularity possible? There are a few good reasons to think this, but for me the most compelling is that sexual reproduction creates mothers, and puts them in control.

Generally speaking, the process of building an animal body is decentralized. There is a sort of top-down coordination, using patterns of hormones and bio-electricity to shape a coherent whole, but this has to be generated by the cells themselves. Every cell autonomously figures out where to be and what role to play, via coordination with its neighbors—but not at first! At the very beginning of the process, the embryo’s genes are switched off and the mother’s genes direct the first stages of growth. In an egg or a womb, the mother also has full control over the environment in which development happens, which further shapes the process even once the embryo takes over. This gives Mom the power to set things up just right such that multicellularity is the only viable outcome.

The single cells of an early embryo aren’t a body yet. They default to the independent lifestyle they had for two billion years before the advent of multicellularity. The cells have to work together to build an organism with unified awareness and agency, but that means at first there isn’t one. There’s nobody to coordinate the cells, and no “greater good” to serve. Why should they work together? That’s where the mother comes in. She takes full control and forces the cells into the right starting configuration. She shapes the growing embryo and assigns roles to each cell by tweaking their gene expression. The child cells only get to take over once the general layout of the embryo is established, and the growth process is already underway. Once an individual emerges from the collective behavior of those cells, it can carry on the rest of the work of building a body.

From that point on, there’s very little temptation to cheat. The clockwork of the body has been set into motion. Each cell’s needs are provided for, and their freedom is restrained by the bonds they’ve already formed with their neighbors. The reproductive cells have already been isolated, making it impossible for rogue cells to influence the next generation. They could still rebel and do their own thing. That’s what cancer is. But it’s much harder than playing along, and carries serious risks. The immune system actively hunts for rebellious cells, and kills them on sight! It’s also a futile exercise. A cancer may grow and thrive and proliferate for years, but this is self-defeating. When the cancer cells undermine the host body, they destroy their own environment and drive themselves to extinction.

Once cells are locked into this multicellular arrangement, something interesting happens to the selective forces that shape their evolution. As always, the reproductive cells carry subtle genetic differences from their parent, tiny mutations that serve as evolutionary experiments. Changes in an egg cell’s genes affect the behavior of every cell that follows, and thus the body and behavior of the child as a whole. But a new selective filter has been established. Only variations that are beneficial to both the cells and the individual are allowed through. Mutations that damage either level of the system produce unfit individuals who often won’t even develop to maturity. This creates a pressure for cells to become more cooperative building blocks, and for bodies to become more supportive homes for cells.

The takeaway here is that each living cell is a creative, intelligent, and autonomous survival machine. That makes multicellularity a tricky balancing act that’s hard to discover, and even harder to maintain. It seems likely that sexual reproduction and motherhood were essential ingredients to make this possible. In a sense, multicellularity isn’t passed on genetically, it’s passed on physically; each multicellular organism has to assemble the next generation from individual cells before that child can carry on building itself. The cells don’t know how to do it on their own, and they don’t care to. Yet, once a mother forces them into the shape of a growing embryo, it’s in each cell’s best interest to play along. For about 1.5 billion years, evolution has worked hard to maintain this tenuous arrangement by aligning incentives between cells and bodies. Both systems coevolve to complement each other, to ensure the multicellular compromise is the wise choice going forward, for everyone involved.

This is a speculative story, but well supported by evidence. It’s largely inspired by The Evolution of Individuality by Leo Buss, a very technical book about cellular and developmental biology, which goes into vastly more depth and detail than I did here about the complex lives of single cells and the struggle to align incentives across multiple scales. In particular, that book explores the very different ways plants and fungi have overcome the challenges of building macroscopic bodies. It’s a fantastic reminder that what animals do is not “normal,” it’s just what we’re used to. We’re actually quite strange and exceptional, as life goes. Most living things are single cells, and even when it comes to bodies, our way of doing it is just one of many. It’s a brilliant and eye-opening book, but a challenging read, so I wanted to write a more accessible summary, to share these ideas more broadly.