Halfway?

(The image for this post is a photo I took of a scenic vista near my home in Burlington, VT. The foreground is a clifftop covered in a few inches of snow, with a fence a few feet from the edge. It overlooks bare trees, a school yard, and a white field around it. The skies are blue with fluffy white clouds and mountains in the distance.)

Recently, when people ask how my PhD is going, I say I’m about halfway through! Really, a PhD takes as long as it takes, but I’ve been at this for over two years now, and four to five years is typical. I’m almost done with classes, I’ve passed my qualifying exams, and I’m starting to sketch out my dissertation. So far, so good! Along the way, I’ve learned a lot about my area of research, academia, and myself. All this makes it seem like a good time to reflect on what I’ve done so far, and where I’ll go from here.

I think the hardest thing for me, in the past two years, was adjusting to life in academia. I worked in the tech industry for over a decade, and always felt I was “academia adjacent.” I was a technical expert with a college degree, and so was just about everyone I worked with. I didn’t do research myself, but I knew plenty of folks who did, currently or in the past. I never really considered graduate studies, but I heard a lot about it from people who had. It sounded tough, but rewarding. I wasn’t rigorous about it, but I tried to keep up with major trends and advances in computer science and AI. I was building systems that used some of that cutting edge technology, so I had to understand it, at least loosely! All in all, I felt pretty confident heading into grad school.

I don’t think anything I learned about academia was wrong, but it also didn’t really prepare me for the experience. The most painful part was discovering just how undignified and precarious the grad student life is. I was very pampered at Google, where engineers ran the show and gave themselves top status and many perks. As a grad student, I barely get paid, and my health insurance and benefits are inadequate. I don’t get most of the perks of being an employee (even though I work for the University) or of being a student (even though I take classes there). In fact, many systems and processes at UVM only take grad students into account as an afterthought, if at all, even though we are an R1 university (top tier in research!), and grad students do most of that work. Many of my peers struggle just to keep food on the table, while they do the most intellectually challenging work of their lives. Luckily my savings make life relatively comfortable, but it was a shock to switch into this lifestyle, to see how we treat the upcoming generation of scientists, and how unnecessarily difficult we make their lives. And that was before Trump started slashing funding. To be clear, UVM is all I know, but I don’t get the impression it’s a bad school, or anything. I think this is sadly typical for America.

I also had no idea what getting a PhD was really about. I knew I’d have to take classes and write a dissertation. I thought I understood the nature of science pretty well. But actually learning how to be a productive scientist has taken a lot of getting used to! The main difference from engineering is that nobody can tell you what to do. Sure, there are open problems that need to be solved, and projects that the professors here need help with. But a lot of being a good scientist is finding the particular ideas that motivate you, or the places where your particular skills will serve you best. To a large extent, you have to explore and see where you fit in. There’s also very little in the way of “best practices.” Science is, by definition, on the very edge of what we know how to do. My advisors teach me the established tools of the trade, but they would never tell me how to solve a problem, because they know we need new techniques and perspectives. More than anything, they want to coax me into finding my own way of investigating the world, rather than simply showing me how they did it. That’s much harder!

The other big challenge of academia is “the literature.” The driving force behind science for the past several decades has been “publish or perish.” Every single scientist is writing new papers all the time, and the sheer volume of text is astonishing. It’s also barely organized. The way you navigate it is just keywords, citations, and (most importantly!) word of mouth. Unfortunately, this makes finding what you need very tricky, until you become an insider. Contributing to science is also very challenging, because everything must be expressed in terms of the existing literature. This was very painful at first, since I had something to say, but I was struggling to express it. I was very ignorant of what came before, and most of what I found was so different, I couldn’t see how my idea would fit in! I’ve come to appreciate this more, though. The biggest challenge in science is communication. If I’ve got a big, complicated idea, how do I get people to understand it? How can I convince them to care? I have to start with what they know! Tedious as it is, this is the only way to narrow down my idea, state it precisely and concisely, justify it with evidence, and make it feel relevant and useful to other researchers.

This leads me to the knowledge itself. When I first set out, I wanted to write a book. I was frustrated by the common understanding of what intelligence is and how evolution works. I had another perspective that seemed better, I was baffled that nobody was talking about it, and I felt compelled to share it with the world! I still might do that, but since my first blog post three years ago I’ve read over 50 books (!!!), and that has changed my perspective. The big surprise is that people have written about these ideas. Many people, for many years! It’s just, these books aren’t very popular. Some of them are very technical or obscure. To get at the good ones, I needed to find the right keywords, references, and recommendations. Again, it’s hard to know unless you’re in the know. Frustrating. But at least I’m not crazy, and I’m not alone. In fact, these ideas have been coming up more often, with more justification, in more accessible places, all the time! Better stories of intelligence are getting out, just very slowly, as I suppose is the way in science.

Reading all those books and digging into science papers has been enlightening. It’s helped me understand the things I care about more precisely, taught me many different ways to study them and talk about them, and shown me exciting new evidence. It’s also shown me what I don’t like. I’ve actually gained a lot from reading works I hate. Often it’s because they’re talking about something I care about deeply, in ways very much like my own, but with some subtle yet all-important difference that gets under my skin. I get all worked up, and I know precisely why. There’s a specific detail I can point to that matters to me, and they got it wrong! That’s priceless. There’s no better way to find out what I need to take a stand on. All this reading has also helped me understand what aspects of my research passion have already been well covered, and which ones remain neglected. This is particularly important for directing my attention as a researcher.

I just went back to read the first blog post I wrote about what I wanted to research. It’s funny how much this has changed! It reminds me that it was being a manager at a software company—making intelligent systems that mix human and algorithmic components—that led to this perspective on what intelligence is. But the way I talked about laying out a system architecture for all intelligence on this planet seems outrageously ambitious! It’s still a lovely long-term goal, but I’ve had to narrow my focus dramatically for my PhD, and I think that’s a good thing. I should focus on what matters most, to make clear and strong claims that will have an impact. Certainly, when it comes to AI, evolution is a part of the story that is sorely neglected. It’s an area of research that’s less competitive, and one with plenty of untapped potential, I think. Evolution is also central to how I’ve come to understand people and society. Despite all our intelligence, sophistication, and technology, humans are also social apes, whose behavior is powerfully shaped by our bodies, our instincts, and the early days of our species. I find human intelligence only makes sense when I think of it as constrained by our evolutionary history from one end, and our evolved culture from the other. We live in that intersection.

I care a lot about evolution, and I have since I was very young! It’s fascinating to me, and poorly understood, especially in the general public. What we’ve learned in the past 25 years about how life works is astonishing, and I believe we need a major rewrite to the story of evolution. I’m not alone in this. There’s a growing movement to look at evolution in a new light, one where organisms play an active role in shaping their environments, each other, and their evolution. The main struggle is figuring out what the new story should be, how it differs from the old story, and what those differences mean. This is where I feel like I have an interesting role to play. The field of biology is working to flesh out this story, gradually, and with plenty of conflict. But they don’t really appreciate that giving organisms agency changes the computational properties of evolution as a process. It’s a different kind of search than we realize, more like a search for better ways of searching than a search for greater fitness, and it ought to be much more powerful because of that. Meanwhile, the field of evolutionary computation has been struggling to overcome major limitations, and discover algorithms that work more like nature does. But those researchers tend not to study biology, and have unwittingly become very stuck on the old “replicator” model of evolution (i.e., Dawkins’ Selfish Gene), which I believe is holding the field back.

This leads me to my dissertation, or how I imagine it today. I’m sure this will change plenty as I develop the ideas, run more experiments, and see what they have to show me. But I think I have a great opportunity to fill this gap between fields. I want to build a model of evolution where the evolving organisms can observe the world, have some awareness of their fitness, and use that to influence their own process of evolution. I want the “rules” of evolution to come from them, rather than designing them myself and building them into the algorithm, as computer science researchers have always done in the past. My hope is this will produce evolutionary searches that are both more open-ended and more efficient. They will be less restricted by what I think the right answer should look like, and more able to find solutions I wouldn’t think of. They’ll be able to search more strategically, and adapt the search as they evolve, balancing the trade-off between seeking new opportunities and fine-tuning the solutions they’ve already found. If I can do this, then I hope AI researchers, philosophers of evolution, and biologists might better appreciate the significance of this change in perspective, and be more likely to embrace it in their own work.
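
One established technique that gestures in this direction is self-adaptive mutation from evolution strategies, where each individual carries its own mutation step size, and that step size evolves right alongside the solution. To be clear, this is not my model, just a minimal sketch of what it looks like for a search rule to live inside the evolving individuals rather than in the algorithm; the objective function and all the parameters here are toy choices:

```python
import math
import random

random.seed(0)

def fitness(x):
    # Toy objective: maximize -x^2 (peak at x = 0). Stands in for any landscape.
    return -x * x

# Each individual is a pair: (solution, its own mutation step size).
pop = [(random.uniform(-10, 10), 1.0) for _ in range(20)]

for gen in range(100):
    offspring = []
    for x, sigma in pop:
        # The step size mutates first (log-normal self-adaptation), so the
        # "rule" governing variation is carried and evolved by the individual
        # instead of being fixed by the experimenter.
        new_sigma = sigma * math.exp(0.2 * random.gauss(0, 1))
        new_x = x + new_sigma * random.gauss(0, 1)
        offspring.append((new_x, new_sigma))
    # Truncation selection over parents + offspring.
    pop = sorted(pop + offspring, key=lambda ind: fitness(ind[0]), reverse=True)[:20]

best_x, best_sigma = pop[0]
print(best_x, best_sigma)
```

The appeal is that individuals with step sizes well matched to the landscape tend to produce fitter offspring, so the search strategy itself adapts as evolution proceeds. My dissertation aims to push this kind of self-direction much further.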

I’m very excited to pursue this. I care a lot about the concept, I have very specific ideas of how to implement it, and enough theoretical knowledge from my studies that this result feels plausible. On the other hand, it’s also frustrating. There’s so much more I’d like to say than will fit into a PhD! What I’m planning isn’t even a complete model of how I think about single-celled organisms. This work is all inspired by proto-cells, ancestors even older than bacteria. There’s yet another layer of complexity I’d like to add, just to account for the role of DNA in this story. And, going back to my original vision, it just builds up from there with bodies and brains, ecosystems and cultures. I may never get that far. In fact, I think I may focus purely on evolution, since there’s so much to say about it. But starting small is the only option, especially since I need results to convince other people to pay for this research. Sadly, the United States has very little funding to pursue science for its own sake.

PhD programs are generally about some young student helping a seasoned researcher with their work to gain experience. It’s unusual (but not unheard of!) for a grad student to come with their own project in mind. Unfortunately, that makes funding more difficult for me. My advisors appreciate my ideas and want to support me, but they have funding for their projects, not mine. We’ve had some success finding the intersection of their goals and my own, so we can advance both with one project. But this semester I have no research funding, and am working as a teaching assistant instead. That’s okay, but it means I have a bit less time for research. I’m applying for both a fellowship and a grant from NASA. If I get those, it would give me the freedom to pursue my research full time, and apply it to the design of antennas, since that’s the opportunity that arose. I’m cautiously optimistic about this one, in part because I have support from a researcher at JPL, who wants to see this project happen! But it’s never a sure thing, especially considering the massive budget cuts NASA has suffered recently.

I’m also thinking a lot about the future beyond my PhD. Originally, I set out to pursue my research and study machine learning, with the expectation that I could always just go back to the tech industry with some new skills, if this science thing doesn’t work out. I never imagined that the tech industry would change so dramatically in just a few years! The advent of LLMs has completely changed how we think of “AI,” and not for the better. I recently read The AI Con, which is a great book explaining the dangerous and unethical way this technology has been developed and marketed. Having read it, I no longer want to call my work “AI.” That has become a marketing term more than anything else, used to sell technology that devalues and replaces human creativity, craftsmanship, and labor. It’s become the overwhelming focus of a tech industry bent on extracting value from their customers, rather than serving them or benefiting society. This is not what my research is about, and I want no part in it. Hopefully the “AI” fad will pass, but I’m concerned the tech industry and software engineering work will never be the same, and I’m wary of which corporate projects I might join.

Becoming a professor is still on the table, but I’m very concerned about that route, too. The culture of academia was broken even before I got here. “Publish or perish” is a very bad incentive for researchers, and it has been leading to a crisis in academic publishing. The industry exploits researchers, students, and reviewers; the science has become “safer” but less creative and diverse; and quantity gets prioritized over quality. The whole academic system is also badly unjust, with toxic power dynamics built in. Grad students are overworked and underpaid, but in some ways professors have it harder. Their benefits are only marginally better, and the pace and quality of work they’re expected to churn out is often totally unrealistic! On top of this, the recent funding cuts mean that chasing grants is becoming an ever-growing part of a professor’s job, that there’s more pressure to work on “high-value” research rather than pure science, and that more of the money comes from industry partners with ulterior motives.

I wouldn’t take just any academic job. It would have to be the right opportunity, at the right kind of school. Similarly, I’m no longer interested in a “standard tech job” or a role as an “AI engineer” at any of the big companies you might recognize by name. I’ve found I really love evolutionary algorithms and parallel computing. I’d like to find a job doing that, if possible, but it’s a narrow specialty. This means I’m shopping around for very specific opportunities, and luckily I am finding some. The ALife and Diverse Intelligences communities have become important to me, and they’ve given me several leads and connections. I’m certainly not the only one chasing these sorts of dreams right now, and there are folks (in industry, academia, and government) who believe this will be important for the future. So, I’m cautiously optimistic, despite all the depressing challenges these days, but still unsure where this path is leading. I intend to play things by ear, as I have from the beginning.

In any case, it seems like I’ll be at this for another year or two or three. More and more, my focus will be research and the specific ideas I came here to pursue. I’ll keep reading books and posting reviews on Goodreads. I’ll keep writing blog posts. I imagine they’ll become more focused on evolutionary search processes, but I’ll try to mix things up with more posts about nature, and I’m sure I’ll have more to say on “AI” as it continues to be such a large and growing part of our lives. Hopefully I’ll have more adventures to share with you, attending conferences and academic events around the globe. And I’ll keep looking for job opportunities, figuring out what my path forward looks like. I appreciate you coming along for the journey, and I hope you’ll keep coming back to see what I’ve been thinking about and join the conversation in the comments section.

ALife Paper and Presentation

The ALife conference has officially released all the papers published there, and videos of most of the event. I presented my paper about using environmental factors to induce selection pressure in an evolving population. This is a formal scientific paper intended for a technical audience, but as always I try to make it as easy to read as possible. I also gave a 15-minute talk (plus Q&A) summarizing this paper, which might be the easiest way to learn the details of what I did and what I found. Just sharing in case anyone’s interested in delving a little deeper.

Happy holidays, everyone!

Status Update: ALife and Japan

(This post’s image is a photo I took of the gardens at Myoshinji Temple. In the foreground on the right is a stone lantern with dark green moss clinging to it. To the left is a pond with a small island, a bamboo raft, and some lily pads, reflecting the blue sky above. In the distance, the sun colors a mossy hillside, some trees, and a small building in golden light.)

My wife and I just got back from Japan, and what an adventure it was! The conference was eye-opening for me, useful, and a lot of fun, but exploring Kyoto was where I had some of my favorite moments. In any case, I figured I’d share a few reflections from the trip.

I do love traveling the world. It’s fascinating to see different ways of life, both in terms of cultures and ecosystems. It helps to show just how much of life as we know it is sorta arbitrary, and could easily have turned out different. It also shows the universals in life and human nature, the themes that come through again and again wherever you go. There’s so much unique beauty in every place that is never fully captured in photos and stories. You have to see it yourself. That said, busy airports and 12+ hour flights are hell on earth. The airline industry has become so unreliable lately I feel like pain is inevitable in a trip like this. For instance, we got delayed by half a day or more on both legs of our trip. So, while I loved being there, getting there and back was awful. I think it was worth it, but the difficulty of travel is definitely a limiting factor for me.

Since we arrived late, I had to hit the conference first thing the next morning, heavily jet-lagged. It was an incredibly intense week jam packed with research talks, demos, panel conversations, and more. Unfortunately the event was plagued with technical difficulties and logistical problems, but somehow the team pulled off a very successful conference nonetheless. So, a big kudos to them! I really enjoyed many of the talks, and my presentation was a big hit. I received many good questions and compliments, and even won Best Student Paper, which was a shock! I got to see many great presentations from peers I know, total strangers, and big names in the field who I admire and respect. Also plenty I didn’t like, but I suppose that’s part of the fun.

I say that because ALife as a field is incredibly diverse. Really, it’s an umbrella for all kinds of research. The unifying theme is computational work inspired by biology, or “life as it could be” as the catch phrase goes. But, that includes people doing math to propose universal laws of biology across the universe. It includes people who claim to have spontaneously generated true life in a computer program. It includes companies selling lifelike robots, designed using methods that are not lifelike at all. It includes people making abstract models of biological systems to better understand life in the real world. And it includes artists using generative AI for collaborative live DJ performances. This is frustrating, because there are parts that I care about deeply and feel are done really well, and other parts that seem silly, or that I actively dislike and wish were done differently or not at all. I feel like the sheer diversity is much of the point, though, and I appreciate the chaotic mess of it all. Somehow, that feels appropriate for a group studying “life as it could be.”

I feel like I belong in the ALife community. I still don’t know much about it, and I can already see many problems and difficulties there. But this is a place where work like mine is the norm, fully accepted, and appreciated as worthy and interesting. This is a community that “gets it.” I’ve already found a lot of like-minded people, too. I met some lovely folks from Emily Dolson’s lab at MSU, who I might potentially collaborate with. I had some great conversations and arguments. I also started to engage with the Emerging Researchers in ALife community, and found multiple folks pursuing ideas very close to my own, many of whom are working alone with little guidance, much as I was before I took the leap to get my PhD. I’m very excited to find my peers, and feel compelled to help out and foster a community where these ideas can thrive, develop, and maybe find a larger audience.

Although I tried to drink deeply of the conference, I also made plenty of time to escape and have some moments on my own and with my wife to explore Kyoto, which is an amazing place. We had so many delicious meals. Sushi. Ramen. Udon. Tempura. Izakaya. Shabu shabu. Obanzai. Kushiyaki. The quality was always very high, and there were so many delightful local specialties that are hard to find in the USA (especially Vermont 🙄). The city itself is beautiful, easy to navigate, and clean. I think perhaps the most striking thing is how often I’d be walking down a perfectly normal looking modern street and then suddenly I’d run into a historic temple or shrine, right in the middle of everything. These could be quite lovely, and often included gardens of stunning beauty. I love how the practice of Buddhism and Shintoism in Japan is so bound up with natural beauty and quiet contemplation. I find Christian cathedrals quite beautiful and peaceful, too, but they feel so much more constructed and enclosed. They remind me of man’s “triumph” over nature, while a Zen garden makes me feel like a guest on God’s planet.

Unfortunately enjoying the natural beauty and historic sites was a little challenging, just because of all the tourists! Kyoto is a very popular destination, and the notable sites are just swarming with people for most of the day. My wife made a point of setting out very early in order to beat the crowds, and seeking out some of the less well known venues. I think that worked out very well for us. We got some private, quiet moments that were truly magical. And speaking of sites that are under-appreciated, I thought the Kyoto Museum of Crafts and Design was a real highlight of the trip! It’s very small, but they have some wonderful exhibits of traditional Japanese crafts and their modern evolution. What makes this place really special, though, is the emphasis on teaching how things are made! There are hands-on exhibits, examples of objects at various stages of their construction, and videos showing the artisans and how they work. This gave me a much deeper appreciation, both for the artworks, and for the amazing traditions that produced them.

All in all, it was an extraordinary trip. I learned a lot, had some great personal interactions, had some great experiences, and overall had a very good time. I feel lucky that I got to go, and I hope to attend many more conferences like this in the future, including venues besides ALife. Most notably, I can’t wait to go to a conference on evolutionary computation, like GECCO. But I see that I have a place in the ALife community, and I suspect it will become an important place for me going forward.

They haven’t yet published the individual papers from the conference or the presentation videos. I’ll share those when they’re ready. But the full proceedings (including my paper) are now available online, for folks who want to dig in, or are just curious to see the full diversity of what was presented.

Status Update: DISI and ALife

(This post’s photo was taken by me at the castle ruins in St. Andrews, Scotland. This is the front of the castle, as seen from the inside. It’s basically just one wall, with the ones on either side collapsed. The wall itself has a main gate to the right, passages on a second story running along the whole wall with windows to the outside, and a crumbling tower to the left. The inside floor is short trimmed grass, with a walkway recently added to minimize erosion, made of small parallel slats set flush to the ground. This was shot on a beautiful sunny day, with a lovely blue sky and just a few wispy clouds.)

It’s been a while since I gave a status update. Things are going really well! I’m mostly done with the course requirements for my PhD. Just two more classes to take, and they can be whatever I want, so I might do some independent studies to blend my research with my learning. I just got back from a summer institute, which was a new and fulfilling experience for me! I had a great time, learned a lot, and met some truly incredible people. I also just found out that the paper I submitted to the 2025 Artificial Life conference was accepted! Let me share a bit more about all that…

This summer I got to attend the Diverse Intelligences Summer Institute (disi.org), in St. Andrews, Scotland. It was a three-week immersive experience, full of lectures, events, projects, and networking. They brought together researchers from around the globe, studying intelligence from different perspectives. There were folks who study bats, birds, primates, or dogs in the field or in the lab, trying to test their cognitive abilities or unravel the complexity of their communications. There were philosophers, trying to wrap their heads around AI, language, society and more. There were folks studying gut feelings, and psychedelic therapies in humans. Folks doing science communication and wearable art. Folks studying how animals evolve, how software evolves, and how to build evolving cybersecurity systems modeled on the immune system. And so much more! It was truly incredible to meet so many brilliant and interesting people, to explore all our different specialties, and to see how they blend into one another.

St. Andrews itself is a lovely venue, with beautiful beaches, medieval ruins, and one of the oldest universities in the world. It’s also home to the first golf course, which makes it a sort of Mecca for golfers, though that didn’t mean a whole lot to me. I was there in July, when the weather was 60° F (15° C), with a lovely sea breeze, and lots of sunshine, which is pretty unusual for Scotland. The town center was cute, with plenty of pubs, shops, and restaurants, but I also got to explore nature quite a bit, walking along the coastline, hiking in the woods, and visiting the lovely botanical gardens. This was only my second visit to the UK, and the first time where I wasn’t just stuck in a big city. It was a very different biome than I’m used to, so it was fun to see the fine, dark sands, and the dramatic cliff sides and castle ruins overlooking the beach. I also got to see flowers and bees and all sorts of other plants and critters that were new to me.

As part of DISI, I did a group project with four other people. We were thinking about how animals pick up on lots of different cues in their environment to figure out what’s going on. We drew inspiration from a paper on cultural intelligence, where they talked about “calendar plants.” A good example of that might be some berry bushes that fruit at about the same time that the deer are ready to be hunted in a particular locale. We were wondering, when is it better to look for a sign that’s easier to spot but indirect and unreliable (like whether the berries are ripe), rather than paying a high cost to be sure about what you really care about (like scouting for deer)? We made a mathematical model that could work for a wide variety of systems like this, solved some simple versions of that model by hand, then used an evolutionary algorithm to validate our results and scale to larger, more complex systems. This is a bit of a departure from my usual research, but I thought it was a neat idea, I really enjoyed getting to work with biologists and philosophers on a project, and I had fun coding up a simulation of evolution in an unpredictable environment. I got to use a probabilistic programming language (probmods.org), which was new to me and hopefully useful in the future!
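
The heart of that trade-off can be caricatured with a tiny expected-value comparison. To be clear, this is my own simplification for illustration, not the actual model from our project, and every number in it is made up:

```python
def value_of_cue(p_event, reliability, payoff, cue_cost):
    """Expected payoff from acting on a cheap but imperfect cue.

    reliability = probability the cue correctly signals the event
    (a perfect cue has reliability 1.0; a coin flip, 0.5).
    """
    # You only collect the payoff when the event happens AND the cue caught it.
    p_success = p_event * reliability
    return p_success * payoff - cue_cost

def value_of_scouting(p_event, payoff, scout_cost):
    """Expected payoff from paying the full cost to observe directly."""
    return p_event * payoff - scout_cost

# Made-up numbers: deer present 40% of the time, a successful hunt pays 10,
# checking the berries costs 1, scouting directly costs 3.
cue = value_of_cue(p_event=0.4, reliability=0.7, payoff=10, cue_cost=1)
scout = value_of_scouting(p_event=0.4, payoff=10, scout_cost=3)
print(cue, scout)  # which strategy wins depends entirely on these parameters
```

With these particular numbers, the cheap, unreliable cue comes out ahead, but nudge the costs or the reliability and the answer flips. The interesting part of the real project was mapping out exactly where those flips happen.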

I really appreciate what the folks at DISI are doing. Fundamentally, I think intelligence is a diverse, messy, and tangled thing, so the only way to study it properly is with a mix of many different perspectives. It’s incredibly useful to see what problems other fields are investigating, and what tools they use to do that. Some ideas transfer really well between fields, and there are some great opportunities for collaborations at the intersections between them. It’s fantastic to get a chance to do that sort of cross-pollination, and to meet so many folks who care about related topics and who are eager for new perspectives. I made fast friends with several researchers and science communicators who I hope to stay in touch with and maybe collaborate with in the future! I’m really grateful I got to participate, and I’d highly recommend it to anyone who feels like it might be a good fit for them.

I’m also very excited about getting a paper accepted at ALife. I’m very proud of that paper, and will be sharing a longer write-up about it here once it’s published. I’m also excited to attend the conference! Artificial Life is a field of research that studies “life as it could be.” Basically, it’s a catch-all for a very wide variety of experiments where someone builds a model (usually a computer program) inspired by biology, just to see what will happen. The hope is that, even though life itself is incredibly complex, perhaps simple simulations that recreate just a bit of its mysterious qualities could help us understand life better. The kinds of experiments featured there range from small and simple tests of specific hypotheses about animal behavior, to vast and beautiful simulated worlds that are incredibly lifelike and nearly as mysterious as the real thing. This year’s conference is in Kyoto, Japan, which is exciting for two reasons. First, Japan is kind of the center of the ALife world. They have a lot of ALife specialists, and a lot of great work comes from there. Also: My wife Christina and I have always wanted to visit Japan, and this is a perfect opportunity to do it together!

The ALife conference will be in October, and I imagine the paper will be published around that time. Hopefully my travels will go smoothly, and after the event I can share my experience as well as my work with you here.

Status Update: End of Semester 3

Well, the third semester of my PhD is wrapping up! I really enjoyed my classes this time around. Evolutionary Computation was super interesting, and sparked all sorts of connections with my research. My class project was a big success, and I hope to turn it into a conference paper. I’ll share more on that later, but here’s a sneak peek if you’re curious. Deep Learning was also pretty great, and I feel like I have a deeper, more intuitive understanding of what’s actually going on when you train or interact with an AI. One of the more exciting / challenging topics we covered was Transformers, the new model architecture that powers ChatGPT and its ilk. In some sense, Transformers are quite simple, but they’re definitely not intuitive; understanding why they’re built the way they are, and why that works so well, takes some effort. So, to help cement my understanding, I wrote a blog post about it, and you get a bonus episode for the holidays!

Over winter break I hope to develop my Evolutionary Computation class project into a full paper. In short, it’s a new kind of evolutionary algorithm, inspired by endosymbiosis, that works in surprising ways. So far I’ve only used it to solve a trivial toy problem, so I’ll probably also start work on a follow-up study, exploring what practical applications this new algorithm might be good for. And, of course, I’ve got more research ideas to explore beyond that. I’m quite excited to try evolving a population without genomes, for instance. So many ideas! I hope I’ll be able to keep a few projects running in parallel, balancing my time across them, and leaving room for myself in between. I’ll continue to share updates as I make progress.

In the spring, I’ll delve even deeper into Deep Learning, with a class that explores counter-intuitive results and how surprisingly effective DL is sometimes. How is it even possible to “learn” using nothing but vector math? What are these models really doing, and why are some models better than others? Should be fun. I’ll also be delving into the math of chaos and fractals. I hope that will be useful for my research into self-modifying dynamic systems (i.e., simulated life 😛), and lead to some very pretty visuals I can share. We’ll see!

Anyway, I’ll share the post about Transformers right after this, and that will wrap up another year of blog posts. More to come in January!

Status Update: Semester 3

I’m at an interesting moment in my studies, so I thought I’d let you know what’s going on!

Year two of my PhD program has begun. I’m about a month into my third semester, and things are going well. I’m taking two classes right now: Evolutionary Computation, and Deep Learning. Most of my Computer Science education has been about how to design algorithms and write software to solve different kinds of problems, but these classes are different. This semester, I’m learning how to get computers to discover their own algorithms, and write their own software. Honestly, the state of the art here is still quite primitive. We’ve found some very impressive techniques, but they each apply to a narrow domain, and we don’t understand them nearly as well as we’d like. Which makes them fun topics to study. 🙂

The other fun thing about this semester is that both of my classes are built around student projects. More or less, I get to pick projects that fit with my research, and the class is there to help me find the time, resources, and guidance to complete the projects successfully. I like this much better than undergraduate style courses built around assignments and exams that are very generic and may not be relevant to my work. We’ll see how things unfold, but I’m currently planning to work on two projects that I’m excited about.

For Evolutionary Computation, I’m working on an experiment about endosymbiosis. I was inspired by this classic experiment, which examined how bacteria evolve antibiotic resistance, and how genetic innovations spread through the population spatially. I’m going to try evolving a host environment that supports an inner population, a bit like how my gut supports a microbiome. The hope is that the host will be able to design a supportive environment, with different regions that cultivate “microbes” with different traits, such that it can guide and coax them into evolving more specialized forms. This is an exciting experiment for me, because I’m not sure what to expect, but I’m pretty confident that something interesting will happen.

A screenshot from the video linked above, showing strains of bacteria gradually growing into bands with increasing concentrations of antibiotic, fanning out from points where key mutations occurred.

For Deep Learning, I’m going to use computer vision techniques to detect interesting patterns in the Game of Life, since I’ve been using that as an environment for my evolution experiments. The Game of Life has very simple rules, but it evolves in complex ways. Most patterns quickly dissolve into empty space or settle into a few boring, stable forms. But rarely, you get something much more interesting. For decades, people have been exploring this space, finding interesting patterns and classifying them. You get huge complex structures that stabilize themselves, change continuously in repeating cycles, or even propel themselves and move at a steady pace. I’ll build a system that can detect and categorize these patterns, so that when my evolutionary algorithm finds them, I can reward it and ask for “more like that.”

Examples of interesting patterns in the Game of Life. The first is Eater 2, a static shape that persists forever, but has the special property of being able to “eat” gliders that collide with it, recovering its shape after. The second is Monogram, a period-four oscillator, which is small, but occurs very rarely from random conditions. The third is a middleweight spaceship, which moves forward two spaces as it repeats itself in four time steps.
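For anyone who hasn’t played with the Game of Life before, the whole rule set is tiny, which makes patterns like these all the more surprising. Here’s a minimal sketch of one simulation step, just for illustration (this is my own toy version in plain Python, not the code from my project):

```python
# A minimal sketch of Conway's Game of Life update rule.
# Grid cells are 0 (dead) or 1 (alive); the edges wrap around (toroidal).

def step(grid):
    """Apply one Game of Life update to a 2D grid of 0s and 1s."""
    rows, cols = len(grid), len(grid[0])
    new = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            # Count the 8 neighbors, wrapping around the edges.
            n = sum(
                grid[(r + dr) % rows][(c + dc) % cols]
                for dr in (-1, 0, 1)
                for dc in (-1, 0, 1)
                if (dr, dc) != (0, 0)
            )
            # A live cell survives with 2 or 3 neighbors;
            # a dead cell is born with exactly 3.
            new[r][c] = 1 if (n == 3 or (n == 2 and grid[r][c])) else 0
    return new

# A "blinker": three cells in a row oscillate with period 2.
blinker = [
    [0, 0, 0, 0, 0],
    [0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0],
    [0, 0, 1, 0, 0],
    [0, 0, 0, 0, 0],
]
assert step(blinker) != blinker            # it changes...
assert step(step(blinker)) == blinker      # ...and returns after two steps
```

Patterns like the oscillators and spaceships above are just configurations that, under this one rule, cycle back to themselves (possibly shifted), which is exactly the kind of regularity a detector can look for.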

This month’s essay is inspired by my Evolutionary Computation class, and the work I’ve been doing to develop the specific research questions I want to focus on for my PhD. So, check back on Wednesday to learn more about how evolution got started, and why it’s worth asking: how does evolution evolve?

GECCO Follow-Up

(I took this post’s photo at the Star Trek Original Series Set Tour in Ticonderoga, New York. It’s a view of the warp core of the USS Enterprise, which is only a few feet deep but looks much larger thanks to forced perspective. The room is filled with structures with complicated geometric shapes, technical looking panels, and dramatic lighting in red, blue, and purple.)

In my last post, I wrote about my latest research project and why I was so excited to present it at GECCO, the premier conference for evolutionary computation. I promised a follow-up, and here it is! Unfortunately, I didn’t make it to Melbourne. Instead, I had a very complicated and protracted battle with my University’s travel planning system, United Airlines, and the Australian visa office, all from the comfort of my home in Vermont. I couldn’t even participate in the event remotely, because of the time zone difference. This is all very disappointing, but I tried to make the best of it. I’ve been busy with the next iteration of this project, and enjoying a bit of “staycation” time here in New England (hence this month’s cover photo).

In any case, my paper did get published, and I’d still like to share the materials I presented virtually at the conference. It’s mostly intended for a technical audience, but I hope at least some of my readers will find it interesting. The paper is titled A Meta-Evolutionary Algorithm for Co-evolving Genotypes and Genotype / Phenotype Maps. I had to cut it down to just four pages for the official publication, since it was accepted as a poster, but the full length version is available here, and I wrote up an overview of my algorithm’s implementation for those who want to go deeper. There’s also a digital version of my poster and a short video overview of my experiment.

I continue to work on this idea, and it is starting to evolve beyond what I presented in that paper. Right now, I’m actively deconstructing and rebuilding the algorithm. CPPNs are an important and well known part of the AI field, so I’m trying to describe precisely how my algorithm is different, and which of those differences account for the remarkable results I found. Originally I thought of this research as being about epigenetics specifically, but as I try to generalize and simplify, what I’m left with looks like straight-up endosymbiosis. I’ve been thinking of this algorithm as a metaphor for a cell and its genes / nucleus, but it could just as easily be a metaphor for an animal and its community of microbes. This is exciting, since I’d love to do more research on endosymbiosis, and I really like the idea that perhaps symbiosis is the driving force behind intelligence as we know it, fundamentally changing the dynamics of evolution.

Anyway, that’s how I see it for the moment, and where I hope my research will lead in the near future. For now, though, I’m wrapping up my summer with a few more fun outings, and preparing for the start of classes later this month. I’ll be diving deep into both evolutionary computation and deep learning, which I’m really looking forward to.

Why the Game of Life Paper?

(This month’s image is a slime mold growing on a log. It grows in a branching network of banana-yellow tendrils, some of which are engulfing plant debris they encountered. Source)

Later this month, I’ll be attending the Genetic and Evolutionary Computation Conference (GECCO) in Melbourne, Australia. I’m super excited to go, and to present my very first published academic paper as a poster. I’ll share more here when all is said and done, but unfortunately my paper isn’t really intended for a general audience, like this blog. It would probably be hard to understand for anyone outside of the fields of AI or ALife. So, for everybody else, I’d like to share what the paper means to me, and what I’m trying to say by publishing it.

My research is inspired by epigenetics, and new ways of thinking about evolution. I saw that life doesn’t just evolve by chance; it evolves to become more evolvable. It learns how to explore the range of possible forms and lifestyles more efficiently, and to nudge evolution down more fruitful paths. Life uses its intelligence to become more intelligent still. In my mind, this changes everything about evolution, and I was shocked it wasn’t better known. Most discussions of evolution (and the programmer version: evolutionary computation, or EC) are too simple, and ignore these critical details. So, I figured I’d be the one to bring this up, and show people why it matters.

I started my first experiment before I even got to university. I was so excited by the idea, I just had to get it out of my head. I actually avoided looking for prior work, because I wanted to see how I would manifest this idea without being biased by other people’s thinking. Besides, I didn’t know of any research like mine, and I didn’t know how to find it, either. That’s why I applied to UVM. When I got here, my advisor and lab mates encouraged me to publish this project, and pointed me at the relevant literature. So I hit the books, reading all that had been done before in order to put my own work into context.

And, of course, I found I’m not the first to have this idea. There are many variations of EC inspired by biology, looking for the “secret sauce” that makes life more powerful than our computer models. In particular, how life evolves to be more evolvable is an active area of research, which has been building momentum in recent years. At first, I was disappointed. My idea was already taken! So much of what I thought made my project interesting had been tried before in some other context. But not exactly. Identifying those subtle differences has been tremendously helpful.

You see, it’s pretty well established now that “evolvability” is important. In our experiments, simulated life that’s more evolvable finds fitter solutions faster. It’s better at adapting to changing circumstances, too. It seems to be smarter and more creative. I find this exhilarating, yet these discoveries didn’t “change everything” like I had hoped. In the experiments so far, it feels like an incremental improvement. It helps, but not enough to draw much attention away from other areas of AI research, like deep learning, which is seen as much more powerful and more productive.

I think that’s because we still haven’t broken out of our old ways of thinking. Traditional EC is all about finding good solutions to a problem, but I would argue that evolution isn’t about problem solving. It’s about problem finding. Life explores the space of possible lifestyles to find and exploit opportunities. The evolution of life is a bit like a slime mold. It grows simultaneously in all directions, questing around obstacles to find resources, reinforcing the branches that get lucky, culling back the ones that don’t. It doesn’t have a top-down view of the world, but it’s still strategic and adaptive. When I look at most of the existing experiments in this space, I feel like we’re putting a slime mold into a narrow tunnel and measuring how fast it can get to the other end. We’re accidentally putting evolution in a straitjacket, and blinding ourselves to what makes it so interesting and powerful.

So, in my first experiment, I try to show a different perspective. I made a single algorithm that can adapt itself to solve many different tasks. Normally, an EC programmer picks one task to solve, then designs an evolutionary search strategy to suit that problem. They invent a genome language, a way of turning that into a solution, and ways of randomly tweaking the genome that might lead to better solutions. In my experiment, I evolved the search strategy, too. As the programmer, I designed a vast and open-ended search domain, and many ways that the algorithm could restrict that space. But I wasn’t sure which restricted sub-spaces would work best, and, unlike traditional EC, I didn’t try to guess. I just let the algorithm figure that out for itself.
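To make the contrast concrete, here’s a minimal sketch of the traditional setup I’m describing, not my algorithm: the programmer hand-picks the genome language (here, a bit string), the genome-to-solution mapping, and the mutation operator, and only the genomes themselves evolve. The “OneMax” problem and the (1+1) scheme below are my own toy choices for illustration.

```python
import random

# Traditional EC sketch: every design decision below is fixed by the
# programmer in advance; evolution only searches within those choices.

random.seed(42)
GENOME_LEN = 32

def fitness(genome):
    # The genome-to-solution mapping is hand-designed and trivial here:
    # the solution IS the bit string, scored by counting 1s ("OneMax").
    return sum(genome)

def mutate(genome, rate=1 / GENOME_LEN):
    # The mutation operator is also chosen up front by the programmer.
    return [bit ^ 1 if random.random() < rate else bit for bit in genome]

# A simple (1+1) evolutionary algorithm: keep the mutant only if it is
# at least as fit as its parent.
parent = [random.randint(0, 1) for _ in range(GENOME_LEN)]
for _ in range(2000):
    child = mutate(parent)
    if fitness(child) >= fitness(parent):
        parent = child

assert fitness(parent) >= 30  # at or near the optimum of 32
```

In my experiment, the pieces a programmer would normally hard-code, especially the genome-to-solution mapping, are themselves subject to evolution, rather than fixed like they are here.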

The way I did this is also interesting. It turns out, the algorithm I invented is strikingly similar to one that’s already popular: “compositional pattern-producing networks,” or CPPNs. Again, it was a little frustrating to be scooped, but I’m using this algorithm in a new way. Instead of evolving new “bodies” for simulated life, I’m evolving new ways of generating bodies. It’s a subtle difference, but an important one, I think. That extra level of indirection gives evolution more influence over its destiny, and the power to make more complex patterns in ways I couldn’t even anticipate. Now that I know how my idea is so similar to, yet different from, an existing algorithm, I’m teasing apart those differences, to measure the impact of each one.

I’m proud of my work, and excited to talk about it with other EC enthusiasts at GECCO. On the other hand, I’m still figuring out how to do science, and there’s a lot I don’t like about my first paper. This project was mostly my way of proving to myself that this crazy idea could work. The results are intriguing, but it’s not yet a clear example of what I want to show. It’s also complicated, unusual, and hard to explain, even to other EC researchers. If I want people to get excited about this, I need to simplify, make my work more relatable, and find better ways to demonstrate and measure the novel behavior I’m talking about here. There are no “obstacle courses for slime molds” in the EC literature that I know of, so perhaps I’ll need to design some.

Hopefully, I’ll get lots of inspiration and feedback at GECCO. As I learn more about the field of EC, I’m finding more and more examples of work similar to, yet slightly different from, my own. This is great, because each of those differences is an opportunity for a new experiment, to see if my perspective can shed light on something new. I’m already dreaming up all sorts of new ways to explore my ideas. And that’s more or less how I hope to spend the next several years. Maybe that’s my PhD.

In any case, I hope that explanation was interesting, and not too vague. I’ll get more specific in a few weeks, when I post a follow up with the full GECCO paper, the poster I presented, a video summary of that poster, and links to some supplemental results and analysis. I bet I’ll have some fun things to report from my time in Melbourne, too! As always, I’d love to hear from you in the comments.

New Coding Project

I just published a new coding project! If you’ve been following along, this is the “grown up” version of a demo project I posted last year.

It’s further exploration of what I call an “epigenetic algorithm.” It’s inspired by a simple observation: in living cells, the process of evolution is actively managed by the cell, which itself is an evolved mechanism. Using evolution to optimize evolution seems like a powerful trick, so I’m trying to reproduce it in small-scale AI experiments. I hope to make evolutionary computing more open-ended, more successful in vast search spaces, and less biased by the programmer. In this case, I’m generating cool looking Game of Life simulations, but I hope to find many more practical applications in the future.

I’m not sure whether or not I will publish this as a work of science. It’s complicated and weird, and there’s more work needed to make this a proper controlled experiment. As I learn more, I’m already thinking of other ways to explore this idea that might be more effective. So, for now, the plan is just to share the code, and I may or may not revisit this later, depending on where my other leads take me. 🙂

I gave a presentation that covers the motivation, results, and challenges of this project for a technical audience.

You can also peruse the source code on Codeberg.

Sorry this is less accessible than my usual blog posts! If you have any questions, just drop them in the comments, and I’ll happily answer them.

Checking In

Hello there, reader. I hope you’ve been enjoying the blog. It’s been about two years since I began, and in that time a lot has happened. I moved from California to Vermont and started a PhD program at UVM. That’s going great, but it’s keeping me very busy, and I think I have to make some changes.

So far, I’ve been posting a full-length blog post every month, but I don’t think I can keep up that pace. I’ve got a few finished posts queued up, but I’ve had barely any time to work on new ones since classes began. So, this will be the last post for the year. Then, starting in January, I will try posting every three months. Hopefully that will be a more sustainable pace. I might also experiment with shorter / less formal posts. We’ll see!

In the meantime, I thought I’d share a bit about my grad school experience so far. At the time of writing this, I’m about two thirds of the way through my first semester.


Here at UVM, I find myself completely surrounded by interesting people, ideas, and projects. It’s electrifying. I don’t know if my mind has ever been so stimulated before. I’m often pooling my mind with many other brilliant people, and I love that feeling. Whenever I have the kind of idea that might show up in this blog, I have many people at hand who want to talk about it and make interesting contributions. And they’re pulling me into interesting conversations all the time! I’m reading all sorts of interesting papers in AI, philosophy of mind, and biology. I’m integrating that knowledge by writing papers and building fun little software projects for class. So far, most of what I’m learning isn’t radically new or different, but I’m fleshing out my understanding in much more depth, discovering fun tangents, learning precise vocabulary and specialized tools, and building out vast webs of mental connections.

I haven’t had much time to work on research yet, but I’m excited about my prospects. I see reflections of my ideas in the work of others, and vice versa. It’s helping me see new angles and new possibilities. I’m talking very actively with my advisor and fellow students about potential projects and collaborations. I came to UVM with lots of things I’d like to try (many related to this blog), but now the challenge is integration. How do I pursue my interests in the context of the lab? How do I make use of new ideas I’m discovering and find exciting? How do I connect with the work of other researchers, to make my contributions relevant and interesting to the field? It’s forcing me to question myself, reframe things, explain myself, and find inspiration in others’ ideas.

This is exactly what I was hoping for. I’ve been on my own since leaving Google, and the experience is just underscoring something I know to be true: human minds aren’t meant to work alone. We’re social creatures, and most of us just cannot reach our full capacity in isolation. At UVM, my mind is constantly stimulated, and I’m always challenged to stretch myself, try new ideas, and make new contributions. I feel I can go much further here, and accomplish great things. Of course, the flip side is that it’s exhausting, especially while I’m still new and figuring out how things work here. The pace of activity is very fast, and there’s much more to read, do, and learn than I could possibly fit into a day.

In some sense, that’s already very familiar. My experience at Google (especially near the end, when I was leading teams) was also a fast paced, continuous, information overload. The key skills for dealing with that are prioritization and time management. I just have to pick what to spend my time on, and let everything else pass me by with minimal interference. Thankfully, I’ve gotten really good at that.

Still, I’m definitely feeling challenged right now, and so busy I barely have time to breathe. There are a few reasons for that. One is just that the pace of life was so much gentler over the past couple of years as I made this transition. I was working very hard, but my work was self-paced and I had very little external pressure. I took my time. Nobody was depending on me, or judging my performance. So, I’m having the jarring experience of suddenly stepping onto a moving treadmill. Only now, after a few months of settling in, have I found a good rhythm where I can keep up with all my work, have a life, and also pursue my own research goals.

This is also hard because I’m running on a very different sort of treadmill than what I experienced at Google. In a sense, it’s easier. I have a ton of work to do, but it’s mostly low-stakes reading, writing, and coding at a steady pace. As a manager at Google, my work was mostly decisions, strategy, and people issues. I spent a lot of time moving from crisis to crisis. I had to act quickly and decisively with limited information. My decisions would have a big, often irreversible impact on people’s lives (sometimes individuals I knew and cared about, and sometimes millions of strangers). I was accountable for those human consequences. That’s stressful! By comparison, grad school work is fun and low pressure.

In another sense, Google was actually much easier than academia, because it was highly structured. There, everyone had a well defined job role with explicit expectations. We had official priorities and ways of doing things that everyone adhered to. Everyone was organized into teams, which served complementary purposes and fit into a coherent whole. Leaders held meetings with regular cadences every week, month, quarter, etc., checking in on progress and aligning people on their mission, values, goals, and priorities. We never lived up to the ideal, but Google felt a bit like a clockwork mechanism. We all had a function, individually and collectively, and we worked together in lock-step to make it happen.

Academia and science are far less structured. Every PhD, every lab, every department, and every school does things differently. There are very few hard constraints, and anything that fits within them is valid. In grad school, lectures and assignments feel a bit more like “recommended work,” and you’re expected to invest only as much into them as makes sense for your needs. This is freeing, but also in some sense very frustrating. There are no best practices. Nobody can tell me what to do, or how to succeed. There’s very little indication of how I’m doing, or what progress I’m making. Everyone’s working more or less independently, on only sorta related projects, within a vaguely defined scope, based on their own judgment. There’s little sharing, few standards, and plenty of waste.

That said, science should be less structured than engineering. It must be. The university is designed this way to encourage diverse ways of thinking and of approaching problems. The challenge of science is that we don’t know what we don’t know. We don’t know what to look for, where to look, or how we’ll find it. Very often important truths are hidden behind common ideas and practices that everyone “knows” to be right, even though they are blatantly wrong! It’s important that students can follow in the footsteps of great thinkers and flesh out their works, but also that they can go in completely new directions and find what was overlooked. The chaos and freedom can be challenging, but it’s a necessary ingredient for innovation.


Well, that’s what’s going on with me. I hope my thoughts on grad student life are interesting. I will start experimenting with the format of this blog and see where it goes. If you have any thoughts or suggestions about that, please let me know! If you’d like to stay more up to date on what I’m doing and thinking, I’m most active on Mastodon. For those who don’t know, it’s basically a smaller, nerdier version of Twitter that isn’t owned by an alt-right friendly oligarch, and with much healthier communication norms and moderation practices. I post there pretty often, and I respond to most comments / DMs. You can find my profile here.