
About Me
Find my full C.V. here and my publication list on Google Scholar.
My name is Thomas Anderson ("Andy") Keller. I study how biological and artificial neural systems build representations that enable efficient learning and strong generalization in the complex and messy real world. In particular, I am interested in the transformations that happen all around us, all the time – when objects move, viewpoints shift, and context drifts over time, how do we continue to act reliably? My work sits at the intersection of neuroscience and machine learning, and I describe my research agenda as Geometric NeuroAI: using symmetry, dynamics, and physically grounded inductive biases to understand neural computation and to design models that generalize more predictably.
I completed my Ph.D. at the University of Amsterdam under the supervision of Max Welling, where I became fascinated with how the brain’s physical organization might shape computation. My work at the Kempner Institute continues along this trajectory: using symmetry, geometry, and spatiotemporal dynamics as a shared language between modern AI and neuroscience.
Education
Kempner Research Fellow (2023 - Now) Harvard University, Kempner Institute
Independent.
- In 2 years: published 9 conference papers (4 more under review), 1 book, and 4 conference abstracts.
- Closely mentored two Ph.D. students, serving as last author on three major conference publications. Mentored one undergraduate who received a Rhodes Scholarship.
- Designed and taught an undergraduate course on "The Neuroscience of Artificial Neural Networks".
Ph.D. Machine Learning (2018 - 2023) University of Amsterdam
Supervisor: Max Welling
Thesis: Natural Inductive Biases for Artificial Intelligence
M.S. Computer Science (2015 - 2017) University of California San Diego
Supervisor: Garrison Cottrell
Thesis:
B.S. Computer Science w/ Honors (2011 - 2015) California Institute of Technology
Supervisor: Yasser Abu-Mostafa
Experience
Apple Machine Learning Research (Summer 2022)
- Developed "Homomorphic Self-Supervised Learning", a framework which subsumes data augmentation in self-supervised learning through structured equivariant representations.
- Published a NeurIPS 2022 Self-Supervised Learning Workshop paper based on this work; the full AISTATS paper is still under review.
- Additional collaborative work in submission to ICML 2023.
Intel Nervana AI Lab (2016 - 2018)
- Deep Learning Data Scientist (Sept. 2017 - Sept. 2018)
- Algorithms Engineer Intern (June 2016 - June 2017)
Data Science for Social Good (Summer fellow 2015)
Lyve Minds Inc. (Analytics Engineering Intern Summer 2014)
- Developed a supervised learning algorithm for automatic editing and summarization of user-generated handheld video based on predicted level of interest.
California Institute of Technology (Undergraduate Researcher 2012)
- Paper: Experimental Realization of a Nonlinear Acoustic Lens with a Tunable Focus
- Gathered and analyzed waveforms from an acoustic lens to determine optimal characteristics of interface materials.
Teaching
I maintain the guiding principle that the measure of a scientist lies not in their own accomplishments, but in those of their students. I am therefore incredibly grateful to have had the opportunity to design and teach my own course at Harvard College, and simultaneously mentor nearly a dozen students throughout their undergraduate and graduate careers. Through this experience I have developed a teaching philosophy that naturally mirrors my research style as an independent investigator. Specifically, as a scientist, my research lies at the intersection of theory and experiment, guided by physical intuition and curiosity. As a teacher and advisor, I strive to guide students in an analogous manner: blending formal analysis with interactive experiment, emphasizing the link between intuition and mathematical formalism, and inspiring scientific curiosity for self-motivated exploration.
Courses
- Neuroscience of Artificial Intelligence (Neuro101GG). Harvard Undergraduate Seminar. Fall 2025. (Syllabus)
Advising
- Emma Finn (Undergraduate, Harvard): Recipient of the Rhodes Scholarship ‘26 and two Kempner Undergraduate Research Fellowships.
- Mozes Jacobs (Ph.D. Candidate, Harvard): Work resulted in a CCN ‘25 Oral presentation and ICLR ‘25 submission.
- Hansen Lillemark (Ph.D. Candidate, UCSD): Daily supervisor for summer research visit; work resulted in ICLR ‘25 submission and NeurIPS ‘25 workshop acceptances.
- Qinghe Gao (Master’s Candidate, UvA): Master’s thesis supervision resulting in Best Paper Award at NeurIPS ‘21 SVRHM workshop: Modeling the Observed Domain-Specificity in the Visual Cortex using Topographic Variational Autoencoders. Now PhD at TU Delft.
- Sid Bharthulwar (Undergraduate, Harvard): Senior Thesis published at NeurReps Workshop. Now quant at Jump Trading.
- Samarth Bhargav (Master’s Candidate, UvA): Master’s thesis supervision on ‘Geometric Priors for Disentangling Representations’. Now Postdoc in the Information Retrieval Lab, University of Amsterdam.
- Fiorella Wever (Master’s Candidate, UvA): Master’s thesis supervision resulting in a NeurIPS Self-Supervised Learning Workshop paper. Now Machine Learning Engineer at Evvy.
As Teaching Assistant
- Teaching Assistant: "Machine Learning 1". University of Amsterdam, Bachelor’s. (2020)
- Teaching Assistant: "Machine Learning 2". University of Amsterdam, Master’s. (2019)
- Head Teaching Assistant: "Data Visualization". University of California San Diego, Master’s. (2016)
Personal
Privately, I enjoy cooking (@TheOtherThomasKeller), running, and playing with my gymnastics rings. I was also an organizing member of the Inclusive AI group at the UvA, whose goal was to reduce harmful bias (both algorithmic and human) in the field of machine learning. Please feel free to email me if you have any questions!