Howard Chen
I recently received my PhD in CS from Princeton, where I was co-advised by Danqi Chen and Karthik Narasimhan.
My research focuses on building safe agents that can operate reliably over long interactive horizons and continually improve.
Towards this goal, my research covers several topics:
- Building large-scale virtual and embodied environments to train and evaluate agents (WebShop, Touchdown).
- Building agentic long-term memory and developing algorithms for continual learning (MemWalker, Continual Memorization).
- Understanding properties of post-training algorithms: RL on LMs forgets less than SFT (Retaining by Doing).
- Interpretability & safety: interpretability improves robustness (Rationalization Removes Adv. Attacks); an agent performing deep research changes its stance on political, moral, and safety questions (Context Accumulation Changes Belief).
- AI for advancing science (AI Reverse-Engineering Blackboxes, AI Science Tutor), and benchmarking multi-modal reasoning on scientific knowledge/charts (CharXiv).
During my PhD, I interned at Meta (FAIR), working with Asli Celikyilmaz and Jason Weston. Prior to Princeton, I was an ML researcher at ASAPP working with Tao Lei, and a research assistant at Cornell Tech working with Yoav Artzi.
I obtained my M.Eng. in CS from Cornell and my B.S. in Electrical Engineering from National Taiwan University.