I'm an academic working in natural language processing, computer vision, and robotics. My research centers on building trustworthy AI systems by revealing how they work internally.
Right now, I'm exploring interpretability for embodied intelligence. I ask questions like: How do physical agents perceive and interact with their environment? What mechanisms drive their decision-making?
I have extensive experience with frameworks such as:
Some of my key projects include: