About

I'm a fifth-year PhD student at the University of Pennsylvania, advised by Professors Mayur Naik and Eric Wong. I interned at AWS AI (Fundamental Research Team) for the past two summers, advised by Matthew Trager and Stefano Soatto, where I worked on uncertainty quantification and experience-guided reasoning for agents. My research is supported by the NSF Graduate Research Fellowship Program.

My research aims to make AI systems reason reliably and, more broadly, behave as intended. I do this by interfacing foundation models with programs, creating agents and workflows that we can interpret, control, and verify. My position on the role of symbolic abstractions (programs) in the foundation model era is presented in a pre-print, and my follow-up work tackles a core challenge in this space: reasoning via per-instance program synthesis. Recently, I've been exploring how to interpret and control foundation models in lightweight, targeted ways, and how to enable general AI systems (agents, workflows, and pipelines) to learn from experience so that they become more reliable and less costly over time.


Research Summary

The Interface Between Foundation Models (FMs) and Programs

🧠 Concepts as symbols

Surface and use FM concepts as symbols in programs.

Pre-print '25 ICML '24 ICLR Tiny '23 NeurIPS XAIA '23 ACL '25

💻 Programs for reasoning

Improve model reasoning capabilities with programs.

Pre-print '25 Pre-print '25 NeurIPS '25 AACL '24

🎛️ Program verification

Verify, debug, and control AI systems.

OOPSLA '24 NeurIPS MechInterp '25 Pre-print '25

Recent News

Pre-Prints

Conference Papers

Workshop Papers

Student Mentoring

Teaching