About

I'm a fifth-year PhD student at the University of Pennsylvania, advised by Professors Mayur Naik and Eric Wong. For the past two summers, I interned at AWS AI (Fundamental Research Team), advised by Matthew Trager and Stefano Soatto, where I worked on uncertainty quantification and experience-guided reasoning for agents. My research is supported by the NSF Graduate Research Fellowship Program.

My research aims to make AI systems reason reliably and, more broadly, behave as intended. I do this by interfacing LLMs with programs, creating agents and workflows that we can interpret, control, and verify. My position on the role of symbolic abstractions (programs) in the LLM era is presented in a pre-print, and my follow-up work explores a core challenge in this space: reasoning via per-instance program synthesis. Recently, I've been exploring how to interpret and control LLMs in lightweight, targeted ways, and how to enable general AI systems, especially agents and workflows, to become safer, more reliable, and less costly over time.

Research Summary

The Interface Between LLMs and Programs

Concepts as symbols

Surface LLM concepts as symbols that programs can use.

Pre-print '25 · ICML '24 · ICLR Tiny '23 · NeurIPS XAIA '23 · ACL '25

Programs for reasoning

Use programs to make LLM reasoning more reliable.

Pre-print '25 · Pre-print '25 · NeurIPS '25 · AACL '24

Verification and control

Verify, debug, and control LLM systems for safety.

OOPSLA '24 · NeurIPS MechInterp '25 · Pre-print '25

Recent News

Pre-prints

Conference Papers

Workshop Papers

Student Mentoring

Teaching