My research focuses on the computational foundations of intelligent behavior. We develop such theories and systems within a unified methodology -- at the heart of which is the idea that learning plays a central role in intelligence.
My work centers on the study of machine learning and inference methods in the service of natural language understanding. In doing so, I have pursued several interrelated lines of work that span multiple aspects of this problem -- from fundamental questions in learning and inference and how they interact, to the study of a range of natural language processing (NLP) problems. Over the last few years the focus of my natural language understanding work has been the development of constrained conditional models -- an integer linear programming formulation for (jointly) learning and supporting global inference. Within this framework we have studied fundamental learning and inference issues -- from learning with indirect supervision to response-driven learning, decomposed learning, and amortized inference -- and addressed multiple problems in semantics and information extraction. In particular, we have developed state-of-the-art solutions and systems for semantic role labeling, co-reference resolution, and textual entailment, as well as named entity recognition, Wikification, and other information extraction problems. Much of my recent work has also emphasized the notion of incidental supervision as a way to circumvent the inherent difficulty of supervising complex problems.
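To make the constrained-inference idea concrete, the following is a minimal, hypothetical sketch: local classifier scores are combined with a declarative constraint, and inference picks the best global assignment. All variable names, scores, and the constraint are illustrative assumptions, and the exhaustive search merely stands in for a real ILP solver over 0/1 indicator variables.

```python
# Sketch of constrained inference in the spirit of constrained conditional
# models: maximize the sum of local scores subject to declarative constraints.
# Scores, labels, and the constraint below are hypothetical illustrations.
from itertools import product

# Hypothetical local scores: scores[i][l] is the classifier's score for
# assigning label l to decision variable i (e.g., candidate arguments in
# semantic role labeling).
scores = [
    {"A0": 2.0, "A1": 0.5, "NONE": 0.1},
    {"A0": 1.5, "A1": 1.0, "NONE": 0.2},
    {"A0": 0.3, "A1": 1.8, "NONE": 0.4},
]

def violates(assignment):
    # Declarative constraint: each core role (A0, A1) is used at most once.
    for role in ("A0", "A1"):
        if sum(1 for label in assignment if label == role) > 1:
            return True
    return False

def constrained_argmax(scores):
    labels = list(scores[0])
    best, best_score = None, float("-inf")
    # Exhaustive search over joint assignments; an ILP solver would search
    # this space efficiently instead.
    for assignment in product(labels, repeat=len(scores)):
        if violates(assignment):
            continue
        total = sum(scores[i][label] for i, label in enumerate(assignment))
        if total > best_score:
            best, best_score = assignment, total
    return best, best_score

print(constrained_argmax(scores))
```

Note that the unconstrained argmax would assign A0 twice; the constraint forces the globally best consistent labeling instead, which is the essential behavior the ILP formulation supports.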
I have also worked on fundamental problems in natural language acquisition, English as a Second Language (ESL), and information trustworthiness. Over the last decade we have also developed a declarative Learning-Based Programming language, LBJava, for the rapid development of software systems with learned components; we are currently working on Saul, a next-generation declarative learning-based programming language.