Difflog: Beyond Deductive Methods in Program Analysis

Abstract

Building effective program analysis tools is a challenging endeavor: analysis designers must balance multiple competing objectives, including scalability, the rate of false alarms, and the possibility of missed bugs. These design decisions are rarely all optimal when the analysis is applied to a new program with different coding idioms, environment assumptions, and quality requirements. Furthermore, the alarms produced are typically accompanied by limited information, such as their location and abstract counter-examples. We present DIFFLOG, a framework that fundamentally extends the deductive reasoning rules underlying program analyses with numerical weights. Each alarm is now naturally accompanied by a score, indicating quantities such as the confidence that the alarm is a real bug, its anticipated severity, or its expected relevance to the programmer. To the analysis user, these techniques offer a lens through which to focus attention on the most important alarms and a uniform method for the tool to generalize interactively from human feedback. To the analysis designer, they offer novel ways to synthesize analysis rules automatically in a data-driven style. DIFFLOG achieves large reductions in false alarm rates and missed bugs on large, complex programs, and it advances the state of the art in synthesizing non-trivial analyses.
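The core idea admits a small sketch. The snippet below is a hypothetical illustration, not the paper's implementation: it evaluates two weighted Datalog-style reachability rules to fixpoint, scoring each derived fact as the maximum over its derivations of the product of the rule weights (a Viterbi-style semiring), so that derived alarms can be ranked by score. The relation names, weights, and choice of semiring are all assumptions made for illustration; the paper's exact semantics may differ.

# Hypothetical sketch: weighted Datalog-style rules, where each derived
# fact carries a score. A fact's score is the maximum over derivations of
# the product of the weights of the rules used (a Viterbi-style semiring).

# Input facts (EDB) with score 1.0: edges of some program graph.
facts = {("edge", "a", "b"): 1.0,
         ("edge", "b", "c"): 1.0,
         ("edge", "c", "d"): 1.0}

# Weighted rules:
#   reach(x, y) :- edge(x, y).              [weight 0.9]
#   reach(x, z) :- reach(x, y), edge(y, z). [weight 0.8]
RULE1_W = 0.9
RULE2_W = 0.8

def step(db):
    """One round of weighted immediate consequence; returns the updated db."""
    out = dict(db)
    # reach(x, y) :- edge(x, y).
    for (rel, x, y), v in db.items():
        if rel == "edge":
            key = ("reach", x, y)
            out[key] = max(out.get(key, 0.0), RULE1_W * v)
    # reach(x, z) :- reach(x, y), edge(y, z).
    for (r1, x, y), v1 in db.items():
        if r1 != "reach":
            continue
        for (r2, y2, z), v2 in db.items():
            if r2 == "edge" and y2 == y:
                key = ("reach", x, z)
                out[key] = max(out.get(key, 0.0), RULE2_W * v1 * v2)
    return out

# Iterate to fixpoint: scores only increase and are bounded above, so this halts.
db = dict(facts)
while True:
    nxt = step(db)
    if nxt == db:
        break
    db = nxt

# Rank alarms by score: facts with longer derivations get lower confidence.
for fact, score in sorted(db.items(), key=lambda kv: -kv[1]):
    if fact[0] == "reach":
        print(fact, round(score, 3))

Under this reading, alarms whose derivations are longer or rely on less trusted rules naturally receive lower scores, which is one way such scores could drive ranking and generalization from user feedback.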

Publication
In Machine Learning for Programming