Access Restriction Subscribed
Author Heckman, Sarah Smith
Source ACM Digital Library
Content type Text
Publisher Association for Computing Machinery (ACM)
File Format PDF ♦ HTM / HTML
Language English
Abstract Static analysis tools are useful for finding common programming mistakes that often lead to field failures. However, static analysis tools regularly generate a large number of false positive alerts, requiring manual inspection by the developer to determine whether an alert indicates a fault. The adaptive ranking model presented in this paper uses feedback from developers about inspected alerts to rank the remaining alerts by the likelihood that each indicates a fault. Alerts are ranked based on the homogeneity of the populations of generated alerts, historical developer feedback in the form of suppressing false positive alerts and fixing true positive alerts, and historical, application-specific data about the alert ranking factors. The ordering of alerts generated by the adaptive ranking model is compared against baselines of randomly ordered, optimally ordered, and static analysis tool-ordered alerts in a small role-based health care application. The adaptive ranking model provides developers with 81% of the true positive alerts after investigating only 20% of the alerts, whereas an average over 50 random orderings of the same alerts finds only 22% of the true positive alerts after the same 20% are investigated.
Age Range 18 to 22 years ♦ above 22 years
Educational Use Research
Education Level UG and PG
Learning Resource Type Article
Publisher Date 2003-03-01
Publisher Place New York
Journal Crossroads (CROS)
Volume Number 14
Issue Number 1
Page Count 11
Starting Page 1
Ending Page 11
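
The abstract above describes the adaptive ranking model only at a high level. The following is a minimal, illustrative sketch of that kind of feedback-driven alert ranking, not the paper's actual model. It assumes each alert carries a set of ranking factors (here, a hypothetical alert type and enclosing method), treats fixed alerts as true positives and suppressed alerts as false positives, and orders the uninspected alerts by the average historical accuracy of their factors. All identifiers (AdaptiveRanker, the factor strings) are invented for illustration.

from collections import defaultdict

class AdaptiveRanker:
    """Feedback-driven alert ranking (illustrative sketch only).

    Each ranking factor keeps a running mean of observed outcomes:
    +1 when an alert with that factor was fixed (true positive),
    -1 when one was suppressed (false positive). Uninspected alerts
    are ranked by the mean score of their factors, highest first.
    """

    def __init__(self):
        self.score = defaultdict(float)   # per-factor mean outcome in [-1, 1]
        self.count = defaultdict(int)     # observations per factor

    def _update(self, alert, outcome):
        for factor in alert["factors"]:
            self.count[factor] += 1
            # incremental mean: score += (x - score) / n
            self.score[factor] += (outcome - self.score[factor]) / self.count[factor]

    def fix(self, alert):       # developer confirmed a real fault
        self._update(alert, +1.0)

    def suppress(self, alert):  # developer marked a false positive
        self._update(alert, -1.0)

    def rank(self, alerts):
        # Rank uninspected alerts by the mean score of their factors.
        def likelihood(alert):
            factors = alert["factors"]
            return sum(self.score[f] for f in factors) / len(factors)
        return sorted(alerts, key=likelihood, reverse=True)

if __name__ == "__main__":
    # Hypothetical alerts; the factors here are (alert type, reported method).
    inspected_tp = {"id": 1, "factors": ["NULL_DEREF", "PatientRecord.load"]}
    inspected_fp = {"id": 2, "factors": ["UNUSED_FIELD", "PatientRecord.load"]}
    remaining = [
        {"id": 3, "factors": ["UNUSED_FIELD", "AuthFilter.check"]},
        {"id": 4, "factors": ["NULL_DEREF", "AuthFilter.check"]},
    ]
    ranker = AdaptiveRanker()
    ranker.fix(inspected_tp)       # true positive feedback
    ranker.suppress(inspected_fp)  # false positive feedback
    print([a["id"] for a in ranker.rank(remaining)])  # -> [4, 3]

A running per-factor mean is one simple way to encode "historical developer feedback"; the paper's model additionally weighs the homogeneity of alert populations and historical, application-specific data about the ranking factors.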