A Human Study of Comprehension and Code Summarization
Software developers spend a great deal of time reading and understanding code that is poorly documented, written by other developers, or developed using differing styles. During the past decade, researchers have investigated techniques for automatically documenting code to improve comprehensibility. In particular, recent advances in deep learning have led to sophisticated summary generation techniques that convert functions or methods to simple English strings that succinctly describe that code’s behavior. However, automatic summarization techniques are assessed using internal metrics such as BLEU scores, which measure natural language properties of translation models, or ROUGE scores, which measure overlap with human-written text. Unfortunately, these metrics do not necessarily capture how machine-generated code summaries actually affect human comprehension or developer productivity. We conducted a human study involving both university students and professional developers (n = 45). Participants reviewed Java methods and summaries and answered established program comprehension questions. In addition, participants completed coding tasks given summaries as specifications. Critically, the experiment controlled the source of the summaries: for a given method, some participants were shown human-written text and some were shown machine-generated text. We found that participants performed significantly better (p = 0.029) with human-written summaries than with machine-generated summaries. However, we found no evidence that participants perceive human- and machine-generated summaries to have different qualities. In addition, participants’ performance showed no correlation with the BLEU and ROUGE scores often used to assess the quality of machine-generated summaries. These results suggest a need for revised metrics to assess and guide automatic summarization techniques.
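For context, BLEU and ROUGE score a candidate summary by token overlap with a reference text rather than by any measure of reader comprehension. The sketch below is illustrative only: the two example summaries are hypothetical, it assumes NLTK's sentence-level BLEU implementation, and ROUGE-1 recall is hand-rolled for clarity rather than taken from the metrics used in the study.

```python
# Illustrative sketch: scoring a hypothetical machine-generated summary
# against a hypothetical human-written reference with BLEU and ROUGE-1.
# Assumes the nltk package is installed; the example summaries are made up.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

human_summary = "returns the index of the first occurrence of the given value"
machine_summary = "returns the index of the specified element in the list"

reference = human_summary.split()
candidate = machine_summary.split()

# BLEU: smoothed modified n-gram precision of the candidate against the reference.
bleu = sentence_bleu([reference], candidate,
                     smoothing_function=SmoothingFunction().method1)

# ROUGE-1 recall: fraction of reference unigrams recovered by the candidate.
overlap = sum(min(reference.count(tok), candidate.count(tok))
              for tok in set(reference))
rouge1_recall = overlap / len(reference)

print(f"BLEU-4:  {bleu:.3f}")
print(f"ROUGE-1: {rouge1_recall:.3f}")
```

Because both scores reward surface overlap with the reference wording, two summaries can describe the same behavior yet receive very different scores, which is consistent with the lack of correlation between these metrics and participant performance reported above.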
Tue 14 Jul (times displayed in UTC)

01:30 - 02:30: Session 4: Summarization (Research / ERA at ICPC)
Chair(s): Venera Arnaoudova (Washington State University)

01:30 - 01:45: Improved Code Summarization via a Graph Neural Network (Research)
Alexander LeClair (University of Notre Dame), Sakib Haque (University of Notre Dame), Lingfei Wu (IBM Research), Collin McMillan (University of Notre Dame)

01:45 - 02:00: BugSum: Deep Context Understanding for Bug Report Summarization (Research)
Haoran Liu (National University of Defense Technology), Yue Yu (National University of Defense Technology), Shanshan Li (National University of Defense Technology), Yong Guo (National University of Defense Technology), Deze Wang (National University of Defense Technology), Xiaoguang Mao (National University of Defense Technology)

02:00 - 02:15: A Human Study of Comprehension and Code Summarization (Research)
Sean Stapleton (University of Michigan), Yashmeet Gambhir (University of Michigan), Alexander LeClair (University of Notre Dame), Zachary Eberhart, Westley Weimer (University of Michigan), Kevin Leach (University of Michigan), Yu Huang (University of Michigan)

02:15 - 02:30: Linguistic Documentation of Software History (ERA)