Changing a software application with many build-time configuration settings may introduce unexpected side effects. For example, a change intended to be specific to a platform (e.g., Windows) or product configuration (e.g., community editions) might impact other platforms or configurations. Moreover, a change intended to apply to a set of platforms or configurations may be unintentionally limited to a subset. Indeed, understanding the exposure of source code changes is an important risk mitigation step in change-based development approaches. In this paper, we present DiPiDi, a new approach to assess the exposure of source code changes under different build-time configuration settings by statically analyzing build specifications. To evaluate our approach, we produce a prototype implementation of DiPiDi for the CMake build system. We measure the effectiveness and efficiency of developers when performing five tasks in which they must identify the deliverable(s) and conditions under which a source code change will propagate. We assign participants to one of three groups: without explicit tool support, supported by existing impact analysis tools, and supported by DiPiDi. While our study does not have the statistical power to make generalized quantitative claims, we manually analyze the full distribution of our study's results and show that DiPiDi results in a net benefit for its users. Through our experimental evaluation, we show that DiPiDi yields an average improvement of 36 percentage points in F1-score when identifying impacted deliverables and a reduction of 0.62 units of distance when ranking impacted patches. Furthermore, DiPiDi results in a 42% average task time reduction for our participants when compared to a competing impact analysis approach. DiPiDi's improvements to both effectiveness and efficiency are especially prevalent in complex programs with many compile-time configurations.
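The paper's DiPiDi implementation is not reproduced here, but the core idea the abstract describes (statically mapping a changed source file to the deliverables that include it, and to the build-time conditions guarding each one) can be illustrated with a toy sketch. The CMakeLists fragment and the helper `impacted_deliverables` below are hypothetical examples, not DiPiDi's actual code, and handle only flat `if()`/`endif()` guards.

```python
import re

def impacted_deliverables(cmake_text, changed_file):
    """Toy static analysis of a CMake specification: return a mapping from
    each deliverable (executable or library) that includes changed_file to
    the condition under which it is built. Real build logic (else(),
    variables, nested includes) is out of scope for this sketch."""
    impacted = {}
    condition_stack = []  # conditions of the enclosing if() blocks
    for raw in cmake_text.splitlines():
        line = raw.strip()
        m = re.match(r"if\s*\((.+)\)\s*$", line)
        if m:
            condition_stack.append(m.group(1))
            continue
        if re.match(r"endif\s*\(", line):
            condition_stack.pop()
            continue
        m = re.match(r"add_(?:executable|library)\s*\(\s*(\w+)\s+(.+)\)\s*$", line)
        if m and changed_file in m.group(2).split():
            impacted[m.group(1)] = " AND ".join(condition_stack) or "always"
    return impacted

# Hypothetical build specification with a platform-guarded deliverable.
CMAKE = """
add_executable(core main.c util.c)
if(WIN32)
  add_executable(win_tool tool.c util.c)
endif()
add_library(netlib net.c)
"""

print(impacted_deliverables(CMAKE, "util.c"))
# -> {'core': 'always', 'win_tool': 'WIN32'}
```

A change to `util.c` propagates to `core` unconditionally but reaches `win_tool` only under the `WIN32` configuration, which is exactly the kind of exposure question the five study tasks ask participants to answer.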
Wed 17 Apr | Displayed time zone: Lisbon
14:00 - 15:30 | Evolution 1 | Research Track / Journal-first Papers / Demonstrations / Industry Challenge Track | at Amália Rodrigues | Chair(s): Jonathan Sillito (Brigham Young University)
14:00 | 15m Talk | Large Language Models are Few-Shot Summarizers: Multi-Intent Comment Generation via In-Context Learning | Research Track | Mingyang Geng (National University of Defense Technology), Shangwen Wang (National University of Defense Technology), Dezun Dong (NUDT), Haotian Wang (National University of Defense Technology), Ge Li (Peking University), Zhi Jin (Peking University), Xiaoguang Mao (National University of Defense Technology), Liao Xiangke (National University of Defense Technology) | DOI | Pre-print
14:15 | 15m Talk | Block-based Programming for Two-Armed Robots: A Comparative Study | Research Track | Felipe Fronchetti (Virginia Commonwealth University), Nico Ritschel (University of British Columbia), Logan Schorr (Virginia Commonwealth University), Chandler Barfield (Virginia Commonwealth University), Gabriella Chang (Virginia Commonwealth University), Rodrigo Spinola (Virginia Commonwealth University), Reid Holmes (University of British Columbia), David C. Shepherd (Louisiana State University) | DOI | Pre-print | Media Attached
14:30 | 15m Talk | Exploiting Library Vulnerability via Migration Based Automating Test Generation | Research Track | Zirui Chen, Xing Hu (Zhejiang University), Xin Xia (Huawei Technologies), Yi Gao (Zhejiang University), Tongtong Xu (Huawei), David Lo (Singapore Management University), Xiaohu Yang (Zhejiang University)
14:45 | 15m Talk | ReposVul: A Repository-Level High-Quality Vulnerability Dataset | Industry Challenge Track | Xinchen Wang (Harbin Institute of Technology), Ruida Hu (Harbin Institute of Technology, Shenzhen), Cuiyun Gao (Harbin Institute of Technology), Xin-Cheng Wen (Harbin Institute of Technology), Yujia Chen (Harbin Institute of Technology, Shenzhen), Qing Liao (Harbin Institute of Technology) | Pre-print | File Attached
15:00 | 7m Talk | JOG: Java JIT Peephole Optimizations and Tests from Patterns | Demonstrations | Zhiqiang Zang (The University of Texas at Austin), Aditya Thimmaiah (The University of Texas at Austin), Milos Gligoric (The University of Texas at Austin) | DOI | Pre-print
15:07 | 7m Talk | Predicting the Change Impact of Resolving Defects by Leveraging the Topics of Issue Reports in Open Source Software Systems | Journal-first Papers | Maram Assi (Queen's University), Safwat Hassan (University of Toronto, Canada), Stefanos Georgiou (Queen's University), Ying Zou (Queen's University, Kingston, Ontario)
15:14 | 7m Talk | Assessing the Exposure of Software Changes | Journal-first Papers | Mehran Meidani (University of Waterloo), Maxime Lamothe (Polytechnique Montreal), Shane McIntosh (University of Waterloo) | Link to publication | Pre-print
15:21 | 7m Talk | Responding to change over time: A longitudinal case study on changes in coordination mechanisms in large-scale agile | Journal-first Papers | Marthe Berntzen (University of Oslo), Viktoria Stray (University of Oslo), Nils Brede Moe, Rashina Hoda (Monash University)