Self-Claimed Assumptions in Deep Learning Frameworks: An Exploratory Study
Deep learning (DL) frameworks have been extensively designed, implemented, and used in software projects across many domains. However, due to a lack of knowledge or information, time pressure, complex context, and other factors, various uncertainties emerge during development, leading to assumptions being made in DL frameworks. Although not all assumptions are negative for the frameworks, being unaware of certain assumptions can result in critical problems (e.g., system vulnerabilities and failures). As a first step toward addressing such problems, there is a need to explore and understand the assumptions made in DL frameworks. To this end, we conducted an exploratory study of self-claimed assumptions (SCAs) regarding their distribution, classification, and impacts, using code comments from nine popular DL framework projects on GitHub. The results are that: (1) 3,084 SCAs are scattered across 1,775 files in the nine DL frameworks, ranging from 1,460 SCAs (TensorFlow) to 8 (Keras). (2) There are four types of SCA validity: Valid SCA, Invalid SCA, Conditional SCA, and Unknown SCA; and four types of SCAs based on their content: Configuration and Context SCA, Design SCA, Tensor and Variable SCA, and Miscellaneous SCA. (3) Both valid and invalid SCAs may have an impact within a specific scope (e.g., in a function) on the DL frameworks. Certain technical debt is induced when making SCAs. Source code is written and decisions are made based on SCAs. This is the first study to investigate SCAs in DL frameworks, and it helps researchers and practitioners gain a comprehensive understanding of the assumptions made. We also provide the first dataset of SCAs for further research and practice in this area.
Mon 21 Jun (displayed time zone: Amsterdam/Berlin)

Session: 11:00 - 12:30

- 11:00 (22m, full paper): CCMC: Code Completion with a Memory Mechanism and a Copy Mechanism. EASE 2021. Pre-print.
- 11:22 (22m, full paper): Self-Claimed Assumptions in Deep Learning Frameworks: An Exploratory Study. EASE 2021. Chen Yang (IBO Technology Co., Ltd), Peng Liang (Wuhan University), Liming Fu (Wuhan University), Zengyang Li (Central China Normal University). Pre-print; media attached.
- 11:45 (22m, full paper): How Should Developers Respond to App Reviews? Features Predicting the Success of Developer Responses. EASE 2021. Kamonphop Srisopha (University of Southern California, USA), Daniel Link, Barry Boehm (University of Southern California). Pre-print.
- 12:07 (22m, full paper): A Large-scale Study of Security Vulnerability Support on Developer Q&A Websites. EASE 2021. Triet Le (The University of Adelaide), Roland Croft, David Hin, Muhammad Ali Babar (The University of Adelaide). Pre-print.