ICSE 2021
Mon 17 May - Sat 5 June 2021

The boom of deep learning (DL) has led to a massive number of DL models being built and shared, which eases their acquisition and reuse. For a given task, testers often face multiple candidate DL models with the same functionality and must compare them to select the most suitable one for the testing context at hand. Because labeling effort is limited, testers aim to select a small yet efficient subset of samples that yields as precise a rank estimation of these models as possible.

To tackle this problem, we propose Sample Discrimination based Selection (SDS), which selects efficient samples that can discriminate between multiple models, i.e., samples whose prediction outcomes (right/wrong) help indicate the trend of model performance. To evaluate SDS, we conduct an extensive empirical study with three widely used image datasets and 80 real-world DL models. The results show that, compared with state-of-the-art baselines, SDS is an effective and efficient sample selection method for ranking multiple DL models.
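To make the idea concrete, here is a minimal sketch of discrimination-based sample selection. It is an illustrative approximation, not the authors' exact algorithm: scoring a sample by the variance of the models' agreement with a majority-vote proxy, and the labeling budget, are assumptions introduced for this example.

```python
import numpy as np

def sds_rank(predictions, labels, budget):
    """Rank models using only a small, discriminating subset of samples.

    predictions: (n_models, n_samples) array of predicted class labels
    labels: (n_samples,) ground truth; only the selected subset is 'labeled'
    budget: number of samples we can afford to label
    """
    # Proxy correctness via majority vote, since true labels are unknown
    # at selection time (an assumption in this sketch).
    majority = np.apply_along_axis(
        lambda col: np.bincount(col).argmax(), 0, predictions)
    agree = (predictions == majority)        # (n_models, n_samples), 0/1
    # A sample discriminates models when they split on it: score each
    # sample by the variance of the agreement column (maximal for an
    # even split, zero when all models behave the same).
    disc = agree.var(axis=0)
    chosen = np.argsort(disc)[::-1][:budget]  # most discriminating samples
    # Rank models by accuracy on the selected (now labeled) subset.
    sub_acc = (predictions[:, chosen] == labels[chosen]).mean(axis=1)
    return np.argsort(sub_acc)[::-1]          # model indices, best first

# Tiny usage example: 3 synthetic models with true accuracies 0.9/0.7/0.5.
rng = np.random.default_rng(0)
labels = rng.integers(0, 2, 200)
preds = np.stack([np.where(rng.random(200) < p, labels, 1 - labels)
                  for p in (0.9, 0.7, 0.5)])
print(sds_rank(preds, labels, budget=30))
```

Even with only 30 of the 200 samples labeled, the accuracy ordering recovered on the discriminating subset should mirror the models' true quality, which is the effect the paper evaluates at scale.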

Conference Day
Tue 25 May

Displayed time zone: Amsterdam, Berlin, Bern, Rome, Stockholm, Vienna

12:05 - 13:05
1.2.1. Deep Neural Networks: Validation #2 (Technical Track, Blended Sessions Room 1; mirrored +12h)
Chair(s): Grace Lewis (Carnegie Mellon Software Engineering Institute)
12:05 (20m) Paper, Technical Track
Measuring Discrimination to Boost Comparative Testing for Multiple Deep Learning Models
Linghan Meng (Nanjing University), Yanhui Li (Department of Computer Science and Technology, Nanjing University), Lin Chen (Department of Computer Science and Technology, Nanjing University), Zhi Wang (Nanjing University), Di Wu (Momenta), Yuming Zhou (Nanjing University), Baowen Xu (Nanjing University)
Pre-print · Media Attached
12:25 (20m) Paper, Technical Track
Prioritizing Test Inputs for Deep Neural Networks via Mutation Analysis
Zan Wang (College of Intelligence and Computing, Tianjin University), Hanmo You (College of Intelligence and Computing, Tianjin University), Junjie Chen (College of Intelligence and Computing, Tianjin University), Yingyi Zhang (College of Intelligence and Computing, Tianjin University), Xuyuan Dong (Information and Network Center, Tianjin University), Wenbin Zhang (Information and Network Center, Tianjin University)
Pre-print · Media Attached
12:45 (20m) Paper, Technical Track
Testing Machine Translation via Referential Transparency
Pinjia He (ETH Zurich), Clara Meister (ETH Zurich), Zhendong Su (ETH Zurich)
Pre-print · Media Attached

Conference Day
Wed 26 May


00:05 - 01:05
1.2.1. Deep Neural Networks: Validation #2 (Technical Track, Blended Sessions Room 1)

Mirror (+12h) of the Tue 25 May session; the same three papers are presented again:

00:05 (20m) Measuring Discrimination to Boost Comparative Testing for Multiple Deep Learning Models
00:25 (20m) Prioritizing Test Inputs for Deep Neural Networks via Mutation Analysis
00:45 (20m) Testing Machine Translation via Referential Transparency