ICSE 2021
Mon 17 May - Sat 5 June 2021

The reliability of software that includes a Deep Neural Network (DNN) as a component is urgently important today, given the increasing number of critical applications being deployed with DNNs. The need for reliability raises a need for rigorous testing of the safety and trustworthiness of these systems. In the last few years, a number of research efforts have focused on testing DNNs. However, the test generation techniques proposed so far lack a check to determine whether the test inputs they generate are valid, and thus they produce invalid inputs. To illustrate this situation, we examined three recent DNN testing techniques. Using deep generative model based input validation, we show that all three techniques generate a significant number of invalid test inputs. We further analyzed the test coverage achieved by the generated test inputs and showed how invalid inputs can falsely inflate test coverage metrics.
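The idea of generative model based input validation can be illustrated with a minimal sketch: fit a density model to the training distribution and reject generated test inputs that fall in low-density regions. This is not the paper's implementation; the diagonal Gaussian below is a stand-in for a learned deep generative model (e.g., a VAE density estimate), and all names and thresholds here are illustrative assumptions.

```python
import numpy as np

# Illustrative stand-in for a learned deep generative model: a diagonal
# Gaussian fitted to in-distribution "training" data.
rng = np.random.default_rng(0)
train = rng.normal(loc=0.0, scale=1.0, size=(1000, 4))
mu, sigma = train.mean(axis=0), train.std(axis=0)

def log_density(x):
    # Log-probability of x under the fitted diagonal Gaussian.
    z = (x - mu) / sigma
    return -0.5 * np.sum(z**2 + np.log(2 * np.pi * sigma**2), axis=-1)

# Validity threshold: reject the lowest-density 1% of training data.
threshold = np.quantile(log_density(train), 0.01)

def is_valid(x):
    # A test input counts as valid if the generative model assigns it
    # density at least as high as typical in-distribution data.
    return log_density(x) >= threshold

in_dist = rng.normal(loc=0.0, scale=1.0, size=(10, 4))
out_dist = rng.normal(loc=8.0, scale=1.0, size=(10, 4))  # far off-distribution
print(is_valid(in_dist).mean(), is_valid(out_dist).mean())
```

Under this check, inputs produced by perturbation-based test generators that drift off the data manifold would be flagged as invalid rather than counted toward coverage.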

To avoid including invalid inputs in testing, we propose a technique that incorporates the valid input space of the DNN model under test into the test generation process. Our technique uses a deep generative model based algorithm to generate only valid inputs. Results of our empirical studies show that our technique improves on existing approaches in the number of valid test inputs generated, the time to generate tests, and the test coverage achieved.
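The key mechanism can be sketched as follows: instead of perturbing inputs directly, sample in the latent space of a generative model and decode, so every generated input lies on the model's learned manifold by construction. This is a toy sketch, not the authors' algorithm; the linear `decode` below is a hypothetical stand-in for a trained VAE or GAN decoder.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-in for a trained generative model's decoder,
# mapping 2-D latent codes to the 4-D input space of the DNN under test.
W = rng.normal(size=(2, 4)) * 0.5

def decode(z):
    return z @ W  # a real system would use a learned deep decoder

def generate_tests(n):
    # Sample latent codes from the generative model's prior and decode.
    # Because tests are decoded from the prior, they stay on the model's
    # learned input manifold, i.e., in the valid input space.
    z = rng.normal(size=(n, 2))
    return decode(z)

tests = generate_tests(100)
print(tests.shape)
```

Test-generation search (e.g., for coverage) then operates over the latent codes rather than over raw pixels, which is what keeps every candidate input valid.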

Conference Day
Thu 27 May

Displayed time zone: Amsterdam, Berlin, Bern, Rome, Stockholm, Vienna

16:30 - 17:30
3.4.1. Deep Neural Networks: Data Selection (Technical Track / SEIP - Software Engineering in Practice / Journal-First Papers) at Blended Sessions Room 1, +12h
Chair(s): Ayse Tosun (Istanbul Technical University)
16:30
20m
Paper
Test Selection for Deep Learning Systems (Journal-First)
Journal-First Papers
Wei Ma (SnT, University of Luxembourg), Mike Papadakis (University of Luxembourg, Luxembourg), Anestis Tsakmalis (University of Luxembourg), Maxime Cordy (University of Luxembourg, Luxembourg), Yves Le Traon (University of Luxembourg, Luxembourg)
Pre-print Media Attached
16:50
20m
Paper
On the experiences of adopting automated data validation in an industrial machine learning project (SEIP)
SEIP - Software Engineering in Practice
Lucy Ellen Lwakatare (University of Helsinki, Finland), Ellinor Rånge (Ericsson), Ivica Crnkovic (Chalmers University of Technology), Jan Bosch (Chalmers University of Technology, Sweden)
Link to publication Media Attached
17:10
20m
Paper
Distribution-Aware Testing of Neural Networks Using Generative Models (Technical Track; Artifact Reusable, Artifact Available)
Technical Track
Swaroopa Dola (University of Virginia), Matthew B Dwyer (University of Virginia), Mary Lou Soffa (University of Virginia)
Pre-print Media Attached

Conference Day
Fri 28 May

04:30 - 05:30
Mirror session (+12h) of 3.4.1 with the same program as Thu 27 May.