DevOps4CPS-Testing 2021
Mon 12 - Fri 16 April 2021
co-located with ICST 2021
Venue: ICST 2021 is going virtual!
Room name: Tamandaré
Room information: No extra information available
Program

Mon 12 Apr

Displayed time zone: Brasilia, Distrito Federal, Brazil

09:00 - 09:10
Welcome and Opening (Mutation at Tamandaré)
09:00
10m
Day opening
Welcome
Mutation
Amin Alipour University of Houston, Jie M. Zhang University College London, UK
09:10 - 10:00
Keynote 1 (Mutation at Tamandaré)
Chair(s): Jie M. Zhang University College London, UK
09:10
50m
Keynote
Mutation for Compiler Testing
Mutation
Dan Hao Peking University, China
10:00 - 10:50
Session 1 (Mutation at Tamandaré)
10:00
16m
Short-paper
Efficiently Sampling Higher Order Mutants Using Causal Effect
Mutation
Saeyoon Oh Korea Advanced Institute of Science and Technology (KAIST), Seongmin Lee Korea Advanced Institute of Science and Technology, Shin Yoo Korea Advanced Institute of Science and Technology
10:16
16m
Full-paper
Inducing Subtle Mutations with Program Repair (Best Paper Award)
Mutation
Florian Schwander, Rahul Gopinath CISPA, Germany, Andreas Zeller CISPA Helmholtz Center for Information Security
10:33
16m
Short-paper
Automatic Equivalent Mutants Classification Using Abstract Syntax Tree Neural Networks
Mutation
Samuel Peacock Towson University, Lin Deng Towson University, Josh Dehlinger Towson University, Suranjan Chakraborty Towson University
11:00 - 11:50
Keynote 2 (Mutation at Tamandaré)
11:00
50m
Keynote
What It Would Take to Use Mutation Testing in Industry?
Mutation
Moritz Beller Facebook, USA
11:50 - 12:00
Award Session (Mutation at Tamandaré)
11:50
10m
Awards
Award Announcement
Mutation
Jie M. Zhang University College London, UK
12:00 - 12:30
Session 2 (Mutation at Tamandaré)
12:00
15m
Short-paper
Random Selection Might Just be Indomitable
Mutation
12:15
15m
Short-paper
MutantBench: an Equivalent Mutant Problem Comparison Framework
Mutation
Lars van Hijfte Universiteit van Amsterdam, Ana Oprescu
12:30 - 12:45
12:30
15m
Day closing
Concluding Remarks
Mutation

Fri 16 Apr

Displayed time zone: Brasilia, Distrito Federal, Brazil

09:00 - 09:10
Welcome to NEXTA (NEXTA at Tamandaré)
09:00
10m
Talk
Welcome to NEXTA
NEXTA
Sigrid Eldh, Sahar Tahvili Ericsson AB, Vahid Garousi Queen's University Belfast, Michael Felderer University of Innsbruck, Kristian Sandahl Linköping University
09:10 - 09:35
Active Machine Learning to Test Autonomous Driving (NEXTA at Tamandaré)

Keynote Speaker: Karl Meinke. Session Chair: Michael Felderer

09:10
25m
Keynote
Active Machine Learning to Test Autonomous Driving
NEXTA
Karl Meinke The Royal Institute of Technology
09:35 - 10:00
AI-based Test Automation: A Grey Literature Analysis (NEXTA at Tamandaré)

Authors: Filippo Ricca, DIBRIS, Università di Genova, Italy; Alessandro Marchetto, Independent Researcher, Italy; and Andrea Stocco, Università della Svizzera italiana (USI), Switzerland.

Abstract: This paper provides the results of a survey of the grey literature concerning the use of artificial intelligence to improve test automation practices. We surveyed more than 1,200 sources of grey literature (e.g., blogs, white papers, user manuals, StackOverflow posts) looking for highlights by professionals on how AI is adopted to aid the development and evolution of test code. Ultimately, we filtered 136 relevant documents from which we extracted a taxonomy of problems that AI aims to tackle, along with a taxonomy of AI-enabled solutions to such problems. Manual code development and automated test generation are the most cited problem and solution, respectively. The paper concludes by distilling the six most prevalent tools on the market, along with think-aloud reflections about the current and future status of artificial intelligence for test automation.

Session Chair: Michael Felderer

09:35
25m
Paper
AI-based Test Automation: A Grey Literature Analysis
NEXTA
Filippo Ricca Università di Genova
10:00 - 10:25
Flaky Mutants; Another Concern for Mutation Testing (NEXTA at Tamandaré)

Authors: Sten Vercammen, Serge Demeyer, Markus Borg and Robbe Claessens.

Abstract: Mutation testing is the state-of-the-art technique for assessing the fault detection capability of a test suite. An underlying assumption, rarely mentioned, is that the system under test behaves completely deterministically. This is rarely the case: since each mutant changes the code, it is highly likely that some introduce non-determinism. We call these flaky mutants. As they are only detected intermittently, they cause unreliable mutation testing scores, wasted developer time, possibly unfruitful tests, and a potential loss of confidence in the mutation testing technique. We want to raise awareness of this issue, as we found that these flaky mutants are easy to create and occur in real projects. We also share some thoughts on how to tackle this issue.

Session Chair: Kristian Sandahl. (An illustrative sketch of a flaky mutant follows this entry.)

10:00
25m
Keynote
Flaky Mutants; Another Concern for Mutation Testing
NEXTA
Sten Vercammen University of Antwerp, Belgium, Serge Demeyer University of Antwerp, Belgium, Markus Borg RISE Research Institutes of Sweden
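To make the notion of a flaky mutant concrete, here is a minimal Python sketch (illustrative only, not taken from the paper): a statement-deletion mutant that removes a thread join makes the unit under test depend on thread scheduling, so the test kills the mutant only intermittently.

    import threading

    def compute_total(values):
        """Sum `values` on a worker thread and return the result."""
        result = {"total": None}

        def worker():
            result["total"] = sum(values)

        t = threading.Thread(target=worker)
        t.start()
        t.join()  # a statement-deletion mutant would remove this line
        return result["total"]

    # Against the original code this test always passes. Against the mutant
    # (join removed) the worker may or may not have finished when the result
    # is read, so the test kills the mutant only intermittently: a flaky mutant.
    def test_compute_total():
        assert compute_total([1, 2, 3]) == 6

    if __name__ == "__main__":
        test_compute_total()
        print("test passed against the original code")
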
10:25 - 10:50
Using Advanced Code Analysis for Boosting Unit Test Creation (NEXTA at Tamandaré)

Authors: Miroslaw Zielinski, Parasoft Corporation, Poland, and Rix Groenboom, Parasoft Corporation, Netherlands.

Abstract: Unit testing is a popular testing technique, widespread in enterprise IT and in embedded/safety-critical software. In enterprise IT, unit testing is considered good practice and is frequently followed as an element of test-driven development. In the safety-critical world, many standards, such as ISO 26262, IEC 61508, and others, either directly or indirectly mandate unit testing. Regardless of the application area, unit testing is very time-consuming, and teams are looking for strategies to optimize their efforts. This is especially true in the safety-critical space, where demonstration of test coverage is required for certification. In this presentation, we share the results of our research into the use of advanced code analysis algorithms for augmenting the process of unit test creation. The discussion includes the automatic discovery of inputs and responses from mocked components that maximize code coverage, as well as the automated generation of test cases.

Session Chair: Kristian Sandahl. (A generic coverage-guided sketch of this idea follows this entry.)

10:25
25m
Paper
Using Advanced Code Analysis for Boosting Unit Test Creation
NEXTA
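The idea of automatically discovering inputs that maximize code coverage can be illustrated with a generic coverage-guided random search. The sketch below is only an illustration of that general idea (a toy function with hand-instrumented branches), not Parasoft's actual algorithm.

    import random

    def classify(x):
        """Toy unit under test with three branches."""
        if x < 0:
            return "negative"
        if x == 0:
            return "zero"
        return "positive"

    def branches_hit(x):
        """Return the set of branch ids exercised by input x (hand-instrumented)."""
        if x < 0:
            return {"x<0"}
        if x == 0:
            return {"x==0"}
        return {"x>0"}

    def discover_inputs(budget=1000, seed=0):
        """Randomly sample inputs, keeping those that cover new branches."""
        rng = random.Random(seed)
        covered, kept = set(), []
        for _ in range(budget):
            x = rng.randint(-10, 10)
            new = branches_hit(x) - covered
            if new:
                covered |= new
                kept.append(x)
        return kept

    # Turn the discovered inputs into simple regression-style test cases:
    # the observed output becomes the expected value.
    for x in discover_inputs():
        print(f"assert classify({x}) == {classify(x)!r}")
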
11:00 - 11:25
QRTest: Automatic Query Reformulation for Information Retrieval Based Regression Test Case Prioritization (NEXTA at Tamandaré)

Author: Maral Azizi, East Carolina University, US.

Abstract: The most effective regression testing algorithms have long running times and often require dynamic or static code analysis, making them unsuitable for the modern software development environment, where the rate of software delivery can be less than a minute. More recently, some researchers have developed information retrieval based (IR-based) techniques for prioritizing tests, such that tests that are more similar to the code changes have a higher likelihood of finding bugs. The vast majority of these techniques are based on standard term similarity calculation, which can be imprecise. One reason for the low accuracy of these techniques is that the original query is often short and therefore does not return the relevant test cases. In such cases, the query needs reformulation. The current state of research lacks methods to increase the quality of the query in the regression testing domain. Our research aims at addressing this problem, and we conjecture that enhancing the quality of the queries can improve the performance of IR-based regression test case prioritization (RTP). Our empirical evaluation with six open source programs shows that our approach improves the accuracy of IR-based RTP and increases the regression fault detection rate compared to common prioritization techniques.

Session Chair: Sahar Tahvili. (An illustrative sketch of IR-based prioritization with query reformulation follows this entry.)

11:00
25m
Talk
QRTest: Automatic Query Reformulation for Information Retrieval Based Regression Test Case Prioritization
NEXTA
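As a rough illustration of IR-based test prioritization with query reformulation (a hypothetical sketch, not the QRTest technique itself; the test names and terms are made up), the code below ranks test cases by cosine similarity to a change query and then expands the query with terms from the top-ranked test, in the spirit of pseudo-relevance feedback.

    import math
    from collections import Counter

    def cosine(a: Counter, b: Counter) -> float:
        """Cosine similarity between two bag-of-words vectors."""
        dot = sum(a[t] * b[t] for t in a)
        na = math.sqrt(sum(v * v for v in a.values()))
        nb = math.sqrt(sum(v * v for v in b.values()))
        return dot / (na * nb) if na and nb else 0.0

    def rank(query_terms, tests):
        """Rank test cases by similarity of their terms to the query."""
        q = Counter(query_terms)
        return sorted(tests.items(),
                      key=lambda kv: cosine(q, Counter(kv[1])),
                      reverse=True)

    # Hypothetical change query (terms from a code diff) and test-case documents.
    query = ["payment", "discount"]
    tests = {
        "test_checkout":  ["cart", "payment", "total", "discount"],
        "test_login":     ["user", "password", "session"],
        "test_inventory": ["stock", "warehouse", "item"],
    }

    ranking = rank(query, tests)
    print("initial ranking:", [name for name, _ in ranking])

    # Query reformulation: expand the (often too short) query with terms from
    # the top-ranked test, then re-rank -- a simple pseudo-relevance-feedback step.
    expanded = query + tests[ranking[0][0]]
    print("after reformulation:", [name for name, _ in rank(expanded, tests)])
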
11:25 - 11:50
An Empirical Study of Parallelizing Test Execution Using CUDA Unified Memory and OpenMP GPU Offloading (NEXTA at Tamandaré)

Authors: Taghreed Bagies and Ali Jannesari, Iowa State University, US.

Abstract: The execution of software testing is costly and time-consuming. To accelerate test execution, researchers have applied several methods to run testing in parallel. One method of parallelizing test execution is to use a GPU to distribute test case inputs among several threads running in parallel. In this paper, we investigate three programming models, CUDA Unified Memory, CUDA Non-Unified Memory, and OpenMP GPU offloading, to parallelize test execution and discuss the challenges of using these programming models. We use eleven benchmarks and parallelize their test suites with these models. We evaluate their performance in terms of execution time, analyze the results, and report the limitations of using these programming models.

Session Chair: Vahid Garousi. (A CPU-side sketch of the input-distribution idea follows this entry.)

11:25
25m
Paper
An Empirical Study of Parallelizing Test Execution Using CUDA Unified Memory and OpenMP GPU Offloading
NEXTA
Taghreed Bagies King Abdulaziz University
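The core idea of distributing test case inputs among many parallel workers can be sketched on the CPU with Python's multiprocessing. This is only an analogue of the idea; the paper itself targets CUDA Unified Memory, CUDA Non-Unified Memory, and OpenMP GPU offloading, not CPU processes, and the toy function and cases here are invented for the example.

    from multiprocessing import Pool

    def square(x):
        """Toy unit under test."""
        return x * x

    def run_case(case):
        """Execute one (input, expected) test case and report pass/fail."""
        x, expected = case
        return (x, square(x) == expected)

    if __name__ == "__main__":
        # Each test input is handled by a separate worker, mirroring the idea
        # of assigning one GPU thread per test case input.
        cases = [(1, 1), (2, 4), (3, 9), (4, 17)]   # last case fails on purpose
        with Pool(processes=4) as pool:
            for x, ok in pool.map(run_case, cases):
                print(f"input {x}: {'PASS' if ok else 'FAIL'}")
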
11:50 - 12:20
Advancing Test Automation Using Artificial Intelligence (AI) (NEXTA at Tamandaré)

Keynote speaker: Jeremy S. Bradbury, PhD, Associate Professor, Computer Science, and Associate Dean, School of Graduate and Postdoctoral Studies, Ontario Tech University.

Abstract: In recent years, software testing automation has been enhanced through the use of Artificial Intelligence (AI) techniques, including genetic algorithms, machine learning, and deep learning. The use cases for AI in test automation range from providing recommendations to the complete automation of software testing activities. To demonstrate the breadth of application, I will present several recent examples of how AI can be leveraged to support automated testing in rapid release cycles. Furthermore, I will discuss my own successes and failures in using AI to advance test automation, as well as share the lessons I have learned.

Session Chair: Sahar Tahvili

11:50
30m
Keynote
Advancing Test Automation Using Artificial Intelligence (AI)
NEXTA
Jeremy Bradbury Ontario Tech University
12:20 - 12:30
Closing (NEXTA at Tamandaré)
12:20
10m
Talk
Closing
NEXTA