ICSE 2020
Wed 24 June - Thu 16 July 2020

AST 2020, the first virtual conference on Automation of Software Test, is fast approaching.


The event will take place on July 15 and 16, between 07:30 and 17:00 UTC; see the program below for details.
We look forward to “seeing you” soon in the AST virtual space!

Below we provide a few guidelines to help you prepare before the conference and take part in the live sessions.
In line with the informal and interactive atmosphere typical of AST events, we invite you to be active during the live keynote and Q&A sessions: ask our speakers questions, and offer comments and feedback, including about your own research and experiences.

PRIOR TO THE SESSIONS:
Video presentations by the authors of all papers in the program have been uploaded to public folders in a Dropbox space.

After registering, you should have received access to the ICSE online proceedings, including those for AST.
If you registered but have not received the link to access the online proceedings, please contact the registration company at reg@icse2020.org.

DURING THE SESSIONS:
The live sessions will be hosted on Zoom.
Audience questions can be asked via the Zoom chat or by voice, following the session chair's instructions.
We will have two types of live sessions: two keynote presentations and three Q&A sessions.
Keynote talks will be given live, but will be recorded and made available for later viewing.
The Q&A sessions will start with a brief recap of each paper's contributions by its authors, after which the session chairs will moderate the live discussion.

Please take care to convert the session times to your own time zone!

WHERE ARE ACCESS LINKS?
We have already emailed the Dropbox and Zoom access links to all attendees who registered before July 8th, and will send them to later registrants as soon as we receive their data from the ICSE registration team. If you registered before July 8th and did not receive the email from the General Chair, or registered later and cannot wait to listen to the talks, you are welcome to email a copy of your registration receipt to Antonia; we will reply with the needed information so you can start enjoying the AST presentations right away.

NEED HELP?
If you get lost, have problems accessing the Dropbox videos, or are missing information we forgot to distribute, please ask.
For AST matters, email the AST General Chair; for issues with registration or proceedings, email the ICSE team at reg@icse2020.org.

SEE YOU SOON AT AST 2020!!!

Dates

Wed 15 Jul

Displayed time zone: (UTC) Coordinated Universal Time

07:30 - 08:00
Conference Opening and Participants Welcome (AST)
Chair(s): Antonia Bertolino CNR-ISTI
07:30
30m
Day opening
Conference Opening and Participants Welcome
AST
08:00 - 09:00
Live Session 1 - Keynote (AST)
Chair(s): Aditya P Mathur Purdue University (USA) and Singapore University of Technology and Design (Singapore)
08:00
60m
Keynote
CROWN 2.0: Automated Test Generation for Industrial Embedded Software - 17 Years Journey from Research To Product
AST
Moonzoo Kim KAIST / VPlusLab Inc.
14:00 - 15:00
Live Session 2 (AST)
Chair(s): Fevzi Belli Paderborn University, Germany
14:00
10m
Research paper
Exploratory Datamorphic Testing of Classification Applications
AST
Hong Zhu Oxford Brookes University, Ian Bayley Oxford Brookes University
14:10
10m
Research paper
Algorithm or Representation? An Empirical Study on How SAPIENZ Achieves Coverage
AST
Iván Arcuschin Moreno University of Buenos Aires, Argentina, Juan Pablo Galeotti University of Buenos Aires, Diego Garbervetsky University of Buenos Aires and CONICET, Argentina
Pre-print
14:20
10m
Research paper
Automatic Ex-Vivo Regression Testing of Microservices
AST
Luca Gazzola University of Milano-Bicocca, Maayan Goldstein Nokia Bell Labs, Israel, Leonardo Mariani University of Milano-Bicocca, Itai Segall Nokia Bell Labs, Luca Ussi University of Milano-Bicocca, Italy
File Attached
14:30
10m
Research paper
Validating Test Case Migration via Mutation Analysis
AST
Ivan Jovanovikj Paderborn University, Enes Yigitbas University of Paderborn, Germany, Achyuth Nagaraj Paderborn University, Stefan Sauer Paderborn University, Gregor Engels Paderborn University
Pre-print
14:40
10m
Short-paper
Automated Analysis of Flakiness-mitigating Delays
AST
Jean Malm Mälardalen University, Adnan Causevic Mälardalen University, Björn Lisper Mälardalen University, Sigrid Eldh Ericsson, Sweden
14:50
10m
Short-paper
The Power of String Solving: Simplicity of Comparison
AST
Mitja Kulczynski Kiel University, Florin Manea University of Göttingen, Dirk Nowotka Kiel University, Danny Bøgsted Poulsen Aalborg University
16:00 - 17:00
Live Session 3 (AST)
Chair(s): Hong Zhu Oxford Brookes University
16:00
10m
Research paper
Testing Apps With Real World Inputs
AST
Tanapuch Wanwarang CISPA Helmholtz Center for Information Security, Nataniel Borges Jr. CISPA Helmholtz Center for Information Security, Leon Bettscheider CISPA Helmholtz Center for Information Security, Andreas Zeller CISPA Helmholtz Center for Information Security
Pre-print
16:10
10m
Research paper
A Delta-Debugging Approach to Assessing the Resilience of Actor Programs through Run-time Test Perturbations
AST
Jonas De Bleser Software Languages Lab, Vrije Universiteit Brussel, Dario Di Nucci Tilburg University, Coen De Roover Vrije Universiteit Brussel
Pre-print
16:20
10m
Short-paper
Muteria: An Extensible and Flexible Multi-Criteria Software Testing Framework
AST
Thierry Titcheu Chekam University of Luxembourg (SnT), Mike Papadakis University of Luxembourg, Yves Le Traon University of Luxembourg
File Attached
16:30
10m
Industry talk
Difference Grouping and Test Suite Evaluation: Lessons from Automated Differential Testing for Adobe Analytics
AST
Darryl Jarman Adobe, Scott Hunt Adobe, Jeffrey Berry Adobe, Inc., Dave Towey University of Nottingham Ningbo China
16:40
10m
Industry talk
Automatic Prevention of Accidents in Production
AST
Chang-Seo Park Google LLC
16:50
10m
Industry talk
The Effectiveness of Client-side JavaScript Testing
AST
Jonny Moon Adobe, Inc., Brian Farnsworth Adobe, Inc., Riley Smith Adobe, Inc.

Thu 16 Jul

Displayed time zone: (UTC) Coordinated Universal Time

08:00 - 09:00
Live Session 4 (AST)
Chair(s): Shin Hong Handong Global University
08:00
12m
Research paper
BlockRace: A Big Data Approach to Dynamic Block-based Data Race Detection for Multithreaded Programs
AST
Xiupei Mei City University of Hong Kong, Zhengyuan Wei City University of Hong Kong, Hong Kong, Hao Zhang City University of Hong Kong, Wing-Kwong Chan City University of Hong Kong, Hong Kong
08:12
12m
Research paper
Hybrid Methods for Reducing Database Schema Test Suites: Experimental Insights from Computational and Human Studies
AST
Abdullah Alsharif Saudi Electronic University, Gregory Kapfhammer Allegheny College, USA, Phil McMinn University of Sheffield
08:24
12m
Short-paper
A Quantitative Comparison of Coverage-based Greybox Fuzzers
AST
Natsuki Tsuzuki Nagoya University, Norihiro Yoshida Nagoya University, Koji Toda Fukuoka Institute of Technology, Kenji Fujiwara National Institute of Technology, Toyota College, Ryota Yamamoto Nagoya University, Hiroaki Takada Nagoya University
Link to publication Media Attached
08:36
12m
Short-paper
Fastbot: A Multi-Agent Model-Based Test Generation System
AST
Tianqin Cai Bytedance Network Technology, Zhao Zhang Bytedance Network Technology, Ping Yang Bytedance Network Technology
08:48
12m
Industry talk
AI-Driven Conversational Bot Test Automation Using Industry Specific Data Cartridges
AST
Muralidhar Yalla Accenture, Asha Sunil Accenture Technologies, Bangalore, India
14:00 - 15:00
Live Session 5 - Keynote (AST)
Chair(s): Antonia Bertolino CNR-ISTI
14:00
60m
Keynote
Combating Flaky Tests
AST
Darko Marinov University of Illinois at Urbana-Champaign
15:00 - 15:15
Conference Closure and Invitation to AST 2021 (AST)
Chair(s): Antonia Bertolino CNR-ISTI
15:00
15m
Day closing
Conference Closure and Invitation to AST 2021
AST

Call for Papers

The increasing complexity, pervasiveness, and interconnection of software systems on the one hand, and ever-shrinking development cycles and time-to-market on the other, make the automation of software test (AST) more urgent today than ever. Despite significant achievements in both theory and practice, AST remains a challenging research area.

Conference Theme

The AST 2020 conference theme is “Who tests the tests?”.

In software test automation, test code is written and maintained. It is generally accepted that disciplined procedures and quality standards should be applied in application development and coding.

However, we somehow do not demand the same rigour for the test code developed to test those applications. Several recent empirical studies point out how tests are affected by many problems: bugs, unjustified assumptions, hidden dependencies on other tests or on the environment, flakiness, and performance issues. Such problems reduce test effectiveness and raise false alarms in regression testing that increase testing costs. The problem of unreliable test code is exacerbated in modern continuous integration and DevOps practices, which demand fast, fully automated re-execution of test suites.
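
As a minimal illustration of one such problem (our own example, not taken from the cited studies), consider two unit tests with a hidden dependency through shared state: the outcome depends on the order in which the test runner executes them.

    import unittest

    # Module-level state shared between tests: a hidden dependency.
    CACHE = {}

    class TestWrite(unittest.TestCase):
        def test_put(self):
            CACHE["user"] = "alice"
            self.assertEqual(CACHE["user"], "alice")

    class TestRead(unittest.TestCase):
        def test_get(self):
            # Passes only if test_put has already run and populated CACHE;
            # fails when this test runs first or in isolation.
            self.assertEqual(CACHE.get("user"), "alice")

    if __name__ == "__main__":
        unittest.main()

With unittest's default alphabetical ordering, TestRead runs before TestWrite and fails; renaming the classes so that the writer sorts first makes the whole suite pass, even though nothing in the code under test changed.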

We invite contributions that focus on: i) getting a better understanding of the dimensions and characteristics of problems with unreliable, low-quality test code; ii) automatically identifying test code smells or bugs; iii) providing solutions to prevent test bugs and test flakiness; and iv) improving the quality of test code.

Topics of Interest

Submissions on the AST 2020 theme are especially encouraged, but papers on other topics relevant to the automation of software test are also welcome.

Topics of interest include, but are not limited to, the following:

  • Test automation of large, complex systems
  • Metrics for testing - test efficiency, test coverage
  • Tools for model-based V&V
  • Test-driven development
  • Standardization of test tools
  • Test coverage metrics and criteria
  • Product line testing
  • Formal methods and theories for testing and test automation
  • Test case generation based on formal and semi-formal models
  • Testing with software usage models
  • Testing of reactive and object-oriented systems
  • Software simulation by models, forecasts of behavior and properties
  • Application of model checking in testing
  • Tools for security specification, models, protocols, testing and evaluation
  • Theoretical foundations of test automation
  • Models as test oracles; test validation with models
  • Testing anomaly detectors
  • Testing cyber physical systems

We are interested in the following aspects related to AST:

  1. Problem identification. Analysis and specification of requirements for AST, and elicitation of problems that hamper wider adoption of AST
  2. Methodology. Novel methods and approaches for AST in the context of up-to-date software development methodologies
  3. Technology. Automation of various test techniques and methods for test-related activities, as well as for testing various types of software
  4. Tools and Environments. Issues and solutions in the development, operation, maintenance and evolution of tools and environments for AST, and their integration with other types of tools and runtime support platforms
  5. Empirical Studies, Experience Reports, and Industrial Contributions. Real experiences in using automated testing techniques, methods, and tools in industry
  6. Visions of the future. Foresight and thought-provoking ideas for AST that can inspire new powerful research trends.

Submission

Three types of submissions are invited:

  • Regular Papers (up to 10 pages)
    • Research Paper
    • Industrial Case Study
  • Short Papers (up to 4 pages)
    • Research Paper
    • Industrial Case Study
    • Doctoral Student Research
  • Industrial Abstracts (up to 2 pages)

Regular papers include both Research papers that present research in the area of software test automation, and Industrial Case Studies that report on practical applications of test automation.

Regular papers must not exceed 10 pages for all materials (including the main text, appendices, figures, tables, and references).

Short papers also include both Research papers and Industrial Case Studies.

Short papers must not exceed 4 pages for all materials.

Doctoral students working on software testing are encouraged to submit their work as short papers. AST will hold a dedicated session that brings together doctoral students and the experts assigned to each paper, to discuss their research in a constructive and international atmosphere and to help them prepare for their defense exams. The first author of a submission must be the doctoral student and the second author the advisor. Authors of selected submissions will be invited to give a brief presentation, followed by a constructive discussion, in the session dedicated to doctoral students.

Industrial abstracts are specifically conceived to promote industrial participation: we require the first author of such papers to come from industry. Authors of accepted abstracts are invited to give a talk of the same length, and within the same sessions, as regular papers. Industrial abstracts must not exceed 2 pages for all materials.

The submission website is: https://easychair.org/conferences/?conf=ast2020

Papers must follow the ACM formatting guidelines (https://www.acm.org/publications/proceedings-template) for both LaTeX and Word users. LaTeX users must use the provided acmart.cls and ACM-Reference-Format.bst without modification, enable the conference format in the preamble of the document (i.e., \documentclass[sigconf,review]{acmart}), and use the ACM reference format for the bibliography (i.e., \bibliographystyle{ACM-Reference-Format}).
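
For LaTeX users, a minimal skeleton that follows these instructions might look as follows (the title, author, institution, and bibliography file name are placeholders):

    % Minimal sketch of a submission set up per the instructions above.
    \documentclass[sigconf,review]{acmart}

    \title{Your Paper Title}           % placeholder
    \author{First Author}              % placeholder
    \affiliation{%
      \institution{Your Institution}   % placeholder
      \country{Your Country}}

    \begin{document}
    \maketitle

    % Paper body goes here.

    \bibliographystyle{ACM-Reference-Format}
    \bibliography{references}          % references.bib is a placeholder

    \end{document}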

Purchase of additional pages in the proceedings is not allowed.

Submissions must be unpublished original work and must not be under review or submitted elsewhere while under consideration for AST 2020.

The accepted regular and short papers, case studies, and industrial abstracts will be published in the ICSE 2020 Co-located Event Proceedings and included in the IEEE and ACM Digital Libraries. Authors of accepted papers are required to register and present their accepted paper at the conference in order for the paper to be included in the proceedings and the Digital Libraries.

The official publication date is the date the proceedings are made available in the ACM or IEEE Digital Libraries.

This date may be up to two weeks prior to the first day of ICSE 2020. The official publication date affects the deadline for any patent filings related to published work.

Combating Flaky Tests

Abstract. Testing is the most common approach in practice to check software. Regression testing checks software changes. A key challenge for regression tests is to detect software bugs and fail when a change introduces a bug, signaling to the developer to fix it. However, an emerging challenge is that the tests must also not fail when there is no bug in the change. Unfortunately, some tests, called flaky, can non-deterministically pass or fail on the same software, even when it has no bug. Such flaky tests give false alarms to developers about the existence of bugs. A developer may end up wasting time trying to fix bugs not relevant to the recent changes the developer made. I will present some work done by my group and our collaborators to alleviate the problem of flaky tests.
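
As a minimal, hypothetical illustration (ours, not from the talk), here is a test that is flaky because it races against a background thread, not because of any bug in the code under test:

    import threading
    import unittest

    class Counter:
        """Code under test: increments its value on a background thread."""
        def __init__(self):
            self.value = 0
            self.thread = None

        def increment_async(self):
            self.thread = threading.Thread(target=self._inc)
            self.thread.start()

        def _inc(self):
            self.value += 1

    class FlakyCounterTest(unittest.TestCase):
        def test_increment(self):
            counter = Counter()
            counter.increment_async()
            # Flaky: asserts before the background thread is guaranteed to
            # finish. It usually passes on a fast machine but can fail under
            # load -- with no bug in Counter itself. Joining the thread
            # before asserting would make the test deterministic.
            self.assertEqual(counter.value, 1)

    if __name__ == "__main__":
        unittest.main()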

Speaker: Darko Marinov (Department of Computer Science, University of Illinois at Urbana-Champaign, USA)

Short bio: Darko Marinov is a Professor in the Department of Computer Science at the University of Illinois at Urbana-Champaign. His main research interests are in Software Engineering, especially software testing. He has a lot of fun finding and preventing software bugs.

He has published over 90 conference papers, winning three test-of-time awards (two ACM SIGSOFT Impact Paper Awards, in 2012 and 2019, and one ASE Most Influential Paper Award, in 2015) and seven further paper awards (six ACM SIGSOFT Distinguished Paper Awards and one CHI Best Paper Award, in 2017).

His work has been supported by Facebook, Google, Huawei, IBM, Intel, Microsoft, NSF, Qualcomm, and Samsung. He served as the PC (Co-)Chair for ISSTA 2014, ICST 2015, ASE 2019, and ICSE 2020.

More info is on his web page: http://mir.cs.illinois.edu/marinov


CROWN 2.0: Automated Test Generation for Industrial Embedded Software - 17 Years Journey from Research To Product

Abstract. Automated software testing techniques such as software model checking and concolic testing have been popular in research communities for the last few decades. However, such advanced techniques have not yet been successfully applied to industrial software. For fruitful technology transfer from academia to industry, we have to build a bridge over obstacles such as the lack of trained field engineers, the hidden (and non-negligible) costs of applying automated techniques, the different levels of software quality pursued by different industry sectors, and so on. In this talk, I will discuss lessons learned from my 17 years' experience of applying automated software testing techniques to industrial embedded software such as Samsung smartphones, LG home appliances, and Hyundai automotive software. I will also share my experience of starting an automated software testing company, V+Lab, which develops the automated test generation tool CROWN 2.0, targeting safety-critical systems such as automotive vehicles, airplanes, and defense systems.

Speaker: Moonzoo Kim (SW Testing & Verification Group, KAIST, Daejeon, South Korea)

Short bio: Moonzoo Kim is an associate professor in the School of Computing at KAIST and the CEO of V+Lab. His research focuses on automated software testing and debugging techniques.

He has applied automated software testing techniques (e.g., concolic testing and fuzzing) to real-world industrial projects, detecting critical bugs in Samsung flash memory and smartphones, LG home appliances, Hyundai automobiles, and more. Based on this success with industrial projects, he founded V+Lab (https://vpluslab.kr), which develops automated test generation tools for safety-critical domains (e.g., automobiles, airplanes, and defense systems).

He has actively served the research community as a conference organizing committee member (e.g., ICSE SEIP 2020 chair, ISSTA workshop 2019 co-chair, ICSE New Faculty Symp. 2016 co-chair, ASE 2015 publication chair) and as an editorial board member for STVR and JCSE. He was an invited keynote speaker at FACS 2018, and received a Test of Time Award at Runtime Verification 2019 and a Best Paper Award at ICST 2018.

Regular Papers

Testing Apps With Real World Inputs

Tanapuch Wanwarang (CISPA Helmholtz Center for Information Security), Nataniel Borges Jr. (CISPA Helmholtz Center for Information Security), Leon Bettscheider (CISPA Helmholtz Center for Information Security) and Andreas Zeller (CISPA Helmholtz Center for Information Security)

Automatic Ex-Vivo Regression Testing of Microservices

Luca Gazzola (University of Milano – Bicocca), Maayan Goldstein (Nokia Bell Labs), Leonardo Mariani (University of Milano – Bicocca), Itai Segall (Bell Labs USA) and Luca Ussi (University of Milano – Bicocca)

A Delta-Debugging Approach to Assessing the Resilience of Actor Programs through Run-time Test Perturbations

Jonas De Bleser (Vrije Universiteit Brussel (VUB)), Dario Di Nucci (Tilburg University – JADS) and Coen De Roover (Vrije Universiteit Brussel)

Validating Test Case Migration via Mutation Analysis

Ivan Jovanovikj (Paderborn University), Achyuth Nagaraj (Paderborn University), Enes Yigitbas (Paderborn University), Anthony Anjorin (Paderborn University), Stefan Sauer (Paderborn University) and Gregor Engels (Paderborn University)

Hybrid Methods for Reducing Database Schema Test Suites: Experimental Insights from Computational and Human Studies

Abdullah Alsharif (The University of Sheffield), Gregory Kapfhammer (Department of Computer Science, Allegheny College) and Phil McMinn (The University of Sheffield)

Exploratory Datamorphic Testing of Classification Applications

Hong Zhu (Oxford Brookes University) and Ian Bayley (Oxford Brookes University)

Algorithm or Representation? An Empirical Study on How SAPIENZ Achieves Coverage

Iván Arcuschin Moreno (University of Buenos Aires), Juan Pablo Galeotti (University of Buenos Aires) and Diego Garbervetsky (University of Buenos Aires)

BlockRace: A Big Data Approach to Dynamic Block-based Data Race Detection for Multithreaded Programs

Xiupei Mei (City University of Hong Kong), Zhengyuan Wei (City University of Hong Kong), Hao Zhang (City University of Hong Kong) and W. K. Chan (City University of Hong Kong)

Short Papers

Automated Analysis of Flakiness-mitigating Delays

Jean Malm (Mälardalen University), Adnan Causevic (Mälardalen University), Björn Lisper (Mälardalen University) and Sigrid Eldh (Ericsson)

The Power of String Solving: Simplicity of Comparison

Mitja Kulczynski (Kiel University), Florin Manea (Institut für Informatik, University of Göttingen), Dirk Nowotka (Christian-Albrechts-Universität zu Kiel) and Danny Bøgsted Poulsen (Aalborg University)

A Quantitative Comparison of Coverage-based Greybox Fuzzers

Natsuki Tsuzuki (Nagoya University), Norihiro Yoshida (Nagoya University), Koji Toda (Department of Computer Science and Engineering, Fukuoka Institute of Technology), Kenji Fujiwara (National Institute of Technology, Toyota College), Ryota Yamamoto (Nagoya University) and Hiroaki Takada (Nagoya University)

Fastbot: A Multi-Agent Model-Based Test Generation System

Tianqin Cai (Beijing Bytedance Network Technology Co., Ltd.), Zhao Zhang (Beijing Bytedance Network Technology Co., Ltd.) and Ping Yang (Beijing Bytedance Network Technology Co., Ltd.)

Muteria: An Extensible and Flexible Multi-Criteria Software Testing Framework

Thierry Titcheu Chekam (University of Luxembourg (SnT)), Mike Papadakis (University of Luxembourg) and Yves Le Traon (University of Luxembourg)

Industrial Abstracts

The Effectiveness of Client-side JavaScript Testing

Brian Farnsworth (Adobe, Inc.), Jonny Moon (Adobe, Inc.) and Riley Smith (Adobe, Inc.)

Difference Grouping and Test Suite Evaluation: Lessons from Automated Differential Testing for Adobe Analytics

Darryl Jarman (Adobe), Scott Hunt (Adobe), Jeff Berry (Adobe) and Dave Towey (University of Nottingham Ningbo China)

AI-Driven Conversational Bot Test Automation Using Industry Specific Data Cartridges

Muralidhar Yalla (Accenture) and Asha Sunil (Accenture)

Automatic Prevention of Accidents in Production

Chang-Seo Park (Google LLC)