ASE 2021
Sun 14 - Sat 20 November 2021 Australia

Call for Papers

This track accepts submissions from:

  • Any member of the SE community wishing to document artifact creation or usage
  • Authors of papers accepted in any of the tracks of ASE 2021 who wish to submit artifacts associated with those papers to the ASE Artifact Track.
  • Authors of any prior SE work (published at ASE or elsewhere) who wish to submit an artifact for evaluation as a candidate for the Replicated or Reproduced badges.

If the artifact is accepted, authors will be invited to give lightning talks on this work at ASE’21. Also, we will do our best to work with the IEEE Xplore and ACM Portal administrators to add badges to the electronic versions of the authors’ papers.

Accepted artifacts will receive one or more of the following badges on the front page of the authors’ paper and in the proceedings.


  • Available (Open Research Objects, ORO): The artifact is placed on a publicly accessible archival repository. A DOI or link to this persistent repository, along with a unique identifier for the object, is provided. Artifacts have not been formally evaluated.
  • Reusable (Research Objects Reviewed, ROR): Available, plus the artifacts are very carefully documented and well-structured, consistent, complete, exercisable, and include appropriate evidence of verification and validation, to the extent that reuse and repurposing is facilitated. In particular, norms and standards of the research community for artifacts of this type are strictly adhered to.
  • Reproduced (Results Reproduced, ROR-R): The main results of the paper have been obtained in a subsequent study by a person or team other than the authors, using, in part, artifacts provided by the original authors. Also, the artifacts are very carefully documented and well-structured, consistent, complete, exercisable, and include appropriate evidence of verification and validation.
  • Replicated (Results Replicated, RER): The main results of the paper have been independently obtained in a subsequent study by a person or team other than the authors, without the use of author-supplied artifacts.

Details of the Badges

Available

This badge is applied to papers in which associated artifacts have been made permanently available for retrieval.

  • We consider temporary drives (e.g., Dropbox, Google Drive) to be non-persistent, as are the individual or institutional websites of the submitting authors, since these are prone to change.
  • We ask that authors use Zenodo as this service is persistent and also offers the possibility to assign a DOI.
  • Artifacts do not need to have been formally evaluated in order for an article to receive this badge. In addition, they need not be complete in the sense described above. They simply need to be relevant to the study and add value beyond the text in the article. Such artifacts could be something as simple as the data from which the figures are drawn, or as complex as a complete software system under study.
Reusable

The artifacts must meet the following requirements.

  • documented: At minimum, an inventory of artifacts is included, and sufficient description provided to enable the artifacts to be exercised.
  • consistent: The artifacts are relevant to the associated paper, and contribute in some inherent way to the generation of its main results.
  • complete: To the extent possible, all components relevant to the paper in question are included. (Proprietary artifacts need not be included. If they are required to exercise the package then this should be documented, along with instructions on how to obtain them. Proxies for proprietary data should be included so as to demonstrate the analysis.)
  • exercisable: Included scripts and/or software used to generate the results in the associated paper can be successfully executed, and included data can be accessed and appropriately manipulated.

Authors are strongly encouraged to target their artifact submissions at the Reusable level, as the purpose of artifact badges is, among other things, to facilitate reuse and repurposing, which may not be achieved by artifacts that are merely Available.

Reproduced

This badge is applied to papers in which the main results of the paper have been successfully obtained by a person or team other than the original authors of the work, using, at least in part, artifacts provided by the original authors.

Example: If Asha published a paper with artifacts in 2019, and Tim published a study in 2020 that obtained the same main results using those artifacts, then Asha can now apply for the Reproduced badge on the 2019 paper.

Replicated

This badge is applied to papers in which the main results of the paper have been successfully obtained by a person or team other than the original authors, without the use of any artifacts provided by the original authors.

Example: If Janet published a paper in 2018 with no artifacts, and Miles published a paper with artifacts in 2020 that independently obtained the same main result, then Janet can apply for the Replicated badge on the 2018 paper.


Papers with such badges contain reusable products that other researchers can use to bootstrap their own research. Experience shows that such papers earn increased citations and greater prestige in the research community. Artifacts of interest include (but are not limited to) the following.

  • Software, i.e., implementations of systems or algorithms that are potentially useful in other studies.
  • Data repositories, which are data (e.g., logging data, system traces, survey raw data) that can be used for multiple software engineering approaches.
  • Frameworks, which are tools and services illustrating new approaches to software engineering that could be used by other researchers in different contexts.

This list is not exhaustive, so the authors are asked to email the chairs before submitting if their proposed artifact is not on this list.


Submission

Authors are to submit via our HotCRP website: https://ase20201-artifact-evaluation.hotcrp.com

Note that there are two separate submission procedures:

  • One for Available and Reusable artifacts
  • Another for Replicated and Reproduced badges

Submission for available and reusable artifacts

All submitters must make their repositories available as described below.

Your GitHub repository should include documentation files explaining, in sufficient detail, how to obtain the artifact package, how to unpack the artifact, how to get started, and how to use the artifacts. The artifact submission should describe only the technicalities of the artifacts and those uses of the artifact that are not already described in the paper. The submission should contain the following documents (in Markdown plain-text format).

  • A README.md main file describing what the artifact does and where it can be obtained (with hidden links and access password if necessary).
  • A LICENSE.md file describing the distribution rights. Note that to qualify for Available or higher, the license must be some form of open-source license.
  • An INSTALL.md file with installation instructions. These instructions should include a very basic usage example or a method to test the installation, for instance information on what output to expect that confirms the code is installed and working, and that the code is doing something interesting and useful (a minimal smoke-test sketch is shown after this list). IMPORTANT: there should be a clear description of how to reproduce the results presented in the paper.
  • A copy of the accepted paper in pdf format.
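For illustration, here is a minimal sketch (in Python) of the kind of post-installation smoke test that INSTALL.md could point reviewers to. The package name `myartifact`, its function `run_analysis`, and the example input path are hypothetical placeholders, not part of the actual submission requirements; substitute whatever your artifact actually provides.

```python
# smoke_test.py -- hypothetical post-installation check for an artifact package.
# The module name `myartifact`, the function `run_analysis`, and the example
# input path are placeholders; replace them with the names your artifact uses.
import sys


def main() -> None:
    try:
        import myartifact  # the artifact's top-level package (hypothetical)
    except ImportError:
        sys.exit("Smoke test failed: the artifact package could not be imported.")

    # Run the smallest example shipped with the artifact and sanity-check the output.
    result = myartifact.run_analysis("examples/minimal_input.txt")
    if result is None:
        sys.exit("Smoke test failed: the minimal example produced no output.")
    print("Smoke test passed: the artifact is installed and produces output.")


if __name__ == "__main__":
    main()
```

INSTALL.md can then simply tell reviewers to run `python smoke_test.py` immediately after installation and state exactly what output confirms a working setup.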

Authors may update their research artifacts after submission only for changes requested by reviewers in the rebuttal phase. To update artifacts: (1) go to GitHub; (2) make your changes; (3) make a new release; (4) add a comment in HotCRP stating “in response to comment XYZ we have made a new release that addresses those issues as follows: ABC”.

Submission for replicated and reproduced badges

Submit a one-page (max) PDF documenting the maturity of the artifact. This needs to include:

  • TITLE: A (Partial)? (Replication|Reproduction) of XYZ. Please add the term partial to your title if only some of the original work could be replicated/reproduced.
  • WHO: name the original authors (and paper) and the authors that performed the replication/reproduction. Include contact information (emails). Mark one author as the corresponding author.
  • IMPORTANT: include also a web link to a publicly available URL directory containing (a) the original paper (that is being reproduced) and (b) any subsequent paper(s)/documents/reports that perform the reproduction.
  • WHAT: describe the “thing” being replicated/reproduced;
  • WHY: clearly state why that “thing” is interesting/important;
  • PLATFORM: the operating system on which this artifact was mostly developed;
  • HOW: describe how the result was obtained in the original work;
  • WHERE: describe the replication/reproduction. If the replication/reproduction was only partial, explain what parts could be achieved and what had to be omitted.
  • DISCUSSION: What aspects of this “thing” made it easier or harder to replicate/reproduce? What lessons from this work would enable more replication/reproduction in the future, for other kinds of tasks or other kinds of research?

Review

The ASE artifact evaluation track uses a single-blind review process. All artifacts will receive two reviews.

Two PC members will review each abstract, possibly reaching out to the authors of the abstract or original paper. Abstracts will be ranked as follows.

  • If PC members do not find sufficient substantive evidence for replication/reproduction, the abstract will be rejected.
  • Any abstract that is judged to be unnecessarily critical of prior work will be rejected (*).
  • The remaining abstracts will be sorted according to (a) interestingness and (b) correctness.
  • The top ranked abstracts will be invited to give lightning talks.

(*) Our goal is to foster a positive environment that supports and rewards researchers for conducting replications and reproductions. To that end, we require that all abstracts and presentations pay due respect to the work they are reproducing/replicating. Criticism of prior work is acceptable only as part of a balanced and substantive discussion of prior accomplishments.

Note that, prior to reviewing, there may be some interaction with authors to handle setup and installation. Before the actual evaluation, reviewers will check the integrity of the artifact and look for any setup problems that may prevent it from being properly evaluated (e.g., corrupted or missing files, a VM that will not start, immediate crashes on the simplest example). The Evaluation Committee may contact the authors to request clarifications on the basic installation and start-up procedures or to resolve simple installation problems. Artifacts whose configuration and installation take an undue amount of time may be rejected.

Plenary

Tue 16 Nov

Displayed time zone: Hobart

08:00 - 09:00
ASE2021 Opening (Plenary) at Kangaroo
08:00
60m
Day opening
ASE2021 Opening
Plenary
G: John Grundy Monash University, P: Dan Hao Peking University, P: Denys Poshyvanyk William and Mary
09:00 - 10:00
MIP Talk 1 (Plenary) at Kangaroo
Chair(s): Myra Cohen Iowa State University
09:00
60m
Talk
MIP: UMLDiff: an Algorithm for Object-Oriented Design Differencing
Plenary
Zhenchang Xing Australian National University, Eleni Stroulia University of Alberta
10:00 - 11:00
Virtual Reception (Social/Networking) at Kangaroo
Chair(s): Mattia Fazzini University of Minnesota
10:00
60m
Social Event
Virtual Reception
Social/Networking

23:00 - 00:00
Artefacts Plenary (Any Day Band 2), Artifact Evaluation, at Kangaroo
Chair(s): Aldeida Aleti Monash University, Tim Menzies North Carolina State University
23:00
5m
Day opening
Opening
Artifact Evaluation
A: Aldeida Aleti Monash University
23:05
7m
Keynote
Keynote
Artifact Evaluation
Dirk Beyer LMU Munich, Germany
23:12
3m
Talk
CiFi: Versatile Analysis of Class and Field Immutability [Reusable, Available]
Artifact Evaluation
Tobias Roth Technische Universität Darmstadt, Dominik Helm Technische Universität Darmstadt, Michael Reif Technische Universität Darmstadt, Mira Mezini Technische Universität Darmstadt
23:15
3m
Talk
Testing Your Question Answering Software via Asking Recursively [Reusable, Available]
Artifact Evaluation
Songqiang Chen School of Computer Science, Wuhan University, Shuo Jin School of Computer Science, Wuhan University, Xiaoyuan Xie School of Computer Science, Wuhan University, China
23:18
3m
Talk
Restoring the Executability of Jupyter Notebooks by Automatic Upgrade of Deprecated APIs [Reusable, Available]
Artifact Evaluation
Chenguang Zhu University of Texas at Austin, Ripon Saha Fujitsu Laboratories of America, Inc., Mukul Prasad Fujitsu Research of America, Sarfraz Khurshid The University of Texas at Austin
23:21
3m
Talk
Context Debloating for Object-Sensitive Pointer Analysis [Reusable, Available]
Artifact Evaluation
Dongjie He UNSW Sydney, Jingbo Lu UNSW Sydney, Jingling Xue UNSW Sydney
23:24
3m
Talk
Understanding and Detecting Performance Bugs in Markdown Compilers [Reusable, Available]
Artifact Evaluation
Penghui Li The Chinese University of Hong Kong, Yinxi Liu The Chinese University of Hong Kong, Wei Meng Chinese University of Hong Kong
23:27
5m
Product release
Reuse graphs
Artifact Evaluation
P: Tim Menzies North Carolina State University
23:32
10m
Talk
Most reused artefacts
Artifact Evaluation

23:42
18m
Live Q&A
Discussion
Artifact Evaluation

Not scheduled yet

Not scheduled yet
Talk
Reuse of Perceval: Software Project Data at Your Will [Reproduced]
Artifact Evaluation
Not scheduled yet
Talk
Improving Test Case Generation for REST APIs Through Hierarchical Clustering [Reusable, Available]
Artifact Evaluation
Dimitri Stallenberg Delft University of Technology, Mitchell Olsthoorn Delft University of Technology, Annibale Panichella Delft University of Technology
DOI Pre-print
Not scheduled yet
Talk
Efficient SMT-Based Model Checking for Signal Temporal Logic [Reusable, Available]
Artifact Evaluation
Jia Lee POSTECH, Geunyeol Yu Pohang University of Science and Technology (POSTECH), Kyungmin Bae Pohang University of Science and Technology (POSTECH)
Not scheduled yet
Talk
Nekara: Generalized Concurrency Testing [Reusable, Available]
Artifact Evaluation
Udit Agarwal IIIT Delhi, Pantazis Deligiannis Microsoft Research, Cheng Huang Microsoft, Kumseok Jung University of British Columbia, Akash Lal Microsoft Research, Immad Naseer Microsoft, Matthew J. Parkinson Microsoft Research, UK, Arun Thangamani Microsoft Research, Jyothi Vedurada IIT Hyderabad, Yunpeng Xiao Microsoft
Not scheduled yet
Talk
SATune: A Study-Driven Auto-Tuning Approach for Configurable Software Verification Tools [Reusable, Available]
Artifact Evaluation
Ugur Koc University of Maryland, College Park, Austin Mordahl The University of Texas at Dallas, Shiyi Wei The University of Texas at Dallas, Jeffrey S. Foster Tufts University, Adam Porter University of Maryland
Not scheduled yet
Talk
ISPY: Automatic Issue-Solution Pair Extraction from Community Live Chats [Reusable, Available]
Artifact Evaluation
Lin Shi Institute of Software at Chinese Academy of Sciences, Ziyou Jiang Institute of Software at Chinese Academy of Sciences, Ye Yang Stevens Institute of Technology, Xiao Chen Institute of Software at Chinese Academy of Sciences, YuMin Zhang Institute of Software Chinese Academy of Sciences, Fangwen Mu Institute of Software Chinese Academy of Sciences, Hanzhi Jiang Institute of Software at Chinese Academy of Sciences, Qing Wang Institute of Software at Chinese Academy of Sciences
Not scheduled yet
Talk
PyExplainer: Explaining the Predictions of Just-In-Time Defect Models [Reusable, Available]
Artifact Evaluation
Chanathip Pornprasit Monash University, Kla Tantithamthavorn Monash University, Jirayus Jiarpakdee Monash University, Michael Fu Monash University, Patanamon Thongtanunam University of Melbourne
Not scheduled yet
Talk
A Replication of Experiment on Finding Higher-order Mutants [Replicated]
Artifact Evaluation
Xiao Ling North Carolina State University
Not scheduled yet
Talk
A Replication of graph edit distance as a quadratic assignment problem [Replicated]
Artifact Evaluation
Andre Lustosa North Carolina State University
Not scheduled yet
Talk
A Replication of Adversarial Attacks to API Recommender Systems: Time to Wake Up and Smell the Coffee? [Reusable, Available]
Artifact Evaluation
Phuong T. Nguyen University of L’Aquila, Claudio Di Sipio University of L'Aquila, Juri Di Rocco University of L'Aquila, Davide Di Ruscio University of L'Aquila, Massimiliano Di Penta University of Sannio, Italy
Pre-print
Not scheduled yet
Talk
A Partial Replication of "A Fast and Elitist Multiobjective Genetic Algorithm: NSGA-II" [Replicated]
Artifact Evaluation
Kewen Peng North Carolina State University
Not scheduled yet
Talk
CorbFuzz: Checking Browser Security Policies with Fuzzing [Reusable, Available]
Artifact Evaluation
Chaofan Shou University of California, Santa Barbara, Ismet Burak Kadron University of California at Santa Barbara, Qi Su University of California Santa Barbara, Tevfik Bultan University of California, Santa Barbara
Not scheduled yet
Talk
Faster Mutation Analysis with Fewer Processes and Smaller Overheads [Reusable, Available]
Artifact Evaluation
Bo Wang Beijing Jiaotong University, Sirui Lu Peking University, Yingfei Xiong Peking University, Feng Liu Beijing Jiaotong University
Not scheduled yet
Talk
Performance Testing for Cloud Computing with Dependent Data Bootstrapping [Available]
Artifact Evaluation
Sen He The University of Texas at San Antonio, Tianyi Liu The University of Texas at San Antonio, Palden Lama The University of Texas at San Antonio, Jaewoo Lee University of Georgia, In Kee Kim University of Georgia, Wei Wang University of Texas at San Antonio, USA
Not scheduled yet
Talk
A Replication of LIME [Replicated]
Artifact Evaluation
Xiao Ling North Carolina State University
Not scheduled yet
Talk
Data-Driven Design and Evaluation of SMT Meta-Solving Strategies: Balancing Performance, Accuracy, and Cost [Reusable, Available]
Artifact Evaluation
Malte Mues TU Dortmund University, Falk Howar TU Dortmund University
Not scheduled yet
Talk
ASE: A Value Set Decision Procedure for Symbolic Execution [Reusable, Available]
Artifact Evaluation
Alireza S. Abyaneh University of Salzburg, Christoph Kirsch University of Salzburg; Czech Technical University
Not scheduled yet
Talk
A Reproduction of Ref-Finder in a Refactoring Study [Reproduced]
Artifact Evaluation
Not scheduled yet
Talk
UI Test Migration Across Mobile Platforms [Available]
Artifact Evaluation
Saghar Talebipour University of Southern California, Yixue Zhao University of Massachusetts Amherst, Luka Dojcilovic University of Southern California, Chenggang Li University of Southern California, Nenad Medvidović University of Southern California, USA
Not scheduled yet
Talk
Automated Verification of Go Programs via Bounded Model Checking [Reusable, Available]
Artifact Evaluation
Nicolas Dilley University of Kent, Julien Lange Royal Holloway University of London
Not scheduled yet
Talk
A Replication of XGBoost [Replicated]
Artifact Evaluation
Xiao Ling North Carolina State University
Not scheduled yet
Talk
State synchronisation in model-based testing [Available]
Artifact Evaluation
Uraz Cengiz Türker University of Leicester, UK, Robert Hierons University of Sheffield, Mohammad Reza Mousavi King's College London, Ivan Tyukin University of Leicester
Not scheduled yet
Talk
Distribution Models for Falsification and Verification of DNNs [Reusable, Available]
Artifact Evaluation
Felipe Toledo , David Shriver University of Virginia, Sebastian Elbaum University of Virginia, Matthew B Dwyer University of Virginia
Pre-print
Not scheduled yet
Talk
Artifact for "JSTAR: JavaScript Specification Type Analyzer using Refinement" [Reusable, Available]
Artifact Evaluation
Jihyeok Park KAIST, Seungmin An KAIST, Shin Wonho KAIST, Yusung Sim KAIST, Sukyoung Ryu KAIST
Not scheduled yet
Talk
Deep-GUI: Black-box GUI Input Generation with Deep Learning [Reusable, Available]
Artifact Evaluation
Faraz YazdaniBanafsheDaragh University of California, Irvine, Sam Malek University of California at Irvine, USA
Not scheduled yet
Talk
Thinking Like a Developer? Comparing the Attention of Humans with Neural Models of Code [Reusable, Available]
Artifact Evaluation
Matteo Paltenghi University of Stuttgart, Michael Pradel University of Stuttgart
Not scheduled yet
Talk
DeepMetis: Augmenting a Deep Learning Test Set to Increase its Mutation Score [Reusable, Available]
Artifact Evaluation
Vincenzo Riccio USI Lugano, Nargiz Humbatova Università della Svizzera Italiana (USI), Gunel Jahangirova USI Lugano, Paolo Tonella USI Lugano
Not scheduled yet
Talk
On the Real-World Effectiveness of Static Bug Detectors at Finding Null Pointer Exceptions [Reusable, Available]
Artifact Evaluation
David A Tomassi University of California, Davis, Cindy Rubio-González University of California at Davis
Not scheduled yet
Talk
A Replication of Guidelines for conducting systematic mapping studies in software engineering: An update [Replicated]
Artifact Evaluation
Andre Lustosa North Carolina State University

Accepted Papers

All entries below belong to the Artifact Evaluation track.

  • A Partial Replication of "A Fast and Elitist Multiobjective Genetic Algorithm: NSGA-II" [Replicated]
  • A Replication of Adversarial Attacks to API Recommender Systems: Time to Wake Up and Smell the Coffee? [Reusable, Available] (Pre-print)
  • A Replication of Experiment on Finding Higher-order Mutants [Replicated]
  • A Replication of graph edit distance as a quadratic assignment problem [Replicated]
  • A Replication of Guidelines for conducting systematic mapping studies in software engineering: An update [Replicated]
  • A Replication of LIME [Replicated]
  • A Replication of XGBoost [Replicated]
  • A Reproduction of Ref-Finder in a Refactoring Study [Reproduced]
  • Artifact for "JSTAR: JavaScript Specification Type Analyzer using Refinement" [Reusable, Available]
  • ASE: A Value Set Decision Procedure for Symbolic Execution [Reusable, Available]
  • Automated Verification of Go Programs via Bounded Model Checking [Reusable, Available]
  • CiFi: Versatile Analysis of Class and Field Immutability [Reusable, Available]
  • Context Debloating for Object-Sensitive Pointer Analysis [Reusable, Available]
  • CorbFuzz: Checking Browser Security Policies with Fuzzing [Reusable, Available]
  • Data-Driven Design and Evaluation of SMT Meta-Solving Strategies: Balancing Performance, Accuracy, and Cost [Reusable, Available]
  • Deep-GUI: Black-box GUI Input Generation with Deep Learning [Reusable, Available]
  • DeepMetis: Augmenting a Deep Learning Test Set to Increase its Mutation Score [Reusable, Available]
  • Distribution Models for Falsification and Verification of DNNs [Reusable, Available] (Pre-print)
  • Efficient SMT-Based Model Checking for Signal Temporal Logic [Reusable, Available]
  • Faster Mutation Analysis with Fewer Processes and Smaller Overheads [Reusable, Available]
  • Improving Test Case Generation for REST APIs Through Hierarchical Clustering [Reusable, Available] (DOI, Pre-print)
  • ISPY: Automatic Issue-Solution Pair Extraction from Community Live Chats [Reusable, Available]
  • Nekara: Generalized Concurrency Testing [Reusable, Available]
  • On the Real-World Effectiveness of Static Bug Detectors at Finding Null Pointer Exceptions [Reusable, Available]
  • Performance Testing for Cloud Computing with Dependent Data Bootstrapping [Available]
  • PyExplainer: Explaining the Predictions of Just-In-Time Defect Models [Reusable, Available]
  • Restoring the Executability of Jupyter Notebooks by Automatic Upgrade of Deprecated APIs [Reusable, Available]
  • Reuse of Perceval: Software Project Data at Your Will [Reproduced]
  • SATune: A Study-Driven Auto-Tuning Approach for Configurable Software Verification Tools [Reusable, Available]
  • State synchronisation in model-based testing [Available]
  • Testing Your Question Answering Software via Asking Recursively [Reusable, Available]
  • Thinking Like a Developer? Comparing the Attention of Humans with Neural Models of Code [Reusable, Available]
  • UI Test Migration Across Mobile Platforms [Available]
  • Understanding and Detecting Performance Bugs in Markdown Compilers [Reusable, Available]