Accepted Papers

Each accepted artifact is listed with the badge it earned (shown in parentheses).

  • A Comprehensive Evaluation of Android ICC Resolution Techniques (Reusable)
  • Are Neural Bug Detectors Comparable to Software Developers on Variable Misuse bugs? (Reusable)
  • Artifact for "Selectively Combining Multiple Coverage Goals in Search-Based Unit Test Generation" (Reusable)
  • Artifact of ICEBAR: Feedback-Driven Iterative Repair of Alloy Specifications (Available)
  • Artifact of the ASE'2022 paper "So Many Fuzzers, So Little Time - Experience from Evaluating Fuzzers on the Contiki-NG Network (Hay)Stack" (Reusable)
  • [Artifact] Reformulator: Automated Refactoring of the N+1 Problem in Database-Backed Applications (Reusable)
  • Artifacts for "Efficient Synthesis of Method Call Sequences for Test Generation and Bounded Verification" (Available)
  • AST-Probe: Recovering abstract syntax trees from hidden representations of pre-trained language models (Reusable)
  • Auto Off-Target: Enabling Thorough and Scalable Testing for Complex Software Systems (Reusable)
  • Boosting the Revealing of Detected Violations in Deep Learning Testing: A Diversity-Guided Method (Reusable)
  • Call Me Maybe: Using NLP to Automatically Generate Unit Test Cases Respecting Temporal Constraints (Reusable)
  • CARGO: AI-Guided Dependency Analysis for Migrating Monolithic Applications to Microservices Architecture (Available)
  • CrystalBLEU: Precisely and Efficiently Measuring the Similarity of Code (Reusable)
  • Evolving Ranking-Based Failure Proximities for Better Clustering in Fault Isolation (Reusable)
  • FuzzerAid: Grouping Fuzzed Crashes Based On Fault Signatures (Available)
  • Fuzzle: Making a Puzzle for Fuzzers (Reusable)
  • GLITCH: Automated Polyglot Security Smell Detection in Infrastructure as Code (Reusable)
  • Identifying Solidity Smart Contract API Documentation Errors (Reusable)
  • Inline Tests (Reusable)
  • Is this Change the Answer to that Problem? Correlating Descriptions of Bug and Code Changes for Evaluating Patch Correctness (Reusable)
  • Jasmine: A Static Analysis Framework for Spring Core Technologies (Available)
  • LISSA: Lazy Initialization with Specialized Solver Aid (Reusable)
  • Property-Based Automated Repair of DeFi Protocols (Reusable)
  • Provably Tightest Linear Approximation for Robustness Verification of Sigmoid-like Neural Networks (Reusable)
  • Scrutinizing Privacy Policy Compliance of Virtual Personal Assistant Apps (Reusable)
  • SelfAPR: Self-supervised Program Repair with Test Execution Diagnostics (Reusable)
  • Studying and Understanding the Tradeoffs Between Generality and Reduction in Software Debloating (Reusable)
  • Tseitin or not Tseitin? The Impact of CNF Transformations on Feature-Model Analyses (Reusable)

Call for Submissions

UPDATE: Papers must be submitted by the deadline of August 3, but may be updated until August 7 AoE. The extra few days are intended to accommodate papers from the technical track that were conditionally accepted, but they are available to anyone. The version submitted by August 3 must be close to its final form.

For ASE 2022, artifact badges can be earned for papers published at ASE 2022 (Available and Reusable). Badges can also be earned for papers published previously (at ASE or elsewhere) whose main results were obtained in a subsequent study by people other than the authors (Replicated and Reproduced).

Authors of accepted artifact abstracts will receive recognition of their badge, and the abstract will be published in the ASE 2022 conference proceedings.

Badges for Papers Published at ASE 2022

Authors of papers accepted in any of the tracks of ASE 2022 may submit artifacts associated with those papers to the ASE Artifact Track. Submitted artifacts will receive one of the following badges:

Available

Open Research Objects (ORO)

Placed on a publicly accessible archival repository. A DOI or link to this persistent repository along with a unique identifier for the object is provided. Artifacts have not been formally evaluated.

Reusable

Research Objects Reviewed (ROR)

Available + Artifacts are very carefully documented and well-structured, consistent, complete, exercisable, and include appropriate evidence of verification and validation to the extent that reuse and repurposing is facilitated. In particular, norms and standards of the research community for artifacts of this type are strictly adhered to.

Papers with such badges contain reusable products that other researchers can use to bootstrap their own research. Experience shows that such papers earn increased citations and greater prestige in the research community. Artifacts of interest include (but are not limited to) the following.

  • Software, which are implementations of systems or algorithms potentially useful in other studies.
  • Data repositories, which are data (e.g., logging data, system traces, survey raw data) that can be used for multiple software engineering approaches.
  • Frameworks, which are tools and services illustrating new approaches to software engineering that could be used by other researchers in different contexts.

This list is not exhaustive, so the authors are asked to email the chairs before submitting if their proposed artifact is not on this list.

Badges for Reproduced and Replicated Papers

Authors of any prior SE work (published at any previous ASE or other SE venue) may submit an artifact for evaluation as a candidate for the Replicated or Reproduced badge.

Reproduced

Results Reproduced (ROR-R)

The main results of the paper have been obtained in a subsequent study by a person or team other than the authors, using, in part, artifacts provided by the original authors. In addition, the artifacts are very carefully documented and well-structured, consistent, complete, exercisable, and include appropriate evidence of verification and validation.

Replicated

Results Replicated (RER)

The main results of the paper have been independently obtained in a subsequent study by a person or team other than the authors, without the use of author-supplied artifacts.

Details of the Badges

Available

This badge is applied to papers in which associated artifacts have been made permanently available for retrieval.

  • We consider temporary drives (e.g., Dropbox, Google Drive) to be non-persistent, as are the individual or institutional websites of the submitting authors, since these are prone to change.
  • We ask that authors use Zenodo as this service is persistent and also offers the possibility to assign a DOI.
  • Artifacts do not need to have been formally evaluated in order for an article to receive this badge. In addition, they need not be complete in the sense described above. They simply need to be relevant to the study and add value beyond the text in the article. Such artifacts could be something as simple as the data from which the figures are drawn, or as complex as a complete software system under study.

Reusable

The artifacts must meet the following requirements.

  • Documented: At minimum, an inventory of artifacts is included, and sufficient description provided to enable the artifacts to be exercised.
  • Consistent: The artifacts are relevant to the associated paper, and contribute in some inherent way to the generation of its main results.
  • Complete: To the extent possible, all components relevant to the paper in question are included. (Proprietary artifacts need not be included. If they are required to exercise the package then this should be documented, along with instructions on how to obtain them. Proxies for proprietary data should be included so as to demonstrate the analysis.)
  • Exercisable: Included scripts and/or software used to generate the results in the associated paper can be successfully executed, and included data can be accessed and appropriately manipulated.

Authors are strongly encouraged to target their artifact submissions for Reusable as the purpose of artifact badges is, among other things, to facilitate reuse and repurposing, which may not be achieved at the Functional level.

Reproduced

This badge is applied to papers in which the main results of the paper have been successfully obtained by a person or team other than the original authors of the work, with, at least in part, artifacts provided by the original authors.

Example: If Asha published a paper with artifacts in 2020, and Tim published a reproduction in 2021 using the artifacts, then Asha can now apply for the Reproduced badge on the 2020 paper.

Replicated

This badge is applied to papers in which the main results of the paper have been successfully obtained by a person or team other than the original authors, without any artifacts provided by the original authors.

Example: If Janet published a paper in 2020 with no artifacts, and Miles published a paper with artifacts in 2021 that independently obtained the main result, then Janet can apply for the Replicated badge on the 2020 paper.

Submission

Authors should submit via our HotCRP website.

Note that there are two separate submission procedures: 1) for Available and Reusable artifacts, and 2) for Replicated and Reproduced badges.

Submission for Available and Reusable artifacts

All submitters must make their repositories available using the following steps.

Your GitHub repository should have documentation files explaining, in sufficient detail, how to obtain the artifact package, how to unpack it, how to get started, and how to use the artifacts. The artifact submission must describe only the technicalities and uses of the artifacts that are not already described in the paper. The submission should contain the following documents (in Markdown plain-text format); a minimal example layout is sketched after this list.

  • A README.md main file describing what the artifact does and where it can be obtained (with hidden links and access password if necessary).
  • A LICENSE.md file describing the distribution rights. Note that to earn the Available badge or higher, the license needs to be some form of open-source license.
  • An INSTALL.md file with installation instructions. These instructions should include notes illustrating a very basic usage example or a method to test the installation, for instance, what output to expect that confirms the code is installed and working and is doing something interesting and useful. IMPORTANT: there should be a clear description of how to reproduce the results presented in the paper.
  • A copy of the accepted paper in pdf format.
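
A hypothetical layout for such a repository might look like the following (everything beyond the four required documents is purely illustrative):

    artifact-repo/
    ├── README.md      # what the artifact does and where it can be obtained
    ├── LICENSE.md     # open-source license covering the artifact
    ├── INSTALL.md     # installation steps, a basic smoke test, and how to reproduce the paper's results
    ├── paper.pdf      # copy of the accepted paper
    ├── src/           # tool or analysis code (illustrative)
    ├── data/          # input data, or proxies for proprietary data (illustrative)
    └── scripts/       # scripts that regenerate the paper's tables and figures (illustrative)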

Authors may update their research artifacts after submission only for changes requested by reviewers in the rebuttal phase. To update artifacts: (1) go to GitHub; (2) make your changes; (3) make a new release; (4) add a comment in HotCRP stating that “in response to comment XYZ we have made a new release that addresses the issue as follows: ABC”.
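
For illustration, assuming a tag-based release and the GitHub CLI (version number and messages are placeholders), steps (2)–(3) might look like:

    git commit -am "Address reviewer comment XYZ: clarify the INSTALL.md smoke test"
    git tag -a v1.1 -m "Rebuttal-phase update"
    git push origin main --tags
    gh release create v1.1 --notes "Addresses reviewer comment XYZ"   # or create the release in the GitHub web UI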

Submission for Replicated and Reproduced badges

Submit a one-page (max) PDF documenting the maturity of the artifact. It needs to include the following items (a minimal skeleton is sketched after this list):

  • TITLE: A (Partial)? (Replication|Reproduction) of XYZ. Please add the term partial to your title if only some of the original work could be replicated/reproduced.
  • WHO: name the original authors (and paper) and the authors that performed the replication/reproduction. Include contact information (emails). Mark one author as the corresponding author.
  • IMPORTANT: include also a web link to a publicly available URL directory containing (a) the original paper (that is being reproduced) and (b) any subsequent paper(s)/documents/reports that do the reproduction.
  • WHAT: describe the “thing” being replicated/reproduced;
  • WHY: clearly state why that “thing” is interesting/important;
  • PLATFORM: the operating system on which the artifact was mostly developed;
  • HOW: describe how the original work was done;
  • WHERE: describe the replication/reproduction. If the replication/reproduction was only partial, explain which parts could be achieved and which had to be left out.
  • DISCUSSION: what aspects of this “thing” made it easier or harder to replicate/reproduce, and what lessons learned from this work would enable more replication/reproduction in the future, for other kinds of tasks or other kinds of research.
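
A minimal skeleton of such an abstract, with all names, links, and details purely illustrative, might be:

    TITLE: A Partial Replication of XYZ
    WHO: Original: A. Author (a.author@example.org); replication: B. Researcher (b.researcher@example.org, corresponding)
    LINK: https://example.org/xyz-replication/   (original paper and replication report)
    WHAT: the fault-localization accuracy results reported in Tables 2 and 3 of XYZ
    WHY: these results underpin several follow-up tools, so confirming them matters for ...
    PLATFORM: Ubuntu 20.04
    HOW: the original study ran tool T on benchmark B with configuration C
    WHERE: results for benchmark B1 were replicated; B2 could not be obtained because ...
    DISCUSSION: the published scripts made replication easier; undocumented parameters for ... made it harder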

Review

The ASE artifact evaluation track uses a single-blind review process. All artifacts will receive two reviews.

Two PC members will review each abstract, possibly reaching out to the authors of the abstract or original paper. Abstracts will be ranked as follows.

  • If PC members do not find sufficient substantive evidence for replication/reproduction, the abstract will be rejected.
  • Any abstract that is judged to be unnecessarily critical of prior work will be rejected (*).
  • The remaining abstracts will be sorted according to (a) interestingness and (b) correctness.
  • The top ranked abstracts will be invited to give lightning talks.

(*) Our goal is to foster a positive environment that supports and rewards researchers for conducting replications and reproductions. To that end, we require that all abstracts and presentations pay due respect to the work they are reproducing/replicating. Criticism of prior work is acceptable only as part of a balanced and substantive discussion of prior accomplishments.

Note that prior to reviewing, there may be some interactions to handle setup and install. Before the actual evaluation reviewers will check the integrity of the artifact and look for any possible setup problems that may prevent it from being properly evaluated (e.g., corrupted or missing files, VM won’t start, immediate crashes on the simplest example, etc.). The Evaluation Committee may contact the authors to request clarifications on the basic installation and start-up procedures or to resolve simple installation problems. Artifact evaluation can be rejected for artifacts whose configuration and installation takes an undue amount of time.