ICSE 2024
Fri 12 - Sun 21 April 2024 Lisbon, Portugal

Call for Artifact Submissions

The artifact evaluation track aims to review, promote, share, and catalog the research artifacts of accepted software engineering papers. Authors of papers accepted to the Technical/SEIP/NIER/SEET/SEIS tracks can submit an artifact for the Artifacts Available and Artifacts Reusable badges. Authors of any prior SE work (published at ICSE or elsewhere) are also invited to submit their work for the Results Validated (replicated or reproduced) badges. Definitions for all badges can be found in the ACM Artifact Review and Badging policy, Version 1.1.

New this year: our primary goal will be to help authors make their artifacts available and reusable. To this end, we both strongly encourage authors to provide a clean image (Docker or similar) as part of their artifacts for any software components (see the preparation instructions), and will prioritize awarding the corresponding two badges, described below. To ensure that all submitted artifacts can be brought up to the standard of reusable, which requires high-quality documentation and structure, we will enable PC/Author discussions for the entire review period.

Available: Author-created artifacts relevant to this paper have been placed on a publicly accessible archival repository. A DOI or link to this repository along with a unique identifier for the object is provided.

Reusable: The artifacts associated with the research are found to be complete, exercisable, and include appropriate evidence of verification and validation. In addition, they are very carefully documented and well-structured to the extent that reuse and repurposing is facilitated. In particular, norms and standards of the research community for artifacts of this type are strictly adhered to.

Important Dates

  • Dec 29, 2023: Artifact abstract deadline.
  • Jan 4, 2024: Artifact submission deadline.
  • Jan 5 - Jan 23, 2024: Review period (PC/author discussion).
  • Jan 28, 2024: Notifications.

Best Artifact Awards

There will be two ICSE 2024 Best Artifact Awards to recognize the effort of authors creating and sharing outstanding research artifacts.

Submission for Reusable and Available Badges

Only authors of papers accepted to the 2024 Technical/SEIP/NIER/SEET/SEIS tracks can submit candidate reusable or available artifacts.

To submit your artifact for review, submit an abstract describing your research artifact by the abstract deadline, and then submit a PDF or Markdown file of at most two pages by the submission deadline at the ICSE 2024 HotCRP site, following the instructions below.

For the reusable and available badges, authors must offer “download information” showing how reviewers can access and execute (if appropriate) their artifact.

Authors must perform the following steps to submit an artifact:

  1. Prepare the artifact
  2. Make the artifact available
  3. Document the artifact
  4. Submit the artifact

1. Prepare the artifact

Both executable and non-executable artifacts may be submitted.

Executable artifacts consist of a tool or software system. For these artifacts, authors should prepare an installation package so that the tool can be installed and run in the evaluator’s environment. Following the instructions below, provide enough associated instruction, code, and data such that an average CS professional could build, install, and run the code within a reasonable time frame. If installation and configuration require more than 30 minutes, the artifact is unlikely to be accepted on practical grounds, simply because the PC will not have sufficient time to evaluate it.

When preparing executable packages for submission, we recommend vetting the artifact on a clean machine to confirm that it can be set up in a reasonable time frame. We strongly encourage authors to consider using a Docker (or VirtualBox VM) image for this process. Besides providing a clean environment to assess the installation instructions, the resulting image can be submitted as part of the artifact to allow quick replication. In particular, if the artifact contains or requires the use of a special tool or any other non-trivial piece of software, the authors must provide a VirtualBox VM image or a Docker container image with a working environment containing the artifact and all the necessary tools.
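To make the 30-minute budget concrete, the following is a minimal sketch of one way authors might time a clean container build and a quick functional check before submitting. It is illustrative only, not a required format: the image tag and the in-image smoke-test command are hypothetical placeholders, and the only assumption is that the artifact ships a Dockerfile at its root.

```python
#!/usr/bin/env python3
"""Sketch: vet a Dockerized artifact from a clean checkout and time the setup."""
import subprocess
import sys
import time

IMAGE = "icse24-artifact"                 # hypothetical image tag
SMOKE_CMD = ["python", "smoke_test.py"]   # hypothetical command inside the image
BUDGET_SECONDS = 30 * 60                  # the 30-minute setup budget from this call

start = time.monotonic()

# Build the image from the Dockerfile in the current directory.
subprocess.run(["docker", "build", "-t", IMAGE, "."], check=True)

# Run the smoke test in a fresh container to confirm the tool starts at all.
subprocess.run(["docker", "run", "--rm", IMAGE] + SMOKE_CMD, check=True)

elapsed = time.monotonic() - start
print(f"Setup and smoke test took {elapsed / 60:.1f} minutes.")
sys.exit(0 if elapsed <= BUDGET_SECONDS else 1)
```

Running such a script on a machine that has never seen the artifact is a cheap way to catch missing dependencies and undocumented steps before reviewers do.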

Non-executable artifacts only contain data and documents that can be used with a simple text editor, a PDF viewer, or some other common tool (e.g., a spreadsheet program in its basic configuration). These artifacts can be submitted as a single, optionally compressed package file (e.g., a tar, zip, or tar.gz file).

2. Make the artifact available

The authors need to make the packaged artifact available so that the PC can access it.

Artifacts must be made available via an archival repository, such as Software Heritage (see their submission guide), which provides long-term availability of software source code. Other commonly used solutions, more focused on long-term data archival, include Figshare and Zenodo. Please note that platforms that do not guarantee long-term archival (this presently includes GitHub) do not qualify.

3. Document the artifact

The authors need to write and submit documentation explaining how to obtain, unpack, and use their artifact in detail. The artifact submission must describe only the technicalities of the artifact and those uses of the artifact not already described in the paper. The submission should include the three documents described below in a single archive file. Note: a key change compared to prior years is that we are consolidating all files describing the artifact except for the LICENSE into one README. There is no need to submit separate README/INSTALL/STATUS/REQUIREMENTS files. Please provide:

  • A copy of the accepted paper in PDF format, including the link to the archival repository.
  • A LICENSE file describing the distribution rights. For submissions aiming for the Available badge, the license needs to ensure public availability. In the spirit of the ICSE Open Science Policy, we recommend adopting an open source license for executable artifacts and open data license for non-executable artifacts.
  • A README file (in Markdown, plain text, or PDF format) that describes the artifact with all appropriate sections from the following:
    • Purpose: a brief description of what the artifact does.
      • Include a list of badge(s) the authors are applying for as well as the reasons why the authors believe that the artifact deserves that badge(s).
    • Provenance: where the artifact can be obtained, preferably with a link to the paper’s preprint if publicly available.
    • Data (for artifacts which focus on data or include a nontrivial dataset): cover aspects related to understanding the context, data provenance, ethical and legal statements (as long as relevant), and storage requirements.
    • Setup (for executable artifacts): provide clear instructions for how to prepare the artifact for execution. This includes:
      • Hardware: performance, storage, and device-type (e.g., GPUs) requirements.
      • Software: Docker or VM requirements, or operating system & package dependencies if not provided as a container or VM. Providing a Dockerfile or image, or at least confirming the tool’s installation in a container, is strongly encouraged. Any deviation from standard environments needs to be reasonably justified.
    • Usage (for executable artifacts): provide clear instructions for how to repeat/replicate/reproduce the main results presented in the paper. Include both:
      • A basic usage example or a method to test the installation. For instance, it may describe what command to run and what output to expect to confirm that the code is installed and operational (see the sketch after this list).
      • Detailed commands to replicate the major results from the paper.
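For illustration only, an installation check pointed to by the README could be as small as the sketch below. The command name, its flags, the sample input, and the expected output string are all hypothetical placeholders; a real artifact would substitute its own entry point and bundled example, and state explicitly what output confirms a working installation.

```python
#!/usr/bin/env python3
"""Hypothetical installation check that a README's Usage section could describe."""
import subprocess

# "mytool" and its arguments are placeholders for the artifact's real entry point;
# the call raises FileNotFoundError if the tool is not on the PATH.
proc = subprocess.run(
    ["mytool", "--input", "examples/sample_input.txt"],
    capture_output=True, text=True,
)

# Check both the exit code and a fragment of the expected output.
if proc.returncode == 0 and "results written to" in proc.stdout:
    print("Installation check passed.")
else:
    print("Installation check FAILED:")
    print(proc.stdout)
    print(proc.stderr)
```

Whatever form the check takes, the README should state the exact command and the exact output reviewers should see, so a failed setup can be distinguished from a misread result.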

4. Submit the artifact

By the abstract submission deadline (see important dates), register your research artifact at the ICSE 2024 HotCRP site by submitting an abstract describing your artifact. The abstract should include the paper title, the purpose of the research artifact, the badge(s) you are claiming, and the technology skills assumed of the reviewer evaluating the artifact. Please also mention if running your artifact requires any specific operating systems or other unusual environments.

The PC may contact the authors, via the submission system, during the entire review period to request clarifications on the basic installation and start-up procedures or to resolve simple installation problems. Reviewers will be encouraged to attempt to execute submitted software artifacts early on, to minimize the time spent iterating on making the artifact functional and in turn provide enough time to ensure that all artifacts can be made reusable. Given the short review time available, the authors are expected to respond within a 72-hour period. Authors may update their research artifact after submission only for changes requested by reviewers during this time. Information on this phase is provided in the Submission and Reviewing Guidelines.

Further information will be constantly made available on the website https://conf.researchr.org/track/icse-2024/icse-2024-artifact-evaluation.

Please do not hesitate to contact the chairs for any questions.

Accepted Research Artifacts

  • An Empirical Study on Compliance with Ranking Transparency in the Software Documentation of EU Online Platforms
  • A Replication of "Generating REST API Specifications through Static Analysis"
  • A Replication Package for: "MotorEase: Automated Detection of Motor Impairment Accessibility Issues in Mobile App UIs"
  • A Replication Package for: "On Using GUI Interaction Data to Improve Text Retrieval-based Bug Localization"
  • Artifact for "Are Prompt Engineering and TODO Comments Friends or Foes? An Evaluation on GitHub Copilot"
  • Artifact for Assessing AI Detectors in Identifying AI-Generated Code: Implications for Education
  • Artifact for "Crossover in Parametric Fuzzing"
  • Artifact for "Data-Driven Evidence-Based Syntactic Sugar Design"
  • Artifact for 'Fairness Improvement with Multiple Protected Attributes: How Far Are We?'
  • Artifact for Fast Deterministic Black-box Context-free Grammar Inference
  • Artifact for ICSE-SEET 2024 paper "Integrating Canvas and GitLab to Enrich Learning Processes"
  • Artifact for "Learning and Repair of Deep Reinforcement Learning Policies from Fuzz-Testing Data"
  • Artifact for "Revisiting Android App Categorization"
  • Artifact for the Paper "Inferring Data Preconditions from Deep Learning Models for Trustworthy Prediction in Deployment"
  • Artifact for “TRIAD: Automated Traceability Recovery based on Biterm-enhanced Deduction of Transitive Links among Artifacts”
  • Artifact for "Understanding Transaction Bugs in Database Systems"
  • Artifact for "Unveiling Hurdles in Software Engineering Education: The Role of Learning Management Systems"
  • Artifact of CIT4DNN: Generating Diverse and Rare Inputs for Neural Networks Using Latent Space Combinatorial Testing
  • Artifact of "ReClues: Representing and Indexing Failures in Parallel Debugging with Program Variables"
  • Artifact: S3C: Spatial Semantic Scene Coverage for Autonomous Vehicles
  • Artifacts Evaluation Instructions: #2405 Combining Structured Static Code Information and Dynamic Symbolic Traces for Software Vulnerability Prediction
  • Artifact: Smart Contract and DeFi Security Tools: Do They Meet the Needs of Practitioners?
  • [Artifact] The Classics Never Go Out of Style: An Empirical Study of Downgrades from the Bazel Build Technology
  • Assessing the impact of hints in learning formal specification: Research artifact
  • Assessment Criteria for Sustainable Software Engineering Processes
  • A Study on the Pythonic Functional Constructs’ Understandability
  • A Theory of Scientific Programming Efficacy
  • Automated Detection of AI-Obfuscated Plagiarism in Modeling Assignments
  • Automated Program Repair, What Is It Good For? Not Absolutely Nothing!
  • Automatically Detecting Reflow Accessibility Issues in Responsive Web Pages
  • BOMs Away! Inside the Minds of Stakeholders: A Comprehensive Study of Bills of Materials for Software Systems
  • Breaking the Flow: A Study of Interruptions During Software Engineering Activities
  • CERT: Finding Performance Issues in Database Systems Through the Lens of Cardinality Estimation
  • Challenges, Strengths, and Strategies of Software Engineers with ADHD: A Case Study
  • Co-Creation in Fully Remote Software Teams
  • CodeGRITS: A Research Toolkit for Developer Behavior and Eye Tracking in IDE
  • CoderEval: A Benchmark of Pragmatic Code Generation with Generative Pre-trained Models
  • Coding with a Creative Twist: Investigating the Link Between Creativity Scores and problem-solving Strategies
  • Concrete Constraint Guided Symbolic Execution
  • Constraint Based Program Repair for Persistent Memory Bugs
  • CSChecker: Revisiting GDPR and CCPA Compliance of Cookie Banners on the Web
  • Data and Material for Energy Patterns for Web: An Exploratory Study
  • Dataflow Analysis-Inspired Deep Learning for Efficient Vulnerability Detection
  • Dealing With Cultural Dispersion: a Novel Theoretical Framework for Software Engineering Research and Practice—Artifact Evaluation
  • Demystifying Compiler Unstable Feature Usage and Impacts in the Rust Ecosystem
  • Detecting Automatic Software Plagiarism via Token Sequence Normalization
  • Detecting Logic Bugs in Graph Database Management Systems via Injective and Surjective Graph Query Transformation
  • ECFuzz: Effective Configuration Fuzzing for Large-Scale Systems
  • EDEFuzz: A Web API Fuzzer for Excessive Data Exposures
  • eFish’nSea: Unity Game Set for Learning Software Performance Issues Root Causes and Resolutions
  • Empirical Study of the Docker Smells Impact on the Image Size
  • Evaluation of Information Flows Specifications from Software Documentation
  • Extrapolating Coverage Rate in Greybox Fuzzing - Artifacts
  • Finding XPath Bugs in XML Document Processors via Differential Testing
  • FlakeSync: Automatically Repairing Async Flaky Tests
  • FlashSyn: Flash Loan Attack Synthesis via Counter Example Driven Approximation
  • Fuzz4All: Universal Fuzzing with Large Language Models
  • FuzzSlice: Pruning False Positives in Static Analysis Warnings through Function-Level Fuzzing
  • How to Support ML End-User Programmers through a Conversational Agent
  • Hypertesting of Programs: Theoretical Foundation and Automated Test Generation (Artifact)
  • ICSE 2024 Artifact: Translation Validation for JIT Compiler in the V8 JavaScript Engine
  • It's Not a Feature, It's a Bug: Fault-Tolerant Model Mining from Noisy Data
  • Kind Controllers and Fast Heuristics for Non-Well-Separated GR(1) Specifications: Artifact
  • Knowledge Graph Driven Inference Testing for Question Answering Software
  • Let's Ask AI About Their Programs: Exploring ChatGPT's Answers To Program Comprehension Questions
  • Leveraging Large Language Models to Improve REST API Testing
  • LibvDiff: Library Version Difference Guided OSS Version Identification in Binaries
  • Lost in Translation: A Study of Bugs Introduced by Large Language Models while Translating Code
  • MAFT: Efficient Model-Agnostic Fairness Testing for Deep Neural Networks via Zero-Order Gradient Search
  • Modularizing while Training: A New Paradigm for Modularizing DNN Models
  • Naturalness of Attention: Revisiting Attention in Code Language Models
  • On the Helpfulness of Answering Developer Questions on Discord with Similar Conversations and Posts from the Past
  • Optimistic Prediction of Synchronization-Reversal Data Races
  • PPT4J: Patch Presence Test for Java Binaries
  • Precise Sparse Abstract Execution via Cross-Domain Interaction
  • Predicting Performance and Accuracy of Mixed-Precision Programs for Precision Tuning
  • PyTy: Repairing Static Type Errors in Python
  • Recovering Trace Links Between Software Documentation And Code
  • Re(gEx|DoS)Eval: Evaluating Generated Regular Expressions and their Proneness to DoS Attacks
  • REOM: A Reverse Engineering Framework for On-device TensorFlow-Lite (TFLite) Models
  • Replication of Semantic GUI Scene Learning and Video Alignment for Detecting Duplicate Video-based Bug Reports
  • Research Artifact: "My GitHub Sponsors profile is live!" Investigating the Impact of Twitter/X Mentions on GitHub Sponsors
  • Resource Usage and Optimization Opportunities in Workflows of GitHub Actions
  • Ripples of a Mutation — An Empirical Study of Propagation Effects in Mutation Testing
  • RPG: Rust Library Fuzzing with Pool-based Fuzz Target Generation and Generic Support
  • Safeguarding DeFi Smart Contracts against Oracle Deviations
  • Scalable Teaching of Software Engineering Theory and Practice: An Experience Report
  • Scaling Code Pattern Inference with Interactive What-If
  • SCTrans: Constructing a Large Public Scenario Dataset for Simulation Testing of Autonomous Driving Systems
  • Semantic Analysis of Macro Usage for Portability
  • Semantic-Enhanced Static Vulnerability Detection in Baseband Firmware
  • SERGE–Serious Game for the Education of Risk Management in Software Project Management
  • Supporting Web-based API Searches in the IDE Using Signatures
  • Symbol-Specific Sparsification of Interprocedural Distributive Environment Problems
  • Toward Improved Deep Learning-based Vulnerability Detection
  • Towards Finding Accounting Errors In Smart Contracts
  • TypeEvalPy: A Micro-benchmarking Framework for Python Type Inference Tools
  • Using an LLM to Help With Code Understanding
  • VeRe: Verification Guided Synthesis for Repairing Deep Neural Networks
  • Verifying Declarative Smart Contracts.
  • VGX: Large-Scale Sample Generation for Boosting Learning-Based Software Vulnerability Analyses