ICSE 2021
Mon 17 May - Sat 5 June 2021

Artifacts play a vital role in Software Engineering research. According to ACM’s Artifact Review and Badging policy, an artifact is a “digital object that was either created by the authors to be used as part of the study or generated by the experiment itself.” Artifacts can be software systems, scripts used to run experiments, input datasets, raw data collected in the experiment, or scripts used to analyze results. High-quality artifacts of published research papers increase the likelihood that results can be independently replicated and reproduced by other researchers.

In this spirit, the artifacts track aims to review, promote, share, and catalog the research artifacts of accepted software engineering papers. Authors of papers accepted to the Technical Track can submit an artifact for evaluation as a candidate reusable, available, replicated, or reproduced artifact. As in previous years, authors of any prior SE work (published at ICSE or elsewhere) are also invited to submit an artifact for evaluation as a candidate replicated or reproduced artifact. The top two artifacts selected by the program committee will receive the Best Artifact Awards.

Accepted Research Artifacts

The badge(s) awarded to each artifact are shown in parentheses.

  • Abacus: A Tool for Precise Side-channel Analysis (Reusable)
  • A dataset of Vulnerable Code Changes of the Chromium OS project (Available, Reusable)
  • An Empirical Assessment of Global COVID-19 Contact Tracing Applications (Available, Reusable)
  • An Evolutionary Study of Configuration Design and Implementation in Cloud Systems (Reusable)
  • An Open Dataset for Onboarding new Contributors -- Empirical Study of OpenStack Ecosystem (Available, Reusable)
  • A Partial Replication of "RAICC: Revealing Atypical Inter-Component Communication in Android Apps" (Available, Reusable)
  • A Replication of Are Machine Learning Cloud APIs Used Correctly (Reusable)
  • A replication package for It Takes Two to Tango: Combining Visual and Textual Information for Detecting Duplicate Video-Based Bug Reports (Reusable)
  • A Replication Package for PyCG - Practical Call Graph Generation in Python (Available, Reusable)
  • Artifact Abstract for An Empirical Analysis of UI-based Flaky Tests (Available, Reusable)
  • Artifact: Distribution-Aware Testing of Neural Networks Using Generative Models (Available, Reusable)
  • Artifact for Enhancing Genetic Improvement of Software with Regression Test Selection (Available, Reusable; ACM SIGSOFT Distinguished Artifact Award)
  • Artifact for "GenTree: Using Decision Trees to Learn Interactions for Configurable Software" (Available, Reusable)
  • Artifact for Improving Fault Localization by Integrating Value and Predicate Based Causal Inference Techniques (Available, Reusable)
  • Artifact of "FLACK: Counterexample-Guided Fault Localization for Alloy Models" (Available, Reusable)
  • Artifact of ICSE 2021 Technical Track Submission #653: Bounded Exhaustive Search of Alloy Specification Repairs (Available, Reusable)
  • Artifact: Reducing DNN Properties to Enable Falsification with Adversarial Attacks (Available, Reusable)
  • A Survey on Method Naming Standards: Questions and Responses Artifact (Available, Reusable)
  • CIBench: A Dataset and Collection of Techniques for Build and Test Selection and Prioritization in Continuous Integration (Available, Reusable)
  • CodeShovel: A Reusable and Available Tool for Extracting Source Code Histories (Available, Reusable)
  • Data and Materials for: Why don't Developers Detect Improper Input Validation? '; DROP TABLE Papers; -- (Available, Reusable)
  • Dataset to Study Indirectly Dependent Documentation in GitHub Repositories (Available, Reusable)
  • EvoSpex: An Evolutionary Algorithm for Learning Postconditions (Reusable)
  • FlakeFlagger: Predicting Flakiness Without Rerunning Tests (Available, Reusable)
  • IMGDroid: Detecting Image Loading Defects in Android Applications (Available, Reusable)
  • IoT Bugs and Development Challenges (Reusable)
  • JEST: N+1-version Differential Testing of Both JavaScript Engines and Specification (Available, Reusable)
  • JUSTGen: Effective Test Generation for Unspecified JNI Behaviors on JVMs (Available, Reusable)
  • MAANA: An Automated Tool for DoMAin-specific HANdling of Ambiguity (Available, Reusable)
  • PASTA: Synthesizing Object State Transformers for Dynamic Software Updates (Reusable)
  • PLELog: Log-based Anomaly Detection via Probabilistic Label Estimation (Available, Reusable)
  • Program Comprehension and Code Complexity Metrics: A Replication Package of an fMRI Study (Available, Reusable)
  • PyART: Python API Recommendation in Real-Time (Reusable)
  • Replication of SOAR: A Synthesis Approach for Data Science API Refactoring (Available, Reusable)
  • Replication Package for Article: Data-Oriented Differential Testing of Object-Relational Mapping Systems (Available, Reusable; ACM SIGSOFT Distinguished Artifact Award)
  • Replication package for Input Algebras (Available, Reusable)
  • Replication package for Representation of Developer Expertise in Open Source Software (Available, Reusable)
  • Research Artifact: The Potential of Meta-Maintenance on GitHub (Available, Reusable)
  • Research tools, survey responses, and interview analysis from a case study of onboarding software teams at Microsoft (Available, Reusable)
  • RusTINA: Automatically Checking and Patching Inline Assembly Interface Compliance (Artifact Evaluation) (Available, Reusable)
  • Scalable Quantitative Verification For Deep Neural Networks (Reusable)
  • Semantic Patches for Adaptation of JavaScript Programs to Evolving Libraries (Reusable)
  • Shipwright: A Human-in-the-Loop System for Dockerfile Repair (Available, Reusable)
  • Smart Contract Security: a Practitioners' Perspective (Available, Reusable)
  • Survey Instruments for "How Was Your Weekend?" Software Development Teams Working From Home During COVID-19 (Available, Reusable)
  • ThEodorE: a Trace Checker for CPS Properties (Available, Reusable)
  • Too Quiet in the Library: An Empirical Study of Security Updates in Android Apps' Native Code (Available, Reusable)
  • Understanding Community Smells Variability: A Statistical Approach - Replication Package Instructions (Available, Reusable)
  • Unrealizable Cores for Reactive Systems Specifications: Artifact (Reusable)
  • Verifying Determinism in Sequential Programs (Reusable)
  • White-Box Performance-Influence Models: A Profiling and Learning Approach (Replication Package) (Available, Reusable)

Call For Artifact Submissions

Authors of papers accepted to the 2021 Technical/SEIP/NIER/SEET/SEIS Track are invited to submit artifacts associated with those papers to the ICSE Artifact Track for evaluation as candidate reusable, available, replicated or reproduced artifacts. Authors of any prior Software Engineering work (published at ICSE or elsewhere) are also eligible (and invited) to submit an artifact for evaluation as a candidate replicated or reproduced artifact. Accepted artifacts will each receive one (and only one) of the badges below on the front page of the authors’ paper and in the proceedings.

In addition, authors of any prior SE research work (published at ICSE or elsewhere) are invited to submit an artifact to the ICSE Artifact Track for evaluation as a candidate replicated or reproduced artifact. These badges indicate that the original work has been independently (externally) replicated or reproduced by authors other than those of the original work; they will be assigned digitally and retrospectively (if supported by the respective publisher). If the artifact is accepted:

  • Authors will be invited to give lightning talks on this work at ICSE’21
  • We will do our best to work with the IEEE Xplore and ACM Portal administrator to add badges to the electronic versions of the authors’ paper(s).


The badges and their criteria are as follows:

  • Functional (ICSE 21 submissions only; no badge is assigned at this level): Artifacts documented, consistent, complete, exercisable, and include appropriate evidence of verification and validation.
  • Reusable (ICSE 21 submissions only): Functional + very carefully documented and well-structured to the extent that reuse and repurposing is facilitated. In particular, norms and standards of the research community for artifacts of this type are strictly adhered to.
  • Available (ICSE 21 submissions only): Reusable + placed on a publicly accessible archival repository. A DOI or link to this repository along with a unique identifier for the object is provided.
  • Replicated (open to any submission): Available + main results of the paper have been obtained in a subsequent study by a person or team other than the authors, using, in part, artifacts provided by the author.
  • Reproduced (open to any submission): Available + the main results of the paper have been independently obtained in a subsequent study by a person or team other than the authors, without the use of author-supplied artifacts.
Please note, as outlined in the submission instructions and reviewing guidelines below, that we assign either the “Available” badge or the “Reusable” badge, but not both. The “Available” badge is awarded only to artifacts which are also reusable.

All accepted abstracts documenting the artifacts will also be published in the ICSE 2021 proceedings as a further form of recognition.

In principle, papers with badges from the artifact evaluation track contain reusable products that other researchers can use to bootstrap their own research. Experience shows that such papers earn increased citations and greater prestige in the research community. Artifacts of interest include (but are not limited to) the following:

  • Software, which consists of implementations of systems or algorithms potentially useful in other studies.

  • Data repositories, which are data (e.g., logging data, system traces, survey raw data) that can be used for multiple software engineering approaches.

  • Frameworks, which are tools and services illustrating new approaches to software engineering that could be used by other researchers in different contexts.

This list is not exhaustive, so the authors are asked to email the chairs before submitting if their proposed artifact is not on this list. Further information on data sharing principles and approaches, along with an introduction to the general notion of open science, can be found in the book chapter Open Science in Software Engineering: https://arxiv.org/abs/1904.06499

Important Dates

Artifacts Evaluation Pre-submission Registration Deadline: 15 January 2021 (mandatory)

Artifact Evaluation Submissions Deadline: 22 January 2021

Artifact Evaluation Acceptance Notification: 24 February 2021

Best Artifact Awards

There will be two ICSE 2021 Best Artifact Awards to recognize the effort of authors creating and sharing outstanding research artifacts.

Submission Instructions and Reviewing Guidelines

Submission instructions and reviewing guidelines can both be found in the Submission and Reviewing Guidelines. This document details the submission process, the expected contents of the artifacts, and the criteria for awarding the respective badges, with the aim of increasing transparency for both authors and reviewers.

It is important that authors submitting to this track carefully read the guidelines prior to their submission.

In the following, we briefly summarize key aspects of the submission process depending on the envisioned badge. For details, please refer to the provided guidelines.

Submission Process Overview

In principle, authors are expected to submit their artifact documentation through EasyChair. This documentation, captured in one central research abstract (two pages max), distinguishes two basic types of information depending on the envisioned badge:

  1. Replicated and Reproduced, where the emphasis lies on providing information about how the already published research has been replicated or reproduced, as well as links to further material (e.g., the papers and artifacts in question). Note that we encourage submissions for these badges to also nominate other authors (e.g., when authors who have reproduced study results want to nominate the authors of the original study being replicated/reproduced).
  2. Reusable and Available, where the emphasis lies on providing documentation on the research artifact previously prepared and archived. Here, the authors need to write and submit documentation explaining how to obtain the artifact package, how to unpack the artifact, how to get started, and how to use the artifacts in more detail. The submission must only describe the technicalities of the artifacts and uses of the artifact that are not already described in the paper.

Note that if the authors are aiming for the badges Available and beyond, the artifact needs to be publicly accessible at the time of submission. This means that the EasyChair submission should include only the research abstract, providing links to the repositories where the artifact is permanently stored and available. Submitting artifacts themselves through EasyChair without making them publicly accessible (through a repository or an archival service) will not be sufficient for any further badge. Authors applying for the badge Reusable do not necessarily have to make the artifacts publicly accessible for the review process. In that case, the authors are asked to provide either a private or password-protected link to a repository, or they may submit the artifact directly through EasyChair (in a zip file); either way, it should be clear which steps are necessary for someone who would like to reuse the artifact.

Details on the research artifacts themselves are provided next.

Submission for Replicated and Reproduced Badges

For “replicated” and “reproduced” badges, authors will need to offer appropriate documentation that their artifacts have reached that stage.

By January 15, 2021 register your research artifact at the ICSE EasyChair site by submitting a two-page (max) abstract in PDF format describing your artifact.

The abstract should include the paper title, the purpose of the research artifact, the badge(s) you are claiming, and the technology skills assumed by the reviewer evaluating the artifact. Please also mention if running your artifact requires specific Operating Systems or other environments.

  • TITLE: A (Partial)? (Replication|Reproduction) of XYZ. Please add the term “partial” to your title if only some of the original work could be replicated/reproduced.
  • WHO: name the original authors (and paper) and the authors that performed the replication/reproduction. Include contact information and mark one author as the corresponding author.
    IMPORTANT: include also a web link to a publicly available URL directory containing (a) the original paper (that is being reproduced) and (b) any subsequent paper(s)/documents/reports that do the reproduction.

  • WHAT: describe the “thing” being replicated/reproduced;
  • WHY: clearly state why that “thing” is interesting/important;
  • HOW: describe how it was originally done;
  • WHERE: describe the replication/reproduction. If the replication/reproduction was only partial, then explain what parts could be achieved or had to be missed.
  • DISCUSSION (if applicable): What aspects of this “thing” made it easier/harder to replicate/reproduce? What are the lessons learned from this work that would enable more replication/reproduction in the future, for other kinds of tasks or other kinds of research?

Two PC members will review each abstract, possibly reaching out to the authors of the abstract or original paper. Abstracts will be ranked as follows.

  • If PC members do not find sufficient substantive evidence for replication/reproduction, the abstract will be rejected.
  • Any abstract that is judged to be unnecessarily critical towards others in the research community will be rejected (*).
  • The remaining abstracts will be sorted according to (a) how interesting they are to the community and (b) their correctness.
  • The top ranked abstracts will be invited to give lightning talks.

(*) Please note that our goal is to foster a positive environment that supports and rewards researchers for conducting replications and reproductions. To that end, it is important to encourage an atmosphere where presentations pay due respect both to the work being reproduced/replicated and to the reproductions/replications themselves. Criticism of other work related to the reproduction/replication is acceptable only as part of a balanced and substantive discussion of prior accomplishments.

Submission for Reusable and Available Badges

Only authors of papers accepted to the 2021 Technical/SEIP/NIER/SEET/SEIS Track can submit candidate reusable or available artifacts.

By January 15, 2021 register your research artifact at the ICSE EasyChair site by submitting a two-page (max) abstract in PDF format describing your artifact.

For the reusable and available badges, authors must offer “download information” showing how reviewers can access and execute (if appropriate) their artifact.

Authors must perform the following steps to submit an artifact:

  1. Preparing the artifact
  2. Making the artifact publicly available (by using repositories granting public access)
  3. Documenting the artifact
  4. Submitting the artifact


1. Preparing the Artifact

There are two options depending on the nature of the artifacts: Installation Package or Simple Package. In both cases, the configuration and installation for the artifact should take less than 30 minutes. Otherwise, the artifact is unlikely to be endorsed simply because the committee will not have sufficient time to evaluate it.

Installation Package. If the artifact consists of a tool or software system, then the authors need to prepare an installation package so that the tool can be installed and run in the evaluator’s environment. Provide enough associated instructions, code, and data such that a CS person with reasonable knowledge of scripting, build tools, etc., could install, build, and run the code. If the artifact contains or requires the use of a special tool or any other non-trivial piece of software, the authors must provide a VirtualBox VM image or a Docker container image with a working environment containing the artifact and all the necessary tools.

We expect that the artifacts have been vetted on a clean machine before submission.
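
For example, an installation package might include a small self-check script along the following lines (a minimal sketch; the file name, minimum Python version, and package names are purely illustrative and not prescribed by the track), so that a reviewer working on a clean machine can quickly confirm that the environment is complete:

    # check_env.py: hypothetical post-installation self-check (illustrative names only)
    import importlib.util
    import sys

    REQUIRED = ["numpy", "pandas"]  # replace with the artifact's actual dependencies

    def main() -> int:
        # Reject interpreters older than the (assumed) minimum version.
        if sys.version_info < (3, 8):
            print("Python 3.8 or newer is required, found", sys.version.split()[0])
            return 1
        # Report any declared dependency that cannot be located.
        missing = [pkg for pkg in REQUIRED if importlib.util.find_spec(pkg) is None]
        if missing:
            print("Missing packages:", ", ".join(missing))
            return 1
        print("Environment OK: all required packages are importable.")
        return 0

    if __name__ == "__main__":
        sys.exit(main())

Pointing to such a script from the installation instructions gives reviewers an immediate, unambiguous signal that the installation succeeded.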

Simple Package. If the artifact only contains documents which can be used with a simple text editor, a PDF viewer, or some other common tool (e.g., a spreadsheet program in its basic configuration), the authors can simply save all documents in a single package file (zip or tar.gz).


2. Making the Artifact Available

The authors need to make the packaged artifact (installation package or simple package) available so that the Evaluation Committee can access it. We suggest a link to a public repository (e.g., GitHub) or to a single archive file in a widely available archive format.

If the authors are aiming for the badges “available” and beyond, the artifact needs to be publicly accessible. In other cases, the artifacts do not necessarily have to be publicly accessible for the review process; the authors are then asked to provide a private link or a password-protected link. In any case, we encourage the authors to use permanent repositories dedicated to data sharing where no registration is necessary for those accessing the artifacts (e.g., please avoid using services such as GoogleDrive).


3. Documenting the Artifact

The authors need to write and submit documentation explaining how to obtain the artifact package, how to unpack the artifact, how to get started, and how to use the artifacts in more detail. The artifact submission must only describe the technicalities of the artifacts and uses of the artifact that are not already described in the paper.

The submission should contain the following documents (in plain text or pdf format) in a zip archive; a brief completeness-check sketch follows the list:

  • A README main file describing what the artifact does and where it can be obtained (with hidden links and access password if necessary). It should also clearly describe how to repeat/replicate/reproduce the results presented in the paper. Artifacts which focus on data should, in principle, cover aspects relevant to understanding the context, data provenance, ethical and legal statements (where relevant), and storage requirements. Artifacts which focus on software should, in principle, cover aspects relevant to how to install and use it (and be accompanied by a small example).
  • A REQUIREMENTS file for artifacts which focus on software. This file should, in principle, cover aspects of hardware environment requirements (e.g., performance, storage or non-commodity peripherals) and software environments (e.g., Docker, VM, and operating system) but also, if relevant, a requirements.txt with explicit versioning information (e.g. for Python-only environments). Any deviation from standard environments needs to be reasonably justified.
  • A STATUS file stating what kind of badge(s) the authors are applying for as well as the reasons why the authors believe that the artifact deserves that badge(s).
  • A LICENSE file describing the distribution rights. Note that to score “available” or higher, the license needs to be some form of open source license. Details can also be found under the respective badges and the ICSE 2021 open science policy.
  • An INSTALL file with installation instructions. These instructions should include notes illustrating a very basic usage example or a method to test the installation, for instance describing what output to expect that confirms that the code is installed and working, and that the code is doing something interesting and useful.
  • A copy of the accepted paper in pdf format.
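
Before submitting, authors might also run a quick completeness check over the zip archive, for instance along the following lines (a minimal sketch; the script name is hypothetical and the check is not required by the track; it merely looks for the documents listed above):

    # check_package.py: hypothetical pre-submission completeness check (illustrative only)
    import sys
    import zipfile

    REQUIRED = {"README", "REQUIREMENTS", "STATUS", "LICENSE", "INSTALL"}

    def main(archive: str) -> int:
        # Collect the base names of all entries in the archive.
        with zipfile.ZipFile(archive) as zf:
            names = [entry.rsplit("/", 1)[-1] for entry in zf.namelist()]
        stems = {name.split(".")[0].upper() for name in names if name}
        missing = sorted(REQUIRED - stems)
        if missing:
            print("Missing documents:", ", ".join(missing))
        if not any(name.lower().endswith(".pdf") for name in names):
            print("No PDF found: remember to include a copy of the accepted paper.")
        if not missing:
            print("All required documents are present.")
        return 1 if missing else 0

    if __name__ == "__main__":
        if len(sys.argv) != 2:
            print("usage: python check_package.py <archive.zip>")
            sys.exit(2)
        sys.exit(main(sys.argv[1]))

Invoked as "python check_package.py artifact.zip", the sketch reports any document from the checklist that is missing from the archive.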


4. Submitting the Artifact

By January 15, 2021 register your research artifact at the ICSE EasyChair site by submitting an abstract describing your artifact. The abstract should include the paper title, the purpose of the research artifact, the badge(s) you are claiming, and the technology skills assumed by the reviewer evaluating the artifact. Please also mention if running your artifact requires specific operating systems or other environments.

By January 22, 2021 complete your submission by making sure that all the content related to the actual artifact is available.

The Evaluation Committee may contact the authors within the Rebuttal Period to request clarifications on the basic installation and start-up procedures or to resolve simple installation problems. Information on the rebuttal phase is provided in the Submission and Reviewing Guidelines. Further instructions will be sent to the authors (and reviewers) during the reviewing process.

Given the short review time available, the authors are expected to respond within a 48-hour period. Authors may update their research artifact after submission only for changes requested by reviewers in the rebuttal phase. Authors submitting an open source repository link are expected to create a tag to time-stamp their submission.


Finally, further information will be constantly made available on the website https://conf.researchr.org/track/icse-2021/icse-2021-Artifact-Evaluation.

In case of questions, please do not hesitate to contact the chairs.

Looking forward to welcoming you soon!

Silvia Abrahão, Daniel Mendez

