ISSTA 2019
Mon 15 - Fri 19 July 2019 Beijing, China


28th ACM SIGSOFT International Symposium on Software Testing and Analysis
(ISSTA 2019) Beijing, China - July 15-19, 2019.
https://conf.researchr.org/home/issta-2019
Submission deadline: January 28, 2019.


ISSTA is the leading research symposium on software testing and analysis, bringing together academics, industrial researchers, and practitioners to exchange new ideas, problems, and experience on how to analyze and test software systems. ISSTA 2019 will be held in Beijing, China, on July 15-19, 2019.

ISSTA 2019 will be co-located with SPIN 2019.

Program

Wed 17 Jul

Displayed time zone: Beijing, Chongqing, Hong Kong, Urumqi

11:00 - 12:30
Program Repair (Technical Papers) at Grand Ballroom
Chair(s): Yingfei Xiong Peking University
11:00
22m
Talk
Crash-avoiding Program Repair
Technical Papers
Xiang Gao National University of Singapore, Sergey Mechtaev University College London, Abhik Roychoudhury National University of Singapore
11:22
22m
Talk
Practical Program Repair via Bytecode Mutation (Artifacts Functional)
Technical Papers
Ali Ghanbari Iowa State University, Samuel Benton The University of Texas at Dallas, Lingming Zhang
Pre-print
11:45
22m
Talk
TBar: Revisiting Template-based Automated Program Repair (Artifacts Functional)
Technical Papers
Kui Liu , Anil Koyuncu University of Luxembourg, Luxembourg, Dongsun Kim Furiosa.ai, Tegawendé F. Bissyandé SnT, University of Luxembourg
Pre-print
12:07
22m
Talk
History-driven Build Failure Fixing: How Far Are We? (Distinguished Paper Award)
Technical Papers
Yiling Lou Peking University, China, Junjie Chen Peking University, Lingming Zhang , Dan Hao Peking University, Lu Zhang Peking University
14:00 - 15:30
Mobile App Testing (Technical Papers) at Grand Ballroom
Chair(s): Xiaoyin Wang University of Texas at San Antonio, USA
14:00
22m
Talk
LibID: Reliable Identification of Obfuscated Third-Party Android Libraries
Technical Papers
Jiexin Zhang University of Cambridge, Alastair R. Beresford University of Cambridge, UK, Stephan A. Kollmann University of Cambridge
DOI Pre-print
14:22
22m
Talk
QADroid: Regression Event Selection for Android Applications (Artifacts Reusable, Artifacts Functional)
Technical Papers
Aman Sharma IIT Madras, Rupesh Nasre IIT Madras, India
14:45
22m
Talk
Mining Android Crash Fixes in the Absence of Issue- and Change-Tracking Systems
Technical Papers
Pingfan Kong Interdisciplinary Centre for Security, Reliability and Trust, University of Luxembourg, Li Li Monash University, Australia, Jun Gao University of Luxembourg, SnT, Tegawendé F. Bissyandé SnT, University of Luxembourg, Jacques Klein University of Luxembourg, SnT
15:07
22m
Talk
SARA: Self-replay Augmented Record and Replay for Android in Industrial Cases
Technical Papers
Jiaqi Guo Xi'an Jiaotong University, Shuyue Li Xi'an Jiaotong University, Jian-Guang Lou Microsoft Research, Zijiang Yang Western Michigan University, Ting Liu MOEKLINNS Lab, Department of Computer Science and Technology, Xi'an Jiaotong University, 710049, China

Thu 18 Jul

11:00 - 12:30
Regression Testing (Technical Papers) at Grand Ballroom
Chair(s): Dan Hao Peking University
11:00
22m
Talk
Root Causing Flaky Tests in a Large-scale Industrial Setting
Technical Papers
Wing Lam University of Illinois at Urbana-Champaign, Patrice Godefroid Microsoft Research, Suman Nath Microsoft Corporation, Anirudh Santhiar Indian Institute of Science, Suresh Thummalapenta
11:22
22m
Talk
Mitigating the Effects of Flaky Tests on Mutation Testing
Technical Papers
August Shi University of Illinois at Urbana-Champaign, Jonathan Bell George Mason University, Darko Marinov University of Illinois at Urbana-Champaign
Pre-print Media Attached
11:45
22m
Talk
Assessing the State and Improving the Art of Parallel Testing for C (Artifacts Reusable, Artifacts Functional)
Technical Papers
Oliver Schwahn TU Darmstadt, Nicolas Coppik TU Darmstadt, Stefan Winter TU Darmstadt, Neeraj Suri
12:07
22m
Talk
Failure Clustering Without Coverage
Technical Papers
14:00 - 15:30
Testing and Machine Learning (Technical Papers) at Grand Ballroom
Chair(s): Hongyu Zhang The University of Newcastle
14:00
22m
Talk
DeepHunter: A Coverage-Guided Fuzz Testing Framework for Deep Neural Networks
Technical Papers
Xiaofei Xie Nanyang Technological University, Lei Ma Kyushu University, Felix Juefei-Xu Carnegie Mellon University, Minhui Xue , Hongxu Chen Nanyang Technological University, Yang Liu Nanyang Technological University, Singapore, Jianjun Zhao Kyushu University, Bo Li UIUC, Jianxiong Yin NVIDIA AI Tech Centre, Simon See NVIDIA AI Tech Centre
14:22
22m
Talk
Search-based Test and Improvement of Machine-Learning-Based Anomaly Detection Systems (Artifacts Reusable, Artifacts Functional)
Technical Papers
Maxime Cordy SnT, University of Luxembourg, Steve Muller unaffiliated, Mike Papadakis University of Luxembourg, Yves Le Traon University of Luxembourg
14:45
22m
Talk
DeepFL: Integrating Multiple Fault Diagnosis Dimensions for Deep Fault Localization (Artifacts Reusable, Artifacts Functional, Distinguished Paper Award)
Technical Papers
Xia Li University of Texas at Dallas, USA, Wei Li Southern University of Science and Technology, Yuqun Zhang Southern University of Science and Technology, Lingming Zhang
15:07
22m
Talk
Codebase-Adaptive Detection of Security-Relevant Methods (Artifacts Functional)
Technical Papers
Goran Piskachev Fraunhofer IEM, Lisa Nguyen Quang Do Paderborn University, Eric Bodden Heinz Nixdorf Institut, Paderborn University and Fraunhofer IEM
DOI Pre-print Media Attached File Attached
16:00 - 17:30
APIs and Symbolic Execution (Technical Papers) at Grand Ballroom
Chair(s): Moonzoo Kim KAIST
16:00
22m
Talk
Effective and Efficient API Misuse Detection via Exception Propagation and Search-based Testing (Artifacts Reusable, Artifacts Functional)
Technical Papers
Maria Kechagia University College London, Xavier Devroey Delft University of Technology, Annibale Panichella Delft University of Technology, Georgios Gousios TU Delft, Arie van Deursen Delft University of Technology
DOI Pre-print Media Attached
16:22
22m
Talk
Automated API-Usage Update for Android Apps (Artifacts Functional)
Technical Papers
Mattia Fazzini Georgia Institute of Technology, Qi Xin Georgia Institute of Technology, Alessandro Orso Georgia Tech
16:45
22m
Talk
A Large-Scale Study of Application Incompatibilities in Android (Artifacts Functional)
Technical Papers
Haipeng Cai Washington State University, USA, Ziyi Zhang , Li Li Monash University, Australia, Xiaoqin Fu Washington State University
Pre-print
17:07
22m
Talk
Deferred Concretization in Symbolic Execution via Fuzzing
Technical Papers
Awanish Pandey IIT Kanpur, India, Phani Raj Goutham Kotcharlakota , Subhajit Roy IIT Kanpur, India

Fri 19 Jul

11:00 - 12:30
Static Analysis and Debugging (Technical Papers) at Grand Ballroom
Chair(s): Arie van Deursen Delft University of Technology
11:00
22m
Talk
Differentially Testing Soundness and Precision of Program Analyzers
Technical Papers
Christian Klinger University of Texas, Austin, Maria Christakis MPI-SWS, Valentin Wüstholz ConsenSys Diligence
Pre-print
11:22
22m
Talk
Judge: Identifying, Understanding, and Evaluating Sources of Unsoundness in Call Graphs
Technical Papers
Michael Reif TU Darmstadt, Germany, Florian Kübler TU Darmstadt, Germany, Michael Eichberg TU Darmstadt, Germany, Dominik Helm TU Darmstadt, Germany, Mira Mezini TU Darmstadt, Germany
Pre-print File Attached
11:45
22m
Talk
Adlib: Analyzer for Mobile Ad Platform Libraries (Artifacts Reusable, Artifacts Functional)
Technical Papers
Sungho Lee KAIST, South Korea, Sukyoung Ryu KAIST, South Korea
DOI Pre-print
12:07
22m
Talk
Interactive Metamorphic Testing of Debuggers
Technical Papers
Sandro Tolksdorf TU Darmstadt, Daniel Lehmann TU Darmstadt, Michael Pradel TU Darmstadt and Facebook
Link to publication DOI Pre-print
14:00 - 15:30
Testing GUIs and Cars (Technical Papers) at Grand Ballroom
Chair(s): Lingming Zhang
14:00
22m
Talk
TestMig: Migrating GUI Test Cases from iOS to Android
Technical Papers
Xue Qin , Hao Zhong Shanghai Jiao Tong University, Xiaoyin Wang University of Texas at San Antonio, USA
14:22
22m
Talk
Learning User Interface Element Interactions
Technical Papers
Christian Degott CISPA Helmholtz Center for Information Security, Nataniel Borges Jr. CISPA Helmholtz Center for Information Security, Andreas Zeller CISPA Helmholtz Center for Information Security
Pre-print Media Attached
14:45
22m
Talk
Improving Random GUI Testing with Image-based Widget Detection
Technical Papers
Thomas D. White The University of Sheffield, Gordon Fraser University of Passau, Guy J. Brown The University of Sheffield
15:07
22m
Talk
Automatically Testing Self-Driving Cars with Search-based Procedural Content Generation
Technical Papers
Alessio Gambi University of Passau, Marc Mueller BeamNG GmbH, Gordon Fraser University of Passau
16:00 - 17:30
Potpourri (Technical Papers) at Grand Ballroom
Chair(s): Andreas Zeller CISPA Helmholtz Center for Information Security
16:00
22m
Talk
Semantic Fuzzing with Zest (Artifacts Reusable, Artifacts Functional)
Technical Papers
Rohan Padhye University of California, Berkeley, Caroline Lemieux University of California, Berkeley, Koushik Sen University of California, Berkeley, Mike Papadakis University of Luxembourg, Yves Le Traon University of Luxembourg
Link to publication DOI Pre-print
16:22
22m
Talk
Detecting Memory Errors at Runtime with Source-Level Instrumentation (Distinguished Paper Award)
Technical Papers
Zhe Chen Nanjing University of Aeronautics and Astronautics, Junqi Yan Nanjing University of Aeronautics and Astronautics, Shuanglong Kan Nanjing University of Aeronautics and Astronautics, Ju Qian Nanjing University of Aeronautics and Astronautics, Jingling Xue UNSW Sydney
16:45
22m
Talk
Optimal Context-Sensitive Dynamic Partial Order Reduction with Observers
Technical Papers
Elvira Albert , Maria Garcia de la Banda Monash University, Miguel Gómez-Zamalloa Complutense University of Madrid, Miguel Isabel Complutense University of Madrid, Peter J. Stuckey Monash University
17:07
22m
Talk
Exploiting The Laws of Order in Smart Contracts
Technical Papers
Aashish Kolluri , Ivica Nikolić National University Of Singapore, Ilya Sergey Yale-NUS College and National University of Singapore, Aquinas Hobor , Prateek Saxena National University Of Singapore

Not scheduled yet

Not scheduled yet
Talk
Some Challenges for Software Testing Research (Invited Talk Abstract)
Technical Papers
Nadia Alshahwan Facebook, Andrea Ciancone Facebook, Mark Harman Facebook and University College London, Yue Jia University College London, Ke Mao Meta, Alexandru Marginean University College London, UK, Alexander Mols Facebook, Hila Peleg Technion, Israel, Federica Sarro University College London, UK, Ilya Zorin Facebook
Not scheduled yet
Talk
The Theory and Practice of String Solvers (Invited Talk Abstract)
Technical Papers
Adam Kiezun Principal Engineer, Amazon Inc., Philip Guo UCSD, Pieter Hooimeijer Engineering Manager, Facebook Inc., Michael D. Ernst University of Washington, USA, Vijay Ganesh University of Waterloo

Accepted Papers

Adlib: Analyzer for Mobile Ad Platform Libraries (Artifacts Reusable, Artifacts Functional)
Technical Papers
DOI Pre-print
A Large-Scale Study of Application Incompatibilities in Android (Artifacts Functional)
Technical Papers
Pre-print
Assessing the State and Improving the Art of Parallel Testing for C (Artifacts Reusable, Artifacts Functional)
Technical Papers
Automated API-Usage Update for Android Apps (Artifacts Functional)
Technical Papers
Automatically Testing Self-Driving Cars with Search-based Procedural Content Generation
Technical Papers
Codebase-Adaptive Detection of Security-Relevant Methods (Artifacts Functional)
Technical Papers
DOI Pre-print Media Attached File Attached
Crash-avoiding Program Repair
Technical Papers
DeepFL: Integrating Multiple Fault Diagnosis Dimensions for Deep Fault Localization (Artifacts Reusable, Artifacts Functional, Distinguished Paper Award)
Technical Papers
DeepHunter: A Coverage-Guided Fuzz Testing Framework for Deep Neural Networks
Technical Papers
Deferred Concretization in Symbolic Execution via Fuzzing
Technical Papers
Detecting Memory Errors at Runtime with Source-Level Instrumentation (Distinguished Paper Award)
Technical Papers
Differentially Testing Soundness and Precision of Program Analyzers
Technical Papers
Pre-print
Effective and Efficient API Misuse Detection via Exception Propagation and Search-based Testing (Artifacts Reusable, Artifacts Functional)
Technical Papers
DOI Pre-print Media Attached
Exploiting The Laws of Order in Smart Contracts
Technical Papers
Failure Clustering Without Coverage
Technical Papers
History-driven Build Failure Fixing: How Far Are We? (Distinguished Paper Award)
Technical Papers
Improving Random GUI Testing with Image-based Widget Detection
Technical Papers
Interactive Metamorphic Testing of Debuggers
Technical Papers
Link to publication DOI Pre-print
Judge: Identifying, Understanding, and Evaluating Sources of Unsoundness in Call Graphs
Technical Papers
Pre-print File Attached
Learning User Interface Element Interactions
Technical Papers
Pre-print Media Attached
LibID: Reliable Identification of Obfuscated Third-Party Android Libraries
Technical Papers
DOI Pre-print
Mining Android Crash Fixes in the Absence of Issue- and Change-Tracking Systems
Technical Papers
Mitigating the Effects of Flaky Tests on Mutation Testing
Technical Papers
Pre-print Media Attached
Optimal Context-Sensitive Dynamic Partial Order Reduction with Observers
Technical Papers
Practical Program Repair via Bytecode Mutation (Artifacts Functional)
Technical Papers
Pre-print
QADroid: Regression Event Selection for Android Applications (Artifacts Reusable, Artifacts Functional)
Technical Papers
Root Causing Flaky Tests in a Large-scale Industrial Setting
Technical Papers
SARA: Self-replay Augmented Record and Replay for Android in Industrial Cases
Technical Papers
Search-based Test and Improvement of Machine-Learning-Based Anomaly Detection Systems (Artifacts Reusable, Artifacts Functional)
Technical Papers
Semantic Fuzzing with Zest (Artifacts Reusable, Artifacts Functional)
Technical Papers
Link to publication DOI Pre-print
TBar: Revisiting Template-based Automated Program Repair (Artifacts Functional)
Technical Papers
Pre-print
TestMig: Migrating GUI Test Cases from iOS to Android
Technical Papers

Call for Submissions


Technical Papers

Authors are invited to submit research papers describing original contributions in testing or analysis of computer software. Papers describing original theoretical or empirical research, new techniques, in-depth case studies, or infrastructure for testing and analysis methods or tools are welcome.

Experience Papers

Authors are invited to submit experience papers describing a significant experience in applying software testing and analysis methods or tools. Such papers should carefully identify and discuss important lessons learned, so that other researchers and/or practitioners can benefit from the experience. Of special interest are experience papers that report on industrial applications of software testing and analysis methods or tools.

Reproducibility Studies

ISSTA would like to encourage researchers to reproduce results from previous papers. A reproducibility study must go beyond simply re-implementing an algorithm and/or re-running the artifacts provided by the original paper. It should at the very least apply the approach to new, significantly broadened inputs. Particularly, reproducibility studies are encouraged to target techniques that previously were evaluated only on proprietary subject programs or inputs. A reproducibility study should clearly report on results that the authors were able to reproduce as well as on aspects of the work that were irreproducible. In the latter case, authors are encouraged to make an effort to communicate or collaborate with the original paper’s authors to determine the cause for any observed discrepancies and, if possible, address them (e.g., through minor implementation changes). We explicitly encourage authors to not focus on a single paper/artifact only, but instead to perform a comparative experiment of multiple related approaches.

In particular, reproducibility studies should follow the ACM guidelines on reproducibility (different team, different experimental setup): The measurement can be obtained with stated precision by a different team, a different measuring system, in a different location on multiple trials. For computational experiments, this means that an independent group can obtain the same result using artifacts which they develop completely independently.
This means that it is also insufficient to focus on repeatability (i.e., same experiment) alone. Reproducibility Studies will be evaluated according to the following standards:

  • Depth and breadth of experiments
  • Clarity of writing
  • Appropriateness of Conclusions
  • Amount of useful, actionable insights
  • Availability of artifacts

In particular, we expect reproducibility studies to clearly point out the artifacts the study is built on, and to submit those artifacts to artifact evaluation (see below). Artifacts evaluated positively will be eligible to obtain the highly prestigious badges Results Replicated or Results Reproduced.

Submissions Guidelines

Submissions must be original and should not have been published previously or be under consideration for publication while being evaluated for this symposium. Authors are required to adhere to the ACM Policy and Procedures on Plagiarism and the ACM Policy on Prior Publication and Simultaneous Submissions. More details are available at the Submission Policies page.

Research and Experience Papers as well as Reproducibility Studies should be at most 10 pages in length, with at most 2 additional pages for references. All papers must be prepared in ACM Conference Format.

Submit your papers via the HotCRP ISSTA 2019 submission website.

Double-Blind Reviewing

ISSTA 2019 will conduct double-blind reviewing. Submissions should not reveal the identity of the authors in any way. Authors should leave out author names and affiliations from the body of their submission. They should also ensure that any citations to related work by themselves are written in third person, that is, "the prior work of XYZ" as opposed to "our prior work". More details are available at the Double-Blind Reviewing page. Authors with further questions on double-blind reviewing are encouraged to contact the Program Chair by email.

Supplementary Material

Authors are free to provide supplementary material if that material supports the claims in the paper. Such material may include proofs, experimental results, and/or data sets. This material should be uploaded at the same time as the submission. Any supplementary material must also be anonymized. Reviewers are not required to examine the supplementary material but may refer to it if they would like to find further evidence supporting the claims in the paper.

Reviews and Responses

Reviewing will happen in two phases. In Phase 1, each paper will receive three reviews, followed by an author response. Depending on the response, papers with negative reviews might be rejected early at this point. The remaining papers will proceed to Phase 2, in which they might receive additional reviews where necessary; authors can respond to these in a second author-response phase.

Distinguished Paper Awards

The program committee will select the best accepted papers for Distinguished Paper Awards. Positively evaluated artifacts will be taken into account for these awards.

Double-Blind Reviewing


ISSTA 2019 Guidelines on Double-Blind Reviewing

Why is ISSTA 2019 using double-blind reviewing?

Studies have shown that a reviewer’s attitude toward a submission may be affected, even subconsciously, by author identity. We want reviewers to be able to approach each submission without such involuntary reactions as “Barnaby; he writes good papers” or “Who are these people? I have never heard of them.” For this reason, we ask that authors omit their names from their submissions, and avoid revealing their identities through citations and text. Many systems, security, and programming language conferences use double-blind reviewing and have done so for years (e.g., SIGCOMM, OSDI, IEEE Security and Privacy, SIGMOD, PLDI). Software engineering conferences are gradually starting to adopt this model. In 2017, most of the software engineering conferences (ESEC/FSE, ISSTA, ICSME, MSR, ICPC) adopted double-blind reviewing, and in 2018 ICSE followed as well. In 2016, ISSTA decided to try out double-blind reviewing for a four-year trial period (ISSTA 2016-2019).

For those who are interested in the motivation for double-blind reviewing, a very well-argued, referenced, and evidenced article in favour of double-blind review processes for software engineering conferences can be found in the blog post by Claire Le Goues. There is also a list of double-blind resources from Robert Feldt, and a more formal study of the subject by Moritz Beller and Alberto Bacchelli.

Generally, this process will be cooperative, not adversarial. While the authors should take precautions not to reveal their identities (see details below), if a reviewer discovers the authors’ identities through a subtle oversight by the authors, the authors will not be penalized.

Do you really think blinding works? I suspect reviewers can often guess who the authors are.

Reviewers can sometimes guess the authorship correctly, though studies show this happens less often than people think. Still, imperfect blinding is better than no blinding at all, and even if all reviewers guess all authors’ identities correctly, double-blind reviewing simply becomes traditional single-blind reviewing.

Couldn’t blind submission create an injustice if a paper is inappropriately rejected because a reviewer is aware of prior unpublished work that was actually performed by the same authors?

The double-blind review process that we will be using for ISSTA 2019 is lightweight: author names will be revealed one week before the PC meeting, after all reviews have been collected. In this phase, the authors’ previous work can and will be explicitly considered.

What about additional information to support repeatability or verifiability of the reported results?

ISSTA 2019 puts a strong emphasis on the creation of quality artifacts and on the repeatability and verifiability of the experiences reported in the papers. An artifact evaluation committee is in place to review the artifacts accompanying accepted papers, without the need to conceal the identity of the authors.

For Authors

What exactly do I have to do to anonymize my paper?

Your job is not to make your identity undiscoverable, but to make it possible for our reviewers to evaluate your submission without knowing who you are. If you are concerned that particular information is especially easy to trace to you, consider adding a warning to reviewers in a footnote, e.g., “Note for reviewers: searching the commit logs of the GitHub projects we used in our evaluation may reveal authors’ identities.”

Also please remove any acknowledgements from the paper.

I would like to provide supplementary material for consideration, e.g., the code of my implementation or proofs of theorems. How do I do this?

In general, supplementary material should also be anonymized. Please do your best to avoid (i) having your names/affiliations in the artifact’s metadata (e.g., PDFs, spreadsheets, other documents) and (ii) having contributors’ names in the source code. To create a repository, you could use an anonymized cloud account (i.e., one created with a username not clearly attributable to the authors) or a similar solution.

If the code or the repository cannot be anonymized easily, please either (A) provide an anonymized URL (such as using a URL shortener like http://bit.ly) with a prominent warning to reviewers that following the link may unblind them or, (B) if this is not possible, remove the URL to the repository from the paper and, instead, state “link to repository removed for double-blind review” or similar. Once the author names are revealed, the reviewers can ask the PC chair for the URL, who will contact the authors.

Also note that the assessment of artifacts within the Artifact Evaluation happens after paper acceptance and is not double-blind!

I am building on my own past work on the WizWoz system. Do I need to rename this system in my paper for purposes of anonymity, so as to remove the implied connection between my authorship of past work on this system and my present submission?

No. In our opinion the risk involved in misjudging a technical contribution because of such anonymization would outweigh the risk of de-anonymizing authors. Hence you should refer to the original, true system names only.

Am I allowed to post my (non-blinded) paper on my web page? Can I advertise the unblinded version of my paper on mailing lists or send it to colleagues? May I give a talk about my work while it is under review?

As far as the authors’ publicity actions are concerned, a paper under double-blind review is largely the same as a paper under regular (single-blind) review. Double-blind reviewing should not hinder the usual communication of results. However, during the review period, please do not broadcast the work on social media. Also, to the extent possible, please avoid publishing a preprint of your work (e.g., on arXiv or on your website) until it has been accepted for publication. In exceptional cases this might be necessary, but then please avoid spreading the paper more actively.

Will the fact that ISSTA is double-blind have an impact on handling conflicts of interest?

Using double-blind reviewing does not change the principle that reviewers should not review papers with which they have a conflict of interest, even if they do not immediately know who the authors are. Conflicts of interest are identified based on the authors’ and reviewers’ names and affiliations, and they can be declared by both the authors and reviewers. Note: Do not over-declare conflicts! The PC chair will double-check author-declared conflicts. In case we are able to identify clearly spurious conflicts that the authors have no good argument for, this can lead to desk rejection of the paper.

For Reviewers

What should I do if I learn the authors’ identities? What should I do if a prospective ISSTA author contacts me and asks to visit my institution?

If at any point you feel that the authors’ actions are largely aimed at ensuring that potential reviewers know their identities, you should contact the PC Chair. If you are unsure, contact the PC Chair. Otherwise you should not treat double-blind reviewing differently from regular single-blind reviewing. You should refrain from seeking out information on the authors’ identities, but discovering it accidentally will not automatically remove you from reviewing a paper you have been assigned. Use your best judgment and feel free to contact us with any concerns.

How do we handle potential conflicts of interest since I cannot see the authors’ names?

HotCRP will ask you to identify conflicts of interest before bidding. Please see the text field Collaborators and other affiliations on your HotCRP profile page. You can also declare individual conflicts for each paper during bidding; to do so, enter a preference of -100.

This FAQ is based on several iterations of ASE, ISSTA, PLDI, and SIGMOD guidelines for double-blind reviewing.

Submission Policies


Papers submitted for consideration to any of the above calls for papers should not have been published elsewhere and should not be under review or submitted for review elsewhere for the duration of consideration. Specifically, authors are required to adhere to the ACM Policy and Procedures on Plagiarism and the ACM Policy on Prior Publication and Simultaneous Submissions. All submissions are subject to the ACM Author Representations policy.

All submissions must be in English and in PDF format. Papers must not exceed the page limits that are listed for each call for papers.

The conference will use the iThenticate plagiarism detection software to screen submissions and will follow the ACM Policy and Procedures on Plagiarism. To prevent double submissions, the Program Chair will compare the submissions with those of related conferences that have an overlapping review period. Possible violations will be reported to ACM for further investigation.

Submission Format

The ACM styles have changed recently, and all authors should use the official “2017 ACM Master article template”, which can be obtained from the ACM Proceedings Template pages.

LaTeX users should use the “sigconf” option, as well as the “review” (to produce line numbers for easy reference by the reviewers) and “anonymous” (omitting author names) options. To that end, the following LaTeX code can be placed at the start of the LaTeX document (a minimal document skeleton is sketched after this list):

  • \documentclass[sigconf,review,anonymous]{acmart}
  • \acmConference[ISSTA 2019]{ACM SIGSOFT International Symposium on Software Testing and Analysis}{15–19 July, 2019}{Beijing, China}
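For illustration only, a minimal acmart skeleton consistent with these options might look as follows; the title, author placeholder, and body text are hypothetical, and only the \documentclass and \acmConference lines come from the instructions above:

  \documentclass[sigconf,review,anonymous]{acmart}
  \acmConference[ISSTA 2019]{ACM SIGSOFT International Symposium on Software Testing and Analysis}{15–19 July, 2019}{Beijing, China}

  \begin{document}

  % The 'anonymous' option omits the author names from the generated PDF.
  \title{Paper Title}
  \author{Anonymous Author(s)}

  % In acmart, the abstract environment must appear before \maketitle.
  \begin{abstract}
  One-paragraph abstract goes here.
  \end{abstract}

  \maketitle

  \section{Introduction}
  Body text goes here.

  \bibliographystyle{ACM-Reference-Format}
  % \bibliography{references}  % enable once a references.bib file exists

  \end{document}

Compiling this sketch with pdflatex should produce a line-numbered, anonymized PDF in the sigconf layout; the actual submission must still respect the page limits and anonymization rules described above.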

Accepted Contributions

All authors of accepted papers will be asked to complete an electronic ACM Copyright form and will receive further instructions for preparing their camera ready versions.

All accepted contributions will be published in the conference electronic proceedings and in the ACM Digital Library.

Note that the official publication date is the date the proceedings are made available in the ACM Digital Library. This date may be up to two weeks prior to the first day of ISSTA 2019. The official publication date affects the deadline for any patent filings related to published work.

The names and ordering of authors as well as the title in the camera ready version cannot be modified from the ones in the submitted version unless there is explicit approval from the Program Chair.

At least one author of each accepted paper must register and present the paper at ISSTA 2019 in order for the paper to be published in the proceedings. One-day registrations or student registrations do NOT satisfy the registration requirement, except the SRC and Doctoral track, for which student registrations suffice.
