ISSTA 2020
Sat 18 - Wed 22 July 2020

29th ACM SIGSOFT International Symposium on Software Testing and Analysis (ISSTA 2020)

Los Angeles, US, July 18–22, 2020

https://conf.researchr.org/home/issta-2020

Submission deadline: January 27, 2020.

ISSTA is the leading research symposium on software testing and analysis, bringing together academics, industrial researchers, and practitioners to exchange new ideas, problems, and experience on how to analyze and test software systems.

Dates
Plenary

Mon 20 Jul

Displayed time zone: Tijuana, Baja California

10:30 - 10:50
Mini Break at Zoom
10:50 - 11:50
FUZZING (Technical Papers) at Zoom
Chair(s): Rody Kersten Synopsys, Inc.

Public Live Stream/Recording. Registered participants should join via the Zoom link distributed in Slack.

10:50
20m
Talk
WEIZZ: Automatic Grey-Box Fuzzing for Structured Binary Formats
Technical Papers
Andrea Fioraldi Sapienza University of Rome, Daniele Cono D'Elia Sapienza University of Rome, Emilio Coppa Sapienza University of Rome, Italy
DOI Pre-print Media Attached
11:10
20m
Talk
Active Fuzzing for Testing and Securing Cyber-Physical Systems
Technical Papers
Yuqi Chen Singapore Management University, Bohan Xuan, Chris Poskitt Singapore Management University, Jun Sun Singapore Management University, Fan Zhang
DOI Pre-print Media Attached
11:30
20m
Talk
Learning Input Tokens for Effective Fuzzing [Artifacts Available, Artifacts Evaluated – Functional]
Technical Papers
Björn Mathis CISPA Helmholtz Center for Information Security, Rahul Gopinath CISPA Helmholtz Center for Information Security, Andreas Zeller CISPA Helmholtz Center for Information Security
Link to publication DOI
11:50 - 12:10
Mini Break at Zoom
12:10 - 13:10
SYMBOLIC EXECUTION AND CONSTRAINT SOLVING (Technical Papers) at Zoom
Chair(s): Marcelo d'Amorim Federal University of Pernambuco

Public Live Stream/Recording. Registered participants should join via the Zoom link distributed in Slack.

12:10
20m
Talk
Fast Bit-Vector Satisfiability
Technical Papers
Peisen Yao HKUST, Qingkai Shi The Hong Kong University of Science and Technology, Heqing Huang, Charles Zhang The Hong Kong University of Science and Technology
DOI
12:30
20m
Talk
Relocatable Addressing Model for Symbolic Execution
Technical Papers
David Trabish Tel Aviv University, Noam Rinetzky Tel Aviv University
DOI Pre-print Media Attached
12:50
20m
Talk
Running Symbolic Execution Forever [Artifacts Evaluated – Reusable, Artifacts Available, Artifacts Evaluated – Functional]
Technical Papers
Frank Busse Imperial College London, Martin Nowack Imperial College London, Cristian Cadar Imperial College London
DOI Pre-print Media Attached
13:10 - 13:30
Mini Break at Zoom
13:30 - 14:30
REPAIR AND DEBUG (Technical Papers) at Zoom
Chair(s): Xuan Bach D. Le The University of Melbourne

Public Live Stream/Recording. Registered participants should join via the Zoom link distributed in Slack.

13:30
20m
Talk
Can Automated Program Repair Refine Fault Localization? A Unified Debugging Approach [Artifacts Evaluated – Reusable, Artifacts Available, Artifacts Evaluated – Functional]
Technical Papers
Yiling Lou Peking University, China, Ali Ghanbari The University of Texas at Dallas, Xia Li Kennesaw State University, Lingming Zhang The University of Texas at Dallas, Haotian Zhang Ant Financial, Dan Hao Peking University, Lu Zhang Peking University, China
DOI Pre-print Media Attached
13:50
20m
Talk
Automated Repair of Feature Interaction Failures in Automated Driving Systems
Technical Papers
Raja Ben Abdessalem SnT Centre/University of Luxembourg, Annibale Panichella Delft University of Technology, Shiva Nejati University of Ottawa, Lionel C. Briand SnT Centre/University of Luxembourg, Thomas Stifter
DOI Pre-print
14:10
20m
Talk
CoCoNuT: Combining Context-Aware Neural Translation Models using Ensemble for Program Repair
Technical Papers
Thibaud Lutellier, Viet Hung Pham University of Waterloo, Lawrence Pang, Yitong Li, Moshi Wei, Lin Tan Purdue University
DOI Media Attached
14:30 - 14:50
Mini Break at Zoom
14:50 - 15:50
MOBILE APPS (Technical Papers) at Zoom
Chair(s): Elena Sherman Boise State University

Public Live Stream/Recording. Registered participants should join via the Zoom link distributed in Slack.

14:50
20m
Talk
Detecting and Diagnosing Energy Issues for Mobile Applications
Technical Papers
Xueliang Li Shenzhen University, Yuming Yang Shenzhen University, Yepang Liu Southern University of Science and Technology, John P. Gallagher Roskilde University, Kaishun Wu Shenzhen University
DOI Media Attached
15:10
20m
Talk
Automated Classification of Actions in Bug Reports of Mobile Apps
Technical Papers
Hui Liu Beijing Institute of Technology, Mingzhu Shen Beijing Institute of Technology, Jiahao Jin, Yanjie Jiang Beijing Institute of Technology
DOI Media Attached
15:30
20m
Talk
Data Loss Detector: Automatically Revealing Data Loss Bugs in Android Apps [Artifacts Evaluated – Reusable, Artifacts Available, Artifacts Evaluated – Functional, Distinguished Artifact]
Technical Papers
Oliviero Riganelli University of Milano-Bicocca, Italy, Simone Paolo Mottadelli University of Milano-Bicocca, Claudio Rota University of Milano-Bicocca, Daniela Micucci University of Milano-Bicocca, Italy, Leonardo Mariani University of Milano-Bicocca
Link to publication DOI Pre-print Media Attached
15:50 - 16:10
Mini Break at Zoom
16:10 - 17:10
MACHINE LEARNING I (Technical Papers) at Zoom
Chair(s): Divya Gopinath NASA Ames (KBR Inc.)

Public Live Stream/Recording. Registered participants should join via the Zoom link distributed in Slack.

16:10
20m
Talk
Reinforcement Learning Based Curiosity-Driven Testing of Android Applications [ACM SIGSOFT Distinguished Paper Award]
Technical Papers
Minxue Pan Nanjing University, An Huang, Guoxin Wang, Tian Zhang Nanjing University, Xuandong Li Nanjing University
DOI Media Attached
16:30
20m
Talk
Effective White-Box Testing of Deep Neural Networks with Adaptive Neuron-Selection Strategy [Artifacts Evaluated – Reusable, Artifacts Available, Artifacts Evaluated – Functional, ACM SIGSOFT Distinguished Paper Award]
Technical Papers
Seokhyun Lee Korea University, South Korea, Sooyoung Cha Korea University, South Korea, Dain Lee, Hakjoo Oh Korea University, South Korea
DOI Media Attached
16:50
20m
Talk
DeepGini: Prioritizing Massive Tests to Enhance the Robustness of Deep Neural Networks
Technical Papers
Yang Feng Nanjing University, Qingkai Shi The Hong Kong University of Science and Technology, Xinyu Gao, Muhammed Kerem Kahraman, Chunrong Fang Nanjing University, Zhenyu Chen Nanjing University
DOI

Tue 21 Jul


10:30 - 10:50
Mini Break at Zoom
10:50 - 11:50
MACHINE LEARNING II (Technical Papers) at Zoom
Chair(s): Baishakhi Ray Columbia University, New York

Public Live Stream/Recording. Registered participants should join via the Zoom link distributed in Slack.

10:50
20m
Talk
Detecting and Understanding Real-World Differential Performance Bugs in Machine Learning Libraries [Artifacts Available, Artifacts Evaluated – Functional]
Technical Papers
Saeid Tizpaz-Niari CU Boulder/UT El Paso, Pavol Cerny TU Wien, Ashutosh Trivedi
Link to publication DOI Pre-print Media Attached
11:10
20m
Talk
Higher Income, Larger Loan? Monotonicity Testing of Machine Learning Models
Technical Papers
Arnab Sharma University of Paderborn, Heike Wehrheim Paderborn University
DOI Media Attached
11:30
20m
Talk
Detecting Flaky Tests in Probabilistic and Machine Learning Applications
Technical Papers
Saikat Dutta University of Illinois at Urbana-Champaign, USA, August Shi The University of Texas at Austin, Rutvik Choudhary, Zhekun Zhang, Aryaman Jain, Sasa Misailovic University of Illinois at Urbana-Champaign
DOI Media Attached
11:50 - 12:10
Mini Break at Zoom
12:10 - 13:10
BUG LOCALIZATION AND TEST ISOLATION (Technical Papers) at Zoom
Chair(s): Mattia Fazzini University of Minnesota

Public Live Stream/Recording. Registered participants should join via the Zoom link distributed in Slack.

12:10
20m
Talk
Scaffle: Bug Localization on Millions of Files
Technical Papers
Michael Pradel University of Stuttgart, Vijayaraghavan Murali Facebook, Inc., Rebecca Qian Facebook, Inc., Mateusz Machalica Facebook, Inc., Erik Meijer, Satish Chandra Facebook
DOI Media Attached
12:30
20m
Talk
Abstracting Failure-Inducing Inputs [Artifacts Evaluated – Reusable, Artifacts Available, Artifacts Evaluated – Functional, ACM SIGSOFT Distinguished Paper Award]
Technical Papers
Rahul Gopinath CISPA Helmholtz Center for Information Security, Alexander Kampmann CISPA Helmholtz Center for Information Security, Nikolas Havrikov CISPA Helmholtz Center for Information Security, Ezekiel O. Soremekun CISPA Helmholtz Center for Information Security, Andreas Zeller CISPA Helmholtz Center for Information Security
DOI Pre-print Media Attached
12:50
20m
Talk
Debugging the Performance of Maven’s Test Isolation: Experience Report
Technical Papers
Pengyu Nie The University of Texas at Austin, Ahmet Celik Facebook, Matthew Coley, Aleksandar Milicevic, Jonathan Bell Northeastern University, Milos Gligoric The University of Texas at Austin
DOI
13:10 - 13:30
Mini Break at Zoom
13:30 - 14:30
SECURITY (Technical Papers) at Zoom
Chair(s): Lucas Bang Harvey Mudd College

Public Live Stream/Recording. Registered participants should join via the Zoom link distributed in Slack.

13:30
20m
Talk
Feedback-Driven Side-Channel Analysis for Networked Applications
Technical Papers
Ismet Burak Kadron University of California at Santa Barbara, Nicolás Rosner Amazon Web Services, Tevfik Bultan University of California, Santa Barbara
DOI
13:50
20m
Talk
Scalable Analysis of Interaction Threats in IoT Systems [ACM SIGSOFT Distinguished Paper Award]
Technical Papers
Mohannad Alhanahnah, Clay Stevens University of Nebraska-Lincoln, Hamid Bagheri University of Nebraska-Lincoln, USA
DOI Pre-print Media Attached
14:10
20m
Talk
DeepSQLi: Deep Semantic Learning for Testing SQL Injection
Technical Papers
Muyang Liu, Ke Li University of Exeter, Tao Chen Loughborough University
DOI Pre-print
14:30 - 14:50
Mini Break at Zoom
14:50 - 15:50
REGRESSION TESTING (Technical Papers) at Zoom
Chair(s): Alex Orso Georgia Institute of Technology

Public Live Stream/Recording. Registered participants should join via the Zoom link distributed in Slack.

14:50
20m
Talk
Dependent-Test-Aware Regression Testing Techniques
Technical Papers
Wing Lam University of Illinois at Urbana-Champaign, August Shi The University of Texas at Austin, Reed Oei, Sai Zhang Google Cloud, Michael D. Ernst University of Washington, USA, Tao Xie Peking University
DOI Media Attached
15:10
20m
Talk
Differential Regression Testing for REST APIs
Technical Papers
Patrice Godefroid Microsoft Research, Daniel Lehmann University of Stuttgart, Marina Polishchuk Microsoft
DOI Media Attached
15:30
20m
Talk
Empirically Revisiting and Enhancing IR-Based Test-Case Prioritization
Technical Papers
Qianyang Peng, August Shi The University of Texas at Austin, Lingming Zhang The University of Texas at Dallas
DOI
15:50 - 16:10
Mini Break at Zoom
16:10 - 17:10
CHALLENGING DOMAINS (Technical Papers) at Zoom
Chair(s): Yi Li Nanyang Technological University, Singapore

Public Live Stream/Recording. Registered participants should join via the Zoom link distributed in Slack.

16:10
20m
Talk
Intermittently Failing Tests in the Embedded Systems Domain
Technical Papers
Per Erik Strandberg Westermo Network Technologies AB, Thomas Ostrand, Elaine Weyuker Mälardalen University, Wasif Afzal Mälardalen University, Daniel Sundmark Mälardalen University
DOI Pre-print Media Attached
16:30
20m
Talk
Feasible and Stressful Trajectory Generation for Mobile Robots [Artifacts Evaluated – Reusable, Artifacts Available, Artifacts Evaluated – Functional, Distinguished Artifact]
Technical Papers
Carl Hildebrandt University of Virginia, Sebastian Elbaum University of Virginia, USA, Nicola Bezzo University of Virginia, Matthew B Dwyer University of Virginia
DOI
16:50
20m
Talk
Detecting Cache-Related Bugs in Spark Applications [Artifacts Evaluated – Reusable, Artifacts Available, Artifacts Evaluated – Functional]
Technical Papers
Hui Li, Dong Wang Institute of Software, Chinese Academy of Sciences, Tianze Huang, Yu Gao Institute of Software, Chinese Academy of Sciences, China, Wensheng Dou Institute of Software, Chinese Academy of Sciences, Lijie Xu Institute of Software, Chinese Academy of Sciences, Wei Wang, Jun Wei State Key Laboratory of Computer Science, Institute of Software, Chinese Academy of Sciences & University of Chinese Academy of Sciences, Hua Zhong
DOI

Wed 22 Jul


10:30 - 10:50
Mini Break at Zoom
10:50 - 11:50
BINARY ANALYSIS (Technical Papers) at Zoom
Chair(s): Junaid Haroon Siddiqui Lahore University of Management Sciences

Public Live Stream/Recording. Registered participants should join via the Zoom link distributed in Slack.

10:50
20m
Talk
Patch Based Vulnerability Matching for Binary Programs
Technical Papers
Yifei Xu, Zhengzi Xu, Bihuan Chen Fudan University, Fu Song, Yang Liu Nanyang Technological University, Singapore, Ting Liu Xi'an Jiaotong University
DOI Media Attached
11:10
20m
Talk
Identifying Java Calls in Native Code via Binary Scanning [Artifacts Available, Artifacts Evaluated – Functional]
Technical Papers
George Fourtounis University of Athens, Leonidas Triantafyllou University of Athens, Yannis Smaragdakis University of Athens, Greece
DOI Media Attached
11:30
20m
Talk
An Empirical Study on ARM Disassembly Tools
Technical Papers
Muhui Jiang, Yajin Zhou Zhejiang University, Xiapu Luo The Hong Kong Polytechnic University, Ruoyu Wang, Yang Liu Nanyang Technological University, Singapore, Kui Ren
DOI
11:50 - 12:10
Mini Break at Zoom
12:10 - 13:10
STATIC ANALYSIS AND SEARCH-BASED TESTING (Technical Papers) at Zoom
Chair(s): Daniel Kroening University of Oxford

Public Live Stream/Recording. Registered participants should join via the Zoom link distributed in Slack.

12:10
20m
Talk
How Effective Are Smart Contract Analysis Tools? Evaluating Smart Contract Static Analysis Tools using Bug Injection [Artifacts Available, Artifacts Evaluated – Functional]
Technical Papers
Asem Ghaleb, Karthik Pattabiraman University of British Columbia
DOI Media Attached
12:30
20m
Talk
A Programming Model for Semi-implicit Parallelization of Static Analyses
Technical Papers
Dominik Helm TU Darmstadt, Germany, Florian Kübler TU Darmstadt, Germany, Jan Thomas Kölzer, Philipp Haller KTH Royal Institute of Technology, Michael Eichberg TU Darmstadt, Germany, Guido Salvaneschi Technische Universität Darmstadt, Mira Mezini Technische Universität Darmstadt
DOI
12:50
20m
Talk
Recovering Fitness Gradients for Interprocedural Boolean Flags in Search-Based Testing
Technical Papers
Yun Lin National University of Singapore, Jun Sun Singapore Management University, Gordon Fraser University of Passau, Ziheng Xiu, Ting Liu Xi'an Jiaotong University, Jin Song Dong National University of Singapore
DOI Pre-print Media Attached
13:10 - 13:30
Mini Break at Zoom
13:30 - 14:30
BUILD TESTING (Technical Papers) at Zoom
Chair(s): Nazareno Aguirre Dept. of Computer Science FCEFQyN, University of Rio Cuarto

Public Live Stream/Recording. Registered participants should join via the Zoom link distributed in Slack.

13:30
20m
Talk
Scalable Build Service System with Smart Scheduling Service
Technical Papers
DOI Media Attached
13:50
20m
Talk
Escaping Dependency Hell: Finding Build Dependency Errors with the Unified Dependency Graph
Technical Papers
Gang Fan Hong Kong University of Science and Technology, Chengpeng Wang The Hong Kong University of Science and Technology, Rongxin Wu Department of Cyber Space Security, Xiamen University, Xiao Xiao Sourcebrella Inc., Qingkai Shi The Hong Kong University of Science and Technology, Charles Zhang The Hong Kong University of Science and Technology
DOI Media Attached
14:10
20m
Talk
How Far We Have Come: Testing Decompilation Correctness of C Decompilers [Artifacts Evaluated – Functional]
Technical Papers
Zhibo Liu, Shuai Wang Hong Kong University of Science and Technology
DOI Media Attached
14:30 - 14:50
Mini Break at Zoom
14:50 - 16:10
NUMERICAL SOFTWARE ANALYSIS & CLONE DETECTION (Technical Papers) at Zoom
Chair(s): Darko Marinov University of Illinois at Urbana-Champaign

Public Live Stream/Recording. Registered participants should join via the Zoom link distributed in Slack.

14:50
20m
Talk
Discovering Discrepancies in Numerical Libraries [Artifacts Evaluated – Reusable, Artifacts Available, Artifacts Evaluated – Functional, Distinguished Artifact]
Technical Papers
Jackson Vanover University of California, Davis, Xuan Deng University of California, Davis, Cindy Rubio-González University of California, Davis
DOI Media Attached
15:10
20m
Talk
Testing High Performance Numerical Simulation Programs: Experience, Lessons Learned, and Open Issues
Technical Papers
Xiao He University of Science and Technology Beijing, China, Xingwei Wang, Jia Shi, Yi Liu
DOI Media Attached
15:30
20m
Talk
Functional Code Clone Detection with Syntax and Semantics Fusion Learning [Artifacts Evaluated – Reusable, Artifacts Available, Artifacts Evaluated – Functional]
Technical Papers
Chunrong Fang Nanjing University, Zixi Liu Nanjing University, Yangyang Shi, Jeff Huang Texas A&M University, Qingkai Shi The Hong Kong University of Science and Technology
DOI Media Attached
15:50
20m
Talk
Learning to Detect Table Clones in Spreadsheets
Technical Papers
Yakun Zhang Institute of Software, Chinese Academy of Sciences, Wensheng Dou Institute of Software, Chinese Academy of Sciences, Jiaxin Zhu Institute of Software at Chinese Academy of Sciences, China, Liang Xu, Zhiyong Zhou Institute of Software, Chinese Academy of Sciences, Jun Wei State Key Laboratory of Computer Science, Institute of Software, Chinese Academy of Sciences & University of Chinese Academy of Sciences, Dan Ye, Bo Yang
DOI Media Attached
16:10 - 16:30
Mini Break at Zoom

Double-Blind Reviewing


ISSTA 2020 Guidelines on Double-Blind Reviewing

Why is ISSTA 2020 using double-blind reviewing?

Studies have shown that a reviewer’s attitude toward a submission may be affected, even subconsciously, by author identity. We want reviewers to be able to approach each submission without such involuntary reactions as “Barnaby; he writes good papers” or “Who are these people? I have never heard of them.” For this reason, we ask that authors omit their names from their submissions and avoid revealing their identities through citations and text. Many systems, security, and programming language conferences have used double-blind reviewing for years (e.g., SIGCOMM, OSDI, IEEE Security and Privacy, SIGMOD, PLDI). Software engineering conferences are gradually adopting this model: in 2017, most software engineering conferences (ESEC/FSE, ISSTA, ICSME, MSR, ICPC) adopted double-blind reviewing, and in 2018 ICSE followed. In 2016, ISSTA decided to try out double-blind reviewing for a four-year trial period (ISSTA 2016–2019).

For those who are interested in the motivation for double-blind reviewing, a well-argued, referenced, and evidenced article in favour of double-blind review processes for software engineering conferences can be found in the blog post by Claire Le Goues. There is also a list of double-blind resources from Robert Feldt, and a more formal study of the subject by Moritz Beller and Alberto Bacchelli.

Generally, this process will be cooperative, not adversarial. While the authors should take precautions not to reveal their identities (see details below), if a reviewer discovers the authors’ identities through a subtle oversight by the authors, the authors will not be penalized.

Do you really think blinding works? I suspect reviewers can often guess who the authors are.

Reviewers can sometimes guess the authorship correctly, though studies show this happens less often than people think. Still, imperfect blinding is better than no blinding at all, and even if all reviewers guess all authors’ identities correctly, double-blind reviewing simply becomes traditional single-blind reviewing.

Couldn’t blind submission create an injustice if a paper is inappropriately rejected because a reviewer is aware of prior unpublished work that was actually performed by the same authors?

The double-blind review process that we will be using for ISSTA 2020 is lightweight: author names will be revealed one week before the PC meeting, after all reviews have been collected. In this phase, the authors’ previous work can and will be explicitly considered.

What about additional information to support repeatability or verifiability of the reported results?

ISSTA 2020 puts a strong emphasis on the creation of quality artifacts and on the repeatability and verifiability of the results reported in the papers. An artifact evaluation committee has been put in place to review the artifacts accompanying all accepted papers, without any need to conceal the identity of the authors.

For Authors

What exactly do I have to do to anonymize my paper?

Your job is not to make your identity undiscoverable, but to make it possible for our reviewers to evaluate your submission without knowing who you are. If you have a concern that particular information is particularly easy to trace to you, consider adding a warning to reviewers in a footnote, e.g., “Note for reviewers: searching the commit logs of the GitHub projects we used in our evaluation may reveal authors’ identities.”

Also please remove any acknowledgements from the paper.

I would like to provide supplementary material for consideration, e.g., the code of my implementation or proofs of theorems. How do I do this?

In general, supplementary material should also be anonymized. Please do your best to avoid (i) having your names/affiliations in the artifact’s metadata (e.g., PDFs, spreadsheets, other documents) and (ii) having contributors’ names in source code. To create a repository, you could use an anonymized cloud account (i.e., one created with a username not clearly attributable to the authors), or a similar solution.
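A simple pre-upload sanity check is to search the supplementary tree for identifying strings. The sketch below is not part of the official guidelines; the function name and the patterns in `IDENTIFYING_PATTERNS` are placeholders to be replaced with your own names, affiliations, and email addresses:

```python
import pathlib
import re

# Hypothetical identifying strings -- replace these placeholders with your
# own names, affiliations, and email addresses before running.
IDENTIFYING_PATTERNS = [r"Jane\s+Doe", r"Example\s+University", r"jdoe@example\.edu"]

def find_unblinding_hits(root):
    """Return (path, matched text) pairs for files that mention the authors."""
    pattern = re.compile("|".join(IDENTIFYING_PATTERNS), re.IGNORECASE)
    hits = []
    for path in sorted(pathlib.Path(root).rglob("*")):
        if not path.is_file():
            continue
        # Read as text, ignoring binary bytes; note that PDF/image metadata
        # (e.g., XMP or Info fields) still needs a separate, dedicated check.
        text = path.read_text(encoding="utf-8", errors="ignore")
        for match in pattern.finditer(text):
            hits.append((str(path), match.group(0)))
    return hits
```

Running this over the supplement directory before zipping it flags any plain-text file that still contains an author name; binary formats such as PDFs should additionally be checked with a metadata tool.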

If the code or the repository cannot be anonymized easily, please either (A) provide an anonymized URL (such as using a URL shortener like http://bit.ly) with a prominent warning to reviewers that following the link may unblind them or, (B) if this is not possible, remove the URL to the repository from the paper and, instead, state “link to repository removed for double-blind review” or similar. Once the author names are revealed, the reviewers can ask the PC chair for the URL, who will contact the authors.

Also note that the assessment of artifacts within the Artifact Evaluation happens after paper acceptance and is not double-blind!

I am building on my own past work on the WizWoz system. Do I need to rename this system in my paper for purposes of anonymity, so as to remove the implied connection between my authorship of past work on this system and my present submission?

No. In our opinion the risk involved in misjudging a technical contribution because of such anonymization would outweigh the risk of de-anonymizing authors. Hence you should refer to the original, true system names only.

Am I allowed to post my (non-blinded) paper on my web page? Can I advertise the unblinded version of my paper on mailing lists or send it to colleagues? May I give a talk about my work while it is under review?

As far as the authors’ publicity actions are concerned, a paper under double-blind review is largely the same as a paper under regular (single-blind) review. Double-blind reviewing should not hinder the usual communication of results. However, during the review period, please do not broadcast the work on social media. Also, to the extent possible, please avoid publishing a preprint of your work (e.g., on arXiv or on your website) until it is accepted for publication. In exceptional cases this might be required, but even then please avoid actively spreading the paper.

Will the fact that ISSTA is double-blind have an impact on handling conflicts of interest?

Using double-blind reviewing does not change the principle that reviewers should not review papers with which they have a conflict of interest, even if they do not immediately know who the authors are. Conflicts of interest are identified based on the authors’ and reviewers’ names and affiliations, and they can be declared by both the authors and reviewers. Note: Do not over-declare conflicts! The PC chair will double-check author-declared conflicts. In case we are able to identify clearly spurious conflicts that the authors have no good argument for, this can lead to desk rejection of the paper.

For Reviewers

What should I do if I learn the authors’ identities? What should I do if a prospective ISSTA author contacts me and asks to visit my institution?

If at any point you feel that the authors’ actions are largely aimed at ensuring that potential reviewers know their identities, you should contact the PC Chair. If you are unsure, contact the PC Chair. Otherwise you should not treat double-blind reviewing differently from regular single-blind reviewing. You should refrain from seeking out information on the authors’ identities, but discovering it accidentally will not automatically remove you from reviewing a paper you have been assigned. Use your best judgment and feel free to contact us with any concerns.

This FAQ is based on several iterations of ASE, ISSTA, PLDI, and SIGMOD guidelines for double-blind reviewing.

Call for Papers


Technical Papers

Authors are invited to submit research papers describing original contributions in testing or analysis of computer software. Papers describing original theoretical or empirical research, new techniques, in-depth case studies, infrastructures of testing and analysis methods or tools are welcome.

Experience Papers

Authors are invited to submit experience papers describing a significant experience in applying software testing and analysis methods or tools. Such papers should carefully identify and discuss important lessons learned, so that other researchers and/or practitioners can benefit from the experience. Of special interest are experience papers that report on industrial applications of software testing and analysis methods or tools.

Reproducibility Studies

ISSTA would like to encourage researchers to reproduce results from previous papers. A reproducibility study must go beyond simply re-implementing an algorithm and/or re-running the artifacts provided by the original paper. It should at the very least apply the approach to new, significantly broadened inputs. Particularly, reproducibility studies are encouraged to target techniques that previously were evaluated only on proprietary subject programs or inputs. A reproducibility study should clearly report on results that the authors were able to reproduce as well as on aspects of the work that were irreproducible. In the latter case, authors are encouraged to make an effort to communicate or collaborate with the original paper’s authors to determine the cause for any observed discrepancies and, if possible, address them (e.g., through minor implementation changes). We explicitly encourage authors to not focus on a single paper/artifact only, but instead to perform a comparative experiment of multiple related approaches.

In particular, reproducibility studies should follow the ACM guidelines on reproducibility (different team, different experimental setup): The measurement can be obtained with stated precision by a different team, a different measuring system, in a different location on multiple trials. For computational experiments, this means that an independent group can obtain the same result using artifacts which they develop completely independently. This means that it is also insufficient to focus on repeatability (i.e., same experiment) alone. Reproducibility Studies will be evaluated according to the following standards:

  • Depth and breadth of experiments
  • Clarity of writing
  • Appropriateness of conclusions
  • Amount of useful, actionable insights
  • Availability of artifacts

In particular, we expect reproducibility studies to clearly point out the artifacts the study is built on, and to submit those artifacts to artifact evaluation (see below). Artifacts evaluated positively will be eligible to obtain the highly prestigious badges Results Replicated or Results Reproduced.

Submission Guidelines

Submissions must be original and should not have been published previously or be under consideration for publication while being evaluated for this symposium. Authors are required to adhere to the ACM Policy and Procedures on Plagiarism and the ACM Policy on Prior Publication and Simultaneous Submissions. More details are available at the Submission Policies page.

Research and Experience Papers as well as Reproducibility Studies should be at most 10 pages in length, with at most 2 additional pages for references. The ACM styles have changed recently, and all authors should use the official “2017 ACM Master article template”, as can be obtained from the ACM Proceedings Template pages.

LaTeX users should use the “sigconf” option, as well as the “review” option (to produce line numbers for easy reference by the reviewers) and the “anonymous” option (to omit author names). To that end, the following LaTeX code can be placed at the start of the document:

\documentclass[sigconf,review,anonymous]{acmart}
\acmConference[ISSTA 2020]{ACM SIGSOFT International Symposium on Software Testing and Analysis}{18–22 July, 2020}{Los Angeles, US}

Submit your papers via the EasyChair ISSTA 2020 submission website.

Double-blind Reviewing

ISSTA 2020 will conduct double-blind reviewing. Submissions should not reveal the identity of the authors in any way. Authors should leave out author names and affiliations from the body of their submission. They should also ensure that any citations to related work by themselves are written in third person, that is, “the prior work of XYZ” as opposed to “our prior work”. More details are available at the Double-Blind Reviewing page. Authors with further questions on double-blind reviewing are encouraged to contact the Program Chair by email.
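As an illustration (the citation key and author name below are placeholders, not taken from this call), a self-citation in an anonymized submission reads like any other piece of related work:

```latex
% Unblinded phrasing to avoid:
%   In our earlier work~\cite{doe2019}, we proposed ...
% Anonymized, third-person phrasing:
Doe et al.~\cite{doe2019} proposed a mutation-based test generator;
we extend their approach to handle structured inputs.
```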

Supplementary Material

Authors are free to provide supplementary material if that material supports the claims in the paper. Such material may include proofs, experimental results, and/or data sets. This material should be uploaded at the same time as the submission. Any supplementary material must also be anonymized. Reviewers are not required to examine the supplementary material but may refer to it if they would like to find further evidence supporting the claims in the paper.

Reviews and Responses

Reviewing will happen in two phases. In Phase 1, each paper will receive three reviews, followed by an author response. Depending on the response, papers with negative reviews might be rejected early at this point. Other papers will proceed to Phase 2, in which they might receive additional reviews where necessary, to which the authors can respond in a second author-response phase.

Submission Policies


Papers submitted for consideration to any of the above calls for papers should not have already been published elsewhere and should not be under review or submitted for review elsewhere during the period of consideration. Specifically, authors are required to adhere to the ACM Policy and Procedures on Plagiarism and the ACM Policy on Prior Publication and Simultaneous Submissions. All submissions are subject to the ACM Author Representations policy.

All submissions must be in English and in PDF format. Papers must not exceed the page limits that are listed for each call for papers.

The conference will use the iThenticate plagiarism detection software to screen submissions and will follow the ACM Policy and Procedures on Plagiarism. To prevent double submissions, the Program Chair will compare the submissions with those of related conferences that have overlapping review periods. Possible violations will be reported to ACM for further investigation.

Submission Format

The ACM styles have changed recently, and all authors should use the official “2017 ACM Master article template”, which can be obtained from the ACM Proceedings Template pages.

LaTeX users should use the “sigconf” option, as well as the “review” (to produce line numbers for easy reference by the reviewers) and “anonymous” (to omit author names) options. To that end, the following LaTeX code can be placed at the start of the LaTeX document:

\documentclass[sigconf,review,anonymous]{acmart}
\acmConference[ISSTA 2020]{ACM SIGSOFT International Symposium on Software Testing and Analysis}{18--22 July, 2020}{Los Angeles, US}
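Putting these options together, a minimal compilable skeleton might look as follows. This is only an illustrative sketch: the title, author, and section content are placeholders, and only the document-class and conference lines above are prescribed by the call.

```latex
\documentclass[sigconf,review,anonymous]{acmart}
\acmConference[ISSTA 2020]{ACM SIGSOFT International Symposium on Software Testing and Analysis}{18--22 July, 2020}{Los Angeles, US}

\begin{document}
\title{Paper Title}

% Authors are written as usual; the "anonymous" option
% suppresses names and affiliations in the compiled PDF.
\author{Author Name}
\affiliation{\institution{Institution}\country{Country}}

\maketitle

% The "review" option adds line numbers that reviewers
% can reference in their comments.
\section{Introduction}
Body text.

\end{document}
```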

Accepted Contributions

All authors of accepted papers will be asked to complete an electronic ACM copyright form and will receive further instructions for preparing their camera-ready versions.

All accepted contributions will be published in the conference electronic proceedings and in the ACM Digital Library.

Note that the official publication date is the date the proceedings are made available in the ACM Digital Library. This date may be up to two weeks prior to the first day of ISSTA 2020. The official publication date affects the deadline for any patent filings related to published work.

The names and ordering of authors, as well as the title, in the camera-ready version cannot be changed from the submitted version without explicit approval from the Program Chair.

At least one author of each accepted paper must register and present the paper at ISSTA 2020 in order for the paper to be published in the proceedings. One-day registrations or student registrations do NOT satisfy the registration requirement, except for the SRC and Doctoral tracks, for which student registrations suffice.


Accepted Papers

A Programming Model for Semi-implicit Parallelization of Static Analyses
Technical Papers
DOI
Abstracting Failure-Inducing Inputs [Artifacts Evaluated – Reusable, Artifacts Available, Artifacts Evaluated – Functional, ACM SIGSOFT Distinguished Paper Award]
Technical Papers
DOI Pre-print Media Attached
Active Fuzzing for Testing and Securing Cyber-Physical Systems
Technical Papers
DOI Pre-print Media Attached
An Empirical Study on ARM Disassembly Tools
Technical Papers
DOI
Automated Classification of Actions in Bug Reports of Mobile Apps
Technical Papers
DOI Media Attached
Automated Repair of Feature Interaction Failures in Automated Driving Systems
Technical Papers
DOI Pre-print
Can Automated Program Repair Refine Fault Localization? A Unified Debugging Approach [Artifacts Evaluated – Reusable, Artifacts Available, Artifacts Evaluated – Functional]
Technical Papers
DOI Pre-print Media Attached
CoCoNuT: Combining Context-Aware Neural Translation Models using Ensemble for Program Repair
Technical Papers
DOI Media Attached
Data Loss Detector: Automatically Revealing Data Loss Bugs in Android Apps [Artifacts Evaluated – Reusable, Artifacts Available, Artifacts Evaluated – Functional, Distinguished Artifact]
Technical Papers
Link to publication DOI Pre-print Media Attached
Debugging the Performance of Maven’s Test Isolation: Experience Report
Technical Papers
DOI
DeepGini: Prioritizing Massive Tests to Enhance the Robustness of Deep Neural Networks
Technical Papers
DOI
DeepSQLi: Deep Semantic Learning for Testing SQL Injection
Technical Papers
DOI Pre-print
Dependent-Test-Aware Regression Testing Techniques
Technical Papers
DOI Media Attached
Detecting Cache-Related Bugs in Spark Applications [Artifacts Evaluated – Reusable, Artifacts Available, Artifacts Evaluated – Functional]
Technical Papers
DOI
Detecting Flaky Tests in Probabilistic and Machine Learning Applications
Technical Papers
DOI Media Attached
Detecting and Diagnosing Energy Issues for Mobile Applications
Technical Papers
DOI Media Attached
Detecting and Understanding Real-World Differential Performance Bugs in Machine Learning Libraries [Artifacts Available, Artifacts Evaluated – Functional]
Technical Papers
Link to publication DOI Pre-print Media Attached
Differential Regression Testing for REST APIs
Technical Papers
DOI Media Attached
Discovering Discrepancies in Numerical Libraries [Artifacts Evaluated – Reusable, Artifacts Available, Artifacts Evaluated – Functional, Distinguished Artifact]
Technical Papers
DOI Media Attached
Effective White-Box Testing of Deep Neural Networks with Adaptive Neuron-Selection Strategy [Artifacts Evaluated – Reusable, Artifacts Available, Artifacts Evaluated – Functional, ACM SIGSOFT Distinguished Paper Award]
Technical Papers
DOI Media Attached
Empirically Revisiting and Enhancing IR-Based Test-Case Prioritization
Technical Papers
DOI
Escaping Dependency Hell: Finding Build Dependency Errors with the Unified Dependency Graph
Technical Papers
DOI Media Attached
Fast Bit-Vector Satisfiability
Technical Papers
DOI
Feasible and Stressful Trajectory Generation for Mobile Robots [Artifacts Evaluated – Reusable, Artifacts Available, Artifacts Evaluated – Functional, Distinguished Artifact]
Technical Papers
DOI
Feedback-Driven Side-Channel Analysis for Networked Applications
Technical Papers
DOI
Functional Code Clone Detection with Syntax and Semantics Fusion Learning [Artifacts Evaluated – Reusable, Artifacts Available, Artifacts Evaluated – Functional]
Technical Papers
DOI Media Attached
Higher Income, Larger Loan? Monotonicity Testing of Machine Learning Models
Technical Papers
DOI Media Attached
How Effective Are Smart Contract Analysis Tools? Evaluating Smart Contract Static Analysis Tools using Bug Injection [Artifacts Available, Artifacts Evaluated – Functional]
Technical Papers
DOI Media Attached
How Far We Have Come: Testing Decompilation Correctness of C Decompilers [Artifacts Evaluated – Functional]
Technical Papers
DOI Media Attached
Identifying Java Calls in Native Code via Binary Scanning [Artifacts Available, Artifacts Evaluated – Functional]
Technical Papers
DOI Media Attached
Intermittently Failing Tests in the Embedded Systems Domain
Technical Papers
DOI Pre-print Media Attached
Learning Input Tokens for Effective Fuzzing [Artifacts Available, Artifacts Evaluated – Functional]
Technical Papers
Link to publication DOI
Learning to Detect Table Clones in Spreadsheets
Technical Papers
DOI Media Attached
Patch Based Vulnerability Matching for Binary Programs
Technical Papers
DOI Media Attached
Recovering Fitness Gradients for Interprocedural Boolean Flags in Search-Based Testing
Technical Papers
DOI Pre-print Media Attached
Reinforcement Learning Based Curiosity-Driven Testing of Android Applications [ACM SIGSOFT Distinguished Paper Award]
Technical Papers
DOI Media Attached
Relocatable Addressing Model for Symbolic Execution
Technical Papers
DOI Pre-print Media Attached
Running Symbolic Execution Forever [Artifacts Evaluated – Reusable, Artifacts Available, Artifacts Evaluated – Functional]
Technical Papers
DOI Pre-print Media Attached
Scaffle: Bug Localization on Millions of Files
Technical Papers
DOI Media Attached
Scalable Analysis of Interaction Threats in IoT Systems [ACM SIGSOFT Distinguished Paper Award]
Technical Papers
DOI Pre-print Media Attached
Scalable Build Service System with Smart Scheduling Service
Technical Papers
DOI Media Attached
Testing High Performance Numerical Simulation Programs: Experience, Lessons Learned, and Open Issues
Technical Papers
DOI Media Attached
WEIZZ: Automatic Grey-Box Fuzzing for Structured Binary Formats
Technical Papers
DOI Pre-print Media Attached