Call for Papers
We invite high-quality submissions, from both industry and academia, describing original and unpublished results of theoretical, empirical, conceptual, and experimental research on software testing and analysis.
ISSTA invites three kinds of submissions. The majority of submissions are expected to be “Research Papers”, but submissions that best fit the description of “Experience Papers” or “Replicability Studies” should be submitted as such. A good Experience Paper will include lessons learned or other wisdom synthesised for the community from the reported experience. Replicability Studies shall clearly describe their purpose and value beyond the original result.
tl;dr
- Submission link: https://issta2026.hotcrp.com/
- Template: \documentclass[acmsmall,screen,review,anonymous]{acmart}
- Page limit: 18 pages (incl. appendix, excl. Data Availability section) + unlimited references.
- Required section “Data Availability” (before references; does not count towards page limit).
- Artifact available: Link to anonymous artifact, or explanation why not.
- Categories: Choose a Research Area and a Paper Type (Research/Experience/Replicability)
- Compliance: Check compliance (double-blind, open science, plagiarism, human studies, etc.).
- Open Access: All articles will be made Open Access. Article processing charges may apply.
Notes
- Note #1: Like last year, the conference proceedings will be published in the Proceedings of the ACM on Software Engineering (PACMSE), Issue: ISSTA 2026.
- Note #2: Since 2023, ISSTA has had an Open Science policy, and since 2024 it requires a “Data Availability” section before the references, describing where the data and the mechanisms used to generate the evidence for the main claims in the paper (e.g., tools to generate empirical evidence) can be accessed.
- Note #3: Submissions must follow the “ACM Policy on Authorship” released April 20, 2023, which contains the policy regarding the use of Generative AI tools and technologies, such as ChatGPT. Please also check the ACM FAQ which describes in what situations generative AI tools can be used (with or without acknowledgement).
- Note #4: The names and list of authors as well as the title in the camera-ready version cannot be modified from the ones in the submitted version.
New This Year
- Review criteria. To increase the diversity of perspectives during the evaluation and discussion of each submission, papers in all three submission categories will be evaluated based on specific review criteria. These criteria are not meant to be exhaustive.
- Auto-bidding. To reduce reviewer workload in paper bidding, to increase expertise for each paper, and to streamline and automate the paper bidding and assignment process, we will leverage the Toronto Paper Matching System (TPMS).
Research Areas
ISSTA welcomes submissions addressing topics across the full spectrum of software analysis. Topics of interest are grouped into the following eight research areas; please note that these topics are by no means exhaustive. Each submission must indicate one primary and, optionally, one secondary area. The program chairs will ultimately assign each paper to an area chair (and at least three reviewers), considering the authors’ selection, the paper’s content, and other factors such as possible conflicts of interest (if applicable).
AI for Analysis and Testing
- Agentic or LLM-based specification inference
- Agentic or LLM-based code generation
- Agentic or LLM-based software testing
- Agentic or LLM-based program analysis
- Agentic or LLM-based program repair
Analysis and Testing for AI
- Testing and analysis of agents or agentic systems
- Testing and analysis of ML-model/software hybrid systems
- Testing and analysis of ML-components and libraries
- Fairness, safety, trustworthiness
Software Test Generation
- Regression, mutation, and model-based testing
- System, unit, and integration testing
- Black-, grey-, and white-box fuzzing
- Search-based software testing
- Symbolic and concolic execution
Software Debugging and Repair
- Fault localization and debugging
- Search-based program repair
- Constraint-based program repair
- Reverse engineering
- Program comprehension
- Specification inference
Software Verification and Analysis Techniques
- Static, dynamic, and empirical program analysis
- Program verification, runtime verification, and model checking
- Refactoring, transformation, and reduction
- Testing and analysis in CI/CD or deployment
- Testing and analysis of evolving systems
- Mutation testing and analysis
- Ecosystem-scale analysis
- Non-functional properties (dependability, safety, reliability, and performance)
Analysis and Testing for Security
- Binary analysis, lifting, and instrumentation
- Vulnerability detection (e.g., sanitizers)
- Side channels and data leakage
- Supply chain analysis
- Malware analysis
- Obfuscation
Domain-specific Analysis and Testing
- Testing and analysis of blockchain systems or smart contracts
- Testing and analysis of concurrent or distributed systems
- Testing and analysis of cyber-physical or autonomous systems
- Testing and analysis of database or operating systems
- Testing and analysis of numerical and scientific applications
- Testing and analysis of testing or analysis tools
- Testing and analysis of web, mobile, or quantum applications
Empirical and User Studies of Testing and Analysis Processes
- Software engineering processes (e.g., agile, DevOps)
- Green and sustainable technologies
- Ethics and values
- Software economics
- Systematic code review and inspection
- Program comprehension and visualization
Submission Categories and Review Criteria
ISSTA accepts three types of submissions:
- Research Papers (Innovation) describe innovative research in testing or analysis of computer software, including original theoretical or empirical research, new techniques, methods for emerging systems, in-depth case studies, and infrastructures for testing and analysis.
- Experience Papers (Evaluation) describe significant experience in applying software testing and analysis methods or tools and should carefully identify and discuss important lessons learned so that other researchers and/or practitioners can benefit from the experience.
- Replicability Studies (Replication) describe studies that go beyond simple re-implementation by applying existing techniques to significantly broader inputs, especially those previously tested on proprietary data.
Decisions for Initial Notification
Each paper will be reviewed by at least three reviewers and handled by an area chair who will ensure reviewing consistency among papers submitted within that area. Submitted papers may be accepted, rejected, or given the chance to submit a major revision of the initial submission by the major-revision deadline. Concretely, the outcome for each paper will be one of the following: Accept, Major Revision, or Reject.
A Major Revision decision will come with a set of revision items which will form a contract between the reviewers and authors. Authors who agree have four weeks to prepare the revision and to properly address all items. Reviewers will check whether the items are properly addressed or deviations explained. Major revisions offer reviewer continuity and reduce reviewer overhead across conferences.
Review Criteria
Each category will be subject to specific review criteria. Reviewers will carefully consider these criteria during the review process. Apart from the category-specific criteria, all submissions will be subject to the following review criteria:
- Verifiability and Transparency: The extent to which the paper includes sufficient information to understand how the evidence for key claims in the paper was produced, and how the paper supports independent verification or replication of the paper’s claimed contributions. Any artifacts attached to or linked from the paper may be checked by one reviewer.
- Presentation: The extent to which the paper’s quality of writing meets the high standards of ISSTA, including clear descriptions, adequate use of the English language, absence of major ambiguity, clearly readable figures and tables, and adherence to the formatting instructions provided below.
Category: Research Papers
Authors are invited to submit research papers describing original contributions in testing or analysis of computer software. Papers describing original theoretical or empirical research, new techniques, methods for emerging systems, in-depth case studies, infrastructures for testing and analysis, or tools are welcome.
Each Research Paper will be evaluated based on the following criteria:
- Importance: The extent to which the paper’s contributions can impact the field of software engineering in practice or in research, and under which assumptions (if any).
- Originality: The extent to which the contributions are sufficiently original with respect to the state-of-the-art, incl. appropriate comparison to related work.
- Soundness: The extent to which the paper’s key claims are supported by rigorous application of appropriate research methods, incl. sound methods for empirical evaluation.
Category: Experience Papers
Authors are invited to submit experience papers describing a significant experience in applying software testing and analysis methods or tools. Such papers should carefully identify and discuss important lessons learned so that other researchers and/or practitioners can benefit from the experience.
Each Experience Paper will be evaluated based on the following criteria:
- Importance and Scope: The extent to which the paper describes a problem of practical importance, explains how the problem was investigated, and in what context.
- Insights and Evidence: The extent to which the paper’s conclusions are supported by evidence, and new insights, best practices, tools, or software processes are explained.
- Perspective: The extent to which the paper identifies and discusses important lessons learned so that other researchers and/or practitioners can benefit from the experience.
Category: Replicability Studies
ISSTA would like to encourage researchers to replicate results from previous papers. A replicability study must go beyond simply re-implementing an algorithm and/or re-running the artifacts provided by the original paper. It should at the very least apply the approach to new, significantly broadened inputs. Particularly, replicability studies are encouraged to target techniques that previously were evaluated only on proprietary subject programs or inputs. A replicability study should clearly report on results that the authors were able to replicate as well as on aspects of the work that were not replicable. In the latter case, authors are encouraged to make an effort to communicate or collaborate with the original paper’s authors to determine the cause for any observed discrepancies and, if possible, address them (e.g., through minor implementation changes). We explicitly encourage authors to not focus on a single paper/artifact only, but instead to perform a comparative experiment of multiple related approaches.
Replicability studies should follow the ACM guidelines on replicability (different team, different experimental setup): the measurement can be obtained with stated precision by a different team, a different measuring system, in a different location on multiple trials. For computational experiments, this means that an independent group can obtain the same result using artifacts which they develop completely independently. Moreover, it is generally also insufficient to focus on reproducibility (i.e., different team, same experimental setup) alone.
Replicability Studies will be evaluated according to the following criteria:
- Comprehensiveness: The depth and breadth of the experiments used to replicate the previous results, incl. the study of the impact of parameters chosen for approach and experiment (ablation).
- Validity: The extent to which the paper’s key claims and conclusions are supported by rigorous application of appropriate research methods, incl. the soundness of the methodology used for empirical evaluation and a discussion of threats to validity.
- Perspective: The extent to which the paper identifies and discusses important lessons learned so that other researchers and/or practitioners can benefit from the experience, incl. useful and actionable insights.
- Artifact availability: Whether or not the artifacts have been made available.
We expect replicability studies to clearly point out the artifacts the study is built on, and to submit those artifacts to the artifact evaluation. Artifacts evaluated positively will be eligible to obtain the prestigious Results Reproduced badge.
Submission Guidelines
The conference proceedings will be published in the Proceedings of the ACM on Software Engineering (PACMSE).
At the time of submission, each paper should have no more than 18 pages for all text and figures, plus unlimited references, using the following templates: LaTeX or Word (Mac) or Word (Windows). Authors using LaTeX should use the sample-acmsmall-conf.tex file (found in the samples folder of the acmart package) with the acmsmall option. We also strongly encourage the use of the review, screen, and anonymous options. In sum, you want to use:
\documentclass[acmsmall,screen,review,anonymous]{acmart}
Papers may use either numeric or author-year format for citations. The page layout is single-column. Submissions that do not comply with the above instructions may be desk-rejected without review.
The page limit is strict: papers that take more than 18 pages for anything apart from the references and the Data Availability section (including any other section, figure, text, or appendix) will be desk-rejected. Experience papers and replicability studies should clearly specify their category in the paper title upon submission: “[TITLE] (Experience Paper)”, “[TITLE] (Replicability Study)”. Papers must be submitted electronically through the ISSTA 2026 submission site.
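For orientation, a minimal LaTeX skeleton consistent with the instructions above might look as follows. This is a sketch, not an official template; the title suffix, institution, and bibliography file name are placeholders.

\documentclass[acmsmall,screen,review,anonymous]{acmart}

% Append the category suffix only for Experience Papers or Replicability Studies.
\title{Your Paper Title (Experience Paper)}

% The 'anonymous' option suppresses author information in the generated PDF,
% but acmart still expects these commands to be present.
\author{Anonymous Author(s)}
\affiliation{\institution{Anonymous Institution}\country{}}

\begin{document}

\begin{abstract}
% ... abstract text ...
\end{abstract}

\maketitle

\section{Introduction}
% ... up to 18 pages of text, figures, tables, and appendices ...

\bibliographystyle{ACM-Reference-Format}
\bibliography{references} % references do not count towards the page limit
\end{document}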
Each submission will be reviewed by at least three members of the program committee. Submissions will be evaluated on the basis of originality, importance of contribution, soundness, evaluation, quality of presentation, appropriate comparison to related work, and verifiability/transparency of the work. Some papers may have more than three reviews, as the PC chairs may solicit additional reviews based on factors such as reviewer expertise and strong disagreement between reviewers. The program committee as a whole will make final decisions about which submissions to accept for presentation at the conference.
Data Availability Section and Open Science Policy
ISSTA has adopted an open science policy. Openness in science is key to fostering scientific progress via transparency, reproducibility, and replicability. The steering principle is that all research results should be accessible to the public, if possible, and that empirical studies should be reproducible. In particular, we actively support the adoption of open data and open source principles and encourage all contributing authors to disclose (anonymized and curated) data to increase reproducibility and replicability.
Artifacts available. Upon submission, authors are asked to make the data and the mechanism used to generate the evidence for the main claims in the paper (code, data, etc.) available to the program committee (via upload of anonymized supplemental material or a link to an anonymized private or public repository) or to comment on why this is not possible or desirable. At least one of the reviewers will check the provided data. While sharing the data is not mandatory for submission or acceptance, it will inform the program committee’s decision. Of course, we fully understand that there are reasons why the availability of the artifacts might not be possible and appreciate an explanation.
For double-anonymous submission, consider using a private GitHub repository in combination with https://anonymous.4open.science/ or using the Supplementary Materials field in the submission form. If accepted, we suggest long-term archival of all artifacts in a digital repository, such as zenodo.org, figshare.com, www.softwareheritage.org, osf.io, or institutional repositories, released under a proper open data license (such as the CC0 dedication or the CC BY 4.0 license) when publishing data, or under a proper open source license when releasing software. If accepted, also consider preparing a submission for artifact evaluation.
Data Availability section. We ask authors to provide a supporting statement on the data availability (or lack thereof) in their submitted papers in a section named “Data Availability” before the “References”. This section will not count towards the page limit.
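As a rough illustration (assuming the LaTeX setup sketched above; the repository link is a placeholder), the section simply precedes the bibliography commands in the source:

\section*{Data Availability}
% This section comes before the references and does not count towards the page limit.
Our (anonymized) replication package is available at
\url{https://anonymous.4open.science/...}. % placeholder link

\bibliographystyle{ACM-Reference-Format}
\bibliography{references}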
Submissions Policies
Submissions must be original and should not have been published previously or be under consideration for publication while being evaluated for ISSTA. Authors are required to adhere to the ACM Policy and Procedures on Plagiarism and the ACM Policy on Prior Publication and Simultaneous Submissions.
Use of Generative AI Tools and Technologies
Paper submissions must follow the ACM Policy on Authorship, which includes the policy with respect to the use of generative AI tools and technologies such as ChatGPT. In particular, the use of generative AI tools and technologies to create content is permitted but must be fully disclosed in the submission. Please also check the ACM FAQ which describes in what situations generative AI tools can be used (with or without acknowledgement).
Double-Anonymous Reviewing
ISSTA 2026 will conduct double-anonymous reviewing. Submissions should not reveal the identity of the authors in any way. Authors should leave out author names and affiliations from the body of their submission and may want to confirm that their generated PDF does not contain any meta-data with their names. They should also ensure that any citations to related work by themselves are written in third person, that is, “the prior work of XYZ” as opposed to “our prior work”.
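One way to double-check the PDF metadata (a sketch, assuming the acmart/hyperref toolchain described above; the anonymous option may already take care of this) is to clear the relevant fields in the preamble and then inspect the document properties in a PDF viewer:

\hypersetup{
  pdfauthor   = {}, % do not embed author names in the PDF metadata
  pdfcreator  = {},
  pdfproducer = {}
}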
Multiple Submissions and Plagiarism
Papers submitted to ISSTA 2026 must not have been published elsewhere and must not be under review or submitted for review elsewhere when being considered for ISSTA 2026. Authors should be aware of the ACM Policy and Procedures on Plagiarism.
To check for double submission and plagiarism issues, the chairs reserve the right to (1) share the list of submissions with the PC Chairs of other conferences with overlapping review periods and (2) use external plagiarism detection software, under contract to the ACM, to detect violations of these policies. Contravention of the submission policy will be deemed a serious breach of scientific ethics and appropriate action will be taken in all such cases.
Open Access
Starting in 2026, all articles published by ACM will be made Open Access. This is greatly beneficial to the advancement of computer science and leads to increased usage and citation of research.
- Most authors will be covered by ACM OPEN agreements by that point and will not have to pay Article Processing Charges (APC). Check if your institution participates in ACM OPEN.
- Authors not covered by ACM OPEN agreements may have to pay APC; however, ACM is offering several automated and discretionary APC Waivers and Discounts.
- For 2026, the APC for full papers will be US $250 for ACM members and US $350 for non-members, paid when the camera-ready version of the accepted paper is uploaded.
Publication Date
The official publication date is the date the proceedings are made available in the ACM Digital Library. This date may be up to two weeks prior to the first day of the ISSTA conference. The official publication date affects the deadline for any patent filings related to published work.
Questions and Comments
If you have any further questions, please contact the PC chairs at issta2026.pc.chairs@gmail.com.