ICSE 2022
Sun 8 May - Mon 27 June 2022 Location to be announced

Call for Papers

The New Ideas and Emerging Results (NIER) track at ICSE provides a vibrant forum for forward-looking, innovative research in software engineering. Our aim is to accelerate the exposure of the software engineering community to early yet potentially ground-breaking research results, and to techniques and perspectives that challenge the status quo in the discipline. To broadly capture this goal, NIER 2022 will publish the following types of papers.

  • Forward-looking ideas: exciting new directions or techniques that may have yet to be supported by solid experimental results, but are nonetheless supported by strong and well-argued scientific intuitions as well as concrete plans going forward.

  • Thought-provoking reflections: bold and unexpected results and reflections that can help us look at current research directions under a new light, calling for new directions for future research.

(New to this year!) Idea-Matchmaking - Optional

New this year, a paper can optionally be developed through an Idea-Matchmaking process aimed at encouraging collaboration among researchers. The process is described below:

  • Ideas Collection Phase (Due May 14 2021): Authors of potential papers should submit a title, an abstract outlining the idea, and a set of keywords. Authors can optionally include an expertise statement indicating whether they are looking for a specific type of expertise to develop their idea. Abstracts should not exceed 300 words and do not have to discuss critical details of the idea. We recommend that authors clarify in the abstract the reason(s) why they are looking for a collaboration. The expertise statement should not exceed 100 words. Examples of a title, abstract, and expertise statement can be found here. Submissions can be made following this link. If you are unable to access the link, please contact the track chairs.

The NIER Co-Chairs will assess the abstracts and may desk reject those that are out of scope. Accepted abstracts will be published on the ICSE NIER 2022 website without the authors’ names.

  • Ideas Bidding Phase (Due July 2 2021): Potential collaborators should bid for one of the proposed abstracts by submitting a document clearly highlighting:
    • Contact information: name, affiliation, contact details;

    • Expertise: collaborator(s) should demonstrate (with supporting evidence) expertise in the topics of the idea they are bidding for;

    • Proposal: collaborator(s) should describe how they intend to contribute to the idea, ideally including references to previous research. The document submitted during the idea bidding phase should not exceed 1000 words.

Submissions can be made following this link. If you are unable to access the link, please contact the track chairs.

  • Ideas Matchmaking Phase (Due July 16 2021): Authors of potential papers should make contact with the potential collaborator(s) who bid for their abstract(s). Authors should inform the NIER Co-Chairs whether they have identified one or more suitable collaborators.

  • Paper Preparation Phase: During this phase, the authors and the matched collaborators should work together to prepare their NIER paper.

Papers developed following the Idea-Matchmaking process can be of either of the two types (forward-looking ideas or thought-provoking reflections), should follow the NIER submission and formatting instructions, and will be evaluated using the same criteria as papers that did not go through the matchmaking process.

Note that potential authors do not have to follow the Idea-Matchmaking process to prepare their paper and can still submit their manuscript following the formatting and submission instructions below.

Scope of NIER Track

A NIER track paper is not just a scaled-down version of an ICSE full research track paper. The NIER track is reserved for first-class, top-quality technical contributions. Therefore, a NIER submission is neither an ICSE full research track submission with weaker or no evaluation, nor an op-ed piece advertising existing and already published results. Authors of such submissions should instead consider submitting to either the main track or one of the many satellite events of ICSE. We require all submissions to the NIER track to include a section titled “Future Plans” where the authors outline the work they plan on doing to turn their new idea and emerging results into a full-length paper in the future.

Evaluation Criteria

Each submission will be reviewed and evaluated in terms of the following quality criteria:

  • Value: whether the problem is worth exploring;

  • Impact: the potential for disruption of current practice;

  • Soundness: the validity of the rationale and authors’ plans for future work;

  • Quality: the overall quality of the paper’s writing.

Formatting and Submission

All submissions must conform to the ICSE 2022 formatting and submission instructions available at https://www.acm.org/publications/proceedings-template for both LaTeX and Word users. LaTeX users must use the provided acmart.cls and ACM-Reference-Format.bst without modification, enable the conference format in the preamble of the document (i.e., \documentclass[sigconf,review]{acmart}), and use the ACM reference format for the bibliography (i.e., \bibliographystyle{ACM-Reference-Format}). The review option adds line numbers, thereby allowing referees to refer to specific lines in their comments.
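The required preamble settings can be sketched as a minimal skeleton. This is an illustrative example only, assuming the standard `acmart` class named above; the section titles and `references.bib` are placeholders, not prescribed by the track.

```latex
% Minimal skeleton sketch for a conforming submission (illustrative).
% 'review' adds line numbers as described above; acmart also offers an
% 'anonymous' option that suppresses author names for double-blind review.
\documentclass[sigconf,review]{acmart}

\begin{document}

\title{Paper Title}
% Author/affiliation blocks must be omitted (or anonymized) to comply
% with the double-blind review policy below.

\begin{abstract}
One-paragraph abstract.
\end{abstract}

\maketitle

\section{Introduction}

\section{Future Plans}
% Section required by the NIER track (see Scope of NIER Track above).

\bibliographystyle{ACM-Reference-Format}
\bibliography{references}

\end{document}
```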

All NIER submissions must not exceed 4 pages for the main text, inclusive of all figures, tables, appendices, etc. An extra page is allowed for references. All submissions must be in PDF. The page limit is strict, and it will not be possible to purchase additional pages at any point in the process (including after the paper is accepted).

Submissions may be made through HotCRP at this link. For Idea-Matchmaking submissions, see above.

By submitting to this track, authors acknowledge that they are aware of and agree to be bound by the ACM Policy and Procedures on Plagiarism (https://www.acm.org/publications/policies/plagiarism) and the IEEE Plagiarism FAQ (https://www.ieee.org/publications/rights/plagiarism/plagiarism-faq.html). In particular, papers submitted to ICSE 2022 must not have been published elsewhere and must not be under review or submitted for review elsewhere whilst under consideration for ICSE 2022. Contravention of this concurrent submission policy will be deemed a serious breach of scientific ethics, and appropriate action will be taken in all such cases. To check for double submission and plagiarism issues, the chairs reserve the right to (1) share the list of submissions with the PC Chairs of other conferences with overlapping review periods and (2) use external plagiarism detection software, under contract to the ACM or IEEE, to detect violations of these policies. By submitting to this track, authors acknowledge that they conform to the authorship policy of the ACM (https://www.acm.org/publications/policy-on-authorship) and the authorship policy of the IEEE (https://journals.ieeeauthorcenter.ieee.org/become-an-ieee-journal-author/publishing-ethics/definition-of-authorship/).

Important Dates

  • NIER Submissions Deadline: 15 October 2021 - Submissions close at 23:59 AoE (Anywhere on Earth, UTC-12)

  • NIER Acceptance Notification: 7 January 2022

  • NIER Camera Ready: 11 February 2022

Double-Blind Submission Guidelines

The ICSE 2022 NIER track will adopt a double-blind review process. No submitted paper may reveal its authors’ identities. The authors must make every effort to honor the double-blind review process; reviewers will be asked to honor the double-blind review process as much as possible. Any author having further questions on double-blind reviewing is encouraged to contact the track’s program co-chairs by e-mail. Any submission that does not comply with the double-blind review process will be desk-rejected. Further advice, guidance and explanation about the double-blind review process can be found in the Q&A page.

Conference Attendance Expectation

If a submission is accepted, at least one author of the paper is required to register for and attend the full 3-day technical conference and present the paper. The presentation is expected to be delivered in person, unless this is impossible due to travel limitations (related to, e.g., health, visa, or COVID-19 prevention).

Presentation Format

Each paper will be allocated a presentation slot during the conference program. At the end of each session, there will be time for discussion. Before the conference, we will assign one challenger to each accepted submission who will prepare discussion questions in advance, with the goal of increasing the value of conversations at the conference. Challengers will be selected among members of the ICSE community by the PC Co-Chairs.

Abstract #1

Title: Toward intelligent prediction of refactoring opportunities in software test code

Abstract: Software testability indicates the degree to which a test can be designed and executed for a software artifact. Improving software testability involves mechanisms to control, analyze, and measure the effort and costs of performing testing activities. Many studies investigate static and dynamic metrics to understand the factors that can reduce software testability. However, there is still little evidence on the effects that bad test design and implementation practices have on test code quality and, consequently, on software quality. Such practices lead to the insertion of test smells in the test code, which may harm software testing activities, primarily from a maintenance perspective. Therefore, it is essential to define strategies, techniques, and tools to support preventing, detecting, and removing test smells from the test code. In this study, we aim to investigate software testability from the perspective of test smells. We plan to examine whether widely accepted machine learning algorithms are ready to support improving software testability by predicting opportunities for test smell refactorings. As a result, we aspire to define a novel approach for detecting and refactoring test smells that deals with the subjectivity tied to state-of-the-art strategies, techniques, and tools, which are mainly based on static and dynamic metrics.

Keywords: Software Quality, Software Testability, Test Smells

Expertise Statement: Our team has explored the Software Testing and Maintenance fields for several years. More recently, we have investigated the impact of test smells (in software test code) on software testability and test code maintainability. In this ICSE-NIER-2022 paper proposal, our focus is to explore the boundaries between Software Testing and Machine Learning approaches, aiming to unveil to what extent combining such fields could support improvements in software testability and test code maintainability. Therefore, we are seeking cooperation at the Software Engineering x Machine Learning boundary. We aim to explore test smells and other factors that may affect software testability and maintainability.

Abstract #2

Title: Towards a Qualitative Evaluation of source code (QualCode)

Abstract: Program comprehension takes a major proportion (50%) of a software developer’s time. The programming design decisions made by a software developer while coding affect the software’s quality. For instance, following programming conventions such as the use of descriptive variable names, reusable code fragments, and documentation makes the code easy to review and maintain. To support queries of the nature “How maintainable is your source code?”, we propose a Machine Learning (ML) based tool to evaluate source code on various quality attributes, such as maintainability, understandability, and reusability. We plan to come up with new metrics for measuring the quality of source code based on different quality attributes, create a dataset of source code labeled with the degree to which it meets these attributes (labeling performed by various software engineer participants), and finally train ML models to develop a tool automating the process. We have already created a dataset comprising programming construct usage information collected by parsing 30,443+ source files from 20 GitHub repositories. We also have defect information extracted from 14,950 defect reports linked with the considered source files. The source files considered are written in four different programming languages, viz., C, C++, Java, and Python. We plan to evaluate our tool by comparing it with existing tools and software engineering participants.

Keywords: Program Comprehension, Code Understanding, Software Maintenance, Software Quality, Open Source Software

Expertise Statement: We have expertise in mining users’ GitHub repositories, bug reporting engines such as Apache Bugzilla, and matching documentation with descriptions of programming tasks. We are looking for complementary software engineering experts working on program comprehension who are well versed in Machine Learning, Deep Learning model development, and programming essentials. Also, we would like the approach to work on multiple programming languages. Knowledge of Software Quality aspects (such as Quality Attributes and how they are related to software design), Mining Software Repositories, Developing Knowledge Warehouses, Empirical Software Engineering, and any experience in devising related software metrics is a plus.

Abstract #3

Title: BuilDiff: Towards Highlighting Key Logs of Continuous Integration Builds

Abstract: Continuous Integration (CI) allows software developers to build their software automatically and more frequently (at the commit level). CI builds generate log files to allow tracing the entire build process. However, build logs can be very verbose, which may impede developers from quickly identifying the cause of build failures (i.e., errors or failures). Besides, the verbosity of build logs can itself cause build failures, especially if the size of a build log exceeds a certain limit. In addition, since a CI build can have multiple jobs, identifying the cause of failures of every independent build job can be challenging and time-consuming. In this research, we propose to develop an approach that performs inter-build log diffing (i.e., between two subsequent CI builds) and intra-build log diffing (i.e., between jobs of the same CI build). Doing so would allow developers to focus on (i) “what is new” in the most recent builds, thus spotting new build errors/failures easily, (ii) how build jobs differ in terms of execution or failures, (iii) what information in the logs of build jobs is duplicated and, thus, can be logged in a shared log file, and (iv) what logged information is useless (e.g., not changing) across the past k build logs, thus recommending that it not be logged. We will evaluate our approach using CI build data collected from GitHub projects that extensively use Travis CI, a cloud-based CI service.

Keywords: Continuous Integration (CI), build log analysis, mining software repositories, build failures

Expertise Statement: We have expertise in mining GitHub repositories and Travis CI builds. We are looking for complementary expertise in log analysis and user-centric evaluation (e.g., developer surveys) to evaluate how the proposed approach is effective in helping developers understand the behavior of CI builds.

Abstract #4

Title: Patented Solution for Software Crisis by addressing Spaghetti code

Abstract: What is a CBP (Component-Based Product)? What is the Structure and Anatomy of CBPs? Finding objective and valid answers and facts, scientifically, to these simple questions proves beyond doubt that Software Engineering is not employing CBE (Component-Based Engineering) paradigm to design and build software products.

A product can be a CBP (Component-Based Product) if and only if the product is built by assembling multiple components as illustrated in FIG-2. Any engineering discipline is said to be employing CBE (Component-Based Engineering) if and only if the engineering discipline designs and builds each product by assembling multiple components as illustrated in FIG-2. In other words, any engineering discipline is said to be employing the CBE paradigm if and only if it designs and builds CBPs. Kindly see a picture at: http://real-software-components.com/raju/TwoKindsOfParadigms.pdf

Cars are CBPs, since each car is built by assembling multiple components. Computers are CBPs, since each computer is built by plugging in multiple components as illustrated in FIG-2. However, buildings are not CBPs, since each building (e.g. a house) is built by using reusable parts as illustrated in FIG-1. Likewise, the executable for every software product is built as a monolith as illustrated in FIG-1 by using reusable parts.

Three kinds of inventions are required to transform any engineering discipline that employs the inefficient non-CBE paradigm to design and build each product as illustrated in FIG-1 into the ten times more efficient CBE (Component-Based Engineering) paradigm to design and build each product as illustrated in FIG-2: http://real-software-components.com/raju/Briefs/WhatIsCBP3pg.pdf

(1) Inventions of methods and methodologies to partition the product into multiple optimal-sized self-contained modules,

(2) Inventions of technologies and mechanisms to design and build each module as a component that can be assembled or plugged in, and

(3) Inventions of tools and mechanisms to assemble or plug in the components to build the product.

Keywords: Components, Component-Based Products, Paradigm-Shift, Component-Based Engineering

Abstract #5

Title: Attack Agnostic Metrics for AI Software Verification

Abstract: Despite achieving high standard accuracy in a variety of machine learning tasks, deep learning models built upon neural networks have recently been shown to lack adversarial robustness. The decision making of well-trained deep learning models can be easily falsified and manipulated, resulting in ever-increasing concerns about safety-critical and security-sensitive software applications requiring certified robustness and guaranteed reliability. The goal of this work is to develop attack-agnostic metrics for verification and certification of AI software in safety-critical domains. Our focus will be on developing a formal verification approach that is scalable for a specific type of AI application software. We wish to expand on developing a neurosymbolic representation of a neural network that can be verified/certified against specific attacks using state-of-the-art SAT solvers.

Keywords: Trustworthy AI software, robust AI, SAT/SMT, formal methods

Abstract #6

Title: Unsupervised Time Sensitive Specification Mining

Abstract: Dynamic behavior of a program can be assessed through examination of events emitted by the program during execution. Temporal properties define the order of occurrence and timing constraints on event occurrence. Such specifications are important for safety-critical real-time systems for which a delayed response to an emitted event may lead to a fault in the system. Since temporal properties are rarely specified for programs and due to the complexity of the formalisms, it is desirable to suggest properties by extracting them from traces of program execution for testing, verification, anomaly detection, and debugging purposes.

We propose to address the problem of mining time-sensitive software specifications from system traces using state-of-the-art AI models such as adversarial training.

Keywords: Specification mining, timed specification, adversarial training

Abstract #7

Title: AI Assisted Invariant Mining Frameworks

Abstract: The analysis of large-scale data logged from complex cyber-physical systems, such as microgrids, often entails the discovery of invariants capturing functional as well as operational relationships underlying such large systems. Researchers have used a wide variety of techniques to infer invariants over underlying system variables and to leverage these relationships to monitor a software system or a cyber-physical system. We propose to use a novel variational autoencoder design to extract such invariants from complex software systems. This will help in identifying outliers during system operation.

Keywords: Invariant mining, software specification, dynamic analysis