With the growing complexity of modern software systems, software engineers must cope with so-called information overload along the whole development lifecycle, from requirements elicitation to the development of the actual system. In addition, fast-evolving technologies and frameworks are emerging daily. Therefore, non-expert users may struggle to express requirements properly or to select the third-party software libraries needed to implement a specific functionality. Such inconveniences mainly impact the software design and construction phases, which account for more than 50% of the effort spent by software engineers over the life of a project. Current software projects need to be easily scalable in order to reduce such maintenance costs.
To cope with these issues, intelligent software assistants have been proposed to ease the burden of choice by providing a set of automated capabilities that help developers in several tasks, e.g., debugging, testing, navigating Q&A forums, or mining information from open-source repositories. After an inference phase, the system can provide a set of valuable items, namely recommendations, according to the current task.
The 2nd edition of the workshop on Evaluation of Qualitative Aspects of Intelligent Software Assistants (EQUISA 2026) aims to provide a forum for researchers and practitioners to present and discuss novel methods and techniques for designing, developing, and assessing intelligent software assistants. We expect that the workshop will help to:
- Provide researchers with a comprehensive landscape of recent intelligent software assistants.
- Investigate how generative AI models can be used in developing software assistants.
- Reinforce the foundational knowledge around software assistants, with a focus on empirical evaluation, trustworthiness, and ethical considerations.
- Identify new opportunities for applying software assistant research to address the most pressing challenges in modern software engineering.
- Propose new empirical methodologies, protocols, and metrics to evaluate qualitative aspects.
- Analyze intelligent software assistants in industry and measure their conformance with recent quality standards.
Call for Papers
Intelligent software assistants have been defined as automated approaches based on advanced artificial intelligence (AI) models that aim to support end users in several aspects of the software development lifecycle. While traditional systems are based on a curated knowledge base that represents the main source of the recommendation process, the advent of cutting-edge AI models (e.g., foundation and pre-trained models) is dramatically changing how those systems are designed, developed, and evaluated. Motivated by the need to ensure quality in advanced systems, EQUISA is a dedicated forum for discussing the qualitative aspects of intelligent software assistants, from their design to their deployment in real-world applications.
Topics of interest include, but are not limited to, the following:
- Evaluation and assessment of quality aspects of software assistants (e.g., explainability, transparency, and fairness), ensuring that software assistants produce reliable results
- Reuse of AI-based tools, techniques, and methodologies in developing intelligent software assistants
- Foundational theories for software assistants to understand the underlying principles that can drive the development of more robust and generalizable recommendation systems in software engineering, with a focus on their evaluation
- New methods, tools, and frameworks to support development tasks (e.g., code-related tasks, automated classification of software artifacts, or code generation leveraging generative AI models)
- Designing specific prompt engineering techniques for intelligent software assistants based on large language models to ensure quality aspects
- Data-driven approaches for software assistants: leveraging large-scale data from open-source software (OSS) repositories, Q&A forums, and issue trackers to enhance the effectiveness of software assistants
- Integration with human-in-the-loop systems: balancing automated recommendations with human expertise to improve decision-making in complex software engineering scenarios
- Low-code and no-code approaches to ease the development of intelligent software assistants
- Adoption of advanced generative AI models, including large language models (LLMs) and pre-trained models (PTMs), for software assistants, with particular emphasis on quality effects
- Empirical studies and controlled experiments to assess qualitative aspects of intelligent systems
- Evolution of software systems and long-term recommendations, including how software assistants can cope with the evolving nature of software systems and support long-term maintainability and evolution
- Cross-disciplinary applications of software assistants: studying how techniques from other domains (e.g., human-computer interaction, natural language processing, and social network analysis) can enhance effectiveness and usability
- Sustainability in the design and development of intelligent software assistants
- Surveys and experience reports on software assistants to support software engineering tasks, in both academic and industrial use cases
How to Submit
All papers must be submitted in PDF format through EasyChair. The workshop will consider two different types of submissions:
- Full papers must be at least 5 pages and no more than 10 pages, reporting original research on the topics of interest.
- Short papers must be exactly 5 pages, presenting visions, novel ideas, and experience reports on the topics of interest.
All page limits include figures, tables, references, and appendices. All submissions must use the official ACM Primary Article Template; deviating from the ACM formatting instructions may lead to a desk rejection. Authors must comply with the SIGSOFT Open Science Policy (i.e., archive data and artifacts in a permanent repository—e.g., Zenodo, not GitHub—and include links in the submission). By submitting to EQUISA, authors agree to the ACM Policy and Procedures on Plagiarism, Misrepresentation, and Falsification.
Papers submitted must not be published or under review elsewhere. The Program Chairs may use plagiarism detection software under contract to the ACM. If the research involves human subjects, the authors must adhere to the ACM Publications Policy on Research Involving Human Participants and Subjects.
Review Criteria
All papers will undergo thorough peer review by three program committee members, focusing on originality, quality, soundness, and relevance. The review process will follow the same criteria as the main conference, namely:
- Soundness: The extent to which the paper’s contribution addresses its research questions and is supported by rigorous application of appropriate research methods.
- Significance: The extent to which the paper’s contributions can impact the field of software engineering, and under which assumptions (if any).
- Novelty: The extent to which the contributions are sufficiently original with respect to the state-of-the-art.
- Verifiability and Transparency: The extent to which the paper includes sufficient information to understand how an innovation works, how data was obtained, analyzed, and interpreted, and how the paper supports independent verification or replication of the paper’s claimed contributions.
- Presentation: The extent to which the quality of writing meets high standards, including clear descriptions, adequate use of the English language, absence of major ambiguity, clearly readable figures and tables, and adherence to the formatting instructions provided above.
Important Dates
- Submission deadline: March 2nd, 2026.
- Notification to authors: April 6th, 2026.
- Camera-ready due: April 20th, 2026.
- Early registration deadline for authors: May 5th, 2026.