The 1st International Workshop on Empirical Prompt Engineering for Software Engineering (PROMPT-SE)
Language Models (LMs), both Large (LLMs) and Small (SLMs), have significantly advanced automation in Software Engineering (SE), supporting tasks such as code generation, bug fixing, and documentation. However, their effectiveness depends strongly on how they are instructed. Prompt engineering—the design and refinement of prompts to guide model behavior—has emerged as a vital research area for ensuring reliable, high-quality results.
While interest is growing, current knowledge remains fragmented, and empirical evidence is limited. The PROMPT-SE workshop provides a dedicated forum to examine how prompt design influences LM performance in SE tasks, identify recurring patterns and trade-offs, and discuss best practices grounded in empirical studies.
The workshop aims to bring together researchers and practitioners interested in advancing prompt engineering for SE, fostering discussion, experience sharing, and community building around empirical methods that improve the reliability, reproducibility, and effectiveness of LM-based development tools.
Call for Papers
PROMPT-SE welcomes original contributions that have not been previously submitted to or presented at other forums (although the studies described may have been). All accepted papers will be published in the EASE 2026 Workshop Proceedings.
We invite submissions of papers in two categories:
- Full Research Papers (10 pages, including references): These studies describe empirical research results on prompt engineering for Software Engineering. Submissions should clearly explain the study design and execution, discuss how methodological challenges were addressed, present concrete results, and highlight advances to the state of the art.
- Short Papers (5 pages, including references): These papers report practical experiences, preliminary results, or ongoing studies related to empirical prompt engineering. Submissions should emphasize lessons learned, methodological challenges, and insights that can inform future research.
Topics of Interest
We welcome submissions on topics including, but not limited to:
- Empirical studies on the effectiveness of prompting strategies and patterns for SE tasks;
- Definition and evaluation of metrics for prompt quality, performance, and reproducibility;
- Comparative analyses of prompting techniques;
- Studies on integrating prompt engineering practices into SE tools, environments, and pipelines;
- Assessment of tools supporting prompt creation, optimization, and validation for SE applications;
- User studies exploring the experience of software engineers designing and using prompts in LM-based workflows;
- Case studies of prompt engineering applications in industrial and research SE settings;
- Investigation into the cognitive and human factors influencing prompt design and understanding;
- Lessons learned and challenges faced during the empirical evaluation and application of prompt engineering;
- Green and sustainable prompt engineering practices.
Empirical Methods
We welcome papers using any empirical method established in SE research, including studies reporting negative or non-significant results. Accepted methods include, but are not limited to:
- Action Research;
- Benchmarking;
- Case Study;
- Case Survey;
- Data Science;
- Engineering Research (also known as design science research);
- Experiments with human participants;
- Grounded Theory;
- Longitudinal Study;
- Meta-science;
- Mixed Methods (including explicitly mixed methodologies);
- Optimization Studies;
- Qualitative Surveys (e.g., interview studies);
- Quantitative Simulation;
- Questionnaire Surveys (quantitative);
- Repository Mining;
- Systematic Literature Review;
- Replication studies.
Submission Guidelines
All papers must be submitted in PDF format through the web-based submission system EasyChair. Submissions must not exceed 10 pages for full papers and 5 pages for short papers, including all figures, tables, references, and appendices.
All submissions should use the official ACM Primary Article Template. Deviating from the ACM formatting instructions may lead to a desk rejection. LaTeX users should use the following options:
\documentclass[sigconf,review,anonymous]{acmart}
\acmConference[EASE 2026]{The 30th International Conference on Evaluation and Assessment in Software Engineering}{9–12 June, 2026}{Glasgow, Scotland, United Kingdom}
Authors must comply with the SIGSOFT Open Science Policy.
PROMPT-SE 2026 will employ a double-anonymous review process. Do not include author names or affiliations in submissions. All references to the authors’ prior work should be in the third person. Any online supplements, replication packages, etc., referred to in the work must also be anonymized. Advice for sharing supplements anonymously can be found here.
By submitting to PROMPT-SE, authors agree to the ACM Policy and Procedures on Plagiarism, Misrepresentation, and Falsification. If the research involves human participants or subjects, the authors must also adhere to the ACM Publications Policy on Research Involving Human Participants and Subjects.
If accepted, your paper will be published as open access, in line with ACM’s new publishing model for the International Conference Proceedings Series (ICPS). Please note that this requires payment of an ACM Article Processing Charge (APC) for inclusion in the EASE 2026 Proceedings.