ASE 2025
Sun 16 - Thu 20 November 2025 Seoul, South Korea

This program is tentative and subject to change.

Mon 17 Nov

Displayed time zone: Seoul

09:00 - 09:30
Opening (Keynote) at Vista
09:00
30m
Keynote
ASE Opening
Keynote
Shin Yoo KAIST, Marcel Böhme MPI for Security and Privacy, Lingming Zhang University of Illinois at Urbana-Champaign
09:30 - 10:30
Keynote 1 (Keynote) at Vista
09:30
60m
Keynote
We Will Publish No Algorithm Before Its Time
Keynote
Thomas Reps University of Wisconsin-Madison
10:30 - 11:00
10:30
30m
Coffee break
Break
Catering

12:30 - 14:00
12:30
90m
Lunch
Lunch
Catering

15:00 - 18:00
Tools - Testing & Analysis (Tool Demonstration Track) at Walker Hall
15:00
3h
Demonstration
Towards Context-aware Mobile Privacy Notice: Implementation of A Deployable Contextual Privacy Policies Generator
Tool Demonstration Track
Haochen Gong Australian National University, Zhen Tao Technical University of Munich, Shidong Pan Columbia University & New York University, Zhenchang Xing CSIRO's Data61, Xiaoyu Sun Australian National University, Australia
15:00
3h
Demonstration
Metamorphic Testing of Deep Reinforcement Learning Agents with MDPMORPH
Tool Demonstration Track
Jiapeng Li Beihang University, Zheng Zheng Beihang University, Yuning Xing University of Auckland, Daixu Ren Beihang University, Steven Cho The University of Auckland, New Zealand, Valerio Terragni University of Auckland
15:00
3h
Demonstration
FlowStrider: Low-friction Continuous Threat Modeling
Tool Demonstration Track
Bernd Gruner German Aerospace Center (DLR), Institute of Data Science, Noah Erthel German Aerospace Center (DLR), Clemens-Alexander Brust German Aerospace Center (DLR)
15:00
3h
Demonstration
ReFuzzer: Feedback-Driven Approach to Enhance Validity of LLM-Generated Test Programs
Tool Demonstration Track
Iti Shree King's College London, Karine Even-Mendoza King’s College London, Tomasz Radzik King's College London
15:00
3h
Demonstration
DESIGNATOR: a Toolset for Automated GAN-enhanced Search-based Testing and Retraining of DNNs in Martian Environments
Tool Demonstration Track
Mohammed Attaoui University of Luxembourg, Fabrizio Pastore University of Luxembourg
Pre-print
15:00
3h
Demonstration
Chrysalis: A Lightweight Framework for Metamorphic Testing in Python
Tool Demonstration Track
Jai Parera University of California, Los Angeles, Nathan Huey University of California, Los Angeles, Ben Limpanukorn University of California, Los Angeles, Miryung Kim UCLA and Amazon Web Services
15:00
3h
Demonstration
AndroFL: Evolutionary-Driven Fault Localization for Android Apps
Tool Demonstration Track
Vishal Singh Indian Institute of Technology Kanpur, Ravi Shankar Das Indian Institute of Technology Kanpur, Prajwal H G InMobi, Subhajit Roy IIT Kanpur
DOI
15:00
3h
Demonstration
XRintTest: An Automated Framework for User Interaction Testing in Extended Reality Applications
Tool Demonstration Track
Ruizhen Gu University of Sheffield, José Miguel Rojas University of Sheffield, Donghwan Shin University of Sheffield
Pre-print
15:00
3h
Demonstration
Training-Control-as-Code: Towards a declarative solution to control training
Tool Demonstration Track
Padmanabha V. Seshadri IBM India Research Lab, Harikrishnan Balagopal IBM India Research Lab, Mehant Kammakomati IBM India Research Lab, Ashok Pon Kumar IBM Research - India, Dushyant Behl IBM Research
Media Attached
15:00
3h
Demonstration
VUSC: An Extensible Research Platform for Java-Based Static Analysis
Tool Demonstration Track
Marc Miltenberger Fraunhofer SIT; ATHENE, Steven Arzt Fraunhofer SIT; ATHENE
15:00
3h
Demonstration
BASHIRI: Learning Failure Oracles from Execution Features
Tool Demonstration Track
Marius Smytzek CISPA Helmholtz Center for Information Security, Martin Eberlein Humboldt-Universtität zu Berlin, Tural Mammadov CISPA Helmholtz Center for Information Security, Lars Grunske Humboldt-Universität zu Berlin, Andreas Zeller CISPA Helmholtz Center for Information Security
15:00
3h
Demonstration
FETT: Fault Injection as an Educational and Training Tool in Cybersecurity
Tool Demonstration Track
Anaé De Baets University of Namur, Guillaume Nguyen University of Namur, Xavier Devroey University of Namur, Fabian Gilson University of Canterbury
Pre-print
15:30 - 16:00
15:30
30m
Coffee break
Break
Catering

Tue 18 Nov

Displayed time zone: Seoul

09:00 - 09:30
MIP Award 1 (MIP Award) at Vista
09:00
30m
Talk
Deep Learning Code Fragments for Code Clone Detection
MIP Award
Martin White Booz Allen Hamilton, Michele Tufano College of William and Mary, Christopher Vendome Miami University, Denys Poshyvanyk William & Mary
DOI
10:30 - 11:00
10:30
30m
Coffee break
Break
Catering

12:30 - 14:00
12:30
90m
Lunch
Lunch
Catering

15:00 - 18:00
Tools - LLMs and Agents (Tool Demonstration Track) at Walker Hall
15:00
3h
Demonstration
APIDA-Chat: Structured Synthesis of API Search Dialogues to Bootstrap Conversational Agents
Tool Demonstration Track
Zachary Eberhart University of Notre Dame, Collin McMillan University of Notre Dame
15:00
3h
Demonstration
PROXiFY: A Bytecode Analysis Tool for Detecting and Classifying Proxy Contracts in Ethereum Smart Contracts
Tool Demonstration Track
Ilham Qasse Reykjavik University, Mohammad Hamdaqa Polytechnique Montreal, Björn Þór Jónsson Reykjavik University
15:00
3h
Demonstration
DeepTx: Real-Time Transaction Risk Analysis via Multi-Modal Features and LLM Reasoning
Tool Demonstration Track
Yixuan Liu Nanyang Technological University, Xinlei Li Nanyang Technological University, Yi Li Nanyang Technological University
Pre-print
15:00
3h
Demonstration
WIBE: Watermarks for generated Images - Benchmarking & Evaluation
Tool Demonstration Track
Aleksey Yakushev ISP RAS, Aleksandr Akimenkov ISP RAS, Khaled Abud MSU AI Institute, Dmitry Obydenkov ISP RAS, Irina Serzhenko MIPT, Kirill Aistov Huawei Research Center, Egor Kovalev MSU, Stanislav Fomin ISP RAS, Anastasia Antsiferova ISP RAS Research Center, MSU AI Institute, Kirill Lukianov ISP RAS Research Center, MIPT, Yury Markin ISP RAS
15:00
3h
Demonstration
EyeNav: Accessible Webpage Interaction and Testing using Eye-tracking and NLP
Tool Demonstration Track
Juan Diego Yepes-Parra Universidad de los Andes, Colombia, Camilo Escobar-Velásquez Universidad de los Andes, Colombia
Link to publication Media Attached
15:00
3h
Demonstration
Quirx: A Mutation-Based Framework for Evaluating Prompt Robustness in LLM-based Software
Tool Demonstration Track
Souhaila Serbout University of Zurich, Zurich, Switzerland
15:00
3h
Demonstration
BenGQL: An Extensible Benchmarking Framework for Automated GraphQL Testing
Tool Demonstration Track
Abenezer Angamo Independent Researcher, Marcello Maugeri University of Catania
Media Attached
15:00
3h
Demonstration
evalSmarT: An LLM-Based Evaluation Framework for Smart Contract Comment Generation
Tool Demonstration Track
Fatou Ndiaye MBODJI SnT, University of Luxembourg, Mame Marieme Ciss SOUGOUFARA UCAD, Senegal, Wendkuuni Arzouma Marc Christian OUEDRAOGO SnT, University of Luxembourg, Alioune Diallo University of Luxembourg, Kui Liu Huawei, Jacques Klein University of Luxembourg, Tegawendé F. Bissyandé University of Luxembourg
Pre-print
15:00
3h
Demonstration
LLMorph: Automated Metamorphic Testing of Large Language Models
Tool Demonstration Track
Steven Cho The University of Auckland, New Zealand, Stefano Ruberto JRC European Commission, Valerio Terragni University of Auckland
15:00
3h
Demonstration
TRUSTVIS: A Multi-Dimensional Trustworthiness Evaluation Framework for Large Language Models
Tool Demonstration Track
Ruoyu Sun University of Alberta, Canada, Da Song University of Alberta, Jiayang Song Macau University of Science and Technology, Yuheng Huang The University of Tokyo, Lei Ma The University of Tokyo & University of Alberta
15:00
3h
Demonstration
GUI-ReRank: Enhancing GUI Retrieval with Multi-Modal LLM-based Reranking
Tool Demonstration Track
Kristian Kolthoff Institute for Software and Systems Engineering, Clausthal University of Technology, Felix Kretzer human-centered systems Lab (h-lab), Karlsruhe Institute of Technology (KIT), Christian Bartelt Institute for Software and Systems Engineering, TU Clausthal, Alexander Maedche human-centered systems Lab (h-lab), Karlsruhe Institute of Technology (KIT), Simone Paolo Ponzetto Data and Web Science Group, University of Mannheim
Pre-print Media Attached
15:00
3h
Demonstration
StackPlagger: A System for Identifying AI-Code Plagiarism on Stack Overflow
Tool Demonstration Track
Aman Swaraj Dept. of Computer Science & Engineering, Indian Institute of Technology, Roorkee, India, Harsh Goyal Indian Institute of Technology, Roorkee, Sumit Chadgal Indian Institute of Technology, Roorkee, Sandeep Kumar Dept. of Computer Science & Engineering, Indian Institute of Technology, Roorkee, India
15:00
3h
Demonstration
AgentDroid: A Multi-Agent Tool for Detecting Fraudulent Android Applications
Tool Demonstration Track
Ruwei Pan Chongqing University, Hongyu Zhang Chongqing University, Zhonghao Jiang, Ran Hou Chongqing University
15:30 - 16:00
15:30
30m
Coffee break
Break
Catering

Wed 19 Nov

Displayed time zone: Seoul

09:00 - 09:30
MIP Award 2 (MIP Award) at Vista
09:00
30m
Talk
Automated Test Input Generation for Android: Are We There Yet?
MIP Award
Shauvik Roy Choudhary, Alessandra Gorla IMDEA Software Institute, Alessandro Orso Georgia Institute of Technology, USA
DOI
09:30 - 10:30
Keynote 3 (Keynote) at Vista
09:30
60m
Keynote
Hyperscale Bug Finding and Fixing: DARPA AIxCC
Keynote
Taesoo Kim Georgia Institute of Technology
10:30 - 11:00
10:30
30m
Coffee break
Break
Catering

12:30 - 14:00
12:30
90m
Lunch
Lunch
Catering

15:00 - 18:00
Tools - Code and Model (Tool Demonstration Track) at Walker Hall
15:00
3h
Demonstration
DSBox: A Data Selection Framework for Efficient Deep Code Learning
Tool Demonstration Track
Xinyang Liu TianJin University, Lili Quan Tianjin University, Qiang Hu Tianjin University
15:00
3h
Demonstration
OSSPREY: AI-Driven Forecasting and Intervention for OSS Project Sustainability
Tool Demonstration Track
Nafiz Imtiaz Khan Department of Computer Science, University of California, Davis, Priyal Soni University of California, Davis, Arjun Ashok University of California, Davis, Vladimir Filkov University of California at Davis, USA
15:00
3h
Demonstration
ORMorpher: An Interactive Framework for ORM Translation and Optimization
Tool Demonstration Track
Milan Abrahám Department of Software Engineering, Charles University, Pavel Koupil Charles University, Faculty of Mathematics and Physics
15:00
3h
Demonstration
PrioTestCI: Efficient Test Case Prioritization in GitHub Workflows for CI Optimization
Tool Demonstration Track
Shubham Vasudeo Desai North Carolina State University, Shonil Bhide North Carolina State University, Souhaila Serbout University of Zurich, Zurich, Switzerland, Luciano Marchezan DIRO, University of Montreal, Wesley Assunção North Carolina State University
15:00
3h
Demonstration
CLARA: A Developer’s Companion for Code Comprehension and Analysis
Tool Demonstration Track
Ahmed Adnan, Mushfiqur Rahman Bangladesh University of Business and Technology, Saad Sakib Noor University of Dhaka, Kazi Sakib Institute of Information Technology, University of Dhaka
15:00
3h
Demonstration
CodeGenLink: A Tool to Find the Likely Origin and License of Automatically Generated Code
Tool Demonstration Track
Daniele Bifolco University of Sannio, Guido Annicchiarico University of Sannio, Italy, Pierluigi Barbiero University of Sannio, Italy, Massimiliano Di Penta University of Sannio, Italy, Fiorella Zampetti University of Sannio, Italy
Pre-print Media Attached
15:00
3h
Demonstration
A Large-Scale Evolvable Dataset for Model Context Protocol Ecosystem and Security Analysis
Tool Demonstration Track
Zhiwei Lin National University of Singapore, Bonan Ruan National University of Singapore, Jiahao Liu National University of Singapore, Weibo Zhao National University of Singapore
15:00
3h
Demonstration
Evaluating Program Coverage for Code-Model Training
Tool Demonstration Track
Nandakishore S Menon IBM Research India, Diptikalyan Saha IBM Research India
15:00
3h
Demonstration
BuilDroid: A Self-Correcting LLM Agent for Automated Android Builds
Tool Demonstration Track
Jaehyeon Kim New York University Abu Dhabi, Rui Rua New York University Abu Dhabi, Karim Ali NYU Abu Dhabi
15:00
3h
Demonstration
LitterBox+: An Extensible Framework for LLM-enhanced Scratch Static Code Analysis
Tool Demonstration Track
Benedikt Fein University of Passau, Florian Obermueller University of Passau, Gordon Fraser University of Passau
Pre-print
15:00
3h
Demonstration
PyGress: Tool for Analyzing Progression of Code Proficiency in Python OSS Projects
Tool Demonstration Track
Rujiphart Charatvaraphan Faculty of Information and Communication Technology, Mahidol University, Bunradar Chatchaiyadech Faculty of Information and Communication Technology, Mahidol University, Thitirat Sukijprasert Faculty of Information and Communication Technology, Mahidol University, Chaiyong Rakhitwetsagul Mahidol University, Thailand, Morakot Choetkiertikul Mahidol University, Thailand, Raula Gaikovina Kula The University of Osaka, Thanwadee Sunetnanta Mahidol University, Kenichi Matsumoto Nara Institute of Science and Technology
15:00
3h
Demonstration
PyTrim: A Practical Tool for Reducing Python Dependency Bloat
Tool Demonstration Track
Konstantinos Karakatsanis Athens University of Economics and Business, Georgios Alexopoulos University of Athens, Ioannis Karyotakis Athens University of Economics and Business, Foivos Timotheos Proestakis Athens University of Economics and Business, Evangelos Talos Athens University of Economics and Business, Panos Louridas Athens University of Economics and Business, Dimitris Mitropoulos University of Athens
15:30 - 16:00
15:30
30m
Coffee break
Break
Catering

Accepted Papers

AgentDroid: A Multi-Agent Tool for Detecting Fraudulent Android Applications
Tool Demonstration Track
A Large-Scale Evolvable Dataset for Model Context Protocol Ecosystem and Security Analysis
Tool Demonstration Track
AndroFL: Evolutionary-Driven Fault Localization for Android Apps
Tool Demonstration Track
DOI
APIDA-Chat: Structured Synthesis of API Search Dialogues to Bootstrap Conversational Agents
Tool Demonstration Track
BASHIRI: Learning Failure Oracles from Execution Features
Tool Demonstration Track
BenGQL: An Extensible Benchmarking Framework for Automated GraphQL Testing
Tool Demonstration Track
Media Attached
BuilDroid: A Self-Correcting LLM Agent for Automated Android Builds
Tool Demonstration Track
Chrysalis: A Lightweight Framework for Metamorphic Testing in Python
Tool Demonstration Track
CLARA: A Developer’s Companion for Code Comprehension and Analysis
Tool Demonstration Track
CodeGenLink: A Tool to Find the Likely Origin and License of Automatically Generated Code
Tool Demonstration Track
Pre-print Media Attached
DeepTx: Real-Time Transaction Risk Analysis via Multi-Modal Features and LLM Reasoning
Tool Demonstration Track
Pre-print
DESIGNATOR: a Toolset for Automated GAN-enhanced Search-based Testing and Retraining of DNNs in Martian Environments
Tool Demonstration Track
Pre-print
DSBox: A Data Selection Framework for Efficient Deep Code Learning
Tool Demonstration Track
evalSmarT: An LLM-Based Evaluation Framework for Smart Contract Comment Generation
Tool Demonstration Track
Pre-print
Evaluating Program Coverage for Code-Model Training
Tool Demonstration Track
EyeNav: Accessible Webpage Interaction and Testing using Eye-tracking and NLP
Tool Demonstration Track
Link to publication Media Attached
FETT: Fault Injection as an Educational and Training Tool in Cybersecurity
Tool Demonstration Track
Pre-print
FlowStrider: Low-friction Continuous Threat Modeling
Tool Demonstration Track
GUI-ReRank: Enhancing GUI Retrieval with Multi-Modal LLM-based Reranking
Tool Demonstration Track
Pre-print Media Attached
LitterBox+: An Extensible Framework for LLM-enhanced Scratch Static Code Analysis
Tool Demonstration Track
Pre-print
LLMorph: Automated Metamorphic Testing of Large Language Models
Tool Demonstration Track
Metamorphic Testing of Deep Reinforcement Learning Agents with MDPMORPH
Tool Demonstration Track
ORMorpher: An Interactive Framework for ORM Translation and Optimization
Tool Demonstration Track
OSSPREY: AI-Driven Forecasting and Intervention for OSS Project Sustainability
Tool Demonstration Track
PrioTestCI: Efficient Test Case Prioritization in GitHub Workflows for CI Optimization
Tool Demonstration Track
PROXiFY: A Bytecode Analysis Tool for Detecting and Classifying Proxy Contracts in Ethereum Smart Contracts
Tool Demonstration Track
PyGress: Tool for Analyzing Progression of Code Proficiency in Python OSS Projects
Tool Demonstration Track
PyTrim: A Practical Tool for Reducing Python Dependency Bloat
Tool Demonstration Track
Quirx: A Mutation-Based Framework for Evaluating Prompt Robustness in LLM-based Software
Tool Demonstration Track
ReFuzzer: Feedback-Driven Approach to Enhance Validity of LLM-Generated Test Programs
Tool Demonstration Track
StackPlagger: A System for Identifying AI-Code Plagiarism on Stack Overflow
Tool Demonstration Track
Towards Context-aware Mobile Privacy Notice: Implementation of A Deployable Contextual Privacy Policies Generator
Tool Demonstration Track
Training-Control-as-Code: Towards a declarative solution to control training
Tool Demonstration Track
Media Attached
TRUSTVIS: A Multi-Dimensional Trustworthiness Evaluation Framework for Large Language Models
Tool Demonstration Track
VUSC: An Extensible Research Platform for Java-Based Static Analysis
Tool Demonstration Track
WIBE: Watermarks for generated Images - Benchmarking & Evaluation
Tool Demonstration Track
XRintTest: An Automated Framework for User Interaction Testing in Extended Reality Applications
Tool Demonstration Track
Pre-print

Call for Papers

The ASE 2025 Demonstrations Track invites researchers and practitioners to present and discuss the most recent advances, experiences, and challenges in the field of software engineering, supported by live presentations of new research tools, data, and other artifacts. We encourage innovative research demonstrations that show early implementations of novel software engineering concepts, as well as mature prototypes. The research demonstrations are intended to highlight underlying scientific contributions.

Whereas a regular research paper points out the scientific contribution of a new software engineering approach, a demonstration paper provides the opportunity to show how a scientific contribution has been transferred into a working tool or data set. Authors of regular research papers are thus encouraged to submit an accompanying demonstration paper. Submissions of independent tools that are not associated with any research papers are welcome.

Papers submitted to the tool demonstration track should describe (a) novel early tool prototypes or (b) novel aspects of mature tools. The submissions must clearly communicate the following information to the audience:

  • the envisioned users;
  • the software engineering challenge the tool addresses;
  • the methodology it implies for its users;
  • the results of validation studies already conducted (for mature tools) or the design of planned studies (for early prototypes).

Submission

Papers must be submitted electronically through the HotCRP submission site by July 23rd and must satisfy the following requirements:

  • All submissions must be in PDF format and conform, at time of submission, to the IEEE Conference Proceedings Formatting Guidelines (title in 24pt font and full text in 10pt type); LaTeX users must use \documentclass[10pt,conference]{IEEEtran} without including the compsoc or compsocconf option (a minimal preamble sketch follows this list).
  • All submissions must be in English.
  • A demonstration submission must not exceed four pages (including all text, references, and figures);
  • Authors are required to submit a screencast of the tool, with the video link attached to the end of the abstract;
  • Authors are encouraged to make their code and datasets open source, with the link for the code and datasets attached to the end of the abstract;
  • A submission must not have been previously published in demonstration form and must not be under simultaneous submission to any other venue;
  • Submissions for the tool track DO NOT follow a double-blind review process. If a tool track submission accompanies a submission to the research track (which is double-blind), please make sure to click “Yes” in the “Connection with research track” section on HotCRP during submission.
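
For LaTeX users, the formatting requirement above boils down to a preamble along the following lines. This is a minimal sketch rather than an official template: the document class line is the one piece mandated above; the package, title, author, and link placeholders are illustrative only.

    \documentclass[10pt,conference]{IEEEtran}  % required; do not add the compsoc or compsocconf option
    \usepackage{hyperref}                      % optional: clickable video and artifact links

    \begin{document}

    \title{MyTool: A One-Line Description}     % the class typesets the title (24pt per the guidelines above)
    \author{\IEEEauthorblockN{Author Name}\IEEEauthorblockA{Affiliation}}
    \maketitle

    \begin{abstract}
    One-paragraph summary of the tool.
    Video: \url{https://youtu.be/...}          % required screencast link at the end of the abstract
    Code and data: \url{https://example.org/}  % artifact link (encouraged)
    \end{abstract}

    \section{Introduction}

    \end{document}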

Tools and Data Availability

To promote replicability and to disseminate the advances achieved with the research tools and data sets, we require that data sets be made publicly available for download and use. We strongly encourage the same for tools, ideally through their distribution under an open-source software license. Whenever a tool is not made publicly available, the paper must include a clear explanation of why this was not possible. Authors are also encouraged to distribute their demonstration in a form that can be easily used, such as a virtual machine image, a software container (e.g., Docker), or a system configuration (e.g., Puppet, Ansible, Salt, CFEngine).

Screencast

Authors are required to prepare a video of up to 5 minutes demonstrating the tool. For consistency, we require ALL videos to be uploaded to YouTube and made available by the time of submission. The URL of the YouTube video should be added at the end of the abstract. The video should:

  • provide an overview of the tool’s capabilities and show the major tool features in detail;
  • provide clarifying voice-over and/or annotation highlights;
  • be engaging and exciting for the audience!

Please note that authors of successful submissions will have the opportunity to revise the paper, the video (and its hosting location), the code, and the datasets by the camera-ready deadline. Submissions that do not comply with the instructions will be rejected without review.

Evaluation

Each submission will be reviewed by at least three members of the tool demonstrations program committee. The evaluation criteria include:

  • Presentation, i.e., the extent to which the presentation meets the high standards of ASE;
  • Relevance, i.e., the pertinence of the proposed tool for the ASE audience;
  • Positioning, i.e., the degree to which the submission discusses differences from related tools (pros and cons);
  • Demo quality, i.e., the quality and usefulness of the accompanying artifacts: video, tool, code, and evaluation datasets.

For further information, please feel free to contact the track chairs.

Accepted Papers

After acceptance, the list of paper authors cannot be changed under any circumstances; the list of authors on a camera-ready paper must be identical to that on the submitted paper. Paper titles cannot be changed except with the permission of the Track Chairs, and only when referees recommend a change for clarity or accuracy with respect to the paper content.