ISSTA 2025
Wed 25 - Sat 28 June 2025 Trondheim, Norway
co-located with FSE 2025
Wed 25 Jun 2025 11:25 - 11:50 at Cosmos 3A - Fairness and LLM Testing Chair(s): Andreas Metzger

In recent years, Large Language Models (LLMs) have dramatically advanced automated code translation, achieving computational accuracy above 80% on many earlier benchmarks. However, most code samples in these benchmarks are short, standalone, statement- or method-level, and algorithmic, which does not align with practical coding tasks. Consequently, the actual capability of LLMs in translating code written for everyday development remains unknown.

To fill this gap, we construct a class-level code translation benchmark, ClassEval-T, and make the first attempt to extensively assess recent LLMs’ performance on class-level code translation. ClassEval-T extends ClassEval, a well-known class-level Python code generation benchmark covering practical coding topics (e.g., database operation and game design) and diverse contextual dependencies (e.g., fields, methods, and libraries). The manual migration to Java and C++, with complete code samples and associated test suites, took 360 person-hours. We then design three translation strategies (i.e., holistic, min-dependency, and standalone) for class-level code translation and evaluate eight recent LLMs, spanning commercial, general-purpose, and code-specialized models from diverse families and of varying sizes, on ClassEval-T. Experimental results show a remarkable performance drop compared with the most widely studied method-level code translation benchmark, as well as clear discrepancies among LLMs, demonstrating ClassEval-T’s effectiveness in measuring recent LLMs. We further discuss the usage scenarios of the different translation strategies and LLMs’ dependency awareness when translating class samples. Finally, 1,243 failure cases produced by the best-performing LLM under test are thoroughly analyzed and categorized for practical guidance and future research.
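To illustrate what "class-level" translation with contextual dependencies means in practice, consider the following hypothetical Python class (a minimal sketch, not an actual ClassEval-T sample). Its methods depend on a shared field (`self.conn`), on each other (`__init__` calls `_init_schema`), and on a library (`sqlite3`) — exactly the kinds of links a translator that handles each method in isolation would miss, whereas translating the class as a whole must preserve them:

```python
import sqlite3  # library dependency the translated class must preserve


class UserStore:
    """A small class whose methods depend on shared fields and on each other."""

    def __init__(self):
        # Field dependency: every method below relies on self.conn.
        self.conn = sqlite3.connect(":memory:")
        # Intra-class method dependency: the constructor calls another method.
        self._init_schema()

    def _init_schema(self):
        self.conn.execute(
            "CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)"
        )

    def add_user(self, name):
        cur = self.conn.execute(
            "INSERT INTO users (name) VALUES (?)", (name,)
        )
        self.conn.commit()
        return cur.lastrowid

    def get_user(self, user_id):
        row = self.conn.execute(
            "SELECT name FROM users WHERE id = ?", (user_id,)
        ).fetchone()
        return row[0] if row else None
```

Translating `add_user` alone, without `__init__`, `_init_schema`, or an equivalent of the `sqlite3` dependency in the target language, yields code that cannot compile or run — which is why method-level benchmark scores may overestimate performance on classes like this.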

Wed 25 Jun

Displayed time zone: Amsterdam, Berlin, Bern, Rome, Stockholm, Vienna

11:00 - 12:15
Fairness and LLM Testing (Research Papers) at Cosmos 3A
Chair(s): Andreas Metzger (University of Duisburg-Essen)
11:00
25m
Talk
Fairness Mediator: Neutralize Stereotype Associations to Mitigate Bias in Large Language Models
Research Papers
Yisong Xiao (Beihang University), Aishan Liu (Beihang University; Institute of Dataspace), Siyuan Liang (National University of Singapore), Xianglong Liu (Beihang University; Institute of Dataspace; Zhongguancun Laboratory), Dacheng Tao (Nanyang Technological University)
DOI
11:25
25m
Talk
ClassEval-T: Evaluating Large Language Models in Class-Level Code Translation
Research Papers
Pengyu Xue (Shandong University), Linhao Wu (Shandong University), Zhen Yang (Shandong University), Chengyi Wang (Shandong University), Xiang Li (Shandong University), Yuxiang Zhang (Shandong University), Jia Li (Tsinghua University), Ruikai Jin (Shandong University), Yifei Pei (Shandong University), Zhaoyan Shen (Shandong University), Xiran Lyu (Shandong University), Jacky Keung (City University of Hong Kong)
DOI
11:50
25m
Talk
No Bias Left Behind: Fairness Testing for Deep Recommender Systems Targeting General Disadvantaged Groups
Research Papers
Zhuo Wu (Tianjin International Engineering Institute, Tianjin University), Zan Wang (Tianjin University), Chuan Luo (Beihang University), Xiaoning Du (Monash University), Junjie Chen (Tianjin University)
DOI

Information for Participants
Wed 25 Jun 2025 11:00 - 12:15 at Cosmos 3A - Fairness and LLM Testing Chair(s): Andreas Metzger
Info for room Cosmos 3A:

Cosmos 3A is the first room in the Cosmos 3 wing.

When facing the main Cosmos Hall, access to the Cosmos 3 wing is on the left, close to the stairs. The area is accessed through a large door with the number “3”, which will stay open during the event.
