CHASE 2025
Sun 27 - Mon 28 April 2025 Ottawa, Ontario, Canada
co-located with ICSE 2025

This program is tentative and subject to change.

Technical interviews are an opportunity for candidates to showcase their technical proficiency to employers. Feedback on code correctness, optimality, and complexity can reveal deficiencies and be invaluable in candidates’ preparation for future interviews. Unfortunately, current technical interview practices lack feedback for candidates. To resolve this, we designed a website to simulate technical interview programming experiences and provide users with LLM-generated feedback on code using ChatGPT. We devised a between-subjects study and conducted mock technical interviews with 46 participants across two settings: human-administered (a live human sharing generated feedback) and automated. We focus on: 1) evaluating the quality of ChatGPT-generated code feedback; and 2) dissecting factors influencing trust and usefulness with regard to feedback delivery. Our results show that candidates perceive coding feedback as useful. However, they regard the automated feedback as less trustworthy than feedback in the human-administered setting. In light of our findings, we discuss implications for increasing trust in AI systems and guidelines for designing technical interview feedback systems.
Sun 27 Apr

Displayed time zone: Eastern Time (US & Canada)

09:00 - 10:30
Day 1 Opening / Human Aspects and Machine Learning Session (Research Track) at 210
Chair(s): Rashina Hoda Monash University, Ronnie de Souza Santos University of Calgary, Bianca Trinkenreich Colorado State University, Giuseppe Destefanis Brunel University London
09:00
30m
Talk
Day 1 Opening
Research Track
Rashina Hoda Monash University, Ronnie de Souza Santos University of Calgary, Bianca Trinkenreich Colorado State University
09:30
15m
Talk
Unpacking Organizational Change in AI Transformations of Software Engineering
Research Track
Theocharis Tavantzis Gothenburg University, Robert Feldt Chalmers University of Technology, Sweden
09:45
15m
Talk
Human and Machine: How Software Engineers Perceive and Engage with AI-Assisted Code Reviews Compared to Their Peers
Research Track
Adam Alami University of Southern Denmark, Neil Ernst University of Victoria
10:00
15m
Talk
Will You Trust Me More Than ChatGPT? Evaluating LLM-Generated Code Feedback for Mock Technical Interviews
Research Track
Swanand Vaishampayan Virginia Tech, Chris Brown Virginia Tech
10:15
15m
Talk
Human-Machine Teaming and Team Effectiveness in AI tools for Software Engineering
Research Track
Irum Rauf The Open University, UK, Helen Sharp The Open University, Tamara Lopez The Open University, Michel Wermelinger The Open University