ChatGPT-Resistant Screening Instrument for Identifying Non-Programmers
To ensure the validity of software engineering and IT security studies with professional programmers, it is essential to identify participants without programming skills. Existing screening questions are efficient, robust against cheating, and effectively differentiate programmers from non-programmers. However, the release of ChatGPT raises concerns about their continued effectiveness in identifying non-programmers. In a simulated attack, we showed that ChatGPT can easily solve existing screening questions. Therefore, we designed new ChatGPT-resistant screening questions using visual concepts and code comprehension tasks. We evaluated 28 screening questions in an online study with 121 participants, including both programmers and non-programmers. Our results showed that questions using visualizations of well-known programming concepts performed best in differentiating between programmers and non-programmers. Participants prompted to use ChatGPT struggled to solve the tasks; they considered ChatGPT ineffective and changed their strategy after a few screening questions. In total, we present six ChatGPT-resistant screening questions that effectively identify non-programmers. We provide recommendations for setting up a ChatGPT-resistant screening instrument that takes less than three minutes to complete, excludes 99.47% of non-programmers, and includes 94.83% of programmers.
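To illustrate the style of code comprehension task the abstract refers to, the sketch below shows a hypothetical multiple-choice screening item: participants read a short snippet and predict its output. The snippet, the answer options, and the scoring helper are illustrative assumptions, not items from the paper's actual instrument.

```python
# Hypothetical code-comprehension screening item (illustrative only; not taken
# from the paper's instrument). Participants read a short snippet and pick the
# option matching its output; non-programmers are expected to guess at chance.

SNIPPET = """
values = [3, 1, 4, 1, 5]
total = 0
for v in values:
    if v % 2 == 1:
        total += v
print(total)
"""

OPTIONS = {
    "A": "14",   # sum of all values
    "B": "10",   # sum of the odd values: 3 + 1 + 1 + 5
    "C": "5",    # number of list elements
    "D": "I don't know",
}
CORRECT = "B"

def score(answer: str) -> bool:
    """Return True if the participant selected the correct option."""
    return answer.strip().upper() == CORRECT

if __name__ == "__main__":
    print(SNIPPET)
    for key, text in OPTIONS.items():
        print(f"  {key}) {text}")
    print("Correct:", score("B"))
```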
Wed 17 Apr (displayed time zone: Lisbon)

11:00 - 12:30 | Generative AI studies (Research Track / Software Engineering Education and Training), Luis de Freitas Branco. Chair(s): Walid Maalej (University of Hamburg)

11:00 (15m, Talk) | ChatGPT Incorrectness Detection in Software Reviews. Research Track. Minaoar Hossain Tanzil (University of Calgary, Canada), Junaed Younus Khan (University of Calgary), Gias Uddin (York University, Canada). DOI, Pre-print

11:15 (15m, Talk) | ChatGPT-Resistant Screening Instrument for Identifying Non-Programmers. Research Track. Raphael Serafini (Ruhr University Bochum), Clemens Otto (Ruhr University Bochum), Stefan Albert Horstmann (Ruhr University Bochum), Alena Naiakshina (Ruhr University Bochum)

11:30 (15m, Talk) | Development in times of hype: How freelancers explore Generative AI? Research Track. Mateusz Dolata (University of Zurich), Norbert Lange (Entschleunigung Lange), Gerhard Schwabe (University of Zurich). DOI, Pre-print, File Attached

11:45 (15m, Talk) | How Far Are We? The Triumphs and Trials of Generative AI in Learning Software Engineering. Research Track. Rudrajit Choudhuri (Oregon State University), Dylan Liu (Oregon State University), Igor Steinmacher (Northern Arizona University), Marco Gerosa (Northern Arizona University), Anita Sarma (Oregon State University). Pre-print

12:00 (15m, Research paper) | Uncovering the Causes of Emotions in Software Developer Communication Using Zero-shot LLMs. Research Track. Mia Mohammad Imran (Virginia Commonwealth University), Preetha Chatterjee (Drexel University, USA), Kostadin Damevski (Virginia Commonwealth University). Pre-print

12:15 (15m, Talk) | Assessing AI Detectors in Identifying AI-Generated Code: Implications for Education. Software Engineering Education and Training. Wei Hung Pan (School of Information Technology, Monash University Malaysia), Ming Jie Chok (School of Information Technology, Monash University Malaysia), Jonathan Leong Shan Wong (School of Information Technology, Monash University Malaysia), Yung Xin Shin (School of Information Technology, Monash University Malaysia), Yeong Shian Poon (School of Information Technology, Monash University Malaysia), Zhou Yang (Singapore Management University), Chun Yong Chong (Monash University Malaysia), David Lo (Singapore Management University), Mei Kuan Lim (Monash University Malaysia)