StackPlagger: A System for Identifying AI-Code Plagiarism on Stack Overflow
This program is tentative and subject to change.
Identifying AI-code plagiarism on technical forums like Stack Overflow (SO) is critical, as it directly affects the platform's trust and credibility. While previous studies have explored AI-generated code detection, they have focused on long, standalone samples from repositories and competitions. In contrast, SO snippets are often short, fragmented, and context-specific, which makes detection more challenging. Furthermore, existing methods have not adequately addressed obfuscated or adversarially prompted code that is crafted to mimic human style and evade detection. To address these gaps, we first introduce a curated dataset of 8,000 SO-ChatGPT snippet pairs generated using multiple adversarial prompts. While earlier methods relied solely on pre-trained models, we propose an ensemble approach that combines stylometric code features with pre-trained embeddings to improve detection performance. Finally, we deploy our fine-tuned model as a Google Chrome extension called 'StackPlagger', which can flag AI-generated code in SO answers and display AI confidence scores. A video demonstration and the associated artifacts of our tool can be found at https://youtu.be/6O9Urp2mvbI and https://github.com/harsh-g1/StackPlagger, respectively.
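The abstract's core idea — concatenating hand-crafted stylometric features with a pre-trained code embedding before classification — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the specific features shown (average line length, comment ratio, identifier-naming style) and the hashed n-gram stand-in for a real embedding model (e.g., a CodeBERT-style encoder) are assumptions for demonstration only.

```python
import math
import re

def stylometric_features(code: str) -> list[float]:
    """Illustrative stylometric features (not the paper's exact feature set)."""
    lines = code.splitlines() or [""]
    tokens = re.findall(r"[A-Za-z_]\w*", code)
    # Average line length: AI-generated code often has uniform formatting.
    avg_line_len = sum(len(line) for line in lines) / len(lines)
    # Fraction of comment lines: comment density is a common style signal.
    comment_ratio = sum(1 for line in lines if line.strip().startswith("#")) / len(lines)
    # Identifier naming style: snake_case vs. camelCase preference.
    snake = sum(1 for t in tokens if "_" in t)
    camel = sum(1 for t in tokens if re.search(r"[a-z][A-Z]", t))
    naming_ratio = snake / (snake + camel + 1)
    return [avg_line_len, comment_ratio, naming_ratio]

def embedding_stub(code: str, dim: int = 8) -> list[float]:
    """Stand-in for a pre-trained embedding: L2-normalized hashed char 3-grams."""
    vec = [0.0] * dim
    for i in range(len(code) - 2):
        vec[hash(code[i:i + 3]) % dim] += 1.0
    norm = math.sqrt(sum(v * v for v in vec)) or 1.0
    return [v / norm for v in vec]

def ensemble_vector(code: str) -> list[float]:
    """Concatenate both feature views for a downstream binary classifier."""
    return stylometric_features(code) + embedding_stub(code)

snippet = "def add(a, b):\n    # add two numbers\n    return a + b\n"
print(len(ensemble_vector(snippet)))  # 3 stylometric dims + 8 embedding dims
```

In the deployed tool, the concatenated vector would be fed to the fine-tuned classifier, whose output probability becomes the AI confidence score shown by the extension.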
Tue 18 Nov (displayed time zone: Seoul)
15:00 - 18:00
15:00 3h Demonstration | APIDA-Chat: Structured Synthesis of API Search Dialogues to Bootstrap Conversational Agents | Tool Demonstration Track
15:00 3h Demonstration | PROXiFY: A Bytecode Analysis Tool for Detecting and Classifying Proxy Contracts in Ethereum Smart Contracts | Tool Demonstration Track | Ilham Qasse (Reykjavik University), Mohammad Hamdaqa (Polytechnique Montreal), Björn Þór Jónsson (Reykjavik University)
15:00 3h Demonstration | DeepTx: Real-Time Transaction Risk Analysis via Multi-Modal Features and LLM Reasoning | Tool Demonstration Track | Yixuan Liu (Nanyang Technological University), Xinlei Li (Nanyang Technological University), Yi Li (Nanyang Technological University)
15:00 3h Demonstration | WIBE: Watermarks for generated Images - Benchmarking & Evaluation | Tool Demonstration Track | Aleksey Yakushev (ISP RAS), Aleksandr Akimenkov (ISP RAS), Khaled Abud (MSU AI Institute), Dmitry Obydenkov (ISP RAS), Irina Serzhenko (MIPT), Kirill Aistov (Huawei Research Center), Egor Kovalev (MSU), Stanislav Fomin (ISP RAS), Anastasia Antsiferova (ISP RAS Research Center, MSU AI Institute), Kirill Lukianov (ISP RAS Research Center, MIPT), Yury Markin (ISP RAS)
15:00 3h Demonstration | EyeNav: Accessible Webpage Interaction and Testing using Eye-tracking and NLP | Tool Demonstration Track | Juan Diego Yepes-Parra (Universidad de los Andes, Colombia), Camilo Escobar-Velásquez (Universidad de los Andes, Colombia)
15:00 3h Demonstration | Quirx: A Mutation-Based Framework for Evaluating Prompt Robustness in LLM-based Software | Tool Demonstration Track | Souhaila Serbout (University of Zurich, Zurich, Switzerland)
15:00 3h Demonstration | BenGQL: An Extensible Benchmarking Framework for Automated GraphQL Testing | Tool Demonstration Track
15:00 3h Demonstration | evalSmarT: An LLM-Based Evaluation Framework for Smart Contract Comment Generation | Tool Demonstration Track | Fatou Ndiaye MBODJI (SnT, University of Luxembourg), Mame Marieme Ciss SOUGOUFARA (UCAD, Senegal), Wendkuuni Arzouma Marc Christian OUEDRAOGO (SnT, University of Luxembourg), Alioune Diallo (University of Luxembourg), Kui Liu (Huawei), Jacques Klein (University of Luxembourg), Tegawendé F. Bissyandé (University of Luxembourg)
15:00 3h Demonstration | LLMorph: Automated Metamorphic Testing of Large Language Models | Tool Demonstration Track | Steven Cho (The University of Auckland, New Zealand), Stefano Ruberto (JRC European Commission), Valerio Terragni (University of Auckland)
15:00 3h Demonstration | TRUSTVIS: A Multi-Dimensional Trustworthiness Evaluation Framework for Large Language Models | Tool Demonstration Track | Ruoyu Sun (University of Alberta, Canada), Da Song (University of Alberta), Jiayang Song (Macau University of Science and Technology), Yuheng Huang (The University of Tokyo), Lei Ma (The University of Tokyo & University of Alberta)
15:00 3h Demonstration | GUI-ReRank: Enhancing GUI Retrieval with Multi-Modal LLM-based Reranking | Tool Demonstration Track | Kristian Kolthoff (Institute for Software and Systems Engineering, Clausthal University of Technology), Felix Kretzer (human-centered systems Lab (h-lab), Karlsruhe Institute of Technology (KIT)), Christian Bartelt (Institute for Software and Systems Engineering, TU Clausthal), Alexander Maedche (human-centered systems Lab (h-lab), Karlsruhe Institute of Technology (KIT)), Simone Paolo Ponzetto (Data and Web Science Group, University of Mannheim)
15:00 3h Demonstration | StackPlagger: A System for Identifying AI-Code Plagiarism on Stack Overflow | Tool Demonstration Track | Aman Swaraj (Dept. of Computer Science & Engineering, Indian Institute of Technology, Roorkee, India), Harsh Goyal (Indian Institute of Technology, Roorkee), Sumit Chadgal (Indian Institute of Technology, Roorkee), Sandeep Kumar (Dept. of Computer Science & Engineering, Indian Institute of Technology, Roorkee, India)
15:00 3h Demonstration | AgentDroid: A Multi-Agent Tool for Detecting Fraudulent Android Applications | Tool Demonstration Track | Ruwei Pan (Chongqing University), Hongyu Zhang (Chongqing University), Zhonghao Jiang, Ran Hou (Chongqing University)