Tue 11 Oct 2022 14:10 - 14:30 at Banquet A - Technical Session 6 - Source Code Manipulation Chair(s): Collin McMillan

Competitive programming has become a popular way for programmers to test their skills. Large-scale online programming contests attract millions of experienced programmers to compete against each other. Competition-level programming problems are challenging in nature, and participants often fail to solve a problem on their first attempt. Some online platforms for competitive programming allow programmers to practice on competition-level problems as well, and the standard feedback for an incorrect practice submission is the first test case that the submission fails. Often, the failed test case does not provide programmers with enough information to resolve the errors in their code, and they abandon the problem after making several more unsuccessful attempts.

We present Clef, the first data-driven tool that can generate feedback on competition-level code automatically by repairing programmers’ incorrect submissions. The key development is that Clef can learn how to generate repairs for incorrect submissions by examining the repairs that other programmers made to their own submissions over time. Since the differences between an incorrect program and a correct program for the same task may be significant, we introduce a new data structure, merge trees, to capture the changes between submissions. Merge trees are versatile: they can encode both large algorithm-level redesigns and small statement-level alterations. Clef applies the patterns it learns from a database of submissions to generate repairs for new submissions outside the database. We evaluated Clef on six real-world problems from Codeforces, the world’s largest platform for competitive programming. Clef achieves 42.1% accuracy in repairing programmers’ incorrect submissions. Even when given incorrect submissions from programmers who never found the solution to a problem on their own, Clef repairs the users’ programs 34.1% of the time.
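As a rough illustration of the merge-tree idea (not Clef's actual construction, which is described in the paper), the sketch below overlays the ASTs of two Python submissions and labels each node as kept, deleted, or inserted. The MergeNode class, the merge function, and the status labels are hypothetical names chosen for this example.

```python
# Illustrative sketch only: a merge-tree-like overlay of two submission ASTs.
# MergeNode, merge, and the status labels are hypothetical names for this example,
# not Clef's actual data structure.
import ast
from dataclasses import dataclass, field

@dataclass
class MergeNode:
    label: str                       # AST node type, e.g. "BinOp" or "Return"
    status: str                      # "kept", "changed", "deleted", or "inserted"
    children: list = field(default_factory=list)

def to_tree(node, status):
    """Copy a Python AST subtree into MergeNodes with a fixed status."""
    return MergeNode(type(node).__name__, status,
                     [to_tree(c, status) for c in ast.iter_child_nodes(node)])

def merge(old, new):
    """Naive positional merge: matching node types are kept and merged child by
    child; mismatches are recorded as a deletion plus an insertion."""
    if old is not None and new is not None and type(old) is type(new):
        old_kids = list(ast.iter_child_nodes(old))
        new_kids = list(ast.iter_child_nodes(new))
        kids = [merge(o, n) for o, n in zip(old_kids, new_kids)]
        kids += [to_tree(o, "deleted") for o in old_kids[len(new_kids):]]
        kids += [to_tree(n, "inserted") for n in new_kids[len(old_kids):]]
        return MergeNode(type(old).__name__, "kept", kids)
    changed = MergeNode("<change>", "changed")
    if old is not None:
        changed.children.append(to_tree(old, "deleted"))
    if new is not None:
        changed.children.append(to_tree(new, "inserted"))
    return changed

# Example: an incorrect submission and the programmer's later, corrected revision.
incorrect = ast.parse("print(a - b)")
corrected = ast.parse("print(a + b)")
merged = merge(incorrect, corrected)
```

In this toy example the merged tree localizes the fix to the swapped arithmetic operator while marking everything else as kept; as the abstract notes, Clef's merge trees also handle much larger, algorithm-level redesigns.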

Tue 11 Oct

Displayed time zone: Eastern Time (US & Canada)

14:00 - 15:30
Technical Session 6 - Source Code Manipulation (NIER Track / Research Papers / Late Breaking Results) at Banquet A
Chair(s): Collin McMillan University of Notre Dame
14:00
10m
Vision and Emerging Results
Automatic Code Documentation Generation Using GPT-3
NIER Track
Junaed Younus Khan, University of Calgary; Gias Uddin, University of Calgary, Canada
14:10
20m
Research paper
Automated Feedback Generation for Competition-Level Code
Research Papers
Jialu Zhang, Yale University; De Li, The MathWorks, Inc.; John C. Kolesar, Yale University; Hanyuan Shi, N/A; Ruzica Piskac, Yale University
14:30
10m
Paper
Generalizability of Code Clone Detection on CodeBERT
Late Breaking Results
Tim Sonnekalb, German Aerospace Center (DLR); Bernd Gruner, German Aerospace Center (DLR); Clemens-Alexander Brust, German Aerospace Center (DLR); Patrick Mäder, Technische Universität Ilmenau
14:40
10m
Vision and Emerging Results
Next Syntactic-Unit Code Completion and Applications
NIER Track
Hoan Anh Nguyen, Amazon; Aashish Yadavally, University of Texas at Dallas; Tien N. Nguyen, University of Texas at Dallas
14:50
20m
Research paper
CrystalBLEU: Precisely and Efficiently Measuring the Similarity of Code (Virtual) - ACM SIGSOFT Distinguished Paper Award
Research Papers
Aryaz Eghbali, University of Stuttgart, Germany; Michael Pradel, University of Stuttgart
15:10
20m
Research paper
Low-Resources Project-Specific Code Summarization (Virtual)
Research Papers
Rui Xie, Peking University; Tianxiang Hu, Peking University; Wei Ye, Peking University; Shikun Zhang, Peking University