In an era increasingly shaped by generative AI, code understanding remains a critical foundation for ensuring software quality, reliability, and maintainability. While AI systems can accelerate code generation, developers still face substantial challenges in comprehending, debugging, and effectively integrating the resulting artifacts. Code proficiency, defined not only by comprehension but also by the ability to write efficient, idiomatic code, plays a central role in addressing these challenges. Existing tools, such as those assigning CEFR-based levels to code constructs, offer initial frameworks for assessing code difficulty. However, their manually derived classifications lack empirical validation and often diverge from pedagogical progressions found in computer science textbooks. This research seeks to address the absence of a standardized, data-driven metric for determining the proficiency levels required to understand specific programming elements. We propose an automated framework grounded in textbook analysis and clustering techniques to establish a scalable proficiency metric applicable to both human-written and AI-generated code. Preliminary findings reveal strong alignment with educational sequencing and suggest promising applications in AI-assisted software development, particularly in enhancing code review workflows and tailoring AI-generated code to developers’ proficiency levels.
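To make the proposed approach concrete, the following is a minimal sketch of the general idea of deriving proficiency levels from textbook ordering and clustering; it is not the authors' implementation. The chapter snippets are hypothetical, Python's `ast` module stands in for construct extraction, and k-means is used as a placeholder clustering technique.

```python
import ast
from collections import defaultdict

import numpy as np
from sklearn.cluster import KMeans

# Hypothetical textbook snippets, ordered by chapter.
chapters = [
    "x = 1\nprint(x)",                                   # chapter 1: assignment, print
    "for i in range(3):\n    print(i)",                  # chapter 2: loops
    "def f(n):\n    return [i * i for i in range(n)]",   # chapter 3: functions, comprehensions
]

# Record the earliest chapter in which each AST construct first appears.
first_seen = {}
for chapter_idx, source in enumerate(chapters, start=1):
    for node in ast.walk(ast.parse(source)):
        first_seen.setdefault(type(node).__name__, chapter_idx)

constructs = sorted(first_seen)
features = np.array([[first_seen[c]] for c in constructs], dtype=float)

# Cluster constructs into three proficiency levels by first-appearance chapter.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(features)

levels = defaultdict(list)
for construct, label in zip(constructs, kmeans.labels_):
    levels[label].append(construct)

# Order clusters by mean chapter so Level 1 contains the earliest-introduced constructs.
ordered = sorted(levels, key=lambda l: features[kmeans.labels_ == l].mean())
for rank, label in enumerate(ordered, start=1):
    print(f"Level {rank}: {sorted(levels[label])}")
```

In a full pipeline, the single "first-appearance chapter" feature would presumably be replaced by richer features drawn from multiple textbooks, which is what would make the resulting metric data-driven rather than manually assigned.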
Tue 9 Sep (displayed time zone: Auckland, Wellington)
13:30 - 15:00
13:30 (30m) | Enhancing Infrastructure Maintenance and Evolution through Graph-Based Visualization and Analysis | Doctoral Symposium | Stefano Fossati (JADS - TU/e)
14:00 (30m) | Understanding and Simulating OSS Evolution: A Case Study on PyMC | Doctoral Symposium | Toru Sugiyama (The Open University of Japan)
14:30 (30m) | Towards Proficiency Assessment through Code | Doctoral Symposium | Ruksit Rojpaisarnkit (Nara Institute of Science and Technology)