ICSME 2025
Sun 7 - Fri 12 September 2025, Auckland, New Zealand
Tue 9 Sep 2025 14:30 - 15:00 at Room 260-040 - Session 2

In an era increasingly shaped by generative AI, code understanding remains a critical foundation for ensuring software quality, reliability, and maintainability. While AI systems can accelerate code generation, developers still face substantial challenges in comprehending, debugging, and effectively integrating the resulting artifacts. Code proficiency, which encompasses not only comprehension but also the ability to write efficient, idiomatic code, plays a central role in addressing these challenges. Existing tools, such as those assigning CEFR-based levels to code constructs, offer initial frameworks for assessing code difficulty. However, their manually derived classifications lack empirical validation and often diverge from the pedagogical progressions found in computer science textbooks. This research addresses the absence of a standardized, data-driven metric for determining the proficiency levels required to understand specific programming elements. We propose an automated framework grounded in textbook analysis and clustering techniques to establish a scalable proficiency metric applicable to both human-written and AI-generated code. Preliminary findings reveal strong alignment with educational sequencing and suggest promising applications in AI-assisted software development, particularly in enhancing code review workflows and tailoring AI-generated code to developers’ proficiency levels.
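
The abstract does not spell out the pipeline, but a minimal sketch of the underlying idea might look as follows: record where each construct first appears across several textbooks, cluster those positions into ordered proficiency levels, and rate a snippet by the highest-level construct it contains. Everything concrete below (the FIRST_APPEARANCE data, the construct inventory, the three-level split, and the gap-based 1-D clustering standing in for whatever clustering the authors actually use) is an illustrative assumption, not the paper's implementation.

import ast
from statistics import mean

# Hypothetical input (invented numbers): for each construct, the normalized
# position (0 = start of book, 1 = end) of its first appearance in three textbooks.
FIRST_APPEARANCE = {
    "if":           [0.05, 0.08, 0.10],
    "for":          [0.12, 0.15, 0.11],
    "function_def": [0.20, 0.25, 0.18],
    "try_except":   [0.55, 0.60, 0.50],
    "list_comp":    [0.62, 0.58, 0.66],
    "class_def":    [0.75, 0.80, 0.70],
    "lambda":       [0.85, 0.82, 0.90],
}

def cluster_levels(appearances, k=3):
    """Order constructs by mean first appearance, then split the ordering
    at the k-1 largest gaps: a simple 1-D clustering into k levels."""
    ordered = sorted(appearances, key=lambda c: mean(appearances[c]))
    scores = [mean(appearances[c]) for c in ordered]
    cuts = set(sorted(range(1, len(ordered)),
                      key=lambda i: scores[i] - scores[i - 1],
                      reverse=True)[:k - 1])
    levels, level = {}, 1
    for i, construct in enumerate(ordered):
        if i in cuts:
            level += 1
        levels[construct] = level
    return levels

# Map Python AST node types to the constructs tracked above.
NODE_TO_CONSTRUCT = {
    ast.If: "if", ast.For: "for", ast.FunctionDef: "function_def",
    ast.Try: "try_except", ast.ListComp: "list_comp",
    ast.ClassDef: "class_def", ast.Lambda: "lambda",
}

def snippet_level(source, levels):
    """Level needed to read a snippet = highest level among the tracked
    constructs it uses (level 1 if none of them appear)."""
    used = {NODE_TO_CONSTRUCT[type(node)]
            for node in ast.walk(ast.parse(source))
            if type(node) in NODE_TO_CONSTRUCT}
    return max((levels[c] for c in used), default=1)

levels = cluster_levels(FIRST_APPEARANCE)
print(snippet_level("squares = [x * x for x in range(10)]", levels))  # -> 2

With the invented data, the final call rates the list-comprehension snippet at level 2, since list comprehensions cluster with the mid-book constructs. A full framework along the abstract's lines would mine construct occurrences from textbook code listings at scale, and could then compare the resulting levels against existing CEFR-based classifications and apply them to AI-generated code.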

Displayed time zone: Auckland, Wellington