Automatically assessing code readability is a relatively new challenge that has attracted growing attention from the software engineering community. In this paper, we propose framing code readability assessment as a learning-to-rank task. Specifically, we design a pairwise ranking model based on siamese neural networks, which takes a pair of code snippets as input and outputs their readability ranking order. We have evaluated our approach on three publicly available datasets. The results are promising, with an accuracy of 83.5%, a precision of 86.1%, a recall of 81.6%, an F-measure of 83.6%, and an AUC of 83.4%.
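The pairwise setup the abstract describes can be sketched in a few lines: a shared ("siamese") encoder scores each snippet of a pair, and a sigmoid over the score difference yields the probability that one snippet is more readable than the other. The encoder, feature vectors, and dimensions below are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

# Illustrative sketch only: weights are random, and the 4-dimensional
# "readability feature" vectors are hypothetical stand-ins for real
# code features (the paper's actual model and inputs differ).
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 8))   # encoder weights, shared by both branches (siamese)
v = rng.normal(size=8)        # scoring head

def encode(x):
    """Shared encoder applied identically to each snippet's features."""
    return np.tanh(x @ W)

def readability_score(x):
    """Scalar readability score for one snippet."""
    return encode(x) @ v

def prob_more_readable(x_a, x_b):
    """P(snippet A ranks above snippet B): sigmoid of the score difference."""
    return 1.0 / (1.0 + np.exp(-(readability_score(x_a) - readability_score(x_b))))

a = rng.normal(size=4)  # hypothetical features of snippet A
b = rng.normal(size=4)  # hypothetical features of snippet B
p = prob_more_readable(a, b)
```

Because both branches share the same weights, the model is antisymmetric by construction: swapping the two inputs flips the predicted probability, so `prob_more_readable(a, b) + prob_more_readable(b, a)` is always 1.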

Mon 10 Oct

Displayed time zone: Eastern Time (US & Canada)

11:10 - 11:45
Paper Presentation Session 2: Readability Assessment [Workshop] AeSIR '22 at Online Workshop 3
Chair(s): Fernanda Madeiral Vrije Universiteit Amsterdam
11:10
10m
Paper
How Readable is Model Generated Code? Examining Readability and Visual Inspection of GitHub Copilot (Virtual)
[Workshop] AeSIR '22
Naser Al Madi Colby College
11:20
10m
Paper
Rank Learning-Based Code Readability Assessment with Siamese Neural Networks (Virtual)
[Workshop] AeSIR '22
11:30
15m
Live Q&A
Q&A and Open Discussion on Readability Assessment (Virtual)
[Workshop] AeSIR '22
Naser Al Madi Colby College, Qing Mi