On the Applicability of Language Models to Block-Based Programs
Block-based programming languages like Scratch are increasingly popular for programming education and end-user programming. Recent program analyses build on the insight that source code can be modelled using techniques from natural language processing. Many of the regularities of source code that support this approach are due to the syntactic overhead imposed by textual programming languages. This syntactic overhead, however, is precisely what block-based languages remove in order to simplify programming. Consequently, it is unclear how well this modelling approach performs on block-based programming languages. In this paper, we investigate the applicability of language models for the popular block-based programming language Scratch. We model Scratch programs using n-gram models, the most essential type of language model, and transformers, a popular deep learning model. Evaluation on the example tasks of code completion and bug finding confirms that blocks inhibit predictability, but the use of language models is nevertheless feasible. Our findings serve as a foundation for improving tooling and analyses for block-based languages.
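To make the abstract's modelling approach concrete, the sketch below shows a minimal n-gram language model over linearised block sequences. It is an illustration only, not the paper's implementation: it assumes add-one smoothing, trigrams, and a tiny corpus of hypothetical Scratch-style opcode tokens; the paper's actual tokenisation, smoothing, and evaluation details may differ.

```python
# Minimal sketch of an n-gram language model over token sequences, as one might
# apply it to linearised Scratch programs (e.g. sequences of block opcodes).
# Illustration only: the corpus and smoothing choices below are assumptions,
# not the setup used in the paper.
from collections import Counter
import math

def train_ngram(sequences, n=3):
    """Count n-gram and (n-1)-gram context frequencies over token sequences."""
    ngrams, contexts, vocab = Counter(), Counter(), set()
    for seq in sequences:
        padded = ["<s>"] * (n - 1) + seq + ["</s>"]
        vocab.update(padded)
        for i in range(len(padded) - n + 1):
            gram = tuple(padded[i:i + n])
            ngrams[gram] += 1
            contexts[gram[:-1]] += 1
    return ngrams, contexts, vocab

def prob(token, context, ngrams, contexts, vocab):
    """Add-one smoothed conditional probability P(token | context)."""
    gram = tuple(context) + (token,)
    return (ngrams[gram] + 1) / (contexts[tuple(context)] + len(vocab))

def cross_entropy(seq, ngrams, contexts, vocab, n=3):
    """Average negative log2 probability; lower values mean more predictable code."""
    padded = ["<s>"] * (n - 1) + seq + ["</s>"]
    logps = [
        math.log2(prob(padded[i], padded[i - n + 1:i], ngrams, contexts, vocab))
        for i in range(n - 1, len(padded))
    ]
    return -sum(logps) / len(logps)

# Hypothetical "programs" as sequences of Scratch-like block opcodes.
corpus = [
    ["event_whenflagclicked", "control_forever", "motion_movesteps", "motion_turnright"],
    ["event_whenflagclicked", "control_forever", "motion_movesteps", "motion_turnleft"],
]
ngrams, contexts, vocab = train_ngram(corpus, n=3)
print(cross_entropy(["event_whenflagclicked", "control_forever", "motion_movesteps"],
                    ngrams, contexts, vocab, n=3))
```

In this framing, lower cross-entropy means the model finds the code more predictable; the abstract's observation that blocks inhibit predictability would, roughly, correspond to higher cross-entropy for block sequences than for comparable textual code.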
Fri 19 May (displayed time zone: Hobart)
15:45 - 17:15 | SE education methods and tools (Technical Track / SEET - Software Engineering Education and Training) at Meeting Room 101. Chair(s): Andrew Begel (Carnegie Mellon University)
15:45 | 15m Talk | On the Applicability of Language Models to Block-Based Programs (Technical Track) | Elisabeth Griebl, Benedikt Fein, Florian Obermueller, Gordon Fraser (University of Passau); René Just (University of Washington)
16:00 | 15m Talk | Improving Grading Outcomes in Software Engineering Projects Through Automated Contributions Summaries (SEET) | Kai Presler-Marshall (Bowdoin College); Sarah Heckman, Kathryn Stolee (North Carolina State University)
16:15 | 15m Talk | Analyzing the Quality of Submissions in Online Programming Courses (SEET) | Maria Tigina, Anastasiia Birillo, Yaroslav Golubev (JetBrains Research); Hieke Keuning (Utrecht University); Nikolay Vyahhi (Stepik); Timofey Bryksin (JetBrains Research). Pre-print available.
16:30 | 15m Talk | A Metric for Measuring Software Engineering Post-Graduate Outcomes (SEET)
16:45 | 7m Talk | Using Focus to Personalise Learning and Feedback in Software Engineering Education (SEET) | Bansri Amish Modi, Andrew Cain (School of Information Technology, Deakin University); Guy Wood-Bradley (Deakin University); Jake Renzella (University of New South Wales, Sydney)
16:52 | 7m Talk | Shaping a Tool for Developing Computing Students’ Professional Identity - Industry Perspectives (SEET) | Laura Tubino, Kerri Morgan, Guy Wood-Bradley (Deakin University); Andrew Cain (School of Information Technology, Deakin University)
17:00 | 7m Talk | REFERENT: Transformer based Feedback Generation using Assignment Information for Programming Course (SEET) | Jinseok Heo, Hohyeon Jeong (Sungkyunkwan University); Dongwook Choi (SungKyunKwan University); Eunseok Lee (Sungkyunkwan University)
17:07 | 7m Talk | Factors Affecting Compilable State at Each Keystroke in CS1 (SEET) | Steven Scott (Utah State University); Arto Hellas (Aalto University); Juho Leinonen (The University of Auckland); John Edwards (Utah State University)