An Empirical Comparison of Code Generation Approaches for Ansible
The rapid proliferation of LLM-based programming assistants has enabled fast and accurate automatic code generation for general-purpose programming languages. Domain-specific languages (DSLs) such as Ansible, a DSL for IT automation, have seen far less support despite being critical to many fields, owing to the limited public-domain code available for training models and a lack of interest from tool developers. To address this gap, we collect a novel dataset of permissively licensed Ansible code and use it to create Warp, a code LLM fine-tuned to produce Ansible tasks from natural-language prompts. We evaluate state-of-the-art LLM-based code generation models, comparing multiple common strategies, including fine-tuning base models on Ansible code and retrieval-augmented generation over documentation, in order to understand the challenges with existing methodology and identify future research directions that enable better code generation for DSLs.
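To make the task concrete, consider a hypothetical prompt-to-output pair (an illustration of the target format, not an example from the paper's dataset): a natural-language prompt such as "Install and start the nginx web server" would be expected to map to Ansible tasks along these lines.

    # Hypothetical output for the prompt "Install and start the nginx web server";
    # the modules are standard Ansible builtins, but this prompt/task pairing is
    # an assumption for illustration only.
    - name: Install nginx
      ansible.builtin.package:
        name: nginx
        state: present

    - name: Ensure nginx is running and enabled at boot
      ansible.builtin.service:
        name: nginx
        state: started
        enabled: true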
Mon 15 Apr (times displayed in the Lisbon time zone)
09:00 - 10:30 | Early Morning Session (InteNSE) at Daciano da Costa | Chair(s): Reyhaneh Jabbarvand (University of Illinois at Urbana-Champaign), Saeid Tizpaz-Niari (University of Texas at El Paso)
09:00 (20m) Paper | An Empirical Comparison of Code Generation Approaches for Ansible (InteNSE) | Benjamin Darnell (University of California, Santa Barbara), Hetarth Chopra (University of Illinois at Urbana-Champaign), Aaron Councilman (University of Illinois at Urbana-Champaign), David Grove (IBM Research), Vikram S. Adve (University of Illinois at Urbana-Champaign, USA)
09:20 (70m) Keynote | Towards an Interpretable Science of Deep Learning for Software Engineering: A Causal Inference View (InteNSE) | Denys Poshyvanyk (William & Mary)