Can LLMs Write CI? A Study on Automatic Generation of GitHub Actions Configurations
Continuous Integration (CI) services, such as GitHub Actions, require developers to write YAML-based configurations, which can be tedious and error-prone. Despite the increasing use of Large Language Models (LLMs) to automate software engineering tasks, their ability to generate CI configurations remains underexplored. This paper presents a preliminary study evaluating six LLMs for generating GitHub Actions configurations from natural language descriptions. We assess three general-purpose foundation models (GPT-4o, Llama, and Gemma) and three code-pretrained models (GPT-4.1, Code Llama, and CodeGemma). We also introduce the first labeled dataset of its kind, constructed from GitHub Actions documentation, pairing descriptions with corresponding best-practice YAML configurations. Zero-shot prompting achieves up to 69% similarity with the ground truth, but only 3% perfect matches. Code-pretrained models slightly underperform general-purpose ones on YAML-based CI tasks, revealing LLM limitations for CI configuration generation. Analyzing GPT-4o outputs reveals issues such as missing or renamed steps, misinterpreted descriptions, and unnecessary additions that may affect structural and contextual correctness, indicating a gap between generation quality and the precision required for executable CI configurations. Our research offers insights for improving LLM alignment with configuration languages and guiding future efforts on CI automation and tooling support.
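To illustrate the kind of artifact the study targets, a natural language description such as "run the test suite on every push using Node.js 20" could map to a minimal GitHub Actions workflow like the sketch below. This pairing is illustrative only and is not drawn from the paper's dataset.

```yaml
# Hypothetical description-to-YAML pairing (not from the paper's dataset):
# "Run the test suite on every push using Node.js 20."
name: CI
on: [push]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      # Check out the repository and set up the requested Node.js version
      - uses: actions/checkout@v4
      - uses: actions/setup-node@v4
        with:
          node-version: 20
      # Install dependencies and run the tests
      - run: npm ci
      - run: npm test
```

Even in a small workflow like this, the step names, trigger events, and action versions must be structurally exact for the configuration to execute, which is the precision gap the abstract highlights.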
Fri 12 Sep (times shown in the Auckland/Wellington time zone)
10:30 - 12:00 | Session 13 - Reuse 1 (NIER Track / Research Papers Track / Industry Track / Registered Reports) at Case Room 3 260-055. Chair(s): Banani Roy, University of Saskatchewan
10:30 15m | From Release to Adoption: Challenges in Reusing Pre-trained AI Models for Downstream Developers (Research Papers Track). Peerachai Banyongrakkul, The University of Melbourne; Mansooreh Zahedi, The University of Melbourne; Patanamon Thongtanunam, University of Melbourne; Christoph Treude, Singapore Management University; Haoyu Gao, The University of Melbourne. Pre-print
10:45 15m | Are Classical Clone Detectors Good Enough For the AI Era? (Research Papers Track). Ajmain Inqiad Alam, University of Saskatchewan; Palash Ranjan Roy, University of Saskatchewan; Farouq Al-Omari, Thompson Rivers University; Chanchal K. Roy, University of Saskatchewan; Banani Roy, University of Saskatchewan; Kevin Schneider, University of Saskatchewan
11:00 10m | Can LLMs Write CI? A Study on Automatic Generation of GitHub Actions Configurations (NIER Track). Taher A. Ghaleb, Trent University; Dulina Rathnayake, Department of Computer Science, Trent University, Peterborough, Canada. Pre-print
11:10 10m | A Preliminary Study on Large Language Models Self-Negotiation in Software Engineering (NIER Track). Chunrun Tao, Kyushu University; Honglin Shu, Kyushu University; Masanari Kondo, Kyushu University; Yasutaka Kamei, Kyushu University
11:20 10m | CIgrate: Automating CI Service Migration with Large Language Models (Registered Reports). Md Nazmul Hossain, Department of Computer Science, Trent University, Peterborough, Canada; Taher A. Ghaleb, Trent University. Pre-print
11:30 15m | A Deep Dive into Retrieval-Augmented Generation for Code Completion: Experience on WeChat (Industry Track). Zezhou Yang, Tencent Inc.; Ting Peng, Tencent Inc.; Cuiyun Gao, Harbin Institute of Technology, Shenzhen; Chaozheng Wang, The Chinese University of Hong Kong; Hailiang Huang, Tencent Inc.; Yuetang Deng, Tencent
11:45 10m | Inferring Attributed Grammars from Parser Implementations (NIER Track). Andreas Pointner, University of Applied Sciences Upper Austria, Hagenberg, Austria; Josef Pichler, University of Applied Sciences Upper Austria; Herbert Prähofer, Johannes Kepler University Linz. Pre-print