Exploring LLM-Driven Explanations for Quantum Algorithms
Background: Quantum computing is a rapidly growing programming paradigm that brings significant changes to the design and implementation of algorithms. Understanding quantum algorithms requires knowledge of physics and mathematics, which can be challenging for software developers. Aims: In this work, we provide a first analysis of how LLMs can support developers' understanding of quantum code. Method: We empirically analyse and compare the quality of explanations provided by three widely adopted LLMs (GPT-3.5, Llama 2, and TinyLlama) using two different human-written prompt styles for seven state-of-the-art quantum algorithms. We also analyse how consistent LLM explanations are over multiple rounds and how LLMs can improve existing descriptions of quantum algorithms. Results: Llama 2 provides the highest-quality explanations from scratch, while GPT-3.5 emerged as the LLM best suited to improve existing explanations. In addition, we show that adding a small amount of context to the prompt significantly improves the quality of explanations. Moreover, we observe that explanations are qualitatively and syntactically consistent over multiple rounds. Conclusions: This work explores the ability of LLMs to generate explanations for quantum programs, highlighting promising results and open challenges for future research in the field of LLMs for quantum code explanation. Future work includes refining the methods by means of prompt optimisation and parsing of quantum code explanations, and carrying out a systematic assessment of the quality of explanations.
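To make the prompt-style comparison concrete, the sketch below shows one way such an experiment could be set up: a bare "explain this code" request versus the same request with a small amount of added context, each sent to an LLM for a toy Qiskit program. This is a minimal illustration only; the model name, prompt wording, and example circuit are assumptions for the sketch, not the authors' actual prompts or study artefacts.

```python
# Hypothetical sketch of a prompt-based explanation setup (not the authors' code).
# Requires the OpenAI Python SDK (pip install openai) and an OPENAI_API_KEY.
from openai import OpenAI

# A toy quantum program (Qiskit) used as the explanation target.
QUANTUM_CODE = """
from qiskit import QuantumCircuit
qc = QuantumCircuit(2, 2)
qc.h(0)                     # put qubit 0 into superposition
qc.cx(0, 1)                 # entangle qubit 0 with qubit 1
qc.measure([0, 1], [0, 1])  # measure both qubits
"""

# Prompt style 1: bare request, no added context.
BARE_PROMPT = f"Explain the following code:\n{QUANTUM_CODE}"

# Prompt style 2: the same request with a small amount of added context.
CONTEXT_PROMPT = (
    "You are explaining a quantum program written in Qiskit to a software "
    "developer with no physics background. Describe what each gate does and "
    f"what the program computes overall:\n{QUANTUM_CODE}"
)

def explain(prompt: str, model: str = "gpt-3.5-turbo") -> str:
    """Ask the LLM for an explanation of the embedded quantum code."""
    client = OpenAI()
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    # One explanation per prompt style; repeating these calls over several
    # rounds would allow a consistency comparison of the kind described above.
    for name, prompt in [("bare", BARE_PROMPT), ("context", CONTEXT_PROMPT)]:
        print(f"--- {name} prompt ---")
        print(explain(prompt))
```

The two prompt constants differ only in the added context sentence, which is the kind of small prompt change the abstract reports as significantly improving explanation quality.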
Fri 25 Oct (displayed time zone: Brussels, Copenhagen, Madrid, Paris)
14:00 - 15:30 | Large language models in software engineering II (ESEM Emerging Results, Vision and Reflection Papers Track / ESEM IGC) at Telensenyament (B3 Building - 1st Floor). Chair(s): Claudio Di Sipio (University of L'Aquila)
14:00 (15m, Vision and Emerging Results) | Debugging with Open-Source Large Language Models: An Evaluation. ESEM Emerging Results, Vision and Reflection Papers Track. Yacine Majdoub (IResCoMath Lab, University of Gabes), Eya Ben Charrada (IResCoMath Lab, University of Gabes)
14:15 (15m, Vision and Emerging Results) | Multi-language Software Development in the LLM Era: Insights from Practitioners' Conversations with ChatGPT. ESEM Emerging Results, Vision and Reflection Papers Track. Lucas Almeida Aguiar (State University of Ceará), Matheus Paixao (State University of Ceará), Rafael Carmo (Federal University of Ceará), Edson Soares (Instituto Atlantico & State University of Ceará (UECE)), Antonio Leal (State University of Ceará), Matheus Freitas (State University of Ceará), Eliakim Gama (State University of Ceará)
14:30 (15m, Vision and Emerging Results) | Exploring LLM-Driven Explanations for Quantum Algorithms. ESEM Emerging Results, Vision and Reflection Papers Track. Giordano d'Aloisio (University of L'Aquila), Sophie Fortz (King's College London), Carol Hanna (University College London), Daniel Fortunato (INESC-ID, University of Porto), Avner Bensoussan (King's College London), Eñaut Mendiluze Usandizaga (Simula Research Laboratory, Norway), Federica Sarro (University College London)
14:45 (15m, Industry talk) | Beyond Words: On Large Language Models Actionability in Mission-Critical Risk Analysis. ESEM IGC. Matteo Esposito (University of Oulu), Francesco Palagiano (Multitel di Lerede Alessandro & C. s.a.s.), Valentina Lenarduzzi (University of Oulu), Davide Taibi (University of Oulu)
15:00 (15m, Vision and Emerging Results) | Detecting Code Smells using ChatGPT: Initial Insights. ESEM Emerging Results, Vision and Reflection Papers Track. Luciana L. Silva (Federal University of Minas Gerais), Janio R. Silva (IFMG), João Eduardo Montandon (Universidade Federal de Minas Gerais (UFMG)), Marcus Andrade (IFMG), Marco Tulio Valente (Federal University of Minas Gerais, Brazil)
15:15 (15m, Industry talk) | ChatGPT's Potential in Cryptography Misuse Detection: A Comparative Analysis with Static Analysis Tools. ESEM IGC.