Beyond pip install: Evaluating LLM agents for the automated installation of Python projects
This program is tentative and subject to change.
Many works have recently proposed the use of Large Language Model (LLM) based agents for performing "repository-level" tasks, loosely defined as tasks whose scope extends beyond a single file. While the advent of such agents has led to speculation that software engineering tasks could be performed almost independently of human intervention, we argue that one important task is missing: fulfilling project-level dependencies by installing other repositories. To investigate the feasibility of this repository-level installation task, we propose an agent whose goal is to install a given repository, and to verify the installation, by searching for installation instructions in the repository's documentation. We evaluate our agent on 40 open-source Python projects and find that 45% of the studied repositories cannot be installed automatically by our agent. Through the results of our empirical evaluation, we identify the common causes of our agent's installation failures and discuss the challenges faced in the design and implementation of such an agent. We also present a benchmark of repository installation tasks, which includes ground-truth installation processes in the form of Dockerfiles.
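The installation task described in the abstract amounts to discovering installation instructions in a repository's documentation, executing them, and verifying the result. The Python sketch below is a hypothetical illustration of that loop, not the authors' agent: the documentation file names, the regular expression for spotting install commands, the fallback command, and the import-based verification are all assumptions made for the example.

```python
# Hypothetical sketch of a documentation-driven installation check.
# Not the paper's implementation; names and heuristics are illustrative only.
import re
import subprocess
from pathlib import Path


def find_install_commands(repo: Path) -> list[str]:
    """Scan common documentation files for lines that look like
    installation commands (naive pip/conda regex)."""
    commands = []
    for doc in ("README.md", "README.rst", "INSTALL.md", "docs/installation.md"):
        path = repo / doc
        if not path.is_file():
            continue
        for line in path.read_text(errors="ignore").splitlines():
            line = line.strip().lstrip("$ ")
            if re.match(r"^(pip|pip3|conda) install\b", line):
                commands.append(line)
    return commands


def install_and_verify(repo: Path, package_name: str) -> bool:
    """Run the discovered commands inside the repository, then verify the
    installation by importing the package in a fresh interpreter."""
    # Fall back to a plain editable-style install if no instructions are found.
    for cmd in find_install_commands(repo) or ["pip install ."]:
        result = subprocess.run(cmd, shell=True, cwd=repo,
                                capture_output=True, text=True)
        if result.returncode != 0:
            print(f"command failed: {cmd}\n{result.stderr}")
    check = subprocess.run(["python", "-c", f"import {package_name}"],
                           capture_output=True, text=True)
    return check.returncode == 0


if __name__ == "__main__":
    # Example usage with placeholder paths/names.
    print(install_and_verify(Path("./some-repo"), "some_package"))
```

A real agent would of course go further than this sketch, for example by interpreting free-form prose instructions, resolving system-level dependencies, and recovering from failed commands.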
Wed 5 Mar (displayed time zone: Eastern Time, US & Canada)
11:00 - 12:30 | Empirical Studies & LLM | Industrial Track / Research Papers / Reproducibility Studies and Negative Results (RENE) Track | Room L-1710

11:00 (15m) Talk | Beyond pip install: Evaluating LLM agents for the automated installation of Python projects | Research Papers | Louis Mark Milliken (KAIST), Sungmin Kang (Korea Advanced Institute of Science and Technology), Shin Yoo (Korea Advanced Institute of Science and Technology)

11:18 (12m) Talk | On the Compression of Language Models for Code: An Empirical Study on CodeBERT | Research Papers | Giordano d'Aloisio (University of L'Aquila), Luca Traini (University of L'Aquila), Federica Sarro (University College London), Antinisca Di Marco (University of L'Aquila) | Pre-print

11:30 (15m) Talk | Can Large Language Models Discover Metamorphic Relations? A Large-Scale Empirical Study | Research Papers | Jiaming Zhang (University of Science and Technology Beijing), Chang-ai Sun (University of Science and Technology Beijing), Huai Liu (Swinburne University of Technology), Sijin Dong (University of Science and Technology Beijing)

11:45 (15m) Talk | Revisiting the Non-Determinism of Code Generation by the GPT-3.5 Large Language Model | Reproducibility Studies and Negative Results (RENE) Track | Salimata Sawadogo (Centre d'Excellence Interdisciplinaire en Intelligence Artificielle pour le Développement (CITADEL)), Aminata Sabané (Université Joseph KI-ZERBO, Centre d'Excellence CITADELLE), Rodrique Kafando (Centre d'Excellence Interdisciplinaire en Intelligence Artificielle pour le Développement (CITADEL)), Tegawendé F. Bissyandé (University of Luxembourg)

12:00 (15m) Talk | Language Models to Support Multi-Label Classification of Industrial Data | Industrial Track | Waleed Abdeen (Blekinge Institute of Technology), Michael Unterkalmsteiner, Krzysztof Wnuk (Blekinge Institute of Technology), Alessio Ferrari (CNR-ISTI), Panagiota Chatzipetrou