Debugging is a critical but challenging task for programmers. This paper proposes ChatDBG, an AI-powered debugging assistant. ChatDBG integrates large language models (LLMs) to significantly enhance the capabilities and user-friendliness of conventional debuggers. ChatDBG lets programmers engage in a collaborative dialogue with the debugger, allowing them to pose complex questions about program state, perform root cause analysis for crashes or assertion failures, and explore open-ended queries like “why is x null?”. To handle these queries, ChatDBG grants the LLM autonomy to “take the wheel”: it can act as an independent agent capable of querying and controlling the debugger to navigate through stacks and inspect program state. It then reports its findings and yields back control to the programmer. By leveraging the real-world knowledge embedded in LLMs, ChatDBG can diagnose issues identifiable only through the use of domain-specific reasoning. Our ChatDBG prototype integrates with standard debuggers including LLDB and GDB for native code and Pdb for Python. Our evaluation across a diverse set of code, including C/C++ code with known bugs and a suite of Python code including standalone scripts and Jupyter notebooks, demonstrates that ChatDBG can successfully analyze root causes, explain bugs, and generate accurate fixes for a wide range of real-world errors. For the Python programs, a single query led to an actionable bug fix 67% of the time; one additional follow-up query increased the success rate to 85%. ChatDBG has seen rapid uptake; it has already been downloaded more than 75,000 times.
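The abstract describes the LLM "taking the wheel": acting as an agent that queries the debugger, navigates stacks, and inspects program state before reporting back. As a rough illustration of the kind of post-mortem inspection such an agent performs, the following is a minimal, self-contained Python sketch (not ChatDBG's actual implementation; the function names and report structure here are invented for illustration) that catches a crash and walks the traceback, collecting each frame's locals the way an agent might before explaining the failure:

```python
import traceback

def buggy(data):
    # Crashes when data["count"] is zero.
    return 100 / data["count"]

def inspect_crash(fn, *args):
    """Run fn; on failure, walk the traceback frame by frame,
    collecting function names, line numbers, and local variables,
    roughly what a debugging agent would gather before diagnosing."""
    try:
        fn(*args)
        return None
    except Exception as exc:
        report = {"error": repr(exc), "frames": []}
        tb = exc.__traceback__
        while tb is not None:
            frame = tb.tb_frame
            report["frames"].append({
                "function": frame.f_code.co_name,
                "line": tb.tb_lineno,
                "locals": {k: repr(v) for k, v in frame.f_locals.items()
                           if not k.startswith("__")},
            })
            tb = tb.tb_next
        return report

report = inspect_crash(buggy, {"count": 0})
print(report["error"])                 # the ZeroDivisionError
print(report["frames"][-1]["locals"])  # locals of the crashing frame
```

In ChatDBG itself this role is played by the real debugger (LLDB, GDB, or Pdb), with the LLM choosing which frames to visit and which variables to examine; the sketch above only mimics the "inspect state, then report" half of that loop.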
Wed 25 Jun (displayed time zone: Amsterdam, Berlin, Bern, Rome, Stockholm, Vienna)
Session: 11:00 - 12:30

11:00 (20m, Talk) · ChatDBG: Augmenting Debugging with Large Language Models · Research Papers · Kyla H. Levin (University of Massachusetts Amherst, USA), Nicolas van Kempen (University of Massachusetts Amherst, USA), Emery D. Berger (University of Massachusetts Amherst and Amazon Web Services), Stephen N. Freund (Williams College) · DOI · Pre-print

11:20 (10m, Talk) · Towards Adaptive Software Agents for Debugging · Ideas, Visions and Reflections · Yacine Majdoub, Eya Ben Charrada, Haifa Touati (all IReSCoMath Research Lab, Faculty of Sciences, University of Gabes, Tunisia) · Pre-print

11:30 (20m, Talk) · Empirically Evaluating the Impact of Object-Centric Breakpoints on the Debugging of Object-Oriented Programs · Research Papers · Valentin Bourcier (INRIA), Pooja Rani (University of Zurich), Maximilian Ignacio Willembrinck Santander (Univ. Lille, Inria, CNRS, Centrale Lille, UMR 9189 CRIStAL, F-59000 Lille, France), Alberto Bacchelli (University of Zurich), Steven Costiou (INRIA Lille) · DOI

11:50 (20m, Talk) · An Empirical Study of Bugs in Data Visualization Libraries · Research Papers · Weiqi Lu (The Hong Kong University of Science and Technology), Yongqiang Tian, Xiaohan Zhong (The Hong Kong University of Science and Technology), Haoyang Ma (Hong Kong University of Science and Technology), Zhenyang Xu (University of Waterloo), Shing-Chi Cheung (Hong Kong University of Science and Technology), Chengnian Sun (University of Waterloo) · DOI

12:10 (20m, Talk) · DuoReduce: Bug Isolation for Multi-Layer Extensible Compilation · Research Papers · Jiyuan Wang (University of California at Los Angeles), Yuxin Qiu (University of California at Riverside), Ben Limpanukorn (University of California, Los Angeles), Hong Jin Kang (University of Sydney), Qian Zhang (University of California at Riverside), Miryung Kim (UCLA and Amazon Web Services) · DOI · Pre-print