Tue 16 Apr 2024 14:00 - 15:30 at Fernando Pessoa - Focus Group: AI/ML for SE Chair(s): Reyhaneh Jabbarvand
Large Language Models (LLMs) are gaining popularity in Natural Language Processing (NLP) due to their remarkable accuracy across a wide range of NLP tasks. LLMs designed for coding are trained on massive code datasets scraped from the web, which enables them to learn the structure and syntax of programming languages, but also causes them to memorise information contained in those datasets, raising privacy and licensing concerns. At the same time, LLMs for code keep growing in size, making them harder to run locally and leaving users increasingly reliant on external infrastructure. We aim to explore these challenges and propose techniques to measure and prevent memorisation. Additionally, we suggest methods to compress models so that they can run locally on consumer hardware.
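Memorisation of the kind described in the abstract is commonly measured by prompting a model with prefixes drawn from suspected training data and checking whether it reproduces the original continuation verbatim. The sketch below illustrates that idea, assuming the Hugging Face transformers API; the model name, prefix/suffix lengths, and sample corpus are placeholders for illustration, not the author's actual setup.

```python
# Minimal sketch of an extraction-based memorisation test: prompt the model
# with a prefix from a (suspected) training sample and check whether greedy
# decoding reproduces the original suffix verbatim.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"  # placeholder; substitute the code LLM under study
PREFIX_TOKENS, SUFFIX_TOKENS = 32, 32  # split sizes are an assumption

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

def is_memorised(sample: str) -> bool:
    """True if the model completes the sample's prefix with its exact suffix."""
    ids = tokenizer(sample, return_tensors="pt").input_ids[0]
    if len(ids) < PREFIX_TOKENS + SUFFIX_TOKENS:
        return False  # sample too short to split into prefix and suffix
    prefix = ids[:PREFIX_TOKENS].unsqueeze(0)
    target = ids[PREFIX_TOKENS:PREFIX_TOKENS + SUFFIX_TOKENS]
    with torch.no_grad():
        out = model.generate(
            prefix,
            max_new_tokens=SUFFIX_TOKENS,
            do_sample=False,  # greedy decoding: the strictest verbatim check
            pad_token_id=tokenizer.eos_token_id,
        )
    generated = out[0][PREFIX_TOKENS:PREFIX_TOKENS + SUFFIX_TOKENS]
    return torch.equal(generated, target)

# Usage: estimate a memorisation rate over suspected training samples
# (placeholder corpus; a real evaluation would iterate over scraped files).
samples = ["def quicksort(arr):\n    if len(arr) <= 1:\n        return arr"]
rate = sum(is_memorised(s) for s in samples) / len(samples)
print(f"memorisation rate: {rate:.2%}")
```

A compression step, for example quantising the weights to lower precision, would be the natural complement for the local-execution goal the abstract mentions, but its details depend on the model and are omitted here.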
Tue 16 Apr (displayed time zone: Lisbon)
14:00 - 15:30 | Focus Group: AI/ML for SE (Doctoral Symposium) at Fernando Pessoa | Chair(s): Reyhaneh Jabbarvand, University of Illinois at Urbana-Champaign
14:00 | 90m | Poster | Beyond Accuracy: Evaluating Source Code Capabilities in Large Language Models for Software Engineering | Doctoral Symposium | Alejandro Velasco, William & Mary
14:00 | 90m | Poster | Towards Interpreting the Behavior of Large Language Models on Software Engineering Tasks | Doctoral Symposium | Atish Kumar Dipongkor, University of Central Florida
14:00 | 90m | Poster | Programming Language Models in Multilingual Settings | Doctoral Symposium | Jonathan Katzy, Delft University of Technology
14:00 | 90m | Poster | Beyond Accuracy and Robustness Metrics for Large Language Models for Code | Doctoral Symposium
14:00 | 90m | Poster | Towards Safe, Secure, and Usable LLMs4Code | Doctoral Symposium | Ali Al-Kaswan, Delft University of Technology, Netherlands