CodeFuse-13B: A Pretrained Multi-lingual Code Large Language Model
Code Large Language Models (Code LLMs) have gained significant attention in the industry due to their wide applications across the full lifecycle of software engineering. However, how well existing models understand non-English inputs for multi-lingual code-related tasks remains far from well studied. This paper introduces CodeFuse-13B, an open-sourced pre-trained code LLM. It is specifically designed for code-related tasks with both English and Chinese prompts and supports over 40 programming languages. CodeFuse achieves its effectiveness through a high-quality pre-training dataset that is carefully filtered by program analyzers and optimized during the training process. Extensive experiments are conducted using real-world usage scenarios, the industry-standard benchmark HumanEval-x, and CodeFuseEval, a benchmark specially designed for Chinese prompts. To assess the effectiveness of CodeFuse, we also collected human feedback from AntGroup's software development process, where CodeFuse has been successfully deployed and is used daily by thousands of developers. The results demonstrate that CodeFuse-13B achieves a HumanEval pass@1 score of 37.10%, positioning it as one of the top multi-lingual code LLMs of similar parameter size. In practical scenarios such as code generation, code translation, code commenting, and test-case generation, CodeFuse outperforms other models when given Chinese prompts.
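The abstract attributes much of CodeFuse's quality to a pre-training corpus "carefully filtered by program analyzers". The paper's actual filtering pipeline is not described on this page; as a minimal, hypothetical sketch of the simplest such analyzer-based filter, the snippet below keeps only Python samples that parse into a valid AST. The function name, length threshold, and toy corpus are all illustrative assumptions, not details from the paper.

```python
import ast

# Hypothetical sketch, NOT the paper's pipeline: keep a Python training
# sample only if it is non-trivial and syntactically valid.
def keep_python_sample(source: str, min_chars: int = 32) -> bool:
    if len(source.strip()) < min_chars:
        return False
    try:
        ast.parse(source)  # a minimal "program analyzer": the CPython parser
    except SyntaxError:
        return False
    return True

corpus = [
    "def add(a, b):\n    return a + b  # parses, kept\n",
    "def broken(:\n    pass  # syntax error, dropped",
]
filtered = [s for s in corpus if keep_python_sample(s, min_chars=10)]
assert len(filtered) == 1  # the unparsable sample is removed
```

A real pipeline would layer stronger analyses on top of parsing (deduplication, license checks, lint scores), but the shape is the same: each analyzer is a predicate over a candidate sample.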
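The headline result is a HumanEval pass@1 of 37.10%. For readers unfamiliar with the metric, pass@k is conventionally computed with the unbiased estimator introduced alongside HumanEval (Chen et al., 2021): generate n samples per problem, count the c that pass the unit tests, and estimate pass@k = 1 - C(n-c, k)/C(n, k). A minimal sketch follows; the function name and example counts are illustrative.

```python
import math

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: 1 - C(n-c, k) / C(n, k),
    computed in a numerically stable product form (Chen et al., 2021)."""
    if n - c < k:
        return 1.0  # every size-k draw contains at least one passing sample
    return 1.0 - math.prod(1.0 - k / i for i in range(n - c + 1, n + 1))

# Example: if 37 of 100 samples for a problem pass, pass@1 estimates to 0.37;
# the benchmark score averages this estimate over all 164 HumanEval problems.
assert abs(pass_at_k(100, 37, 1) - 0.37) < 1e-9
```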
Fri 19 Apr (times displayed in the Lisbon time zone)
16:00 - 17:30 | LLM, NN and other AI technologies 7 | Software Engineering in Society / Software Engineering in Practice / Research Track / New Ideas and Emerging Results | Grande Auditório | Chair(s): Vincent J. Hellendoorn (Carnegie Mellon University)
16:00 (15 min) Talk | Predicting Performance and Accuracy of Mixed-Precision Programs for Precision Tuning | Research Track
16:15 (15 min) Talk | A Synthesis of Green Architectural Tactics for ML-Enabled Systems | Software Engineering in Society
Heli Järvenpää (Vrije Universiteit Amsterdam), Patricia Lago (Vrije Universiteit Amsterdam), Justus Bogner (Vrije Universiteit Amsterdam), Grace Lewis (Carnegie Mellon Software Engineering Institute), Henry Muccini (University of L'Aquila, Italy), Ipek Ozkaya (Carnegie Mellon University). Pre-print
16:30 (15 min) Talk | Greening Large Language Models of Code | Software Engineering in Society
Jieke Shi (Singapore Management University), Zhou Yang (Singapore Management University), Hong Jin Kang (UCLA), Bowen Xu (North Carolina State University), Junda He (Singapore Management University), David Lo (Singapore Management University). Pre-print, Media Attached
16:45 (15 min) Talk | Lessons from Building CodeBuddy: A Contextualized AI Coding Assistant | Software Engineering in Practice
Gustavo Pinto (Federal University of Pará (UFPA) and Zup Innovation), Cleidson de Souza (Federal University of Pará, Belém), João Batista Cordeiro Neto (Federal University of Santa Catarina and Zup Innovation), Alberto de Souza (Zup Innovation), Tarcísio Gotto (Zup Innovation), Edward Monteiro (StackSpot)
17:00 (15 min) Talk | CodeFuse-13B: A Pretrained Multi-lingual Code Large Language Model | Software Engineering in Practice
17:15 (7 min) Talk | Breaking the Silence: the Threats of Using LLMs in Software Engineering | New Ideas and Emerging Results
June Sallou (Delft University of Technology), Thomas Durieux (TU Delft), Annibale Panichella (Delft University of Technology). Pre-print