The integration of Large Language Models (LLMs) into software development workflows has transformed automated programming but introduced significant security challenges. LLMs frequently reproduce insecure patterns present in their training data, generating code vulnerable to threats such as SQL injection, cross-site scripting, and buffer overflows. Existing mitigation strategies, including static and dynamic analysis tools and prompt engineering, are reactive rather than preventive. Recent advances in model training, such as fine-tuning and adversarial training, offer promising avenues for improving the security of LLM-generated code. This paper examines these methodologies and proposes an evaluation framework for embedding security directly into AI-assisted programming. By integrating security into both model training and assessment, we aim to establish a robust standard for secure AI-driven programming.
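To make the kind of insecure training-data pattern at issue concrete, the sketch below contrasts a SQL-injection-prone query, of the sort LLMs commonly reproduce, with its parameterized counterpart. The example is illustrative only and is not taken from the paper; the table and function names (users, find_user_insecure, find_user_secure) are hypothetical.

```python
import sqlite3

def find_user_insecure(conn: sqlite3.Connection, username: str):
    # Vulnerable pattern often seen in training corpora: user input is
    # concatenated directly into the SQL string, enabling SQL injection
    # (e.g. username = "' OR '1'='1").
    query = "SELECT id, username FROM users WHERE username = '" + username + "'"
    return conn.execute(query).fetchall()

def find_user_secure(conn: sqlite3.Connection, username: str):
    # Safer equivalent: a parameterized query lets the driver handle
    # escaping, so the input is treated as data rather than executable SQL.
    query = "SELECT id, username FROM users WHERE username = ?"
    return conn.execute(query, (username,)).fetchall()
```

A preventive approach, as argued above, would aim for the model to emit the second form by default rather than relying on downstream analysis tools to flag the first.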