VGX: Large-Scale Sample Generation for Boosting Learning-Based Software Vulnerability Analyses
This artifact contains functional and reusable versions of VGX, the baseline techniques for comparison, the datasets for evaluation, and the downstream techniques to improve. As the tool setup is complex, we have prepared a Docker image on Zenodo [1] that already contains the necessary components for executing the tools, along with prompts to help users compare against the respective experiment results in the paper [2]. As our experiments involve a large amount of data, it is recommended that the host machine have at least 200GB of hard disk space, 32GB of memory, and NVIDIA GPUs that support CUDA 11.1.
The experiments presented in this artifact can be divided into four parts. The first part is the evaluation of VGX and the ablation study, presented in Tables 3 and 4 of our paper. The second part evaluates the baseline techniques for comparison, presented in Table 3 of our paper. The third part evaluates whether the realistic vulnerability samples generated by VGX indeed improve deep learning-based vulnerability detection, localization, and repair tools, presented in Tables 5, 6, and 7 of our paper. The fourth part is a demo dataset and the respective processing code demonstrating how VGX can be reused. In this part, users can follow the provided scripts to feed new samples into VGX and generate vulnerability samples.
Training the VGX models takes a considerable amount of time (e.g., several days) and hardware resources (e.g., >=32GB of CPU memory and >=24GB of GPU memory). Thus, to make it easier for users to reproduce the experiments, we provide the trained models (both deep learning-based and pattern-based) needed to reproduce VGX's vulnerability generation experiments. To allow researchers to run VGX as a reusable tool, we also provide a demo dataset that shows how to apply VGX to new data.
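As a minimal illustration of the input preparation involved in the reuse workflow, the sketch below writes new C function samples into a JSON-lines file that could then be fed to the provided demo scripts. The file name ("demo_input.jsonl") and field names ("id", "code") are assumptions for illustration only; the actual input format expected by VGX is defined by the demo scripts shipped with the artifact.

# Hypothetical sketch of preparing new code samples for the VGX demo pipeline.
# The field names and output file name are assumptions; consult the demo
# scripts in the artifact for the actual input format VGX expects.
import json

# New (non-vulnerable) C functions for VGX to inject vulnerabilities into.
samples = [
    {"id": 0, "code": "int copy_buf(char *dst, const char *src, size_t n) { memcpy(dst, src, n); return 0; }"},
    {"id": 1, "code": "void greet(char *name) { char buf[32]; strncpy(buf, name, sizeof(buf) - 1); buf[31] = 0; }"},
]

# Write one JSON object per line, a common format for code corpora.
with open("demo_input.jsonl", "w") as f:
    for s in samples:
        f.write(json.dumps(s) + "\n")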
In this artifact evaluation, we apply for the “Available”, “Functional”, and “Reusable” badges: our code and data are publicly available on Zenodo, and our artifact is packaged as Docker/Singularity images so that users can reproduce our experiments easily. We also provide a demo dataset and the respective scripts for reusing VGX on other datasets.
Using the artifact only requires reviewers to be familiar with common Linux commands, Docker containers, and Singularity.
[1] https://zenodo.org/record/10443177
[2] https://eecs.wsu.edu/~ynong/vgx-icse24.pdf