ESEIW 2024
Sun 20 - Fri 25 October 2024 Barcelona, Spain

In recent years, Prompt Learning, based on pre-training, prompting, and prediction, has achieved significant success in natural language processing (NLP). Current issue-commit link recovery (ILR) methods convert ILR into a classification task using pre-trained language models (PLMs) and dedicated neural networks. However, due to inconsistencies between the ILR task and PLMs, these methods do not fully leverage the semantic information in PLMs. To mitigate the above problem, we make the first trial of this new paradigm and propose a Multi-template prompt learning method with adversarial training for issue-commit link recovery (PromptLink), which transforms the ILR task into a cloze task through templates. Specifically, the Multi-template design of PromptLink enhances generalisation capability by integrating various templates, and adversarial training is adopted to mitigate model overfitting. Experiments are conducted on six open-source projects and comprehensively evaluated across six common measures. The results show that PromptLink achieves an average F1 of 96.10%, Precision of 96.49%, Recall of 95.92%, MCC of 94.04%, AUC of 96.05%, and ACC of 98.15%, significantly outperforming existing state-of-the-art methods on all measures. Overall, PromptLink not only enhances performance and generalisation but also offers new ideas and methods for future research. The source code of PromptLink is available at https://figshare.com/s/6130d42ff464c579cdec.
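To illustrate the core idea of recasting ILR as a cloze task, the following is a minimal, hypothetical sketch of wrapping an issue/commit pair in a cloze-style prompt template. The function name, template wording, and verbalizer mapping are illustrative assumptions, not taken from the PromptLink implementation; a masked language model would fill in the `[MASK]` slot, and the predicted word would be mapped back to a link/no-link label.

```python
# Hypothetical cloze-template sketch for issue-commit link recovery.
# All names here (build_prompt, LABEL_WORDS) are illustrative, not
# drawn from the PromptLink source code.

# Verbalizer: maps task labels to the words a masked LM predicts.
LABEL_WORDS = {"linked": "yes", "unlinked": "no"}

def build_prompt(issue_text: str, commit_msg: str) -> str:
    """Wrap an issue/commit pair in a cloze template with a [MASK] slot."""
    return (f"Issue: {issue_text} Commit: {commit_msg} "
            f"Are they linked? [MASK].")

prompt = build_prompt("Fix NPE in parser", "parser: guard against null input")
print(prompt)
# A PLM would predict the word at [MASK]; "yes"/"no" is then mapped
# back to a linked/unlinked decision via LABEL_WORDS.
```

A multi-template variant would define several such templates and aggregate the model's predictions across them, which is the generalisation mechanism the abstract describes.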