Fri 8 Dec 2023 15:00 - 15:30 at Foothill G - Language Models Chair(s): Csaba Nagy

CONTEXT: Recent studies on story point estimation used deep learning-based language models. These language models were pre-trained on general corpora. However, language models further pre-trained on task-specific corpora might be more effective. OBJECTIVE: To examine the effect of further pre-trained language models on the predictive performance of story point estimation. METHOD: Two types of further pre-trained language models, namely domain-specific and repository-specific models, were compared with off-the-shelf models and Deep-SE. Estimation performance was evaluated on data from 16 projects. RESULTS: The effectiveness of the domain-specific and repository-specific models was limited, though they outperformed the base models from which they were further pre-trained. CONCLUSION: The effect of further pre-training was small. Choosing large off-the-shelf models might be preferable.
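Further pre-training a BERT-style model on a domain- or repository-specific corpus typically means continuing masked-language-model (MLM) training on that corpus. The core masking step can be sketched as follows; this is a simplified pure-Python illustration (it omits BERT's 80/10/10 mask/random/keep refinement), not the paper's implementation, and the `MASK_ID` value assumes a bert-base-uncased-style vocabulary.

```python
import random

MASK_ID = 103  # [MASK] token id in bert-base-uncased's vocabulary (assumption for illustration)
IGNORE_INDEX = -100  # label value ignored by the MLM loss

def mask_tokens(token_ids, mask_prob=0.15, rng=None):
    """Randomly replace a fraction of token ids with the mask id.

    Returns (masked_ids, labels): labels hold the original id at masked
    positions and IGNORE_INDEX everywhere else, so the loss is computed
    only on the masked positions.
    """
    rng = rng or random.Random(0)
    masked, labels = [], []
    for tid in token_ids:
        if rng.random() < mask_prob:
            masked.append(MASK_ID)
            labels.append(tid)        # model must predict the original token
        else:
            masked.append(tid)
            labels.append(IGNORE_INDEX)  # position excluded from the loss
    return masked, labels

# Example: mask a toy sequence of token ids.
ids = [2023, 2003, 1037, 3231, 6251]
masked, labels = mask_tokens(ids, mask_prob=0.15, rng=random.Random(1))
```

In practice this step is handled by a data collator in an MLM training loop; the sketch only shows how inputs and labels are paired so that further pre-training adapts the model to the new corpus.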


Displayed time zone: Pacific Time (US & Canada)

14:00 - 15:30
Language Models (PROMISE 2023) at Foothill G
Chair(s): Csaba Nagy Software Institute - USI, Lugano
14:00
30m
Paper
The FormAI Dataset: Generative AI in Software Security Through the Lens of Formal Verification
PROMISE 2023
Norbert Tihanyi (Technology Innovation Institute), Tamas Bisztray (University of Oslo), Ridhi Jain (Technology Innovation Institute (TII), Abu Dhabi, UAE), Mohamed Amine Ferrag (Technology Innovation Institute), Lucas C. Cordeiro (The University of Manchester, UK), Vasileios Mavroeidis (University of Oslo)
14:30
30m
Paper
Comparing Word-based and AST-based Models for Design Pattern Recognition
PROMISE 2023
Sivajeet Chand (Dept. of CSE, Chalmers | University of Gothenburg, Sweden), Sushant Kumar Pandey (Chalmers and University of Gothenburg), Jennifer Horkoff (Chalmers and the University of Gothenburg), Miroslaw Staron (University of Gothenburg), Miroslaw Ochodek (Poznan University of Technology), Darko Durisic (R&D, Volvo Cars, Gothenburg, Sweden)
15:00
30m
Paper
On Effectiveness of Further Pre-training on BERT models for Story Point Estimation
PROMISE 2023
Sousuke Amasaki (Okayama Prefectural University)