The Promise and Challenges of Using LLMs to Accelerate the Screening Process of Systematic Reviews
Context: Systematic reviews (SRs) are a popular research method in software engineering (SE). However, conducting an SR takes an average of 67 weeks, so automating any step of the SR process could reduce the effort SRs require. Objective: Our objective is to investigate the extent to which Large Language Models (LLMs) can accelerate title-abstract screening by (1) simplifying abstracts for human screeners and (2) automating title-abstract screening entirely. Method: We performed an experiment in which human screeners conducted title-abstract screening for 20 papers from a prior SR, using both the original and simplified abstracts. We then reproduced the experiment by instructing the GPT-3.5 and GPT-4 LLMs to perform the same screening tasks. We also studied whether different prompting techniques (Zero-shot (ZS), One-shot (OS), Few-shot (FS), and Few-shot with Chain-of-Thought (FS-CoT)) improve the screening performance of the LLMs. Lastly, we studied whether redesigning the prompt used in the LLM reproduction of title-abstract screening leads to improved screening performance. Results: Text simplification did not improve the screeners' screening performance, but it did reduce screening time. Screeners' scientific literacy skills and researcher status predict screening performance. Some LLM and prompt combinations perform as well as human screeners in the screening tasks. Our results indicate that the more recent LLM (GPT-4) outperforms its predecessor (GPT-3.5), and that Few-shot and One-shot prompting outperform Zero-shot prompting. Conclusion: Using LLMs for text simplification in the screening process does not significantly improve human performance. Using LLMs to automate title-abstract screening seems promising, but current LLMs are not significantly more accurate than human screeners. More research is needed before LLMs can be recommended for the screening process of SRs. We recommend that future SR studies publish replication packages with screening data to enable more conclusive experimentation with LLM-based screening.
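
As a concrete illustration of the automated screening setup, the sketch below shows how zero-shot and few-shot title-abstract screening could be implemented against the OpenAI chat API. The inclusion criteria, the few-shot examples, and the `screen` helper are hypothetical placeholders, not the study's actual prompts or code.

```python
# Illustrative sketch of LLM title-abstract screening (openai>=1.0).
# CRITERIA, FEW_SHOT_EXAMPLES, and screen() are hypothetical, not from the paper.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical inclusion criteria; a real study would take these from its SR protocol.
CRITERIA = "Include papers that empirically evaluate automation of any SR step."

# Labeled examples prepended for one-/few-shot prompting (placeholder content).
FEW_SHOT_EXAMPLES = [
    {"role": "user", "content": "Title: ...\nAbstract: ..."},
    {"role": "assistant", "content": "EXCLUDE"},
]

def screen(title: str, abstract: str, model: str = "gpt-4",
           few_shot: bool = False) -> str:
    """Ask the LLM for an INCLUDE/EXCLUDE decision on one paper."""
    messages = [{
        "role": "system",
        "content": (
            "You screen papers for a systematic review.\n"
            f"Inclusion criteria: {CRITERIA}\n"
            "Answer with exactly one word: INCLUDE or EXCLUDE."
        ),
    }]
    if few_shot:  # few-shot variant: show labeled examples before the task
        messages += FEW_SHOT_EXAMPLES
    messages.append({"role": "user",
                     "content": f"Title: {title}\nAbstract: {abstract}"})
    resp = client.chat.completions.create(
        model=model, messages=messages, temperature=0
    )
    return resp.choices[0].message.content.strip()
```

A chain-of-thought variant would additionally ask the model to state its reasoning before the final one-word decision; comparing such prompt designs across GPT-3.5 and GPT-4 is the kind of experiment the study describes.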