ICSME 2025
Sun 7 - Fri 12 September 2025 Auckland, New Zealand

Using Large Language Models (LLMs) to perform Natural Language Processing (NLP) tasks has become increasingly pervasive. The versatile nature of LLMs makes them applicable to a wide range of such tasks. While the performance of recent LLMs is generally outstanding, several studies have shown that they can often produce incorrect results. Automatically identifying these faulty behaviors is extremely useful for improving the effectiveness of LLMs. One obstacle is the limited availability of labelled datasets, which are needed as an oracle to determine the correctness of LLM behaviors. Metamorphic Testing (MT) is a popular testing approach that alleviates the oracle problem. At the core of MT are Metamorphic Relations (MRs), which define the relationship that must hold between the outputs of related inputs. MT can thus expose faulty behaviors without the need for explicit oracles (i.e., labelled datasets). This paper presents the most comprehensive study of MT for LLMs to date. We conducted a literature review and collected 191 MRs for NLP tasks. We implemented a representative subset of them (38 MRs) and conducted a series of experiments with four popular LLMs, running ∼550K metamorphic test cases. The results shed light on the capabilities and opportunities of MT for LLMs, as well as its limitations.
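
To illustrate the core idea of a metamorphic test for an LLM-based NLP task, the sketch below checks a label-invariance MR for sentiment classification: rephrasing an input with synonyms should not change the predicted label. This is only a minimal illustration, not an MR or code from the paper; the helper names `classify_sentiment` and `replace_synonyms` are hypothetical stand-ins for an LLM call and an input transformation.

```python
def classify_sentiment(text: str) -> str:
    """Placeholder for a prompt to an LLM that returns 'positive' or 'negative'.
    (Hypothetical helper; wire this to an LLM of your choice.)"""
    raise NotImplementedError

def replace_synonyms(text: str) -> str:
    """Placeholder transformation that produces a follow-up input,
    e.g. by swapping words for synonyms. (Hypothetical helper.)"""
    raise NotImplementedError

def metamorphic_test(source_text: str) -> bool:
    """Return True if the MR holds for this source input.

    No labelled oracle is needed: instead of checking either output against
    a ground-truth label, only the *relation* between the two outputs is
    checked. A violation signals a potentially faulty LLM behavior.
    """
    follow_up_text = replace_synonyms(source_text)
    source_label = classify_sentiment(source_text)
    follow_up_label = classify_sentiment(follow_up_text)
    return source_label == follow_up_label
```

A full MT campaign simply applies many such MRs to many source inputs and counts violations, which is how a large number of metamorphic test cases can be executed without any manual labelling.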