Nüwa: Enhancing MLIR Fuzzing with LLM-Driven Generation and Adaptive Mutation
MLIR, a modular compiler framework, evolves quickly, with regular updates expanding its dialects and operations across LLVM versions and downstream projects. This rapid evolution reduces the effectiveness of traditional fuzzing tools, which test only a small portion of dialects, require extensive manual work (e.g., nearly ten thousand lines of C++ code), and cannot keep pace with MLIR's update speed. To address these challenges, we propose Nüwa, the first LLM-based approach for MLIR fuzzing. Nüwa employs a two-phase strategy: it first generates valid operations by encoding constraints into LLM prompts, then synthesizes multi-operation test cases by learning inter-operation dependencies. To enhance operation coverage, it incorporates high-coverage cases from MLIR's test suite and uses LLM-driven mutations to boost diversity. A self-improvement mechanism refines the prompts using feedback from high-quality test cases, improving the LLMs' understanding of MLIR's complex semantics. Nüwa demonstrates that the generation and mutation process can be fully automated via the intrinsic capabilities of LLMs (including in-context learning), while remaining applicable to MLIR's fast evolution. Our experimental study shows that Nüwa outperforms the state-of-the-art tools MLIRSmith and MLIRod, detecting 2.9x more unique bugs and achieving 1.6x greater code coverage. To date, Nüwa has identified 55 bugs in the MLIR framework, of which 18 have been confirmed or fixed.