Constraint-Guided Unit Test Generation for Machine Learning Libraries
This program is tentative and subject to change.
Machine learning (ML) libraries such as PyTorch and TensorFlow are essential for a wide range of modern applications, and ensuring their correctness through testing is crucial. However, ML APIs often impose strict input constraints involving complex data structures such as tensors. Automated test generation tools such as Pynguin are unaware of these constraints and often create invalid inputs, which leads to early test failures and limited code coverage. Prior work has investigated extracting such constraints from official API documentation. In this paper, we present PynguinML, an approach that extends the Pynguin test generator to leverage these constraints to generate valid inputs for ML APIs, enabling more thorough testing and higher code coverage. Our evaluation compares PynguinML against Pynguin on 165 modules from PyTorch and TensorFlow. The results show that PynguinML significantly improves test effectiveness, achieving up to 63.9% higher code coverage.
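To illustrate the core idea of constraint-guided input generation, here is a minimal Python sketch. The constraint schema, the generate_tensor helper, and the choice of torch.nn.functional.conv2d as the API under test are all illustrative assumptions for this sketch; they are not PynguinML's actual constraint format or implementation.

```python
import random
import torch

# Hypothetical constraint for the "input" parameter of
# torch.nn.functional.conv2d, as it might be extracted from the API
# documentation. The schema (keys "dtype" and "dim_ranges") is an
# assumption made for this sketch, not PynguinML's actual format.
INPUT_CONSTRAINT = {
    "dtype": torch.float32,
    "dim_ranges": [(1, 4), (1, 3), (8, 32), (8, 32)],  # (N, C, H, W) bounds
}

def generate_tensor(constraint):
    """Sample a random tensor whose shape and dtype satisfy the constraint."""
    shape = [random.randint(lo, hi) for lo, hi in constraint["dim_ranges"]]
    return torch.rand(*shape, dtype=constraint["dtype"])

# An unconstrained generator might pass an int or a 2-D tensor and fail
# immediately; a constraint-aware generator yields a valid 4-D float tensor,
# so the call below actually exercises conv2d's internal logic.
x = generate_tensor(INPUT_CONSTRAINT)
w = torch.rand(6, x.shape[1], 3, 3)  # weight matching the input's channels
y = torch.nn.functional.conv2d(x, w)
print(tuple(x.shape), "->", tuple(y.shape))
```

In this sketch, inputs that would previously trigger an immediate type or shape error are replaced by values drawn from the documented valid domain, which is what allows test generation to reach deeper code paths and achieve higher coverage.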
Sun 16 Nov (displayed time zone: Seoul)
08:30 - 10:00

08:30 | 10m Talk | Opening Keynote | Shin Hong (Chungbuk National University)
08:40 | 20m Talk | Search-based Hyperparameter Tuning for Python Unit Test Generation (Research Papers) | Pre-print
09:00 | 20m Talk | Constraint-Guided Unit Test Generation for Machine Learning Libraries (Research Papers) | Lukas Krodinger (University of Passau), Altin Hajdari (University of Passau), Stephan Lukasczyk (JetBrains Research), Gordon Fraser (University of Passau) | Pre-print
09:20 | 20m Talk | LLM-Guided Fuzzing for Pathological Input Generation (Research Papers)
09:40 | 20m Talk | The Pursuit of Diversity: Multi-Objective Testing of Deep Reinforcement Learning Agents (Research Papers) | Antony Bartlett (TU Delft, The Netherlands), Cynthia C. S. Liem (Delft University of Technology), Annibale Panichella (Delft University of Technology)