DILLEMA: Diffusion and Large Language Models for Multi-Modal Augmentation
Ensuring the robustness of deep learning models requires comprehensive and diverse testing. Existing approaches, often based on simple data augmentation techniques or generative adversarial networks, are limited in their ability to produce realistic and varied test cases. To address these limitations, we present a novel framework for testing vision neural networks that leverages Large Language Models and control-conditioned Diffusion Models to generate synthetic, high-fidelity test cases. Our approach begins by translating images into detailed textual descriptions using a captioning model, allowing the language model to identify modifiable aspects of the image and generate counterfactual descriptions. These descriptions are then used to produce new test images through a text-to-image diffusion process that preserves spatial consistency and maintains the critical elements of the scene. We demonstrate the effectiveness of our method on two datasets: ImageNet1K for image classification and SHIFT for semantic segmentation in autonomous driving. The results show that our approach generates challenging test cases that expose model weaknesses, and that targeted retraining on these cases improves model robustness. To validate the generated images, we conducted a human-based assessment on Amazon Mechanical Turk: responses from 2,500 participants confirmed, with high inter-rater agreement, that our approach produces valid and realistic images.
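The pipeline sketched in the abstract (caption the image, let an LLM propose a counterfactual description, then regenerate the image with a spatially conditioned diffusion model) can be approximated with off-the-shelf components. The sketch below is a minimal illustration under stated assumptions, not the authors' implementation: the choice of BLIP for captioning, a Canny-conditioned ControlNet with Stable Diffusion 1.5 for generation, and the placeholder `counterfactual` function standing in for the LLM step are all assumptions made for illustration.

```python
# Minimal sketch of a caption -> counterfactual -> controlled-diffusion pipeline.
# Model choices (BLIP, Canny ControlNet, Stable Diffusion 1.5) are illustrative
# assumptions, not the configuration reported in the paper.
import cv2
import numpy as np
import torch
from PIL import Image
from transformers import BlipProcessor, BlipForConditionalGeneration
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

device = "cuda" if torch.cuda.is_available() else "cpu"

# 1) Image -> detailed textual description (captioning model).
blip_processor = BlipProcessor.from_pretrained("Salesforce/blip-image-captioning-base")
blip = BlipForConditionalGeneration.from_pretrained(
    "Salesforce/blip-image-captioning-base"
).to(device)

def caption(image: Image.Image) -> str:
    inputs = blip_processor(image, return_tensors="pt").to(device)
    out = blip.generate(**inputs, max_new_tokens=40)
    return blip_processor.decode(out[0], skip_special_tokens=True)

# 2) An LLM would identify modifiable aspects of the description and produce a
#    counterfactual one. The prompt and model used by DILLEMA are not given
#    here, so this hypothetical placeholder simply appends a new condition.
def counterfactual(description: str) -> str:
    return description + ", at night under heavy rain"

# 3) Counterfactual description -> new test image, spatially conditioned on
#    edges from the original so the scene layout is preserved.
controlnet = ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny")
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet
).to(device)

def generate_test_case(image: Image.Image) -> Image.Image:
    desc = counterfactual(caption(image))
    edges = cv2.Canny(np.array(image), 100, 200)
    control = Image.fromarray(np.stack([edges] * 3, axis=-1))
    return pipe(desc, image=control, num_inference_steps=30).images[0]

if __name__ == "__main__":
    original = Image.open("example.jpg").convert("RGB")  # hypothetical input file
    generate_test_case(original).save("counterfactual_example.jpg")
```

The generated image can then be paired with the original label (classification) or segmentation mask, since the control signal keeps the scene geometry fixed while the description changes only the modifiable aspects.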
Sat 3 May (displayed time zone: Eastern Time, US & Canada)

14:00 - 15:30

14:00 (30m) Talk: DANDI: Diffusion as Normative Distribution for Deep Neural Network Input. DeepTest. Pre-print.
14:30 (30m) Talk: Robust Testing for Deep Learning using Human Label Noise. Gordon Lim (University of Michigan), Stefan Larson (Vanderbilt University), Kevin Leach (Vanderbilt University). DeepTest. Pre-print.
15:00 (30m) Talk: DILLEMA: Diffusion and Large Language Models for Multi-Modal Augmentation. Luciano Baresi, Davide Yi Xian Hu, Muhammad Irfan Mas'Udi, Giovanni Quattrocchi (Politecnico di Milano). DeepTest.