Advances in online Text-to-Image (T2I) generation models allow users and organizations alike to generate millions of images from text prompts. Unfortunately, recent studies have demonstrated that even elementary prompts can elicit noticeable social biases in the models’ output imagery. The representational harm caused by T2I models could further marginalize minority groups. A systematic approach is therefore required to comprehensively and continuously assess the absence of bias in T2I generators.
In this paper, we present \imagebite, a framework to systematically and thoroughly evaluate representational discrimination in T2I-generated images, seamlessly integrable into AI engineering processes. \imagebite~enables development teams to customize their test scenarios and to automatically create and run test cases based on user-defined non-discrimination requirements. We have implemented an open-source tool, available on GitHub, that supports the application of our approach.