Fri 2 May 2025 17:15 - 17:30 at 206 plus 208 - Human and Social for AI Chair(s): Ramiro Liscano

Large language models (LLMs) have rapidly gained popularity and are being embedded into professional applications due to their capabilities in generating human-like content. However, unquestioned reliance on their outputs and recommendations can be problematic, as LLMs can reinforce societal biases and stereotypes. This study investigates how LLMs, specifically OpenAI’s GPT-4 and Microsoft Copilot, can reinforce gender and racial stereotypes within the software engineering (SE) profession through both textual and graphical outputs. We used each LLM to generate 300 profiles, consisting of 100 gender-based and 50 gender-neutral profiles, for a recruitment scenario in SE roles. Recommendations were generated for each profile and evaluated against the job requirements for four distinct SE positions. Each LLM was asked to select the top 5 candidates and subsequently the best candidate for each role. Each LLM was also asked to generate images for the top 5 candidates, providing a dataset for analysing potential biases in both text-based selections and visual representations. Our analysis reveals that both models preferred male and Caucasian profiles, particularly for senior roles, and favoured images featuring traits such as lighter skin tones, slimmer body types, and younger appearances. These findings highlight that underlying societal biases influence the outputs of LLMs, contributing to narrow, exclusionary stereotypes that can further limit diversity and perpetuate inequities in the SE field. As LLMs are increasingly adopted within SE research and professional practices, awareness of these biases is crucial to prevent the reinforcement of discriminatory norms and to ensure that AI tools are leveraged to promote an inclusive and equitable engineering culture rather than hinder it.

Fri 2 May

Displayed time zone: Eastern Time (US & Canada)

16:00 - 17:30
Human and Social for AI
Research Track / SE in Society (SEIS) / SE In Practice (SEIP) at 206 plus 208
Chair(s): Ramiro Liscano Ontario Tech University
16:00
15m
Talk
ChatGPT Inaccuracy Mitigation during Technical Report Understanding: Are We There Yet?
Research Track
Salma Begum Tamanna University of Calgary, Canada, Gias Uddin York University, Canada, Song Wang York University, Lan Xia IBM, Canada, Longyu Zhang IBM, Canada
16:15
15m
Talk
Navigating the Testing of Evolving Deep Learning Systems: An Exploratory Interview Study
Research Track
Hanmo You Tianjin University, Zan Wang Tianjin University, Bin Lin Hangzhou Dianzi University, Junjie Chen Tianjin University
16:30
15m
Talk
An Empirical Study on Decision-Making Aspects in Responsible Software Engineering for AI (Artifact Available)
SE In Practice (SEIP)
Lekshmi Murali Rani Chalmers University of Technology and University of Gothenburg, Sweden, Faezeh Mohammadi Chalmers University of Technology and University of Gothenburg, Sweden, Robert Feldt Chalmers | University of Gothenburg, Richard Berntsson Svensson Chalmers | University of Gothenburg
Pre-print
16:45
15m
Talk
Curious, Critical Thinker, Empathetic, and Ethically Responsible: Essential Soft Skills for Data Scientists in Software Engineering
SE in Society (SEIS)
Matheus de Morais Leça University of Calgary, Ronnie de Souza Santos University of Calgary
17:00
15m
Talk
Multi-Modal LLM-based Fully-Automated Training Dataset Generation Software Platform for Mathematics Education
SE in Society (SEIS)
Minjoo Kim Sookmyung Women's University, Tae-Hyun Kim Sookmyung Women's University, Jaehyun Chung Korea University, Hyunseok Choi Korea University, Seokhyeon Min Korea University, Joon-Ho Lim Tutorus Labs, Soohyun Park Sookmyung Women's University
17:15
15m
Talk
What Does a Software Engineer Look Like? Exploring Societal Stereotypes in LLMs
SE in Society (SEIS)
Muneera Bano CSIRO's Data61, Hashini Gunatilake Monash University, Rashina Hoda Monash University