Sarah Nadi
New York University Abu Dhabi
When AI Gets It *Almost* Right: Lessons from AI-Assisted Software Development
Abstract: Generative AI has become a disruptive force in software development, with applications spanning a wide range of tasks. However, our recent empirical studies across various tasks reveal a consistent pattern: large language models automate substantial portions of the work, yet often produce results that are *almost* right. The remaining incorrect or incomplete work is then left to human developers, who must validate, repair, and reason about AI-generated changes.
In this talk, I will explore two intertwined questions that arise from this reality. First, how can AI-generated results be integrated and presented in ways that effectively support developers in reasoning about and fixing incomplete or incorrect work? Second, and more critically, what happens when increasing reliance on AI erodes the very skills developers need to perform this remaining work, a concern that becomes particularly visible in educational settings?
BIO: Sarah Nadi is an Associate Professor in the Computer Science Program at New York University Abu Dhabi (NYUAD). Before joining NYUAD, she was an Associate Professor at the University of Alberta, Canada, where she held a Tier II Canada Research Chair in Software Reuse and led the Software Maintenance and Reuse (SMR) Lab. Sarah obtained her Master’s and PhD degrees from the University of Waterloo, Canada, and was a postdoctoral researcher at the Technische Universität Darmstadt in Germany.
At NYUAD, Sarah co-directs the SANAD Lab, which aims to enhance how software engineers develop and maintain software systems by providing tools and data-driven insights that support real-world practice. Her recent work focuses on AI-assisted software development and maintenance, examining how large language models support tasks such as library migration and API misuse detection, and how issues of correctness, validation, and developer learning arise in practice.
Emerson Murphy-Hill
Microsoft Research
Shared keynote with MSR
The Role of an Empirical Software Engineering Researcher in the Age of Generative AI
Abstract: Generative AI has brought significant upheaval and uncertainty to how software is developed, and those of us who study software developers face substantial new challenges and questions of our own. When AI-based developer tools evolve every few months, how can we produce research that endures? What can human-centered academics hope to contribute when the big players are comparatively token-rich? And why write papers at all when generative AI can both synthesize and peer review them?
As someone who has worked on both human-centered and AI-centered problems, in industry and academia alike, I'll explore these urgent questions facing our community.
BIO: Dr. Emerson Murphy-Hill is a research scientist at Microsoft, where he has studied developer experience and helped build Excel Agent. Previously, he was a research scientist in Engineering Productivity Research at Google, leading efforts to improve diversity and inclusion for software developers. Before Google, he was an Associate Professor at North Carolina State University, where he led the Developer Liberation Front. His research spans human-computer interaction and software engineering and has earned six ACM SIGSOFT Distinguished Paper Awards, an NSF CAREER Award, a VL/HCC Best Paper Award, and a Microsoft Software Engineering Innovation Foundation Award.