TechDebt 2025
Sun 27 - Mon 28 April 2025, Ottawa, Ontario, Canada
co-located with ICSE 2025

Leveraging Research to Inform Structural Technical Debt Management

Ciera Jaspan, Collin Green

Keynote Abstract: The “technical debt” metaphor is centered on recognizing tradeoffs in service of practical decision making: it’s a tool for reasoning about what to do and when to do it. However, it is often treated merely as an impediment to engineering teams. Indeed, technical debt continues to be the top hindrance to developer productivity at Google, and we strongly suspect we aren’t alone in this. Still, we feel it’s important to take a broad view of technical debt, and we suspect such a view is necessary to arrive at a complete (and likely multifaceted) solution to the effective management of technical debt. In this talk, we’ll discuss what Google has done to break down the problem of technical debt into smaller pieces and what we’ve learned from our efforts. We’ve found that relying heavily on subjective measures yields a more tractable approach to measuring and managing technical debt than using purely objective measurements. We investigated the differences between teams that are and are not successful in managing technical debt, and found that successful teams have set up incentives, training, processes, and, most importantly, a culture that surrounds technical debt and embeds its management into their day-to-day work. This requires practitioners to take more systematic and informed approaches to their technical debt management; it also requires more research to inform those approaches, and evidence to help practitioners rationalize and guide investments in technical debt management. We close with a call to action for the research community: to fill in the gaps regarding technical debt that we can’t yet objectively identify, to reduce the remediation cost of technical debt, to improve further on technical debt management techniques, and to continue to evolve all of these as the state of software development changes.

Ciera's Bio: Ciera Jaspan is the tech lead manager of the Engineering Productivity Research team within Core Developer at Google, where she uses a data-driven, mixed-methods approach to drive tool, process, and culture decisions made by Google leadership. The team’s infrastructure, metrics, and research results are used to motivate changes to Google’s developer tools that will increase productivity and then to measure the impact of these changes on developer productivity across Google. Dr. Jaspan previously worked on Tricorder, Google’s static analysis platform. She received her BS degree in software engineering from Cal Poly and her PhD from Carnegie Mellon, where she worked with Jonathan Aldrich on cost-effective static analysis and software framework design. Dr. Jaspan is the co-editor of the Developer Productivity for Humans column in IEEE Software magazine. Contact her at ciera@google.com.
Collin's Bio: Collin Green is the research lead and manager of the Engineering Productivity Research team within Core Developer at Google. His research focuses on applying combined quantitative and qualitative behavioral and social science research methods to understand developer experience and engineering productivity. In prior roles, Dr. Green studied the design and usability of software tools for technical users in medicine and aerospace, and the impacts those tools have on productivity. Dr. Green received his BS degree in psychology from the University of Oregon and his PhD in psychology from the University of California, Los Angeles. Dr. Green is co-editor of the Developer Productivity for Humans column in IEEE Software magazine. Contact him at colling@google.com.

Towards an Interpretable Science of Deep Learning for Engineering Quality Software: A Causal Inference View

Denys Poshyvanyk

Keynote Abstract: Neural Code Models (NCMs) are rapidly progressing from research prototypes to commercial developer tools. As such, understanding the capabilities and limitations of such models is becoming critical. However, the abilities of these models are typically measured using automated metrics that often reveal only a portion of their real-world performance. While the performance of NCMs generally appears promising, much remains unknown about how such models arrive at decisions or whether practitioners trust NCMs’ outcomes. In this talk, I will introduce doCode, a post hoc interpretability framework specific to NCMs that can explain model predictions. doCode is based upon causal inference to enable programming language-oriented explanations. While the theoretical underpinnings of doCode are extensible to exploring different model properties, we provide a concrete instantiation that aims to mitigate the impact of spurious correlations by grounding explanations of model behavior in properties of programming languages. doCode can generate causal explanations based on Abstract Syntax Tree information and software engineering-based interventions. To demonstrate the practical benefit of doCode, I will present empirical results of using doCode for detecting confounding bias in NCMs, as well as for detecting code smells in the code generated by state-of-the-art NCMs such as CodeLlama and Mistral.

Denys's Bio: Denys Poshyvanyk is a Chancellor Professor and a Graduate Director in the Computer Science Department at William & Mary. He currently serves as a Guest Editor-in-Chief of the AI-SE Continuous Special Section at the ACM Transactions on Software Engineering and Methodology (TOSEM) and a Program Co-Chair for FSE’25 and FORGE’25. He is a recipient of multiple ACM SIGSOFT Distinguished Paper Awards, most influential paper awards, and the NSF CAREER award (2013). He is an IEEE Fellow and an ACM Distinguished Member.