Tim Menzies (Ph.D.; ACM Fellow, IEEE Fellow, ASE Fellow) is a globally recognized leader in software engineering research, best known for his pioneering work on data-driven, explainable, and minimal AI for software systems. Over the past two decades, his contributions have redefined defect prediction, effort estimation, and multi-objective optimization, emphasizing transparency and reproducibility.
As the co-creator of the PROMISE repository, Tim helped establish modern empirical software engineering, showing that small, interpretable AI models can outperform larger, more complex ones. His research has earned over $19 million in funding from agencies such as NSF, DARPA, and NASA, as well as from companies like Microsoft and IBM.
Tim has published over 350 papers, with more than 24,000 citations, and advised 24 Ph.D. students. He is the editor-in-chief of Automated Software Engineering and an associate editor for IEEE TSE and IEEE Software. His work continues to shape the future of software engineering, focusing on creating AI tools that are not only intelligent but also fair, transparent, and trustworthy. For more information, visit timm.fyi.
Abstract: Industry can get any research it wants, just by publishing a baseline result along with the data and scripts needed to reproduce that work. For instance, the paper "Data Mining Static Code Attributes to Learn Defect Predictors" (TSE, 2007) presented such a baseline, using static code attributes from NASA projects. Those results were enthusiastically embraced by a software engineering research community hungry for data. At its peak (2016), this paper was SE's most-cited paper (per month). By 2018, twenty percent of leading TSE papers (according to Google Scholar Metrics) incorporated artifacts introduced and disseminated by this research. This talk reflects on what we should remember, and what we should forget, from that paper.
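To make the kind of baseline the abstract describes more concrete, here is a minimal sketch of a static-code-attribute defect predictor. It is illustrative only: the feature names (lines of code, McCabe complexity, Halstead effort) and the synthetic data are stand-ins for the NASA datasets used in the 2007 paper, and the learner (Naive Bayes on log-transformed attributes) is one of the simple, interpretable models that work reported on.

```python
# Hypothetical sketch of a defect-prediction baseline from static code attributes.
# Data is synthetic; the original work used NASA project datasets.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split
from sklearn.metrics import recall_score, precision_score

rng = np.random.default_rng(0)
n = 1000

# Illustrative static code attributes per module.
loc = rng.lognormal(4, 1, n)        # lines of code
mccabe = rng.lognormal(1.5, 0.8, n) # cyclomatic complexity
halstead = rng.lognormal(6, 1.2, n) # Halstead effort
X = np.column_stack([loc, mccabe, halstead])

# Synthetic labels: larger, more complex modules are more likely defective.
p_defect = 1 / (1 + np.exp(-(0.002 * loc + 0.2 * mccabe - 3)))
y = (rng.random(n) < p_defect).astype(int)

# A log transform of the attributes, paired with a simple learner such as
# Naive Bayes, is in the spirit of the 2007 baseline.
X_log = np.log1p(X)

X_tr, X_te, y_tr, y_te = train_test_split(X_log, y, test_size=0.3, random_state=0)
model = GaussianNB().fit(X_tr, y_tr)
pred = model.predict(X_te)

print("recall (pd):", round(recall_score(y_te, pred), 2))
print("precision  :", round(precision_score(y_te, pred), 2))
```

The point of such a sketch is not the particular numbers it prints, but that a few dozen lines, plus shared data and scripts, are enough to give a research community a reproducible starting point to improve on.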