Keynotes
AI for Code Generation to Software Engineering
Abstract.
AI-powered code generation tools such as GitHub Copilot and Cursor AI have changed software development by speeding up coding and automating repetitive tasks. However, these tools still struggle to understand the full context of real-world software projects. They often miss important concerns such as security, efficiency, and maintainability, producing code that works but is not always the best solution. To address this, we need to incorporate software engineering best practices into AI models and improve how humans and AI work together, so that AI-generated code meets the real needs of developers and businesses. In this keynote, I will discuss the challenges of AI code generation and suggest ways to make it better suited for real-world use.
Biography.
Baishakhi Ray is an Associate Professor in the Department of Computer Science at Columbia University, NY, USA. She has received the IEEE TCSE Rising Star Award and the NSF CAREER Award. Baishakhi’s research interests lie at the intersection of Software Engineering and Machine Learning. Her research has been recognized with many Distinguished Paper awards, has been published in CACM Research Highlights, and has been widely covered in trade media.
Source Code Diff Revolution
Abstract.
Despite the tremendous impact of Large Language Models on facilitating many software engineering tasks, change comprehension remains a very challenging task. Software developers spend a significant portion of their workday trying to understand and review their teammates’ code changes. Currently, most code review and change comprehension is done using textual diff tools, such as the commit diff provided by GitHub or Gerrit. Such diff tools are insufficient, especially for complex changes that move code within the same file or between different files. Abstract Syntax Tree (AST) diff tools brought several improvements that make source code changes easier to understand. However, they still have constraints and limitations that negatively affect their accuracy. In this keynote, I will demonstrate these limitations using real case studies from open-source projects, and I will show how the AST diff generated by our tool addresses them. Moreover, I will introduce a benchmark we created based on commits from the Defects4J and Refactoring Oracle datasets and present the precision and recall of state-of-the-art AST diff tools on it. Finally, I will use examples from open-source projects showing that source code diff is considerably more challenging for test code and present our recent progress on documenting and detecting test-specific refactorings. I will conclude the keynote with some interesting future research directions. Vive la révolution!
Biography.
Nikolaos Tsantalis is a Professor in the Department of Computer Science and Software Engineering at Concordia University, Montreal, Canada. His research interests include software maintenance, software evolution, software design, empirical software engineering, refactoring recommendation systems, and refactoring mining. He has developed tools such as the Design Pattern Detection tool, JDeodorant, and RefactoringMiner, which are used by many practitioners, researchers, and educators. He has been awarded three Most Influential Paper awards at SANER 2018, SANER 2019, and CASCON 2023, two ACM SIGSOFT Distinguished Paper awards at FSE 2016 and ICSE 2017, two ACM Distinguished Artifact awards at FSE 2016 and OOPSLA 2017, and four Distinguished Reviewer awards at MSR 2020, ICPC 2022, ASE 2022, and ICSME 2023. He served as Program Co-chair for ICSME 2021 and SCAM 2020. He currently serves on the Editorial Board of the IEEE Transactions on Software Engineering as an Associate Editor. Finally, he is a Senior Member of the IEEE, a Member of the ACM, and holds a license from the Association of Professional Engineers of Ontario (PEO).
Advances and Challenges in Foundation Agents: From Brain-Inspired Intelligence to Evolutionary, Collaborative, and Safe Systems
Abstract.
The advent of large language models (LLMs) has revolutionized artificial intelligence, laying the foundation for sophisticated intelligent agents capable of reasoning, perceiving, and acting across diverse domains. These agents are increasingly central to advancing AI research and applications, yet their design, evaluation, and enhancement pose intricate challenges. In this talk, we will offer a fresh perspective by framing intelligent agents through a modular and cognitive science-inspired lens, bridging AI design with insights from neuroscience to propose a unified framework for understanding their core functionalities and future potential. First, we explore the modular design of intelligent agents and propose a framework covering cognition, perception, action, memory, and reward systems, among other components. Second, we examine self-enhancement mechanisms, uncovering how agents can autonomously refine their skills, adapt to dynamic environments, and achieve continual learning. Third, we address collaborative and evolutionary intelligent systems, highlighting how agents interact, cooperate, and evolve within multi-agent systems and societal structures. Finally, we tackle the imperative of building safe and beneficial AI, focusing on ethical alignment, robustness, and trustworthiness in real-world applications. Our talk aims to provide a holistic and interdisciplinary perspective for intelligent agent research.
Biography.
Bang Liu is an Assistant Professor in the Department of Computer Science and Operations Research (DIRO) at the University of Montreal (UdeM). He is a member of the RALI laboratory (Applied Research in Computational Linguistics) of DIRO, a member of the Institut Courtois of UdeM, an associate member of Mila – Quebec Artificial Intelligence Institute, and a Canada CIFAR AI Chair. His research interests primarily lie in the areas of natural language processing, multimodal & embodied learning, theory and techniques for AGI (e.g., understanding and improving large language models, intelligent agents), and AI for science (e.g., material science, health).
LLMs: Facts, Lies, Reasoning and Software Agents in the Real World
Abstract.
Over the past decade, AI has gone from a specialized field of research to a full-scale technological revolution. I will begin by examining the key steps involved in the creation of modern AI systems based on Transformers and Large Language Models (LLMs). I will then discuss some outstanding weaknesses, which are being rapidly addressed through research and are unlocking a wave of new applications, including our recent work on reducing hallucinations and increasing factuality. I will also cover work on examining and improving reasoning, leveraging computation for reasoning at test time, and agents that mix code generation and execution with neural reasoning. Finally, I will discuss the growing capabilities of LLMs as virtual agents living in information worlds, as well as Transformer-based agents that interact with the real world, controlling robots and autonomous vehicles.
Biography.
Christopher Pal is a Full Professor at Polytechnique Montréal, a Canada CIFAR AI Chair, and an adjunct professor at Université de Montréal’s DIRO. He is also a Distinguished Scientist at ServiceNow Research, a founding faculty member of Mila, and associated with IVADO. With over 25 years of experience in AI, deep learning, and machine learning research, he has published extensively on LLMs, reasoning, robotics, computer vision, and generative modeling.