CASCON 2025
Mon 10 - Thu 13 November 2025
Monday • 10 Nov
Foutse Khomh
10:30 – 12:00
Keynote Room

Foutse Khomh

Toward Trustworthy AI Coding Assistants

Polytechnique Montréal
Large Language Models (LLMs) trained on code are increasingly being integrated into software engineering workflows, assisting with tasks such as code synthesis, bug fixing, and refactoring. Despite their impressive capabilities, critical questions remain: What do these models actually learn? How do they reason about source code? Under what conditions do they fail, and why? Addressing these questions is essential to building both technical and human trust in AI-assisted programming.
In this talk, I will present findings from our recent investigations into the behavior of LLMs in software development. I will highlight recurrent inefficiencies in generated code and limitations of existing benchmarks, and introduce novel tools and frameworks (e.g., ReCatcher, PrismBench) that we have developed to assess and improve the reliability of coding assistants. I will also discuss how traditional software engineering practices (e.g., testing, static analysis) can be adapted to strengthen the reliability of AI coding agents. I will conclude with reflections on the opportunities and challenges of trustworthy integration of AI in the software development lifecycle, and outline some directions for building more reliable and effective AI coding assistants.
Bio: Foutse Khomh is a Full Professor of Software Engineering at Polytechnique Montréal, a Canada Research Chair Tier 1 on Trustworthy Intelligent Software Systems, a Canada CIFAR AI Chair on Trustworthy Machine Learning Software Systems, an NSERC Arthur B. McDonald Fellow, an Honoris Genius Prize Laureate, and an FRQ-IVADO Research Chair on Software Quality Assurance for Machine Learning Applications. He received a Ph.D. in Software Engineering from the University of Montreal in 2011, with the Award of Excellence. He also received a CS-Can/Info-Can Outstanding Young Computer Science Researcher Prize for 2019, the Excellence in Research and Innovation Award of Polytechnique Montréal, and the prestigious IEEE CS TCSE New Directions Award in 2025. His work has received four ten-year Most Influential Paper (MIP) Awards, eight Best/Distinguished Paper Awards at major conferences, and two Best Journal Paper of the Year Awards. He initiated and co-organized the Software Engineering for Machine Learning Applications (SEMLA) symposium and the RELENG (Release Engineering) workshop series. He also co-organized the FM+SE Summit series (https://fmse.io/), a platform where leading industrial and academic experts discuss and reflect on the challenges associated with the adoption of foundation and large models in software engineering. He is co-founder of the NSERC CREATE SE4AI: A Training Program on the Development, Deployment, and Servicing of Artificial Intelligence-based Software Systems and one of the Principal Investigators of the DEpendable Explainable Learning (DEEL) project. He is also a co-founder of Quebec's initiative on Trustworthy AI (Confiance IA Quebec) and Scientific co-director of the Institut de Valorisation des Données (IVADO). He is on the editorial board of multiple international software engineering journals (e.g., TOSEM, IEEE Software, EMSE, SQJ, JSEP) and is a Senior Member of IEEE.
Jie M. Zhang
16:30 – 18:00
Keynote Room

Jie M. Zhang

LLMs for Code: Beyond Just Correctness

King’s College London
Large language models (LLMs) are rapidly transforming the practice of software engineering. Code correctness has been the primary benchmark for evaluating these models, but real-world software development requires much more. In this talk, I will discuss how we could move beyond correctness to address broader dimensions of LLM-generated code. I will use my recent work on code efficiency, fairness, and diversity as examples, and discuss the challenges and opportunities that lie ahead in advancing LLMs for code.
Bio: Dr. Jie M. Zhang is a lecturer in computer science at King’s College London. Her main research interests are the trustworthiness of software engineering, AI, and LLMs. She has published numerous papers in top-tier venues including ICML, ACL, NeurIPS, ICLR, ICSE, FSE, ASE, ISSTA, TSE, and TOSEM. She is a steering committee member of the IEEE ICST and ACM AIware conferences. She is the general chair of AIware 2025, an area chair for ICSE 2026 and ASE 2025, and the program chair of many events, including AIware 2024, Internetware 2024, and the ISSTA 2025 Doctoral Symposium. Over the last three years, she has been invited to give over 30 talks at conferences, universities, and IT companies. In recognition of her influence, she was named one of the Top 15 Global Chinese Female Young Scholars in Interdisciplinary AI (2023). Her research has won an FSE 2025 Distinguished Paper Award, the 2022 IEEE Transactions on Software Engineering Best Paper Award, and an ICLR 2022 spotlight paper award. She is also the winner of the 2025 ACM SIGSOFT Early Research Award, one of the most prestigious honours for early-career researchers in the software engineering community.
Tuesday • 11 Nov
Ismael Faro
08:30 – 10:00
Keynote Room

Ismael Faro

Supercharging Quantum Computing with AI tools: From Code to Quantum Advantage

IBM Research
Quantum computing is evolving rapidly, and AI is becoming a key enabler in this transformation. In this talk, we’ll explore how AI is being integrated into every stage of the quantum computing pipeline—from writing code and optimizing circuits to managing resources and improving result quality. Whether you're a developer, researcher, or enthusiast, this session will help you collapse the superposition between theory and practice, and understand how AI is helping unlock the full potential of quantum technologies.
In this session, we’ll present a series of cutting-edge projects that demonstrate how AI is enhancing quantum computing workflows. We’ll begin with an LLM trained on Qiskit SDK code and examples, designed to help users write better Qiskit and QASM3 programs. Then, we’ll dive into AI-powered circuit optimization, showcasing how reinforcement learning models are used to synthesize and transpile quantum circuits more efficiently. Next, we’ll explore resource management tools that use AI to estimate execution times, validate jobs, and classify circuits for hybrid execution. We’ll also discuss how machine learning is being applied to improve error mitigation, error correction, and calibration processes—critical for achieving reliable quantum results. Finally, we’ll look ahead to the frontier of quantum advantage, where AI techniques are helping to design experiments that push the boundaries of what quantum systems can achieve. This talk is a journey through the synergy of Quantum and AI, offering practical insights and real-world examples that show how these technologies are converging to solve some of the hardest problems in computing.
Bio: Ismael Faro is the Vice President of Quantum + AI at IBM Research and holds the recognition of Distinguished Engineer for his technical contributions. Notably, he was the principal architect of the first public quantum cloud platform, the IBM Quantum Experience, launched in 2016. Faro is also recognized as one of the pioneering contributors to the open-source quantum computing software development framework Qiskit. Currently, his responsibilities include spearheading the development of Quantum + AI software and services that integrate novel research, with a focus on leveraging AI to optimize key parts of the quantum software stack and on a research-oriented agentic open-source stack. Beyond his executive roles, Ismael has a history of collaboration with research, developer, and entrepreneurial communities, actively engaging in open-source projects. As an entrepreneur, he has co-founded several startups that leverage cutting-edge technologies such as edge computing, distributed computing, and AI, and that prioritize enhancing user experiences through technological innovation.
Shin Hwei Tan
16:30 – 18:00
Keynote Room

Shin Hwei Tan

Towards Ethically Sourced Code Generation

Concordia University
There is growing interest in various ethical issues (e.g., unclear licensing, privacy, fairness, and environmental impact) in the process of software development. In this talk, I will start by introducing several examples of unethical behavior (e.g., self-promotion, unethical naming) in the context of open-source software projects. Then, I will discuss several recent papers that draw upon research supported by the Government of Canada’s New Frontiers in Research Fund (NFRF). To ensure responsible code generation, we introduce a novel coverage-based harmfulness testing technique that identifies harms in code automatically generated by large language models (LLMs). Finally, I will introduce the novel notion of Ethically Sourced Code Generation (ES-CodeGen), which refers to managing all processes involved in code generation model development, from data collection to post-deployment, via ethical and sustainable practices. The concept of ES-CodeGen draws upon insights gained from our multi-disciplinary literature review and a survey of experienced developers. Our study identifies several key findings and challenges for future research towards ethically sourced code generation.
Bio: Shin Hwei Tan is an Associate Professor (Gina Cody Research Chair) at Concordia University. Before moving to Concordia University, she was an Assistant Professor at the Southern University of Science and Technology in Shenzhen, China. She obtained her Ph.D. from the National University of Singapore and her B.S. (Hons) and M.Sc. degrees from the University of Illinois at Urbana-Champaign. Her main research interests are in automated program repair, software testing, and open-source development. She is an Associate Editor for TOSEM and a Guest Editor-in-Chief for the New Frontiers in Software Engineering track in TOSEM. She has also served on the program committees of top-tier software engineering conferences, where she won three best reviewer awards (FSE 2020, ASE 2020, ICSE 2022 NIER track). She is also the general chair of FSE 2026, which will be held at Concordia University. She recently won the 2025 ACM-W Rising Star Award, an ACM SIGSOFT Distinguished Paper Award at ICSE 2025, and an IEEE Distinguished Paper Award at ICST 2025.
Wednesday • 12 Nov
Peter C. Rigby
08:30 – 10:00
Keynote Room

Peter C. Rigby

AI for Software Engineering at Meta’s Scale

Meta / Concordia University

This keynote explores how Meta leverages AI to transform software engineering at scale, focusing on real-world, in-production systems that directly impact developer productivity and code quality. The talk highlights several innovative projects, including:

  • CodeCompose: AI-assisted code authoring
  • Diff Risk Scoring (DRS): Using LLMs to assess risk in code changes
  • MetaMateCR: Automated code review comment-to-patch generation
  • Agentic Systems: Autonomous agents for fixing test failures

Key Themes

  1. Scaling AI for Production
    • Deployment of AI models (CodeCompose, DRS) across Meta’s monorepo
    • System design and offline/online results
    • The importance of internally tuned models and domain-specific datasets
  2. Risk Prediction and Code Freezes
    • Just-in-Time Quality Assurance: Predicting risky changes at commit time
    • Dynamic code freeze strategies to balance stability and velocity
    • LLM-based risk models (iDiffLlama, iCodeLlama)
  3. Human-AI Collaboration
    • Thematic analysis of developer feedback 
    • Engineers prefer partial/incomplete solutions they can modify
    • AI-generated patches as starting points for discussion and review
  4. Model Evaluation and Safety
    • Randomized controlled safety trials for production rollout
    • LLM-as-a-Judge: Ensuring generated code matches Meta’s standards
    • Specialized, smaller models outperforming larger public models
  5. Future Directions
    • Enhancing risk scoring: Why is a change risky? Who should review it?
    • Code review agents for security and knowledge supplementation
    • Automated patch generation and reviewer recommendation

The talk is based on four papers:

  1. AI-Assisted Code Authoring at Scale: Fine-Tuning, Deploying, and Mixed Methods Evaluation
  2. Moving Faster and Reducing Risk: Using LLMs in Release Deployment
  3. AI-Assisted Fixes to Code Review Comments at Scale
  4. Agentic Program Repair from Test Failures at Scale: A Neuro-symbolic approach with static analysis and test execution feedback


Bio: Peter C. Rigby is a Software Engineering researcher at Meta and a professor in Software Engineering at Concordia University in Montreal. His overarching research interest is in understanding how developers collaborate to produce successful software systems. His research program is driven by a desire to determine empirically the factors that lead to the development of successful software and to adapt, apply, and validate these techniques in different settings. Empirical Software Engineering involves mining large data sets to provide an empirical basis for software engineering practices. Software Analytics is then used to provide statistical predictions of, for example, the areas of the system that would benefit from increased developer attention. Grounded, empirical findings are necessary to advance software development as an engineering discipline. He is currently focusing on the following research areas: GenAI for software engineering productivity, software testing, developer turnover, and code review. He has extensive industry collaboration with Ericsson and Meta.
https://users.encs.concordia.ca/~pcr/

Guy-Vincent Jourdan
16:30 – 18:00
Keynote Room

Guy-Vincent Jourdan

When AI Breaks and Secures: Lessons from the uOttawa–IBM Cyber Range

University of Ottawa
Like in many other scientific domains, academic research in cybersecurity has been profoundly influenced by recent advances in artificial intelligence, particularly by the widespread adoption of machine learning models and their integration into a large number of products. Only a few years ago, AI-centric papers were relatively rare in top academic venues; today, it is difficult to find a paper that does not include at least some application of machine learning.
In this talk, we will explore some of the impacts that this trend has had on the research conducted at the uOttawa–IBM Cyber Range. We will begin by examining examples of vulnerabilities introduced by these models, and how deploying them hastily can make systems less secure. We will focus on two specific cases: first, the surprising weaknesses of face recognition systems, similar to those routinely used worldwide to unlock smartphones; and second, the ability to poison diffusion models—the same models capable of generating ultra-realistic scenes from simple prompts. We will then turn to the positive side of AI and discuss how it can enhance cybersecurity. In our work, we have explored how models can help produce more secure code. We will present a system we developed to detect specific vulnerabilities in intermediate code, and, given the growing reliance on large language models for code generation, we will conclude by discussing a benchmark we created to evaluate these models’ ability to identify and “understand” insecure code.
Bio: Guy-Vincent Jourdan is a full professor of computer science at the Faculty of Engineering at the University of Ottawa, Canada, and the co-director of the uOttawa–IBM Cyber Range. He joined the School of Electrical Engineering and Computer Science as an associate professor in June 2004, after seven years in the private sector as CTO and later CEO of the Ottawa-based company Decision Academic Graphics. He received his Ph.D. from l’Université de Rennes / INRIA in France in 1995, in the area of distributed systems analysis. He now has over 20 years of experience leading research and industry collaborations in the field of cybersecurity, with a focus on cybercrime detection and prevention. His collaborators include industry leaders such as IBM, Cisco, Fortinet, and OpenText. He has co-authored over 120 scientific publications and holds 19 patents in partnership with various companies. His long-standing collaboration with IBM Security teams around the world has earned numerous accolades, including four “Research Project of the Year” awards (2010, 2012, 2014, 2017), the “Faculty Fellow of the Year” award (2018), and the IBM CAS Canada Award of Excellence (2019). He received the Award for Excellence in Research Partnerships from the Faculty of Engineering at the University of Ottawa in 2023, and he is an IBM Champion in 2025.
Thursday • 13 Nov
Neel Sundaresan
08:30 – 10:00
Keynote Room

Neel Sundaresan

Literate Software: Crafting Intelligent Systems in the Age of AI 

IBM
Software creation has moved beyond code generation into a new era of human–AI collaboration. Developers, architects, and designers now build through dialogue with intelligent systems, turning intent into structure and accelerating how ideas become working software. This shift is embodied in literate software development—where code and context are written as one. AI serves as a co-author that transforms sketches and reasoning into executable, documented systems, making software both expressive and comprehensible. Drawing on our experience building AI-native developer tools, this talk reveals how teams are achieving measurable gains in velocity, productivity, and creative quality. We discuss how literate, AI-first practices unify design, architecture, and engineering into a shared language of creation.
Bio: Neel Sundaresan is the General Manager of Automation and AI at IBM, leading AI-driven products for IT automation and developer productivity. Previously, he served as VP of AI and Engineering at Microsoft, where he pioneered the concepts of the “Internet of Code” and “Code as data” to enhance developer efficiency. Neel has also headed eBay’s research and data labs, driving innovations in search, recommender systems, and Trust & Safety. An accomplished inventor with over 315 patents and more than 125 publications, he holds a Ph.D. in Computer Science from Indiana University and dual master’s degrees in Mathematics and Computer Science from IIT Mumbai. Neel is a frequent speaker at international conferences and a dedicated social impact advocate.