Code review is an auditing process in which all changes are reviewed by someone other than their author. Multiple studies have demonstrated its diverse benefits, and it has been widely adopted in both industry and open-source projects. Various studies have proposed techniques to recommend reviewers who can audit changes, but there is limited support for comprehending existing review pairs (i.e., pairs of change authors and reviewers). Without this understanding, development teams may struggle to distribute workloads effectively. For example, a team might overlook a workload concentrated on a specific developer, potentially slowing down the overall development process. Conversely, some developers may submit code review requests (pull requests) but fail to receive timely attention from their colleagues.
To help developers understand the existing review pairs, we propose CRV (Code Review Visualiser). CRV visualises developers' interactions during code reviews by quantifying their activities in a graph representation. In the graph, each node represents an individual developer, while the edges denote relationships based on discussions within the same code review requests. The size of a node indicates the total number of comments a developer has written in code review requests. The thickness of an edge between two nodes represents the number of comments the two developers authored within the same code review requests. Both nodes and edges use a colour scheme to represent the regularity of interactions. CRV also provides a time-window control, allowing developers to explore their code review activities across different time frames. We expect that CRV can help developers improve their code review practices by providing a better understanding of their current practices.
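The graph described above can be derived from comment data alone. The following is a minimal sketch, not CRV's actual implementation: it assumes review comments arrive as (review_id, author) pairs, takes a node's weight to be the developer's total comment count, and takes an edge's weight to be the number of comments the two developers authored within reviews they both participated in (one plausible reading of the edge definition). The function name and data shape are illustrative assumptions.

```python
from collections import Counter
from itertools import combinations

def build_interaction_graph(comments):
    """Sketch of a CRV-style interaction graph (hypothetical helper).

    comments: list of (review_id, author) pairs, one per review comment.
    Returns (node_weights, edge_weights):
      node_weights[dev]    -> total comments by dev (rendered as node size)
      edge_weights[(a, b)] -> comments a and b wrote in shared reviews
                              (rendered as edge thickness), keyed a < b.
    """
    # Node size: total comments per developer across all reviews.
    node_weights = Counter(author for _, author in comments)

    # Group comment counts per developer within each review request.
    per_review = {}
    for review_id, author in comments:
        per_review.setdefault(review_id, Counter())[author] += 1

    # Each pair of participants in a review shares an edge; its weight
    # grows by the comments both developers wrote in that review.
    edge_weights = Counter()
    for counts in per_review.values():
        for a, b in combinations(sorted(counts), 2):
            edge_weights[(a, b)] += counts[a] + counts[b]
    return node_weights, edge_weights
```

A time-window control, as in CRV, would simply filter the comment list by timestamp before calling this function.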
Tue 24 Jun (displayed time zone: Amsterdam, Berlin, Bern, Rome, Stockholm, Vienna)
10:30 - 12:20 | Code Review, Build, and Release (Ideas, Visions and Reflections / Industry Papers / Demonstrations / Research Papers / Journal First) at Aurora A. Chair(s): Peter Rigby (Concordia University; Meta)

10:30, 10m talk: From Overload to Insight: Bridging Code Search and Code Review with LLMs. Ideas, Visions and Reflections. Nikitha Rao (Carnegie Mellon University), Bogdan Vasilescu (Carnegie Mellon University), Reid Holmes (University of British Columbia)

10:40, 20m talk: Explaining Explanations: An Empirical Study of Explanations in Code Reviews. Journal First. Ratnadira Widyasari (Singapore Management University), Ting Zhang (Singapore Management University), Abir Bouraffa (University of Hamburg), Walid Maalej (University of Hamburg), David Lo (Singapore Management University)

11:00, 10m talk: Support, Not Automation: Towards AI-supported Code Review for Code Quality and Beyond. Ideas, Visions and Reflections.

11:10, 20m talk: BitsAI-CR: Automated Code Review via LLM in Practice. Industry Papers. Tao Sun (Beihang University), Jian Xu (ByteDance), Yuanpeng Li (ByteDance), Zhao Yan (ByteDance), Ge Zhang (ByteDance), Lintao Xie (ByteDance), Lu Geng (ByteDance), Zheng Wang (University of Leeds), Yueyan Chen (ByteDance), Qin Lin (ByteDance), Wenbo Duan (ByteDance), Kaixin Sui (ByteDance), Yuanshuo Zhu (ByteDance)

11:30, 10m talk: Visualising Developer Interactions in Code Reviews. Demonstrations.

11:40, 20m talk: CXXCrafter: An LLM-Based Agent for Automated C/C++ Open Source Software Building. Research Papers. Zhengmin Yu (Fudan University), Yuan Zhang (Fudan University), Ming Wen (Huazhong University of Science and Technology), Yinan Nie (Fudan University), Zhang Wenhui (Fudan University), Min Yang (Fudan University)

12:00, 20m talk: SmartNote: An LLM-Powered, Personalised Release Note Generator That Just Works. Research Papers. Farbod Daneshyan (Peking University), Runzhi He (Peking University), Jianyu Wu (Peking University), Minghui Zhou (Peking University)
Aurora A is the first room in the Aurora wing.
When facing the main Cosmos Hall, access to the Aurora wing is on the right, close to the side entrance of the hotel.