What's bothering developers in code review?
Thu 12 May 2022 05:15 - 05:20 at ICSE room 5-odd hours - Tools and Environments 1 Chair(s): Timo Kehrer
Code review is a common practice in software development, and numerous studies have described different aspects of the process: its characteristics, the expectations placed on it, issues around reviewer allocation, and more. However, one aspect that has received little attention is developers' experience of the code review process. This is unfortunate given the significant amount of time developers spend on this activity; problems that degrade developers' experience on a daily basis amount to a work environment issue.
In this paper, we present an exploratory mixed-method study driven by the question: "what's bothering developers in code review?". We use semi-structured interviews to gather data from two multi-national companies and conduct a follow-up survey. Our results suggest that developers are frequently bothered by misalignments in the code review tooling and process, which hinder them from carrying out their code review tasks effectively. We present an initial characterization of seven misalignments that may hamper the developer experience. Based on our findings, we propose five directions for further exploration to improve the developer experience.
Tue 10 May (displayed time zone: Eastern Time (US & Canada))
11:00 - 12:00 | Tools and Environments 4 | NIER - New Ideas and Emerging Results / Technical Track / SEIP - Software Engineering in Practice | ICSE room 5-odd hours | Chair(s): Guido Salvaneschi (University of St. Gallen)
11:00 | 5m | Talk | Towards Property-Based Tests in Natural Language | NIER - New Ideas and Emerging Results | Colin Gordon (Drexel University) | Pre-print | Media Attached
11:05 | 5m | Talk | Using a Semantic Knowledge Base to Improve the Management of Security Reports in Industrial DevOps Projects | SEIP - Software Engineering in Practice | Pre-print | Media Attached
11:10 | 5m | Talk | What's bothering developers in code review? | SEIP - Software Engineering in Practice | Emma Söderberg (Lund University), Luke Church (University of Cambridge / Lund University / Lark Systems), Jürgen Börstler (Blekinge Institute of Technology), Diederick Niehorster (Lund University), Christofer Rydenfält (Lund University) | Pre-print | Media Attached
11:15 | 5m | Talk | "Project smells" — Experiences in Analysing the Software Quality of ML Projects with mllint | SEIP - Software Engineering in Practice | Bart van Oort (Delft University of Technology), Luís Cruz (Delft University of Technology), Babak Loni (ING Bank N.V.), Arie van Deursen (Delft University of Technology, Netherlands) | Pre-print | Media Attached
11:20 | 5m | Talk | Discovering Repetitive Code Changes in Python ML Systems | Technical Track | Malinda Dilhara (University of Colorado Boulder, USA), Ameya Ketkar (Oregon State University, USA), Nikhith Sannidhi (University of Colorado Boulder), Danny Dig (University of Colorado Boulder, USA) | DOI | Pre-print | Media Attached
11:25 | 5m | Talk | OJXPerf: Featherlight Object Replica Detection for Java Programs | Technical Track | Bolun Li (North Carolina State University), Hao Xu (College of William and Mary), Qidong Zhao (North Carolina State University), Pengfei Su (University of California, Merced), Milind Chabbi (Scalable Machines Research), Shuyin Jiao (North Carolina State University), Xu Liu (North Carolina State University / Oak Ridge National Laboratory, USA) | DOI | Pre-print | Media Attached
Thu 12 May (displayed time zone: Eastern Time (US & Canada))
05:00 - 06:00 | Tools and Environments 1 | Technical Track / SEIP - Software Engineering in Practice / NIER - New Ideas and Emerging Results | ICSE room 5-odd hours | Chair(s): Timo Kehrer (University of Bern)
05:00 | 5m | Talk | MLSmellHound: A Context-Aware Code Analysis Tool | NIER - New Ideas and Emerging Results | Jai Kannan (Deakin University), Scott Barnett (Deakin University), Anj Simmons (Deakin University), Luís Cruz (Delft University of Technology), Akash Agarwal (Deakin University) | DOI | Pre-print
05:05 | 5m | Talk | A Unified Code Review Automation for Large-scale Industry with Diverse Development Environments | SEIP - Software Engineering in Practice | Hyungjin Kim (Samsung Research, Samsung Electronics), Yonghwi Kwon (Samsung Research, Samsung Electronics), Hyukin Kwon (Samsung Research, Samsung Electronics), Yeonhee Ryou (Samsung Research, Samsung Electronics), Sangwoo Joh (Samsung Research, Samsung Electronics), Taeksu Kim (Samsung Research, Samsung Electronics), Chul-Joo Kim (Samsung Research, Samsung Electronics) | DOI | Pre-print | Media Attached
05:10 | 5m | Talk | Using a Semantic Knowledge Base to Improve the Management of Security Reports in Industrial DevOps Projects | SEIP - Software Engineering in Practice | Pre-print | Media Attached
05:15 | 5m | Talk | What's bothering developers in code review? | SEIP - Software Engineering in Practice | Emma Söderberg (Lund University), Luke Church (University of Cambridge / Lund University / Lark Systems), Jürgen Börstler (Blekinge Institute of Technology), Diederick Niehorster (Lund University), Christofer Rydenfält (Lund University) | Pre-print | Media Attached
05:20 | 5m | Talk | "Project smells" — Experiences in Analysing the Software Quality of ML Projects with mllint | SEIP - Software Engineering in Practice | Bart van Oort (Delft University of Technology), Luís Cruz (Delft University of Technology), Babak Loni (ING Bank N.V.), Arie van Deursen (Delft University of Technology, Netherlands) | Pre-print | Media Attached
05:25 | 5m | Talk | FlakiMe: Laboratory-Controlled Test Flakiness Impact Assessment | Technical Track | Maxime Cordy (University of Luxembourg, Luxembourg), Renaud Rwemalika (University of Luxembourg), Adriano Franci (University of Luxembourg), Mike Papadakis (University of Luxembourg, Luxembourg), Mark Harman (University College London) | Pre-print | Media Attached