Tue 6 Nov 2018 16:37 - 17:00 at Horizons 10-11 - Developer Studies Chair(s): Thomas LaToza

Peer code review is a practice widely adopted in software projects to improve code quality. In current code review practices, code changes are manually inspected by developers other than the author before the changes are integrated into a project or put into production. We conducted a study to obtain an empirical understanding of what makes a code change easier to review. To this end, we surveyed the published academic literature and gray-literature sources (e.g., blogs and white papers), interviewed ten professional developers, and designed and deployed a reviewability evaluation tool that professional developers used to rate the reviewability of 98 code changes. We find that reviewability is determined by several factors, such as the change description, the size of the change, and a coherent commit history. We provide recommendations for practitioners and researchers. Public preprint [https://doi.org/10.5281/zenodo.1323659]; data and materials [https://doi.org/10.5281/zenodo.1323659].

15:30 - 17:00: Research Papers - Developer Studies at Horizons 10-11
Chair(s): Thomas LaToza (George Mason University)

15:30 - 15:52: Talk (Journal-First) [DOI]
15:52 - 16:15: Talk (Journal-First) by Yuhao Wu, Shaowei Wang (Queen's University), Cor-Paul Bezemer (University of Alberta, Canada), and Katsuro Inoue (Osaka University) [DOI]
16:15 - 16:37: Talk (Research Papers) by Sebastian Baltes (University of Trier) and Stephan Diehl (Computer Science, University of Trier, Germany) [Pre-print]
16:37 - 17:00: Full paper (Research Papers) by Achyudh Ram (University of Waterloo), Anand Ashok Sawant (Delft University of Technology), Marco Castelluccio (Mozilla Foundation, UK), and Alberto Bacchelli (University of Zurich) [Link to publication, DOI, Pre-print, Media Attached]