Thu 13 Oct 2022 11:10 - 11:30 at Gold A - Technical Session 24 - Human Aspects Chair(s): Silvia Abrahão

Neural networks are increasingly popular thanks to their exceptional performance on many real-world problems. At the same time, they have been shown to be vulnerable to attacks, difficult to debug, and subject to fairness issues. To improve people's trust in the technology, it is often necessary to provide some human-understandable explanation of a neural network's decisions, e.g., why is it that my loan application was rejected whereas hers was approved? That is, a stakeholder would be interested in minimizing the chances of not being able to explain a decision consistently, and would like to know how often and how easily the decisions of a neural network can be explained before it is deployed.

In this work, we propose two measurements of the decision explainability of neural networks. We then develop algorithms that automatically evaluate these measurements for user-provided neural networks. We evaluate our approach on multiple neural network models trained on benchmark datasets. The results show that the decisions of existing neural networks often have low explainability according to our measurements. This is in line with the observation that adversarial samples, which are often hard to explain, can be easily generated through adversarial perturbation. Our further experiments show that the decisions of models trained with robust training are not necessarily easier to explain, whereas the decisions of models retrained with samples generated by our algorithms are easier to explain.
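The abstract notes that adversarial samples can be generated through adversarial perturbation. As a minimal illustration of that idea (not the paper's own generation algorithm, which is not detailed here), the fast-gradient-sign style of perturbation steps an input in the direction that increases the model's loss. The sketch below uses a tiny logistic-regression "network" with hypothetical fixed weights so the gradient can be written by hand:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical trained weights standing in for a real network.
w = np.array([2.0, -3.0])
b = 0.5

def predict(x):
    """Probability of class 1 for input x."""
    return sigmoid(w @ x + b)

def fgsm(x, y, eps):
    """One FGSM-style step: move x in the sign of the loss gradient.

    For binary cross-entropy with a linear model, the gradient of the
    loss w.r.t. the input is (p - y) * w.
    """
    grad_x = (predict(x) - y) * w
    return x + eps * np.sign(grad_x)

x = np.array([1.0, 0.2])            # clean input, confidently class 1
x_adv = fgsm(x, y=1.0, eps=0.8)     # adversarial counterpart
print(predict(x), predict(x_adv))   # confidence collapses under perturbation
```

With these toy weights the clean input is classified as class 1 with high confidence, while the perturbed input flips to the other side of the decision boundary, which is the kind of hard-to-explain decision the abstract refers to.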

Thu 13 Oct

Displayed time zone: Eastern Time (US & Canada)

10:00 - 12:00
Technical Session 24 - Human Aspects (Research Papers / Journal-first Papers / NIER Track) at Gold A
Chair(s): Silvia Abrahão Universitat Politècnica de València
10:00
20m
Research paper
Constructing a System Knowledge Graph of User Tasks and Failures from Bug Reports to Support Soap Opera Testing
Research Papers
Yanqi Su Australian National University, Zheming Han, Zhenchang Xing Australian National University, Xin Xia Huawei Software Engineering Application Technology Lab, Xiwei (Sherry) Xu CSIRO Data61, Liming Zhu CSIRO’s Data61; UNSW, Qinghua Lu CSIRO’s Data61
10:20
20m
Research paper
Data Augmentation for Improving Emotion Recognition in Software Engineering Communication
Research Papers
Mia Mohammad Imran Virginia Commonwealth University, Yashasvi Jain Drexel University, Preetha Chatterjee Drexel University, USA, Kostadin Damevski Virginia Commonwealth University
Pre-print
10:40
10m
Vision and Emerging Results
End-to-End Rationale Reconstruction
NIER Track
Mouna Dhaouadi University of Montreal, Bentley Oakes Université de Montréal, Michalis Famelis Université de Montréal
Pre-print
10:50
20m
Paper
Towards digitalization of requirements: Generating context-sensitive user stories from diverse specifications
Journal-first Papers
Padmalata Nistala Tata Consultancy Services Research, Asha Rajbhoj TCS Research, Vinay Kulkarni Tata Consultancy Services Research, Shivani Soni TCS Research, Kesav Vithal Nori IIIT Hyderabad, Raghu Reddy IIIT Hyderabad
Link to publication DOI
11:10
20m
Paper
Which neural network makes more explainable decisions? An approach towards measuring explainability (Virtual)
Journal-first Papers
Mengdi Zhang Singapore Management University, Singapore, Jun Sun Singapore Management University, Jingyi Wang Zhejiang University
Link to publication DOI
11:30
20m
Paper
Automatically Identifying the Quality of Developer Chats for Post Hoc Use (Virtual)
Journal-first Papers
Preetha Chatterjee Drexel University, USA, Kostadin Damevski Virginia Commonwealth University, Nicholas A. Kraft UserVoice, Lori Pollock University of Delaware
Link to publication Media Attached