ICSE 2023
Sun 14 - Sat 20 May 2023 Melbourne, Australia
Sun 14 May 2023 11:00 - 11:30 at Meeting Room 110 - Session 2 Chair(s): Reyhaneh Jabbarvand

Finding important features that contribute to the prediction of neural models is an active area of research in explainable AI. Neural models are opaque, and finding such features sheds light on their predictions. In contrast, in this work we present an inverse perspective: distractor features, i.e., features that cast doubt on the prediction by affecting the model's confidence in it. Understanding distractors provides a complementary view of feature relevance in the predictions of neural models. In this paper, we apply a reduction-based technique to find distractors and report our preliminary results on their impact and types. Our experiments across various tasks, models, and datasets of code reveal that removing tokens can significantly affect a model's confidence in its prediction, and that the category of the removed tokens can also play a vital role in that confidence. Our study aims to enhance the transparency of models by highlighting the tokens that most influence their confidence.
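
As a minimal illustration of the reduction idea described in the abstract, the sketch below removes one token at a time from an input and records how the model's confidence in its original prediction changes. The function name, the confidence threshold, and the toy stand-in model are assumptions for illustration only, not the authors' implementation.

```python
# Hypothetical sketch of a reduction-style search for distractor tokens.
# `predict_confidence` stands in for any neural code model that returns the
# probability the model assigns to its original prediction (an assumption,
# not the paper's actual pipeline).

from typing import Callable, List, Tuple


def find_distractors(
    tokens: List[str],
    predict_confidence: Callable[[List[str]], float],
    threshold: float = 0.05,
) -> List[Tuple[str, float]]:
    """Remove one token at a time and record the change in the model's
    confidence in its original prediction."""
    baseline = predict_confidence(tokens)
    distractors = []
    for i, tok in enumerate(tokens):
        reduced = tokens[:i] + tokens[i + 1:]
        delta = predict_confidence(reduced) - baseline
        # A token whose removal raises confidence by more than the
        # threshold is treated here as a candidate distractor.
        if delta > threshold:
            distractors.append((tok, delta))
    return sorted(distractors, key=lambda x: -x[1])


if __name__ == "__main__":
    # Toy stand-in model: confidence rises when the noisy token "tmp"
    # is removed and falls when "return" is removed.
    def toy_model(toks: List[str]) -> float:
        conf = 0.6
        if "return" in toks:
            conf += 0.2
        if "tmp" in toks:
            conf -= 0.1
        return conf

    code_tokens = ["def", "add", "(", "a", ",", "b", ")", ":",
                   "tmp", "return", "a", "+", "b"]
    print(find_distractors(code_tokens, toy_model))
```

In this toy run, removing "tmp" increases the stand-in model's confidence, so it is reported as a candidate distractor; a real study would repeat this over many inputs and group the flagged tokens by category.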

Sun 14 May

Displayed time zone: Hobart

11:00 - 12:30
Session 2: InteNSE at Meeting Room 110
Chair(s): Reyhaneh Jabbarvand (University of Illinois at Urbana-Champaign)
11:00
30m
Research paper
Study of Distractors in Neural Models of Code
InteNSE
Md Rafiqul Islam Rabin (University of Houston), Aftab Hussain (University of Houston), Sahil Suneja (IBM Research), Amin Alipour (University of Houston)
Pre-print
11:30
30m
Research paper
A Study of Variable-Role-based Feature Enrichment in Neural Models of Code
InteNSE
Aftab Hussain (University of Houston), Md Rafiqul Islam Rabin (University of Houston), Bowen Xu (North Carolina State University), David Lo (Singapore Management University), Amin Alipour (University of Houston)
Pre-print
12:00
30m
Other
Half-Day Wrap-Up
InteNSE