Study of Distractors in Neural Models of Code
Finding the important features that contribute to the predictions of neural models is an active area of research in explainable AI. Neural models are opaque, and identifying such features sheds light on how they arrive at their predictions. In this work, in contrast, we present the inverse perspective of distractor features: features that cast doubt on a prediction by affecting the model's confidence in it. Understanding distractors provides a complementary view of feature relevance in the predictions of neural models. In this paper, we apply a reduction-based technique to find distractors and report preliminary results on their impact and types. Our experiments across various tasks, models, and datasets of code reveal that removing tokens can significantly affect a model's confidence in its predictions, and that the category of a token can also play a vital role in the model's confidence. Our study aims to enhance the transparency of models by highlighting the tokens that most strongly influence their confidence.
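For illustration, the sketch below shows one way a reduction-based search for distractors might look: tokens are removed one at a time and the resulting change in the model's confidence is recorded. The `confidence` callable, the `min_delta` threshold, and the one-token-at-a-time reduction are assumptions made for this sketch, not the authors' exact procedure.

```python
# Minimal sketch of a reduction-based search for distractor tokens.
# Assumes a hypothetical `confidence(tokens, label)` callable that returns
# the model's confidence in `label` for the given token sequence.
from typing import Callable, List, Tuple


def find_distractors(
    tokens: List[str],
    label: str,
    confidence: Callable[[List[str], str], float],
    min_delta: float = 0.05,
) -> List[Tuple[str, float]]:
    """Remove one token at a time and record how the model's confidence
    in `label` changes; tokens whose removal shifts confidence by at
    least `min_delta` are reported as candidate distractors."""
    base = confidence(tokens, label)
    candidates = []
    for i, tok in enumerate(tokens):
        reduced = tokens[:i] + tokens[i + 1:]
        delta = confidence(reduced, label) - base
        if abs(delta) >= min_delta:
            candidates.append((tok, delta))
    # Tokens causing the largest confidence change on removal come first.
    return sorted(candidates, key=lambda pair: abs(pair[1]), reverse=True)
```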
Sun 14 May (displayed time zone: Hobart)

11:00 - 12:30 | Session 2 (InteNSE) at Meeting Room 110 | Chair(s): Reyhaneh Jabbarvand, University of Illinois at Urbana-Champaign

11:00 (30m) | Research paper | Study of Distractors in Neural Models of Code (InteNSE) | Md Rafiqul Islam Rabin (University of Houston), Aftab Hussain (University of Houston), Sahil Suneja (IBM Research), Amin Alipour (University of Houston) | Pre-print

11:30 (30m) | Research paper | A Study of Variable-Role-based Feature Enrichment in Neural Models of Code (InteNSE) | Aftab Hussain (University of Houston), Md Rafiqul Islam Rabin (University of Houston), Bowen Xu (North Carolina State University), David Lo (Singapore Management University), Amin Alipour (University of Houston) | Pre-print

12:00 (30m) | Other | Half Day Wrap Up (InteNSE)