Interpretability has been considered one of the key factors for applying defect prediction in practice. As one means of interpretation, local explanation methods have been widely used to explain individual predictions of models built on traditional features. There have also been attempts to apply local explanation methods to source code-based defect prediction models, but unfortunately they produce poor results. Since it is unclear how effective these local explanation methods are, we evaluate them with automatic metrics that focus on local faithfulness and explanation precision. Our experimental results show that the effectiveness of local explanation methods depends on the adopted defect prediction model: they are effective on Bag-of-Words-based models, but they may not be effective enough to explain every prediction of deep semantic models. In addition, we find that the hyperparameters of local explanation methods should be carefully tuned to obtain more precise and meaningful explanations.
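As a concrete illustration (not taken from the paper), the sketch below shows how a local explanation method might be applied to a Bag-of-Words-based defect prediction model. LIME is used here only as a representative local explanation method; the lime library usage, the toy token data, and the num_features setting are all illustrative assumptions rather than the study's actual setup.

# Illustrative sketch (assumption): LIME as one example of a local explanation
# method, applied to a Bag-of-Words-based defect prediction model in scikit-learn.
from lime.lime_text import LimeTextExplainer
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy training data: each "document" is the token stream of a source file,
# labeled 1 (defective) or 0 (clean). A real study would use mined defect datasets.
code_tokens = [
    "if ptr null return error",
    "open file read close file",
    "malloc buffer copy free buffer",
    "strcpy buffer input no bounds check",
]
labels = [0, 0, 0, 1]

# Bag-of-Words defect prediction model: token counts fed into a linear classifier.
model = make_pipeline(CountVectorizer(), LogisticRegression())
model.fit(code_tokens, labels)

# Local explanation of a single prediction. num_features is the kind of
# hyperparameter that, per the findings above, needs careful tuning to obtain
# precise and meaningful explanations.
explainer = LimeTextExplainer(class_names=["clean", "defective"])
explanation = explainer.explain_instance(
    "strcpy buffer user input", model.predict_proba, num_features=4
)
print(explanation.as_list())  # tokens ranked by their contribution to the prediction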