Additional training of a deep learning model can negatively affect its predictions, turning a previously correct prediction into an incorrect one (degradation). Such degradation matters in real-world use cases because of the diversity of sample characteristics: samples are a mixture of critical ones that must not be missed and less important ones. Accuracy alone therefore cannot capture a model's performance. While existing research aims to prevent model degradation, practitioners need insights into these techniques to grasp their benefits and limitations. In this talk, we present implications derived from a comparison of techniques for reducing degradation. In particular, we formulated use cases by arranging datasets to reflect real use cases in industrial settings. The results imply that, because of a trade-off between accuracy and degradation prevention, a practitioner should continuously reconsider which technique is best, taking into account dataset availability and the life cycle of the AI system.
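As a minimal sketch of how such degradation can be quantified, one common approach (not necessarily the one compared in the talk) is to count "negative flips": samples the old model classified correctly that the retrained model gets wrong. The function name and toy data below are illustrative assumptions.

```python
# Hypothetical sketch: quantifying degradation between two model versions
# as a negative-flip rate. Names and data are illustrative, not from the talk.

def negative_flip_rate(y_true, preds_old, preds_new):
    """Fraction of samples the old model got right but the new model gets wrong."""
    flips = sum(
        1 for t, po, pn in zip(y_true, preds_old, preds_new)
        if po == t and pn != t
    )
    return flips / len(y_true)

# Toy example: overall accuracy rises after retraining (0.6 -> 0.8),
# yet one previously correct sample regresses -- the trade-off the talk discusses.
y_true    = [0, 1, 1, 0, 1]
preds_old = [0, 1, 1, 1, 0]  # accuracy 0.6
preds_new = [0, 1, 0, 0, 1]  # accuracy 0.8, but sample 2 flips from correct to wrong

print(negative_flip_rate(y_true, preds_old, preds_new))  # 0.2
```

A metric like this, tracked alongside plain accuracy, is what lets a practitioner see when additional training helps overall while still harming critical samples.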