Toward Improving the Robustness of Deep Learning Models via Model Transformation (Virtual)
Deep learning (DL) techniques have attracted much attention in recent years and have been applied to many application scenarios, including safety-critical ones. Improving the universal robustness of DL models is therefore vital, and many approaches have been proposed over the past decade toward this goal. Among existing approaches, adversarial training is the most representative: it advocates a post-training model tuning process that incorporates adversarial samples. Although successful, such approaches still suffer from generalizability issues in the face of various attacks, with unsatisfactory effectiveness. Targeting this problem, in this paper we propose a novel model training framework that aims to improve the universal robustness of DL models via model transformation combined with a data augmentation strategy in a delta debugging fashion. We have implemented our approach in a tool called Dare and conducted an extensive evaluation on 9 DL models. The results show that our approach significantly outperforms existing adversarial training techniques. Specifically, Dare achieved the highest Empirical Robustness in 29 of 45 testing scenarios under various attacks, whereas the best baseline approach achieved it in only 5 of 45.
Thu 13 Oct | Displayed time zone: Eastern Time (US & Canada)
10:00 - 12:00 | Technical Session 21 - SE for AI II (Research Papers / Late Breaking Results / NIER Track / Journal-first Papers) at Banquet B. Chair(s): Andrea Stocco, Università della Svizzera italiana (USI)
10:00 (20m) Research paper | DeepPerform: An Efficient Approach for Performance Testing of Resource-Constrained Neural Networks. Research Papers. Simin Chen (University of Texas at Dallas, USA), Mirazul Haque (UT Dallas), Cong Liu (University of Texas at Dallas, USA), Wei Yang (University of Texas at Dallas)
10:20 (10m) Paper | Prototyping Deep Learning Applications with Non-Experts: An Assistant Proposition. Late Breaking Results. Gustavo Rodrigues dos Reis, Adrian Mos (NAVER LABS Europe), Cyril Labbé (LIG - UGA), Mario Cortes Cornax (LIG - UGA)
10:30 (20m) Research paper | Boosting the Revealing of Detected Violations in Deep Learning Testing: A Diversity-Guided Method (Virtual). ACM SIGSOFT Distinguished Paper Award. Research Papers. Xiaoyuan Xie (School of Computer Science, Wuhan University, China), Pengbo Yin (School of Computer Science, Wuhan University), Songqiang Chen (School of Computer Science, Wuhan University)
10:50 (20m) Paper | Faults in Deep Reinforcement Learning Programs: A Taxonomy and A Detection Approach (Virtual). Journal-first Papers. Amin Nikanjam (École Polytechnique de Montréal), Mohammad Mehdi Morovati (École Polytechnique de Montréal), Foutse Khomh (Polytechnique Montréal), Houssem Ben Braiek (École Polytechnique de Montréal)
11:10 (20m) Research paper | Towards Understanding the Faults of JavaScript-Based Deep Learning Systems (Virtual). Research Papers. Lili Quan (Tianjin University), Qianyu Guo (College of Intelligence and Computing, Tianjin University), Xiaofei Xie (Singapore Management University, Singapore), Sen Chen (Tianjin University), Xiaohong Li (Tianjin University), Yang Liu (Nanyang Technological University)
11:30 (10m) Vision and Emerging Results | An Empirical Study on Numerical Bugs in Deep Learning Programs (Virtual). NIER Track. Gan Wang, Zan Wang (Tianjin University, China), Junjie Chen (Tianjin University), Xiang Chen (Nantong University), Ming Yan (College of Intelligence and Computing, Tianjin University)
11:40 (20m) Research paper | Toward Improving the Robustness of Deep Learning Models via Model Transformation (Virtual). Research Papers. Yingyi Zhang (College of Intelligence and Computing, Tianjin University), Zan Wang (Tianjin University, China), Jiajun Jiang (Tianjin University), Hanmo You (College of Intelligence and Computing, Tianjin University), Junjie Chen (Tianjin University)