ICSE 2023
Sun 14 - Sat 20 May 2023 Melbourne, Australia

Can we take a recurrent neural network (RNN) trained to translate between languages and augment it to support a new natural language without retraining the model from scratch? Can we fix an RNN's faulty behavior by replacing the portions associated with that behavior? Recent work on decomposing fully connected neural networks (FCNNs) and convolutional neural networks (CNNs) into modules has shown the value of engineering deep models in this manner, which is standard in traditional SE but foreign to deep learning. However, prior work focuses on image-based multiclass classification problems and cannot be applied to RNNs due to (a) different layer structures, (b) loop structures, (c) different types of input-output architectures, and (d) the use of both nonlinear and logistic activation functions. In this work, we propose the first approach to decompose an RNN into modules. We study different types of RNNs, i.e., Vanilla, LSTM, and GRU. Further, we show how such RNN modules can be reused and replaced in various scenarios. We evaluate our approach on 5 canonical datasets (i.e., Math QA, Brown Corpus, Wiki-toxicity, Clinc OOS, and Tatoeba) and 4 model variants for each dataset. We found that decomposing a trained model incurs a small cost (accuracy: -0.6%, BLEU score: +0.10%). Also, the decomposed modules can be reused and replaced without retraining.
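The abstract describes a workflow of carving a trained RNN into per-concern modules that can then be reused or swapped without retraining the whole model. The following is a minimal, hypothetical PyTorch sketch of that reuse/replace idea only; RNNClassifier, extract_module, and compose are illustrative names, not the authors' API, and slicing the output head is a deliberate simplification of the paper's actual concern identification over the recurrent weights.

import torch
import torch.nn as nn

class RNNClassifier(nn.Module):
    def __init__(self, vocab=100, hidden=32, classes=5):
        super().__init__()
        self.emb = nn.Embedding(vocab, hidden)
        self.rnn = nn.GRU(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, classes)

    def forward(self, x):
        out, _ = self.rnn(self.emb(x))
        return self.head(out[:, -1])  # logits, shape (batch, classes)

def extract_module(model, cls):
    # Hypothetical helper: one binary "does the input belong to class cls?"
    # module per output concern, built from the trained weights.
    def module(x):
        out, _ = model.rnn(model.emb(x))
        return model.head(out[:, -1])[:, cls]  # score for this concern only
    return module

def compose(modules):
    # Recompose per-concern modules into a full classifier.
    def classifier(x):
        return torch.stack([m(x) for m in modules], dim=1).argmax(dim=1)
    return classifier

model = RNNClassifier()
modules = [extract_module(model, c) for c in range(5)]

# "Replacement": swap the module for concern 3 with one taken from a
# second model, e.g. one retrained only on corrected data for that concern.
other = RNNClassifier()
modules[3] = extract_module(other, 3)

x = torch.randint(0, 100, (2, 7))   # batch of 2 sequences of 7 token ids
print(compose(modules)(x))          # predicted class per input

The point of the decomposition is that the replacement above touches only one concern: the modules for the other four classes keep their original trained behavior, which is what allows repair or extension without retraining from scratch.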

Wed 17 May

Displayed time zone: Hobart

15:45 - 17:15
Development and evolution of AI-intensive systems
SEIP - Software Engineering in Practice / Technical Track / NIER - New Ideas and Emerging Results at Meeting Room 104
Chair(s): Sebastian Elbaum University of Virginia
15:45
15m
Talk
Reusing Deep Neural Network Models through Model Re-engineering
Technical Track
Binhang Qi Beihang University, Hailong Sun Beihang University, Xiang Gao Beihang University, China, Hongyu Zhang The University of Newcastle, Zhaotian Li Beihang University, Xudong Liu Beihang University
16:00
15m
Talk
PyEvolve: Automating Frequent Code Changes in Python ML Systems
Technical Track
Malinda Dilhara University of Colorado Boulder, USA, Danny Dig JetBrains Research & University of Colorado Boulder, USA, Ameya Ketkar Uber
Pre-print
16:15
15m
Talk
DeepArc: Modularizing Neural Networks for the Model Maintenance
Technical Track
Xiaoning Ren, Yun Lin Shanghai Jiao Tong University; National University of Singapore, Yinxing Xue University of Science and Technology of China, Ruofan Liu National University of Singapore, Jun Sun Singapore Management University, Zhiyong Feng Tianjin University, Jin Song Dong National University of Singapore
16:30
15m
Talk
Decomposing a Recurrent Neural Network into Modules for Enabling Reusability and Replacement
Technical Track
Sayem Mohammad Imtiaz Iowa State University, Fraol Batole Dept. of Computer Science, Iowa State University, Astha Singh Dept. of Computer Science, Iowa State University, Rangeet Pan IBM Research, Breno Dantas Cruz Dept. of Computer Science, Iowa State University, Hridesh Rajan Iowa State University
Pre-print
16:45
7m
Talk
Safe-DS: A Domain Specific Language to Make Data Science Safe
NIER - New Ideas and Emerging Results
Lars Reimann University of Bonn, Günter Kniesel-Wünsche University of Bonn
Pre-print
16:52
7m
Talk
Rapid Development of Compositional AI
NIER - New Ideas and Emerging Results
Lee Martie MIT-IBM Watson AI Lab, Jessie Rosenberg IBM, Veronique Demers MIT-IBM Watson AI Lab, Gaoyuan Zhang IBM, Onkar Bhardwaj MIT-IBM Watson AI Lab, John Henning IBM, Aditya Prasad IBM, Matt Stallone MIT-IBM Watson AI Lab, Ja Young Lee IBM, Lucy Yip IBM, Damilola Adesina IBM, Elahe Paikari IBM, Oscar Resendiz IBM, Sarah Shaw IBM, David Cox IBM
Pre-print
17:00
7m
Talk
StreamAI: Challenges of Continual Learning Systems in Production for AI Industrialization
SEIP - Software Engineering in Practice
Mariam Barry BNP Paribas, Albert Bifet University of Waikato, Institut Polytechnique de Paris, Jean Luc Billy BNP Paribas