ISSTA 2022
Mon 18 - Fri 22 July 2022 Online
Wed 20 Jul 2022 07:00 - 07:20 at ISSTA 2 - Session 2-2: Neural Networks, Learning, NLP E
Fri 22 Jul 2022 01:40 - 02:00 at ISSTA 2 - Session 1-12: Neural Networks, Learning, NLP D

Statistical type inference systems have hitherto relied heavily on supervised learning approaches, which require laborious manual effort to collect and label large amounts of data. Most Turing-complete imperative languages share similar control- and data-flow structures, which makes it possible to transfer knowledge learned from one language to another. In this paper, we propose PLATO, a cross-lingual transfer learning framework for statistical type inference, which allows us to leverage prior knowledge learned from the labeled dataset of one language and transfer it to others, for example, Python to JavaScript, Java to JavaScript, etc. PLATO is powered by a novel kernelized attention mechanism that constrains the attention scope of the backbone Transformer model so that the model is forced to base its predictions on features commonly shared among languages. In addition, we propose a syntax enhancement that augments learning on the feature overlap among language domains. Furthermore, PLATO can also be used to improve conventional supervised type inference by introducing cross-lingual augmentation, which enables the model to learn more general features across multiple languages. We evaluated PLATO under two settings: 1) in the cross-domain scenario, where the target language data is unlabeled or only partially labeled, the results show that PLATO outperforms state-of-the-art domain transfer techniques by a large margin, e.g., improving the Python-to-TypeScript baseline by +14.6@EM and +18.6@weighted-F1; and 2) in the conventional mono-lingual supervised scenario, PLATO improves the Python baseline by +4.10@EM and +1.90@weighted-F1 with the introduction of cross-lingual augmentation.
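The abstract's central mechanism is an attention layer whose scope is restricted by a kernel, so the Transformer can only attend to positions carrying features shared across languages. The sketch below is an illustrative reconstruction, not PLATO's actual implementation: it applies a hypothetical binary `kernel_mask` to standard scaled dot-product attention, zeroing out attention to positions outside the allowed scope.

```python
import math

def kernelized_attention(q, k, v, kernel_mask):
    """Scaled dot-product attention restricted by a kernel mask.

    q, k, v: n x d lists of lists (one row per token position).
    kernel_mask: n x n 0/1 matrix; kernel_mask[i][j] == 0 means query i
    may not attend to position j. Assumes every row allows at least one
    position (e.g., self-attention), otherwise the softmax is undefined.
    """
    n, d = len(q), len(q[0])
    out = []
    for i in range(n):
        # raw attention scores for query i against every key
        scores = [sum(q[i][t] * k[j][t] for t in range(d)) / math.sqrt(d)
                  for j in range(n)]
        # mask out positions outside the allowed scope before the softmax
        scores = [s if kernel_mask[i][j] else float("-inf")
                  for j, s in enumerate(scores)]
        # numerically stable softmax; exp(-inf) underflows to exactly 0.0
        m = max(scores)
        exps = [math.exp(s - m) for s in scores]
        z = sum(exps)
        weights = [e / z for e in exps]
        # weighted sum of value vectors
        out.append([sum(weights[j] * v[j][t] for j in range(n))
                    for t in range(d)])
    return out
```

In this toy form, a masked-out position contributes exactly zero to the output, which is the sense in which the attention scope is "constrained": the model cannot base a prediction on tokens the kernel rules out, e.g., language-specific syntax outside the shared control- and data-flow structure.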

Wed 20 Jul

Displayed time zone: Seoul

07:00 - 08:20
Session 2-2: Neural Networks, Learning, NLP E
Technical Papers at ISSTA 2
07:00
20m
Talk
Cross-Lingual Transfer Learning for Statistical Type Inference
ACM SIGSOFT Distinguished Paper
Technical Papers
Zhiming Li Nanyang Technological University, Singapore, Xiaofei Xie Singapore Management University, Singapore, Haoliang Li City University of Hong Kong, Zhengzi Xu Nanyang Technological University, Yi Li Nanyang Technological University, Yang Liu Nanyang Technological University
DOI
07:20
20m
Talk
DocTer: Documentation-Guided Fuzzing for Testing Deep Learning API Functions
Technical Papers
Danning Xie Purdue University, Yitong Li University of Waterloo, Mijung Kim UNIST, Hung Viet Pham University of Waterloo, Lin Tan Purdue University, Xiangyu Zhang Purdue University, Michael W. Godfrey University of Waterloo, Canada
DOI
07:40
20m
Talk
HybridRepair: Towards Annotation-Efficient Repair for Deep Learning Models
Technical Papers
Yu Li The Chinese University of Hong Kong, Muxi Chen The Chinese University of Hong Kong, Qiang Xu The Chinese University of Hong Kong
DOI
08:00
20m
Talk
Human-in-the-Loop Oracle Learning for Semantic Bugs in String Processing Programs
Technical Papers
Charaka Geethal Monash University, Thuan Pham The University of Melbourne, Aldeida Aleti Monash University, Marcel Böhme MPI-SP, Germany and Monash University, Australia
DOI Pre-print

Fri 22 Jul

Displayed time zone: Seoul

01:40 - 02:20
Session 1-12: Neural Networks, Learning, NLP D
Technical Papers at ISSTA 2
01:40
20m
Talk
Cross-Lingual Transfer Learning for Statistical Type Inference
ACM SIGSOFT Distinguished Paper
Technical Papers
Zhiming Li Nanyang Technological University, Singapore, Xiaofei Xie Singapore Management University, Singapore, Haoliang Li City University of Hong Kong, Zhengzi Xu Nanyang Technological University, Yi Li Nanyang Technological University, Yang Liu Nanyang Technological University
DOI
02:00
20m
Talk
DocTer: Documentation-Guided Fuzzing for Testing Deep Learning API Functions
Technical Papers
Danning Xie Purdue University, Yitong Li University of Waterloo, Mijung Kim UNIST, Hung Viet Pham University of Waterloo, Lin Tan Purdue University, Xiangyu Zhang Purdue University, Michael W. Godfrey University of Waterloo, Canada
DOI