Wed 12 Oct 2022 10:30 - 10:50 at Banquet A - Technical Session 10 - Testing I Chair(s): Gordon Fraser

Voice-based virtual assistants are becoming increasingly popular. Such systems provide frameworks on which developers can build their own apps. End-users interact with these apps through a Voice User Interface (VUI), which allows them to perform actions via natural language commands. Testing such systems is far from trivial, mainly because the same command can be expressed through several semantically equivalent utterances, to all of which the VUI is expected to react correctly. To support developers in testing VUIs, deep learning (DL)-based tools have been integrated into development environments to generate paraphrases for selected seed utterances; the Alexa Developer Console (ADC) is one example. Such tools, however, generate a limited number of paraphrases and do not cover several corner cases. In this paper, we introduce VUI-UPSET, a novel approach that adapts chatbot-testing techniques to VUI testing, since both kinds of systems offer a similar natural-language interface to users. We conducted an empirical study to understand how VUI-UPSET compares to existing approaches in terms of (i) the correctness of the generated paraphrases, and (ii) their capability to reveal bugs. We manually analyzed 5,872 generated paraphrases, totaling 13,310 evaluations. Our results show that the DL-based tool integrated in the ADC generates a significantly higher percentage of meaningful paraphrases than VUI-UPSET. However, VUI-UPSET generates more bug-revealing paraphrases, allowing developers to test their apps more thoroughly at the cost of discarding a higher number of irrelevant paraphrases.
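The core idea described in the abstract — checking that a VUI reacts consistently to semantically equivalent utterances — can be sketched as follows. This is a minimal illustration, not the actual VUI-UPSET or ADC implementation: `generate_paraphrases` and `invoke_vui` are hypothetical stand-ins for a paraphrase generator and for invoking the voice app.

```python
# Sketch of paraphrase-based VUI testing: a seed utterance is expanded
# into paraphrases, and the intent the app resolves for each paraphrase
# is compared against the intent resolved for the seed.

def generate_paraphrases(seed):
    # Hypothetical stand-in for a DL-based paraphrase generator
    # (e.g. the ADC tool or VUI-UPSET); here, hand-written variants.
    return {
        "turn on the lights": ["switch the lights on", "lights on, please"],
    }.get(seed, [])

def invoke_vui(utterance):
    # Hypothetical stand-in for sending an utterance to the voice app
    # and reading back the intent it resolved.
    known = {
        "turn on the lights": "LightsOnIntent",
        "switch the lights on": "LightsOnIntent",
        "lights on, please": "LightsOnIntent",
    }
    return known.get(utterance, "FallbackIntent")

def bug_revealing_paraphrases(seed):
    # A paraphrase reveals a bug if the app resolves it to a different
    # intent than the seed utterance.
    expected = invoke_vui(seed)
    return [p for p in generate_paraphrases(seed)
            if invoke_vui(p) != expected]

print(bug_revealing_paraphrases("turn on the lights"))  # prints []
```

In this toy example every paraphrase resolves to the same intent as the seed, so no bug-revealing paraphrase is reported; in practice, the paper's evaluation hinges on how many generated paraphrases are meaningful and how many expose such mismatches.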

Wed 12 Oct

Displayed time zone: Eastern Time (US & Canada)

10:00 - 12:00
Technical Session 10 - Testing I (Research Papers / Industry Showcase / Tool Demonstrations) at Banquet A
Chair(s): Gordon Fraser University of Passau
Research paper
Inline Tests
Research Papers
Yu Liu University of Texas at Austin, Pengyu Nie University of Texas at Austin, Owolabi Legunsen Cornell University, Milos Gligoric University of Texas at Austin
LiveRef: a Tool for Live Refactoring Java Code
Tool Demonstrations
Sara Fernandes FEUP, Universidade do Porto, Ademar Aguiar FEUP, Universidade do Porto, André Restivo LIACC, Universidade do Porto, Porto, Portugal
Research paper
Sorry, I don't Understand: Improving Voice User Interface Testing
Research Papers
Emanuela Guglielmi University of Molise, Giovanni Rosa University of Molise, Simone Scalabrino University of Molise, Gabriele Bavota Software Institute, USI Università della Svizzera italiana, Rocco Oliveto University of Molise
Industry talk
MOREST: Industry Practice of Automatic RESTful API Testing
Industry Showcase
Yi Liu Nanyang Technological University, Yuekang Li Nanyang Technological University, Yang Liu Nanyang Technological University, Ruiyuan Wan, Runchao Wu Huawei Inc., Qingkun Liu Huawei Cloud Computing Technologies Co., Ltd
Research paper
VITAS: Guided Model-based VUI Testing of VPA Apps (Virtual)
Research Papers
Suwan Li Nanjing University, Lei Bu Nanjing University, Guangdong Bai University of Queensland, Zhixiu Guo Institute of Information Engineering, Chinese Academy of Sciences, China, Kai Chen SKLOIS, Institute of Information Engineering, Chinese Academy of Sciences, China, Hanlin Wei The University of Queensland