Learning Failure-Inducing Models for Testing Software-Defined Networks
Software-defined networks (SDNs) have emerged to enable programmable networks that allow system operators to manage their systems in a flexible and efficient way. SDNs have been widely deployed in many application domains, such as data centers, the Internet of Things, and satellite communications. The main idea behind SDNs is to transfer network control from localized, fixed-behavior controllers distributed over a set of network switches (as in traditional networks) to a logically centralized and programmable software controller. With complex software being an integral part of SDNs, developing SDN-based systems (SDN-systems), e.g., data centers, entails interdisciplinary considerations, including software engineering.
In the context of developing SDN-systems, software testing becomes even more important and challenging than in traditional networks, which provide static and predictable operations. In particular, even though the centralized controller in an SDN-system enables flexible and efficient services, its failure can undermine the entire communication network it manages. A software controller also presents new attack surfaces that allow malicious users to manipulate the system. Furthermore, the centralized controller interacts with diverse kinds of components, such as applications and network switches, which are typically developed by different vendors. Hence, the controller is prone to receiving unexpected inputs from applications, switches, or malicious users, which may cause system failures, e.g., a communication breakdown.
To test an SDN controller, engineers first need to explore its possible input space, which is very large: a controller takes as input a stream of control messages encoded according to an SDN communication protocol (e.g., OpenFlow). Second, engineers need to understand the characteristics of the test data, i.e., the control messages, that cause system failures. However, manually inspecting failure-inducing test data is time-consuming and error-prone, and misunderstanding the causes of failures typically leads to unreliable fixes.
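As an illustration of what such test inputs look like, the following sketch (our illustration, not material from the article) builds a well-formed OpenFlow 1.3 HELLO header and then corrupts a few randomly chosen bytes; the header layout follows the OpenFlow 1.3 specification, while the function names and the byte-level mutation strategy are assumptions made for the example.

```python
# Minimal sketch of one control message a fuzzer might feed to a controller.
# It encodes a well-formed OpenFlow 1.3 HELLO header (8 bytes: version, type,
# length, xid, per the OpenFlow 1.3 specification) and then corrupts a few
# randomly chosen bytes. Not taken from FuzzSDN; for illustration only.
import random
import struct

OFP_VERSION_1_3 = 0x04  # wire protocol version 0x04 = OpenFlow 1.3
OFPT_HELLO = 0          # message type 0 = HELLO


def build_hello(xid):
    """Encode a well-formed OFPT_HELLO message (header only, length = 8)."""
    return bytearray(struct.pack("!BBHI", OFP_VERSION_1_3, OFPT_HELLO, 8, xid))


def mutate(msg, n_mutations=2):
    """Overwrite a few randomly chosen bytes with arbitrary values."""
    fuzzed = bytearray(msg)
    for _ in range(n_mutations):
        fuzzed[random.randrange(len(fuzzed))] = random.randrange(256)
    return fuzzed


if __name__ == "__main__":
    original = build_hello(xid=0x1234)
    for _ in range(3):
        print(mutate(original).hex())  # candidate malformed control messages
```

Even for this single 8-byte header, every byte may take 256 values, which hints at why the input space of a realistic stream of control messages is far too large to explore exhaustively.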
In this article, we propose FuzzSDN, a machine learning-guided fuzzing method for testing SDN-systems. In particular, FuzzSDN targets software controllers deployed in SDN-systems. FuzzSDN relies on fuzzing guided by machine learning (ML) to both (1) efficiently explore the test input space of an SDN-system’s controller and (2) learn failure-inducing models that characterize the input conditions under which the system fails. This is done in a synergistic manner: the models guide test generation, and the generated tests are in turn used to improve the models. A failure-inducing model is practically useful for the following reasons. (1) It facilitates the diagnosis of system failures. FuzzSDN provides engineers with an interpretable model specifying how likely failures are to occur, thus providing concrete conditions under which the system will probably fail. Such conditions are much easier to analyze than a large set of individual failures. (2) It enables engineers to validate their fixes. Engineers can fix their code and test it against the generated test data set. A failure-inducing model can also be used as a test data generator to reproduce the system failures captured in the model; hence, engineers can better validate their fixes using an extended test data set.
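To illustrate the overall idea, the sketch below shows one way such a learn-and-fuzz loop could be organized: fuzzed control messages are labeled as passing or failing, an interpretable classifier is learned from the labeled data, and the learned model biases the next round of fuzzing. It is only a sketch under stated assumptions (a toy feature encoding, a stand-in failure oracle, and a decision tree as the interpretable model), not FuzzSDN's actual algorithm.

```python
# Minimal, self-contained sketch of a learn-and-fuzz loop in the spirit of the
# approach described above. This is NOT FuzzSDN's implementation: the feature
# encoding (four raw header-like fields), the failure oracle (system_fails, a
# stand-in predicate), and the decision-tree learner are simplifying
# assumptions made purely for illustration.
import random

from sklearn.tree import DecisionTreeClassifier, export_text

FIELDS = ["version", "msg_type", "length", "xid_low"]


def random_message():
    """Draw one candidate control message as a small feature vector."""
    return [random.randrange(256), random.randrange(256),
            random.randrange(65536), random.randrange(256)]


def system_fails(msg):
    """Stand-in oracle. In practice, the message would be sent to the SDN
    controller and the system's health checked for failures."""
    version, msg_type, _, _ = msg
    return version != 0x04 and msg_type > 200


def fuzz_round(n, model=None, max_tries=10_000):
    """Generate n candidate messages. Once a model is available, bias the
    batch toward inputs it predicts as failure-inducing (rejection sampling),
    while keeping half of the batch purely random for exploration."""
    batch, tries = [], 0
    while len(batch) < n and tries < max_tries:
        tries += 1
        msg = random_message()
        explore = len(batch) % 2 == 0
        if model is None or explore or model.predict([msg])[0] == 1:
            batch.append(msg)
    return batch


data, labels, model = [], [], None
for _ in range(5):  # alternate between fuzzing and (re)learning the model
    for msg in fuzz_round(200, model):
        data.append(msg)
        labels.append(int(system_fails(msg)))
    model = DecisionTreeClassifier(max_depth=3).fit(data, labels)

# The learned tree acts as an interpretable failure-inducing model: each path
# to a "class: 1" leaf is a concrete condition under which the toy system fails.
print(export_text(model, feature_names=FIELDS))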
We evaluated FuzzSDN by applying it to several systems controlled by well-known open-source SDN controllers: ONOS and RYU. Our experimental results show that, compared to state-of-the-art methods, FuzzSDN generates at least 12 times more failing control messages within the same time budget, even for a controller that is fairly robust to fuzzing. FuzzSDN also produces accurate failure-inducing models with, on average, a precision of 98% and a recall of 86%, significantly outperforming the models inferred by the baselines.
13:30 - 15:00 | Testing 3 at Camellia (Tool Demonstrations / Journal-first Papers / Research Papers / Industry Showcase / NIER Track)
Chair(s): Yi Song, School of Computer Science, Wuhan University

13:30 (12m talk): General and Practical Property-based Testing for Android Apps (Research Papers). Yiheng Xiong (East China Normal University), Ting Su (East China Normal University), Jue Wang (Nanjing University), Jingling Sun (University of Electronic Science and Technology of China), Geguang Pu (East China Normal University, China), Zhendong Su (ETH Zurich).

13:42 (12m talk): ACCESS: Assurance Case Centric Engineering of Safety-critical Systems (Journal-first Papers). Ran Wei (Lancaster University), Simon Foster (University of York), Haitao Mei (University of York), Fang Yan (University of York), Ruizhe Yang (Dalian University of Technology), Ibrahim Habli (University of York), Colin O'Halloran (D-RisQ Software Systems), Nick Tudor (D-RisQ Software Systems), Tim Kelly (University of York), Yakoub Nemouchi (University of York).

13:55 (12m talk): Quantum Program Testing Through Commuting Pauli Strings on IBM's Quantum Computers (Industry Showcase). Asmar Muqeet (Simula Research Laboratory and University of Oslo), Shaukat Ali (Simula Research Laboratory and Oslo Metropolitan University), Paolo Arcaini (National Institute of Informatics).

14:08 (12m talk): Toward Individual Fairness Testing with Data Validity (NIER Track). Takashi Kitamura, Sousuke Amasaki (Okayama Prefectural University), Jun Inoue (National Institute of Advanced Industrial Science and Technology, Japan), Yoshinao Isobe (AIST), Takahisa Toda (The University of Electro-Communications).

14:21 (12m talk): DroneWiS: Automated Simulation Testing of small Unmanned Aerial System in Realistic Windy Conditions (Tool Demonstrations).

14:34 (12m talk): ARUS: A Tool for Automatically Removing Unnecessary Stubbings from Test Suites (Tool Demonstrations).

14:47 (12m talk): Learning Failure-Inducing Models for Testing Software-Defined Networks (Journal-first Papers). Raphaël Ollando (University of Luxembourg), Seung Yeob Shin (University of Luxembourg), Lionel Briand (University of Ottawa, Canada; Lero centre, University of Limerick, Ireland).