ASE 2024
Sun 27 October - Fri 1 November 2024 Sacramento, California, United States

This program is tentative and subject to change.

Tue 29 Oct 2024 17:21 - 17:36 at Camellia - Fuzzing 1

Software-defined networks (SDNs) have emerged to enable programmable networks that allow system operators to manage their systems in a flexible and efficient way. SDNs have been widely deployed in many application domains, such as data centers, the Internet of Things, and satellite communications. The main idea behind SDNs is to transfer the control of networks from localized, fixed-behavior controllers distributed over a set of network switches (in traditional networks) to a logically centralized and programmable software controller. With complex software being an integral part of SDNs, developing SDN-based systems (SDN-systems), e.g., data centers, entails interdisciplinary considerations, including software engineering.
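
To make the notion of a programmable software controller concrete, the sketch below shows a minimal controller application written against the Ryu framework (one of the controllers evaluated later). It only logs the packet-in events that switches forward to the controller; it is an illustrative sketch, not part of the approach described in this article.

# Minimal sketch of a programmable SDN controller application using Ryu.
# Illustrative only: it logs packet-in events that switches forward to the
# logically centralized controller instead of implementing a real policy.
from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import MAIN_DISPATCHER, set_ev_cls


class LoggingApp(app_manager.RyuApp):
    """Reacts to packet-in messages that switches send to the controller."""

    @set_ev_cls(ofp_event.EventOFPPacketIn, MAIN_DISPATCHER)
    def _packet_in_handler(self, ev):
        msg = ev.msg
        # The controller software, not the switch, decides how to handle the packet.
        self.logger.info("packet-in from switch %s (%d bytes)",
                         msg.datapath.id, len(msg.data))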

In the context of developing SDN-systems, software testing becomes even more important and challenging than in traditional networks, which provide static and predictable operations. In particular, even though the centralized controller in an SDN-system enables flexible and efficient services, a faulty or compromised controller can undermine the entire communication network it manages. A software controller also presents new attack surfaces that allow malicious users to manipulate the system. Furthermore, the centralized controller interacts with diverse kinds of components, such as applications and network switches, which are typically developed by different vendors. Hence, the controller is prone to receiving unexpected inputs from applications, switches, or malicious users, which may cause system failures, e.g., a communication breakdown.

To test an SDN controller, engineers first need to explore its possible input space, which is very large: a controller takes as input a stream of control messages encoded according to an SDN communication protocol (e.g., OpenFlow). Second, engineers need to understand the characteristics of the test data, i.e., the control messages, that cause system failures. However, manually inspecting failure-inducing test data is time-consuming and error-prone, and misunderstanding the causes of failures typically leads to unreliable fixes.
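
As a rough illustration of this input space, the sketch below builds a well-formed OpenFlow 1.3 message header (version, type, length, transaction id, following the OpenFlow header layout) and randomly perturbs its bytes. The seed message and the mutation strategy are assumptions made purely for illustration, not the method proposed in this article.

# Sketch: generating and mutating OpenFlow control messages at the byte level.
# The 8-byte header layout (version, type, length, xid) follows the OpenFlow
# specification; the seed message and mutation strategy are illustrative only.
import random
import struct

OFP_VERSION_1_3 = 0x04
OFPT_HELLO = 0  # message type 0 = HELLO

def make_hello(xid: int) -> bytes:
    """Build a minimal, well-formed OFPT_HELLO message (header only)."""
    length = 8  # header-only message
    return struct.pack("!BBHI", OFP_VERSION_1_3, OFPT_HELLO, length, xid)

def mutate(message: bytes, n_mutations: int = 2) -> bytes:
    """Randomly overwrite bytes to explore the controller's input space."""
    data = bytearray(message)
    for _ in range(n_mutations):
        data[random.randrange(len(data))] = random.randrange(256)
    return bytes(data)

if __name__ == "__main__":
    seed = make_hello(xid=1)
    for i in range(5):
        # Each fuzzed message would be sent to the controller under test.
        print(i, mutate(seed).hex())

Even this 8-byte header alone admits 2^64 distinct values, and realistic control messages carry much larger bodies, which is why unguided random exploration scales poorly.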

In this article, we propose FuzzSDN, a machine learning-guided fuzzing method for testing SDN-systems. In particular, FuzzSDN targets software controllers deployed in SDN-systems. FuzzSDN relies on fuzzing guided by machine learning (ML) to both (1) efficiently explore the test input space of an SDN-system’s controller and (2) learn failure-inducing models that characterize input conditions under which the system fails. The two work synergistically: the models guide test generation, and the generated tests, in turn, refine the models. A failure-inducing model is practically useful for the following reasons: (1) It facilitates the diagnosis of system failures. FuzzSDN provides engineers with an interpretable model specifying how likely failures are to occur, i.e., concrete conditions under which the system will probably fail. Such conditions are much easier to analyze than a large set of individual failures. (2) It enables engineers to validate their fixes. Engineers can fix their code and test it against the generated test data set. A failure-inducing model can also be used as a test data generator to reproduce the system failures captured in the model. Hence, engineers can better validate their fixes using an extended test data set.
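
The following is a minimal sketch of the kind of fuzz-then-learn loop described above, not FuzzSDN's actual algorithm: the numeric feature encoding of control messages, the synthetic pass/fail oracle, and the choice of a decision tree as the interpretable model are all assumptions made for illustration.

# Sketch of an ML-guided fuzzing loop: fuzz the controller, label each message
# as pass/fail, learn an interpretable failure-inducing model, and use it to
# bias the next fuzzing round. Feature encoding, oracle, and model choice are
# illustrative assumptions, not FuzzSDN's actual design.
import random
from sklearn.tree import DecisionTreeClassifier, export_text

def random_message():
    """Hypothetical feature encoding: a few numeric fields of a control message."""
    return [random.randrange(256), random.randrange(65536), random.randrange(2 ** 31)]

def controller_fails(msg):
    """Stand-in oracle. A real oracle would send the message to the controller
    under test and observe whether it fails (e.g., crashes or drops switches).
    A synthetic failure condition keeps the sketch self-contained."""
    version, length, xid = msg
    return length > 60000 and version not in (1, 4)

def guided_fuzzing(rounds=10, budget_per_round=100):
    data, labels, model = [], [], None
    for _ in range(rounds):
        candidates = [random_message() for _ in range(budget_per_round * 5)]
        if model is not None and len(model.classes_) > 1:
            # Bias this round toward messages the current model predicts as failing.
            fail_col = list(model.classes_).index(True)
            scores = model.predict_proba(candidates)[:, fail_col]
            ranked = sorted(zip(scores, candidates), key=lambda p: -p[0])
            candidates = [m for _, m in ranked[:budget_per_round]]
        else:
            candidates = candidates[:budget_per_round]
        for msg in candidates:
            data.append(msg)
            labels.append(controller_fails(msg))
        # Re-learn an interpretable failure-inducing model from all observations.
        model = DecisionTreeClassifier(max_depth=4).fit(data, labels)
    print(export_text(model, feature_names=["version", "length", "xid"]))
    return model

if __name__ == "__main__":
    guided_fuzzing()

The interpretable model at the end plays the role of a failure-inducing model in this sketch: its decision paths read as concrete input conditions that engineers can inspect, and sampling further inputs along failing paths provides additional test data for validating a fix.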

We evaluated FuzzSDN by applying it to several systems controlled by well-known open-source SDN controllers: ONOS and RYU. Our experimental results show that, compared to state-of-the-art methods, FuzzSDN generates at least 12 times more failing control messages within the same time budget, even when testing a controller that is fairly robust to fuzzing. FuzzSDN also produces accurate failure-inducing models with, on average, a precision of 98% and a recall of 86%, significantly outperforming the models inferred by the baselines.


Tue 29 Oct

Displayed time zone: Pacific Time (US & Canada)

16:30 - 17:30
16:30
12m
Talk
Magneto: A Step-Wise Approach to Exploit Vulnerabilities in Dependent Libraries via LLM-Empowered Directed Fuzzing
Research Papers
Zhuotong Zhou (Fudan University, China), Yongzhuo Yang (Fudan University), Susheng Wu (Fudan University), Yiheng Huang (Fudan University), Bihuan Chen (Fudan University), Xin Peng (Fudan University)
16:42
12m
Talk
Applying Fuzz Driver Generation to Native C/C++ Libraries of OEM Android Framework: Obstacles and Solutions
Industry Showcase
Shiyan Peng (Fudan University), Yuan Zhang (Fudan University), Jiarun Dai (Fudan University), Yue Gu (Fudan University), Zhuoxiang Shen (Fudan University), Jingcheng Liu (Fudan University), Lin Wang (Fudan University), Yong Chen (OPPO), Yu Qin (OPPO), Lei Ai (OPPO), Xianfeng Lu (OPPO), Min Yang (Fudan University)
16:54
12m
Talk
Olympia: Fuzzer Benchmarking for Solidity
Tool Demonstrations
Jana Chadt (TU Wien, Austria), Christoph Hochrainer (TU Wien), Valentin Wüstholz (ConsenSys), Maria Christakis (TU Wien)
17:06
15m
Talk
BUGOSS: A Benchmark of Real-world Regression Bugs for Empirical Investigation of Regression Fuzzing Techniques
Journal-first Papers
Jeewoong Kim (Chungbuk National University), Shin Hong (Chungbuk National University)
17:21
15m
Talk
Learning Failure-Inducing Models for Testing Software-Defined Networks
Journal-first Papers
Raphaël Ollando (University of Luxembourg), Seung Yeob Shin (University of Luxembourg), Lionel Briand (University of Ottawa, Canada; Lero centre, University of Limerick, Ireland)

Thu 31 Oct

Displayed time zone: Pacific Time (US & Canada)

13:30 - 15:00
13:30
15m
Talk
General and Practical Property-based Testing for Android Apps
Research Papers
Yiheng Xiong (East China Normal University), Ting Su (East China Normal University), Jue Wang (Nanjing University), Jingling Sun (University of Electronic Science and Technology of China), Geguang Pu (East China Normal University, China), Zhendong Su (ETH Zurich)
Pre-print
14:00
15m
Talk
ACCESS: Assurance Case Centric Engineering of Safety-critical Systems
Journal-first Papers
Ran Wei (Lancaster University), Simon Foster (University of York), Haitao Mei (University of York), Fang Yan (University of York), Ruizhe Yang (Dalian University of Technology), Ibrahim Habli (University of York), Colin O'Halloran (D-RisQ Software Systems), Nick Tudor (D-RisQ Software Systems), Tim Kelly (University of York), Yakoub Nemouchi (University of York)
14:15
15m
Talk
Quantum Program Testing Through Commuting Pauli Strings on IBM's Quantum Computers
Industry Showcase
Asmar Muqeet (Simula Research Laboratory and University of Oslo), Shaukat Ali (Simula Research Laboratory and Oslo Metropolitan University), Paolo Arcaini (National Institute of Informatics)
Pre-print
14:30
10m
Talk
Toward Individual Fairness Testing with Data Validity
NIER Track
Takashi Kitamura, Sousuke Amasaki (Okayama Prefectural University), Jun Inoue (National Institute of Advanced Industrial Science and Technology, Japan), Yoshinao Isobe (AIST), Takahisa Toda (The University of Electro-Communications)
14:40
10m
Talk
DroneWiS: Automated Simulation Testing of small Unmanned Aerial System in Realistic Windy Conditions
Tool Demonstrations
Bohan Zhang (Saint Louis University, Missouri), Ankit Agrawal (Saint Louis University, Missouri)
14:50
10m
Talk
ARUS: A Tool for Automatically Removing Unnecessary Stubbings from Test Suites
Tool Demonstrations
Mengzhen Li (University of Minnesota), Mattia Fazzini (University of Minnesota)