Experimental Comparison of Features and Classifiers for Android Malware Detection (Technical Papers)
The Android platform has dominated the smartphone market for years and, consequently, has gained a lot of attention from attackers. Malicious apps (malware) pose a serious threat to the security and privacy of Android smartphone users. Available approaches to detect mobile malware with machine learning rely on features extracted using static analysis or dynamic analysis techniques. Different types of conventional machine learning classifiers (such as support vector machines and random forests) and deep learning classifiers (based on deep neural networks) are then trained on the extracted features to produce models that can detect mobile malware. Commonly analyzed features include requested/used permissions, the frequency of API calls, the use of API calls, and the sequence of API calls. API calls are analyzed at various levels of granularity, such as method, class, package, and family.
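To make the feature-based setup concrete, the following is a minimal sketch (not the authors' pipeline) of training two conventional classifiers on a binary "use of API calls" feature matrix; the randomly generated matrix and labels are hypothetical placeholders for features extracted from real apps.

```python
# Sketch: conventional classifiers on binary API-call-usage features.
# The data here is a random placeholder, not an extracted dataset.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# One row per app, one column per API class; 1 if the app uses that API class.
X = rng.integers(0, 2, size=(1000, 300))   # placeholder feature matrix
y = rng.integers(0, 2, size=1000)          # placeholder labels: 1 = malware, 0 = benign

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

for name, clf in [("SVM", SVC(kernel="linear")),
                  ("Random forest", RandomForestClassifier(n_estimators=100, random_state=0))]:
    clf.fit(X_train, y_train)
    print(name, "F1:", f1_score(y_test, clf.predict(X_test)))
```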
In view of the variety of proposed classifiers, the different types of features, and the different underlying analyses used for feature extraction, there is a need for a comprehensive evaluation of the effectiveness of the current state of the art in malware detection on a common benchmark. This paper evaluates several conventional machine learning classifiers and deep learning classifiers, together with different types of features that characterize the use of API calls at the class level and the sequence of API calls at the method level. Features have been extracted from a common benchmark of 4572 benign samples and 2399 malware samples, using both static analysis and dynamic analysis.
Among other interesting findings, we observed that classifiers trained on the use of API calls generally perform better than those trained on the sequence of API calls. Classifiers trained on static analysis-based features perform better than those trained on dynamic analysis-based features. Deep learning classifiers, despite their sophistication, are not necessarily better than conventional classifiers, especially when they are not optimized. However, deep learning classifiers do perform better than conventional classifiers when trained on dynamic analysis-based features.
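As an illustration of the last observation, the sketch below compares a conventional classifier with a small, untuned neural network on the same (placeholder) feature matrix; it uses scikit-learn's MLPClassifier as a stand-in for a deep neural network and is only an assumed setup, not the paper's experimental configuration.

```python
# Sketch: conventional classifier vs. an untuned neural network on identical features.
# Data and hyperparameters are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
X = rng.integers(0, 2, size=(1000, 300))   # placeholder API-call-usage features
y = rng.integers(0, 2, size=1000)          # placeholder labels

rf = RandomForestClassifier(n_estimators=100, random_state=0)
mlp = MLPClassifier(hidden_layer_sizes=(128, 64), max_iter=200, random_state=0)  # untuned

print("Random forest F1:", cross_val_score(rf, X, y, cv=5, scoring="f1").mean())
print("Untuned MLP F1:  ", cross_val_score(mlp, X, y, cv=5, scoring="f1").mean())
```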
Tue 14 Jul. Displayed time zone: (UTC) Coordinated Universal Time.
07:00 - 08:30 | Empirical Software Engineering | Paper Presentations / Technical Papers at MobileSoft. Chair(s): Henry Muccini (University of L'Aquila, Italy). Virtualization chair: Ferdian Thung
07:00 (15m) | Leave my Apps Alone! A Study on how Android Developers Access Installed Apps on User's Device (Best Paper Award, Technical Papers). Gian Luca Scoccia (University of L'Aquila), Ibrahim Kanj, Ivano Malavolta (Vrije Universiteit Amsterdam), Kaveh Razavi (ETH Zürich)
07:15 (15m) | Experimental Comparison of Features and Classifiers for Android Malware Detection (Technical Papers). Lwin Khin Shar (Singapore Management University), Biniam Fisseha Demissie (Fondazione Bruno Kessler), Mariano Ceccato (University of Verona), Wei Minn (Singapore Management University)
07:30 (15m) | Empirical Study on Code Smells in iOS Applications (Technical Papers)
07:45 (15m) | Q&A - Empirical Software Engineering Paper Presentations
08:00 (30m) | Discussion with Authors / Attendees (Paper Presentations)