Evaluation of Tools and Frameworks for Machine Learning Model Serving (SE for AI)
Machine learning (ML) models are ubiquitous, as ML expands into numerous application domains. Despite this growth, the software engineering of ML systems remains complex, particularly in production environments. Serving ML models for inference is a critical part of such systems, as it provides the interface between the model and the surrounding components. Today, a variety of open-source tools and frameworks for model serving exist, promising ease of use and performance. However, they differ in terms of usability, flexibility, scalability, and overall performance. In this work, we systematically evaluate several popular model serving tools and frameworks in the context of a natural language processing scenario. In detail, we analyze their features and capabilities, conduct runtime experiments, and report on our experiences from various real-world ML projects. Our evaluation results provide valuable insights and considerations for ML engineers and other practitioners seeking effective serving environments that seamlessly integrate with the existing ML tech stack, simplifying and accelerating the process of serving ML models in production.
Wed 30 Apr (displayed time zone: Eastern Time, US & Canada)
11:00 - 12:30 | SE for AI 1 (New Ideas and Emerging Results (NIER) / SE In Practice (SEIP) / Research Track) at 215. Chair(s): Houari Sahraoui (DIRO, Université de Montréal)
11:00 (15m) Talk | A Test Oracle for Reinforcement Learning Software based on Lyapunov Stability Control Theory (SE for AI, Research Track) | Shiyu Zhang, Haoyang Song, Qixin Wang, Henghua Shen, Yu Pei (The Hong Kong Polytechnic University)
11:15 (15m) Talk | CodeImprove: Program Adaptation for Deep Code Models (SE for AI, Research Track)
11:30 (15m) Talk | FairQuant: Certifying and Quantifying Fairness of Deep Neural Networks (SE for AI, Research Track) | Brian Hyeongseok Kim, Jingbo Wang, Chao Wang (University of Southern California) | Pre-print
11:45 (15m) Talk | When in Doubt Throw It out: Building on Confident Learning for Vulnerability Detection (Security, New Ideas and Emerging Results (NIER)) | Yuanjun Gong (Renmin University of China), Fabio Massacci (University of Trento; Vrije Universiteit Amsterdam) | Pre-print
12:00 (15m) Talk | Evaluation of Tools and Frameworks for Machine Learning Model Serving (SE for AI, SE In Practice (SEIP)) | Niklas Beck (Fraunhofer Institute for Intelligent Analysis and Information Systems IAIS), Benny Stein (Fraunhofer Institute for Intelligent Analysis and Information Systems IAIS), Dennis Wegener (T-Systems International GmbH), Lennard Helmer (Fraunhofer Institute for Intelligent Analysis and Information Systems)
12:15 (15m) Talk | Real-time Adapting Routing (RAR): Improving Efficiency Through Continuous Learning in Software Powered by Layered Foundation Models (SE for AI, SE In Practice (SEIP)) | Kirill Vasilevski (Huawei Canada), Dayi Lin (Centre for Software Excellence, Huawei Canada), Ahmed E. Hassan (Queen's University) | Pre-print