Reduction Fusion for Optimized Distributed Data-Parallel Computations via Inverse Recomputation
Distributed data-parallel computations are critical for both traditional big data applications and emerging large language model tasks. The efficiency of these computations largely depends on reducer performance, particularly in handling extensive data access. This paper introduces a novel reduction fusion algorithm that optimizes distributed data-parallel programs by fusing dependent reducers and mappers into a single, unified reducer. Employing inverse recomputation, the algorithm preserves partial aggregation and reduces storage, network I/O, memory, and cache overheads. Our preliminary evaluation reveals performance improvements of up to 2.47×, demonstrating the practicality and effectiveness of this approach, while also highlighting its potential to address challenges posed by extensive data access in modern distributed computing environments.
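The abstract describes fusing a reducer, a downstream mapper, and a subsequent reducer into one unified reducer while still preserving partial aggregation. As a rough illustration of the inverse-recomputation idea (not the paper's actual algorithm), the sketch below stores only the mapped value per key and applies a hypothetical inverse function `f_inv` to recover the raw partial aggregate whenever a new record must be folded in; all names (`fused_reduce`, `f`, `f_inv`, `combine`) are illustrative assumptions.

```python
# Hypothetical sketch of reduction fusion via inverse recomputation.
# The function names and structure are illustrative, not from the paper.

def fused_reduce(records, f, f_inv, combine):
    """Fuse reducer -> mapper(f) into a single pass over the input.

    Rather than keeping the raw per-key aggregate and applying the
    mapper f at the end, we keep f(aggregate) per key and use the
    inverse f_inv to recompute the raw aggregate on demand, so partial
    aggregation still works inside one fused reducer.
    """
    state = {}  # key -> f(partial aggregate)
    for key, value in records:
        if key not in state:
            state[key] = f(value)
        else:
            partial = f_inv(state[key])        # inverse recomputation
            state[key] = f(combine(partial, value))
    return state

# Toy usage: per-key sums, with a mapper that doubles each sum.
records = [("a", 1), ("b", 2), ("a", 3)]
out = fused_reduce(
    records,
    f=lambda s: 2 * s,
    f_inv=lambda y: y // 2,
    combine=lambda a, b: a + b,
)
# out == {"a": 8, "b": 4}
```

The sketch assumes the mapper is invertible; the paper's contribution presumably lies in making this transformation systematic and cheap enough that the fused reducer reduces storage, network I/O, memory, and cache overheads in practice.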
Wed 25 Jun · Time zone: Amsterdam, Berlin, Bern, Rome, Stockholm, Vienna
11:00 - 12:30 | SE and AI 2 (Ideas, Visions and Reflections / Research Papers) at Cosmos Hall. Chair(s): Massimiliano Di Penta (University of Sannio, Italy)
11:00 (20m) Talk | Beyond PEFT: Layer-Wise Optimization for More Effective and Efficient Large Code Model Tuning (Research Papers). Chaozheng Wang (The Chinese University of Hong Kong), jiafeng (University of Electronic Science and Technology of China), Shuzheng Gao (Chinese University of Hong Kong), Cuiyun Gao (Harbin Institute of Technology, Shenzhen), Li Zongjie (Hong Kong University of Science and Technology), Ting Peng (Tencent Inc.), Hailiang Huang (Tencent Inc.), Yuetang Deng (Tencent), Michael Lyu (Chinese University of Hong Kong)
11:20 (20m) Talk | Automated Trustworthiness Oracle Generation for Machine Learning Text Classifiers (Research Papers). Lam Nguyen Tung (Monash University, Australia), Steven Cho (The University of Auckland, New Zealand), Xiaoning Du (Monash University), Neelofar Neelofar (Royal Melbourne Institute of Technology (RMIT)), Valerio Terragni (University of Auckland), Stefano Ruberto (JRC European Commission), Aldeida Aleti (Monash University)
11:40 (20m) Talk | A Causal Learning Framework for Enhancing Robustness of Source Code Models (Research Papers). Junyao Ye (Huazhong University of Science and Technology), Zhen Li (Huazhong University of Science and Technology), Xi Tang (Huazhong University of Science and Technology), Deqing Zou (Huazhong University of Science and Technology), Shouhuai Xu (University of Colorado Colorado Springs), Qiang Weizhong (Huazhong University of Science and Technology), Hai Jin (Huazhong University of Science and Technology)
12:00 (20m) Talk | Eliminating Backdoors in Neural Code Models for Secure Code Understanding (Research Papers). Weisong Sun (Nanjing University), Yuchen Chen (Nanjing University), Chunrong Fang (Nanjing University), Yebo Feng (Nanyang Technological University), Yuan Xiao (Nanjing University), An Guo (Nanjing University), Quanjun Zhang (School of Computer Science and Engineering, Nanjing University of Science and Technology), Zhenyu Chen (Nanjing University), Baowen Xu (Nanjing University), Yang Liu (Nanyang Technological University)
12:20 (10m) Talk | Reduction Fusion for Optimized Distributed Data-Parallel Computations via Inverse Recomputation (Ideas, Visions and Reflections). Haoxiang Lin (Microsoft Research), Yang Wang (Microsoft Research Asia), Yanjie Gao (Microsoft Research), Hongyu Zhang (Chongqing University), Ming Wu (Zero Gravity Labs), Mao Yang (Microsoft Research)
This is the main event hall of the Clarion Hotel, used to host keynote talks and other plenary sessions. The FSE and ISSTA banquets will also take place in this room.
The room is just in front of the registration desk, on the other side of the main conference area. The large doors with numbers “1” and “2” provide access to the Cosmos Hall.