conf.researchr.org
Mao Yang
Name: Mao Yang
Affiliation: Microsoft Research
Contributions
2026
HPCA
Author of BitDecoding: Unlocking Tensor Cores for Long-Context LLMs with Low-Bit KV Cache within the Main Conference track
Principles and Practice of Parallel Programming
Author of MetaAttention: A Unified and Performant Attention Framework Across Hardware Backends within the Main Conference track
2025
ESEC/FSE
Author of dl²: Detecting Communication Deadlocks in Deep Learning Jobs within the Industry Papers track
Author of Reduction Fusion for Optimized Distributed Data-Parallel Computations via Inverse Recomputation within the Ideas, Visions and Reflections track
Principles and Practice of Parallel Programming
Author of FlashFFTStencil: Bridging Fast Fourier Transforms to Memory-Efficient Stencil Computations on Tensor Core Units within the Main Conference track
Author of Jigsaw: Toward Conflict-free Vectorized Stencil Computation by Tessellating Swizzled Registers within the Main Conference track
2024
ICSE
Author of An Empirical Study on Low GPU Utilization of Deep Learning Jobs within the Research Track
Principles and Practice of Parallel Programming
Author of ConvStencil: Transform Stencil Computation to Matrix Multiplication on Tensor Cores within the Main Conference track
2023
ICSE
Author of An Empirical Study on Quality Issues of Deep Learning Platform within the SEIP (Software Engineering in Practice) track
Author of Runtime Performance Prediction for Deep Learning Models with Graph Neural Network within the SEIP (Software Engineering in Practice) track
2022
ICSE
Author of Refty: Refinement Types for Valid Deep Learning Models within the Technical Track
2021
ICSE
Author of Resource-Guided Configuration Space Reduction for Deep Learning Models within the Technical Track
2020
ESEC/FSE
Author of Estimating GPU Memory Consumption of Deep Learning Models within the Industry Papers track
Author of Enhancing the Interoperability between Deep Learning Frameworks by Model Conversion within the Industry Papers track
ICSE
Author of An Empirical Study on Program Failures of Deep Learning Jobs within the Technical Papers track