Analysis and Tool-Support for Scalable and Reliable Imperative Deep Learning Programs
We aim to address a critical research problem regarding the improvement of methodologies and tool support for the comprehensive analysis and seamless transformation of imperative Deep Learning (DL) programs. DL frameworks have traditionally embraced deferred execution-style DL code. While scalable, such development tends to produce code that is error-prone. More natural, less error-prone imperative DL frameworks encouraging eager execution have emerged, but at the expense of run-time performance. Though hybrid approaches aim for the "best of both worlds," using them effectively requires subtle considerations to make code amenable to safe, accurate, and efficient graph execution, avoiding performance bottlenecks and semantically inequivalent results. Our proposed research aims to bridge this gap by comprehensively investigating scalable and reliable imperative DL programming, focusing on the development of novel methodologies and advanced tool-support mechanisms. We present initial work analyzing the challenges of migrating DL programs to graph execution, along with our progress toward automated refactoring of imperative DL programs to graph execution.
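To make the hybrid-execution pitfall concrete, here is a minimal pure-Python sketch (not any framework's actual API; `Sym`, `graph_compile`, and `double` are hypothetical names). Tracing-based hybrid approaches run the imperative Python body once to record a computation graph, then replay only the graph; any Python side effect in the body therefore executes at trace time only, one way eager code and its graph-executed counterpart can become semantically inequivalent.

```python
class Sym:
    """Symbolic placeholder that records arithmetic into a tiny 'graph'."""
    def __init__(self, fn):
        self.fn = fn  # function from input value to output value

    def __mul__(self, other):
        # Record the multiplication instead of performing it eagerly.
        return Sym(lambda x: self.fn(x) * other)

def graph_compile(fn):
    """Trace fn once with a symbolic input; replay the captured graph."""
    sym_out = fn(Sym(lambda x: x))  # Python body runs only here
    return lambda x: sym_out.fn(x)

log = []

def double(x):
    log.append("python body ran")  # imperative side effect
    return x * 2

compiled = graph_compile(double)
compiled(3)  # returns 6; the side effect already ran during tracing
compiled(4)  # returns 8; the Python body is NOT re-executed
# In eager execution, `log` would grow on every call; under this
# trace-and-replay scheme it grows exactly once, at trace time.
```

A sketch like this mirrors the behavior refactoring tools must reason about: whether an imperative function is safe to hand to a tracing decorator without changing its observable semantics.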
- Ph.D. student at The Graduate School and University Center of the City University of New York (CUNY).
- Member of the PONDER lab at Hunter College.
Session: Mon 11 Sep, 13:30 - 15:00 (Amsterdam/Berlin/Bern/Rome/Stockholm/Vienna time zone)
- Deferring Partial Analysis Execution for Soundness. Anemone Kampkötter (TU Dortmund)
- Improve the Performance of Large Language Models on Code Generation. Jinhao Dong (Peking University)
- Analysis and Tool-Support for Scalable and Reliable Imperative Deep Learning Programs. Tatiana Castro Vélez (City University of New York (CUNY) Graduate Center)