Automatic Checkpointing and Deterministic Training for Deep Learning
Deterministic execution and replay are essential to the process of training a machine learning model. Nondeterminism in deep learning training can undermine productivity, model performance, robustness, and auditing. Even with a fixed random seed, multiple runs of the same training algorithm may yield models whose performance varies by 20%. With existing checkpointing support, developers cannot faithfully replay an interrupted training process. As a result, debugging becomes difficult and results may not be reproducible.
In this paper, we propose DETrain, a comprehensive solution for deterministic execution and replay of long-running machine learning training programs. We introduce a novel random number generation mechanism that produces consistent random numbers in the presence of data parallelism. In addition, we design a language to model the randomness in machine learning programs, and use a type system to produce effective checkpoints from which training can be replayed. DETrain is evaluated on 16 PyTorch models and 19 TensorFlow models. It deterministically executes these programs and replays from its checkpoints with reasonable overhead on these real-world models.
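The abstract notes that existing checkpoints cannot be faithfully replayed, in part because they omit state such as the random number generator. The general idea of capturing RNG state in a checkpoint can be illustrated with Python's standard `random` module. This is a minimal sketch of the principle, not DETrain's mechanism; a real training framework would also need to capture GPU, NumPy, and framework-level RNG states:

```python
import random

def draw(n):
    """Stand-in for a training step that consumes randomness."""
    return [random.random() for _ in range(n)]

random.seed(1234)
draw(3)                            # training proceeds for some steps...

ckpt_rng_state = random.getstate() # a faithful checkpoint must capture RNG state
after = draw(3)                    # ...training continues past the checkpoint

random.setstate(ckpt_rng_state)    # replay: restore RNG state from the checkpoint
replayed = draw(3)

# The replayed run reproduces the original continuation exactly.
assert replayed == after
```

Without saving and restoring the RNG state, the resumed run would draw a different random sequence and diverge from the original, which is one reason naive checkpoint/restart is not a faithful replay.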
Mon 16 May (displayed time zone: Eastern Time, US & Canada)
09:30 - 11:00
|An Empirical Evaluation of Flow Based Programming in the Machine Learning Deployment Context (Research Paper)
Andrei Paleyes, Christian Cabrera, and Neil D. Lawrence (Department of Computer Science and Technology, University of Cambridge)
|Automatic Checkpointing and Deterministic Training for Deep Learning (Research Paper)
|Influence-Driven Data Poisoning in Graph-Based Semi-Supervised Classifiers (Research Paper)
|Engineering a Platform for Reinforcement Learning Workloads (Industry Talk)
|Method Cards for Prescriptive Machine-Learning Transparency (Research Paper)
|Discussion on Training & Learning