Machine learning systems with privacy and for privacy: TensorFlow & PATE-G
Machine learning is powered by training data. In this talk, we discuss the privacy of training data and how to protect it. We describe one recent technique for this purpose, PATE-G, in which several models, each trained on a different subset of the training data, are combined into one model that does not depend “too much” on any particular piece of the training data.
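To make the aggregation step behind this idea concrete, here is a minimal sketch (not code from the talk) of PATE-style noisy vote aggregation: each "teacher" model, trained on its own partition of the sensitive data, votes for a label, Laplace noise is added to the vote counts, and the noisy plurality label is used to train a "student" model. The function and parameter names (`noisy_aggregate`, `laplace_scale`) are illustrative choices, not names from the paper.

```python
import numpy as np

def noisy_aggregate(teacher_votes, num_classes, laplace_scale=1.0, rng=None):
    """PATE-style aggregation for one query: count the teachers' votes,
    add Laplace noise to each count, and return the noisy plurality label."""
    rng = rng if rng is not None else np.random.default_rng()
    counts = np.bincount(teacher_votes, minlength=num_classes).astype(np.float64)
    counts += rng.laplace(loc=0.0, scale=laplace_scale, size=num_classes)
    return int(np.argmax(counts))

# Example: five teachers (each trained on its own data partition) label one input.
votes = np.array([0, 0, 1, 0, 2])
student_label = noisy_aggregate(votes, num_classes=3)
print(student_label)  # most likely 0; the noise limits any one teacher's influence
```

In the "-G" variant described in the paper, the student is trained from a small number of such noisy labels with the help of generative, semi-supervised techniques, so it never accesses the sensitive training data directly.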
Machine learning is enabled by software systems. These systems should efficiently support both established techniques (e.g., stochastic gradient descent) and newer ones (e.g., adversarial networks). In this talk, we focus on TensorFlow, a flexible, programmable system for large-scale machine learning.
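As a point of reference for the graph-style TensorFlow API of that period (roughly the 1.x releases), the following is a minimal, self-contained sketch of stochastic gradient descent on a toy linear model; it is illustrative only and not code from the talk.

```python
import numpy as np
import tensorflow as tf  # assumes the 1.x graph-style API

# Toy data: y = 3x + 1 with a little noise.
xs = np.random.rand(256, 1).astype(np.float32)
ys = 3.0 * xs + 1.0 + 0.05 * np.random.randn(256, 1).astype(np.float32)

# Model: a single linear layer, expressed as a dataflow graph.
x = tf.placeholder(tf.float32, shape=[None, 1])
y = tf.placeholder(tf.float32, shape=[None, 1])
w = tf.Variable(tf.zeros([1, 1]))
b = tf.Variable(tf.zeros([1]))
pred = tf.matmul(x, w) + b

# Mean squared error, minimized with plain stochastic gradient descent.
loss = tf.reduce_mean(tf.square(pred - y))
train_op = tf.train.GradientDescentOptimizer(learning_rate=0.1).minimize(loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for step in range(500):
        # Sample a mini-batch for each SGD step.
        idx = np.random.choice(len(xs), size=32)
        sess.run(train_op, feed_dict={x: xs[idx], y: ys[idx]})
    print(sess.run([w, b]))  # should approach [[3.0]] and [1.0]
```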
TensorFlow and PATE-G go well together. In particular, PATE-G is not tied to one particular learning algorithm. Conversely, TensorFlow makes it easy to express the techniques on which PATE-G relies.
The talk is based on joint work with many people, primarily in Google Brain. More information on TensorFlow can be found at tensorflow.org. PATE-G is described in a paper available at https://arxiv.org/abs/1610.05755.
Tue 20 Jun (times shown in the Amsterdam/Berlin/Bern/Rome/Stockholm/Vienna time zone)
13:50 - 15:30 (Curry On Talks)

13:50 (40m, Talk): Machine learning systems with privacy and for privacy: TensorFlow & PATE-G. Martín Abadi, Google.
14:40 (40m, Talk): Compiled Machine Learning: Accelerated Linear Algebra (XLA) for TensorFlow. Peter Hawkins, Google.