ECOOP and ISSTA 2021
Sun 11 - Sat 17 July 2021 Online
Sun 11 Jul 2021 21:00 - 23:00 at Rebase - Session 6 Chair(s): Aleksandar Prokopec

The goal of deep learning is to find an intensional function f, expressed as a “neural network” of weights and activation functions, that minimizes a loss function over a set of input/output pairs. This training data serves as an extensional representation of the desired function. The goal of the calculus of variations is to find a function f that minimizes a higher-order function H[f] that typically encodes the modeler’s understanding of the physical world.
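
To make the extensional view concrete, here is a minimal sketch (not from the talk) that trains a one-weight “network” f(x) = w·x on a handful of input/output pairs by gradient descent on a squared-error loss; the data, learning rate, and step count are illustrative assumptions.

```python
# Minimal sketch: training data as an extensional representation of a function.
# The "network" is just f(x) = w * x; the pairs, learning rate, and step count
# are illustrative assumptions, not anything from the talk.

pairs = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]   # samples of the desired f(x) = 2x

def loss(w):
    # squared-error loss summed over the input/output pairs
    return sum((w * x - y) ** 2 for x, y in pairs)

def dloss_dw(w):
    # gradient of the loss with respect to the single weight w
    return sum(2 * (w * x - y) * x for x, y in pairs)

w = 0.0
for _ in range(100):                 # plain gradient descent on the weight
    w -= 0.01 * dloss_dw(w)

print(w)   # converges towards 2.0, the intensional function behind the data
```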

In physics applications, H[f] = ∫L(x, f(x), f’(x))dx is defined in terms of a Lagrangian L that relates the kinetic and potential energy of the system being modelled. In that case, the function f that minimizes H[f] describes the natural evolution of the system over time. But the calculus of variations is not limited to physics: it also covers the deep-learning case by defining H[f] = ∑ᵢ loss(f(xᵢ) − yᵢ) using a standard loss function such as squared error. Variational calculus has many further applications in computer graphics, economics, and optimal control. Hence there are plenty of reasons for computer scientists to explore what is often considered one of the most beautiful branches of mathematics and, as we will show, one with many deep ties to pure functional programming.
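
As a small numerical sketch of what such a functional looks like, the code below evaluates H[f] = ∫L(x, f(x), f’(x))dx for the simplest Lagrangian L = ½f’(x)² (kinetic energy only, zero potential) on [0, 1]; the discretization and the two trial functions are illustrative assumptions, not code from the talk.

```python
# Minimal numerical sketch of a functional H[f] = ∫ L(x, f(x), f'(x)) dx.
# Here L = ½ f'(x)² on [0, 1]; discretization and trial functions are
# illustrative assumptions.

def H(f, a=0.0, b=1.0, n=1000):
    h = (b - a) / n
    total = 0.0
    for i in range(n):
        x = a + (i + 0.5) * h                       # midpoint rule
        df = (f(x + 1e-6) - f(x - 1e-6)) / 2e-6     # numerical f'(x)
        total += 0.5 * df ** 2 * h
    return total

# Among paths with f(0) = 0 and f(1) = 1, the straight line minimizes this L;
# any detour with the same endpoints costs more.
print(H(lambda x: x))        # ≈ 0.5
print(H(lambda x: x ** 2))   # ≈ 0.6667, larger as expected
```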

As Euler and Lagrange showed in 1755, to find stationary values of higher-order functions like H[f] = ∫L(x, f(x), f’(x))dx, we need to extend multivariate calculus from finite vectors to infinite vectors, or, in other words, functions. In many cases it is possible to find the minimizing f analytically, but to search for stationary values of H[f] numerically, as we do for neural nets, we must perform gradient descent in function space. To implement this efficiently, we rely on a concrete representation of the function we are searching for, such as a polynomial approximation or a neural net over a set of weights, and implement function updating by updating the weights of that representation. This closes the circle with training neural nets using first-order gradient descent.
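
The following sketch illustrates that idea under stated assumptions: the unknown f is represented by a small weight vector (a straight line plus a few sine bumps, which keeps the endpoints fixed), H[f] = ∫½f’(x)²dx is discretized, and the weights are updated by plain gradient descent with finite-difference gradients. The basis, step size, and functional are illustrative choices, not the talk’s.

```python
# Minimal sketch of gradient descent in function space: represent f by a
# finite weight vector, discretize H[f] = ∫ ½ f'(x)² dx on [0, 1], and
# update the weights by gradient descent. Basis, step size, and functional
# are illustrative assumptions.
import math

def f(w, x):
    # candidate function: straight line plus weighted sine bumps,
    # so f(0) = 0 and f(1) = 1 hold for every choice of weights
    return x + sum(wk * math.sin((k + 1) * math.pi * x) for k, wk in enumerate(w))

def H(w, n=200):
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        x = (i + 0.5) * h
        df = (f(w, x + 1e-5) - f(w, x - 1e-5)) / 2e-5   # numerical f'(x)
        total += 0.5 * df ** 2 * h
    return total

w = [0.5, -0.3, 0.2]                       # arbitrary starting "shape" of f
for _ in range(300):
    grad = []
    for k in range(len(w)):                # finite-difference gradient in weight space
        bump = [wj + (1e-6 if j == k else 0.0) for j, wj in enumerate(w)]
        grad.append((H(bump) - H(w)) / 1e-6)
    w = [wk - 0.02 * gk for wk, gk in zip(w, grad)]

print(w)   # weights shrink towards 0: gradient descent recovers the straight line
```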

In this fun & informal talk we will develop the elementary calculus of variations via nilpotent infinitesimals and show that everything we know from regular calculus on finite vectors carries over directly to variational calculus on infinite vectors/functions. We will not assume any knowledge beyond basic high-school calculus and straightforward equational reasoning.
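
For a taste of what computing with nilpotent infinitesimals looks like, here is a minimal sketch (an assumption about the flavour of the talk, not its actual development): dual numbers a + b·ε with ε² = 0, where evaluating a function at x + ε makes its derivative appear as the coefficient of ε.

```python
# Minimal sketch of a nilpotent infinitesimal: a dual number a + b·ε with
# ε² = 0. The class and the example function are illustrative.

class Dual:
    def __init__(self, a, b=0.0):
        self.a, self.b = a, b                    # represents a + b·ε

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.a + other.a, self.b + other.b)

    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # (a + b·ε)(c + d·ε) = ac + (ad + bc)·ε, since ε² = 0
        return Dual(self.a * other.a, self.a * other.b + self.b * other.a)

    __rmul__ = __mul__

def deriv(f, x):
    # read off f'(x) as the ε-coefficient of f(x + ε)
    return f(Dual(x, 1.0)).b

print(deriv(lambda x: x * x + 3 * x, 2.0))   # 7.0, i.e. d/dx (x² + 3x) at x = 2
```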

Sun 11 Jul

Displayed time zone: Brussels, Copenhagen, Madrid, Paris

21:00 - 23:00  Session 6: REBASE at Rebase
               Chair(s): Aleksandar Prokopec (Oracle Labs)
21:00  2h      Talk: Variational Calculus for Dummies (REBASE)