Probabilistic Programming Inference via Intensional Semantics
In recent years, probabilistic programs have been used extensively to encode statistical models, and various tools have been developed to perform Bayesian inference on such programs, given some observed data. Formally, probabilistic programs can be thought of as denoting probability distributions that are, in general, hard to sample from.
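As a minimal illustration (the model, names, and trace representation here are ours, not taken from the talk), a probabilistic program can be viewed as a function from a trace of sampled values to the log of an unnormalized posterior density:

```python
import math

def normal_logpdf(x, mu, sigma):
    # Log density of Normal(mu, sigma) at x.
    return -0.5 * ((x - mu) / sigma) ** 2 - math.log(sigma * math.sqrt(2.0 * math.pi))

def model(trace):
    """A toy probabilistic program: latent mu ~ Normal(0, 1),
    with one observation y = 2.5 drawn from Normal(mu, 1).
    Returns the log of the unnormalized posterior density of `trace`."""
    mu = trace["mu"]                        # value filled in by the sampler
    logp = normal_logpdf(mu, 0.0, 1.0)      # prior on the latent variable
    logp += normal_logpdf(2.5, mu, 1.0)     # conditioning on the observed data
    return logp
```

Sampling from the distribution this program denotes means drawing traces with probability proportional to `exp(model(trace))`, which is exactly the problem inference algorithms target.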
The area of research known as probabilistic programming sets out to solve such inference problems via approximate sampling algorithms operating on programs. A standard approach, known as Trace Metropolis-Hastings, involves defining a Markov chain on the space of execution traces of a program, whose stationary distribution coincides with the probability distribution encoded by the program.
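A single-site variant of Trace Metropolis-Hastings can be sketched as follows; this is an illustrative implementation over named traces with a toy one-variable posterior, not the algorithm as presented in the talk:

```python
import math
import random

def normal_logpdf(x, mu, sigma):
    return -0.5 * ((x - mu) / sigma) ** 2 - math.log(sigma * math.sqrt(2.0 * math.pi))

def log_density(trace):
    # Toy model: mu ~ Normal(0, 1); one observation y = 2.5 from Normal(mu, 1).
    mu = trace["mu"]
    return normal_logpdf(mu, 0.0, 1.0) + normal_logpdf(2.5, mu, 1.0)

def trace_mh(log_density, init, n_iters, step=0.5, rng=random):
    """Single-site trace MH: each iteration perturbs one named random
    choice and accepts or rejects the resulting whole-trace proposal,
    so the chain's stationary distribution matches the program's."""
    trace = dict(init)
    logp = log_density(trace)
    samples = []
    for _ in range(n_iters):
        name = rng.choice(list(trace))            # pick one sampled value
        proposal = dict(trace)
        proposal[name] = trace[name] + rng.gauss(0.0, step)  # symmetric proposal
        logp_new = log_density(proposal)
        if math.log(rng.random()) < logp_new - logp:         # MH accept test
            trace, logp = proposal, logp_new
        samples.append(trace["mu"])
    return samples
```

Note that `log_density(proposal)` re-runs the whole program on every iteration; the cost of that full re-execution is what motivates the dependency-aware trace representation discussed next.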
Naturally, the accuracy of the algorithm improves with the number of iterations, each of which produces a new program trace. At each iteration, the method updates one of the sampled values in the current trace and investigates how this change affects the rest of the execution. It is therefore desirable to avoid a naive, linear representation of execution traces; instead we suggest a representation that highlights the control and data flow of the program. With this approach one can avoid recalculating the parts of the program that are unaffected by the changed value.
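The idea of skipping unaffected parts of the execution can be sketched on a plain dependency graph (a hypothetical stand-in for the structured trace representation, not the construction from the talk): each statement records which sampled values it reads, and only the statements reachable from a changed value need re-evaluation.

```python
# Each node lists the nodes whose values it reads: c depends on a; d on a and b.
deps = {"a": [], "b": [], "c": ["a"], "d": ["a", "b"]}

def affected(changed, deps):
    """Nodes whose value may change when `changed` does: the
    reflexive-transitive successors of `changed` in the dependency graph."""
    out = {changed}
    grew = True
    while grew:
        grew = False
        for node, reads in deps.items():
            if node not in out and any(r in out for r in reads):
                out.add(node)
                grew = True
    return out
```

Updating the sampled value at `b`, for instance, requires re-evaluating only `b` and `d`; the computations at `a` and `c` can be reused from the current trace.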
In this talk, we present a semantics of first-order probabilistic programs based on event structures, which computes probabilistic dependencies between program instructions: the causality information in event structures is used to represent probabilistic dependency. This information allows us to replace traces with partial orders in a principled way. Our model is compositional, which makes it easy to compare with existing semantics for probabilistic programs and convenient for proving the correctness of an inference algorithm based on it.
Sun 7 Apr
11:00 - 11:30
11:30 - 12:00