Overview

Creating reliable and explainable probabilistic models is a major challenge in artificial intelligence. This course covers some of the latest advances that bring us closer to constructing such models. These advances span the areas of program synthesis/induction, programming languages, machine learning, and probabilistic programming. The material consists of three interconnected parts:

Part I: Program Synthesis/Induction
Synthesis is a new frontier in AI where the computer programs itself from user-provided examples. Synthesis has significant applications for non-programmers as well as for programmers, where it can provide massive productivity increases (e.g., data wrangling for data scientists). Modern synthesis techniques excel at learning functions over discrete spaces from (partial) intent. There have been a number of recent, exciting breakthroughs in techniques that discover complex, interpretable/explainable functions from a few examples, partial sketches, and other forms of supervision.
Topics covered:

  • Theory of program synthesis: version spaces, counter-example guided inductive synthesis (CEGIS) with SAT/SMT (a minimal sketch follows this list), synthesis from noisy examples, learning with few examples, compositional synthesis, lower bounds on learning.
  • Applications of these techniques: synthesis for end users (e.g., spreadsheets), data analytics and financial computing, interpretable machine learning models for structured data.
  • Combining neural networks and synthesis.
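
The following is a minimal, self-contained sketch of a CEGIS loop, using the Z3 Python bindings (the z3-solver package). The toy specification, the iteration bound, and all identifiers are illustrative assumptions for this overview, not material from the course itself.

    # CEGIS sketch: synthesize an integer constant c such that
    # f(x) = x + c satisfies the spec f(x) > x for all x in [0, 100].
    from z3 import Int, Solver, Not, sat

    x, c = Int('x'), Int('c')
    spec = lambda x, c: x + c > x        # desired property of f(x) = x + c

    candidate = 0                        # initial guess for c
    counterexamples = []
    for _ in range(10):                  # bound the synthesize/verify loop
        # Verification step: does some x in range violate the spec?
        verifier = Solver()
        verifier.add(x >= 0, x <= 100, Not(spec(x, candidate)))
        if verifier.check() != sat:
            print("synthesized c =", candidate)
            break
        counterexamples.append(verifier.model()[x].as_long())
        # Synthesis step: pick a c consistent with all counterexamples.
        synth = Solver()
        for cx in counterexamples:
            synth.add(spec(cx, c))
        assert synth.check() == sat
        candidate = synth.model()[c].as_long()

The two solver calls mirror the CEGIS architecture: the verifier searches for a counterexample to the current candidate, and the synthesizer repairs the candidate against everything found so far.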

Part II: Robustness of Deep Learning
Deep learning methods based on neural networks have made impressive advances in recent years. A fundamental challenge with these models is understanding what the trained network has actually learned: for example, how stable/robust the network is to slight variations of the input (e.g., an image or a video), or how easy it is to fool the network into misclassifying inputs that a human would classify correctly.
Topics covered:

  • Basics of neural networks: fully connected, convolutional networks, residual networks, activation functions.
  • Finding adversarial examples in deep learning with SMT.
  • Methods and tools to guarantee robustness of deep nets (e.g., via affine arithmetic, SMT solvers); synthesis of robustness specs (a certification sketch follows this list).
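
Below is a minimal sketch of interval-arithmetic robustness certification for a tiny fully connected ReLU network; interval arithmetic is the simplest relative of the affine-arithmetic methods listed above. The weights, epsilon, and helper names are made-up assumptions for illustration only.

    # Certify that every input within an L-infinity ball of radius eps
    # around x is classified as true_class, by propagating sound
    # per-neuron interval bounds through each layer.
    import numpy as np

    def interval_affine(lo, hi, W, b):
        # Sound bounds for W @ x + b when x lies elementwise in [lo, hi].
        W_pos, W_neg = np.maximum(W, 0), np.minimum(W, 0)
        return W_pos @ lo + W_neg @ hi + b, W_pos @ hi + W_neg @ lo + b

    def certify(x, eps, layers, true_class):
        lo, hi = x - eps, x + eps
        for i, (W, b) in enumerate(layers):
            lo, hi = interval_affine(lo, hi, W, b)
            if i < len(layers) - 1:                 # ReLU on hidden layers
                lo, hi = np.maximum(lo, 0), np.maximum(hi, 0)
        # Robust if the worst-case logit of the true class beats the
        # best-case logit of every other class.
        others = [hi[j] for j in range(len(hi)) if j != true_class]
        return lo[true_class] > max(others)

    layers = [(np.array([[1.0, -1.0], [0.5, 2.0]]), np.zeros(2)),
              (np.array([[2.0, -1.0], [-1.0, 1.0]]), np.zeros(2))]
    print(certify(np.array([1.0, 0.0]), 0.05, layers, true_class=0))

Interval bounds are cheap but can be loose; the SMT-based approaches in the bullets above trade that scalability for exactness.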

Part III: Probabilistic Programming
Probabilistic programming is an emerging direction whose goal is to democratize the construction of probabilistic models. In probabilistic programming, the user specifies the model, while inference is left to the underlying solver. The idea is that the higher level of abstraction makes it easier to express, understand, and reason about probabilistic models.
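
To make the model/inference split concrete, here is a minimal sketch in plain Python rather than in any of the frameworks below: the model is an ordinary generative function, and generic rejection sampling plays the role of the underlying solver. The coin-flip setup and all names are illustrative assumptions.

    import random

    def model():
        # Prior: unknown coin bias, uniform on [0, 1].
        bias = random.random()
        # Likelihood: simulate 10 flips of a coin with that bias.
        flips = [random.random() < bias for _ in range(10)]
        return bias, flips

    def rejection_infer(observed_heads, n=100_000):
        # Generic inference: keep prior samples whose simulated data
        # matches the observation; the model itself is never touched.
        accepted = [bias for bias, flips in (model() for _ in range(n))
                    if sum(flips) == observed_heads]
        return sum(accepted) / len(accepted)

    # Posterior mean of the bias after seeing 8 heads in 10 flips;
    # the exact Beta-posterior mean is (8 + 1) / (10 + 2) = 0.75.
    print(rejection_infer(observed_heads=8))
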
Topics covered:

  • Inference: MCMC samplers and tactics (approximate), symbolic inference (exact); a minimal sampler sketch follows this list.
  • Semantics: basic measure-theoretic semantics of probability; bridging measure theory and symbolic inference.
  • Frameworks and languages: WebPPL (MIT/Stanford), PSI (ETH), Picture/Venture (MIT), Anglican (Oxford).
  • Synthesis for probabilistic programs: this connects to Part I.
  • Applications of probabilistic programming: using the above solvers for reasoning about bias in machine learning models (connects to Part II), reasoning about computer networks, security protocols, approximate computing, cognitive models, rational agents.
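
As a companion to the inference bullet above, the following is a minimal Metropolis-Hastings sampler targeting the same coin-bias posterior as the earlier rejection-sampling sketch (Beta(9, 3), known only up to a normalizing constant). The step size, iteration count, and names are illustrative assumptions.

    import math, random

    def log_target(bias):
        # Unnormalized log density of Beta(9, 3): bias^8 * (1 - bias)^2.
        if not 0 < bias < 1:
            return -math.inf
        return 8 * math.log(bias) + 2 * math.log(1 - bias)

    def metropolis(n=50_000, step=0.1):
        x, samples = 0.5, []
        for _ in range(n):
            proposal = x + random.gauss(0, step)   # symmetric random walk
            # Accept with probability min(1, target(proposal) / target(x)).
            if math.log(random.random()) < log_target(proposal) - log_target(x):
                x = proposal
            samples.append(x)
        return samples

    samples = metropolis()
    print(sum(samples) / len(samples))   # should be close to the exact 0.75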