Sunday, February 12, 2017

Bayesian Deep Learning and Black Box Variational Inference

Stanford University
Department of Statistics

Speaker: Rajesh Ranganath, Princeton University

Friday, February 24, 2017 - 3:30pm to 4:30pm

Title: Bayesian Deep Learning and Black Box Variational Inference

Abstract:
Scientists and scholars across many fields seek to answer questions in their respective disciplines using large data sets. One approach to answering such questions is to use probabilistic generative models. Generative models help scientists express domain knowledge, uncover hidden structure, and form predictions. In this talk, I present my work on making generative modeling more expressive and easier to use. First, I present a multi-layer probabilistic model called deep exponential families (DEFs). DEFs uncover coarse-to-fine hidden structure, and they can be used as components of larger models to solve applied problems, such as recommendation systems or medical diagnosis.
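As a concrete picture of what a DEF can look like, the sketch below draws one document from a two-layer model with gamma hidden layers and Poisson observations. The layer sizes, the shape parameter, and the gamma/Poisson pairing are illustrative assumptions chosen for this sketch; the DEF framework admits any exponential-family choices for the layers and the likelihood.

import numpy as np

rng = np.random.default_rng(0)

K2, K1, V = 5, 25, 100                    # top-layer units, bottom-layer units, vocabulary size
W1 = rng.gamma(0.1, 1.0, size=(K2, K1))   # weights linking layer 2 to layer 1
W0 = rng.gamma(0.1, 1.0, size=(K1, V))    # weights linking layer 1 to the observations

def sample_document(alpha=0.1):
    """Draw one document: coarse structure at the top, refined structure below."""
    z2 = rng.gamma(alpha, 1.0, size=K2)        # top layer: coarse topics
    z1 = rng.gamma(alpha, (z2 @ W1) / alpha)   # bottom layer: mean is z2 @ W1
    x = rng.poisson(z1 @ W0)                   # observed word counts
    return x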
Though expressive, DEFs come with an analytical challenge: scientists need to compute the hidden structure given the observed data, i.e., perform posterior inference. Classical inference methods for DEFs are tedious and impractical for non-experts. Thus, in the second part of the talk, I will describe my work on black box variational inference (BBVI). BBVI is an optimization-based algorithm for approximating the posterior. It expands the reach of variational inference to new models, improves the fidelity of the approximation, and allows for new types of variational inference. We study BBVI in the context of DEFs to fit complex models of text, user behavior, and medical records. Black box variational methods make probabilistic generative models and Bayesian deep learning more accessible to the broader scientific community.
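To make the "black box" idea concrete, the sketch below runs BBVI with the score-function (REINFORCE) gradient of the ELBO on a toy conjugate model whose exact posterior is known. The only model-specific piece is log_joint; the rest works for any model whose log joint can be evaluated. The toy model, the Gaussian variational family, the step size, and the sample count are assumptions made for illustration; practical BBVI pairs this estimator with variance-reduction techniques such as Rao-Blackwellization and control variates.

import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(2.0, 1.0, size=20)   # synthetic observations

def log_joint(z):
    """log p(x, z): standard normal prior on z, unit-variance Gaussian likelihood."""
    log_prior = -0.5 * z ** 2
    log_lik = -0.5 * np.sum((x[None, :] - z[:, None]) ** 2, axis=1)
    return log_prior + log_lik

def bbvi(num_steps=2000, num_samples=64, lr=0.01):
    """Maximize the ELBO over q(z) = N(mu, sigma^2) by stochastic gradient ascent."""
    mu, log_sigma = 0.0, 0.0
    for _ in range(num_steps):
        sigma = np.exp(log_sigma)
        z = rng.normal(mu, sigma, size=num_samples)   # samples from q
        log_q = -0.5 * ((z - mu) / sigma) ** 2 - log_sigma - 0.5 * np.log(2.0 * np.pi)
        f = log_joint(z) - log_q                      # instantaneous ELBO term
        score_mu = (z - mu) / sigma ** 2              # d/dmu log q
        score_ls = ((z - mu) / sigma) ** 2 - 1.0      # d/dlog_sigma log q
        mu += lr * np.mean(score_mu * f)              # noisy gradient steps
        log_sigma += lr * np.mean(score_ls * f)
    return mu, np.exp(log_sigma)

print(bbvi())   # approaches the exact posterior: mean sum(x)/21, std sqrt(1/21)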
