Neural networks are powerful function approximators that can be trained efficiently using backpropagation and gradient descent. Despite being capable of memorizing complex patterns, they demonstrate a non-trivial degree of generalization. However, the classic maximum likelihood approach to training neural networks imposes numerous limitations. An alternative approach, Bayesian inference, overcomes some of these limitations while also enabling additional applications. Unfortunately, exact inference in neural network models is intractable, so we have to rely on approximate inference methods. Current approximate inference methods either make naive assumptions about the form of the posterior or are too computationally expensive to be practical. In this project we investigate several approaches to making scalable approximate inference methods such as Stochastic Variational Inference (SVI) more accurate. Specifically, we look into reparameterizations and auxiliary variable methods, which allow for more expressive variational families without sacrificing computational efficiency.
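
To make the reparameterization idea concrete, below is a minimal sketch (not part of the proposed methods, purely illustrative) of SVI with the reparameterization trick in plain NumPy. A Gaussian variational posterior q(w) = N(mu, sigma^2) over a single scalar weight is fit to a toy target posterior, here chosen to be a standard normal so the optimum is known; samples are written as w = mu + sigma * eps with eps ~ N(0, 1), so that gradients flow through the sampling step.

```python
import numpy as np

rng = np.random.default_rng(0)

def log_p(w):
    # Unnormalized log target density: a standard normal,
    # chosen purely for illustration (its posterior is known).
    return -0.5 * w ** 2

mu, log_sigma = -1.0, 0.0    # variational parameters of q(w) = N(mu, sigma^2)
lr, n_samples = 0.05, 64     # step size and Monte Carlo sample count

for step in range(500):
    eps = rng.standard_normal(n_samples)
    sigma = np.exp(log_sigma)
    w = mu + sigma * eps     # reparameterization: w = mu + sigma * eps

    # Pathwise (reparameterized) gradient of the ELBO
    # ELBO = E_q[log p(w)] + H[q]; dH/dlog_sigma = 1 for a Gaussian.
    dlogp_dw = -w            # derivative of log_p at the samples
    grad_mu = np.mean(dlogp_dw)
    grad_log_sigma = np.mean(dlogp_dw * eps * sigma) + 1.0

    mu += lr * grad_mu       # ascend the ELBO
    log_sigma += lr * grad_log_sigma

# q should approach the target N(0, 1): mu near 0, sigma near 1.
```

The same pattern scales to full networks by reparameterizing every weight; more expressive variational families (e.g. via auxiliary variables) replace the simple Gaussian q while keeping this low-variance pathwise gradient estimator.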