Inverse problems involving partial differential equations (PDEs) are of great practical importance in science and engineering. Such problems are generally ill-posed, and various regularization approaches have been developed to mitigate this difficulty. Among them is the Bayesian approach, in which a prior probability measure is placed on the quantity of interest. The resulting posterior probability measure is usually analytically intractable, so approximation techniques must be used. Markov chain Monte Carlo (MCMC) methods have been the standard choice for sampling from such posterior measures. Although asymptotically exact, these methods are computationally infeasible for the large-scale problems that often arise in engineering practice. Variational Bayes (VB) was proposed as a more computationally tractable approach to Bayesian inference; it approximates the posterior distribution with a simpler variational distribution by solving an optimization problem. In this work, we argue, through an extensive empirical assessment, that VB methods are a more flexible, faster, and more scalable alternative to MCMC for this class of problems. We propose a natural family of variational distributions parametrized by precision matrices, thereby exploiting the sparse structure encoded in the discretization of the problem. We use stochastic optimization to efficiently estimate the variational objective, and we assess not only the expected error in the mean solution but also the ability to quantify the uncertainty of the estimate. We test these methods on problems based on the Poisson equation in 1D and 2D.
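To make the approach described above concrete, the following is a minimal sketch of VB for a linear-Gaussian inverse problem based on the 1D Poisson equation: a source term is inferred from noisy observations of the solution, the variational family is Gaussian with a precision-based parametrization (restricted here to a diagonal precision for brevity, rather than the sparse precision matrices of the full method), and the variational objective is estimated stochastically via reparameterized samples. All specifics (grid size, noise level, prior precision, optimizer settings) are illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- Forward problem: 1D Poisson, -u'' = f on (0,1), u(0)=u(1)=0, finite differences ---
n = 20
h = 1.0 / (n + 1)
A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2  # tridiagonal stiffness matrix
G = np.linalg.inv(A)  # forward map f -> u (dense for brevity; sparse solves in practice)

# --- Synthetic data: infer the source f from noisy observations of u ---
x = np.linspace(h, 1.0 - h, n)
f_true = np.sin(np.pi * x)
sigma = 0.01                                   # observation noise std (assumed)
y = G @ f_true + sigma * rng.normal(size=n)

# --- Gaussian prior on f; mean-field Gaussian variational family q = N(m, diag(exp(-s))) ---
tau0 = 1.0                                     # prior precision (assumed)
m = np.zeros(n)                                # variational mean
s = np.zeros(n)                                # variational log-precisions

# Adam optimizer state (a common robust choice for stochastic VB objectives)
lr, b1, b2, eps_a = 0.05, 0.9, 0.999, 1e-8
mom = np.zeros(2 * n)
vel = np.zeros(2 * n)

for t in range(1, 4001):
    eps = rng.normal(size=n)
    z = m + eps * np.exp(-0.5 * s)             # reparameterized sample from q
    # Stochastic gradient of the negative ELBO:
    g_z = G.T @ ((G @ z - y) / sigma**2) + tau0 * z   # -grad_z log[p(y|z) p(z)]
    g_m = g_z
    g_s = -0.5 * eps * np.exp(-0.5 * s) * g_z + 0.5   # +1/2 per coord from the entropy of q
    g = np.concatenate([g_m, g_s])
    mom = b1 * mom + (1 - b1) * g
    vel = b2 * vel + (1 - b2) * g**2
    step = lr * (mom / (1 - b1**t)) / (np.sqrt(vel / (1 - b2**t)) + eps_a)
    m -= step[:n]
    s -= step[n:]

# Exact Gaussian posterior of this linear problem, for reference
Q_post = G.T @ G / sigma**2 + tau0 * np.eye(n)
m_post = np.linalg.solve(Q_post, G.T @ y / sigma**2)
print("relative error of VB mean vs exact posterior mean:",
      np.linalg.norm(m - m_post) / np.linalg.norm(m_post))
```

Because this toy problem is linear-Gaussian, the exact posterior is available for comparison; the same reparameterization machinery carries over unchanged to nonlinear forward maps, where only `g_z` changes. In the full method, the diagonal log-precision `s` would be replaced by the nonzero entries of a sparse precision factor matching the sparsity pattern of the discretized PDE operator.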