Probabilistic graphical models are a versatile tool for statistical inference with complex models. The main impediment to their use, especially with more elaborate models, is the heavy computational cost they incur. Developing approximations that make graphical models usable in various tasks with fewer computational resources is therefore an important area of research. In this thesis, we test one such recently proposed family of approximations, called quasi-pseudolikelihood (QPL).
Graphical models come in two main variants: directed models and undirected models, the latter also known as Markov networks or Markov random fields. Here we focus solely on the undirected case with continuous-valued variables. The specific inference task the QPL approximations target is model structure learning, i.e., learning the dependence structure of the model from data.
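To fix what "dependence structure" means here, a common way to write a pairwise undirected model over variables $x = (x_1, \dots, x_p)$ is (this particular parametrisation is an illustrative assumption, not necessarily the exact model class treated in the thesis)

\[
p(x \mid \theta) \;=\; \frac{1}{Z(\theta)} \exp\!\Big( \sum_{i} \phi_i(x_i) \;+\; \sum_{i < j} \theta_{ij}\, \psi_{ij}(x_i, x_j) \Big),
\]

where $Z(\theta)$ is the normalising constant. Structure learning then amounts to identifying which interaction parameters $\theta_{ij}$ are non-zero, since each non-zero $\theta_{ij}$ corresponds to an edge $\{i, j\}$ of the undirected graph.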
In the theoretical part of the thesis, we define the basic concepts that underpin the use of graphical models and derive the general QPL approximation. As a novel contribution, we show that one member of the QPL approximation family is not consistent in general: for this QPL version, there exists a model for which the learned dependence structure does not converge to the true structure even asymptotically, i.e., as the sample size grows.
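The computationally intractable part of the model above is the normalising constant $Z(\theta)$, which is why pseudolikelihood-type approximations replace the joint likelihood with a product of full conditional distributions. As background for the QPL derivation (the exact QPL objective derived in the thesis is not restated in this abstract), the classical pseudolikelihood of Besag is

\[
\mathrm{PL}(\theta \mid x) \;=\; \prod_{i=1}^{p} p(x_i \mid x_{-i}, \theta),
\]

where $x_{-i}$ denotes all variables other than $x_i$; each conditional depends only on the neighbours of node $i$ in the graph, so no global normalisation is needed.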
In the empirical part of the thesis, we test two members of the QPL family on simulated data. We generate datasets from Ising models and Sherrington-Kirkpatrick models and attempt to recover their dependence structures using the QPL approximations. As a reference method, we use the well-established graphical lasso (Glasso).
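For readers unfamiliar with this kind of experimental pipeline, the sketch below illustrates one possible run: data are drawn from a sparse Ising model with Gibbs sampling and the edges are estimated with scikit-learn's GraphicalLasso as the Glasso baseline. This is a minimal sketch under our own simplifying assumptions (the coupling scheme, Gibbs sampler settings, and parameter values are hypothetical), not the code used for the experiments reported in the thesis.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)

def sparse_ising_couplings(p, edge_prob, strength, rng):
    """Symmetric coupling matrix of a sparse Ising model: each pair (i, j)
    receives a coupling of +/- `strength` with probability `edge_prob`."""
    J = np.zeros((p, p))
    for i in range(p):
        for j in range(i + 1, p):
            if rng.random() < edge_prob:
                J[i, j] = J[j, i] = strength * rng.choice([-1.0, 1.0])
    return J

def gibbs_sample(J, n_samples, rng, burn_in=500, thin=5):
    """Draw +/-1 spin configurations from p(x) proportional to
    exp(sum_{i<j} J_ij x_i x_j) using single-site Gibbs updates."""
    p = J.shape[0]
    x = rng.choice([-1.0, 1.0], size=p)
    samples = []
    for sweep in range(burn_in + n_samples * thin):
        for i in range(p):
            # Conditional probability of spin i being +1 given the rest
            field = J[i] @ x            # diagonal of J is zero
            prob_up = 1.0 / (1.0 + np.exp(-2.0 * field))
            x[i] = 1.0 if rng.random() < prob_up else -1.0
        if sweep >= burn_in and (sweep - burn_in) % thin == 0:
            samples.append(x.copy())
    return np.array(samples)

p, n = 15, 1000
J = sparse_ising_couplings(p, edge_prob=0.2, strength=0.3, rng=rng)
X = gibbs_sample(J, n, rng)

# Reference method: graphical lasso on the sample covariance; non-zero
# off-diagonal entries of the estimated precision matrix are predicted edges.
glasso = GraphicalLasso(alpha=0.05, max_iter=200).fit(X)
pred = (np.abs(glasso.precision_) > 1e-6) & ~np.eye(p, dtype=bool)
true = J != 0

tp = int(np.sum(pred & true) // 2)
fp = int(np.sum(pred & ~true) // 2)
fn = int(np.sum(~pred & true) // 2)
print(f"true edges: {int(true.sum() // 2)}, predicted: {int(pred.sum() // 2)}, "
      f"TP: {tp}, FP: {fp}, FN: {fn}")
```

In the actual experiments, the same generated data would also be fed to the QPL-based structure learning methods, and the recovered edge sets of all methods would be compared against the known generating structure.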
Based on our results, the tested QPL approximations work well with relatively sparse dependence structures, while more densely connected models, especially those with weaker interaction strengths, present challenges that call for further research.