The objective of this work is to generalize the pseudolikelihood-based inference method from ordinary Markov networks to an extension of the model that contains context-specific independencies: the labelled graphical model. Probabilistic graphical models, such as Markov networks and Bayesian networks, are used to represent the dependence structure of multivariate probability distributions, and machine learning methodology can then be used to learn these dependence structures from sample data. The Markov network is a model that assigns no directionality to interactions between variables: the probability distribution is represented by an undirected graph, where nodes correspond to variables and edges to direct interactions. A labelled graphical model extends this idea by assigning labels to edges to represent contexts, i.e., outcomes of other variables in the distribution, in which the variables joined by the edge are conditionally independent.
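As a minimal illustration, in notation assumed here rather than taken from the thesis (conventions vary; $K$ stands for some set of other variables, and $\mathcal{L}_{\{i,j\}}$ for the label attached to edge $\{i,j\}$), the intended reading of a label is

\[
  x_K \in \mathcal{L}_{\{i,j\}} \quad \Longrightarrow \quad X_i \perp X_j \mid X_K = x_K ,
\]

that is, in every context listed in the label the interaction along the edge is switched off, even though $i$ and $j$ remain adjacent in the graph.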
Bayesian inference can be used to learn the dependence structure of a set of variables from data. The standard procedure is to consider the posterior probability of a model given the data and to maximize this score, which requires explicitly calculating the marginal likelihood of the model. For Markov networks, and consequently for labelled models, this cannot be done analytically, and approximation methods must be used. Pseudolikelihood is one such method: it allows the so-called marginal pseudolikelihood, which replaces the actual marginal likelihood of a model, to be calculated analytically, and it yields the computationally very advantageous property of a node-wise factorizable score function.
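As a hedged sketch of the idea, again in illustrative notation rather than that of the thesis: for variables $X_1, \dots, X_d$, the pseudolikelihood replaces the joint likelihood by a product of full conditional distributions,

\[
  \hat{p}(\mathbf{x}) \;=\; \prod_{i=1}^{d} p\bigl(x_i \mid x_{V \setminus \{i\}}\bigr),
\]

and the corresponding marginal pseudolikelihood of a structure $G$ factorizes over the nodes given their Markov blankets $mb(i)$,

\[
  \hat{p}(\mathbf{X} \mid G) \;=\; \prod_{i=1}^{d} p\bigl(\mathbf{X}_i \mid \mathbf{X}_{mb(i)}\bigr),
\]

so that a local change to the structure only requires re-evaluating the affected node-wise factors.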
This thesis presents the general theory behind labelled graphical models and the basics of Bayesian inference. The pseudolikelihood approximation is introduced and applied to labelled models, and the consistency of the resulting score is proved. Lastly, a greedy hill-climbing algorithm is used to demonstrate the inference in practice on a synthetic and a real-data example.
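For concreteness, the following is a minimal sketch of a generic greedy hill climb over model structures; the `neighbours` and `score` callables are hypothetical placeholders (e.g. single edge or label changes scored by the marginal pseudolikelihood), not the thesis's actual implementation.

```python
def hill_climb(initial_model, neighbours, score):
    """Greedy hill climb: repeatedly move to a better-scoring neighbour
    until no candidate move improves the score (a local optimum)."""
    current, current_score = initial_model, score(initial_model)
    improved = True
    while improved:
        improved = False
        for candidate in neighbours(current):
            candidate_score = score(candidate)
            if candidate_score > current_score:
                current, current_score = candidate, candidate_score
                improved = True
                break  # first-improvement policy; best-improvement is also common
    return current, current_score
```

Because the marginal pseudolikelihood factorizes node-wise, such a search only needs to rescore the nodes touched by each candidate move, which is what makes the approach computationally attractive.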