
Browsing by Subject "Bayesian statistics"


  • Nevala, Aapeli (2020)
    Thanks to modern medical advances, humans have developed tools for detecting diseases so early, that a patient would be better off had the disease gone undetected. This is called overdiagnosis. Overdiagnosisisaproblemespeciallycommoninacts,wherethetargetpopulationofanintervention consists of mostly healthy people. Colorectal cancer (CRC) is a relatively rare disease. Thus screening for CRC affects mostly cancerfree population. In this thesis I evaluate overdiagnosis in guaiac faecal occult blood test (gFOBT) based CRC screening programme. In gFOBT CRC screening there are two goals: to detect known predecessors of cancers called adenomas and to remove them (cancer prevention), and to detect malign CRCs early enough to be still treatable (early detection). Overdiagnosis can happen when detecting adenomas, but also when detecting cancers. This thesis focuses on overdiagnosis due to detection of adenomas that are non-progressive in their nature. Since there is no clinical means to make distinction between progressive and non-progressive adenomas, statistical methods must be applied. Classical methods to estimate overdiagnosis fail in quantifying this type of overdiagnosis for couple of reasons: incidence data of adenomas is not available, and adenoma removal results in lowering cancer incidence in screened population. While the latter is a desired effect of screening, it makes it impossible to estimate overdiagnosis by just comparing cancer incidences among screened and control populations. In this thesis a Bayesian Hidden Markov model using HMC NUTS algorithm via software Stan is fitted to simulate the natural progression of colorectal cancer. The five states included in the model were healthy (1), progressive adenoma (2), screen-detectable CRC (3), clinically apparent CRC (4) and non-progressive adenoma (5). Possible transitions are from 1 to 2, 1 to 5, 2 to 3 and 3 to 4. 
The possible observations are screen-negative (1), detected adenoma (2), screen-detected CRC (3) and clinically manifested CRC (4). Three relevant estimands for evaluating this type of overdiagnosis with a natural history model are presented. The methods are then applied to estimate the overdiagnosis proportion in the gFOBT-based CRC screening programme conducted in Finland between 2004 and 2016. The resulting mean overdiagnosis probability for all patients in the programme who had an adenoma detected is 0.48 (95% credible interval: 0.38, 0.56). Separate overdiagnosis estimates for sex- and age-specific strata of the screened population are also provided. In addition to these findings, the natural history model can be used to gain more insight into the natural progression of colorectal cancer.
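The state space and allowed transitions described above fully determine the skeleton of the natural-history Markov chain. As a minimal sketch, the chain can be simulated forward in time; the yearly transition probabilities below are invented for illustration and are not the estimates from the thesis.

```python
import numpy as np

# States (1-based, as in the abstract): 1 healthy, 2 progressive adenoma,
# 3 screen-detectable CRC, 4 clinically apparent CRC, 5 non-progressive adenoma.
# Allowed transitions: 1->2, 1->5, 2->3, 3->4; all other mass stays put.
# The probabilities below are made-up illustrative values.
P = np.zeros((5, 5))
P[0, 1] = 0.01   # healthy -> progressive adenoma
P[0, 4] = 0.02   # healthy -> non-progressive adenoma
P[1, 2] = 0.05   # progressive adenoma -> screen-detectable CRC
P[2, 3] = 0.20   # screen-detectable CRC -> clinically apparent CRC
P[np.arange(5), np.arange(5)] = 1.0 - P.sum(axis=1)  # remain in current state

def simulate(years, rng):
    """Simulate one person's natural history for `years` yearly steps."""
    state = 0  # start healthy (0-based index for state 1)
    path = [state + 1]
    for _ in range(years):
        state = rng.choice(5, p=P[state])
        path.append(state + 1)
    return path

rng = np.random.default_rng(0)
path = simulate(40, rng)
```

States 4 and 5 are absorbing in this sketch, matching the transition list: once a CRC is clinically apparent, or an adenoma is non-progressive, no further transitions are specified.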
  • Mäkelä, Noora (2022)
    Sum-product networks (SPNs) are graphical models capable of handling large amounts of multi-dimensional data. Unlike many other graphical models, SPNs are tractable if certain structural requirements are fulfilled; a model is called tractable if probabilistic inference can be performed in polynomial time with respect to the size of the model. The learning of SPNs can be separated into two modes, parameter learning and structure learning. Many earlier approaches to SPN learning have treated the two modes as separate, but it has been found that good results can be achieved by alternating between them. One example of this kind of algorithm was presented by Trapp et al. in the article Bayesian Learning of Sum-Product Networks (NeurIPS, 2019). This thesis discusses SPNs and a Bayesian learning algorithm developed based on the aforementioned algorithm, differing in some of the methods used. The algorithm by Trapp et al. uses Gibbs sampling in the parameter learning phase, whereas here Metropolis-Hastings MCMC is used. The algorithm developed for this thesis was used in two experiments, one with a small and simple SPN and one with a larger and more complex SPN. The effects of the data set size and the complexity of the data were also explored, and the results were compared to those obtained by running the original algorithm by Trapp et al. The results show that having more data in the learning phase makes the results more accurate, as it is easier for the model to spot patterns in a larger data set. It was also shown that the model was able to learn the parameters in the experiments if the data were simple enough, in other words, if each dimension of the data contained only one distribution. With more complex data, where there were multiple distributions per dimension, the computation visibly struggled.
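The tractability claim above comes from the fact that evaluating an SPN is a single bottom-up pass: leaves evaluate their univariate densities, product nodes multiply, sum nodes take weighted averages, so the cost is linear in the number of edges. A toy two-dimensional SPN (structure and weights invented for illustration, not taken from the thesis) makes this concrete.

```python
import math

def gauss_pdf(x, mu, sigma):
    """Univariate Gaussian density, used as the leaf distribution."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

# Root: a sum node with two children; each child is a product node over
# one Gaussian leaf per dimension. (weight, [(mu, sigma) per dimension]).
# These numbers are made up for illustration.
components = [
    (0.6, [(0.0, 1.0), (0.0, 1.0)]),
    (0.4, [(3.0, 1.0), (3.0, 1.0)]),
]

def spn_density(x):
    """Evaluate the SPN density at point x = (x1, x2) in one bottom-up pass.

    Product node: product of its leaves' densities.
    Sum node: weighted sum over its children.
    """
    total = 0.0
    for weight, leaves in components:
        prod = 1.0
        for xi, (mu, sigma) in zip(x, leaves):
            prod *= gauss_pdf(xi, mu, sigma)
        total += weight * prod
    return total
```

With sum-node weights that are non-negative and normalised, and children of each sum node over the same variables (completeness) and children of each product node over disjoint variables (decomposability), the same pass also yields exact marginals by setting a leaf's value to 1 for marginalised variables.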
  • Tan, Shu Zhen (2021)
    In practice, outlying observations are not uncommon in many study domains. Without knowing the underlying factors behind the outliers, it is appealing to eliminate them from the datasets. However, unless there is scientific justification, outlier elimination amounts to alteration of the datasets. Instead, heavy-tailed distributions should be adopted to model the larger-than-expected variability in an overdispersed dataset. The Poisson distribution is the standard model for variation in count data. However, the empirical variability in observed datasets is often larger than the amount expected under the Poisson. This leads to unreliable inferences when estimating the true effect sizes of covariates in regression modelling. The Negative Binomial distribution is therefore often adopted as an alternative for overdispersed datasets. Nevertheless, it has been proven that both the Poisson and Negative Binomial observation distributions are not robust against outliers, in the sense that outliers have a non-negligible influence on the estimation of covariate effect sizes. On the other hand, the scale mixture of quasi-Poisson distributions (called the robust quasi-Poisson model), which is constructed similarly to the Student's t-distribution, is a heavy-tailed alternative to the Poisson and is proven to be robust against outliers. The thesis presents theoretical evidence for the robustness of the three aforementioned models in a Bayesian framework. Lastly, the thesis considers two simulation experiments with different kinds of outlier sources (process error and covariate measurement error) to compare the robustness of the Poisson, Negative Binomial and robust quasi-Poisson regression models in the Bayesian framework. Model robustness was assessed, in terms of the model's ability to infer the covariate effect size correctly, across different combinations of error probability and error variability. It was shown that the robust quasi-Poisson regression model was more robust than its counterparts, because its breakdown point was relatively higher than the others' in both experiments.
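The scale-mixture construction mentioned above can be sketched generatively: just as a Student's t arises from a normal whose variance is drawn from an inverse-gamma mixing distribution, a heavy-tailed count model arises by giving each observation its own latent scale. The mixing distribution and parameters (`nu`, `mu`) below are assumptions made for illustration, not the thesis's parameterisation.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_heavy_tailed_counts(mu, n, nu=5.0):
    """Draw n overdispersed counts via a scale mixture.

    scales ~ inverse-gamma, obtained as 1 / Gamma(nu/2, rate nu/2)
    (the same mixing device that turns a normal into a Student's t);
    counts ~ Poisson(mu * scale) given each latent scale.
    """
    scales = 1.0 / rng.gamma(shape=nu / 2.0, scale=2.0 / nu, size=n)
    return rng.poisson(mu * scales)

heavy = sample_heavy_tailed_counts(mu=5.0, n=10_000)
plain = rng.poisson(5.0, size=10_000)  # reference: equidispersed Poisson
```

The mixture's sample variance far exceeds its mean, while the plain Poisson sample has variance roughly equal to its mean; intuitively, an extreme count can be explained by a large latent scale rather than dragging the regression coefficients, which is the source of the robustness discussed above.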