
Browsing by study line "Statistics"


  • Viholainen, Olga (2020)
    Poisson regression is a well-known generalized linear model that relates the expected value of a count to a linear combination of explanatory variables. Outliers severely affect the classical maximum likelihood (ML) estimator of Poisson regression. Several robust alternatives to the ML estimator have been developed, such as the conditionally unbiased bounded-influence (CU) estimator, the Mallows quasi-likelihood (MQ) estimator, and M-estimators based on transformations (MT). The purpose of the thesis is to study the robustness of these robust Poisson regression estimators under different conditions and to compare their performance with each other. The robustness of the estimators is investigated in a simulation study covering the ML, CU, MQ and MT estimators. The robust MQ and MT estimators are studied with two different weight functions, C and H, and also without a weight function. The simulation is executed in three parts: the first part handles a situation without any outliers, in the second part the outliers are in the X space, and in the third part the outliers are in the Y space. The results of the simulation show that all the robust estimators are less affected by the outliers than the classical ML estimator, but the outliers nevertheless severely weaken the results of the CU estimator and the MQ-based estimators. The MT-based estimators, especially the MT and H-MT estimators, have by far the lowest medians of the mean squared errors when the data are contaminated with outliers, and when there are no outliers in the data they compare favorably with the other estimators. The MT and H-MT estimators are therefore an excellent option for fitting the Poisson regression model.
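    The sensitivity that motivates the robust alternatives is easy to demonstrate. The sketch below (plain Python, illustrative parameter values) fits a Poisson regression by maximum likelihood via Newton's method and shows how a few Y-space outliers pull the estimate away from the truth; the CU, MQ and MT estimators themselves require specialised weight functions and are not reproduced here.

    ```python
    import math
    import random

    def rpois(lam):
        # Knuth's Poisson sampler (adequate for small rates)
        limit = math.exp(-lam)
        k, p = 0, 1.0
        while True:
            p *= random.random()
            if p <= limit:
                return k
            k += 1

    def fit_poisson_ml(xs, ys, iters=40):
        # ML fit of log E[y] = b0 + b1*x by Newton's method
        b0 = b1 = 0.0
        for _ in range(iters):
            g0 = g1 = h00 = h01 = h11 = 0.0
            for x, y in zip(xs, ys):
                mu = math.exp(b0 + b1 * x)
                g0 += y - mu          # score for the intercept
                g1 += (y - mu) * x    # score for the slope
                h00 += mu             # Fisher information terms
                h01 += mu * x
                h11 += mu * x * x
            det = h00 * h11 - h01 * h01
            b0 += (h11 * g0 - h01 * g1) / det
            b1 += (h00 * g1 - h01 * g0) / det
        return b0, b1

    random.seed(2)
    true_b0, true_b1 = 0.5, 1.0
    xs = [random.random() for _ in range(400)]
    ys = [rpois(math.exp(true_b0 + true_b1 * x)) for x in xs]
    b0_clean, _ = fit_poisson_ml(xs, ys)

    # contaminate 5 % of the responses with large Y-space outliers
    ys_out = list(ys)
    for i in random.sample(range(len(ys)), 20):
        ys_out[i] = 40
    b0_out, _ = fit_poisson_ml(xs, ys_out)
    ```

    On a run like this the contaminated intercept estimate drifts well away from the true 0.5 while the clean fit stays close, which is exactly the non-robustness of ML that the MT-type estimators address.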
  • Smith, Dianna (2024)
    Statistician C. R. Rao made many contributions to multivariate analysis over the span of his career. Some of his earliest contributions continue to be used and built upon almost eighty years later, while his more recent contributions spur new avenues of research. This thesis discusses these contributions, how they helped shape multivariate analysis as we see it today, and what we may learn from reviewing his works. Topics include his extension of linear discriminant analysis, Rao’s perimeter test, Rao’s U statistic, his asymptotic expansion of Wilks’ Λ statistic, canonical factor analysis, functional principal component analysis, redundancy analysis, canonical coordinates, and correspondence analysis. The examination of his works shows that interdisciplinary collaboration and the utilization of real datasets were crucial in almost all of Rao’s impactful contributions.
  • Jeskanen, Juuso-Markus (2021)
    Developing reliable, regulatory compliant and customer-oriented credit risk models requires thorough knowledge of the credit risk phenomenon. Tight collaboration between stakeholders is necessary, and hence models need to be accurate as well as transparent, interpretable and explainable to experts without a statistical background. In the context of credit risk, one can speak of explainable artificial intelligence (XAI); practice and market standards are therefore also underlined in this study. So far, credit risk research has mainly focused on the estimation of the probability of default parameter. However, as systems and processes have evolved to comply with regulation in the last decade, recovery data has improved, which has moved loss given default (LGD) to the heart of credit risk. In the context of LGD, most studies have emphasized estimation of one-stage models. In practice, however, market standards support a multi-stage approach that follows the institution's simplified recovery processes. Generally, multi-stage models are more transparent, have better predictive power, and comply better with regulation. This thesis presents a framework for analyzing and executing sensitivity analysis of a multi-stage LGD model. The main contribution of the study is to increase knowledge of LGD modelling by giving insights into the sensitivity of discriminatory power across risk drivers, model components and the LGD score. The study aims to answer two questions. Firstly, how sensitive is the predictive power of a multi-stage LGD model to the correlation of risk drivers and individual components? Secondly, how can the most important risk factors be identified that must be considered in multi-stage LGD modelling to achieve an adequate LGD score? The experimental part of this thesis is divided into two parts. The first presents the motivation, study design and experimental setup used to execute the study.
The second part focuses on the sensitivity analysis of risk drivers, components and the LGD score. The sensitivity analysis presented in this study gives important knowledge of the behaviour of the multi-stage LGD model and of the dependencies between independent risk drivers, components and the LGD score with regard to correlations and model performance metrics. The introduced sensitivity framework can be used to assess the need and schedule for model calibrations in response to changes in the application portfolio. In addition, the framework and results can be used to recognize the need for updates to the monthly IFRS 9 ECL calculations. The study also gives input for model stress testing, where different scenarios and impacts are analyzed with regard to changes in macroeconomic conditions. Even though the focus of this study is on credit risk, the methods presented are also applicable in fields outside the financial sector.
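The multi-stage idea can be sketched in a few lines. Below is a minimal two-component LGD score in plain Python: a cure-probability stage and a loss-severity stage whose product gives the expected loss. The risk drivers (loan-to-value, months in default, a collateral flag) and all coefficients are illustrative assumptions, not the thesis's model.

```python
import math

def cure_probability(ltv, months_in_default):
    # stage 1: hypothetical logistic model for the probability that the
    # defaulted facility cures (returns to performing); coefficients are
    # illustrative, not estimated from data
    z = 2.0 - 3.0 * ltv - 0.05 * months_in_default
    return 1.0 / (1.0 + math.exp(-z))

def loss_severity(ltv, collateralised):
    # stage 2: hypothetical severity model for non-cured facilities,
    # bounded to [0, 1]
    base = 0.25 + 0.6 * ltv - (0.2 if collateralised else 0.0)
    return min(1.0, max(0.0, base))

def lgd_score(ltv, months_in_default, collateralised):
    # multi-stage LGD: expected loss = P(no cure) * E[loss | no cure]
    p_cure = cure_probability(ltv, months_in_default)
    return (1.0 - p_cure) * loss_severity(ltv, collateralised)
```

Sensitivity analysis in this setting amounts to perturbing one risk driver at a time and recording how the component outputs and the final LGD score move.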
  • Talvensaari, Mikko (2022)
    Gaussian processes are stochastic processes that are particularly well suited to modelling data exhibiting temporal or spatial dependence. Their ease of application follows from the fact that finite subsets of the process follow a multivariate normal distribution, which is completely determined by the mean function and the covariance function of the process. The problem with a likelihood based on the multivariate normal distribution is poor scalability: inverting the covariance matrix, which is unavoidable when evaluating the likelihood, has a time complexity cubic in the size of the data. This thesis describes a representation of temporal Gaussian processes based on vector-valued Markov processes defined by systems of stochastic differential equations. The time-efficiency gain of the method rests on the Markov property of the vector process, i.e. on the fact that the future of the process depends only on the current value of a low-dimensional vector. From the vector process defined by a system of stochastic differential equations, a discrete-time linear-Gaussian state-space model is further derived whose likelihood can be evaluated in linear time. In the theoretical part of the thesis it is shown, using the spectral representation of stationary Gaussian processes, that the definitions based on systems of stochastic differential equations and on covariance functions are equivalent for certain stationary Gaussian processes. Exact state-space forms are presented for Matérn-type covariance functions and for a periodic covariance function. The theoretical part also introduces the basic operations of applying state-space models, from Kalman filtering to smoothing and prediction, together with efficient algorithms for performing these operations. In the applied part of the thesis, state-space Gaussian processes were used to model and forecast user-data throughput at the base stations of a 3G cellular network.
Following Bayesian practice, uncertainty about the model parameters was expressed by assigning prior distributions to the parameters. The 15 time series of the dataset were fitted both with a model specified for individual time series and with a multi-series model in which a posterior distribution was derived for the covariance between the time series. Over the 15 series, the five-week forecasts of the multi-series model were on average narrowly better than those of the single-series model, and the forecasts of both models were on average better than those of widely used ARIMA models.
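The linear-time likelihood evaluation can be made concrete for the simplest Matérn covariance (ν = 1/2), whose exact state-space form is the one-dimensional Ornstein–Uhlenbeck process. The plain-Python sketch below evaluates the Gaussian-process log-likelihood with a Kalman filter in O(n); parameter names are illustrative.

```python
import math

def ou_kalman_loglik(ys, dt, ell, sigma2, noise_var):
    # Matern(nu = 1/2) GP == Ornstein-Uhlenbeck process, which admits a
    # one-dimensional state-space form on an even grid with spacing dt:
    #   f_{t+1} = a f_t + w_t,  a = exp(-dt/ell),  w_t ~ N(0, sigma2*(1-a^2))
    #   y_t     = f_t + v_t,    v_t ~ N(0, noise_var)
    # The Kalman filter evaluates the GP log-likelihood in O(n) instead of
    # the O(n^3) dense-covariance route.
    a = math.exp(-dt / ell)
    q = sigma2 * (1.0 - a * a)
    m, p = 0.0, sigma2              # stationary prior on the first state
    loglik = 0.0
    for y in ys:
        s = p + noise_var           # innovation variance
        loglik += -0.5 * (math.log(2 * math.pi * s) + (y - m) ** 2 / s)
        k = p / s                   # Kalman gain
        m = m + k * (y - m)         # filtered mean
        p = p * (1 - k)             # filtered variance
        m, p = a * m, a * a * p + q # predict the next state
    return loglik
```

For higher-order Matérn or periodic covariances the state becomes a vector and the same recursion runs with matrices, which is the form derived in the thesis.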
  • Rautavirta, Juhana (2022)
    Comparison of amphetamine profiles is a task in forensic chemistry whose goal is to decide whether two samples of amphetamine originate from the same source or not. These decisions help identify and prosecute the suppliers of amphetamine, which is an illicit drug in Finland. The traditional approach to comparing amphetamine samples involves computing the Pearson correlation coefficient between two real-valued sample vectors obtained by gas chromatography-mass spectrometry analysis. A two-sample problem, such as the problem of comparing drug samples, can also be tackled with methods such as a t-test or Bayes factors. Recently, a newer method called predictive agreement (PA), which compares the posterior predictive distributions induced by two samples, has been applied to the comparison of amphetamine profiles. In this thesis, we statistically validated the use of this newer method in amphetamine profile comparison by comparing the performance of the predictive agreement method to that of the traditional method based on the Pearson correlation coefficient. Techniques such as simulation and cross-validation were used in the validation. In the simulation part, we simulated enough data to compute 10,000 PA and correlation values between sample pairs. Cross-validation was used in a case study, where a repeated 5-fold group cross-validation served to study the effect of changes in the data used to train the model. In the cross-validation, the performance of the models was measured with area under the curve (AUC) values of receiver operating characteristic (ROC) and precision-recall (PR) curves. For the validation, two separate datasets collected by the National Bureau of Investigation of Finland (NBI) were available. One dataset was a larger collection of amphetamine samples, whereas the other was a more curated group of samples, of which we also know which samples are somehow linked to each other.
On top of these datasets, we simulated data representing amphetamine samples that were either from different sources or from the same source. The results showed that with the simulated data, predictive agreement outperformed the traditional method at distinguishing sample pairs consisting of samples from different sources from sample pairs consisting of samples from the same source. The case study showed that changes in the training data have quite a marginal effect on the performance of the predictive agreement method, and also that with real-world data the PA method outperformed the traditional method in terms of AUC-ROC and AUC-PR values. Additionally, we concluded that the PA method has the benefit of interpretability: the PA value between two samples can be interpreted as the probability of these samples originating from the same source.
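The traditional score and the validation metric are both simple to state. The plain-Python sketch below computes the Pearson correlation between two profile vectors and the AUC-ROC used to compare the methods; the PA value itself requires the full posterior predictive machinery and is not reproduced here.

```python
import math

def pearson(u, v):
    # traditional comparison score: correlation between two
    # impurity-profile vectors from GC-MS analysis
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    su = math.sqrt(sum((x - mu) ** 2 for x in u))
    sv = math.sqrt(sum((y - mv) ** 2 for y in v))
    return sum((x - mu) * (y - mv) for x, y in zip(u, v)) / (su * sv)

def auc_roc(scores_linked, scores_unlinked):
    # probability that a linked (same-source) pair scores above an
    # unlinked pair -- the AUC used to compare PA with correlation
    wins = ties = 0
    for a in scores_linked:
        for b in scores_unlinked:
            if a > b:
                wins += 1
            elif a == b:
                ties += 1
    return (wins + 0.5 * ties) / (len(scores_linked) * len(scores_unlinked))
```

An AUC of 1.0 means the score separates linked from unlinked pairs perfectly; 0.5 means it is no better than chance.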
  • Tan, Shu Zhen (2021)
    In practice, outlying observations are not uncommon in many study domains. Without knowing the factors underlying the outliers, it is appealing to eliminate them from the dataset. However, unless there is scientific justification, outlier elimination amounts to alteration of the data; instead, heavy-tailed distributions should be adopted to model the larger-than-expected variability in an overdispersed dataset. The Poisson distribution is the standard model for variation in count data, but the empirical variability in observed datasets is often larger than the amount the Poisson allows. This leads to unreliable inferences when estimating the true effect sizes of covariates in regression modelling, and the Negative Binomial distribution is therefore often adopted as an alternative for overdispersed datasets. Nevertheless, it has been proven that neither the Poisson nor the Negative Binomial observation distribution is robust against outliers, in the sense that outliers have a non-negligible influence on the estimation of covariate effect sizes. On the other hand, the scale mixture of quasi-Poisson distributions (the robust quasi-Poisson model), which is constructed analogously to the Student's t-distribution, is a heavy-tailed alternative to the Poisson and is proven to be robust against outliers. The thesis presents theoretical evidence on the robustness of the three aforementioned models in a Bayesian framework. Lastly, the thesis considers two simulation experiments with different kinds of outlier source, process error and covariate measurement error, to compare the robustness of the Poisson, Negative Binomial and robust quasi-Poisson regression models in the Bayesian framework. Model robustness was assessed, in terms of the ability to infer the covariate effect size correctly, across different combinations of error probability and error variability.
The robust quasi-Poisson regression model proved more robust than its counterparts in both experiments, its breakdown point being higher than the others'.
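The overdispersion that motivates the heavy-tailed alternatives is easy to demonstrate: the Negative Binomial arises as a gamma-Poisson mixture, and its sample variance exceeds its mean, while a plain Poisson sample's variance matches its mean. The sketch below uses only the Python standard library; the robust quasi-Poisson model's own scale-mixture construction is more involved and is not reproduced here.

```python
import math
import random
import statistics

def rpois(lam):
    # Knuth's Poisson sampler (adequate for small rates)
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

def rnegbin(mean, dispersion):
    # Negative Binomial as a gamma-Poisson mixture: the gamma-distributed
    # rate inflates the variance from `mean` to mean + mean**2/dispersion
    lam = random.gammavariate(dispersion, mean / dispersion)
    return rpois(lam)

random.seed(7)
pois = [rpois(5.0) for _ in range(5000)]
nb = [rnegbin(5.0, 2.0) for _ in range(5000)]
```

For the Poisson draws the sample variance sits near the mean (both about 5); for the mixture the theoretical variance is 5 + 25/2 = 17.5, the kind of gap that makes a plain Poisson regression understate uncertainty.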
  • Kari, Daniel (2020)
    Estimating the effect of random chance (’luck’) has long been a question of particular interest in various team sports. In this thesis, we aim to determine the role of luck in a single ice hockey game by building a model that predicts the outcome from the course of events in a game; the obtained prediction accuracy should also, to some extent, reveal the effect of random chance. Using the course of events from over 10,000 games, we train feedforward and convolutional neural networks to predict the outcome and the final goal differential, which has been proposed as a more informative proxy for the outcome. Interestingly, we are not able to obtain distinctly higher accuracy than previous studies, which focused on predicting the outcome with information available before the game. The results suggest that there might exist an upper bound on prediction accuracy even if we knew ’everything’ that went on in a game. This further implies that random chance could affect the outcome of a game, although assessing this is difficult, as we do not have a good quantitative metric for luck in the case of single ice hockey game prediction.
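    The accuracy ceiling described above can be miniaturised. The sketch below (plain Python, synthetic data) trains a logistic model, a minimal stand-in for the thesis's feedforward networks, on noisy "in-game" features; because the labels carry irreducible noise, even training accuracy stays well below 100 %. The feature construction and all parameters are illustrative assumptions, not the thesis's event encoding.

    ```python
    import math
    import random

    def train_logistic(rows, labels, lr=0.1, epochs=200):
        # per-sample gradient descent on the log-loss of a logistic model
        w = [0.0] * len(rows[0])
        b = 0.0
        for _ in range(epochs):
            for x, y in zip(rows, labels):
                z = sum(wi * xi for wi, xi in zip(w, x)) + b
                p = 1.0 / (1.0 + math.exp(-z))
                g = p - y                          # gradient of the log-loss
                w = [wi - lr * g * xi for wi, xi in zip(w, x)]
                b -= lr * g
        return w, b

    def predict(w, b, x):
        z = sum(wi * xi for wi, xi in zip(w, x)) + b
        return 1.0 / (1.0 + math.exp(-z))

    # synthetic "games": two event-count features drive the outcome,
    # plus label noise standing in for luck
    random.seed(3)
    rows = [[random.gauss(0, 1), random.gauss(0, 1)] for _ in range(300)]
    labels = [1 if 1.5 * x[0] + 0.5 * x[1] + random.gauss(0, 1) > 0 else 0
              for x in rows]

    w, b = train_logistic(rows, labels)
    acc = sum((predict(w, b, x) > 0.5) == (y == 1)
              for x, y in zip(rows, labels)) / len(rows)
    ```

    However much the model is trained, the injected noise bounds the attainable accuracy, which is the qualitative point the thesis makes about knowing ’everything’ that happened in a game.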