Browsing by study line "Tilastotiede"

  • Vuoristo, Varpu (2021)
    Party support between elections is measured with surveys; in everyday speech these opinion polls are known as "gallups". This thesis reviews the history of political opinion polling and takes a brief look at the current state of polling in Finland. The thesis uses data collected by survey research, in which respondents were asked about their voting behaviour in the following elections: the 2012 municipal elections, the 2015 parliamentary elections and the 2017 municipal elections. The thesis presents the framing of the survey questions, the steps taken to clean the data, and the grounds for deciding which information is needed for fitting a statistical model. The theory section introduces generalized linear models. As the method, a generalized linear model is fitted to selected and cleaned subsets of the original data. These subsets contain the respondents' voting behaviour across eight parliamentary parties, and, for fitting the statistical model, the respondents' gender and place of residence according to the NUTS 2 regional classification. Gender and the five regions serve as explanatory variables in the model, while party support is the response variable. The data processing was carried out with R. The results tabulate the effect of the explanatory variables on voting for the party under consideration, both as independent predictors and through their interactions. Each of the eight parties is examined separately in all three election datasets. The analysis relies on maximum likelihood estimates and their confidence intervals.
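    As a quick illustration of the modelling setup above, here is a minimal sketch of such a binomial GLM in Python with statsmodels (the thesis itself uses R); the data, effect sizes and column names are simulated stand-ins, not the survey data.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 2000
    df = pd.DataFrame({
        "sex": rng.choice(["F", "M"], n),
        "region": rng.choice(["FI19", "FI1B", "FI1C", "FI1D", "FI20"], n),  # NUTS 2 codes
    })
    # Hypothetical data-generating process for voting for one party.
    eta = -1.0 + 0.3 * (df["sex"] == "F") + 0.2 * (df["region"] == "FI1B")
    df["voted_party"] = rng.binomial(1, 1 / (1 + np.exp(-eta)))

    # Binomial GLM: sex, region and their interaction explain party choice.
    fit = smf.logit("voted_party ~ sex * region", data=df).fit()
    print(fit.params)      # maximum likelihood estimates
    print(fit.conf_int())  # 95% confidence intervals
    ```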
  • Mattila, Mari (2023)
    Statistics Finland wanted to improve its statistics on heating-oil use and heating methods in single-family houses. The energy performance certificate register maintained by the Housing Finance and Development Centre of Finland was seen as one possible data source for this development work. When the register data were examined, it was noticed that larger and newer buildings than average are selected into the register. This selectivity was expected to become the central problem in using the register. The research question became whether the heating-method and floor-area information of oil-heated single-family houses in the energy performance certificate register can be generalized to the whole building stock. Because selectivity is one manifestation of missingness, the theoretical part of the study focuses on missing-data theory. The missingness mechanism is one of its central concepts: the mechanism can be missing completely at random, missing at random, or missing not at random, and it determines which statistical methods are suitable for modelling the data. In this study the missingness was assumed to be not at random. In that case missingness is usually treated as a random phenomenon: a statistical model is formulated for the data and for a missingness indicator, and likelihood inference can be applied to the model. The Heckman selection model was chosen for this study. The model is intended for situations where the data are selected on the basis of the phenomenon under study; for example, oil consumption can only be estimated from data on households that heat with oil. The Heckman model can account for the fact that oil consumption is missing for houses that do not heat with oil. Once the Heckman model had been estimated, its adequacy was assessed by cross-validation, in which persistence of oil heating was predicted. The model predicted only about 58% of the cases correctly. This success rate was considered too low for the model to be useful at Statistics Finland for correcting energy consumption data. One possible reason for the failure of the modelling is that switching away from oil heating happens within a long time window. The effect of the model's predictors on the response may vary over time, but the model did not take time into account: all variables describing the household-dwelling unit were averaged. The model equation may also have been wrong in the sense that it may have lacked important household-level predictors that were simply not available in the register data.
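    A minimal sketch of the two-step Heckman estimator on simulated data may clarify the idea; the variables here (a selection covariate z, an outcome covariate x) are hypothetical stand-ins for the register variables.

    ```python
    import numpy as np
    import statsmodels.api as sm
    from scipy.stats import norm

    rng = np.random.default_rng(1)
    n = 5000
    z = rng.normal(size=n)                       # selection covariate
    x = rng.normal(size=n)                       # outcome covariate
    u, e = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], n).T

    selected = (0.5 + 1.0 * z + u) > 0           # e.g. the house appears in the register
    y = 2.0 + 1.5 * x + e                        # e.g. oil use; observed only if selected

    # Step 1: probit model for selection.
    probit = sm.Probit(selected.astype(int), sm.add_constant(z)).fit(disp=0)
    xb = probit.fittedvalues                     # linear predictor
    mills = norm.pdf(xb) / norm.cdf(xb)          # inverse Mills ratio

    # Step 2: OLS on the selected cases, with the Mills ratio as an extra
    # regressor to correct for the non-random selection.
    X = sm.add_constant(np.column_stack([x[selected], mills[selected]]))
    ols = sm.OLS(y[selected], X).fit()
    print(ols.params)  # intercept, effect of x, selection-correction term
    ```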
  • Halme, Topi (2021)
    In a quickest detection problem, the objective is to detect abrupt changes in a stochastic sequence as quickly as possible, while limiting the rate of false alarms. The development of algorithms that, after each observation, decide either to stop and declare a change as having happened or to continue the monitoring process has been an active line of research in mathematical statistics. The algorithms seek to optimally balance the inherent trade-off between the average detection delay in declaring a change and the likelihood of declaring a change prematurely. Change-point detection methods have applications in numerous domains, including monitoring the environment or the radio spectrum, target detection, financial markets, and others. Classical quickest detection theory focuses on settings where only a single data stream is observed. In modern applications facilitated by the development of sensing technology, one may be tasked with monitoring multiple streams of data for changes simultaneously. Wireless sensor networks and mobile phones are examples of technology where devices can sense their local environment and transmit data sequentially to a common fusion center (FC) or cloud for inference. When performing quickest detection tasks on multiple data streams in parallel, the classical tools of quickest detection theory, which focus on controlling the false alarm probability, may become insufficient. Instead, controlling the false discovery rate (FDR) has recently been proposed as a more useful and scalable error criterion. The FDR is the expected proportion of false discoveries (false alarms) among all discoveries. In this thesis, novel methods and theory related to quickest detection in multiple parallel data streams are presented. The methods aim to minimize detection delay while controlling the FDR. In addition, scenarios are considered where not all of the devices communicating with the FC can remain operational and transmitting at all times. The FC must choose which subset of data streams it wants to receive observations from at a given time instant. Intelligently choosing which devices to turn on and off may extend the devices' battery life, which can be important in real-life applications, while affecting the detection performance only slightly. The performance of the proposed methods is demonstrated in numerical simulations to be superior to existing approaches. Additionally, the topic of multiple hypothesis testing in spatial domains is briefly addressed. In a multiple hypothesis testing problem, one tests multiple null hypotheses at once while trying to control a suitable error criterion, such as the FDR. In a spatial multiple hypothesis problem, each tested hypothesis corresponds to, e.g., a geographical location, and the non-null hypotheses may appear in spatially localized clusters. It is demonstrated that implementing a Bayesian approach that accounts for the spatial dependency between the hypotheses can greatly improve testing accuracy.
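    The basic sequential statistic behind such detectors can be sketched briefly. Below is a toy Python example running one CUSUM detector per stream in parallel; the thesis's FDR-controlling procedures are more involved, and the threshold and Gaussian models here are arbitrary assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n_streams, T, change_at = 20, 500, 250
    # Pre-change N(0,1) in every stream; after t=250, streams 0-4 shift to mean 1.
    data = rng.normal(size=(n_streams, T))
    data[:5, change_at:] += 1.0

    mu0, mu1, var = 0.0, 1.0, 1.0
    # Log-likelihood ratio of each observation, post- vs pre-change model.
    llr = (data - mu0) ** 2 / (2 * var) - (data - mu1) ** 2 / (2 * var)

    threshold, alarms = 10.0, {}
    stat = np.zeros(n_streams)
    for t in range(T):
        stat = np.maximum(0.0, stat + llr[:, t])   # CUSUM recursion
        for k in np.flatnonzero(stat > threshold):
            alarms.setdefault(k, t)                # first alarm time per stream
    print(alarms)  # streams 0-4 should alarm shortly after t=250
    ```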
  • Laine, Riku (2021)
    People with a drug use disorder have a high risk of death following release from criminal sanctions due to an increased risk of overdose. Time in prison has been associated with increased mortality from natural causes of death and suicides. In this thesis, the association of criminal sanctions with the mortality and causes of death of Finnish treatment-seeking individuals with substance use disorder was studied. Prior research on the topic is scarce and dated. The data came from the Register-based follow-up study on criminality, health and taxation of inpatients and outpatients entered into substance abuse treatment (RIPE, n = 10 887). The patients had been clients of the A-Clinic Foundation between 1990 and 2009. Mortality was modelled with logistic regression from 1.1.1992 to 26.8.2015. The time was divided into one-week episodes. For each client it was marked whether they were free, in prison or serving community service, and whether they had died during the episode. Causes of death were studied using death records from 1992 to 2018. There was a 2.5-fold increase in overall mortality during the first two weeks after sentences. The risk stayed elevated even after the first 12 weeks (odds ratio 1.20; 95% confidence interval 1.08-1.32). The risk of a drug-related death (DRD) was almost 8.5-fold during the first two weeks. Poisonings (excluding alcohol poisoning) and assaults were more likely causes of death for patients with a criminal history. DRD was over three times more likely among patients with criminal records. After validations, 33 individuals who had died during their sentence were identified from the data, of whom 14 (42.4%) had committed suicide. Approximately 10 percent of other deaths were suicides. Thus, it can be concluded that Finland shows a similar increased risk of death after sentences to that observed in other countries, despite the frequent use of buprenorphine. Sentences affect causes of death for 2-5 years after the last sentence. Additionally, first signs of elevated mortality during community sanctions were observed, but further studies are required to confirm the finding.
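    A small sketch of the episode-based design may help: follow-up time split into one-week episodes and death modelled with logistic regression. All variable names, rates and effects below are hypothetical, not the RIPE data.

    ```python
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(3)
    rows = []
    for person in range(3000):
        start = rng.integers(0, 200)                 # weeks since last sentence at entry
        for week in range(52):                       # one-week episodes
            w = start + week
            period = "0-2wk" if w < 2 else ("3-12wk" if w < 12 else ">12wk")
            p_death = {"0-2wk": 0.004, "3-12wk": 0.002, ">12wk": 0.001}[period]
            died = rng.random() < p_death
            rows.append({"person": person, "period": period, "died": int(died)})
            if died:
                break
    df = pd.DataFrame(rows)

    # Odds of dying in a given week, by time since the last sentence.
    fit = smf.logit("died ~ C(period, Treatment('>12wk'))", data=df).fit(disp=0)
    print(np.exp(fit.params))      # odds ratios vs. the >12-week baseline
    print(np.exp(fit.conf_int()))  # 95% confidence intervals
    ```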
  • Viholainen, Olga (2020)
    Poisson regression is a well-known generalized linear model that relates the expected value of a count to a linear combination of explanatory variables. Outliers severely affect the classical maximum likelihood estimator of Poisson regression. Several robust alternatives to the maximum likelihood (ML) estimator have been developed, such as the conditionally unbiased bounded-influence (CU) estimator, the Mallows quasi-likelihood (MQ) estimator and M-estimators based on transformations (MT). The purpose of the thesis is to study the robustness of the robust Poisson regression estimators under different conditions, and to compare their performance to each other. The robustness of the Poisson regression estimators is investigated in a simulation study using the ML, CU, MQ and MT estimators. The robust estimators MQ and MT are studied with two different weight functions, C and H, and also without a weight function. The simulation is executed in three parts: the first part handles a situation without any outliers, in the second part the outliers are in the X space, and in the third part the outliers are in the Y space. The results of the simulation show that all the robust estimators are less affected by the outliers than the classical ML estimator, but the outliers nevertheless severely weaken the results of the CU estimator and the MQ-based estimators. The MT-based estimators, and especially the MT and H-MT estimators, have by far the lowest medians of the mean squared errors when the data are contaminated with outliers. When there are no outliers in the data, they compare favorably with the other estimators. Therefore the MT and H-MT estimators are an excellent option for fitting the Poisson regression model.
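    The simulation design translates naturally into code. The skeleton below generates Poisson regression data, contaminates the Y space, and records the coefficient error of the classical ML fit over replications; the CU, MQ and MT estimators have no off-the-shelf Python implementation, so only the non-robust baseline is sketched.

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(4)
    beta_true = np.array([0.5, 1.0])

    def one_replication(contaminate):
        x = rng.normal(size=200)
        X = sm.add_constant(x)
        y = rng.poisson(np.exp(X @ beta_true)).astype(float)
        if contaminate:                      # Y-space outliers: inflate 5% of counts
            idx = rng.choice(200, 10, replace=False)
            y[idx] *= 10
        fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()
        return np.sum((fit.params - beta_true) ** 2)

    for contaminate in (False, True):
        mses = [one_replication(contaminate) for _ in range(200)]
        print(f"outliers={contaminate}: median MSE {np.median(mses):.4f}")
    ```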
  • Smith, Dianna (2024)
    Statistician C. R. Rao made many contributions to multivariate analysis over the span of his career. Some of his earliest contributions continue to be used and built upon almost eighty years later, while his more recent contributions spur new avenues of research. This thesis discusses these contributions, how they helped shape multivariate analysis as we see it today, and what we may learn from reviewing his works. Topics include his extension of linear discriminant analysis, Rao’s perimeter test, Rao’s U statistic, his asymptotic expansion of Wilks’ Λ statistic, canonical factor analysis, functional principal component analysis, redundancy analysis, canonical coordinates, and correspondence analysis. The examination of his works shows that interdisciplinary collaboration and the utilization of real datasets were crucial in almost all of Rao’s impactful contributions.
  • Jeskanen, Juuso-Markus (2021)
    Developing reliable, regulatory-compliant and customer-oriented credit risk models requires thorough knowledge of the credit risk phenomenon. Tight collaboration between stakeholders is necessary, and hence models need to be transparent, interpretable and explainable, as well as accurate, even for experts without a statistical background. In the context of credit risk, one can speak of explainable artificial intelligence (XAI). Hence, practice and market standards are also underlined in this study. So far, credit risk research has mainly focused on the estimation of the probability of default parameter. However, as systems and processes have evolved to comply with regulation in the last decade, recovery data has improved, which has raised loss given default (LGD) to the heart of credit risk. In the context of LGD, most studies have emphasized the estimation of one-stage models. In practice, however, market standards support a multi-stage approach that follows the institution's simplified recovery processes. Generally, multi-stage models are more transparent, have better predictive power, and comply better with regulation. This thesis presents a framework for analyzing and executing sensitivity analysis for a multi-stage LGD model. The main contribution of the study is to increase knowledge of LGD modelling by giving insights into the sensitivity of discriminatory power with respect to risk drivers, model components and the LGD score. The study aims to answer two questions. First, how sensitive is the predictive power of a multi-stage LGD model to the correlation between risk drivers and individual components? Second, how can the most important risk drivers be identified that need to be considered in multi-stage LGD modelling to achieve an adequate LGD score? The experimental part of the thesis is divided into two parts. The first presents the motivation, study design and experimental setup used to execute the study. The second focuses on the sensitivity analysis of risk drivers, components and the LGD score. The sensitivity analysis presented in this study gives important knowledge of the behaviour of a multi-stage LGD model and of the dependencies between independent risk drivers, components and the LGD score with regard to correlations and model performance metrics. The introduced sensitivity framework can be utilized in assessing the need and schedule for model calibrations in relation to changes in the application portfolio. In addition, the framework and results can be used to recognize needs for updates of the monthly IFRS 9 ECL calculations. The study also gives input for model stress testing, where different scenarios and impacts are analyzed with regard to changes in macroeconomic conditions. Even though the focus of this study is on credit risk, the methods presented are also applicable in fields outside the financial sector.
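    One way to picture the sensitivity question is a toy two-stage model whose risk drivers are correlated to a varying degree, with discriminatory power measured against simulated realized losses. Everything below (the stages, coefficients, and the Spearman-based power measure) is a hypothetical sketch, not the thesis's framework.

    ```python
    import numpy as np
    from scipy.stats import spearmanr

    rng = np.random.default_rng(5)

    def sigmoid(z):
        return 1 / (1 + np.exp(-z))

    for rho in (0.0, 0.4, 0.8):
        cov = [[1, rho], [rho, 1]]
        d1, d2 = rng.multivariate_normal([0, 0], cov, 10_000).T
        p_cure = sigmoid(1.0 - 1.5 * d1)          # stage 1: cure probability
        severity = sigmoid(-0.5 + 1.5 * d2)       # stage 2: loss given no cure
        lgd_score = (1 - p_cure) * severity       # combined LGD score

        cured = rng.random(10_000) < p_cure       # simulated realized outcome
        realized = np.where(
            cured, 0.0, np.clip(severity + rng.normal(0, 0.2, 10_000), 0, 1))
        corr, _ = spearmanr(lgd_score, realized)
        print(f"rho={rho}: discriminatory power {corr:.3f}")
    ```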
  • Talvensaari, Mikko (2022)
    Gaussian processes are stochastic processes that are particularly well suited to modelling data exhibiting temporal or spatial dependence. Their ease of use stems from the fact that every finite subset of a Gaussian process follows a multivariate normal distribution, which is completely determined by the mean function and the covariance function of the process. The problem with the likelihood based on the multivariate normal distribution is poor scalability: inverting the covariance matrix, which is unavoidable when evaluating the likelihood, has cubic time complexity in the size of the data. This thesis describes a representation of temporal Gaussian processes based on vector-valued Markov processes defined by systems of stochastic differential equations. The efficiency gain of the method rests on the Markov property of the vector process, i.e. on the fact that the future of the process depends only on the current value of a low-dimensional vector. From the vector process defined by the system of stochastic differential equations, a discrete-time linear-Gaussian state-space model is derived whose likelihood can be evaluated in linear time. The theoretical part of the thesis uses the spectral representation of stationary Gaussian processes to show that, for certain stationary Gaussian processes, the definitions based on systems of stochastic differential equations and on covariance functions are equivalent. Exact state-space forms are presented for Matérn-type covariance functions and for a periodic covariance function. The theory part also presents the basic operations in applying state-space models, from Kalman filtering to smoothing and prediction, together with efficient algorithms for carrying them out. In the applied part of the thesis, state-space Gaussian processes were used to model and forecast user data throughput at base stations of a 3G cellular network. Following Bayesian practice, uncertainty about the model parameters was expressed by placing prior distributions on the parameters. The 15 time series in the data were fitted both with a model specified for individual series and with a multi-series model in which a posterior distribution was derived for the covariance between the series. Over the 15 series, the five-week forecasts of the multi-series model were on average slightly better than those of the single-series model, and the forecasts of both models were on average better than those of the widely used ARIMA models.
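    The core construction can be condensed into a short sketch: the Matérn 3/2 covariance function written as a two-dimensional linear-Gaussian state-space model, whose log-likelihood a Kalman filter evaluates in linear time. The parameter values below are arbitrary.

    ```python
    import numpy as np
    from scipy.linalg import expm

    def matern32_loglik(y, dt, lengthscale, sigma2, noise_var):
        lam = np.sqrt(3.0) / lengthscale
        F = np.array([[0.0, 1.0], [-lam**2, -2.0 * lam]])   # SDE drift matrix
        Pinf = np.diag([sigma2, lam**2 * sigma2])           # stationary covariance
        A = expm(F * dt)                                    # discrete-time transition
        Q = Pinf - A @ Pinf @ A.T                           # process noise covariance
        H = np.array([[1.0, 0.0]])                          # observe f(t) only

        m, P, ll = np.zeros(2), Pinf.copy(), 0.0
        for yt in y:                                        # O(n) Kalman filter
            m, P = A @ m, A @ P @ A.T + Q                   # predict
            v = yt - H @ m                                  # innovation
            S = H @ P @ H.T + noise_var
            K = P @ H.T / S
            m, P = m + (K * v).ravel(), P - K @ H @ P       # update
            ll += -0.5 * (np.log(2 * np.pi * S) + v**2 / S).item()
        return ll

    y = np.sin(np.linspace(0, 10, 500)) + 0.1 * np.random.default_rng(6).normal(size=500)
    print(matern32_loglik(y, dt=10 / 499, lengthscale=1.0, sigma2=1.0, noise_var=0.01))
    ```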
  • Rautavirta, Juhana (2022)
    Comparison of amphetamine profiles is a task in forensic chemistry whose goal is to decide whether two samples of amphetamine originate from the same source. These decisions help identify and prosecute the suppliers of amphetamine, which is an illicit drug in Finland. The traditional approach to comparing amphetamine samples involves computing the Pearson correlation coefficient between two real-valued sample vectors obtained by gas chromatography-mass spectrometry analysis. A two-sample problem, such as the problem of comparing drug samples, can also be tackled with methods such as a t-test or Bayes factors. Recently, a newer method called predictive agreement (PA) has been applied to the comparison of amphetamine profiles, comparing the posterior predictive distributions induced by two samples. In this thesis, we statistically validated the use of this newer method in amphetamine profile comparison and compared its performance to the traditional method based on the Pearson correlation coefficient. Techniques such as simulation and cross-validation were used in the validation. In the simulation part, we simulated enough data to compute 10 000 PA and correlation values between sample pairs. Cross-validation was used in a case study, where repeated 5-fold group cross-validation was applied to study the effect of changes in the data used for training the model. In the cross-validation, the performance of the models was measured with area under curve (AUC) values of receiver operating characteristic (ROC) and precision-recall (PR) curves. For the validation, two separate datasets collected by the National Bureau of Investigation of Finland (NBI) were available. One of the datasets was a larger collection of amphetamine samples, whereas the other was a more curated group of samples for which it was also known which samples were linked to each other. On top of these datasets, we simulated data representing amphetamine samples that were either from different sources or from the same source. The results showed that with the simulated data, predictive agreement outperformed the traditional method in distinguishing sample pairs whose samples came from different sources from sample pairs whose samples came from the same source. The case study showed that changes in the training data have only a marginal effect on the performance of the predictive agreement method, and that with real-world data the PA method outperformed the traditional method in terms of AUC-ROC and AUC-PR values. Additionally, we concluded that the PA method has the benefit of interpretability: the PA value between two samples can be interpreted as the probability of the samples originating from the same source.
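    The two comparison ideas can be contrasted in a few lines: the traditional Pearson correlation between two profile vectors, and an overlap-of-posterior-predictives statistic in the spirit of predictive agreement. The thesis's exact PA definition may differ; the Gaussian predictive distributions below are an assumption made for simplicity.

    ```python
    import numpy as np
    from scipy.stats import norm, pearsonr

    rng = np.random.default_rng(7)
    profile_a = rng.normal(0, 1, 20)            # hypothetical GC-MS impurity profiles
    profile_b = profile_a + rng.normal(0, 0.3, 20)

    # Traditional method: Pearson correlation between the two profiles.
    r, _ = pearsonr(profile_a, profile_b)
    print(f"Pearson correlation: {r:.3f}")

    # PA-style statistic for one impurity: overlap of two posterior predictive
    # densities, here normals centred on each sample's measured value.
    grid = np.linspace(-6, 6, 2001)
    pred_a = norm.pdf(grid, loc=profile_a[0], scale=0.5)
    pred_b = norm.pdf(grid, loc=profile_b[0], scale=0.5)
    overlap = np.trapz(np.minimum(pred_a, pred_b), grid)   # in [0, 1]
    print(f"predictive overlap: {overlap:.3f}")
    ```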
  • Tan, Shu Zhen (2021)
    In practice, outlying observations are not uncommon in many study domains. Without knowing the underlying factors behind the outliers, it is appealing to eliminate them from the datasets. However, unless there is scientific justification, outlier elimination amounts to alteration of the datasets. Otherwise, heavy-tailed distributions should be adopted to model the larger-than-expected variability in an overdispersed dataset. The Poisson distribution is the standard model for the variation in count data. However, the empirical variability in observed datasets is often larger than the amount expected by the Poisson, which leads to unreliable inferences when estimating the true effect sizes of covariates in regression modelling. It follows that the Negative Binomial distribution is often adopted as an alternative for dealing with overdispersed datasets. Nevertheless, it has been proven that neither the Poisson nor the Negative Binomial observation distribution is robust against outliers, in the sense that the outliers have a non-negligible influence on the estimation of the covariate effect sizes. On the other hand, the scale mixture of quasi-Poisson distributions (called the robust quasi-Poisson model), which is constructed analogously to Student's t-distribution, is a heavy-tailed alternative to the Poisson and is proven to be robust against outliers. The thesis presents theoretical evidence of the robustness of the three aforementioned models in a Bayesian framework. Lastly, the thesis considers two simulation experiments with different kinds of outlier sources, process error and covariate measurement error, to compare the robustness of the Poisson, Negative Binomial and robust quasi-Poisson regression models in the Bayesian framework. Model robustness was assessed, in terms of the model's ability to infer the covariate effect size correctly, under different combinations of error probability and error variability. The robust quasi-Poisson regression model proved more robust than its counterparts because its breakdown point was relatively higher in both experiments.
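    The process-error experiment can be sketched with plain likelihood fits: count data with a known covariate effect, occasional multiplicative outliers, and Poisson versus Negative Binomial regressions. The thesis works in a Bayesian framework and its robust quasi-Poisson model has no off-the-shelf implementation, so this sketch only illustrates how the estimated effect size reacts to contamination.

    ```python
    import numpy as np
    import statsmodels.api as sm

    rng = np.random.default_rng(8)
    n, beta = 1000, 0.8
    x = rng.normal(size=n)
    X = sm.add_constant(x)
    y = rng.poisson(np.exp(0.2 + beta * x)).astype(float)

    # Process error: with probability 2%, an observation is multiplied tenfold.
    outliers = rng.random(n) < 0.02
    y[outliers] *= 10

    pois = sm.GLM(y, X, family=sm.families.Poisson()).fit()
    nb = sm.GLM(y, X, family=sm.families.NegativeBinomial()).fit()
    print(f"true effect {beta}, Poisson estimate {pois.params[1]:.3f}, "
          f"NB estimate {nb.params[1]:.3f}")
    ```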
  • Kari, Daniel (2020)
    Estimating the effect of random chance ('luck') has long been a question of particular interest in various team sports. In this thesis, we aim to determine the role of luck in a single ice hockey game by building a model that predicts the outcome based on the course of events in the game. The obtained prediction accuracy should also, to some extent, reveal the effect of random chance. Using the course of events from over 10,000 games, we train feedforward and convolutional neural networks to predict the outcome and the final goal differential, which has been proposed as a more informative proxy for the outcome. Interestingly, we are not able to obtain distinctly higher accuracy than previous studies, which have focused on predicting the outcome with information available before the game. The results suggest that there might exist an upper bound on prediction accuracy even if we knew 'everything' that went on in a game. This further implies that random chance could affect the outcome of a game, although assessing this is difficult, as we do not have a good quantitative metric for luck in the case of single ice hockey game prediction.
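    The prediction setup can be sketched with a small feedforward network mapping aggregated in-game event counts to the outcome; the features and data below are simulated stand-ins for the real event streams.

    ```python
    import numpy as np
    from sklearn.neural_network import MLPClassifier
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(9)
    n_games = 10_000
    # Hypothetical per-game features: shot, hit, faceoff-win, penalty differentials.
    X = rng.normal(size=(n_games, 4))
    eta = 1.2 * X[:, 0] + 0.3 * X[:, 1] + 0.2 * X[:, 2] - 0.1 * X[:, 3]
    y = rng.binomial(1, 1 / (1 + np.exp(-eta)))   # 1 = home win

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
    clf = MLPClassifier(hidden_layer_sizes=(32, 16), max_iter=500, random_state=0)
    clf.fit(X_tr, y_tr)
    print(f"test accuracy: {clf.score(X_te, y_te):.3f}")  # cf. the upper-bound question
    ```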