Browsing by master's degree program "Matematiikan ja tilastotieteen maisteriohjelma"

  • Parviainen, Katariina (2021)
    This thesis deals with proper holomorphic mappings of the space $\mathbb{C}^n$. Their definition rests on the notion of a proper map and on the holomorphicity of the mappings. Let $\Omega,D\subset\mathbb{C}^n$ and let $n>1$. A mapping $F:\Omega\to D$ is proper if $F^{-1}(K)$ is a compact subset of $\Omega$ for every compact set $K\subset D$. Holomorphicity means complex analyticity and complex differentiability, and that the mapping satisfies the Cauchy-Riemann equations. A function $f$ is holomorphic in an open set $\Omega\subset\mathbb{C}^n$ if $f:\Omega\to\mathbb{C}$, $f\in C^1(\Omega)$, and $f$ satisfies the Cauchy-Riemann equations $\overline{\partial}_jf=\frac{\partial f}{\partial\overline{z_j}}=0$ for every $j=1,\ldots,n$. A mapping $F=(f_1,\ldots,f_m):\Omega\to\mathbb{C}^m$ is holomorphic in $\Omega$ if the functions $f_k$ are holomorphic for every $k=1,\ldots,m$. If $\Omega$ and $D$ are complex sets and $F:\Omega\to D$ is a proper holomorphic mapping, then $F^{-1}(y_0)$ is a compact analytic subvariety of $\Omega$ for every point $y_0\in D$. A proper mapping can also be defined as follows: a mapping $F:\Omega\to D$ is proper if and only if $F$ takes the boundary $\partial\Omega$ to the boundary $\partial D$ in the following sense: \[\text{if}\,\{z_j\}\subset\Omega\quad\text{is a sequence with}\,\lim_{j\to\infty}d(z_j,\partial\Omega)=0,\,\text{then}\,\lim_{j\to\infty}d(F(z_j),\partial D)=0.\] On the basis of this definition, the study of mappings $F:\Omega\to D$ leads to the geometric function theory of mappings that take $\partial\Omega$ to $\partial D$. It turns out that proper holomorphic mappings extend continuously to the boundaries of their domains of definition. The study of holomorphic mappings is also connected to the solution of Dirichlet problems. In the classical Dirichlet problem one seeks, for a continuous function $f$ on $\partial\Omega\subset\mathbb{R}^m$, a real-valued function that is harmonic in $\Omega$, continuous in the closure $\overline{\Omega}$, and whose restriction to the boundary $\partial\Omega$ is the given function $f$. The thesis goes through the definitions and concepts from which proper holomorphic mappings are built, and explains the mathematical structure underlying these concepts. The thesis proves the following properties of a proper holomorphic mapping $F:\Omega\to\Omega'$: $F$ is a closed map; $F$ is an open map; $F^{-1}(w)$ is finite for every $w\in\Omega'$; there is an integer $m$ such that $F^{-1}(w)$ has exactly $m$ points for every regular value of $F$ and fewer than $m$ points for every critical value of $F$; the critical set of $F$ is a zero variety of $\Omega'$; $F(V)$ is a subvariety of $\Omega'$ whenever $V$ is a subvariety of $\Omega$; $F$ extends to a continuous mapping up to the boundaries of its strictly pseudoconvex domains of definition; $F$ maps a sequence in its strictly pseudoconvex domain that converges non-tangentially to the boundary to a sequence that converges admissibly to the boundary of the target domain; and a proper holomorphic mapping $F$ from the unit ball of $\mathbb{C}^n$ onto itself is an automorphism.
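    As a standard one-variable illustration of the boundary characterization of properness (not taken from the thesis), consider the power map on the unit disc $D=\{z\in\mathbb{C}:|z|<1\}$: \[F(z)=z^m,\qquad F^{-1}\bigl(\{|w|\le r\}\bigr)=\{|z|\le r^{1/m}\}\subset D\quad(0<r<1),\] so preimages of compact sets are compact and $|z_j|\to 1$ forces $|F(z_j)|=|z_j|^m\to 1$. In one variable such a proper self-map need not be injective, in contrast with the unit ball of $\mathbb{C}^n$, $n>1$, where, as stated above, proper holomorphic self-maps are automorphisms.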
  • Nenonen, Veera (2022)
    Social security benefits have undergone many kinds of changes over the years, and the laws concerning them are continuously being developed. Even the very last-resort form of financial support offered by the state, social assistance, has been the target of significant measures, which has affected the lives of many Finns. Of these measures, especially the transfer of basic social assistance to the responsibility of the Social Insurance Institution of Finland (Kela) has required a great deal of adaptation from both the parties processing the benefit and those applying for it. This may have provoked strong opinions, and discussion forums are a fertile platform for expressing them. Finland's largest discussion forum, Suomi24, contains a large number of discussion threads related to society and politics, and mapping their content around topics of interest can, with the right methods, produce interesting and useful information. This thesis uses natural language processing methods, more specifically topic modelling, to investigate whether the amendment to the Social Assistance Act that came into force in 2017 is in some way visible in the discussions about social assistance on the Suomi24 forum. The study is carried out by illustrating the selected data with various visualizations and by applying the LDA algorithm, with the aim of detecting the most central topics of the discussions and the concepts related to them. If the amendment to the Social Assistance Act has provoked discussion, this could show up in the topics and in how the use of the words they contain is distributed between the periods before and after the amendment. The delimitation of the data, its extraction from the database, and the preprocessing of the data for topic modelling also make up a significant part of the study. The data are analysed twice in total, because the first round reveals shortcomings in the preprocessing phase and in the fitting of the model. Iteration is not unusual in studies of this kind, since it is often only when interpreting the results that issues emerge which should have been taken into account in earlier phases. In the second round, some interesting observations emerged from the contents of the topics, but based on them it is difficult to draw conclusions about whether the amendment to the Social Assistance Act is visible in the messages of the discussion platform.
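    To make the topic-modelling step concrete, a minimal LDA fit of the kind described could look roughly as follows in scikit-learn; the toy documents, the number of topics and all names are illustrative assumptions, not material from the thesis.

        # Minimal LDA sketch (illustrative; the thesis' own data and preprocessing are not reproduced).
        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.decomposition import LatentDirichletAllocation

        docs = [
            "toimeentulotuki kela hakemus käsittely",      # hypothetical preprocessed forum messages
            "kela perustoimeentulotuki päätös valitus",
            "vuokra toimeentulotuki maksu viivästys",
        ]

        vectorizer = CountVectorizer()
        X = vectorizer.fit_transform(docs)                 # document-term matrix

        lda = LatentDirichletAllocation(n_components=2, random_state=0)
        lda.fit(X)

        # Print the top words of each topic.
        terms = vectorizer.get_feature_names_out()
        for k, topic in enumerate(lda.components_):
            top = topic.argsort()[::-1][:3]
            print(f"topic {k}:", ", ".join(terms[i] for i in top))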
  • Saarinen, Tapio (2019)
    The purpose of this thesis is to lead the reader to the definitions and theory of the Ext functor and group cohomology, and thereby to introduce the central concepts of homological algebra. The first chapter presents the background knowledge assumed by the thesis, beyond the contents of basic courses in algebra and algebraic topology. The second chapter introduces the group extension problem and solves it in the case where the given subgroup is abelian. Group extensions are shown to be in one-to-one correspondence with the elements of a certain group, and in addition those group extensions that are semidirect products of the given groups are studied in particular. The formulas that arise are observed to correspond to certain formulas appearing in the definition of the singular cochain complex. The third chapter defines the bar resolution and the normalized bar resolution, and on their basis the cohomology of groups. First, as a technical aside, the concept of a G-module is defined, which allows group actions to be treated like modules. The central result of the chapter is that the bar resolution and the normalized bar resolution are homotopy equivalent; a generalization of this result guarantees, among other things, that the Ext functor is well defined. At the end of the chapter the cohomology groups of a cyclic group are computed. The fourth chapter defines resolutions in general, as well as projective and injective modules and resolutions. Bar resolutions are shown to be projective, and the proof that their homotopy types coincide is observed to generalize to projective and injective resolutions. At the same time the definition of group cohomology is extended, since the bar resolution can be replaced by any projective resolution. The chapter also defines the exactness of functors, and in particular the connection between the exactness of the Hom functor and projective and injective modules is studied. The fifth chapter defines the notion of a right derived functor, and as a special case the Ext functor, which is the right derived functor of the Hom functor. Since the Hom functor is a bifunctor, it has two right derived functors, and the main result of the chapter shows that they are isomorphic. The definition of group cohomology is extended further when it is expressed in terms of the Ext functor, which makes it possible to compute group cohomology also via injective resolutions. The final chapter collects related topics that are touched upon in the text but whose treatment was left outside the thesis for reasons of scope.
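    For example, with $\mathbb{Z}$ as a trivial module (one standard choice of coefficients; the thesis' exact choice is not repeated here), the computation mentioned at the end of the third chapter gives the familiar pattern for a finite cyclic group $C_m$: \[H^n(C_m;\mathbb{Z})\cong\begin{cases}\mathbb{Z}, & n=0,\\ 0, & n\ \text{odd},\\ \mathbb{Z}/m\mathbb{Z}, & n\ \text{even},\ n\ge 2.\end{cases}\]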
  • Suominen, Henri (2021)
    Online hypothesis testing occurs in many branches of science. Most notably it is of use when there are too many hypotheses to test with traditional multiple hypothesis testing, or when the hypotheses are created one by one. When testing multiple hypotheses one by one, the order in which the hypotheses are tested often has a great influence on the power of the procedure. In this thesis we investigate the applicability of reinforcement learning tools to the exploration-exploitation problem that often arises in online hypothesis testing. We show that a common reinforcement learning tool, Thompson sampling, can be used to gain a modest amount of power with a method for online hypothesis testing called alpha-investing. Finally, we examine the size of this effect using both synthetic data and a practical case involving simulated data on urban pollution. We found that, by choosing the order of the tested hypotheses with Thompson sampling, the power of alpha-investing is improved. The level of improvement depends on the assumptions that the experimenter is willing to make and on their validity. In a practical situation the presented procedure rejected up to 6.8 percentage points more hypotheses than testing the hypotheses in a random order.
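    A rough sketch of the kind of procedure described above is given below: Beta-Bernoulli Thompson sampling picks which stream of hypotheses to test next, while an alpha-investing rule spends and earns alpha-wealth. The wealth-update constants, the stream model and the test are illustrative assumptions rather than the exact choices of the thesis.

        # Sketch: choosing the testing order with Thompson sampling inside alpha-investing.
        # Illustrative only; the wealth-update rule below is one common alpha-investing variant,
        # and the three hypothesis streams are hypothetical.
        import numpy as np
        from scipy.stats import norm

        rng = np.random.default_rng(0)
        n_streams, n_tests = 3, 200
        signal = [0.0, 1.0, 2.0]               # hypothetical mean shifts; stream 0 is a true null
        wealth, payout = 0.05, 0.025           # initial alpha-wealth and reward per rejection
        alpha_frac = 0.1                       # fraction of current wealth spent per test

        post = [[1.0, 1.0] for _ in range(n_streams)]   # Beta(a, b) posterior on each stream's rejection rate
        rejections = 0

        for _ in range(n_tests):
            if wealth <= 0:
                break
            samples = [rng.beta(a, b) for a, b in post]          # Thompson sampling step
            j = int(np.argmax(samples))                          # test the most promising stream next
            alpha_j = min(alpha_frac * wealth, wealth / (1.0 + wealth))  # keep the potential penalty payable
            x = rng.normal(signal[j], 1.0)                       # one new observation from stream j
            p = norm.sf(x)                                       # one-sided p-value for H0: mean 0
            reject = bool(p <= alpha_j)
            wealth += payout if reject else -alpha_j / (1.0 - alpha_j)   # alpha-investing wealth update
            post[j][0 if reject else 1] += 1.0
            rejections += reject

        print("rejections:", rejections, "remaining wealth:", round(wealth, 4))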
  • Mustonen, Aleksi (2021)
    Electrical impedance tomography is a differential tomography method where current is injected into a domain and the interior distribution of its electrical properties is inferred from measurements of electric potential around the boundary of the domain. Within the context of this imaging method, the forward problem describes a situation where we try to deduce the voltage measurements on the boundary of a domain given the conductivity distribution of the interior and the current injected into the domain through the boundary. Traditionally the problem has been solved either analytically or by using numerical methods like the finite element method. Analytical solutions have the benefit of being efficient, but at the same time they have limited practical use, as solutions exist only for a small number of idealized geometries. In contrast, while numerical methods provide a way to represent arbitrary geometries, they are computationally more demanding. Many proposed applications for electrical impedance tomography rely on the method's ability to construct images quickly, which in turn requires efficient reconstruction algorithms. While existing methods can achieve near real-time speeds, exploring and expanding ways of solving the problem even more efficiently, possibly overcoming weaknesses of previous methods, can allow for more practical uses of the method. Graph neural networks provide a computationally efficient way of approximating partial differential equations that is accurate, mesh invariant and applicable to arbitrary geometries. Due to these properties, neural network solutions show promise as alternative methods for solving problems related to electrical impedance tomography. In this thesis we discuss the mathematical foundation of graph neural network approximations of solutions to the electrical impedance tomography forward problem and demonstrate through experiments that these networks are indeed capable of such approximations. We also highlight some beneficial properties of graph neural network solutions, as our network is able to converge to an arguably general solution with only a relatively small training data set. Using only 200 samples with constant conductivity distributions, the network is able to approximate voltage distributions of meshes with spherical inclusions.
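    The core operation of the graph networks discussed above is a message-passing update on the nodes of a mesh. The following NumPy sketch shows one generic graph-convolution step (normalized adjacency times node features times a weight matrix); it is a schematic illustration, not the architecture used in the thesis.

        # One schematic message-passing (graph convolution) step on a tiny mesh graph.
        import numpy as np

        # Adjacency of a 4-node graph (think of four mesh nodes), with self-loops added.
        A = np.array([[0, 1, 1, 0],
                      [1, 0, 1, 0],
                      [1, 1, 0, 1],
                      [0, 0, 1, 0]], dtype=float)
        A_hat = A + np.eye(4)
        D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
        A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt          # symmetrically normalized adjacency

        H = np.random.default_rng(0).normal(size=(4, 3))  # node features, e.g. conductivity and coordinates
        W = np.random.default_rng(1).normal(size=(3, 8))  # weight matrix (learnable; here just random)

        H_next = np.maximum(A_norm @ H @ W, 0.0)          # aggregate neighbours, transform, ReLU
        print(H_next.shape)                               # (4, 8)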
  • Aholainen, Kusti (2022)
    The purpose of this thesis is to examine the suitability of robust estimators, in particular the BMM estimator, for estimating the parameters of an ARMA(p, q) process. Robust estimators aim to control the influence of deviating observations, i.e. outliers, on the estimates. A robust estimator tolerates outliers in the sense that their presence in the observations does not have a significant effect on the estimates. The protection against outliers, however, usually shows up as a loss of efficiency relative to the maximum likelihood method. The BMM estimator is an extension of the MM estimator introduced by Muler, Peña and Yohai in the article "Robust estimation for ARMA models" (2009). The BMM estimator is based on the BIP-ARMA model, an auxiliary model for the ARMA model in which the effect of the innovation term is limited by a filter. The idea is thus to control the influence of outliers occurring in the innovations of the ARMA model. In the thesis, the BMM and MM estimators are compared with two classical methods, the maximum likelihood (ML) and least squares (LS) methods. The beginning of the thesis presents the necessary concepts of probability theory, time series analysis and robust methods. The reader is introduced to robust estimators and to the motivation behind robust methods. Time series containing outliers are treated as realizations of an asymptotically contaminated ARMA process, and definitions are given for the most central outlier processes known in the literature. In addition, the computation of the BMM, MM, ML and LS estimators is described. In connection with the estimators, the initial-value methods with which the starting values of the estimators' minimization algorithms are chosen are also discussed. The theoretical part of the thesis presents theorems and proofs on the consistency and asymptotic normality of the MM estimator. No proof of the corresponding properties of the BMM estimator is known in the literature; the same properties are conjectured to hold for the BMM estimator as well. The results section presents simulations that repeat the simulations of the article by Muler et al. for more complex ARMA models. In the simulations, the BMM and MM estimators are compared with the ML and LS estimators in terms of mean squared error, while also comparing the different initial-value methods. In addition, the asymptotic robustness properties of the estimators are discussed. The computation of the estimators has been implemented in R, with the computation of the BMM and MM estimators implemented mainly in C++. The appendix contains the source code needed for computing the BMM and MM estimators.
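    As a small illustration of the contaminated series discussed above, the sketch below simulates an ARMA(1,1) path and adds a few additive outliers; the parameter values and the contamination fraction are arbitrary choices for demonstration, not values used in the thesis.

        # Simulate an ARMA(1,1) realization and contaminate it with additive outliers.
        import numpy as np

        rng = np.random.default_rng(42)
        n, phi, theta = 500, 0.6, 0.3          # illustrative ARMA(1,1) parameters

        eps = rng.normal(0.0, 1.0, n)
        x = np.zeros(n)
        for t in range(1, n):
            x[t] = phi * x[t - 1] + eps[t] + theta * eps[t - 1]

        # Additive outliers: shift a small fraction of observations by a large constant.
        y = x.copy()
        outlier_idx = rng.choice(n, size=int(0.05 * n), replace=False)
        y[outlier_idx] += rng.choice([-8.0, 8.0], size=outlier_idx.size)

        print("clean sd:", x.std().round(2), "contaminated sd:", y.std().round(2))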
  • Pyrylä, Atte (2020)
    In this thesis we look at the asymptotic approach to modelling randomly weighted heavy-tailed random variables and their sums. Heavy-tailed distributions, named after the defining property of having more probability mass in the tail than any exponential distribution and thereby being heavy, are essentially a way to include a large tail risk in a model in a realistic manner. Weighted sums of random variables are a versatile basic structure that can be adapted to model anything from claims over time to the returns of a portfolio, and giving the primary random variables heavy tails is a natural way to integrate extremal events into the models. The methodology introduced in this thesis offers an alternative to some of the prevailing and traditional approaches in risk modelling. Our main result, which we cover in detail, originates from "Randomly weighted sums of subexponential random variables" by Tang and Yuan (2014). It draws an asymptotic connection between the tails of randomly weighted heavy-tailed random variables and the tails of their sums, explicitly stating how the various tail probabilities relate to each other; in effect it extends the idea that, for sums of heavy-tailed random variables, a large total claim originates from a single source instead of being accumulated from many smaller claims. A great merit of these results is that the random weights are, for the most part, allowed to lack an upper bound as well as to be arbitrarily dependent on each other. As for the applications, we first look at an explicit estimation method for computing extreme quantiles of a loss distribution, yielding values for a common risk measure known as Value-at-Risk. The methodology used can easily be adapted to a setting with similar pre-existing knowledge, thereby demonstrating a straightforward way of applying the results. We then move on to examine the ruin problem of an insurance company, developing a setting and conditions that can be imposed on the structures to permit an application of our main results, which yields an asymptotic estimate for the ruin probability. Additionally, to be more realistic, we introduce the approach of crude asymptotics, which requires a little less to be known of the primary random variables; we formulate a result similar in fashion to our main result and proceed to prove it.
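    A quick Monte Carlo check in the spirit of the result described above, comparing $P(\sum_i \theta_i X_i > x)$ with $\sum_i P(\theta_i X_i > x)$ for heavy-tailed $X_i$, can be written as follows; the Pareto tail index, the weight distribution and the threshold are illustrative choices only.

        # Monte Carlo illustration of single-big-jump asymptotics for randomly weighted sums.
        import numpy as np

        rng = np.random.default_rng(1)
        n_sim, n_terms, alpha = 1_000_000, 3, 1.5               # Pareto tail index alpha (heavy-tailed)

        X = rng.pareto(alpha, size=(n_sim, n_terms)) + 1.0      # classical Pareto(alpha) on [1, inf)
        theta = rng.uniform(0.5, 1.5, size=(n_sim, n_terms))    # random weights, bounded here for simplicity

        S = (theta * X).sum(axis=1)
        x = 200.0                                               # a far-tail threshold

        lhs = (S > x).mean()                                    # P(weighted sum exceeds x)
        rhs = sum(((theta[:, i] * X[:, i]) > x).mean() for i in range(n_terms))
        print(f"P(S > x) ~ {lhs:.2e},  sum of marginal tails ~ {rhs:.2e}")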
  • Häggblom, Matilda (2022)
    Modal inclusion logic is modal logic extended with inclusion atoms. It is the modal variant of first-order inclusion logic, which was introduced by Galliani (2012). Inclusion logic is a main variant of dependence logic (Väänänen 2007). Dependence logic and its variants adopt team semantics, introduced by Hodges (1997). Under team semantics, a modal (inclusion) logic formula is evaluated in a set of states, called a team. The inclusion atom is a type of dependency atom; it expresses that the values a sequence of formulas can obtain are also values obtained by another sequence of formulas. In this thesis, we introduce a sound and complete natural deduction system for modal inclusion logic (MIL), which is currently missing in the literature. The thesis consists of an introductory part, in which we recall the definitions and basic properties of modal logic and modal inclusion logic, followed by two main parts. The first part concerns the expressive power of modal inclusion logic. We review the result of Hella and Stumpf (2015) that modal inclusion logic is expressively complete: a class of Kripke models with teams is closed under unions, closed under k-bisimulation for some natural number k, and has the empty team property if and only if the class can be defined with a modal inclusion logic formula. Through the expressive completeness proof, we obtain characteristic formulas for classes with these three properties. This also provides a normal form for formulas in MIL. The proof of this result is due to Hella and Stumpf, and we suggest a simplification to the normal form by making it similar to the normal form introduced by Kontinen et al. (2014). In the second part, we introduce a sound and complete natural deduction proof system for modal inclusion logic. Our proof system builds on the proof systems defined for modal dependence logic and propositional inclusion logic by Yang (2017, 2022). We show the completeness theorem using the normal form of modal inclusion logic.
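    For concreteness, in one common formulation (the precise notation of the thesis may differ), a team $T$ satisfies the inclusion atom $\varphi_1\cdots\varphi_n\subseteq\psi_1\cdots\psi_n$ if and only if \[\forall s\in T\ \exists s'\in T:\quad s\vDash\varphi_i \iff s'\vDash\psi_i \quad\text{for all }i=1,\ldots,n,\] that is, every truth-value pattern realized by $\varphi_1,\ldots,\varphi_n$ in the team is also realized by $\psi_1,\ldots,\psi_n$.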
  • Kukkola, Johanna (2022)
    Can a day be classified to the correct season on the basis of its hourly weather observations using a neural network model, and how accurately can this be done? This is the question this thesis aims to answer. The weather observation data were retrieved from the Finnish Meteorological Institute's website, and they include the hourly weather observations from the Kumpula observation station from the years 2010-2020. The weather observations used for the classification were cloud amount, air pressure, precipitation amount, relative humidity, snow depth, air temperature, dew-point temperature, horizontal visibility, wind direction, gust speed and wind speed. There are four distinct seasons that can be experienced in Finland. In this thesis the seasons were defined as three-month periods, with winter consisting of December, January and February, spring consisting of March, April and May, summer consisting of June, July and August, and autumn consisting of September, October and November. The days in the weather data were classified into these seasons with a convolutional neural network model. The model included a convolutional layer followed by a fully connected layer, with the width of both layers being 16 nodes. The accuracy of the classification with this model was 0.80. The model performed better than a multinomial logistic regression model, which had an accuracy of 0.75. It can be concluded that the classification task was satisfactorily successful. An interesting finding was that neither model ever confused summer and winter with each other.
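    The following Keras sketch shows an architecture of the kind described above (one convolutional layer and one dense layer of width 16, with a softmax over the four seasons); the input shape assumes 24 hourly observations of the 11 listed variables, and all other details are illustrative rather than the thesis' exact configuration.

        # Sketch of a small 1-D convolutional classifier: (24 hours x 11 weather variables) -> 4 seasons.
        import tensorflow as tf
        from tensorflow.keras import layers

        model = tf.keras.Sequential([
            layers.Conv1D(16, kernel_size=3, activation="relu", input_shape=(24, 11)),
            layers.Flatten(),
            layers.Dense(16, activation="relu"),
            layers.Dense(4, activation="softmax"),   # winter, spring, summer, autumn
        ])
        model.compile(optimizer="adam",
                      loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])
        model.summary()
        # model.fit(X_train, y_train, epochs=..., validation_data=(X_val, y_val))  # with real data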
  • Virtanen, Jussi (2022)
    In this thesis we assess the ability of two different models to predict cash flows in private credit investment funds. The models are of a stochastic and a deterministic type, which makes them quite different. The data obtained for the analysis are divided into three subsamples: mature funds, liquidated funds and all funds. The full data consist of 62 funds, the subsample of mature funds of 36 funds and the subsample of liquidated funds of 17 funds. Both models are fitted to all subsamples. The parameters of the models are estimated with different techniques: the parameters of the Stochastic model are estimated with the conditional least squares method, and the parameters of the Yale model with numerical methods. After the estimation, the parameter values are explained in detail and their effect on the cash flows is investigated. This helps to understand which properties of the cash flows the models are able to capture. In addition, we assess both models' ability to predict future cash flows. This is done by using the coefficient of determination, QQ-plots and a comparison of predicted and observed cumulated cash flows. With the coefficient of determination we examine how well the models explain the variation of the observed values around the predicted values. With QQ-plots we examine whether the values produced by the process follow the normal distribution. Finally, with the cumulated cash flows of contributions and distributions we examine whether the models are able to predict the cumulated committed capital and the returns of the fund in the form of distributions. The results show that the Stochastic model performs better in its prediction of contributions and distributions. However, this is not the case for all the subsamples: the Yale model does better for the cumulated contributions of the subsample of mature funds. Nevertheless, the flexibility of the Stochastic model makes it more suitable for different types of cash flows and subsamples. Therefore, it is suggested that the Stochastic model should be the model used in the prediction and modelling of private credit funds. It is harder to implement than the Yale model, but it provides more accurate predictions.
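    The deterministic model referred to as the Yale model is commonly identified with the Takahashi-Alexander cash-flow recursion; under that assumption, a minimal sketch of such a recursion could look as follows, with placeholder rate, growth, bow and lifetime parameters rather than estimates from the thesis.

        # Sketch of a Takahashi-Alexander ("Yale") style cash-flow recursion (assumed variant, illustrative).
        def yale_cash_flows(commitment=100.0, life=10, rates=(0.25, 0.33, 0.5),
                            growth=0.08, bow=2.5, yield_=0.0):
            """Return yearly contributions and distributions for one hypothetical fund."""
            paid_in, nav = 0.0, 0.0
            contributions, distributions = [], []
            for t in range(1, life + 1):
                rc = rates[min(t - 1, len(rates) - 1)]   # contribution-rate schedule
                c = rc * (commitment - paid_in)          # call a fraction of the unfunded commitment
                paid_in += c
                rd = max(yield_, (t / life) ** bow)      # distribution rate with a "bow" shape
                d = rd * nav * (1.0 + growth)            # distribute out of the grown NAV
                nav = nav * (1.0 + growth) + c - d       # update net asset value
                contributions.append(c)
                distributions.append(d)
            return contributions, distributions

        c, d = yale_cash_flows()
        print(round(sum(c), 1), round(sum(d), 1))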
  • Lundström, Teemu (2022)
    Spatial graphs are graphs that are embedded in three-dimensional space. The study of such graphs is closely related to knot theory, but it is also motivated by practical applications, such as the linking of DNA and the study of chemical compounds. The Yamada polynomial is one of the most commonly used invariants of spatial graphs as it gives a lot of information about how the graphs sit in the space. However, computing the polynomial from a given graph can be computationally demanding. In this thesis, we study the Yamada polynomial of symmetrical spatial graphs. In addition to being symmetrical, the graphs we study have a layer-like structure which allows for certain transfer-matrix methods to be applied. There the idea is to express the polynomial of a graph with n layers in terms of graphs with n − 1 layers. This then allows one to obtain the polynomial of the original graph by computing powers of the so-called transfer-matrix. We introduce the Yamada polynomial and prove various properties related to it. We study two families of graphs and compute their Yamada polynomials. In addition to this, we introduce a new notational technique which allows one to ignore the crossings of certain spatial graphs and turn them into normal plane graphs with labelled edges. We prove various results related to this notation and show how it can be used to obtain the Yamada polynomial of these kinds of graphs. We also give a sketch of an algorithm with which one could, at least in principle, obtain the Yamada polynomials of larger families of graphs.
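    The transfer-matrix mechanism can be illustrated schematically: if adding one layer acts on a vector of partial "state polynomials" by a fixed matrix M, then an n-layer graph is handled by computing M^n. The sympy sketch below shows this mechanism with a made-up 2x2 matrix; it does not compute an actual Yamada polynomial.

        # Schematic transfer-matrix computation: polynomial of an n-layer object from powers of M.
        import sympy as sp

        A = sp.symbols("A")                       # polynomial variable (illustrative)
        M = sp.Matrix([[A, 1],                    # made-up transfer matrix: how one extra layer
                       [1, A**2]])                # recombines the states of the previous layers
        start = sp.Matrix([1, 0])                 # state vector of the "empty" graph
        close = sp.Matrix([[1, 1]])               # how the final layer is closed off

        def layered_polynomial(n):
            return sp.expand((close * M**n * start)[0])

        for n in range(1, 4):
            print(n, layered_polynomial(n))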
  • Rautio, Siiri (2019)
    Improving the quality of medical computed tomography reconstructions is an important research topic nowadays, when low-dose imaging is pursued to minimize the X-ray radiation dose inflicted on patients. Using lower radiation doses for imaging leads to noisier reconstructions, which then require post-processing, such as denoising, in order to make the data up to par for diagnostic purposes. Reconstructing the data using iterative algorithms produces higher quality results, but they are computationally costly and not quite powerful enough to be used as such for medical analysis. Recent advances in deep learning have demonstrated the great potential of using convolutional neural networks in various image processing tasks. Performing image denoising with deep neural networks can produce high-quality and virtually noise-free predictions out of images originally corrupted with noise, in a computationally efficient manner. In this thesis, we survey the topics of computed tomography and deep learning for the purpose of applying a state-of-the-art convolutional neural network for denoising dental cone-beam computed tomography reconstruction images. We investigate how the denoising results of a deep neural network are affected if iteratively reconstructed images are used in training the network, as opposed to using traditionally reconstructed images. The results show that if the training data are reconstructed using iterative methods, the denoising results of the network improve notably. We also believe these results can be further improved and extended beyond the case of cone-beam computed tomography and the field of medical imaging.
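    A residual convolutional denoiser of the general kind discussed above can be sketched in Keras as follows; the depth, the filter counts and the noise-predicting formulation are illustrative assumptions, not the exact network of the thesis.

        # Sketch of a small residual CNN denoiser: the network predicts the noise, which is subtracted.
        import tensorflow as tf
        from tensorflow.keras import layers

        inp = layers.Input(shape=(None, None, 1))          # grayscale CT slice of any size
        x = layers.Conv2D(32, 3, padding="same", activation="relu")(inp)
        for _ in range(4):
            x = layers.Conv2D(32, 3, padding="same", activation="relu")(x)
        noise = layers.Conv2D(1, 3, padding="same")(x)     # estimated noise
        out = layers.Subtract()([inp, noise])              # denoised image = input - estimated noise

        model = tf.keras.Model(inp, out)
        model.compile(optimizer="adam", loss="mse")
        # model.fit(noisy_recons, clean_targets, ...)      # e.g. iteratively reconstructed targets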
  • Laiho, Aleksi (2022)
    In statistics, data can often be high-dimensional, with a very large number of variables, often larger than the number of samples themselves. In such cases, selection of a relevant configuration of significant variables is often needed. One such case is in genetics, especially genome-wide association studies (GWAS). To select the relevant variables from high-dimensional data, there exist various statistical methods, many of them relating to Bayesian statistics. This thesis aims to review and compare two such methods, FINEMAP and Sum of Single Effects (SuSiE). The methods are reviewed according to their accuracy in identifying the relevant configurations of variables and their computational efficiency, especially in the case where there exist high inter-variable correlations within the dataset. The methods are also compared to more conventional variable selection methods, such as LASSO. The results show that both FINEMAP and SuSiE outperform LASSO in terms of selection accuracy and efficiency, with FINEMAP producing slightly more accurate results at the expense of computation time compared to SuSiE. These results can be used as guidelines in selecting an appropriate variable selection method based on the study and the data.
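    As a point of comparison, a LASSO-based selection of the kind used as the baseline above can be sketched with scikit-learn; the simulated, correlated predictors below are purely illustrative and are not the data of the thesis.

        # LASSO baseline for variable selection on high-dimensional, correlated predictors (illustrative).
        import numpy as np
        from sklearn.linear_model import LassoCV

        rng = np.random.default_rng(0)
        n, p = 200, 1000                                    # fewer samples than variables
        X = rng.normal(size=(n, p))
        X[:, 1] = X[:, 0] + 0.1 * rng.normal(size=n)        # a strongly correlated pair, as in LD
        beta = np.zeros(p)
        beta[[0, 50, 300]] = [1.0, -0.8, 0.6]               # the "causal" variables
        y = X @ beta + rng.normal(size=n)

        lasso = LassoCV(cv=5).fit(X, y)
        selected = np.flatnonzero(lasso.coef_ != 0.0)
        print("selected variables:", selected[:10], "... total:", selected.size)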
  • Kauppala, Tuuli (2021)
    Children's height and weight development remains a subject of interest, especially due to the increasing prevalence of overweight and obesity in children. With statistical modelling, height and weight development can be examined as separate or connected outcomes, aiding the understanding of the phenomenon of growth. As a biological connection between height and weight development can be assumed, their joint modelling is expected to be beneficial. A further advantage of joint modelling is the convenience it brings to Body Mass Index (BMI) prediction. In this thesis, we modelled longitudinal data on children's heights and weights from the Finlapset register of the Institute for Health and Welfare (THL). The research aims were to predict the modelled quantities together with the BMI, to interpret the obtained parameters in relation to the phenomenon of growth, and to investigate the impact of municipalities on the growth of children. The dataset's irregular, register-based nature, together with positively skewed, heteroscedastic weight distributions and within- and between-subject variability, suggested Hierarchical Linear Models (HLMs) as the modelling method of choice. We used HLMs in a Bayesian setting, with the benefits of incorporating existing knowledge and obtaining the full posterior predictive distribution for the outcome variables. HLMs were compared with the less suitable classical linear regression model, and bivariate and univariate HLMs with or without area as a covariate were compared in terms of their posterior predictive precision and accuracy. One of the main research questions was the model's ability to predict the BMI of the child, which we assessed with various posterior predictive checks (PPC). The most suitable model was used to estimate growth parameters of 2-6-year-old males and females in Vihti, Kirkkonummi and Tuusula. With the parameter estimates, we could compare the growth of males and females, assess the differences of within-subject and between-subject variability on growth, and examine the correlation between height and weight development. Based on the work, we could conclude that the bivariate HLM constructed provided the most accurate and precise predictions, especially for the BMI. The area covariates did not provide additional advantage to the models. Overall, Bayesian HLMs are a suitable tool for the register-based dataset of this work, and together with a log-transformation of height and weight they can be used to model skewed and heteroscedastic longitudinal data. However, the modelling would ideally require more observations per individual than we had, and a proper out-of-sample predictive evaluation would ensure that the current models are not over-fitted with regard to the data. Nevertheless, the built models can already provide insight into contemporary Finnish childhood growth and can be used to simulate and predict future BMI population distributions.
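    A stripped-down hierarchical model in the spirit of the one described above could be written in PyMC roughly as follows; for brevity the sketch models only log-weight with child-specific intercepts and slopes, and the priors, names and fake data are illustrative assumptions rather than the thesis' specification.

        # Sketch of a Bayesian hierarchical growth model for log-weight (illustrative, univariate for brevity).
        import numpy as np
        import pymc as pm

        rng = np.random.default_rng(0)
        n_children, n_obs = 50, 4
        child = np.repeat(np.arange(n_children), n_obs)
        age = rng.uniform(2, 6, size=child.size)                            # ages 2-6 years
        log_w = np.log(10 + 2.5 * age) + rng.normal(0, 0.08, child.size)    # fake register-like data

        with pm.Model() as growth_model:
            mu_a = pm.Normal("mu_a", 2.5, 1.0)
            mu_b = pm.Normal("mu_b", 0.1, 0.5)
            sigma_a = pm.HalfNormal("sigma_a", 0.5)
            sigma_b = pm.HalfNormal("sigma_b", 0.5)
            a = pm.Normal("a", mu_a, sigma_a, shape=n_children)             # child-specific intercepts
            b = pm.Normal("b", mu_b, sigma_b, shape=n_children)             # child-specific slopes
            sigma = pm.HalfNormal("sigma", 0.5)
            pm.Normal("obs", mu=a[child] + b[child] * age, sigma=sigma, observed=log_w)
            idata = pm.sample(500, tune=500, chains=2, target_accept=0.9)

        print(idata.posterior["mu_b"].mean().item())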
  • Frosti, Miika (2022)
    This thesis deals with Dirichlet problems posed in the hyperbolic unit ball of C^2. The aim of the work is to identify, among the solutions of the problem, those functions that are smooth, i.e. infinitely differentiable. To this end, the Dirichlet problems defined in the unit disc and in the half-space of R^2 are first described, together with how their solutions are constructed. For the problems in both domains, domain-specific Green's functions are constructed, from which the Poisson kernel is derived. With this kernel one obtains a smooth solution to the Dirichlet problem. After this, the hyperbolic unit ball of C^2 is introduced, and it is explained how the Dirichlet problems defined in it differ from the problems of the unit ball of R^2. Most significant for the topic is the difference in the properties of the Euclidean and the hyperbolic Laplace-Beltrami operator. Once the most important differences have been established, it can be proved that the function defined by means of the Poisson-Szegő kernel solves the Dirichlet problem. It is, however, possible to show by an example that the solutions are not necessarily smooth. In order to distinguish the smooth functions among these solutions, spherical harmonics must be used. Their most important features are described both in real space and in complex space. With the help of these functions and hypergeometric functions, a new form can be given for the Poisson-Szegő kernel, from which the final result of the thesis can in turn be derived. This final result is that the solutions of the Dirichlet problems of the unit ball are smooth if and only if the solutions are pluriharmonic.
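    In the planar case treated first, the Green's function of the unit disc leads to the classical Poisson kernel, and the solution of the Dirichlet problem with continuous boundary data $f$ can be written as \[u(re^{i\theta})=\frac{1}{2\pi}\int_0^{2\pi}\frac{1-r^2}{1-2r\cos(\theta-t)+r^2}\,f(e^{it})\,dt,\qquad 0\le r<1;\] the Poisson-Szegő kernel plays the analogous role in the hyperbolic unit ball of C^2 discussed above.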
  • Virri, Maria (2021)
    Bonus-malus systems are used globally to determine insurance premiums of motor liability policy-holders by observing past accident behavior. In these systems, policy-holders move between classes that represent different premiums. The number of accidents is used as an indicator of driving skills or risk. The aim of bonus-malus systems is to assign premiums that correspond to risks by increasing premiums of policy-holders that have reported accidents and awarding discounts to those who have not. Many types of bonus-malus systems are used and there is no consensus about what the optimal system looks like. Different tools can be utilized to measure the optimality, which is defined differently according to each tool. The purpose of this thesis is to examine one of these tools, elasticity. Elasticity aims to evaluate how well a given bonus-malus system achieves its goal of assigning premiums fairly according to the policy-holders’ risks by measuring the response of the premiums to changes in the number of accidents. Bonus-malus systems can be mathematically modeled using stochastic processes called Markov chains, and accident behavior can be modeled using Poisson distributions. These two concepts of probability theory and their properties are introduced and applied to bonus-malus systems in the beginning of this thesis. Two types of elasticities are then discussed. Asymptotic elasticity is defined using Markov chain properties, while transient elasticity is based on a concept called the discounted expectation of payments. It is shown how elasticity can be interpreted as a measure of optimality. We will observe that it is typically impossible to have an optimal bonus-malus system for all policy-holders when optimality is measured using elasticity. Some policy-holders will inevitably subsidize other policy-holders by paying premiums that are unfairly large. More specifically, it will be shown that, for bonus-malus systems with certain elasticity values, lower-risk policy-holders will subsidize the higher-risk ones. Lastly, a method is devised to calculate the elasticity of a given bonus-malus system using programming language R. This method is then used to find the elasticities of five Finnish bonus-malus systems in order to evaluate and compare them.
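    A small numerical sketch of an asymptotic elasticity computation of the kind described above: the transition matrix of a toy three-class system is built from Poisson claim probabilities, its stationary distribution gives the mean stationary premium $B(\lambda)$, and the elasticity is approximated as $\eta(\lambda)\approx\frac{dB(\lambda)}{d\lambda}\frac{\lambda}{B(\lambda)}$ by numerical differentiation. The system's rules and premiums are invented for illustration and do not correspond to the Finnish systems studied in the thesis.

        # Toy bonus-malus system: 3 classes, move down one class after a claim-free year,
        # up one class per claim (capped). Asymptotic elasticity via the stationary distribution.
        import numpy as np
        from scipy.stats import poisson

        premiums = np.array([50.0, 100.0, 150.0])      # premiums of classes 0 (best), 1, 2

        def transition_matrix(lam, n_classes=3):
            P = np.zeros((n_classes, n_classes))
            for i in range(n_classes):
                for k in range(10):                    # k claims during the year
                    j = max(i - 1, 0) if k == 0 else min(i + k, n_classes - 1)
                    P[i, j] += poisson.pmf(k, lam)
                P[i, n_classes - 1] += 1.0 - poisson.cdf(9, lam)   # lump the tail into the worst class
            return P

        def mean_stationary_premium(lam):
            P = transition_matrix(lam)
            vals, vecs = np.linalg.eig(P.T)            # stationary distribution: left eigenvector for eigenvalue 1
            pi = np.real(vecs[:, np.argmin(np.abs(vals - 1.0))])
            pi = pi / pi.sum()
            return float(pi @ premiums)

        lam, h = 0.1, 1e-4
        B = mean_stationary_premium(lam)
        dB = (mean_stationary_premium(lam + h) - mean_stationary_premium(lam - h)) / (2 * h)
        print("elasticity at lambda=0.1:", round(dB * lam / B, 3))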
  • Heikkuri, Vesa-Matti (2022)
    This thesis studies equilibrium in a continuous-time overlapping generations (OLG) model. OLG models are used in economics to study the effect of demographics and life-cycle behavior on macroeconomic variables such as the interest rate and aggregate investment. These models are typically set in discrete time but continuous-time versions have also received attention recently for their desirable properties. Competitive equilibrium in a continuous-time OLG model can be represented as a solution to an integral equation. This equation is linear in the special case of logarithmic utility function. This thesis provides the necessary and sufficient conditions under which the linear equation is a convolution type integral equation and derives a distributional solution using Fourier transform. We also show that the operator norm of the integral operator is not generally less than one. Hence, the equation cannot be solved using Neumann series. However, in a special case the distributional solution is characterized by a geometric series on the Fourier side when the operator norm is equal to one.
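    The solution method can be illustrated with a generic convolution equation: if the equilibrium condition takes the form $u-k*u=f$, then taking Fourier transforms turns the convolution into a product and formally gives \[\hat{u}(\xi)=\frac{\hat{f}(\xi)}{1-\hat{k}(\xi)},\] while the Neumann series $u=\sum_{m\ge 0}k^{*m}*f$ converges only when the operator norm of $g\mapsto k*g$ is strictly less than one, which is exactly the condition noted above to fail in general.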
  • Lahdensuo, Sofia (2022)
    The Finnish Customs collects and maintains the statistics on Finland's intra-EU trade with the Intrastat system. Companies with significant intra-EU trade are obligated to give monthly Intrastat declarations, and the statistics on Finnish intra-EU trade are compiled based on the information collected with the declarations. If a company does not submit its declaration in time, an estimation method is needed for the missing values. In this thesis we propose an automatic multivariate time series forecasting process for the estimation of the missing Intrastat import and export values. The forecasting is done separately for each company with missing values. For forecasting we use two-dimensional time series models, where one component is the import or export value of the company to be forecasted and the other component is the import or export value of the company's industrial group. To complement the time series forecasting we use forecast combining. Combined forecasts, for example averages of the obtained forecasts, have been found to perform well in terms of forecast accuracy compared to forecasts created by individual methods. In the forecasting process we use two multivariate time series models: the Vector Autoregressive (VAR) model and a specific VAR model called the Vector Error Correction (VEC) model. The choice of the model is based on the stationarity properties of the time series to be modelled. An alternative to the VEC model is the so-called augmented VAR model, which is a deliberately over-fitted VAR model. We use the VEC model and the augmented VAR model together by using the average of the forecasts created with them as the forecast for the missing value. When the usual VAR model is used, only the forecast created by that single model is used. The forecasting process is designed to be as automatic and as fast as possible; therefore the estimation of a time series model for a single company is made as simple as possible, and only statistical tests that can be applied automatically are used in the model building. We compare the forecast accuracy of the forecasts created with the automatic forecasting process to that of forecasts created with two simple forecasting methods. For the time series deemed non-stationary, the Naïve forecast performs well in terms of forecast accuracy compared to the time series model based forecasts. On the other hand, for the time series deemed stationary, the average over the past 12 months performs well as a forecast compared to the time series model based forecasts. We also consider forecast combinations created by calculating the average of the time series model based forecasts and the simple forecasts. In line with the literature, the forecast combinations perform overall better in terms of forecast accuracy than the forecasts based on individual models.
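    A compressed sketch of the forecasting step for one company could look as follows with statsmodels: a VEC model and a generously lagged ("augmented") VAR are fitted to the two-dimensional series and their one-step forecasts are averaged. The lag orders, the cointegration rank and the data are placeholders, not choices made in the thesis.

        # Sketch: one-step forecast for a 2-D series (company value, industry-group value), VECM/VAR average.
        import numpy as np
        import pandas as pd
        from statsmodels.tsa.api import VAR
        from statsmodels.tsa.vector_ar.vecm import VECM

        rng = np.random.default_rng(0)
        n = 60
        industry = np.cumsum(rng.normal(0.5, 1.0, n))             # placeholder monthly series
        company = 0.8 * industry + rng.normal(0.0, 1.0, n)
        data = pd.DataFrame({"company": company, "industry": industry})

        # VEC model forecast (used when the series are deemed non-stationary but cointegrated).
        vecm_res = VECM(data, k_ar_diff=2, coint_rank=1).fit()
        f_vecm = vecm_res.predict(steps=1)[0, 0]                  # forecast of the company component

        # "Augmented" (deliberately generously lagged) VAR forecast in levels.
        var_res = VAR(data).fit(maxlags=3)
        f_var = var_res.forecast(data.values[-var_res.k_ar:], steps=1)[0, 0]

        print("combined forecast for the company value:", round((f_vecm + f_var) / 2.0, 2))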
  • Nikkanen, Leo (2022)
    Often in spatial statistics the modelled domain contains physical barriers that can affect how the modelled phenomenon behaves. Such a barrier can be, for example, land when modelling a fish population, or a road for different animal populations. A commonly used model in spatial statistics is the stationary Gaussian model, because of its modest computational requirements and the relatively easy interpretation of its results. A physical barrier has no effect on this type of model unless the barrier is turned into a variable, but this can cause issues in the selection of the polygon. In this thesis I discuss how a non-stationary Gaussian model can be deployed in cases where the spatial domain contains physical barriers. This non-stationary model reduces the spatial correlation continuously towards zero in areas that are considered a physical barrier. When the correlation is chosen to decay smoothly to zero, the model is more likely to produce similar output with slightly different polygons. The advantage of the barrier model is that it is as fast to train as the stationary model, because both models can be trained using the finite element method (FEM), with which we can solve stochastic partial differential equations (SPDEs). This method represents the continuous random field on a discrete mesh, and the computational requirements increase as the number of nodes in the mesh increases. In order to create the stationary and non-stationary models, I describe the required methods, such as Bayesian statistics, stochastic processes and covariance functions, in the second chapter. I use these methods to define a spatial random effect model; one commonly used spatial model of this kind is the Gaussian latent variable model. At the end of the second chapter, I describe how the barrier model is created and what requirements it has. The barrier model is based on the Matérn model, a Gaussian random field that can be represented using the Matérn covariance function. The second chapter ends with a description of how to create the mesh mentioned above and how the FEM is used to solve the SPDE. The performance of the stationary and non-stationary Gaussian models is first tested by training both models with simulated data. The simulated data is a random sample from a polygon of Helsinki where the coastline is interpreted as a physical barrier. The results show that the barrier model estimates the true parameters better than the stationary model. The last chapter contains a data analysis of the rat populations in Helsinki. The data contain the number of rat observations in each zip code and a set of covariates. Both models, stationary and non-stationary, are trained with and without covariates, and the best model out of these four was the stationary model with covariates.
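    The stationary building block mentioned above, the Matérn covariance, can be evaluated for a set of locations for example with scikit-learn; in a barrier model this correlation would additionally be forced towards zero across the barrier, which the plain kernel below does not do. The locations and parameters are illustrative.

        # Evaluate a Matern covariance matrix for a few 2-D locations (stationary case only).
        import numpy as np
        from sklearn.gaussian_process.kernels import Matern

        coords = np.array([[0.0, 0.0],
                           [1.0, 0.0],
                           [0.0, 2.0],
                           [3.0, 3.0]])                  # e.g. centroids of zip-code areas

        kernel = Matern(length_scale=1.5, nu=1.5)        # smoothness nu=1.5, range-like length scale
        K = kernel(coords)                               # 4 x 4 covariance (correlation) matrix
        print(np.round(K, 3))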
  • Sohkanen, Pekka (2021)
    The fields of insurance and financial mathematics require increasingly intricate descriptors of dependency. In the realm of financial mathematics, this demand arises from globalisation effects over the past decade, which have caused financial asset returns to exhibit increasingly intricate dependencies between each other. Of particular interest are measurements describing the probabilities of simultaneous occurrences between unusually negative stock returns. In insurance mathematics, the ability to evaluate probabilities associated with the simultaneous occurrence of unusually large claim amounts can be crucial for both the solvency and the competitiveness of an insurance company. These sorts of dependencies are referred to by the term tail dependence. In this thesis, we introduce the concept of tail dependence and the tail dependence coefficient, a tool for determining the amount of tail dependence between random variables. We also present statistical estimators for the tail dependence coefficient. Favourable properties of these estimators are investigated and a simulation study is executed in order to evaluate and compare estimator performance under a variety of distributions. Some necessary stochastics concepts are presented. Mathematical models of dependence are introduced. Elementary notions of extreme value theory and empirical processes are touched on. These motivate the presented estimators and facilitate the proofs of their favourable properties.
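    One simple nonparametric estimator of the kind discussed above can be sketched as follows: the empirical upper tail dependence coefficient counts how often both components exceed their respective k-th largest order statistics. The simulated bivariate Student-t data, the correlation and the choice of k are illustrative assumptions.

        # Empirical estimator of the upper tail dependence coefficient on simulated bivariate t data.
        import numpy as np

        rng = np.random.default_rng(0)
        n, df, rho = 100_000, 3, 0.6

        # Bivariate Student-t sample (has genuine tail dependence), built from a normal and a chi-square.
        cov = np.array([[1.0, rho], [rho, 1.0]])
        Z = rng.multivariate_normal([0.0, 0.0], cov, size=n)
        V = rng.chisquare(df, size=n)
        T = Z / np.sqrt(V / df)[:, None]
        x, y = T[:, 0], T[:, 1]

        # lambda_U estimate: among the k largest x-values, the share whose y also exceeds its threshold.
        k = int(np.sqrt(n))                   # a common rule-of-thumb choice for k
        x_thr = np.sort(x)[-k]
        y_thr = np.sort(y)[-k]
        lam_hat = np.mean((x > x_thr) & (y > y_thr)) * n / k
        print("estimated upper tail dependence:", round(lam_hat, 3))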