Browsing by master's degree program "Master's Programme in Mathematics and Statistics"
-
Adjusting contacts with observed infections: consequences on predictions about vaccine effectiveness (2023)Contacts between individuals play a central part in infectious disease modelling. Social or physical contacts are often determined through surveys. These types of contacts may not accurately represent the truly infectious contacts due to demographic differences in susceptibility and infectivity. In addition, surveyed data is prone to statistical biases and errors. For these reasons, a transmission model based on surveyed contact data may make predictions that are in conflict with real-life observations. The surveyed contact structure must be adjusted to improve the model and produce reliable predictions. The adjustment can be done in multiple different ways. We present five adjustment methods and study how the choice of method impacts a model’s predictions about vaccine effectiveness. The population is stratified into n groups. All five adjustment methods transform the surveyed contact matrix such that its normalised leading eigenvector (the model-predicted stable distribution of infections) matches the observed distribution of infections. The eigenvector method directly adjusts the leading eigenvector. It changes contacts antisymmetrically: if contacts from group i to group j increase, then contacts from j to i decrease, and vice versa. The susceptibility method adjusts the group-specific susceptibility of individuals. The changes in the contact matrix occur row-wise. Analogously, the infectivity method adjusts the group-specific infectivity; changes occur column-wise. The symmetric method adjusts susceptibility and infectivity in equal measure. It changes contacts symmetrically with respect to the main diagonal of the contact matrix. The parametrised weighting method uses a parameter 0 ≤ p ≤ 1 to weight the adjustment between susceptibility and infectivity. 
It is a generalisation of the susceptibility, infectivity and symmetric methods, which correspond to p = 0, p = 1 and p = 0.5, respectively. For demonstrative purposes, the adjustment methods were applied to a surveyed contact matrix and infection data from the COVID-19 epidemic in Finland. To measure the impact of the method on vaccination effectiveness predictions, the relative reduction of the basic reproduction number was computed for each method using Finnish COVID-19 vaccination data. We found that the eigenvector method has no impact on the relative reduction (compared to the unadjusted baseline case). As for the other methods, the predicted effectiveness of vaccination increased the more infectivity was weighted in the adjustment (that is, the larger the value of the parameter p). In conclusion, our study shows that the choice of adjustment method has an impact on model predictions, namely those about vaccination effectiveness. Thus, the choice should be considered when building infectious disease models. The susceptibility and symmetric methods seem the most natural choices in terms of contact structure. Choosing the "optimal" method is a potential topic to explore in future research.
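As a minimal illustration of the idea (a sketch, not the authors' code): for the susceptibility method, row-scaling has a closed form. If C is the surveyed contact matrix with leading eigenvalue λ and v is the observed infection distribution, scaling row i by s_i = λ v_i / (Cv)_i makes v an eigenvector of the adjusted matrix with eigenvalue λ, hence (by Perron-Frobenius, since everything is positive) its leading eigenvector. The 3-group numbers below are invented:

```python
import numpy as np

def susceptibility_adjust(C, v):
    """Row-scale contact matrix C so that the observed infection
    distribution v becomes the leading eigenvector of the result.
    Keeps the leading eigenvalue of C unchanged."""
    lam = np.max(np.real(np.linalg.eigvals(C)))  # leading eigenvalue of C
    s = lam * v / (C @ v)                        # group-specific susceptibility
    return np.diag(s) @ C                        # row-wise change

# Hypothetical 3-group surveyed contacts and observed infection shares
C = np.array([[5.0, 2.0, 1.0],
              [2.0, 4.0, 2.0],
              [1.0, 2.0, 3.0]])
v = np.array([0.5, 0.3, 0.2])

A = susceptibility_adjust(C, v)
w, V = np.linalg.eig(A)
lead = np.real(V[:, np.argmax(np.real(w))])
lead = lead / lead.sum()   # normalised leading eigenvector of A
print(np.round(lead, 6))   # matches the observed distribution v
```

The infectivity method would scale columns analogously; the parametrised weighting method splits the correction between rows and columns.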
-
(2021)This thesis deals with proper holomorphic mappings of the space $\cc^n$. Their definition is based on the notion of a proper map and on holomorphy. Let $\Omega,D\subset\cc^n$, and let $n>1$. A map $F:\Omega\to D$ is proper if $F^{-1}(K)$ is a compact subset of $\Omega$ for every compact set $K\subset D$. Holomorphy means complex analyticity of the map, complex differentiability, and that the map satisfies the Cauchy-Riemann equations. A function $f$ is holomorphic in an open set $\Omega$ of $\cc^n$ if $f:\Omega\to\cc$, $f\in C^1(\Omega)$, and $f$ satisfies the Cauchy-Riemann equations $\overline{\partial}_jf=\frac{\partial f}{\partial\overline{z_j}}=0$ for every $j=1,\ldots,n$. A map $F=(f_1,\ldots,f_m):\Omega\to\cc^m$ is holomorphic in $\Omega$ if the functions $f_k$ are holomorphic for every $k=1,\ldots,m$. If $\Omega$ and $D$ are complex sets and $F:\Omega\to D$ is a proper holomorphic map, then $F^{-1}(y_0)$ is a compact analytic subvariety of $\Omega$ for every point $y_0\in D$. A proper map can also be characterised as follows: a map $F:\Omega\to D$ is proper if and only if $F$ maps the boundary $\partial\Omega$ to the boundary $\partial D$ in the following sense: \[\text{if}\,\{z_j\}\subset\Omega\quad\text{is a sequence with}\,\lim_{j\to\infty}d(z_j,\partial\Omega)=0,\,\text{then}\,\lim_{j\to\infty}d(F(z_j),\partial D)=0.\] By this characterisation, the study of maps $F:\Omega\to D$ leads to the geometric function theory of maps taking $\partial\Omega$ to $\partial D.$ It turns out that proper holomorphic maps extend continuously to the boundaries of their domains. The study of holomorphic maps is, in part, connected to solving Dirichlet problems. 
In the classical Dirichlet problem, given a continuous function $f$ on $\partial\Omega\subset\mathbf{R}^m$, one seeks a real-valued function that is harmonic in $\Omega$, continuous on the closure $\overline{\Omega}$, and whose restriction to the boundary $\partial\Omega$ equals $f$. The thesis goes through the definitions and concepts on which proper holomorphic maps are built and explains the mathematical structure behind these concepts. The thesis proves the following properties of a proper holomorphic map $F:\Omega\to\Omega'$: $F$ is a closed map; $F$ is an open map; $F^{-1}(w)$ is finite for every $w\in\Omega'$; there exists an integer $m$ such that the number of points of $F^{-1}(w)$ is $m$ for every regular value of $F$; the number of points of $F^{-1}(w)$ is smaller than $m$ for every critical value of $F$; the set of critical values of $F$ is a null variety of $\Omega'$; $F(V)$ is a subvariety of $\Omega'$ whenever $V$ is a subvariety of $\Omega$; $F$ extends to a continuous map up to the boundary when its domains are strictly pseudoconvex; $F$ maps a sequence in its strictly pseudoconvex source domain that converges non-tangentially to the boundary to a sequence converging admissibly to the boundary of the target domain; and a proper holomorphic map $F$ from the unit ball of $\cc^n$ to itself is an automorphism.
-
(2024)Altruism refers to behavior by an individual that increases the fitness of another individual while decreasing its own. Despite seemingly going against traditional theories of evolution, it is actually quite common in the animal kingdom. Understanding why and how altruistic behaviors happen has long been a central focus in evolutionary ecology, and this thesis aims to contribute to this area of study. This thesis focuses on infinite lattice models. Lattice models are a type of spatially explicit model, meaning that they describe the dynamics of a population in both time and space. In particular, we consider a modification of the simplest type of lattice model (the contact process), which considers only birth and death events. The objective is to study altruistic behaviours that help neighbours within populations residing on a lattice. To achieve this, we assume that, apart from giving birth and dying, individuals transition to a permanently non-reproductive state at a certain rate. We use ordinary differential equations to describe the dynamics of this population and to develop our model. The population we initially have in the lattice (the resident population) reaches a positive equilibrium, which we calculate numerically using Matlab. Through linear stability analysis, we can show that this equilibrium is asymptotically stable, which means that with time, the resident population will stabilize at this equilibrium. Once the resident reaches this equilibrium, we introduce a mutant population in the lattice with the same characteristics as the resident, except that it has a different post-reproductive death rate. Linear stability analysis of the extinct equilibrium of the mutant shows that mutants with a higher post-reproductive death rate than the residents gain a competitive advantage. This is because, by dying faster, post-reproductive mutants make more space for other mutants to reproduce. 
That result changes if we assume that post-reproductive individuals help their neighbours produce more offspring. In this case, we find that, depending on the amount of reproductive help given by the post-reproductive individuals, a higher post-reproductive death rate is no longer evolutionarily advantageous. In fact, we are able to determine that, in general, helping neighbours reproduce is a better strategy than sacrificing oneself to make room for reproductive neighbours. Lastly, we examine this reproductive help as a function of the post-reproductive mortality rate. With this, our goal is to find an evolutionarily stable strategy (ESS) for the resident population, that is, a strategy that cannot be displaced by any alternative strategy.
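A mean-field caricature of the birth/death/retirement dynamics (ignoring the lattice pair correlations the thesis actually tracks) can be integrated to its resident equilibrium in a few lines. All rates below are invented for illustration:

```python
import numpy as np

# Mean-field caricature of the lattice model (hypothetical parameters;
# the thesis itself works with an infinite-lattice contact process).
b, d = 4.0, 1.0      # birth and death rates of reproductive individuals
sigma = 0.5          # rate of becoming permanently post-reproductive
mu = 1.0             # post-reproductive death rate

def step(r, q, dt=1e-3):
    # r: density of reproductive sites, q: post-reproductive, rest empty
    dr = b * r * (1 - r - q) - d * r - sigma * r
    dq = sigma * r - mu * q
    return r + dt * dr, q + dt * dq

r, q = 0.1, 0.0
for _ in range(200_000):       # Euler-integrate to the resident equilibrium
    r, q = step(r, q)
print(round(r, 4), round(q, 4))
```

At equilibrium the empty-site density is (d + sigma)/b, so with these rates r ≈ 0.4167 and q ≈ 0.2083; a higher mu shrinks q and frees space, which is exactly the trade-off the invasion analysis examines.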
-
(2024)This paper presents Hilbert's Nullstellensatz, one of the fundamental theorems of algebraic geometry. The main approach is the algebraic proof, together with all of the background necessary to understand and appreciate the Nullstellensatz. The paper also contains the Combinatorial Nullstellensatz, its proof and some applications. This thesis is aimed at any science student who wants to understand Hilbert's Nullstellensatz or gain intuitive insight. The paper starts with an explanation of the approach: the theorem is first presented in its algebraic form and the proof follows from the given background. After that, some more geometric definitions relating to common zero sets or varieties are given, and then Hilbert's Nullstellensatz is stated in a slightly different form. At the end of the paper some combinatorial definitions and theorems are introduced so that the Combinatorial Nullstellensatz can be proven. From this follows the Cauchy-Davenport theorem. At the very end of the introduction, a case of the theorem is presented for readers who do not yet know what Hilbert's Nullstellensatz is about. The second section of the paper contains all of the algebraic background material. It starts from the most basic definitions of groups and homomorphisms and then continues on to rings, ideals, radicals and quotient rings and the relations between them. Some additional theorems are marked with a star to denote that the statement is not necessary for the algebraic proof of Hilbert's Nullstellensatz, but it might be necessary for the geometric or combinatorial proofs. Since these statements are fully algebraic and mostly contain definitions that have been introduced early in the paper, they are placed in the algebraic section near similar theorems and definitions. The second section also covers fields, algebras, polynomials and transcendental elements. In Sections 3 and 4 we follow the algebraic proof of Daniel Allcock and expand on some steps. 
Section 3 contains Zariski's Lemma and a form of the Weak Nullstellensatz, along with their proofs. These statements allow us to skip Noetherian and Jacobson rings by finding the necessary conclusions from polynomial rings over fields. This section also includes an existence result that can also be found in other literature under the name of Weak Nullstellensatz. Afterwards, Section 4 follows Allcock's proof of Hilbert's Nullstellensatz, working from the Weak Nullstellensatz and applying the Rabinowitsch trick. Section 5 explains how the Nullstellensatz brings together algebra and geometry. It is split into three parts: first, some preliminary definitions and theorems are given. One of the fundamental definitions is a variety, which is simply the common zero set of some polynomials. After the new definitions we again meet Hilbert's Nullstellensatz and show that it is equivalent to the previous form. Using the newfound equivalence we show that varieties constitute a topology and consider the dimension of such spaces. Lastly, we show the equivalence of algebra homomorphisms and the corresponding variety morphisms. Section 6 slightly diverges and considers the Combinatorial Nullstellensatz introduced by Noga Alon. This theorem resembles Hilbert's Nullstellensatz, yet is different when looked at carefully. We consider the original proof by Alon, and additionally show how the Combinatorial Nullstellensatz follows from Hilbert's Nullstellensatz, using the paper by Goel, Patil and Verma. We conclude the paper by proving the Cauchy-Davenport theorem, using the Combinatorial Nullstellensatz. This paper does not delve deeply into any particular topic. Quite the contrary, it connects many different theorems and shows equivalences between them while referencing additional reading material.
-
(2019)The purpose of this thesis is to lead the reader to the definitions and theory of the Ext functor and group cohomology, and thereby introduce the reader to central concepts of homological algebra. The first chapter presents the background knowledge assumed by the thesis, beyond the contents of basic courses in algebra and algebraic topology. The second chapter introduces the group extension problem and solves it in the case where the given subgroup is abelian. Group extensions are shown to be in one-to-one correspondence with the elements of a certain group, and in particular those group extensions that are semidirect products of the given groups are studied. The formulas that arise are observed to correspond to certain formulas appearing in the definition of the singular cochain complex. The third chapter defines the bar resolution and the normalised bar resolution, and on their basis, group cohomology. First, as a technical aside, the concept of a G-module is defined, which allows group actions to be handled like modules. The central result of the chapter is that the bar resolution and the normalised bar resolution are homotopy equivalent -- a generalisation of this result guarantees, among other things, that the Ext functor is well defined. The chapter concludes by computing the cohomology groups of a cyclic group. The fourth chapter defines resolutions in full generality, as well as projective and injective modules and resolutions. Bar resolutions are shown to be projective, and the proof that they have the same homotopy type is observed to generalise to projective and injective resolutions. At the same time, the definition of group cohomology is extended, as the bar resolution can be replaced by any projective resolution. The chapter also defines exactness of functors, and in particular the connection between the exactness of the Hom functor and projective and injective modules is studied. 
The fifth chapter defines the notion of a right derived functor and, as a special case, the Ext functor, which is the right derived functor of the Hom functor. Since the Hom functor is a bifunctor, it has two right derived functors, and the main result of the chapter shows that they are isomorphic. The definition of group cohomology is extended further when it is expressed in terms of the Ext functor, which makes it possible to compute group cohomology also via injective resolutions. The final chapter collects related topics that are touched upon in the text but whose treatment was left outside the thesis for reasons of scope.
-
(2021)Online hypothesis testing occurs in many branches of science. Most notably it is of use when there are too many hypotheses to test with traditional multiple hypothesis testing, or when the hypotheses are created one by one. When testing multiple hypotheses one by one, the order in which the hypotheses are tested often has a great influence on the power of the procedure. In this thesis we investigate the applicability of reinforcement learning tools to the exploration-exploitation problem that often arises in online hypothesis testing. We show that a common reinforcement learning tool, Thompson sampling, can be used to gain a modest amount of power with a method for online hypothesis testing called alpha-investing. Finally, we examine the size of this effect using both synthetic data and a practical case involving simulated data on urban pollution. We found that, by choosing the order of tested hypotheses with Thompson sampling, the power of alpha-investing is improved. The level of improvement depends on the assumptions that the experimenter is willing to make and on their validity. In a practical situation the presented procedure rejected up to 6.8 percentage points more hypotheses than testing the hypotheses in a random order.
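The combination can be sketched in a few lines: a Beta posterior per hypothesis stream drives Thompson sampling of which stream to test next, while a simplified alpha-investing rule (in the spirit of Foster and Stine) manages the wealth. Every parameter and the two-stream setup below are illustrative assumptions, not the thesis's procedure:

```python
import random

random.seed(0)

# Toy online testing: two streams of hypotheses; stream 1 carries more
# signal.  A true signal yields a tiny p-value, a null a uniform one.
def draw_p(stream):
    signal_prob = 0.4 if stream == 1 else 0.05
    if random.random() < signal_prob:
        return random.uniform(0, 0.005)
    return random.uniform(0, 1)

wealth, payout = 0.05, 0.025   # illustrative alpha-investing parameters
succ = {0: 1, 1: 1}            # Beta posterior pseudo-counts per stream
fail = {0: 1, 1: 1}
rejections = 0

for _ in range(500):
    if wealth <= 0:
        break
    # Thompson sampling: test a hypothesis from the stream whose
    # posterior draw of "rejection probability" is largest.
    stream = max((0, 1), key=lambda s: random.betavariate(succ[s], fail[s]))
    alpha = wealth / 2         # stake part of the current wealth
    if draw_p(stream) <= alpha:
        wealth += payout       # earn wealth back on a rejection
        succ[stream] += 1
        rejections += 1
    else:
        wealth -= alpha / (1 - alpha)
        fail[stream] += 1

print(rejections)
```

The point of the sampler is that the procedure learns to spend its alpha-wealth on the richer stream, which is the source of the power gain the thesis measures.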
-
(2024)This thesis studies methods for finding crease patterns for surfaces of revolution with different Gaussian curvatures using variations of the Miura-ori origami pattern. Gaussian curvature is an intrinsic property of a surface in the sense that it depends only on the inner properties of the surface in question. Usually determining the Gaussian curvature of a surface can be difficult, but for surfaces of revolution it can be calculated easily. Examples of surfaces of revolution with different Gaussian curvatures include cylinders, spheres, catenoids, pseudospheres and tori, which are the surfaces of interest in this work. Miura-ori is a family of flat-foldable origami patterns consisting of a quadrilateral mesh. The regular pattern is a two-way periodic tessellation determined by the parameters around a single vertex, and it has a straight profile in all of its semi-folded forms. By relaxing the pattern to a one-way periodic tessellation we get a more diverse set of patterns called the semi-generalized Miura-ori (SGMO), which are determined by the parameters of a single column of vertices. By varying the angles of the creases related to these vertices we are also able to approximate curved profiles. Patterns for full surfaces of revolution can then be found by folding a thin strip of paper to an SGMO configuration that follows a wanted profile, after which the strip is repeated enough times horizontally so that the ends of the paper can be joined to form a full revolution. Three algorithms for finding a crease pattern that follows a wanted profile curve are discussed in the work. This includes a simple algorithm by Robert J. Lang in addition to two algorithms developed by the author, called the Equilateral triangles method and the Every second major fold follows the curve method. 
All three algorithms are explored both geometrically and by their pen-and-paper implementations which are described in detail so that the reader can utilize them without making any computations. Later, the three algorithms are tested on a set of profile curves for the surfaces of interest. Examples of full surfaces folded in real life are also given and the crease patterns for the models are included. The results showcase that each algorithm is suitable for finding patterns for our test set of surfaces and they usually have visually distinct appearances. The scale and proportions of the approximation matter greatly in terms of looks and feasibility of the pattern with all algorithms.
-
(2021)Electrical impedance tomography is a differential tomography method where current is injected into a domain and its interior distribution of electrical properties is inferred from measurements of electric potential around the boundary of the domain. Within the context of this imaging method the forward problem describes a situation where we are trying to deduce voltage measurements on the boundary of a domain given the conductivity distribution of the interior and the current injected into the domain through the boundary. Traditionally the problem has been solved either analytically or by using numerical methods like the finite element method. Analytical solutions have the benefit that they are efficient, but at the same time they have limited practical use, as solutions exist only for a small number of idealized geometries. In contrast, while numerical methods can represent arbitrary geometries, they are computationally more demanding. Many proposed applications for electrical impedance tomography rely on the method's ability to construct images quickly, which in turn requires efficient reconstruction algorithms. While existing methods can achieve near real-time speeds, exploring and expanding ways of solving the problem even more efficiently, possibly overcoming weaknesses of previous methods, can allow for more practical uses of the method. Graph neural networks provide a computationally efficient way of approximating partial differential equations that is accurate, mesh invariant and applicable to arbitrary geometries. Due to these properties, neural network solutions show promise as alternative methods of solving problems related to electrical impedance tomography. In this thesis we discuss the mathematical foundation of graph neural network approximations of solutions to the electrical impedance tomography forward problem and demonstrate through experiments that these networks are indeed capable of such approximations. 
We also highlight some beneficial properties of graph neural network solutions as our network is able to converge to an arguably general solution with only a relatively small training data set. Using only 200 samples with constant conductivity distributions, the network is able to approximate voltage distributions of meshes with spherical inclusions.
-
(2024)A presentation of the basic tools of traditional audio deconvolution and a supervised NMF algorithm to enhance a filtered and noisy speech signal.
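The supervised NMF idea can be sketched as follows: learn one nonnegative dictionary from clean speech spectra and one from noise, then, for the noisy mixture, keep both dictionaries fixed and fit only the activations, finally masking the mixture with the speech part. The random matrices stand in for STFT magnitudes; this is a generic sketch, not the thesis's algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)

def nmf(V, r, iters=200, W=None):
    """Multiplicative-update NMF: V ≈ W @ H.  If W is given it is kept
    fixed and only the activations H are learned."""
    n, m = V.shape
    fixed = W is not None
    W = rng.random((n, r)) if W is None else W
    H = rng.random((r, m))
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-9)
        if not fixed:
            W *= (V @ H.T) / (W @ H @ H.T + 1e-9)
    return W, H

# Toy magnitude "spectrograms" (random stand-ins for STFT magnitudes)
speech = rng.random((64, 100))
noise = rng.random((64, 100))
Ws, _ = nmf(speech, 8)            # speech dictionary (supervised step)
Wn, _ = nmf(noise, 8)             # noise dictionary

mix = 0.5 * speech + 0.5 * noise
W = np.hstack([Ws, Wn])
_, H = nmf(mix, 16, W=W)          # only activations learned for the mix
speech_est = Ws @ H[:8] / (W @ H + 1e-9) * mix   # Wiener-style mask
print(speech_est.shape)
```

In a real pipeline the mask would be applied to the complex STFT of the mixture and inverted back to a time-domain signal.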
-
(2020)In this thesis we look at the asymptotic approach to modeling randomly weighted heavy-tailed random variables and their sums. Heavy-tailed distributions, named after the defining property of having more probability mass in the tail than any exponential distribution, are essentially a way to include a large tail risk in a model in a realistic manner. Weighted sums of random variables are a versatile basic structure that can be adapted to model anything from claims over time to the returns of a portfolio, while giving the primary random variables heavy tails is a natural way to integrate extremal events into the models. The methodology introduced in this thesis offers an alternative to some of the prevailing and traditional approaches in risk modeling. Our main result, which we cover in detail, originates from "Randomly weighted sums of subexponential random variables" by Tang and Yuan (2014). It draws an asymptotic connection between the tails of randomly weighted heavy-tailed random variables and the tails of their sums, explicitly stating how the various tail probabilities relate to each other, in effect extending the idea that for sums of heavy-tailed random variables large total claims originate from a single source instead of accumulating from many smaller claims. A great merit of these results is that the random weights are, for the most part, allowed to lack an upper bound as well as to be arbitrarily dependent on each other. As for the applications, we first look at an explicit estimation method for computing extreme quantiles of a loss distribution, yielding values for a common risk measure known as Value-at-Risk. The methodology used can easily be adapted to a setting with similar preexisting knowledge, thereby demonstrating a straightforward way of applying the results. 
We then move on to examine the ruin problem of an insurance company, developing a setting and some conditions that can be imposed on the structures to permit an application of our main results, yielding an asymptotic estimate for the ruin probability. Additionally, to be more realistic, we introduce the approach of crude asymptotics, which requires a little less to be known about the primary random variables; we formulate a result similar in fashion to our main result and proceed to prove it.
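The "single source" principle for subexponential claims can be seen in a quick Monte Carlo experiment: for heavy-tailed summands, P(S_n > x) is asymptotically the same as P(max claim > x). The Pareto tail index, threshold and unit weights below are invented for illustration:

```python
import random

random.seed(1)

# Single-big-jump illustration: for subexponential claims,
# a large total is driven by one large claim, so the ratio
# P(sum > x) / P(max > x) tends to 1 as x grows.
alpha, n, trials, x = 1.5, 5, 200_000, 50.0

exceed_sum = exceed_max = 0
for _ in range(trials):
    claims = [random.paretovariate(alpha) for _ in range(n)]
    exceed_sum += sum(claims) > x
    exceed_max += max(claims) > x

ratio = exceed_sum / exceed_max
print(round(ratio, 2))   # moderately above 1 at finite x, → 1 as x → ∞
```

Since the claims are positive, max > x implies sum > x, so the ratio is at least 1 by construction; the thesis's results make the corresponding statement precise for randomly weighted sums.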
-
(2024)This thesis deals with the BFGS method, an iterative optimisation method. It is one of the quasi-Newton methods and is used in unconstrained nonlinear optimisation. Quasi-Newton methods approximate the Hessian matrix appearing in Newton's method, which is often difficult or too expensive to compute. Chapter 2 of the thesis covers basic facts about optimisation together with some other preliminaries. Chapter 3 deals with line search methods, optimisation methods in which a search direction is determined first and a step length after that. First the choice of a suitable step length is discussed and the Wolfe conditions are introduced, after which the convergence of line search methods is treated in general. Finally, Newton's method and quasi-Newton methods are discussed, and it is proved that convergence is quadratic for Newton's method and superlinear for quasi-Newton methods. Chapter 4 deals with the BFGS method, in which the inverse of the Hessian matrix is approximated. First the BFGS formula is derived, after which the implementation of the BFGS algorithm is discussed. It is then proved that the method converges if the objective function is smooth and strictly convex, and that the convergence is superlinear. In addition, the behaviour of the method in practice is examined through examples. Finally, Chapter 5 introduces the limited-memory BFGS method, which does not require storing a full matrix and is therefore particularly suited to solving large problems.
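A minimal BFGS sketch shows the two ingredients the abstract names: the quasi-Newton direction from an inverse-Hessian approximation, and its rank-two update. For brevity it uses simple Armijo backtracking rather than a full Wolfe line search, and the test function is an invented strictly convex quadratic:

```python
import numpy as np

def bfgs(f, grad, x0, tol=1e-8, max_iter=100):
    """Minimal BFGS sketch maintaining an approximation H of the
    inverse Hessian.  Armijo backtracking replaces the Wolfe line
    search, so this is an illustration, not the thesis's algorithm."""
    n = len(x0)
    x, H = np.asarray(x0, float), np.eye(n)
    g = grad(x)
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        p = -H @ g                      # quasi-Newton search direction
        t = 1.0
        while f(x + t * p) > f(x) + 1e-4 * t * (g @ p):
            t *= 0.5                    # backtracking step length
        s = t * p
        x_new = x + s
        y = grad(x_new) - g
        rho = 1.0 / (y @ s)             # positive for strictly convex f
        I = np.eye(n)
        # BFGS update of the inverse-Hessian approximation
        H = (I - rho * np.outer(s, y)) @ H @ (I - rho * np.outer(y, s)) \
            + rho * np.outer(s, s)
        x, g = x_new, grad(x_new)
    return x

# Strictly convex test function with minimum at (1, 2)
f = lambda x: (x[0] - 1) ** 2 + 10 * (x[1] - 2) ** 2
grad = lambda x: np.array([2 * (x[0] - 1), 20 * (x[1] - 2)])
x_min = bfgs(f, grad, [0.0, 0.0])
print(np.round(x_min, 6))   # ≈ [1, 2]
```

The limited-memory variant of Chapter 5 would store only a few recent (s, y) pairs instead of the full matrix H.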
-
(2019)Improving the quality of medical computed tomography reconstructions is an important research topic nowadays, when low-dose imaging is pursued to minimize the X-ray radiation inflicted on patients. Using lower radiation doses for imaging leads to noisier reconstructions, which then require postprocessing, such as denoising, in order to make the data up to par for diagnostic purposes. Reconstructing the data using iterative algorithms produces higher quality results, but they are computationally costly and not quite powerful enough to be used as such for medical analysis. Recent advances in deep learning have demonstrated the great potential of using convolutional neural networks in various image processing tasks. Performing image denoising with deep neural networks can produce high-quality and virtually noise-free predictions out of images originally corrupted with noise, in a computationally efficient manner. In this thesis, we survey the topics of computed tomography and deep learning for the purpose of applying a state-of-the-art convolutional neural network for denoising dental cone-beam computed tomography reconstruction images. We investigate how the denoising results of a deep neural network are affected if iteratively reconstructed images are used in training the network, as opposed to using traditionally reconstructed images. The results show that if the training data is reconstructed using iterative methods, the denoising results of the network improve notably. Also, we believe these results can be further improved and extended beyond the case of cone-beam computed tomography and the field of medical imaging.
-
(2024)This thesis is an empirical comparison of various methods of statistical matching applied to Finnish income and consumption data. The comparison is performed in order to map out some possible matching strategies for Statistics Finland to use in this imputation task and to compare the applicability of the strategies within specific datasets. For Statistics Finland, the main point of performing these imputations is in assessing consumption behaviour in years when consumption-related data is not explicitly collected. Within this thesis I compared the imputation of consumption data by imputing 12 consumption variables as well as their sum using the following matching methods: draws from the conditional distribution, distance hot deck, predictive mean matching, local residual draws and a gradient boosting approach. The donor dataset is a sample of households collected for the 2016 Finnish Household Budget Survey (HBS). The recipient dataset is a sample of households collected for the 2019 Finnish Survey of Income and Living Conditions (EU-SILC). In order to assess the quality of the imputations, I used numerical and visual assessments concerning the similarity of the weighted distributions of the consumption variables. The applied numerical assessments were the Kolmogorov-Smirnov (KS) test statistic as well as the Hellinger distance (HD), the latter of which was calculated for a categorical transformation of the consumption variables. Additionally, the similarities of the correlation matrices were assessed using the correlation matrix distance. Generally, distance hot deck and predictive mean matching fared relatively well in the imputation tasks. For example, in the imputation of transport-related expenditure, both produced KS test statistics of approximately 0.01-0.02 and HD of approximately 0.05, whereas the next best-performing method received scores of 0.04 and 0.09, thus representing slightly larger discrepancies. 
Comparing the two methods, particularly in the imputation of semicontinuous consumption variables, distance hot deck fared notably better than the predictive mean matching approach. As an example, in the consumption expenditure on alcoholic beverages and tobacco, distance hot deck produced values of the KS test statistic and HD of approximately 0.01 and 0.02 respectively, whereas the corresponding scores for predictive mean matching were 0.21 and 0.16. Ultimately, I would recommend considering both predictive mean matching and distance hot deck for further application, depending on the imputation task. This is because predictive mean matching can be applied more easily in different contexts, but in certain kinds of imputation tasks distance hot deck clearly outperforms predictive mean matching. Further assessment should be done for this data; in particular, the results should be validated with additional data.
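Predictive mean matching itself is simple to sketch: regress the consumption variable on covariates in the donor data, predict for both datasets, and give each recipient the observed value of the donor whose predicted mean is closest. The synthetic income/consumption numbers below merely stand in for the HBS and EU-SILC samples:

```python
import numpy as np

rng = np.random.default_rng(0)

def pmm(x_donor, y_donor, x_recipient):
    """Predictive mean matching sketch: fit a linear model on the donor
    data, predict for both datasets, and impute each recipient with the
    observed y of the donor with the closest predicted mean."""
    A = np.column_stack([np.ones(len(x_donor)), x_donor])
    beta, *_ = np.linalg.lstsq(A, y_donor, rcond=None)
    pred_d = A @ beta
    B = np.column_stack([np.ones(len(x_recipient)), x_recipient])
    pred_r = B @ beta
    idx = np.abs(pred_r[:, None] - pred_d[None, :]).argmin(axis=1)
    return y_donor[idx]

# Synthetic stand-ins for donor (HBS-like) and recipient (EU-SILC-like)
# households: income predicts consumption with noise (invented numbers).
income_d = rng.uniform(20, 80, 500)
consumption_d = 0.6 * income_d + rng.normal(0, 3, 500)
income_r = rng.uniform(20, 80, 200)

imputed = pmm(income_d, consumption_d, income_r)
print(imputed.shape)   # one imputed consumption value per recipient
```

Because the imputed values are always observed donor values, the method preserves the marginal shape of the consumption variable better than plain regression prediction, which is why it is a common baseline in this kind of comparison.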
-
(2024)This thesis examines the connection between two well-known systems of equations in fluid dynamics, the Navier-Stokes equations and the Euler equations, and the conditions under which their solutions converge to one another. The first chapter presents the background needed for the thesis, including the definitions of the weak derivative and of Sobolev spaces, together with several important function spaces and trace theorems. The second chapter treats the Navier-Stokes and Euler equations in more detail: it first gives the definition of the Navier-Stokes equations, then a definition of the existence of a solution, and concludes with the definition of the Euler equations. The fourth chapter presents the main topic of the thesis, namely the connection between solutions of the Navier-Stokes and Euler equations as the viscosity term tends to zero. The chapter presents a result of Tosio Kato giving conditions equivalent to a weak solution of the Navier-Stokes equations converging, as the viscosity vanishes, to a solution of the Euler equations. This result is proved in detail in the thesis. Finally, the last chapter presents James P. Kelliher's extensions of Kato's results, which show that the gradient ∇u of the Navier-Stokes solution u can be replaced by the vorticity ω(u) of the solution. As in the previous chapter, this result is also presented in detail. The thesis draws on a broad understanding of several areas of mathematics. The second chapter largely uses methods of analysis, touching on functional analysis and the theory of function spaces among other topics, while the third and fourth chapters focus mainly on the theory of partial differential equations; topics from real analysis are also treated extensively. The main sources are Lawrence C. Evans's "Partial Differential Equations", Tosio Kato's article "Remarks on Zero Viscosity Limit" and James P. Kelliher's article "On Kato's conditions for vanishing viscosity limit".
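The type of criterion summarized above can be sketched in the notation commonly used in this literature. This is only an illustrative statement under assumed hypotheses (a bounded domain Ω with a boundary strip Γ_{cν} of width proportional to the viscosity ν); the precise assumptions and the full list of equivalent conditions are in Kato's article:

```latex
u^{\nu} \longrightarrow u \ \text{in } L^{\infty}\bigl(0,T;\, L^{2}(\Omega)\bigr)
\quad\Longleftrightarrow\quad
\nu \int_{0}^{T} \bigl\| \nabla u^{\nu}(t) \bigr\|_{L^{2}(\Gamma_{c\nu})}^{2}\, \mathrm{d}t
\;\longrightarrow\; 0
\qquad (\nu \to 0).
```

Kelliher's refinement, as described in the abstract, is that the gradient ∇u^ν in the boundary-strip integral may be replaced by the vorticity ω(u^ν).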
-
(2024)A continuous-time Markov chain is a stochastic process with the Markov property: the transition to the next state depends only on the current state, not on the states that preceded it. Continuous-time Markov chains are fundamental tools for modelling stochastic systems in finance and insurance, such as option pricing and insurance claim processes. This thesis examines continuous-time Markov chains along with their most important concepts and typical properties. For instance, we introduce and investigate the Kolmogorov forward and backward equations, which are essential for continuous-time systems. The main aim of the thesis, however, is to present a method, with proof, for constructing a Markov process from a continuous transition intensity matrix. This is achieved by generating a transition probability matrix from the given transition intensity matrix. When the transition intensities are known, the challenge is to determine the transition probabilities, since the calculations can easily become difficult to solve analytically. The theorem introduced here makes it possible to simplify the calculations by approximation. We also apply the theory: we demonstrate how determining transition probabilities using Kolmogorov's forward equations can become challenging even in a simple setup, and we compare the approximate transition probabilities derived from the main theorem to the actual transition probabilities. The approximations derived from the main theorem provide quite satisfactory estimates of the actual transition probabilities.
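The idea of approximating transition probabilities from a transition intensity matrix can be illustrated in a minimal sketch. This is not the thesis's main theorem: it shows only the standard first-order short-time approximation P(h) ≈ I + Qh against the exact solution P(h) = exp(Qh) of the Kolmogorov equations, for a hypothetical two-state intensity matrix Q.

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical 2-state transition intensity matrix (rows sum to zero).
Q = np.array([[-0.5,  0.5],
              [ 0.3, -0.3]])

h = 0.01  # a small time step

# Exact transition probabilities: the matrix exponential solves the
# Kolmogorov forward and backward equations P'(t) = P(t)Q = QP(t).
P_exact = expm(Q * h)

# First-order approximation, valid for small h with error O(h^2).
P_approx = np.eye(2) + Q * h

err = np.abs(P_exact - P_approx).max()
print(err)
```

For small h the two matrices agree to within O(h²), which is the kind of simplification approximation arguments of this type provide.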
-
(2023)This thesis proves the Deligne–Mumford compactification theorem. The theorem states that, under certain conditions, for a sequence of hyperbolic surfaces of the same signature there exists a limit surface, together with diffeomorphisms from it to each member of the sequence, such that the metrics pulled back by these diffeomorphisms converge on the limit surface to a limit metric. The theorem is proved by first establishing the corresponding result for the simple building blocks of hyperbolic surfaces: first for the canonical collars of a Y-piece, and then, using that result, for Y-pieces themselves. Finally, the thesis shows that every hyperbolic surface can be built from Y-pieces, which immediately yields the result for all hyperbolic surfaces.
-
(2024)Smooth manifolds extend the tools of mathematical analysis from Euclidean spaces to more general topological spaces. The de Rham theorem adds to this a connection to algebraic topology by showing that certain topological invariants of manifolds can be characterised by either analytic or topological means. In other words, the analytic properties of a manifold reveal something about its topological properties and vice versa. This thesis presents two proofs of the de Rham theorem. The first proves the theorem in its classical form, which requires only a basic understanding of manifolds and singular homology. The second proof is formulated very generally in terms of sheaves; the necessary sheaf theory is presented almost entirely within the text. This structure divides the text naturally into two parts. The first part begins with a brief review of the basics of de Rham cohomology and singular homology, then introduces singular cohomology and the integration of chains on manifolds, leading to the proof of the classical de Rham theorem. The second part first introduces the theory of presheaves and sheaves, then presents sheaf cohomology theories and their connection to the cohomology groups of the first part. Finally, it is shown that all sheaf cohomology theories are uniquely isomorphic to one another. In the case of de Rham cohomology and singular cohomology, a straightforward construction of this isomorphism is also given.
-
(2021)Bonus-malus systems are used globally to determine insurance premiums of motor liability policy-holders by observing past accident behavior. In these systems, policy-holders move between classes that represent different premiums. The number of accidents is used as an indicator of driving skills or risk. The aim of bonus-malus systems is to assign premiums that correspond to risks by increasing premiums of policy-holders that have reported accidents and awarding discounts to those who have not. Many types of bonus-malus systems are used and there is no consensus about what the optimal system looks like. Different tools can be utilized to measure the optimality, which is defined differently according to each tool. The purpose of this thesis is to examine one of these tools, elasticity. Elasticity aims to evaluate how well a given bonus-malus system achieves its goal of assigning premiums fairly according to the policy-holders’ risks by measuring the response of the premiums to changes in the number of accidents. Bonus-malus systems can be mathematically modeled using stochastic processes called Markov chains, and accident behavior can be modeled using Poisson distributions. These two concepts of probability theory and their properties are introduced and applied to bonus-malus systems in the beginning of this thesis. Two types of elasticities are then discussed. Asymptotic elasticity is defined using Markov chain properties, while transient elasticity is based on a concept called the discounted expectation of payments. It is shown how elasticity can be interpreted as a measure of optimality. We will observe that it is typically impossible to have an optimal bonus-malus system for all policy-holders when optimality is measured using elasticity. Some policy-holders will inevitably subsidize other policy-holders by paying premiums that are unfairly large. 
More specifically, it will be shown that, for bonus-malus systems with certain elasticity values, lower-risk policy-holders will subsidize the higher-risk ones. Lastly, a method is devised to calculate the elasticity of a given bonus-malus system using the programming language R. This method is then used to find the elasticities of five Finnish bonus-malus systems in order to evaluate and compare them.
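The Markov chain modelling described above can be sketched in a minimal example. The three-class system and its transition rules here are hypothetical (not one of the Finnish systems studied, and the thesis itself uses R): a claim-free year, whose probability comes from a Poisson claim count, moves the policy-holder one class down, while any claim moves them to the worst class. The asymptotic class distribution is the stationary distribution of the chain.

```python
import numpy as np
from scipy.stats import poisson

lam = 0.1                  # hypothetical Poisson claim frequency
p0 = poisson.pmf(0, lam)   # probability of a claim-free year

# Hypothetical 3-class bonus-malus system, classes 0 (best) to 2 (worst):
# claim-free year -> one class down (toward 0), any claim -> class 2.
P = np.array([
    [p0,  0.0, 1 - p0],    # best class stays best if claim-free
    [p0,  0.0, 1 - p0],    # middle class moves down to best
    [0.0, p0,  1 - p0],    # worst class moves down to middle
])

# Stationary distribution: normalised left eigenvector of P for eigenvalue 1.
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmax(np.real(vals))])
pi = pi / pi.sum()
print(pi)
```

Quantities such as asymptotic elasticity are then computed from how this stationary distribution, and the resulting mean premium, respond to changes in the claim frequency λ.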
-
(2021)The fields of insurance and financial mathematics require increasingly refined descriptors of dependency. In financial mathematics, this demand arises from globalisation effects over the past decade, which have caused financial asset returns to exhibit increasingly intricate mutual dependencies. Of particular interest are measurements describing the probability of simultaneous occurrences of unusually negative stock returns. In insurance mathematics, the ability to evaluate the probabilities associated with simultaneously occurring, unusually large claim amounts can be crucial for both the solvency and the competitiveness of an insurance company. These sorts of dependencies are referred to as tail dependence. In this thesis, we introduce the concept of tail dependence and the tail dependence coefficient, a tool for quantifying the amount of tail dependence between random variables. We also present statistical estimators for the tail dependence coefficient, investigate their favourable properties, and execute a simulation study to evaluate and compare estimator performance under a variety of distributions. The necessary concepts from stochastics are presented, mathematical models of dependence are introduced, and elementary notions of extreme value theory and empirical processes are touched on; these motivate the presented estimators and facilitate the proofs of their favourable properties.
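A simple nonparametric estimator of this kind can be sketched as follows. This is a generic plug-in estimate of the lower tail dependence coefficient, counting how often both margins fall in their lower tails at an intermediate threshold; it is not necessarily one of the estimators studied in the thesis, and the data, threshold rule, and names here are illustrative.

```python
import numpy as np

def empirical_tail_dep(x, y, k):
    """Plug-in estimate of lower tail dependence: the fraction of the k
    smallest-ranked observations of x whose y-rank is also among the
    k smallest."""
    rx = np.argsort(np.argsort(x))  # ranks 0..n-1
    ry = np.argsort(np.argsort(y))
    joint = np.sum((rx < k) & (ry < k))
    return joint / k

rng = np.random.default_rng(1)
n = 10_000
z = rng.normal(size=n)
x = z + 0.1 * rng.normal(size=n)   # strongly dependent pair
y = z + 0.1 * rng.normal(size=n)
u = rng.normal(size=(n, 2))        # independent pair

k = int(np.sqrt(n))  # intermediate threshold; a common rule of thumb
print(empirical_tail_dep(x, y, k))              # large for dependent data
print(empirical_tail_dep(u[:, 0], u[:, 1], k))  # near zero for independence
```

Simulation studies like the one in the abstract repeat such estimates across distributions and thresholds to compare bias and variance of competing estimators.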
-
(2024)The presence of 1/f-type noise in a variety of natural processes and in human cognition is a well-established fact, and there are many methods for analysing it. Fractal analysis of time series data has long been limited by the inaccuracy of results for small and finite datasets. The development of artificial intelligence and machine learning algorithms in recent years has opened the door to modelling and forecasting phenomena that we do not yet fully understand. In this thesis, principal component analysis is used to detect 1/f noise patterns in human-played drum beats typical of a style of playing. In the future, this type of analysis could be used to construct drum machines that mimic the timing fluctuations associated with a certain characteristic of human-played music, such as genre, era, or musician. This study investigates the link between 1/f-noisy timing fluctuations and the technical skill level of the musician. Samples of isolated drum tracks are collected and split into two groups representing either a low or a high level of technical skill. Time series vectors are then constructed by hand to capture the actual timing of the human-played beats. Difference vectors are created for analysis by using the least-squares method to find the corresponding "perfect" beat and subtracting it from the collected data; the resulting data illustrate the deviation of the actual playing from the beat of a metronome. A principal component analysis algorithm is then run on the power spectra of the difference vectors to detect points of correlation within different subsets of the data, with the focus on the two groups mentioned earlier. Finally, we attempt to fit a 1/f noise model to the principal component scores of the power spectra. The results of the study support our hypothesis, but their interpretation at this scale appears subjective.
We find that the principal component of the power spectra of the more skilled musicians' samples can be approximated by the function $S=1/f^{\alpha}$ with $\alpha\in(0,2)$, which is indicative of fractal noise. Although the less skilled group's samples as a whole do not appear to contain 1/f-noisy fluctuations, their subsets do quite consistently; the opposite is true for the more skilled group's dataset. All in all, we find that a much larger dataset is required to construct a reliable model of human error in recorded music, but with the small amount of data in this study we show that we can indeed detect and isolate rhythmic characteristics that define a certain style of drum playing.
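Fitting the model $S = 1/f^{\alpha}$ described above can be sketched in a minimal way: on log-log axes the model $\log S = -\alpha \log f + c$ is linear, so $\alpha$ is minus the slope of a least-squares line. The spectrum here is synthetic with a hypothetical exponent, standing in for the principal component scores of the thesis.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic power spectrum following S(f) ~ 1/f^alpha with multiplicative
# noise; alpha_true is a hypothetical stand-in for the fitted exponent.
alpha_true = 1.0
f = np.arange(1, 257)  # positive frequency bins
S = f ** (-alpha_true) * np.exp(0.1 * rng.normal(size=f.size))

# Least-squares line on log-log axes; the exponent is minus the slope.
slope, intercept = np.polyfit(np.log(f), np.log(S), 1)
alpha_hat = -slope
print(alpha_hat)
```

An estimate $\hat{\alpha} \in (0, 2)$ obtained this way is the kind of evidence of fractal noise the abstract refers to; with short spectra the fit is sensitive to noise, which reflects the small-dataset limitation discussed above.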