Browsing by master's degree program "Master's Programme in Mathematics and Statistics"

  • Oksa, Ella (2024)
    Sobolev functions generalize the concept of differentiability for functions beyond classical settings. The spaces of Sobolev functions are fundamental in mathematics and physics, particularly in the study of partial differential equations and functional analysis. This thesis provides an overview of the construction of an extension operator on the space of Sobolev functions on a locally uniform domain. The primary reference is Luke Rogers' work "A Degree-Independent Sobolev Extension Operator". Locally uniform domains satisfy certain geometric properties; for example, they cannot have arbitrarily thin cusps. However, locally uniform domains can possess highly non-rectifiable boundaries. For instance, the interior of the Koch snowflake is a locally uniform domain with a non-rectifiable boundary. First we will divide the interior points of the complement of our locally uniform domain into dyadic cubes and use a collection of the cubes having certain geometric properties. This collection is called the Whitney decomposition of the locally uniform domain. To extend a Sobolev function to a small cube in the Whitney decomposition, one approach is to use polynomial approximations to the function on a nearby piece of the domain. We will use a polynomial reproducing kernel in order to obtain a degree-independent extension operator. This involves defining the polynomial reproducing kernel on sets of the domain that we call here twisting cones. These sets are not exactly cones, but have some similarity to cones. Although a significant part of Rogers' work deals extensively with proving the existence of a kernel with the desired properties, our focus will remain on the construction of the extension operator, so we will discuss the polynomial reproducing kernel only briefly. The extension operator for small Whitney cubes will be defined as a convolution of the function with the kernel. For large Whitney cubes it is enough to set the extension to be 0. Finally, the extension operator will be the smooth sum of the operators defined for each cube. Ultimately, since the domain is locally uniform, the boundary has measure zero and no special definition for the extension is required there. However, it is necessary to verify that the extension "matches" the function correctly at the boundary, essentially that their (k-1)-th derivatives are Lipschitz there. This concludes the construction of a degree-independent extension operator for Sobolev functions on a locally uniform domain.
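    A minimal schematic of the operator's overall shape, hedged and based only on the description above (the notation φ_Q, K_Q, Γ_Q is assumed here, not Rogers' own):
```latex
% Schematic form only: for each small Whitney cube Q, the local extension is a
% convolution of f with a polynomial reproducing kernel K_Q over a twisting cone
% Gamma_Q inside the domain; phi_Q is a smooth partition of unity subordinate to
% the Whitney cubes, and the extension is set to 0 on large cubes.
E f(x) \;=\; \sum_{Q\ \text{small}} \varphi_Q(x) \int_{\Gamma_Q} K_Q(x,y)\, f(y)\, dy .
```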
  • Heikkilä, Tommi (2019)
    Computed tomography (CT) is an X-ray based imaging modality utilized not only in medicine but also in other scientific fields and industrial applications. The imaging process can be mathematically modelled as a linear equation, and finding its solution is a typical example of an inverse problem. It is ill-posed especially if the number of projections is sparse. One approach is to combine the data mismatch term with a regularization term and look for the minimizer of such a functional. The regularization is a penalty term that introduces prior information that might be available on the solution. Numerous algorithms exist to solve a problem of this type. For example, the iterative primal-dual fixed point algorithm (PDFP) is well suited for reconstructing CT images when the functional to minimize includes a non-negativity constraint and the prior information is expressed by an l1-norm of the shearlet-transformed target. The motivation of this thesis stems from CT imaging of plants perfused with a liquid contrast agent, aimed at increasing the contrast of the images and studying the flow of liquid in the plant over time. Therefore the task is to reconstruct dynamic CT images. The main idea is to apply 3D shearlets as a prior, treating time as the third dimension. For comparison, both the Haar wavelet transform and the 2D shearlet transform were tested. In addition, a recently proposed technique based on the sparsity levels of the target was used to ease the non-trivial choice of the regularization parameter. The quality of the different set-ups was assessed for said problem with simulated measurements, a real-life scenario where the contrast agent is applied to a gel and, finally, real data where the contrast agent is perfused into a real plant. The results indicate that the 3D shearlet-based approach produces suitable reconstructions for observing the changes in the contrast agent, even though there are no drastic improvements in the quality of the reconstructions compared to using the Haar transform.
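    The minimization problem described above can be summarized, under assumed notation (A, m, S and λ are not taken from the thesis), as a non-negativity-constrained, shearlet-sparsity-regularized least-squares problem:
```latex
% x: dynamic image (time treated as the third dimension), A: CT forward operator,
% m: measured projection data, S: (3D) shearlet transform, lambda > 0: regularization parameter.
\min_{x \,\ge\, 0}\; \tfrac{1}{2}\,\lVert Ax - m \rVert_2^2 \;+\; \lambda\,\lVert Sx \rVert_1 .
```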
  • Halme, Eetu (2024)
    Solving partial differential equations using methods of probability theory has a long history. In this thesis we show that the solutions of the conductivity equation in a Lipschitz domain D with Neumann boundary conditions and a uniformly elliptic, measurable conductivity parameter $\kappa$ can be represented using a Feynman-Kac formula for a reflecting diffusion process X on the domain D. We begin with history and the connection to statistical experiments in Chapter 1. Chapter 2 starts by introducing Banach and Hilbert spaces with the spectral theory of bounded operators, together with Hölder and Sobolev spaces. Sobolev spaces provide the right properties for the boundary data and solutions. In Chapter 3, we introduce the basics of stochastic processes, martingales and continuous semimartingales. We also need the theory of Markov processes, on which Hunt and Feller processes are based. Hunt processes are used, because of their correspondence with Dirichlet forms, to define the reflecting diffusion process X. We also introduce the concept of the local time of a process. In Chapter 4 we introduce Dirichlet forms, their correspondence with self-adjoint operators, and the Revuz measure. In Chapter 5, we introduce the conductivity equation and the Dirichlet-to-Neumann map $\Lambda_\kappa$. The goal of Calderón's problem is to reconstruct the conductivity parameter $\kappa$ from the map $\Lambda_\kappa$, which is a difficult, non-linear and ill-posed inverse problem. Chapter 6 constitutes the main body of the thesis, and here we prove the Feynman-Kac formula for solutions of the conductivity equation. We use the correspondence between Dirichlet forms and self-adjoint operators to define a semigroup $(T_t)$ of solutions to an abstract Cauchy equation, and by the Dunford-Pettis theorem we associate with the semigroup $(T_t)$ a transition density function p for the reflecting diffusion process X. Using De Giorgi-Nash-Moser estimates, we show that p is Hölder continuous and defined everywhere. We also prove that p converges exponentially to the stationary distribution. We generalise the concept of boundary local times using the Revuz measure, and prove the occupation formula. These results, together with the Skorohod decomposition for Lipschitz conductivities, are used in the four-part proof of the Feynman-Kac formula. In Chapter 7, we introduce the boundary trace process, which is a pure jump process corresponding to the hitting times on the boundary of the reflecting diffusion process. We state that the infinitesimal generator of the trace process is the negative Dirichlet-to-Neumann map $-\Lambda_\kappa$, which thus provides a probabilistic interpretation of Calderón's problem. We end with a discussion of applications of the theory and potential directions for new research. The main references of the thesis are the articles of Piiroinen and Simon, ''From Feynman–Kac Formulae to Numerical Stochastic Homogenization in Electrical Impedance Tomography'' and ''Probabilistic interpretation of the Calderón problem''.
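    For orientation, a hedged sketch of the boundary value problem and the Dirichlet-to-Neumann map in standard notation (ν denotes the outward unit normal; this is not quoted from the thesis):
```latex
% Conductivity equation on a Lipschitz domain D with uniformly elliptic,
% measurable conductivity kappa and Neumann boundary data g:
\nabla \cdot \bigl(\kappa \nabla u\bigr) = 0 \ \text{ in } D,
\qquad \kappa\, \partial_\nu u = g \ \text{ on } \partial D .
% The Dirichlet-to-Neumann map sends the boundary values of a solution
% to its conormal derivative on the boundary:
\Lambda_\kappa : \; u|_{\partial D} \;\longmapsto\; \kappa\, \partial_\nu u \big|_{\partial D} .
```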
  • Matala-aho, Hannu (2024)
    This thesis covers the martingale-based theory of interest rate modelling. Short-rate models and their characteristics are introduced to deal with the pricing of zero-coupon bonds, including a defaultable bond. To support the results concerning the characteristics of the short-rate models, a few simulations are done with Matlab. The thesis covers a formulation of a necessary condition for the Heath-Jarrow-Morton (HJM) model to have a martingale property. As a practical example of zero-coupon bonds, a self-financing hedging strategy for a bond call option is presented. The last part of the thesis handles derivatives of the Secured Overnight Financing Rate (SOFR) and the London Inter-Bank Offered Rate (LIBOR). It is shown that the Gaussian cumulative distribution may be used in forming the arbitrage-free price of SOFR and LIBOR derivatives. A hedging strategy for the swaption is also introduced. The recipe for managing these seemingly complex equations is quite simple: the effective use of the martingale representation theorem together with the Girsanov-Meyer theorem and the forward measure is the key. The first theorem guarantees that a square-integrable martingale $M_t$ admits the representation $dM_t = \theta_t\,dB_t$, where $B_t$ is a Brownian motion and $\theta_t$ is a unique adapted square-integrable stochastic process. The second one gives a way to find the Brownian motion process under the new measure; it is of the form $dB^*_t = H_t\,dt + dB_t$. In addition to the concepts of mathematical finance, there are results concerning probability theory. For example, the construction of the Wiener and Itô integrals, Lévy's characterization theorem and the already mentioned martingale representation theorem are covered with proofs.
  • Hednäs, Mats (2023)
    The history of set theory is a long and winding road. From its inception, set theory has grown to become its own flourishing branch of mathematics with a pivotal role in the attempt to establish a foundation for all of mathematics, and as such its influence is felt in every corner of the mathematical world as it exists today. This foundational effort, in the form of establishing new set-theoretic axioms, is still ongoing, and a big driving force behind this movement is the many unanswered questions that remain out of reach of the set theory of today. One of the best-known of these open questions is that of the Continuum Hypothesis. In this thesis we will first dive into the history of set theory, starting by looking at the role that infinity has played in the history of mathematics, from the ancient Greeks to Cantor, who finally brought infinity into mathematics in a major way through set theory. We look at the development of a foundation for mathematics through the axiomatization of set theory and then focus on the role the Continuum Hypothesis played in this effort, leading up to Gödel’s and Cohen’s proofs that showed its independence, and beyond that to the research being done today. We then turn our attention to potential candidates for new axioms that would solve the Continuum Hypothesis. First we take a closer look at Gödel’s constructible universe, in which the Continuum Hypothesis is true. We look at how it is built and consider the potential results of accepting the corresponding Axiom of Constructibility as a new axiom of set theory. In the final section we examine Chris Freiling’s proposed Axioms of Symmetry, which imply the negation of the Continuum Hypothesis. After looking at Freiling’s constructions in detail we consider the arguments for and against accepting them as new axioms.
  • Penttinen, Jussi (2021)
    HMC is a computational method built to efficiently sample from a high-dimensional distribution. Sampling from a distribution is typically a statistical problem, and hence much of the work concerning Hamiltonian Monte Carlo is written in the mathematical language of probability theory, which is perhaps not ideally suited for HMC, since HMC is, at its core, differential geometry. The purpose of this text is to present the differential geometric tools needed in HMC and then methodically build the algorithm itself. Since there is a great introductory book on smooth manifolds by Lee, and not wanting to completely copy Lee's work from his book, some basic knowledge of differential geometry is left to the reader. Similarly, because the author is more comfortable with notions of differential geometry, and to cut down the length of this text, most theorems connected to measure and probability theory are omitted from this work. The first chapter is an introductory chapter that goes through the bare minimum of measure theory needed to motivate Hamiltonian Monte Carlo. The bulk of this text is in the second and third chapters. The second chapter presents the concepts of differential geometry needed to understand the abstract construction of Hamiltonian Monte Carlo. Those familiar with differential geometry can possibly skip the second chapter, even though it might be worthwhile to at least flip through it to pick up the notation used in this text. The third chapter is the core of this text. There the algorithm is methodically built using the groundwork laid in previous chapters. The most important part and the theoretical heart of the algorithm is presented in the sections discussing the lift of the target measure. The fourth chapter provides brief practical insight into implementing HMC and also briefly discusses how HMC is currently being improved.
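    As a purely illustrative sketch (a plain Euclidean HMC step with a leapfrog integrator and Metropolis correction, not the thesis's geometric formulation; all names here are assumptions):
```python
import numpy as np

def hmc_step(x, log_prob, grad_log_prob, step_size=0.1, n_leapfrog=20, rng=np.random):
    """One HMC transition for a target with density proportional to exp(log_prob)."""
    p = rng.standard_normal(x.shape)                  # sample auxiliary momentum
    x_new, p_new = x.copy(), p.copy()

    # Leapfrog integration of the Hamiltonian dynamics.
    p_new += 0.5 * step_size * grad_log_prob(x_new)
    for _ in range(n_leapfrog - 1):
        x_new += step_size * p_new
        p_new += step_size * grad_log_prob(x_new)
    x_new += step_size * p_new
    p_new += 0.5 * step_size * grad_log_prob(x_new)

    # Metropolis correction based on the change in total energy.
    h_old = -log_prob(x) + 0.5 * p @ p
    h_new = -log_prob(x_new) + 0.5 * p_new @ p_new
    return x_new if np.log(rng.uniform()) < h_old - h_new else x

# Usage: sampling a standard 2D Gaussian.
log_prob = lambda x: -0.5 * x @ x
grad_log_prob = lambda x: -x
x, samples = np.zeros(2), []
for _ in range(1000):
    x = hmc_step(x, log_prob, grad_log_prob)
    samples.append(x)
```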
  • Ovaskainen, Osma (2024)
    Objective: The objective of this thesis is to create methods to transform the most accessible digitalized version of an apartment, the floor plan, into a format that can be analyzed by statistical modeling, and to use the created data to find whether there are any spatial or temporal effects in the geometry of apartment floor plans. Methods: The first part of the thesis was carried out using a mix of computer vision image manipulation methods combined with text recognition. The second part was performed using a one-way ANOVA model. Results: With the computer vision portion, we were able to successfully classify a portion of the data; however, the recognition accuracy still leaves a lot of room for improvement. From the created data, we were able to identify some key differences with respect to our parameters, location and year of construction. The analysis, however, suffers from a quite limited dataset, in which a few housing corporations play a large role in the final results, so it would be wise to repeat this experiment with a more comprehensive dataset for more accurate results.
  • Latif, Khalid (2023)
    The evolution of number systems, demonstrating the remarkable cognitive abilities of early humans, exemplifies the progress of civilization. Rooted in ancient Mesopotamia and Egypt, the origins of number systems and basic arithmetic trace back to tally marks, symbolic systems, and position-based representations. The development of these systems in ancient societies, driven by the needs of trade, administration, and science, showcases the sophistication of early mathematical thinking. While the Roman and Greek numeral systems emerged, they were not as sophisticated or efficient as their Mesopotamian and Egyptian counterparts. Greek or Hellenic culture, which preceded the Romans, played a crucial role in mathematics, but Europe's true impact emerged during the Middle Ages when it played a pivotal role in the development of algorithmic arithmetic. The adoption of Hindu-Arabic numerals, featuring a placeholder zero, marked a paradigm shift in arithmetic during the Middle Ages. This innovative system, with its simplicity and efficiency, revolutionized arithmetic and paved the way for advanced mathematical developments. European mathematicians, despite not being the primary innovators of number systems, contributed significantly to the development of algorithmic methods. Techniques such as division per galea, solutions for quadratic equations, and proportional reduction emerged, setting the foundation for revolutionary inventions like Pascal's mechanical calculator. Ancient mathematical constants such as zero, infinity, and pi played deeply influential roles in ancient arithmetic. Zero, initially perceived as nothing, became a crucial element in positional systems, enabling the representation of larger numbers and facilitating complex calculations. Infinity, a limitless concept, fascinated ancient mathematicians, leading to the exploration of methods to measure infinite sets. Pi, the mysterious ratio of a circle's circumference to its diameter, sparked fascination, resulting in ingenious methods to compute its value. The development of ancient computational devices further highlights the remarkable ingenuity of early mathematicians, laying the groundwork for future mathematical advancements. The abacus, with its ability to facilitate quick calculations, became essential in trade and administration. The Antikythera mechanism, a 2nd-century BC astronomical analog computer, showcased the engineering skill of the ancient Greeks. Mechanical calculators like the slide rule and Pascaline, emerging during the Renaissance, represented significant developments in computational technology. These tools, driven by practical needs in commerce, astronomy, and mathematical computations, paved the way for future mathematical breakthroughs. In conclusion, the evolution of number systems and arithmetic is a fascinating narrative of human ingenuity and innovation. From ancient Mesopotamia to the Renaissance, this journey reflects the intertwined nature of mathematics, culture, and civilization.
  • Cosgaya Arrieta, Juan José (2024)
    In the field of insurance mathematics, it is critical to control the solvency of an insurance company, in particular by calculating the probability of ruin, which is the probability that the company’s surplus falls below zero. This thesis reviews the fundamentals of ruin theory, the modelling process, and some results and methods for the estimation of ruin probabilities. Most of the theorems are taken from different bibliographical sources, but a good number of the proofs presented are original, in order to provide a more rigorous and detailed exposition. A central focus of this thesis is the Pollaczek-Khinchine formula. This formula provides a solution for the probability distribution of the maximum potential loss of an insurance company in terms of convolutions of a particular function related to the claim sizes. Apart from the theoretical results that may be derived from it and its elegance, its usefulness lies in the ideas underlying it, especially the idea of understanding the maximum potential loss of the company as the largest of the historical records of the loss process. Using these ideas, a recursive approach to estimating ruin probabilities is explained. This approach results in an easy-to-program and efficient bounds method which allows for any type of claim sizes (that is, the random variables that model how large the insureds' claims are). The only restrictions imposed come from the fact that this discussion takes place within the Poisson model. This framework allows for various claim size distributions and models the number of claims as a Poisson process. Finally, two examples with light- and heavy-tailed claim size distributions are simulated using this recursive approach. This shows the applicability of the method and the differences between light- and heavy-tailed distributions with regard to the ruin probabilities that emerge from them.
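    For orientation, a hedged statement of the Pollaczek-Khinchine formula in the classical compound Poisson (Cramér-Lundberg) model; the notation is assumed here rather than taken from the thesis:
```latex
% u: initial surplus, lambda: claim intensity, c: premium rate, mu: mean claim size,
% F: claim-size distribution, rho = lambda * mu / c < 1 (net profit condition),
% F_I: integrated-tail (equilibrium) distribution, psi(u): probability of ruin.
F_I(x) = \frac{1}{\mu}\int_0^x \bigl(1 - F(y)\bigr)\,dy,
\qquad
1 - \psi(u) = (1-\rho)\sum_{n=0}^{\infty} \rho^{\,n}\, F_I^{*n}(u),
% where F_I^{*n} denotes the n-fold convolution of F_I with itself.
```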
  • Tan, Shu Zhen (2021)
    In practice, outlying observations are not uncommon in many study domains. Without knowing the underlying factors behind the outliers, it is appealing to eliminate them from the datasets. However, unless there is scientific justification, outlier elimination amounts to alteration of the datasets. Instead, heavy-tailed distributions should be adopted to model the larger-than-expected variability in an overdispersed dataset. The Poisson distribution is the standard model for variation in count data. However, the empirical variability in observed datasets is often larger than the amount expected under the Poisson. This leads to unreliable inferences when estimating the true effect sizes of covariates in regression modelling. It follows that the Negative Binomial distribution is often adopted as an alternative to deal with overdispersed datasets. Nevertheless, it has been proven that both the Poisson and Negative Binomial observation distributions are not robust against outliers, in the sense that the outliers have a non-negligible influence on the estimation of the covariate effect size. On the other hand, the scale mixture of quasi-Poisson distributions (called the robust quasi-Poisson model), which is constructed analogously to the Student's t-distribution, is a heavy-tailed alternative to the Poisson and is proven to be robust against outliers. The thesis presents theoretical evidence on the robustness of the three aforementioned models in a Bayesian framework. Lastly, the thesis considers two simulation experiments with different kinds of outlier sources -- process error and covariate measurement error -- to compare the robustness of the Poisson, Negative Binomial and robust quasi-Poisson regression models in the Bayesian framework. Model robustness was assessed, in terms of the models' ability to correctly infer the covariate effect size, across different combinations of error probability and error variability. In both experiments the robust quasi-Poisson regression model proved more robust than its counterparts, because its breakdown point was relatively higher than the others'.
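    A small, purely illustrative numpy sketch (not the thesis's Bayesian models) of how a gamma scale mixture of Poisson rates produces overdispersion relative to a plain Poisson with the same mean:
```python
import numpy as np

rng = np.random.default_rng(0)
n, mean = 100_000, 5.0

# Plain Poisson: the variance equals the mean.
poisson_counts = rng.poisson(mean, size=n)

# Gamma scale mixture of Poissons (equivalently a Negative Binomial):
# each observation gets its own multiplicative rate, inflating the variance.
shape = 2.0                                              # smaller shape -> heavier tail
rates = mean * rng.gamma(shape, 1.0 / shape, size=n)     # E[rate] = mean
mixed_counts = rng.poisson(rates)

print("Poisson mean/var:", poisson_counts.mean(), poisson_counts.var())
print("Mixture mean/var:", mixed_counts.mean(), mixed_counts.var())
# The mixture keeps the mean (~5) but its variance grows to roughly mean + mean**2 / shape.
```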
  • Hirvonen, Minna (2020)
    Several extensions of first-order logic are studied in descriptive complexity theory. These extensions include transitive closure logic and deterministic transitive closure logic, which extend first-order logic with transitive closure operators. It is known that deterministic transitive closure logic captures the complexity class of the languages that are decidable by some deterministic Turing machine using a logarithmic amount of memory space. An analogous result holds for transitive closure logic and nondeterministic Turing machines. This thesis concerns the k-ary fragments of these two logics. In each k-ary fragment, the arities of transitive closure operators appearing in formulas are restricted to a nonzero natural number k. The expressivity of these fragments can be studied in terms of multihead finite automata. The type of automaton that we consider in this thesis is a two-way multihead automaton with nested pebbles. We look at the expressive power of multihead automata and the k-ary fragments of transitive closure logics in the class of finite structures called word models. We show that deterministic two-way k-head automata with nested pebbles have the same expressive power as first-order logic with k-ary deterministic transitive closure. For a corresponding result in the case of nondeterministic automata, we restrict to the positive fragment of k-ary transitive closure logic. The two theorems and their proofs are based on the article ’Automata with nested pebbles capture first-order logic with transitive closure’ by Joost Engelfriet and Hendrik Jan Hoogeboom. In the article, the results are proved in the case of trees. Since word models can be viewed as a special type of tree, the theorems considered in this thesis are a special case of a more general result.
  • Nyberg, Jonne (2020)
    Spectral theory is a powerful tool when applied to differential equations. The fundamental result is the spectral theorem of John von Neumann, which allows us to define the exponential of an unbounded operator, provided that the operator in question is self-adjoint. The problem we are considering in this thesis is the self-adjointness of the Schr\"odinger operator $T = -\Delta + V$, a linear second-order partial differential operator that is fundamental to non-relativistic quantum mechanics. Here, $\Delta$ is the Laplacian and $V$ is some function that acts as a multiplication operator. We will study $T$ as a map from the Hilbert space $H = L^2(\mathbb{R}^d)$ to itself. In the case of unbounded operators, we are forced to restrict them to some suitable subspace. This is a common limitation when dealing with differential operators such as $T$, and the choice of the domain will usually play an important role. Our aim is to prove two theorems on the essential self-adjointness of $T$, both originally proven by Tosio Kato. We will start with some necessary notation fixing and other preliminaries in chapter 2. In chapter 3 basic concepts and theorems on operators in Hilbert spaces are presented; most importantly we will introduce some characterisations of self-adjointness. In chapter 4 we construct the test function space $D(\Omega)$ and introduce distributions, which are continuous linear functionals on $D(\Omega).$ These are needed, as the domain of the adjoint of a differential operator can often be expressed as a subspace of the space of distributions. In chapter 5 we will show that $T$ is essentially self-adjoint on compactly supported smooth functions when $d=3$ and $V$ is a sum consisting of an $L^2$ term and a bounded term. This result is an application of the Kato-Rellich theorem, which pertains to operators of the form $A+B$, where $B$ is bounded by $A$ in a suitable way. Here we will also need some results from Fourier analysis that will be reviewed briefly. In chapter 6 we introduce some mollification methods and prove Kato's distributional inequality, which is important in the proof of the main theorem in the final chapter and in other results of a similar nature. The main result of this thesis, presented in chapter 7, is a theorem originally conjectured by Barry Simon which says that $T$ is essentially self-adjoint on $C^\infty_c(\mathbb{R}^d)$ when $V$ is a non-negative locally square integrable function and $d$ is an arbitrary positive integer. The proof is based around mollification methods and the distributional inequality proven in the previous chapter. This last result, although fairly unphysical, is somewhat striking in the sense that usually for $T$ to be (essentially) self-adjoint, the dimension $d$ restricts the integrability properties of $V$ significantly.
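    A hedged LaTeX summary of the two statements described above, following the abstract's own description (the precise hypotheses are those of Kato's theorems):
```latex
% Kato-Rellich case (d = 3): potential split into a square-integrable and a bounded part.
V = V_1 + V_2,\quad V_1 \in L^2(\mathbb{R}^3),\ V_2 \in L^\infty(\mathbb{R}^3)
\ \Longrightarrow\ T = -\Delta + V \ \text{is essentially self-adjoint on } C^\infty_c(\mathbb{R}^3).
% Kato's later theorem (arbitrary dimension, non-negative locally square-integrable potential).
V \ge 0,\quad V \in L^2_{\mathrm{loc}}(\mathbb{R}^d)
\ \Longrightarrow\ T = -\Delta + V \ \text{is essentially self-adjoint on } C^\infty_c(\mathbb{R}^d).
```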
  • Törmi, Henrik (2024)
    This thesis examines how backcasts produced with VAR models are suited for the backward revision, for the years 2020-2000, of the time series of Statistics Finland's latest Labour Force Survey, which came into force in 2021. The time series examined are the monthly employment and unemployment figures of working-age men and women. Statistics Finland has reliably revised the above-mentioned time series backwards for the years 2020-2009. This thesis compares the backcasts of the estimated VAR models with Statistics Finland's official figures, namely the figures according to the Labour Force Survey in force before 2021 and Statistics Finland's backward-revised time series for the years 2020-2009. The backward revision harmonizes the time series from before 2021 with the latest Labour Force Survey that came into force in 2021. This thesis does not attempt to find the best possible way to revise the time series of the latest 2021 Labour Force Survey backwards, nor does it take a stand on whether those time series should be revised backwards using VAR-model backcasts. I had at my disposal a Statistics Finland Labour Force Survey dataset drawn by systematic random sampling, which contains monthly information on the respondents' labour market status and other key variables for the period from January 2000 to February 2023. In addition, I had data from the Ministry of Economic Affairs and Employment on the numbers of registered unemployed for the period from January 2000 to February 2023. Based on these datasets, I built VAR models with which I backcast the time series using the conditional expectation given the available data. In estimating the models and in backcasting, I used the observed values of exogenous variables, namely the Ministry of Economic Affairs and Employment's numbers of registered unemployed and some of the key variables from the available Statistics Finland Labour Force Survey data. The models were estimated by the method of least squares. I verified the adequacy of the models by checking stationarity, testing for homoskedasticity and normality, and examining the standardized residuals and their auto- and cross-correlations. The more observations were used in estimating a VAR model, the closer the backcast values of the time series are to the official Labour Force Survey figures, and the fewer radical changes they exhibit. The time series backcast with the VAR models show considerable similarity to the official Labour Force Survey figures. The backcasts follow a seasonal pattern similar to the official figures. In addition, the backcasts of many time series follow a trend similar to the official figures, although the level differences are at times large. The Ministry of Economic Affairs and Employment's numbers of registered unemployed are a significant explanatory variable in the estimated VAR models, and they explain a substantial part of the backcast values.
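    A hedged sketch of the general form of a VAR model with exogenous regressors, as used above (the notation is assumed, not taken from the thesis):
```latex
% y_t: vector of monthly employment and unemployment series, x_t: exogenous variables
% (e.g. registered-unemployment counts), A_i and B: coefficient matrices, eps_t: error term.
y_t = c + A_1 y_{t-1} + \dots + A_p y_{t-p} + B x_t + \varepsilon_t,
\qquad \varepsilon_t \sim \text{i.i.d.}(0, \Sigma).
% Backcasts are conditional expectations of the pre-sample values of y_t,
% given the available data and the observed exogenous variables.
```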
  • Kari, Daniel (2020)
    Estimating the effect of random chance (’luck’) has long been a question of particular interest in various team sports. In this thesis, we aim to determine the role of luck in a single ice hockey game by building a model to predict the outcome based on the course of events in a game. The obtained prediction accuracy should also to some extent reveal the effect of random chance. Using the course of events from over 10,000 games, we train feedforward and convolutional neural networks to predict the outcome and the final goal differential, which has been proposed as a more informative proxy for the outcome. Interestingly, we are not able to obtain distinctively higher accuracy than previous studies, which have focused on predicting the outcome with information available before the game. The results suggest that there might exist an upper bound for prediction accuracy even if we knew ’everything’ that went on in a game. This further implies that random chance could affect the outcome of a game, although assessing this is difficult, as we do not have a good quantitative metric for luck in the case of single ice hockey game prediction.
  • Kurki, Joonas (2021)
    The goal of the thesis is to prove the Dold-Kan Correspondence, which is a theorem stating that the category of simplicial abelian groups sAb and the category of positively graded chain complexes Ch+ are equivalent. The thesis also goes through the concepts mentioned in the theorem, starting with categories and functors in the first section. In this section, the aim is to give enough information about category theory so that the equivalence of categories can be understood. The second section uses these category-theoretical concepts to define the simplex category, where the objects are the ordered sets n = { 0 -> 1 -> ... -> n }, where n is a natural number, and the morphisms are order-preserving maps between these sets. The idea is to define simplicial objects, which are contravariant functors from the simplex category to some other category. Here the definition of coface and codegeneracy maps, which are special kinds of morphisms in the simplex category, is also given. With these, the cosimplicial (and later simplicial) identities are defined. These identities are central in the calculations done later in the thesis; in fact, one can think of them as the basic tools for working with simplicial objects. In the third section, the thesis introduces chain complexes and chain maps, which together form the category of chain complexes. This lays the foundation for the fourth section, where the goal is to form three different chain complexes out of any given simplicial abelian group A. These chain complexes are the Moore complex A*, the chain complex generated by degeneracies DA* and the normalized chain complex NA*. The latter two of these are both subcomplexes of the Moore complex. In fact, it is later shown that the abelian groups forming these chain complexes admit a direct sum decomposition An = NAn ⊕ DAn. This connection between these chain complexes is an important one, and it is proved and used later in the seventh section. At this point in the thesis, all the knowledge needed for understanding the Dold-Kan Correspondence has been presented. Thus begins the construction of the functors needed for the equivalence which the theorem claims to exist. The functor from sAb to Ch+ maps a simplicial abelian group A to its normalized chain complex NA*, the definition of which was given earlier. This direction does not require that much additional work, since most of it was done in the sections dealing with chain complexes. However, defining the functor in the opposite direction does require some more thought. The idea is to map a chain complex K* to a simplicial abelian group, which is formed using direct sums and factorization. Forming it also requires the definition of another functor, from the subcategory of the simplex category whose objects are those of the simplex category but whose morphisms are only the injections, to the category of abelian groups Ab. After these functors have been defined, the rest of the thesis is about showing that they truly do form an equivalence between the categories sAb and Ch+.
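    For reference, a hedged sketch of a standard definition of the normalized chain complex of a simplicial abelian group A (conventions vary slightly; the face maps d_i are induced by the coface maps mentioned above):
```latex
% Normalized chain complex of a simplicial abelian group A:
N\!A_n \;=\; \bigcap_{i=0}^{n-1} \ker\bigl(d_i : A_n \to A_{n-1}\bigr),
\qquad \partial_n \;=\; (-1)^n d_n : N\!A_n \to N\!A_{n-1}.
% The Dold-Kan correspondence states that N : sAb -> Ch_+ is an equivalence of categories.
```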
  • Kartau, Joonas (2024)
    A primary goal of human genetics research is the investigation of associations between genetic variants and diseases. Due to the high number of genetic variants, sophisticated statistical methods for high dimensional data are required. A genome-wide association study (GWAS) is the initial analysis used to measure the marginal associations between genetic variants and biological traits, but because it ignores correlation between variants, identification of truly causal variants remains difficult. Fine-mapping refers to the statistical methods that aim to identify causal variants from GWAS results by incorporating information about correlation between variants. One such fine-mapping method is FINEMAP, a widely used Bayesian variable selection model. To make computations efficient, FINEMAP assumes a constant sample size for the measured genetic variants, but in a meta-analysis that combines data from several studies, this assumption may not hold. This results in miscalibration of the FINEMAP model with meta-analyzed data. In this thesis, a novel extension for FINEMAP is developed, named FINEMAP-MISS. With an additional inversion of the variants' correlation matrix and other less demanding computational adjustments, FINEMAP-MISS makes it possible to fine-map meta-analyzed GWAS data. To test the effectiveness of FINEMAP-MISS, genetic data from the UK Biobank is used to generate sets of simulated data, where a single variant has a non-zero effect on the generated trait. For each simulated dataset, a meta-analysis with missing information is emulated, and fine-mapping is performed with FINEMAP and FINEMAP-MISS. The results verify that with missing data FINEMAP-MISS clearly performs better than FINEMAP in identification of causal variants. Additionally, with missing data the posterior probability estimates provided by FINEMAP-MISS are properly calibrated, whereas the estimates by FINEMAP exhibit miscalibration. FINEMAP-MISS enables the use of fine-mapping for meta-analyzed genetic studies, allowing for greater power in the detection of causal genetic variants.
  • Hietala, Micke (2024)
    In the fields of insurance and financial mathematics, robust modeling tools are essential for accurately assessing extreme events. While standard statistical tools are effective for data with light-tailed distributions, they face significant challenges when applied to data with heavy-tailed characteristics. Identifying whether data follow a light- or heavy-tailed distribution is particularly challenging, often necessitating initial visualization techniques to provide insights into the nature of the distribution and guide further statistical analysis. This thesis focuses on visualization, employing basic visual techniques to examine the tail behaviors of probability distributions, which are crucial for understanding the implications of extreme values in financial and insurance risk assessments. The study systematically applies a series of visualizations, including histograms, Q-Q plots, P-P plots, and Hill plots. Through the interpretation of these techniques on known distributions, the thesis aims to establish a simple framework for analyzing unknown data. Using Danish fire insurance data as the empirical data, this research simulates various probability distributions, emphasizing the visual distinction between light-tailed and heavy-tailed distributions. The thesis examines a range of distributions, including the Normal, Exponential, Weibull, and Power Law, each selected for its relevance in modeling different aspects of tail behavior. The mathematical exploration of these distributions provides a standard basis for assessing their effectiveness in capturing the possibility of extreme events in data. The visual analysis reveals the presence of heavy-tailed characteristics in the Danish fire insurance data, which is not well modeled by common light-tailed models such as the Normal and Exponential distributions. These findings underscore the need for more refined approaches that better accommodate the complexities of heavy-tailed phenomena. The thesis advocates for the further use of more advanced statistical tools and extreme value theory to assess heavy-tailed behaviour more accurately. Such tools are important for developing financial and insurance models that can effectively handle the extremities present in real-world data. This thesis contributes to the understanding and application of visualization techniques in the analysis of heavy-tailed data, laying a foundation on which more advanced tools can build to support more accurate risk management practices in the financial and insurance sectors.
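    A hedged numpy sketch of the Hill estimator underlying a Hill plot (illustrative only; the function and variable names are assumptions, not taken from the thesis):
```python
import numpy as np

def hill_estimates(data, k_max=None):
    """Hill estimator of 1/alpha computed from the k largest observations, k = 2..k_max."""
    x = np.sort(np.asarray(data, dtype=float))[::-1]   # descending order statistics
    n = len(x)
    k_max = k_max or n - 1
    ks = np.arange(2, k_max + 1)
    logs = np.log(x)
    # For each k: mean of the log of the k largest values minus log of the (k+1)-th largest.
    estimates = np.array([logs[:k].mean() - logs[k] for k in ks])
    return ks, estimates        # plotting ks against estimates gives a Hill plot

# Usage: Pareto-distributed data with tail index alpha = 2; the Hill plot should
# stabilise around 1/alpha = 0.5 for moderate k.
rng = np.random.default_rng(1)
pareto_sample = 1.0 + rng.pareto(2.0, size=5000)
ks, h = hill_estimates(pareto_sample, k_max=500)
print(h[:5], h[-5:])
```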
  • Kovanen, Ville (2021)
    Maxwell’s equations are a set of equations which describe how electromagnetic fields behave in a medium or in a vacuum. This means that they can be studied from the perspective of partial differential equations as different kinds of initial value problems and boundary value problems. Because in physically relevant situations the media are often not regular, or there can be irregular sources such as point sources, it is not always meaningful to study Maxwell’s equations with the intention of finding a direct solution to the problem. Instead, in these cases it is useful to study them from the perspective of weak solutions, making the problem easier to study. This thesis studies Maxwell’s equations from the perspective of weak solutions. To help understand later chapters, the thesis first introduces theory related to Hilbert spaces, weak derivatives and Sobolev spaces. Understanding the curl, divergence and gradient and their properties is important for the topic, because the thesis utilises several different Sobolev spaces which satisfy different kinds of geometrical conditions. After going through the background theory, the thesis introduces Maxwell’s equations in section 2.3. Maxwell’s equations are described in both differential form and time-harmonic differential form, as both are used in the thesis. Static problems related to Maxwell’s equations are studied in Chapter 3. In static problems the charge and current densities are stationary in time. If the electric and magnetic fields are assumed to have finite energy, it follows that the studied problem has a unique solution. The thesis demonstrates what form the electric and magnetic fields must have in order to satisfy the conditions of the problem. In particular, it is noted that the electromagnetic field decomposes into two parts, of which only one arises from the electric and magnetic potentials. Maxwell’s equations are also studied with methods from spectral theory in Chapter 4. First the thesis introduces and defines a few concepts from spectral theory, such as spectra, resolvent sets and eigenvalues. After this, the thesis studies non-static problems related to Maxwell’s equations by utilising their time-harmonic forms. In time-harmonic form, Maxwell’s equations do not depend on time but on frequency, effectively simplifying the problem by eliminating the time dependency. It turns out that the natural frequencies which solve the spectral problem we study belong to the spectrum of the Maxwell operator iA. Because the spectrum is proved to be discrete, the set of eigensolutions is also discrete. This gives the solution to the problem, as the natural frequency solving the problem has a corresponding eigenvector with finite energy. However, this method does not give an efficient way of finding the explicit form of the solution.
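    A hedged sketch of the time-harmonic Maxwell system referred to above, in a standard form assuming the e^{-iωt} time convention (the notation is not taken from the thesis):
```latex
% Time-harmonic Maxwell's equations at angular frequency omega, in a medium with
% permittivity epsilon and permeability mu; J and rho are the source current and charge.
\nabla \times E = i\omega\mu H, \qquad
\nabla \times H = J - i\omega\varepsilon E, \qquad
\nabla \cdot (\varepsilon E) = \rho, \qquad
\nabla \cdot (\mu H) = 0 .
```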
  • Toijonen, Tomi (2024)
    The Whitney embedding theorem, the subject of this thesis, confirms the intuitive idea that every smooth manifold can be embedded into some Euclidean space. The theorem is named after Hassler Whitney, who first proved it in 1936. The thesis is divided into three chapters. In the first, we go through the necessary concepts of general topology and theorems derived from them, starting with the most important ones: the definitions of a topological space and of the continuity of a map. The concepts introduced also include, for example, homeomorphism, topological embedding, a basis of a topology, subspaces, and connectedness. At the end of the chapter we become acquainted with compactness, which plays a major role in the thesis, since it is used later in the proof of the Whitney embedding theorem. The second chapter focuses on the theory of smooth manifolds to the extent needed for the thesis. At the beginning of the chapter we define a manifold, which is a locally Euclidean Hausdorff space with a countable basis. After that, a smooth structure on a manifold is defined, that is, a maximal smooth atlas, which turns the manifold into a smooth manifold. Several examples of both of these are given. In addition, properties of manifolds and smooth manifolds are proved. In particular, the existence of precompact bases is proved for all manifolds. After this, a smooth partition of unity is defined and its existence is proved for smooth manifolds. Using this, the existence of smooth bump and exhaustion functions on smooth manifolds can be proved. These functions are used later in the proof of the Whitney embedding theorem. Next, a smooth map between manifolds is defined, followed by derivations, tangent vectors and the tangent space, with the help of which the differential of a smooth map can be defined. After this we move on to the definition of a smooth immersion, which is given in terms of the differential. With the help of a smooth immersion, a smooth embedding can then be defined. In the last section of the chapter, submanifolds and the related level sets are defined. In particular, critical points and critical values are also defined. In the third and final chapter, some auxiliary results are first proved, followed by Sard's theorem, which is later used to prove the Whitney embedding theorem. According to Sard's theorem, the set of critical values of a smooth map between smooth manifolds has measure zero. After this, two more auxiliary results are proved before the proof of the Whitney embedding theorem. In the latter of these, it is shown that if there exists a smooth embedding of a smooth n-manifold into some Euclidean space, then there exists a smooth embedding into the space R2n+1. After these, we arrive at the proof of the main result of the thesis, the Whitney embedding theorem, according to which every smooth n-manifold, with or without boundary, admits a strong smooth embedding into the Euclidean space R2n+1. The proof is divided into two parts; in the first, the case where the smooth manifold is compact is proved. This is done by constructing a smooth embedding with the help of a smooth bump function. In the second part of the proof, a non-compact smooth manifold is divided into compact submanifolds with the help of an exhaustion function. After this, their smooth embeddings are combined, with the help of a bump function, into a new smooth embedding covering the whole manifold.