
Browsing by study line "Tillämpad matematik"


  • Huggins, Robert (2023)
    In this thesis, we develop a Bayesian approach to the inverse problem of inferring the shape of an asteroid from time-series measurements of its brightness. We define a probabilistic model over possibly non-convex asteroid shapes, choosing parameters carefully to avoid potential identifiability issues. Applying this probabilistic model to synthetic observations and sampling from the posterior via Markov Chain Monte Carlo, we show that the model is able to recover the asteroid shape well in the limit of many well-separated observations, and is able to capture posterior uncertainty in the case of limited observations. We greatly accelerate the computation of the forward problem (predicting the measured light curve given the asteroid’s shape parameters) by using a bounding volume hierarchy and by exploiting data parallelism on a graphics processing unit.
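    The inference loop the abstract describes (a parametric shape prior, a forward light-curve model, and MCMC over the posterior) can be sketched with random-walk Metropolis. The three-parameter model, Gaussian noise level, and `forward_model` stand-in below are hypothetical, not the thesis's BVH/GPU renderer:

```python
import numpy as np

rng = np.random.default_rng(0)

def forward_model(theta):
    """Placeholder for the light-curve prediction F(theta). In the thesis
    this is an accelerated renderer; here it is a stand-in nonlinear map
    so the sketch runs end to end."""
    return np.sin(np.outer(np.linspace(0, 1, 50), theta)).sum(axis=1)

def log_posterior(theta, y, sigma=0.1):
    resid = y - forward_model(theta)
    log_lik = -0.5 * np.sum(resid**2) / sigma**2   # Gaussian noise model
    log_prior = -0.5 * np.sum(theta**2)            # standard normal prior
    return log_lik + log_prior

# synthetic observations from a "true" parameter vector
theta_true = rng.normal(size=3)
y_obs = forward_model(theta_true) + 0.1 * rng.normal(size=50)

# random-walk Metropolis over the shape parameters
theta = np.zeros(3)
lp = log_posterior(theta, y_obs)
samples = []
for _ in range(20000):
    prop = theta + 0.05 * rng.normal(size=3)
    lp_prop = log_posterior(prop, y_obs)
    if np.log(rng.uniform()) < lp_prop - lp:       # accept/reject step
        theta, lp = prop, lp_prop
    samples.append(theta)
print(np.mean(samples[5000:], axis=0), theta_true) # posterior mean vs truth
```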
  • Orue Arruabarrena, Aida (2024)
    Altruism refers to behaviour by an individual that increases the fitness of another individual while decreasing its own, and despite seemingly going against traditional theories of evolution, it is actually quite common in the animal kingdom. Understanding why and how altruistic behaviours happen has long been a central focus in evolutionary ecology, and this thesis aims to contribute to this area of study. This thesis focuses on infinite lattice models. Lattice models are a type of spatially explicit model, which means that they describe the dynamics of a population in both time and space. In particular, we consider a modification of the simplest type of lattice model (called the contact process), which considers only birth and death events. The objective is to study altruistic behaviours towards neighbours within populations residing on a lattice. To achieve this, we assume that, apart from giving birth and dying, individuals transition to a permanently non-reproductive state at a certain rate. We use ordinary differential equations to describe the dynamics of this population and to develop our model. The population we initially have in the lattice (the resident population) reaches a positive equilibrium, which we calculate numerically using Matlab. Through linear stability analysis, we can show that this equilibrium is asymptotically stable, which means that with time the resident population will stabilize at this equilibrium. Once the resident reaches this equilibrium, we introduce a mutant population in the lattice with the same characteristics as the resident, except that it has a different post-reproductive death rate. Linear stability analysis of the extinction equilibrium of the mutant shows that mutants with a higher post-reproductive death rate than the residents gain a competitive advantage. This is because, by dying faster, post-reproductive mutants make more space for other mutants to reproduce. That result changes if we assume that post-reproductive individuals help their neighbours produce more offspring. In this case, we find that, depending on the amount of reproductive help given by the post-reproductive individuals, a higher post-reproductive death rate is no longer evolutionarily advantageous. In fact, we are able to determine that, in general, helping neighbours reproduce is a better strategy than sacrificing oneself to make room for reproductive neighbours. Lastly, we examine this reproductive help as a function of the post-reproductive mortality rate. With this, our goal is to find an evolutionarily stable strategy (ESS) for the resident population, that is, a strategy that cannot be displaced by any alternative strategy.
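    A minimal mean-field caricature of the contact process, with hypothetical birth and death rates, shows the kind of equilibrium and stability computation the abstract refers to (the thesis works with the lattice model itself and Matlab):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Mean-field caricature: the occupied fraction p grows by births into
# empty neighbouring sites and shrinks by deaths.
beta, mu = 2.0, 0.5           # hypothetical birth and death rates

def rhs(t, p):
    return beta * p * (1 - p) - mu * p

sol = solve_ivp(rhs, (0, 50), [0.01])
p_star = 1 - mu / beta        # analytic positive equilibrium
print(sol.y[0, -1], p_star)   # trajectory settles at p* = 0.75

# linear stability: the derivative of rhs at p* equals mu - beta < 0,
# so the equilibrium is asymptotically stable
print(beta * (1 - 2 * p_star) - mu)
```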
  • Suominen, Henri (2021)
    Online hypothesis testing occurs in many branches of science. Most notably, it is of use when there are too many hypotheses to test with traditional multiple hypothesis testing, or when the hypotheses are created one by one. When testing multiple hypotheses one by one, the order in which the hypotheses are tested often has a great influence on the power of the procedure. In this thesis we investigate the applicability of reinforcement learning tools to solve the exploration-exploitation problem that often arises in online hypothesis testing. We show that a common reinforcement learning tool, Thompson sampling, can be used to gain a modest amount of power using a method for online hypothesis testing called alpha-investing. Finally, we examine the size of this effect using both synthetic data and a practical case involving simulated data studying urban pollution. We found that, by choosing the order of tested hypotheses with Thompson sampling, the power of alpha-investing is improved. The level of improvement depends on the assumptions that the experimenter is willing to make and their validity. In a practical situation the presented procedure rejected up to 6.8 percentage points more hypotheses than testing the hypotheses in a random order.
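    A toy sketch of the combination the abstract describes, with hypothetical effect sizes and a simple spending rule (the wealth update follows Foster and Stine's alpha-investing; the bandit-arm model of hypothesis streams is illustrative only):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)

# Each hypothesis stream is a bandit arm; a Beta posterior tracks the
# chance that testing it yields a rejection. Effect sizes are hypothetical.
n_arms = 5
signal = np.array([0.0, 0.0, 0.0, 0.5, 0.8])
a_post, b_post = np.ones(n_arms), np.ones(n_arms)

wealth, omega = 0.05, 0.025          # alpha-investing wealth and reward
rejections = 0
for _ in range(200):
    arm = int(np.argmax(rng.beta(a_post, b_post)))   # Thompson sampling
    p = 1 - norm.cdf(rng.normal(loc=signal[arm]))    # one-sided p-value
    alpha_j = 0.5 * wealth / (1 + 0.5 * wealth)      # spend half the wealth
    if p <= alpha_j:
        rejections += 1
        wealth += omega                              # earn on a discovery
        a_post[arm] += 1
    else:
        wealth -= alpha_j / (1 - alpha_j)            # pay on a miss
        b_post[arm] += 1
print(rejections, round(wealth, 4))
```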
  • Kamutta, Emma (2024)
    This thesis studies methods for finding crease patterns for surfaces of revolution with different Gaussian curvatures using variations of the Miura-ori origami pattern. Gaussian curvature is an intrinsic property of a surface in that it depends only on the inner properties of the surface in question. Determining the Gaussian curvature of a surface can usually be difficult, but for surfaces of revolution it can be calculated easily. Examples of surfaces of revolution with different Gaussian curvatures include cylinders, spheres, catenoids, pseudospheres and tori, which are the surfaces of interest in this work. Miura-ori is a family of flat-foldable origami patterns consisting of a quadrilateral mesh. The regular pattern is a two-way periodic tessellation determined by the parameters around a single vertex, and it has a straight profile in all of its semi-folded forms. By relaxing the pattern to a one-way periodic tessellation we get a more diverse set of patterns called the semi-generalized Miura-ori (SGMO), which are determined by the parameters of a single column of vertices. By varying the angles of the creases related to these vertices we are also able to approximate curved profiles. Patterns for full surfaces of revolution can then be found by folding a thin strip of paper to an SGMO configuration that follows a wanted profile, after which the strip is repeated enough times horizontally so that the ends of the paper can be joined to form a full revolution. Three algorithms for finding a crease pattern that follows a wanted profile curve are discussed in this work: a simple algorithm by Robert J. Lang, and two algorithms developed by the author, called the Equilateral triangles method and the Every second major fold follows the curve method. All three algorithms are explored both geometrically and through their pen-and-paper implementations, which are described in detail so that the reader can utilize them without making any computations. The three algorithms are then tested on a set of profile curves for the surfaces of interest. Examples of full surfaces folded in real life are also given, along with the crease patterns for the models. The results showcase that each algorithm is suitable for finding patterns for our test set of surfaces, and that the resulting patterns usually have visually distinct appearances. The scale and proportions of the approximation matter greatly for the looks and feasibility of the pattern with all algorithms.
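    For an arc-length-parametrised profile s ↦ (f(s), g(s)) rotated about the axis, the Gaussian curvature of the resulting surface of revolution reduces to K = -f''(s)/f(s), which is easy to check numerically; the sphere and cylinder below are standard examples, not code from the thesis:

```python
import numpy as np

# For a surface of revolution with an arc-length-parametrised profile
# s -> (f(s), g(s)) rotated about the z-axis, the Gaussian curvature
# reduces to K = -f''(s) / f(s).
s = np.linspace(-1.0, 1.0, 2001)
ds = s[1] - s[0]

f = np.cos(s)                      # unit sphere: f = cos s, g = sin s
f2 = np.gradient(np.gradient(f, ds), ds)
print((-f2 / f)[1000])             # ~ 1.0 (constant positive curvature)

f = np.full_like(s, 0.5)           # cylinder of radius 0.5: f constant
f2 = np.gradient(np.gradient(f, ds), ds)
print((-f2 / f)[1000])             # ~ 0.0 (flat)
```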
  • Pyrylä, Atte (2020)
    In this thesis we look at the asymptotic approach to modeling randomly weighted heavy-tailed random variables and their sums. Heavy-tailed distributions, named for having more probability mass in the tail than any exponential distribution, are essentially a way to represent large tail risks in a model in a realistic manner. Weighted sums of random variables are a versatile basic structure that can be adapted to model anything from claims over time to the returns of a portfolio, and giving the primary random variables heavy tails is a natural way to integrate extremal events into such models. The methodology introduced in this thesis offers an alternative to some of the prevailing and traditional approaches in risk modeling. Our main result, which we cover in detail, originates from "Randomly weighted sums of subexponential random variables" by Tang and Yuan (2014). It draws an asymptotic connection between the tails of randomly weighted heavy-tailed random variables and the tails of their sums, explicitly stating how the various tail probabilities relate to each other; in effect it extends the idea that, for sums of heavy-tailed random variables, large total claims originate from a single source instead of being accumulated from many smaller claims. A great merit of these results is that the random weights are, for the most part, allowed to lack an upper bound and to be arbitrarily dependent on each other. As for applications, we first look at an explicit estimation method for computing extreme quantiles of a loss distribution, yielding values for a common risk measure known as Value-at-Risk. The methodology can easily be adapted to a setting with similar preexisting knowledge, demonstrating a straightforward way of applying the results. We then move on to examine the ruin problem of an insurance company, developing a setting and conditions that can be imposed on the structures to permit an application of our main results, yielding an asymptotic estimate for the ruin probability. Additionally, to be more realistic, we introduce the approach of crude asymptotics, which requires a little less to be known of the primary random variables; we formulate a result similar in fashion to our main result and proceed to prove it.
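    The single-big-jump principle behind the main result can be illustrated by Monte Carlo: for heavy-tailed summands, the tail of the weighted sum is close to the sum of the individual tails. The Pareto claims and independent uniform weights below are hypothetical choices:

```python
import numpy as np

rng = np.random.default_rng(2)

# P(sum Theta_i X_i > x) is asymptotically the sum of the individual
# tails for heavy-tailed X_i. Hypothetical setup: Pareto(alpha = 1.5)
# claims, independent Uniform(0, 1) weights.
n, N = 5, 10**6
X = (1 - rng.uniform(size=(N, n))) ** (-1 / 1.5)   # Pareto tail x^(-1.5)
Theta = rng.uniform(size=(N, n))

x = 50.0
lhs = np.mean((Theta * X).sum(axis=1) > x)         # tail of weighted sum
rhs = np.mean(Theta * X > x, axis=0).sum()         # sum of single tails
print(lhs, rhs)                                    # close for large x
```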
  • Rautio, Siiri (2019)
    Improving the quality of medical computed tomography reconstructions is an important research topic now that low-dose imaging is pursued to minimize the X-ray radiation dose inflicted on patients. Using lower radiation doses for imaging leads to noisier reconstructions, which then require post-processing, such as denoising, to make the data suitable for diagnostic purposes. Reconstructing the data using iterative algorithms produces higher-quality results, but they are computationally costly and not quite powerful enough to be used as such for medical analysis. Recent advances in deep learning have demonstrated the great potential of convolutional neural networks in various image processing tasks. Image denoising with deep neural networks can produce high-quality, virtually noise-free predictions out of images originally corrupted with noise, in a computationally efficient manner. In this thesis, we survey the topics of computed tomography and deep learning for the purpose of applying a state-of-the-art convolutional neural network to denoising dental cone-beam computed tomography reconstructions. We investigate how the denoising results of a deep neural network are affected if iteratively reconstructed images are used in training the network, as opposed to traditionally reconstructed images. The results show that if the training data is reconstructed using iterative methods, the denoising results of the network improve notably. We believe these results can be further improved and extended beyond the case of cone-beam computed tomography and the field of medical imaging.
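    A minimal residual denoising CNN in the spirit of the networks the thesis surveys (not the thesis's architecture or data) looks roughly like this in PyTorch:

```python
import torch
import torch.nn as nn

# Residual learning: the network predicts the noise and subtracts it
# from the input image.
class DenoiseCNN(nn.Module):
    def __init__(self, channels=1, features=32, depth=5):
        super().__init__()
        layers = [nn.Conv2d(channels, features, 3, padding=1), nn.ReLU()]
        for _ in range(depth - 2):
            layers += [nn.Conv2d(features, features, 3, padding=1), nn.ReLU()]
        layers += [nn.Conv2d(features, channels, 3, padding=1)]
        self.body = nn.Sequential(*layers)

    def forward(self, x):
        return x - self.body(x)        # subtract the predicted noise

model = DenoiseCNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
clean = torch.rand(8, 1, 64, 64)       # stand-in for reconstructions
noisy = clean + 0.1 * torch.randn_like(clean)
for _ in range(5):                     # a few illustrative training steps
    loss = nn.functional.mse_loss(model(noisy), clean)
    opt.zero_grad(); loss.backward(); opt.step()
print(loss.item())
```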
  • Ronkainen, Arttu (2023)
    Gaussian processes are stochastic processes whose finite subsets follow a multivariate normal distribution. Models based on them are popular in Bayesian statistics, as they allow complex temporal or spatial dependencies to be modelled flexibly. In Gaussian latent variable models, the observations are assumed to follow a conditional distribution that depends on the values of a latent process with a Gaussian prior. When the data consist of categorical values, Gaussian latent variable models are computationally demanding, since the posterior distribution of the latent variables is usually analytically intractable. Posterior inference must then rely on analytical approximations or numerical methods. The computational difficulties are compounded further when a prior distribution is also placed on the parameters of the latent Gaussian variable's covariance function. This thesis discusses approximate methods that can be used for posterior inference in Gaussian latent variable models. The focus is mainly on a multi-class classification model with a softmax observation model, but most of the ideas presented also apply to other observation models. Three approximate methods are considered. The first is the sampling-based Markov chain Monte Carlo method, widely used in Bayesian statistics, which is asymptotically exact but computationally heavy. The second method uses an analytical approximation of the latent variable's posterior known as the Laplace approximation, combined with Markov chain Monte Carlo. The third method combines the Laplace approximation with point estimation of the hyperparameters. The theory underlying these methods is presented briefly, after which the performance of the approximate methods is compared on the multi-class classification model with simulated data. The comparison reveals the effect of the Laplace approximation on the posterior distributions of both the hyperparameters and the latent variable.
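    A compressed sketch of the Laplace approximation at the heart of the second and third methods, reduced to binary logistic observations for brevity (the thesis treats the multi-class softmax model); the kernel and data below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(3)

# Newton iteration to the posterior mode of a GP classifier with
# logistic likelihood; the Laplace approximation is a Gaussian
# centred at this mode.
n = 40
x = np.sort(rng.uniform(-3, 3, n))
K = np.exp(-0.5 * (x[:, None] - x[None, :])**2) + 1e-8 * np.eye(n)
y = (rng.uniform(size=n) < 1 / (1 + np.exp(-np.sin(2 * x)))).astype(float)

f = np.zeros(n)
for _ in range(20):                     # Newton steps to the mode
    pi = 1 / (1 + np.exp(-f))
    W = pi * (1 - pi)                   # negative Hessian of log-likelihood
    grad = y - pi
    # f_new = K (I + W K)^{-1} (W f + grad), avoiding K^{-1}
    B = np.eye(n) + W[:, None] * K
    f = K @ np.linalg.solve(B, W * f + grad)
pi_hat = 1 / (1 + np.exp(-f))           # posterior-mode class probabilities
print(np.round(pi_hat[:5], 2))
```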
  • Mäkinen, Sofia (2023)
    In this thesis we consider the inverse problem for the one-dimensional wave equation: we would like to recover the velocity function, i.e. the wave speed, given Neumann and Dirichlet boundary conditions, when the solution to the equation is known. It has been shown that an operator Λ corresponding to the boundary conditions determines the volumes of the domains of influence, the sets where the travel time for the wave is limited, and that these volumes in turn determine the velocity function. We present some theorems and propositions about determining the wave speed and prove a few of them. Artificial neural networks are a form of machine learning widely used in various applications. It has previously been proven that a one-layer feedforward neural network with a non-polynomial activation function, under some additional constraints on the activation function, can approximate any continuous real-valued function. In this thesis we present a proof of this result for a continuous non-polynomial activation function. Furthermore, we apply two neural network architectures to the volume inversion problem, training the networks to approximate a single volume when the operator Λ is given. The neural networks in question are the feedforward neural network and the operator recurrent neural network. Before the volume inversion problem, we consider the simpler problem of finding the inverse of a small invertible matrix. Finally, we compare the performance of these two neural networks on both the volume and matrix inversion problems.
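    The forward map underlying the inverse problem can be sketched with a finite-difference solver for u_tt = c(x)² u_xx that records boundary data; the bump-shaped wave speed below is a hypothetical example, not the thesis's setup:

```python
import numpy as np

# Solve u_tt = c(x)^2 u_xx on [0, 1] with zero Dirichlet boundary
# values and record a boundary trace, i.e. the data an operator like
# Lambda encodes. The smooth bump in c is hypothetical.
nx, nt = 200, 800
dx = 1.0 / (nx - 1)
x = np.linspace(0, 1, nx)
c = 1.0 + 0.3 * np.exp(-100 * (x - 0.5)**2)
dt = 0.5 * dx / c.max()               # CFL-stable time step

u_prev = np.exp(-500 * (x - 0.2)**2)  # initial pulse
u = u_prev.copy()
trace = []
for _ in range(nt):
    u_next = np.zeros_like(u)
    u_next[1:-1] = (2 * u[1:-1] - u_prev[1:-1]
                    + (c[1:-1] * dt / dx)**2
                    * (u[2:] - 2 * u[1:-1] + u[:-2]))
    u_prev, u = u, u_next             # Dirichlet: endpoints stay 0
    trace.append(u[1] / dx)           # Neumann-type data at x = 0
print(len(trace), max(map(abs, trace)))
```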
  • Shabani, Mirjeta (2024)
    A continuous-time Markov chain is a stochastic process with the Markov property: the transition to the next state of the process depends only on the current state, not on the process's preceding states. Continuous-time Markov chains are fundamental tools for modelling stochastic systems in finance and insurance, such as option pricing and insurance claim processes. This thesis examines continuous-time Markov chains, their most important concepts, and their typical properties. For instance, we introduce and investigate the Kolmogorov forward and backward equations, which are essential for continuous-time systems. The main aim of the thesis, however, is to present a method, with proof, for constructing a Markov process from a continuous transition intensity matrix. This is achieved by generating a transition probability matrix from a given transition intensity matrix. When the transition intensities are known, the challenge is to determine the transition probabilities, since the calculations can easily become difficult to solve analytically. The theorem introduced makes it possible to simplify the calculations by approximation. We also apply the theory: we demonstrate how determining transition probabilities using Kolmogorov's forward equations can become challenging even in a simple setup, and we compare the approximate transition probabilities derived from the main theorem with the actual transition probabilities. We observe that the approximations derived from the main theorem provide quite satisfactory estimates of the actual transition probabilities.
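    For a constant intensity matrix Q, the Kolmogorov equations give P(t) = exp(tQ), and a short-step product approximation of the kind the abstract alludes to can be compared against it directly; the three-state Q below is hypothetical:

```python
import numpy as np
from scipy.linalg import expm

# Hypothetical 3-state intensity matrix; state 3 is absorbing.
Q = np.array([[-0.30,  0.20,  0.10],
              [ 0.05, -0.15,  0.10],
              [ 0.00,  0.00,  0.00]])

t = 5.0
P_exact = expm(t * Q)                   # exact transition probabilities

n = 1000                                # many short steps of length t/n
P_approx = np.linalg.matrix_power(np.eye(3) + (t / n) * Q, n)
print(np.abs(P_exact - P_approx).max())  # small for large n
```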
  • Virri, Maria (2021)
    Bonus-malus systems are used globally to determine insurance premiums of motor liability policy-holders by observing past accident behavior. In these systems, policy-holders move between classes that represent different premiums. The number of accidents is used as an indicator of driving skills or risk. The aim of bonus-malus systems is to assign premiums that correspond to risks, by increasing the premiums of policy-holders who have reported accidents and awarding discounts to those who have not. Many types of bonus-malus systems are in use, and there is no consensus about what the optimal system looks like. Different tools can be utilized to measure optimality, which is defined differently according to each tool. The purpose of this thesis is to examine one of these tools, elasticity. Elasticity aims to evaluate how well a given bonus-malus system achieves its goal of assigning premiums fairly according to the policy-holders' risks, by measuring the response of the premiums to changes in the number of accidents. Bonus-malus systems can be mathematically modeled using stochastic processes called Markov chains, and accident behavior can be modeled using Poisson distributions. These two concepts of probability theory and their properties are introduced and applied to bonus-malus systems in the beginning of this thesis. Two types of elasticities are then discussed. Asymptotic elasticity is defined using Markov chain properties, while transient elasticity is based on a concept called the discounted expectation of payments. It is shown how elasticity can be interpreted as a measure of optimality. We will observe that it is typically impossible to have an optimal bonus-malus system for all policy-holders when optimality is measured using elasticity. Some policy-holders will inevitably subsidize other policy-holders by paying premiums that are unfairly large. More specifically, it will be shown that, for bonus-malus systems with certain elasticity values, lower-risk policy-holders will subsidize the higher-risk ones. Lastly, a method is devised to calculate the elasticity of a given bonus-malus system using the programming language R. This method is then used to find the elasticities of five Finnish bonus-malus systems in order to evaluate and compare them.
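    The Markov chain machinery can be sketched for a hypothetical three-class system (a claim-free year moves the policy-holder down one class; any claim sends them to the top): the stationary distribution gives the asymptotic mean premium whose sensitivity to the claim frequency asymptotic elasticity measures. This is an illustration, not one of the five Finnish systems:

```python
import numpy as np
from scipy.stats import poisson

# Hypothetical 3-class bonus-malus system with Poisson(lam) accidents.
def transition_matrix(lam):
    p0 = poisson.pmf(0, lam)            # probability of a claim-free year
    P = np.zeros((3, 3))
    for i in range(3):
        P[i, max(i - 1, 0)] += p0       # move down, or stay in class 0
        P[i, 2] += 1 - p0               # at least one claim: top class
    return P

def stationary(P):
    w, v = np.linalg.eig(P.T)           # left eigenvector for eigenvalue 1
    pi = np.real(v[:, np.argmax(np.real(w))])
    return pi / pi.sum()

premiums = np.array([0.6, 0.8, 1.0])    # relative class premiums
for lam in (0.05, 0.10, 0.20):
    pi = stationary(transition_matrix(lam))
    print(lam, round(float(pi @ premiums), 3))   # asymptotic mean premium
```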
  • Sohkanen, Pekka (2021)
    The fields of insurance and financial mathematics require increasingly intricate descriptors of dependency. In financial mathematics, this demand arises from globalisation effects over the past decade, which have caused financial asset returns to exhibit increasingly intricate dependencies between each other. Of particular interest are measurements describing the probabilities of simultaneous occurrences of unusually negative stock returns. In insurance mathematics, the ability to evaluate probabilities associated with the simultaneous occurrence of unusually large claim amounts can be crucial for both the solvency and the competitiveness of an insurance company. These sorts of dependencies are referred to by the term tail dependence. In this thesis, we introduce the concept of tail dependence and the tail dependence coefficient, a tool for determining the amount of tail dependence between random variables. We also present statistical estimators for the tail dependence coefficient. Favourable properties of these estimators are investigated, and a simulation study is executed in order to evaluate and compare estimator performance under a variety of distributions. Some necessary concepts from stochastics are presented. Mathematical models of dependence are introduced. Elementary notions of extreme value theory and empirical processes are touched on. These motivate the presented estimators and facilitate the proofs of their favourable properties.
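    A standard non-parametric estimator of the upper tail dependence coefficient counts how often both coordinates fall among their top k order statistics; the sketch below (a variant of the estimators the thesis studies, with hypothetical data) recovers 0 for independent normals and 1 for a comonotone pair:

```python
import numpy as np

rng = np.random.default_rng(4)

def upper_tail_dep(x, y, k):
    """Fraction of the k largest x-observations whose y-partner is also
    among the k largest y-observations."""
    n = len(x)
    rx = np.argsort(np.argsort(x))       # ranks 0..n-1
    ry = np.argsort(np.argsort(y))
    joint = np.mean((rx >= n - k) & (ry >= n - k))
    return n * joint / k

n, k = 20000, 200
x, y = rng.normal(size=n), rng.normal(size=n)
print(upper_tail_dep(x, y, k))           # independent: ~ 0
print(upper_tail_dep(x, 2 * x + 1, k))   # comonotone: exactly 1
```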
  • Hanninen, Elsa (2020)
    Estimating the loss of insurance contracts is important for an insurance company's risk management. This thesis presents Hattendorff's theorem for evaluating the expected value and variance of the loss of an insurance contract, and applies its results to a life insurance contract modelled by a multi-state Markov process. By Hattendorff's theorem, the expected losses on disjoint time intervals of a contract priced by the equivalence principle are zero, and the losses are uncorrelated, so the variance of the total loss can be computed as the sum of the variances of the losses on the disjoint intervals. In the applied part of the thesis, Markov processes are simulated in a suitable multi-state model to represent realisations of life insurance contracts. We examine whether the mean of the yearly losses produced by the simulated paths is close to zero, and whether the variance of the loss over the whole contract period is close to the sum of the variances of the yearly losses. In addition, the theoretical counterparts are computed for the simulation setup using Hattendorff's theorem and compared with the simulated values. An insurance contract roughly involves two kinds of payments: claim payments made by the insurance company and premiums paid by the insured. The cash flow of the contract over a time interval is the difference between the claims and premiums on that interval, discounted to time zero. The reserve is the expected value of the cash flow arising after the evaluation time, discounted to that time. The loss of the contract on a time interval is defined as the sum of the cash flow on that interval and the change in the value of the reserve. By defining a stochastic process that tracks, at each moment, the costs accumulated so far together with the present value of the future reserve, the loss can be expressed as the change in value of this process. This process is a square-integrable martingale, so the results of Hattendorff's theorem follow from the properties of the increments of square-integrable martingales. Hattendorff's results were discovered as early as the 1860s, but the use of martingale theory is a modern approach to the problem. By expressing the costs of a contract modelled by a multi-state Markov process as a Lebesgue-Stieltjes integral, computable forms for the variance of the loss are obtained. For contracts modelled by a Markov process, a special case of Hattendorff's result can be derived in which the losses can be allocated not only to different years but also to different states. The applied section shows that the expected losses in individual contract years are close to zero, and that the sum of the sample variances approaches the sample variance of the loss over the whole contract period, consistent with the claims of Hattendorff's theorem. The simulated sample variances do not exactly match their theoretical counterparts.
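    The simulation study can be sketched for a hypothetical discrete-time term insurance with constant mortality: the equivalence premium and a Thiele-type reserve recursion make each year's expected discounted loss zero, and uncorrelatedness makes the variances add, as Hattendorff's theorem asserts:

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical term insurance: benefit 1 at the end of the year of death,
# yearly premium while alive, constant mortality q, discount factor v.
n, q, v = 10, 0.02, 1 / 1.03
A = sum(v**(t + 1) * (1 - q)**t * q for t in range(n))   # EPV of benefit
a = sum(v**t * (1 - q)**t for t in range(n))             # EPV of premiums of 1
prem = A / a                                             # equivalence premium

V = np.zeros(n + 1)                     # prospective reserves, V[n] = 0
for t in range(n - 1, -1, -1):          # Thiele-type backward recursion
    V[t] = v * (q + (1 - q) * V[t + 1]) - prem

N = 100_000
losses = np.zeros((N, n))               # discounted yearly losses
for i in range(N):
    for t in range(n):
        dies = rng.uniform() < q
        endval = 1.0 if dies else V[t + 1]
        losses[i, t] = v**t * (v * endval - V[t] - prem)
        if dies:
            break

print(losses.mean(axis=0).round(4))     # yearly expected losses ~ 0
print(round(losses.sum(axis=1).var(), 6),
      round(losses.var(axis=0).sum(), 6))   # variances add, per Hattendorff
```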
  • Holopainen, Jonathan (2021)
    Traditionally, a safety margin is added to the pricing factors of life insurance: the discount rate is lower than the market rate, and a safety margin is added to mortality. In term life insurance the pricing mortality is higher, and in annuity (pension) insurance lower, than the observed mortality. Because life insurance contracts are often long-lasting, prudence plays a very important role in the profitability of the product and the solvency of the life insurance company. In many cases the law also requires life insurance companies to price their products prudently, so that even in adverse conditions they can still secure the benefits of the policy-holders. Life insurance companies have also developed more complex products that may involve several risk factors whose long-term development can be difficult to predict. Prudent pricing factors mean that, on average, insurance companies accumulate profit over time. This thesis studies the properties of the randomness of the profit or loss accumulating to the insurance company. We leave outside the scope of this work the company's investment returns, operating expenses, and the ways in which companies distribute surplus to the insured as bonuses. The thesis follows Henrik Ramlau-Hansen's article 'The emergence of profit in life insurance', focusing on the expected value of the total profit, the expected profit associated with a given state, and the expected profit accumulated within a given period. The results are also explained so that they are easier to understand. The profit of a life insurance company is defined mathematically using Markov processes. Using the definition, the expected value and standard deviation of the cumulative profit over a given interval are computed. The result is that the expected profit is the sum, over the states of the Markov process, of the differences between the first-order prospective reserve and the second-order retrospective reserve, weighted by the probabilities of being in each state at the end of the interval. Finally, the expected profit of a 10-year single-premium term life insurance contract is computed using the results of the thesis. The same contract was also simulated 10,000,000 times, coming very close to the result given by the formula.
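    The mechanism (prudent first-order pricing against lighter realised second-order mortality) can be sketched for a single-premium term insurance with hypothetical rates; the Monte Carlo check mirrors, in miniature, the simulation described in the abstract:

```python
import numpy as np

rng = np.random.default_rng(6)

# 10-year single-premium term insurance priced with prudent first-order
# mortality q1, while deaths actually follow the lighter second-order
# mortality q2 (hypothetical values).
n, v = 10, 1 / 1.03
q1, q2 = 0.015, 0.010
price = sum(v**(t + 1) * (1 - q1)**t * q1 for t in range(n))
actual = sum(v**(t + 1) * (1 - q2)**t * q2 for t in range(n))
print(price - actual)                  # expected profit per contract

N = 10**6                              # Monte Carlo check
death_year = rng.geometric(q2, size=N) # year of death: 1, 2, ...
payout = np.where(death_year <= n, v**death_year.astype(float), 0.0)
print(price - payout.mean())           # close to the formula above
```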
  • Ersalan, Muzaffer Gür (2019)
    In this thesis, convolutional neural networks (CNNs) and inverse mathematics methods are discussed for automated defect detection in materials that are used for radiation detectors. The first part of the thesis is dedicated to a literature review of the methods used. These include a general overview of neural networks, computer vision algorithms, and inverse mathematics methods such as wavelet transformations and total variation denoising. The Materials and Methods section examines how these methods can be utilized in this problem setting. The Results and Discussion section reveals the outcomes and takeaways from the experiments. The focus of this thesis is on finding the CNN architecture that fits the task best, optimizing that architecture, and discussing how inputs created by inverse mathematics influence the neural network and its performance. The results of this research reveal that the initially chosen RetinaNet is well suited to the task, and that the inverse mathematics methods utilized in this thesis provide useful insights.
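    One of the inverse-mathematics preprocessing steps mentioned, total variation denoising, is available off the shelf; the synthetic "defect" image below is a hypothetical stand-in for the detector data:

```python
import numpy as np
from skimage.restoration import denoise_tv_chambolle

rng = np.random.default_rng(7)

# Synthetic test image: a bright blob (the "defect") on a flat
# background, corrupted with Gaussian noise.
img = np.zeros((64, 64))
img[28:36, 28:36] = 1.0
noisy = img + 0.3 * rng.normal(size=img.shape)

# Total variation denoising preserves edges while removing noise.
denoised = denoise_tv_chambolle(noisy, weight=0.15)
print(np.abs(noisy - img).mean(), np.abs(denoised - img).mean())
```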
  • Lankinen, Petra (2021)
    Under Finnish legislation, non-life insurance companies must be able to assess their solvency. To do so, they must identify and seek to manage the risks associated with their lines of business. The financial risks differ between lines of insurance, as the probability distributions of the result can be very different: in some lines the claims are typically small and many are paid each year, while in others the risks materialise rarely but the claim amounts can be very large. The aim of this thesis is to examine how the solvency calculation of a non-life insurance company can be treated in a theoretical framework. The thesis considers the annual total loss, that is, the difference between the aggregate claims and the premium income received from customers, when the claims are independent and identically distributed. When the distribution of the one-year loss is known, it is in certain cases possible to estimate the probability of ruin in the long run. The thesis proves Cramér's theorem and the Cramér-Lundberg approximation, which, under certain assumptions, yield the best possible upper bound for the ruin probability when the claims follow a light-tailed probability distribution. For heavy-tailed distributions, the estimation of the ruin probability via simulation is explored. To apply the results presented in this thesis, it is useful to know methods for identifying the light- or heavy-tailedness of a distribution from data. To this end, the thesis presents three visual methods for identifying the distribution, together with their theoretical foundations. These methods are also tested on a sample of Pohjola Insurance claims data from 2015. Based on the methods, the claims in both data sets appear to follow some heavy-tailed distribution, but there were significant differences between the data sets.
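    One classic visual diagnostic of the kind the thesis presents is the empirical survival function on log-log axes: Pareto-type samples look roughly linear while light-tailed samples bend down sharply. The sketch below uses simulated data, not the Pohjola claims:

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(8)

n = 5000
light = rng.exponential(scale=1.0, size=n)   # light-tailed sample
heavy = rng.pareto(a=1.5, size=n) + 1.0      # heavy-tailed sample

for data, label in [(light, "exponential"), (heavy, "Pareto(1.5)")]:
    xs = np.sort(data)
    sf = 1.0 - np.arange(1, n + 1) / (n + 1)  # empirical survival function
    plt.loglog(xs, sf, label=label)
plt.xlabel("claim size"); plt.ylabel("P(X > x)"); plt.legend()
plt.savefig("tails.png")
```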
  • Nuutinen, Joonas (2021)
    This thesis treats the concept of the log-optimal portfolio in a continuous market model. A continuous market model consists of instruments whose values are modelled by continuous stochastic processes. Possible investment strategies are described by portfolios, which are multidimensional stochastic processes consisting of the quantities of the instruments. The log-optimal portfolio is defined as the portfolio that at every instant maximises the expected short-term change of the logarithm of the portfolio value. A locally optimal portfolio, in turn, maximises at every instant the expected short-term change of the portfolio value for a chosen variance. The thesis proves that every locally optimal portfolio can be represented as a combination of the log-optimal portfolio and an instrument corresponding to a bank deposit. The same is shown to hold between the log-optimal portfolio and the market portfolio, which consists of the total quantities of the instruments, provided that every investor in the market holds some locally optimal portfolio. The thesis also discusses the minimal market model, a simple model for the value of a market portfolio assumed to be log-optimal. In this connection, a continuous market model for the values of the individual instruments is derived in which the market portfolio holding constant quantities of the instruments is a log-optimal portfolio consistent with the minimal market model.
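    Under the standard multi-asset model with drift vector μ, covariance Σ and short rate r, the log-optimal fractions of wealth are π* = Σ⁻¹(μ − r·1), and every locally optimal portfolio is a scaling of π* plus the bank account; the market parameters below are hypothetical:

```python
import numpy as np

# Hypothetical market parameters.
mu = np.array([0.08, 0.06, 0.05])
Sigma = np.array([[0.040, 0.006, 0.004],
                  [0.006, 0.030, 0.005],
                  [0.004, 0.005, 0.020]])
r = 0.02

pi_log = np.linalg.solve(Sigma, mu - r)  # log-optimal risky fractions
print(pi_log, 1 - pi_log.sum())          # risky weights and bank account

# Two-fund separation, as in the thesis: scaling pi* and putting the
# rest in the bank trades expected growth against variance.
for c in (0.5, 1.0, 1.5):
    pi = c * pi_log
    drift = r + pi @ (mu - r)            # expected short-term growth rate
    var = pi @ Sigma @ pi                # short-term variance
    print(c, round(float(drift), 4), round(float(var), 4))
```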
  • Bernardo, Alexandre (2020)
    In insurance and reinsurance, heavy-tail analysis is used to model insurance claim sizes and frequencies in order to quantify the risk to the insurance company and to set appropriate premium rates. One of the reasons for this application comes from the fact that excess claims covered by reinsurance companies are very large, and so a natural field for heavy-tail analysis. In finance, the multivariate returns process often exhibits heavy-tailed marginal distributions with little or no correlation between the components of the random vector (even though it is a highly correlated process when taking the square or the absolute values of the returns). The fact that vectors which are considered independent by conventional standards may still exhibit dependence of large realizations leads to the use of techniques from classical extreme-value theory, which contains heavy-tail analysis, in estimating an extreme quantile of the profit-and-loss density called value-at-risk (VaR). The industry's need to understand the dependence between random vectors for very large values, as exemplified above, makes the concept of multivariate regular variation a current topic of great interest. This thesis discusses multivariate regular variation, showing that, by having multiple equivalent characterizations and by being quite easy to handle, it is an excellent tool to address the real-world issues raised above. The thesis is structured as follows. First, some mathematical background is covered: the notion of regular variation of a tail distribution in one dimension is introduced, as well as different concepts of convergence of probability measures, namely vague convergence and $\mathbb{M}^*$-convergence. The preference for the latter over the former is briefly discussed. The thesis then proceeds to the main definition of this work, that of multivariate regular variation, which involves a limit measure and a scaling function. It is shown that multivariate regular variation can be expressed in polar coordinates, by replacing the limit measure with a product of a one-dimensional measure with a tail index and a spectral measure. Looking for a second source of regular variation leads to the concept of hidden regular variation, to which a new hidden limit measure is associated. Estimation of the tail index, the spectral measure, and the support of the limit measure is considered next. Some examples of risk vectors are then analyzed, such as risk vectors with independent components and risk vectors with repeated components. The support estimator presented earlier is computed in some examples with simulated data to display its efficiency. However, when the estimator is computed with real-life data (the values of stocks of different companies), it does not seem to suit the sample in an adequate way. The conclusion is drawn that, although the mathematical background of the theory is quite solid, more research needs to be done on applying it to real-life data, namely finding a reliable way to check whether the data stems from a multivariate regularly varying distribution, as well as identifying the support of the limit measure.
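    The one-dimensional tail index that appears in the polar decomposition is commonly estimated with the Hill estimator, sketched here on a simulated Pareto sample (the thesis's estimators and data differ):

```python
import numpy as np

rng = np.random.default_rng(9)

def hill(data, k):
    """Hill estimator of the tail index alpha from the k largest
    order statistics."""
    xs = np.sort(data)[-k:]
    return 1.0 / np.mean(np.log(xs / xs[0]))

# Pareto sample with true tail index alpha = 2 (hypothetical choice):
# if U ~ Uniform(0, 1), then U**(-1/alpha) has tail x**(-alpha).
alpha = 2.0
sample = rng.uniform(size=100_000) ** (-1 / alpha)
for k in (100, 500, 2000):
    print(k, round(hill(sample, k), 3))   # estimates near 2
```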
  • Sanders, Julia (2022)
    In this thesis, we demonstrate the use of machine learning in numerically solving both linear and non-linear parabolic partial differential equations. By using deep learning, rather than more traditional, established numerical methods (for example, Monte Carlo sampling) to calculate numeric solutions to such problems, we can tackle even very high-dimensional problems, potentially overcoming the curse of dimensionality, which occurs when the computational complexity of a problem grows exponentially with the number of dimensions. In Chapter 1, we describe the derivation of the computational problem needed to apply the deep learning method in the case of the linear Kolmogorov PDE. We start with an introduction to a few core concepts in stochastic analysis, particularly stochastic differential equations, and define the Kolmogorov backward equation. We describe how the Feynman-Kac theorem implies that the solution to the linear Kolmogorov PDE is a conditional expectation, and therefore how the numerical approximation of such a PDE can be turned into a minimisation problem. Chapter 2 discusses the key ideas behind the term deep learning; specifically, what a neural network is and how we can apply it to solve the minimisation problem from Chapter 1. We describe the key features of a neural network, the training process, and how parameters can be learned through gradient-descent-based optimisation. We summarise the numerical method in Algorithm 1. In Chapter 3, we implement a neural network and train it to solve a 100-dimensional linear Black-Scholes PDE with underlying geometric Brownian motion, and similarly with correlated Brownian motion. We also illustrate an example with a non-linear auxiliary Itô process: the stochastic Lorenz equation. We additionally compute a solution to the geometric Brownian motion problem in one dimension, and compare the accuracy of the solution found by the neural network with those found by two other numerical methods, Monte Carlo sampling and finite differences, as well as with the solution found using the implicit formula. In two dimensions, the solution of the geometric Brownian motion problem is compared against a solution obtained by Monte Carlo sampling, which shows that the neural network approximation falls within the 99% confidence interval of the Monte Carlo estimate. We also investigate the impact of the frequency of re-sampling training data and of the batch size on the rate of convergence of the neural network. Chapter 4 describes the derivation of the equivalent minimisation problem for solving a Kolmogorov PDE with non-linear coefficients, where we discretise the PDE in time and derive an approximate Feynman-Kac representation on each time step. Chapter 5 demonstrates the method on an example of a non-linear Black-Scholes PDE and a Hamilton-Jacobi-Bellman equation. The numerical examples are based on the code by Beck et al. in their papers "Solving the Kolmogorov PDE by means of deep learning" and "Deep splitting method for parabolic PDEs", and are written in the Julia programming language, with use of the Flux library for machine learning in Julia. The code used to implement the method can be found at https://github.com/julia-sand/pde_approx
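    A one-dimensional miniature of the linear method (after Beck et al., though in Python rather than the thesis's Julia): sample initial points, simulate the terminal value of the SDE, and regress a network on the payoff, so that by Feynman-Kac the minimiser approximates u(x) = E[g(S_T) | S_0 = x]:

```python
import torch

torch.manual_seed(0)

# Learn u(x) = E[g(S_T) | S_0 = x] for geometric Brownian motion by
# regressing a network on simulated terminal payoffs.
mu, sigma, T, strike = 0.05, 0.2, 1.0, 1.0
g = lambda s: torch.clamp(s - strike, min=0.0)   # call-style payoff

net = torch.nn.Sequential(
    torch.nn.Linear(1, 64), torch.nn.ReLU(),
    torch.nn.Linear(64, 64), torch.nn.ReLU(),
    torch.nn.Linear(64, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for _ in range(2000):
    x = 0.5 + torch.rand(512, 1)                 # initial values in [0.5, 1.5]
    w = torch.randn(512, 1)                      # exact GBM terminal value
    sT = x * torch.exp((mu - 0.5 * sigma**2) * T + sigma * T**0.5 * w)
    loss = ((net(x) - g(sT))**2).mean()          # L2 regression objective
    opt.zero_grad(); loss.backward(); opt.step()

# Monte Carlo reference at x = 1 for comparison
sT = torch.exp((mu - 0.5 * sigma**2) * T + sigma * T**0.5 * torch.randn(10**6))
print(net(torch.ones(1, 1)).item(), g(sT).mean().item())
```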
  • Joutsela, Aili (2023)
    In my mathematics master's thesis we dive into the wave equation and its inverse problem, and try to solve it with neural networks we create in Python. There are different types of artificial neural networks. The basic structure is that there are several layers and each layer contains neurons. The input goes to all the neurons in the first layer, the neurons do calculations and send the output to all the neurons in the next layer. In this way the input data goes through all the neurons and changes, and the last layer outputs this changed data. In our code we use an operator recurrent neural network. The biggest difference between the standard neural network and the operator recurrent neural network is that instead of matrix-vector multiplications we use matrix-matrix multiplications in the neurons. We teach the neural networks for a certain number of rounds with training data and then we check how well they learned with test data. It is up to us how long and how far we teach the networks. An easy criterion would be that a neural network has learned the inversion completely, but that takes a lot of time and might never happen. So we settle for the situation where the error, the difference between the actual inverse and the inverse calculated by the neural network, is as small as we want. We start the coding by studying matrix inversion. The idea is to teach the neural networks to invert a given 2-by-2 real-valued matrix. First we deal with networks that do not have the activation function ReLU in their layers. We seek a learning rate, a small constant, that speeds up the learning of a neural network the most. After this we start comparing networks without ReLU layers to networks with ReLU layers. The hypothesis is that ReLU helps neural networks learn quicker. After this we study the one-dimensional wave equation and calculate its general form of solution. The inverse problem of the wave equation is to recover the wave speed c(x) when we have boundary terms. Inverse problems in general do not often have a unique solution, but in real life, if we have measured data and some additional a priori information, it is possible to find a unique solution. In our case we know that the inverse problem of the wave equation has a unique solution. When coding the inverse problem of the wave equation we use the same approach as with matrix inversion. First we seek the best learning rate and then start to compare neural networks with and without ReLU layers. The hypothesis once again is that ReLU supports the learning of the neural networks. This turns out to be true, and happens more clearly with the wave equation than with matrix inversion. All the teaching was run on one computer. There is a chance of getting even better results if a more powerful computer is used.
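    A minimal version of the matrix-inversion experiment, as a plain feedforward ReLU network in PyTorch (the thesis also uses operator recurrent networks, which this sketch does not implement); the hyperparameters are hypothetical:

```python
import torch

torch.manual_seed(1)

# Train a feedforward ReLU network to map a 2x2 matrix, flattened to
# 4 numbers, to its flattened inverse.
def batch(n):
    A = torch.randn(n, 2, 2) + 2 * torch.eye(2)  # bias away from singular
    return A.reshape(n, 4), torch.linalg.inv(A).reshape(n, 4)

net = torch.nn.Sequential(
    torch.nn.Linear(4, 128), torch.nn.ReLU(),
    torch.nn.Linear(128, 128), torch.nn.ReLU(),
    torch.nn.Linear(128, 4))
# the learning rate below is the kind of constant the thesis tunes
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for _ in range(3000):
    x, y = batch(256)
    loss = ((net(x) - y)**2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

x, y = batch(1000)                               # held-out test error
print(((net(x) - y)**2).mean().item())
```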
  • Laarne, Petri (2021)
    The nonlinear Schrödinger equation is a partial differential equation with applications in optics and plasma physics. It models the propagation of waves in presence of dispersion. In this thesis, we will present the solution theory of the equation on a circle, following Jean Bourgain’s work in the 1990s. The same techniques can be applied in higher dimensions and with other similar equations. The NLS equation can be solved in the general framework of evolution equations using a fixed-point method. This method yields well-posedness and growth bounds both in the usual L^2 space and certain fractional-order Sobolev spaces. The difficult part is achieving good enough bounds on the nonlinear term. These so-called Strichartz estimates involve precise Fourier analysis in the form of dyadic decompositions and multiplier estimates. Before delving into the solution theory, we will present the required analytical tools, chiefly related to the Fourier transform. This chapter also describes the complete solution theory of the linear equation and illustrates differences between unbounded and periodic domains. Additionally, we develop an invariant measure for the equation. Invariant measures are relevant in statistical physics as they lead to useful averaging properties. We prove that the Gibbs measure related to the equation is invariant. This measure is based on a Gaussian measure on the relevant function space, the construction and properties of which we briefly explain.
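    Although the thesis is analytic, the periodic evolution it studies is easy to visualise with a split-step Fourier scheme, which alternates the exact linear flow in Fourier space with the exact pointwise nonlinear flow; both substeps conserve the L² norm, matching the conservation underlying the L² theory. The smooth initial datum below is hypothetical:

```python
import numpy as np

# Split-step Fourier illustration of the periodic cubic NLS
#   i u_t + u_xx + |u|^2 u = 0.
n, dt, steps = 256, 1e-4, 5000
x = 2 * np.pi * np.arange(n) / n
k = np.fft.fftfreq(n, d=1.0 / n)            # integer frequencies on the circle

u = np.exp(1j * x) + 0.5 * np.exp(-2j * x)  # smooth initial datum
l2_0 = np.sqrt(np.mean(np.abs(u)**2))
for _ in range(steps):
    u = np.fft.ifft(np.exp(-1j * k**2 * dt) * np.fft.fft(u))  # linear flow
    u = u * np.exp(1j * np.abs(u)**2 * dt)  # pointwise nonlinear phase rotation
print(l2_0, np.sqrt(np.mean(np.abs(u)**2)))  # L^2 norm is conserved
```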