Browsing by study line "Matematik och tillämpande matematik"
Now showing items 1-20 of 38
-
(2024) A presentation of the basic tools of traditional audio deconvolution and a supervised NMF algorithm to enhance a filtered and noisy speech signal.
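For orientation, a minimal sketch of supervised NMF for speech enhancement (illustrative only; the dictionaries, dimensions, and random data below are hypothetical, and the thesis's algorithm may differ in detail):

```python
# Supervised NMF sketch: dictionaries W_speech and W_noise are assumed to
# have been trained on clean speech and noise; only the activations H are
# fitted to the noisy spectrogram, via Euclidean multiplicative updates.
import numpy as np

def nmf_activations(V, W, n_iter=200, eps=1e-9):
    """Fit H >= 0 in V ~ W H with W held fixed."""
    H = np.random.default_rng(0).random((W.shape[1], V.shape[1]))
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + eps)   # multiplicative update
    return H

# Hypothetical magnitude spectrograms: 257 frequency bins, 100 frames.
V_noisy = np.abs(np.random.default_rng(1).normal(size=(257, 100)))
W_speech = np.abs(np.random.default_rng(2).normal(size=(257, 20)))  # "trained"
W_noise = np.abs(np.random.default_rng(3).normal(size=(257, 10)))   # "trained"

W = np.hstack([W_speech, W_noise])
H = nmf_activations(V_noisy, W)
V_speech = W_speech @ H[:20]         # speech part of the reconstruction
mask = V_speech / (W @ H + 1e-9)     # soft mask to apply to the noisy signal
print(mask.shape)
```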
-
(2024) This thesis treats the BFGS method, an iterative optimization method. It is one of the quasi-Newton methods and is used in unconstrained nonlinear optimization. Quasi-Newton methods approximate the Hessian matrix appearing in Newton's method, whose computation is often difficult or too expensive. Chapter 2 of the thesis reviews basics of optimization together with some other prerequisites. Chapter 3 treats line search methods, optimization methods in which a search direction is determined first and a step length after that. We first discuss the choice of a suitable step length and introduce the Wolfe conditions, after which the convergence of line search methods is treated in general. Finally, Newton's method and quasi-Newton methods are discussed, and it is proved that convergence is quadratic for Newton's method and superlinear for quasi-Newton methods. Chapter 4 treats the BFGS method, which approximates the inverse of the Hessian matrix. The BFGS formula is first derived, after which the implementation of the BFGS algorithm is discussed. It is then proved that the method converges if the objective function is smooth and strictly convex, and that the convergence is superlinear. The behaviour of the method in practice is also studied through examples. Finally, Chapter 5 introduces the limited-memory BFGS method, which does not require storing a full matrix and is therefore particularly suited to solving large problems.
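A minimal sketch of the BFGS inverse-Hessian update described above (the backtracking line search and the Rosenbrock test function are illustrative choices, not taken from the thesis):

```python
import numpy as np

def bfgs(f, grad, x0, tol=1e-8, max_iter=200):
    n = len(x0)
    x = np.asarray(x0, dtype=float)
    H = np.eye(n)                        # inverse-Hessian approximation
    g = grad(x)
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        p = -H @ g                       # quasi-Newton search direction
        t = 1.0                          # backtracking (Armijo) line search
        while f(x + t * p) > f(x) + 1e-4 * t * g @ p:
            t *= 0.5
        s = t * p                        # step taken
        x_new = x + s
        g_new = grad(x_new)
        y = g_new - g                    # change in gradient
        rho = 1.0 / (y @ s)
        I = np.eye(n)
        # BFGS update of the inverse Hessian:
        H = (I - rho * np.outer(s, y)) @ H @ (I - rho * np.outer(y, s)) \
            + rho * np.outer(s, s)
        x, g = x_new, g_new
    return x

# Example: minimize the Rosenbrock function; the iterates approach (1, 1).
f = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
grad = lambda x: np.array([-2 * (1 - x[0]) - 400 * x[0] * (x[1] - x[0]**2),
                           200 * (x[1] - x[0]**2)])
print(bfgs(f, grad, [-1.2, 1.0]))
```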
-
(2023) Both descriptive combinatorics and distributed algorithms are interested in solving graph problems with certain local constraints. This connection is not just superficial, as Bernshteyn showed in his seminal 2020 paper. This thesis focuses on that connection by restating Bernshteyn's results, showing that a common theory of locality connects these fields. We also restate the results that connect these findings to continuous dynamics: solving a colouring problem on the free part of the subshift 2^Γ is equivalent to the existence of a fast LOCAL algorithm solving this problem on finite sections of the Cayley graph of Γ. We also restate Bernshteyn's result on the continuous version of the Lovász Local Lemma (LLL). The LLL is a powerful probabilistic tool used throughout combinatorics and distributed computing; the continuous version shows that, under certain topological constraints, the lemma produces continuous solutions.
-
(2022) This thesis studies equilibrium in a continuous-time overlapping generations (OLG) model. OLG models are used in economics to study the effect of demographics and life-cycle behavior on macroeconomic variables such as the interest rate and aggregate investment. These models are typically set in discrete time, but continuous-time versions have also received attention recently for their desirable properties. Competitive equilibrium in a continuous-time OLG model can be represented as a solution to an integral equation, which is linear in the special case of a logarithmic utility function. This thesis provides necessary and sufficient conditions under which the linear equation is a convolution-type integral equation and derives a distributional solution using the Fourier transform. We also show that the operator norm of the integral operator is not in general less than one, so the equation cannot be solved using a Neumann series. However, in a special case where the operator norm equals one, the distributional solution is characterized by a geometric series on the Fourier side.
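For orientation, the Fourier-side mechanics behind this kind of argument can be sketched as follows (a schematic convolution equation, not the thesis's exact one):

```latex
% A convolution-type linear integral equation for the unknown u:
\[
  u(t) = f(t) + (k * u)(t)
       = f(t) + \int_{\mathbb{R}} k(t-s)\, u(s)\, \mathrm{d}s .
\]
% The Fourier transform turns convolution into multiplication, giving the
% distributional solution
\[
  \hat{u}(\xi) = \frac{\hat{f}(\xi)}{1 - \hat{k}(\xi)} ,
\]
% which agrees with the Neumann series \sum_{n \ge 0} \hat{k}^{\,n} \hat{f}
% only when that series converges, e.g. when the operator norm is below one.
```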
-
(2024) The presence of 1/f-type noise in a variety of natural processes and in human cognition is a well-established fact, and there are many methods for analysing it. Fractal analysis of time series data has long been limited by the inaccuracy of results for small and finite datasets. The development of artificial intelligence and machine learning algorithms in recent years has opened the door to modeling and forecasting phenomena that we do not yet fully understand. In this thesis, principal component analysis is used to detect 1/f noise patterns in human-played drum beats that are typical of a style of playing. In the future, this type of analysis could be used to construct drum machines that mimic the fluctuations in timing associated with a certain characteristic of human-played music, such as genre, era, or musician. This study investigates the link between 1/f-noisy patterns of timing fluctuations and the technical skill level of the musician. Samples of isolated drum tracks are collected and split into two groups representing either a low or a high level of technical skill. Time series vectors are then constructed by hand to capture the actual timing of the human-played beats. Difference vectors are created for analysis by using the least-squares method to find the corresponding "perfect" beat and subtracting it from the collected data; the resulting data illustrate the deviation of the actual playing from the beat according to a metronome. A principal component analysis algorithm is then run on the power spectra of the difference vectors to detect points of correlation within different subsets of the data, with the focus on the two groups mentioned above. Finally, we attempt to fit a 1/f noise model to the principal component scores of the power spectra. The results support our hypothesis, although their interpretation at this scale appears subjective. We find that the principal component of the power spectra of the more skilled musicians' samples can be approximated by the function $S=1/f^{\alpha}$ with $\alpha\in(0,2)$, which is indicative of fractal noise. Although the less skilled group's samples do not appear to contain 1/f-noisy fluctuations, its subsets quite consistently do; the opposite holds for the first-mentioned dataset. All in all, we find that a much larger dataset is required to construct a reliable model of human error in recorded music, but with the small amount of data in this study we show that we can indeed detect and isolate rhythmic characteristics that define a certain style of playing drums.
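A minimal sketch of fitting $S = c/f^{\alpha}$ to a power spectrum by least squares on the log-log scale (the synthetic stand-in data below are not the thesis's timing measurements; the full pipeline with PCA is more involved):

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic stand-in: cumulative white noise is Brownian motion, whose
# spectrum decays roughly like 1/f^2, so we expect alpha near 2.
timing_dev = np.cumsum(rng.normal(size=512))

spectrum = np.abs(np.fft.rfft(timing_dev))**2
freqs = np.fft.rfftfreq(timing_dev.size, d=1.0)[1:]   # drop f = 0
spectrum = spectrum[1:]

# log S = log c - alpha * log f, i.e. a linear regression on log-log axes.
slope, intercept = np.polyfit(np.log(freqs), np.log(spectrum), 1)
alpha = -slope
print(f"alpha = {alpha:.2f}")   # alpha in (0, 2) is read as fractal noise
```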
-
(2023) The purpose of this work is to present the Fukushima decomposition, which can be used as a generalization of Itô's lemma. The first chapter covers the foundations of stochastic analysis. The work proceeds from these foundations to Markov processes and related concepts: we go through the notion of an additive functional and how it relates to the processes under consideration, and for martingales the basic concepts are reviewed. We then move on to Itô's lemma and its proof. Itô's lemma is an important tool in economics, especially when working with asset prices and stock markets, as it lays the foundation for how asset prices can be defined by means of Brownian motion. The same chapter also treats other useful tools of stochastic analysis. One such tool is the Doob-Meyer decomposition for martingales and predictable processes, an important tool when moving to a higher level with stochastic equations. The first chapter ends with Sobolev spaces, Dirichlet spaces and Dirichlet forms, which prepare the reader for the next chapter, where one of the main theorems of the work is treated. The second chapter deals with the energy of additive functionals and martingale additive functionals and with Radon measures. After these, we turn to the generalization of Itô's lemma. The generalization rests on the possibility of taking a "weaker" version of the theorem, in which the strongest conditions and assumptions need not hold. This is important because Itô's lemma requires twice continuous differentiability, which is far from always satisfied for stochastic processes; the benefits of Itô's lemma can thus be obtained under lighter conditions. Finally we treat the Fukushima decomposition, which is useful for processes that are semimartingales. With the Fukushima decomposition one can handle cases in which the assumptions of the previously treated theorems fail, and the decomposition is constructed with the help of the theorem presented earlier.
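For reference, the classical Itô formula whose C² requirement the above generalizations relax (standard statement, not quoted from the thesis):

```latex
% Itô's lemma for f twice continuously differentiable and X a
% continuous semimartingale:
\[
  f(X_t) = f(X_0) + \int_0^t f'(X_s)\, \mathrm{d}X_s
         + \tfrac{1}{2} \int_0^t f''(X_s)\, \mathrm{d}\langle X \rangle_s .
\]
% The Fukushima decomposition instead writes u(X_t) - u(X_0) as a
% martingale additive functional of finite energy plus a zero-energy
% part, allowing u of lower regularity.
```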
-
(2022) The focus of this work is efficient sampling from a given target distribution using Markov chain Monte Carlo (MCMC). This work presents the No-U-Turn Sampler Lagrangian Monte Carlo with the Monge metric: an efficient MCMC sampler with an adaptive metric and fast computations, and with no need to hand-tune the hyperparameters of the algorithm, since the parameters are adapted automatically by extending the No-U-Turn Sampler (NUTS) to Lagrangian Monte Carlo (LMC). The work begins with an introduction to concepts from differential geometry. The Monge metric is then constructed step by step, carefully derived from differential geometry, giving a formulation that is not restricted to LMC but applicable to any problem where a Riemannian metric of the target function comes into play. The main idea of the metric is that it naturally encodes the geometric properties of the manifold obtained by embedding the graph of the function in a higher-dimensional Euclidean space. Hamiltonian Monte Carlo (HMC) and LMC are MCMC samplers that operate on differential-geometric manifolds. We introduce the LMC sampler as an alternative to HMC. HMC assumes that the metric structure of the manifold, encoded in the Riemannian metric, stays constant, whereas LMC allows the metric to vary with position and can thus sample from regions of the target distribution that are problematic for HMC. The choice of metric affects the running time of LMC; by including the Monge metric in LMC, the algorithm becomes computationally faster. By generalizing the No-U-Turn Sampler to LMC, we build the NUTS-LMC algorithm, which estimates the hyperparameters automatically. The NUTS algorithm is constructed with a distance-based stopping criterion, which can be replaced by other stopping criteria. Additionally, we run LMC-Monge and NUTS-LMC on a series of traditionally challenging target distributions, comparing the results with HMC and NUTS-HMC. The main contribution of this work is the extension of NUTS to a generalized NUTS that is applicable to LMC. We find that LMC with the Monge metric explores regions of the target distribution that HMC is unable to reach. Furthermore, generalized NUTS eliminates the need to choose the hyperparameters. NUTS-LMC is ready to use for scientific applications, since the user only needs to specify a twice-differentiable target function, making it friendly to someone who does not wish to know the theoretical and technical details beneath the sampler.
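For orientation only, a single leapfrog step of plain HMC with a constant Euclidean metric, the simplest relative of the samplers above (LMC and NUTS-LMC replace this with position-dependent metric dynamics; the target and step size here are illustrative):

```python
import numpy as np

def leapfrog(grad_log_p, q, p, eps):
    """One leapfrog step for H(q, p) = -log p(q) + |p|^2 / 2."""
    p = p + 0.5 * eps * grad_log_p(q)   # half step in momentum
    q = q + eps * p                     # full step in position
    p = p + 0.5 * eps * grad_log_p(q)   # half step in momentum
    return q, p

# Example: standard normal target, so grad log p(q) = -q.
grad_log_p = lambda q: -q
q, p = np.array([1.0]), np.array([0.5])
for _ in range(10):
    q, p = leapfrog(grad_log_p, q, p, eps=0.1)
print(q, p)   # the pair moves along a near-energy-conserving orbit
```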
-
(2022) In this thesis, we explore financial risk measures in the context of heavy-tailed distributions. Heavy-tailed distributions and their different classes are defined mathematically in this thesis, but in more general terms, heavy-tailed distributions are distributions with a tail or tails heavier than the exponential distribution; in other words, their tails go to zero more slowly than the exponential distribution. Heavy-tailed distributions are much more common than we tend to think and can be observed in everyday situations. Most extreme events, such as large natural phenomena like major floods, are good examples of heavy-tailed phenomena. Nevertheless, we often expect the phenomena surrounding us to be normally distributed. This probably arises from the beauty and effortlessness of the central limit theorem, which explains why the normal distribution appears all around us in natural phenomena. The normal distribution is light-tailed and essentially assigns less probability to extreme events than a heavy-tailed distribution does. When we do not understand heavy tails, we underestimate the probability of extreme events such as large earthquakes, catastrophic financial losses or major insurance claims. Understanding heavy-tailed distributions also plays a key role in measuring financial risks. In finance, risk measurement is important for all market participants, and using correct distributional assumptions about the phenomena in question ensures good results and appropriate risk management. Value-at-Risk (VaR) and expected shortfall (ES) are two of the best-known financial risk measures and the focus of this thesis. Both measures deal with the distribution, and more specifically the tail, of the loss distribution: Value-at-Risk measures the risk of a loss, whereas ES describes the size of a loss exceeding the VaR. Since both risk measures focus on the tail of the distribution, mistaking a heavy-tailed phenomenon for a light-tailed one can lead to drastically wrong conclusions. The mean excess function is an important mathematical concept closely tied to VaR and ES, since the expected shortfall is mathematically a mean excess function. Examined in the context of heavy tails, the mean excess function exhibits very interesting features and plays a key role in identifying heavy tails. This thesis aims to answer what heavy-tailed distributions are and why they are so important, especially in the context of risk management and financial risk measures. Chapter 2 provides key definitions for the reader. In Chapter 3, the different classes of heavy-tailed distributions are defined and described. In Chapter 4, the mean excess function and the closely related hazard rate function are presented. In Chapter 5, risk measures are discussed on a general level, Value-at-Risk and expected shortfall are presented, and the presence of heavy tails in the context of risk measures is explored. Finally, in Chapter 6, simulations on the topics of the previous chapters are shown to shed a more practical light on the presentation.
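A minimal sketch of empirical VaR and expected shortfall for a sample of losses (illustrative conventions and synthetic data; the thesis works with the mathematical definitions):

```python
import numpy as np

def var_es(losses, alpha=0.99):
    losses = np.asarray(losses, dtype=float)
    var = np.quantile(losses, alpha)       # Value-at-Risk: alpha-quantile
    es = losses[losses >= var].mean()      # ES: average loss beyond VaR
    return var, es

rng = np.random.default_rng(2)
light = rng.normal(0, 1, 100_000)            # light-tailed losses
heavy = rng.standard_t(df=3, size=100_000)   # heavier-tailed losses
for name, sample in [("normal", light), ("t(3)", heavy)]:
    var, es = var_es(sample)
    print(f"{name}: VaR = {var:.2f}, ES = {es:.2f}")
# The heavy-tailed sample shows a visibly larger gap between ES and VaR,
# the kind of effect a light-tailed model would underestimate.
```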
-
(2022) In recent years, there has been great interest in modelling financial markets using fractional Brownian motions. Studies have noted that ordinary diffusion-based stochastic volatility models cannot reproduce certain stylized facts observed in financial markets, such as the at-the-money (ATM) volatility skew tending to infinity at short maturities. Rough stochastic volatility models, in which the spot volatility process is driven by a fractional Brownian motion, can reproduce these effects. Although the use of long-memory processes in finance has been advocated since the 1970s, it has taken until now for fractional Brownian motion to gain widespread attention. This thesis serves as an introduction to the subject. We begin by presenting the mathematical definition of fractional Brownian motion and its basic mathematical properties. Most importantly, we show that fractional Brownian motion is not a semimartingale, which means that the theory of Itô calculus cannot be applied to stochastic integrals with fractional Brownian motion as the integrator. We also present important representations of fractional Brownian motion as a moving-average process of a Brownian motion. In the subsequent chapter, we show that a Wiener integral with respect to fractional Brownian motion can be defined as a Wiener integral with respect to Brownian motion with a transformed integrand. We also present divergence-type integrals with respect to fractional Brownian motion and an Itô-type formula for fractional Brownian motion. In the last chapter, we introduce rough volatility. We derive the so-called rough Bergomi model, which can be seen as an extension of the Bergomi stochastic volatility model. We then show that for a general stochastic volatility model there is an exact analytical expression for the ATM volatility skew, defined as the derivative of the implied volatility with respect to the strike price, evaluated at the money. We then present an expression for the short-time limit of the ATM volatility skew under general assumptions, which shows that in order to reproduce the observed short-time limit of infinity, the volatility must be driven by a fractional process. We conclude the thesis by comparing the rough Bergomi model to the SABR and Heston stochastic volatility models.
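A minimal sketch (not from the thesis) of sampling fractional Brownian motion on a grid via a Cholesky factorization of its covariance, $\mathrm{Cov}(B_s, B_t) = \tfrac{1}{2}(s^{2H} + t^{2H} - |t-s|^{2H})$; grid size and Hurst parameter are illustrative:

```python
import numpy as np

def fbm_sample(n_steps, H, T=1.0, rng=None):
    rng = rng or np.random.default_rng()
    t = np.linspace(T / n_steps, T, n_steps)
    s, u = np.meshgrid(t, t)
    cov = 0.5 * (s**(2 * H) + u**(2 * H) - np.abs(s - u)**(2 * H))
    L = np.linalg.cholesky(cov)          # exact but O(n^3) method
    return t, L @ rng.normal(size=n_steps)

t, path = fbm_sample(500, H=0.1)         # small H: the "rough" regime
print(path[:5])
```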
-
(2023) This thesis belongs to algebraic topology, and more precisely to the study of homology and cohomology. The goal of the thesis is to prove the Künneth formula for the cohomology of a product; to this end, homology is first introduced and cohomology is then derived from it by dualization. Both homology and cohomology are presented in their singular form. After the introduction, the thesis begins by presenting the basics of category theory. The chapter on categories gives examples of categories that are used throughout the thesis. After introducing the notion of a category, we proceed to define maps that pass from one category to another, namely functors. Functors are divided into covariant and contravariant ones depending on whether they preserve the direction of morphisms. Particular attention is given to the Hom functor, whose contravariant form is used later to construct cohomology. Having treated functors, we can form maps between them, which motivates the notion of a natural transformation. The last topic of the second chapter is exact sequences. The second chapter thus collects the prerequisites needed to move on to homology and cohomology. The third chapter goes through the machinery of homology and cohomology, presented mainly in singular form. The basic notions of homology are covered first, after which we move to singular homology. In this context we define, among other things, the simplex, in order to lay out the foundations of singular homology. From singular homology we proceed to singular cohomology, which is obtained from homology via the Hom functor introduced earlier. The section on singular cohomology ends by introducing an additional operation on cohomology groups, the cup product. The final chapter treats the Künneth formula itself and its proof, along with the remaining prerequisites for understanding the proof that have not come up in the earlier chapters. The thesis ends with the proof of the Künneth formula.
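For orientation, the shape of the cohomology Künneth formula over a field (a standard statement; the thesis's exact hypotheses and coefficient assumptions may differ):

```latex
% Künneth formula for cohomology with coefficients in a field k,
% assuming e.g. that H^*(Y;k) is of finite type:
\[
  H^n(X \times Y; k) \;\cong\; \bigoplus_{p+q=n} H^p(X; k) \otimes_k H^q(Y; k),
\]
% with the isomorphism induced by the cross product, i.e. the cup
% product of the pullbacks along the two projections.
```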
-
(2023) In the Finnish statutory earnings-related pension system, the pensions of the self-employed and of wage earners are organised in separate schemes. The terms of these insurance policies are largely similar, but the scheme for the self-employed has, especially in recent years, produced a considerably worse result. One significant factor in the profitability of pension insurance is the adequacy of the mortality model, and the aim of this thesis is to find out whether possible mortality differences explain the diverging profitability of YEL (self-employed) and TyEL (employee) insurance. To address the mortality estimation problem, the first part of the thesis presents general theory of survival analysis. Using counting processes, martingales and Lebesgue-Stieltjes integrals, we define the Nelson-Aalen estimator of the cumulative hazard. In the second part we apply the tools of the first part to mortality data of the Finnish Centre for Pensions (Eläketurvakeskus) from the years 2007-2020. In this way we assess the adequacy of the theoretical mortality model used in TyEL and YEL insurance as well as the mortality differences between the insured populations. We find that the mortality model describes realised mortality well, and that YEL mortality is moderately lower than TyEL mortality. A more important role in the profitability difference is, however, played by the difference in the age structures of the populations.
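A minimal sketch of the Nelson-Aalen estimator for right-censored data (illustrative toy data, not the thesis's implementation or the Eläketurvakeskus dataset):

```python
import numpy as np

def nelson_aalen(times, events):
    """times: observed times; events: 1 = death observed, 0 = censored.
    Returns distinct event times and cumulative hazard estimates."""
    times = np.asarray(times, dtype=float)
    events = np.asarray(events, dtype=int)
    t_grid = np.unique(times[events == 1])
    H, h = [], 0.0
    for t in t_grid:
        at_risk = np.sum(times >= t)               # individuals still at risk
        deaths = np.sum((times == t) & (events == 1))
        h += deaths / at_risk                      # Nelson-Aalen increment
        H.append(h)
    return t_grid, np.array(H)

t, H = nelson_aalen([2, 3, 3, 5, 7, 8, 8, 9], [1, 1, 0, 1, 0, 1, 1, 0])
print(dict(zip(t, np.round(H, 3))))
```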
-
(2022) Large deviations theory is a branch of probability theory which studies the exponential decay of probabilities of extremely rare events in sequences of probability distributions. The theory originates with actuaries studying risk and insurance from a mathematical perspective, but it has since become a field of study in its own right and is no longer as tightly linked to insurance mathematics. Large deviations theory is nowadays frequently applied in various fields, such as information theory, queuing theory, statistical mechanics and finance. The connection to insurance mathematics has not grown obsolete, however, and these new results can also be applied to develop new results in the context of insurance. This thesis is split into two main sections. The first presents basic concepts from large deviations theory as well as the Gärtner-Ellis theorem, the first main topic of this thesis, and then provides a fairly detailed proof of this theorem. The Gärtner-Ellis theorem is an important result in large deviations theory, as it gives upper and lower bounds on asymptotic probabilities while allowing for some dependence structure in the sequence of random variables. The second main topic is the presentation of two large deviations results developed by H. Nyrhinen, concerning the random time of ruin as a function of the given starting capital. This section begins by introducing the specifics of the insurance setting of Nyrhinen's work as well as the ruin problem, a central topic of risk theory. The main results and the corresponding proofs follow; they rely in part on convex analysis and on a continuous version of the Gärtner-Ellis theorem. Recommended preliminary knowledge: Probability Theory, Risk Theory.
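For orientation, the shape of the Gärtner-Ellis statement in one dimension (standard form; see the thesis for the precise regularity hypotheses):

```latex
% With the scaled cumulant generating function
%   \Lambda(\lambda) = \lim_{n\to\infty} \tfrac{1}{n} \log E\,[e^{\,n\lambda X_n}]
% assumed to exist and be suitably regular, the rate function is the
% Legendre-Fenchel transform
\[
  I(x) = \sup_{\lambda \in \mathbb{R}} \bigl( \lambda x - \Lambda(\lambda) \bigr),
\]
% and for suitable sets A one has the large deviations bounds
\[
  -\inf_{x \in A^{\circ}} I(x)
  \;\le\; \liminf_{n} \tfrac{1}{n} \log P(X_n \in A)
  \;\le\; \limsup_{n} \tfrac{1}{n} \log P(X_n \in A)
  \;\le\; -\inf_{x \in \bar{A}} I(x).
\]
```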
-
(2022) The main purpose of this work is to present the Lindemann-Weierstrass theorem together with its proof. For the proof we need various facts about algebraic numbers, transcendental numbers and, in this work, Galois groups and Galois extensions. After the proof of the Lindemann-Weierstrass theorem, consequences of the theorem are presented. Throughout history, mathematicians have wanted to divide numbers into different sets, such as the integers and the complex numbers. Numbers can also be divided into transcendental and algebraic numbers. A number is called algebraic if it is a root of some polynomial with integer coefficients; if a number is not algebraic, it is transcendental. For a long time, mathematicians faced the problem of how to prove the transcendence of a number, and the Lindemann-Weierstrass theorem is a solution to this problem. The theorem states: let α1, α2, . . . , αn be distinct algebraic numbers that are linearly independent over the rational numbers. Then the numbers e^α1, e^α2, . . . , e^αn are algebraically independent over the algebraic numbers. With the main theorem of the work one can thus prove the transcendence of certain numbers. Such numbers include, for example, Euler's number e and π, whose transcendence I prove at the end of the work using the theorem. The main source of this work uses Galois groups and extensions in the proof of the theorem, which is why I also treat them in this work.
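As a worked illustration of how the theorem yields these consequences (standard deductions, not quoted from the thesis):

```latex
% Standard corollaries of the Lindemann--Weierstrass theorem:
\begin{itemize}
  \item $e$ is transcendental: take $n = 1$ and $\alpha_1 = 1$, which is
        linearly independent over $\mathbb{Q}$; then $e^{1} = e$ is
        algebraically independent over the algebraic numbers.
  \item $\pi$ is transcendental: if $\pi$ were algebraic, then so would
        $i\pi$ be, and the theorem with $\alpha_1 = i\pi$ would force
        $e^{i\pi} = -1$ to be transcendental, contradicting Euler's
        identity. Hence $\pi$ is transcendental.
\end{itemize}
```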
-
(2022) This thesis gives a theoretical justification for the claim that the return of an exchange-traded stock is lognormally distributed, provided it satisfies conditions of a certain type. Assuming that the stock return satisfies these conditions, we can prove with the Lindeberg-Feller limit theorem that the return approaches a lognormal distribution as the stock is traded more and more frequently during the period under consideration. We test empirically, using the Kolmogorov-Smirnov test, whether the stock returns of Coca-Cola and Freeport-McMoran follow a lognormal distribution. These stocks represent different industries, so they behave differently; moreover, both are very liquid and frequently traded. The tests show that we cannot rule out that Coca-Cola's stock return follows a lognormal distribution, but for Freeport-McMoran we can. The literature often assumes that stock returns are lognormally distributed; for example, the original Black-Scholes model makes this assumption. How stock returns are distributed affects how the stock derivatives modelled by the Black-Scholes model are priced, and this pricing model may be used in companies' accounting. The Black-Scholes model, in which the stock return is lognormally distributed, is presented in the thesis.
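A minimal sketch of the testing idea (synthetic prices, not the Coca-Cola or Freeport-McMoran data): if price relatives are lognormal, the log returns are normal, which the Kolmogorov-Smirnov test can probe.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
prices = 100 * np.exp(np.cumsum(rng.normal(0.0005, 0.01, 250)))  # synthetic

log_returns = np.diff(np.log(prices))
z = (log_returns - log_returns.mean()) / log_returns.std(ddof=1)
stat, p_value = stats.kstest(z, "norm")   # compare to the standard normal
print(f"KS statistic {stat:.3f}, p-value {p_value:.3f}")
# A small p-value would reject lognormality of the price relatives.
# (Standardizing with estimated parameters makes the plain KS test
# conservative; this is only a sketch of the procedure.)
```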
-
(2024) In this thesis, we prove the existence of a generalization of the matrix product state (MPS) decomposition in infinite-dimensional separable Hilbert spaces. Matrix product states, as a specific type of tensor network, are typically applied in the context of finite-dimensional spaces. However, as quantum mechanics regularly makes use of infinite-dimensional Hilbert spaces, it is an interesting mathematical question whether certain tensor network methods can be extended to infinite dimensions. It is a well-known result that an arbitrary vector in a tensor product of finite-dimensional Hilbert spaces can be written in MPS form by applying repeated singular value or Schmidt decompositions. In this thesis, we use an analogous method in the infinite-dimensional context based on the singular value decomposition of compact operators. In order to acquire sufficient theoretical background for proving the main result, we first discuss compact operators and their spectral theory, and introduce Hilbert-Schmidt operators. We also provide a brief overview of the mathematical formulation of quantum mechanics. Additionally, we introduce the reader to tensor products of Hilbert spaces, in both finite- and infinite-dimensional contexts, and discuss their connection to Hilbert-Schmidt operators and quantum mechanics. We also prove a generalization of the Schmidt decomposition in infinite-dimensional Hilbert spaces. After establishing the required mathematical background, we provide an overview of matrix product states in finite-dimensional spaces. The thesis culminates in the proof of the existence of an MPS decomposition in infinite-dimensional Hilbert spaces.
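A minimal finite-dimensional sketch of the repeated-SVD procedure mentioned above (illustrative dimensions and random state; the thesis's contribution is the infinite-dimensional generalization, which this does not capture):

```python
import numpy as np

def mps_decompose(psi, dims):
    """Split a state vector on sites with local dimensions `dims`
    into a list of MPS core tensors of shape (left, site, right)."""
    cores, rank = [], 1
    remainder = np.asarray(psi)
    for d in dims[:-1]:
        remainder = remainder.reshape(rank * d, -1)
        u, s, vh = np.linalg.svd(remainder, full_matrices=False)
        cores.append(u.reshape(rank, d, -1))   # isometric core
        remainder = np.diag(s) @ vh            # push the rest rightwards
        rank = s.size
    cores.append(remainder.reshape(rank, dims[-1], 1))
    return cores

dims = [2, 3, 2]
psi = np.random.default_rng(3).normal(size=np.prod(dims))
cores = mps_decompose(psi, dims)

# Contract the cores back together and check we recover psi.
out = cores[0]
for core in cores[1:]:
    out = np.tensordot(out, core, axes=([-1], [0]))
print(np.allclose(out.reshape(-1), psi))   # True
```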
-
(2023) Predator-prey models are mathematical models widely used in ecology to study the dynamics of predator and prey populations, to better understand the stability of such ecosystems and to elucidate the role of various ecological factors in these dynamics. An ecologically important phenomenon studied with these models is the so-called Allee effect, which refers to populations where individuals have reduced fitness at low population densities. If an Allee effect results in a critical population threshold below which a population cannot sustain itself, it is called a strong Allee effect. Although predator-prey models with strong Allee effects have received a lot of research attention, most of the prior studies have focused on cases where the phenomenon directly impacts the prey population rather than the predator. In this thesis, the focus is placed on a particular predator-prey model where a strong Allee effect occurs in the predator population. The studied population-level dynamics are derived from a set of individual-level behaviours so that the model parameters retain their interpretation at the level of individuals. The aim of this thesis is to investigate how the specific individual-level processes affect the population dynamics and how the population-level predictions compare to other models found in the literature. Although the basic structure of the model precedes this work, until now there has not been a comprehensive analysis of the population dynamics. In this analysis, both the mathematical and biological well-posedness of the model system are established, the feasibility and local stability of coexistence equilibria are examined, and the bifurcation structure of the model is explored with the help of numerical simulations. Based on these results, the coexistence of both species is possible either in a stable equilibrium or in a stable limit cycle. Nevertheless, it is observed that the presence of the Allee effect has an overall destabilizing effect on the dynamics, often entailing catastrophic consequences for the predator population. These findings are largely in line with previous studies of predator-prey models with a strong Allee effect in the predator.
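By way of illustration only, a generic predator-prey system with a mate-finding Allee term in the predator can be simulated as below; the equations and all parameters are hypothetical and are not the thesis's individually-derived model.

```python
import numpy as np
from scipy.integrate import solve_ivp

r, K, a, e, m, A = 1.0, 10.0, 0.5, 0.5, 0.4, 1.0   # hypothetical parameters

def rhs(t, z):
    x, y = z                                     # prey, predator densities
    dx = r * x * (1 - x / K) - a * x * y         # logistic prey, predation
    dy = e * a * x * y * (y / (y + A)) - m * y   # Allee-limited reproduction
    return [dx, dy]

sol = solve_ivp(rhs, (0, 200), [5.0, 2.0])
x_end, y_end = sol.y[:, -1]
print(f"final densities: prey {x_end:.2f}, predator {y_end:.2f}")
# Starting the predator below its critical threshold instead (e.g. y0 = 0.1)
# sends it extinct: the hallmark of a strong Allee effect.
```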
-
(2022) The topic of this thesis is Möbius transformations, which are an essential part of complex analysis and hence of analysis more broadly. Möbius transformations are usually first encountered on the advanced mathematics course Kompleksianalyysi 1 (Complex Analysis 1); in addition, the reader is expected to know the basic results of analysis. Möbius transformations are approachable and interesting first-degree rational functions. They have several useful geometric properties and can be used to solve various mapping problems conveniently, which makes them particularly important. Chapter 1 of the thesis is a short introduction to Möbius transformations. Chapter 2 introduces the concepts of complex analysis essential for Möbius transformations, such as the extended complex plane, the Riemann sphere and elementary functions. The third chapter defines the Möbius transformations themselves and gives examples of various Möbius transformations. The chapter also shows, among other things, that Möbius transformations are bijective and conformal, and studies their analyticity. Chapter 4 introduces the concept of the cross-ratio and proves that Möbius transformations also preserve cross-ratios. The chapter further defines various half-planes of the complex plane and solves various mapping problems with the help of the cross-ratio, illustrating this with figures. The fifth chapter introduces the quasihyperbolic metric and shows that Möbius transformations are hyperbolic isometries. The material of the thesis is mainly based on the content of Ritva Hurri-Syrjänen's course Kompleksianalyysi 1. Chapter 5 additionally draws on Paula Rantanen's work on uniform domains and on the work Quasiconformally homogeneous domains by F. W. Gehring and B. P. Palka.
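A minimal numerical sketch (not from the thesis) of the cross-ratio invariance discussed above; the example map and test points are arbitrary choices:

```python
def mobius(a, b, c, d):
    """Return z -> (a z + b) / (c z + d); requires ad - bc != 0."""
    assert a * d - b * c != 0
    return lambda z: (a * z + b) / (c * z + d)

def cross_ratio(z1, z2, z3, z4):
    return ((z1 - z3) * (z2 - z4)) / ((z1 - z4) * (z2 - z3))

f = mobius(1, 2j, 1, 1)                    # an arbitrary example map
pts = [0j, 1 + 1j, 2 - 1j, 3j]
print(cross_ratio(*pts))
print(cross_ratio(*[f(z) for z in pts]))   # same value up to rounding
```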
-
(2022) This thesis analyses the colonization success of lowland herbs in open tundra using Bayesian inference methods. This was done with four models that analyse the effects of different treatments, grazing levels and environmental covariates on the probability of a seed growing into a seedling. The thesis starts traditionally with an introduction chapter. The second chapter goes through the data: where and how they were collected, the different treatments used, and other relevant information. The third chapter covers the methods needed to understand the analysis: the basics of Bayesian inference, generalized linear models, generalized linear mixed models, model comparison and model assessment. The actual analysis starts in the fourth chapter, which introduces the four models used in this thesis. All of them are binomial generalized linear mixed models with different variables. The first model has only the different treatments and grazing levels as variables. The second model also includes interactions between these treatment and grazing variables. The third and fourth models are otherwise the same as the first and the second, but they also include environmental covariates as additional variables. Every model also includes the number of the block in which the seeds were sown as a random effect. The fifth chapter presents the results of the models: first the comparison of the predictive accuracy of all models, then, for each model separately, the estimated fixed effects, random effects and draws from the posterior predictive distribution. The thesis ends with the conclusions in the sixth chapter.
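A hedged sketch of a binomial GLMM of the kind described, written with PyMC; all variable names and the synthetic data are hypothetical, and this is the structure of the first model only (treatments, grazing, block random effect), not the thesis's code:

```python
import numpy as np
import pymc as pm

rng = np.random.default_rng(4)
n_obs, n_blocks = 40, 8
block = rng.integers(0, n_blocks, n_obs)     # block of each sowing
treatment = rng.integers(0, 2, n_obs)        # 0/1 treatment indicator
grazing = rng.integers(0, 3, n_obs)          # grazing level 0/1/2
sown = np.full(n_obs, 20)                    # seeds sown per plot
emerged = rng.binomial(sown, 0.2)            # fake seedling counts

with pm.Model():
    beta0 = pm.Normal("intercept", 0, 1.5)
    beta_t = pm.Normal("treatment", 0, 1)
    beta_g = pm.Normal("grazing", 0, 1)
    sigma_b = pm.HalfNormal("sigma_block", 1)
    b = pm.Normal("block", 0, sigma_b, shape=n_blocks)   # random effect
    logit_p = beta0 + beta_t * treatment + beta_g * grazing + b[block]
    pm.Binomial("emerged", n=sown, p=pm.math.invlogit(logit_p),
                observed=emerged)
    idata = pm.sample(1000, tune=1000, chains=2)
```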
-
(2023) A major innovation in statistical mechanics was the introduction of conformal field theory in the mid-1980s. The theory postulates the existence of conformally invariant scaling limits for many critical 2D lattice models, and then uses the representation theory of a certain algebraic object associated to these limits to derive exact solvability results. Providing mathematical foundations for the existence of these scaling limits has been a major ongoing project ever since, and led to the introduction of the Schramm-Löwner evolution (SLE for short) in the early 2000s. The core insight behind SLE is that if a conformally invariant random planar curve can be described by Löwner evolution and fulfills a condition known as the domain Markov property, it must be driven by a Wiener process with no drift. Furthermore, the variance of the Wiener process can be used to define a family SLE_κ of random curves which are simple, self-touching or space-filling depending on κ ≥ 0. This combination of flexibility and rigidity has allowed the scaling limits of various lattice models, such as the loop-erased random walk, the harmonic explorer, and the critical Ising model with a single interface, to be described by SLE. Once we move (for example) to the critical Ising model with multiple interfaces, it turns out that the standard theory of SLE is inadequate, and we would like to establish the existence of multiple SLE to handle these more general situations. However, conformal invariance and the domain Markov property no longer guarantee uniqueness of the object, so the situation is more complicated. This has led to two main approaches to the study of multiple SLE, known as the global and local approaches. Global methods are often simpler, but they often do not yield explicit descriptions of the curves; local methods are far more involved but in return give descriptions of the laws of the curves. Both approaches have led to distinct proofs that the laws of the driving terms of the critical Ising model on a finitely-connected domain are described by multiple SLE_3. The aim of this thesis is to provide a proof of this result on a simply-connected domain that is simpler than the ones found in the literature. Our idea is to take the proof by the local approach as our base, simplify it after restricting to a simply-connected domain, and bypass the hard part of dealing with a martingale observable. We do this by defining a function as a ratio of what are known as SLE_3 partition functions, and use it as a Radon-Nikodym derivative with respect to chordal SLE_3 to construct a new measure. A convergence theorem for fermionic observables shows that this measure is the scaling limit of the law of the driving term of the critical Ising model with multiple interfaces, and due to our knowledge of the Radon-Nikodym derivative an application of Girsanov's theorem shows that the measure we constructed is just local multiple SLE_3.
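A crude illustrative sketch (not from the thesis) of how an SLE_κ trace can be approximated numerically: freeze the driving function W_t = sqrt(κ)·B_t on each small time step and compose the resulting inverse slit maps backwards; step counts and the discretization choice are ad hoc.

```python
import numpy as np

def sqrt_uhp(w):
    """Square root chosen in the closed upper half-plane."""
    s = np.sqrt(complex(w))
    return -s if s.imag < 0 else s

def sle_trace(kappa=3.0, n=400, T=1.0, seed=5):
    dt = T / n
    rng = np.random.default_rng(seed)
    W = np.concatenate([[0.0],
        np.sqrt(kappa) * np.cumsum(rng.normal(0.0, np.sqrt(dt), n))])
    tip = np.empty(n, dtype=complex)
    for k in range(1, n + 1):        # tip gamma(t_k) = g_{t_k}^{-1}(W_k)
        z = complex(W[k])
        for j in range(k, 0, -1):    # undo the slit maps, newest first
            z = W[j] + sqrt_uhp((z - W[j])**2 - 4 * dt)
        tip[k - 1] = z
    return tip

trace = sle_trace(kappa=3.0)         # kappa = 3: the Ising-interface value
print(trace[:3])                     # points in the upper half-plane
```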
-
(2023) Stochastic homogenization consists of qualitative and quantitative homogenization. It studies the solutions of certain elliptic partial differential equations that exhibit rapid random oscillations in some heterogeneous physical system. Our aim is to homogenize these perturbations to some regular large-scale limiting function by utilizing particular corrector functions and homogenized matrices. This thesis mainly considers elliptic qualitative homogenization, and it is based on a research article by Scott Armstrong and Tuomo Kuusi. The purpose is to elaborate on the topics presented there by viewing some other notable references in the stochastic homogenization literature written throughout the years. An effort has been made to explain further details compared to the article, especially with respect to the proofs of some important results. Hopefully, this thesis can serve as an accessible introduction to qualitative homogenization theory. In the first chapter, we establish notation and preliminaries, which will be utilized in the subsequent chapters. The second chapter considers the classical case, where every random coefficient field is assumed to be periodic. We will examine the general situation, which does not require periodicity, later; the periodic case nonetheless provides useful results and strategies for it. Stochastic homogenization theory involves multiple random elements and hence applies probability theory heavily to the theory of partial differential equations. For this reason, the third chapter assembles the most important probabilistic concepts and results that will be needed; the ergodic theorems for R^d and Z^d in particular will play a central part later on. The fourth chapter introduces the general case, which no longer requires periodicity. The only assumption needed for the random coefficient fields is stationarity, that is, the probability measure P is invariant with respect to translations in Z^d. We state and prove important results such as the homogenization of the Dirichlet problem and the qualitative homogenization theorem for stationary random coefficient fields. In the fifth chapter, we briefly consider another approach to qualitative homogenization. This so-called variational approach was discovered in the 1970s and 1980s, when Ennio De Giorgi and Sergio Spagnolo, alongside Gianni Dal Maso and Luciano Modica, studied qualitative homogenization. We provide a second proof of the qualitative homogenization theorem based on their work; an additional assumption on the symmetry of the random coefficient fields is needed. The last chapter is dedicated to the large-scale regularity theory of solutions of uniformly elliptic equations. We will concretely see the purpose of the stationarity assumption, as it turns out to guarantee much stronger regularity properties than non-stationary coefficient fields. The study of large-scale regularity theory is very important, especially on the quantitative side of stochastic homogenization.
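For orientation, the standard model problem of the field in schematic form (the thesis works with stationary random coefficients; this caricature suppresses the probabilistic structure):

```latex
% Rapidly oscillating coefficients a(x/\varepsilon) in a domain U:
\[
  -\nabla \cdot \bigl( a(x/\varepsilon)\, \nabla u^{\varepsilon} \bigr) = f
  \quad \text{in } U, \qquad u^{\varepsilon} = g \ \text{on } \partial U,
\]
% whose solutions converge as \varepsilon \to 0 to the solution of the
% homogenized problem with a constant effective matrix \bar{a}:
\[
  -\nabla \cdot \bigl( \bar{a}\, \nabla \bar{u} \bigr) = f
  \quad \text{in } U, \qquad \bar{u} = g \ \text{on } \partial U .
\]
% The corrector functions enter in identifying \bar{a}.
```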